PDF (Pre-published version)
Revisiting the conclusion instability issue.pdf - Draft Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.
Download (199kB)
Official URL: http://ksiresearchorg.ipage.com/seke/seke18paper/s...
Abstract
Conclusion instability is the failure to observe the same effect under varying experimental conditions. Deep Neural Network (DNN) and ElasticNet software effort estimation (SEE) models were applied to two SEE datasets with a view to resolving the conclusion instability issue and assessing the suitability of ElasticNet as an SEE benchmark model. Results were mixed: both model types attained conclusion stability on the Kitchenham dataset, whereas conclusion instability persisted on the Desharnais dataset. ElasticNet was outperformed by the DNN and is therefore not recommended as an SEE benchmark model.
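For readers who want a concrete sense of the comparison the abstract describes, the following is a minimal sketch (not the authors' code) of fitting an ElasticNet and a small neural-network regressor on effort-style data and comparing their mean absolute error. The features, data, and hyperparameters are synthetic placeholders, and scikit-learn's MLPRegressor stands in for the paper's DNN; the actual study used the Desharnais and Kitchenham datasets.

```python
# Sketch only: synthetic effort data, not the Desharnais/Kitchenham datasets.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                      # placeholder size/complexity predictors
effort = 50 + X @ np.array([30, 20, 10, 5]) + rng.normal(scale=10, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, effort, test_size=0.3, random_state=0)

models = {
    "ElasticNet": make_pipeline(StandardScaler(), ElasticNet(alpha=1.0, l1_ratio=0.5)),
    "DNN (MLP stand-in)": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    ),
}

# Fit each model and report test-set mean absolute error.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: MAE = {mean_absolute_error(y_te, pred):.2f}")
```

In the study itself, conclusion stability would be judged by whether the relative ranking of the two model types holds across both datasets, not by performance on a single split as in this sketch.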
Item Type: Paper presented at a conference, workshop or other event, and published in the proceedings
Uncontrolled Keywords: conclusion instability, software effort estimation, prediction model, elasticNet, deep neural network
Subjects: Q Science > QA Mathematics > QA76 Computer software
Divisions: Schools > Centre for Business, Information Technology and Enterprise > School of Information Technology
Depositing User: Michael Bosu
Date Deposited: 16 Jul 2018 23:00
Last Modified: 21 Jul 2023 07:07
URI: http://researcharchive.wintec.ac.nz/id/eprint/6105