Revisiting the conclusion instability issue in software effort estimation

Bosu, Michael Franklin and Mensah, Solomon and Bennin, Kwabena and Abuaiadah, Diab (2018) Revisiting the conclusion instability issue in software effort estimation. In: Proceedings of the 30th International Conference on Software Engineering and Knowledge Engineering (SEKE 2018), San Francisco, CA, 1-3 July 2018.

PDF (pre-published draft version)
Available under a Creative Commons Attribution-NonCommercial-NoDerivatives license.


Abstract or Summary

Conclusion instability is the failure to observe the same effect under varying experimental conditions. Deep Neural Network (DNN) and ElasticNet software effort estimation (SEE) models were applied to two SEE datasets with a view to resolving the conclusion instability issue and assessing the suitability of ElasticNet as a viable SEE benchmark model. Results were mixed: both model types attained conclusion stability on the Kitchenham dataset, whereas conclusion instability persisted on the Desharnais dataset. ElasticNet was outperformed by DNN and is therefore not recommended as a SEE benchmark model.
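To make the modeling setup concrete, the sketch below fits an ElasticNet regressor (scikit-learn's implementation, which combines L1 and L2 penalties) to synthetic project data. This is an illustration only: the feature names, hyperparameters (`alpha`, `l1_ratio`), and data are assumptions, not the paper's datasets or settings.

```python
# Minimal sketch of an ElasticNet effort estimator on synthetic data.
# The features (e.g. function points, team size) and coefficients are
# illustrative stand-ins for a real SEE dataset.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Synthetic "projects": two size-like features driving effort, plus noise.
n = 200
X = rng.uniform(1, 100, size=(n, 2))
effort = 3.0 * X[:, 0] + 5.0 * X[:, 1] + rng.normal(0, 5, n)

# ElasticNet blends ridge (L2) and lasso (L1) regularization;
# alpha and l1_ratio here are arbitrary illustrative values.
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X[:150], effort[:150])

# Evaluate on the held-out projects with mean absolute error,
# a common SEE accuracy measure.
pred = model.predict(X[150:])
mae = mean_absolute_error(effort[150:], pred)
print(f"MAE on held-out projects: {mae:.2f}")
```

A benchmark comparison like the paper's would fit a DNN on the same splits and compare accuracy measures across datasets to check whether the ranking of the two models stays stable.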

Item Type: Paper presented at a conference, workshop or other event, and published in the proceedings
Keywords: conclusion instability, software effort estimation, prediction model, ElasticNet, deep neural network
Subjects: Q Science > QA Mathematics > QA76 Computer software
Divisions: Schools > Centre for Business, Information Technology and Enterprise > School of Information Technology
ID Code: 6105
Deposited By:
Deposited On: 16 Jul 2018 23:00
Last Modified: 19 Dec 2018 20:08
