Show simple item record

dc.contributor.author: Foss, Tron
dc.contributor.author: Stensrud, Erik
dc.contributor.author: Kitchenham, Barbara
dc.contributor.author: Myrtveit, Ingunn
dc.description.abstract: The Mean Magnitude of Relative Error, MMRE, is probably the most widely used evaluation criterion for assessing the performance of competing software prediction models. It seems obvious that the purpose of MMRE is to assist us in selecting the best model. In this paper, we have performed a simulation study demonstrating that MMRE does not select the best model. The consequences are dramatic for a vast body of knowledge in software engineering. The implications of this finding are that the results and conclusions on prediction models over the past 15-25 years are unreliable and may have misled the entire software engineering discipline. We therefore strongly recommend not using MMRE to evaluate and compare prediction models. Instead, we recommend using a combination of theoretical justification of the models we propose together with other metrics proposed in this paper. [en]
dc.format.extent: 414807 bytes
dc.relation.ispartofseries: Discussion Paper [en]
dc.title: A Simulation Study of the Model Evaluation Criterion MMRE [en]
dc.type: Working paper [en]
dc.subject.nsi: VDP::Mathematics and natural science: 400::Information and communication science: 420 [en]
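The abstract evaluates prediction models by MMRE, conventionally defined as the arithmetic mean of the magnitudes of relative error, MRE_i = |y_i − ŷ_i| / y_i, over n observations. A minimal sketch of this computation (the function and variable names are illustrative, not taken from the paper):

```python
def mmre(actuals, predictions):
    """Mean Magnitude of Relative Error over paired observations.

    MRE_i = |y_i - yhat_i| / y_i for each pair; MMRE is the mean of the MRE_i.
    Assumes all actual values are strictly positive, as effort data typically are.
    """
    if len(actuals) != len(predictions) or not actuals:
        raise ValueError("need two non-empty sequences of equal length")
    mres = [abs(y - yhat) / y for y, yhat in zip(actuals, predictions)]
    return sum(mres) / len(mres)


# Example: both predictions are off by 10% of the actual value,
# so each MRE is 0.1 and the mean is 0.1.
print(mmre([100.0, 200.0], [110.0, 180.0]))
```

Note that MRE normalises by the actual value, so the same absolute error counts more heavily on small projects than on large ones; this asymmetry is central to the paper's criticism of MMRE as a model-selection criterion.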

This item appears in the following Collection(s)

  • Discussion Papers [30]
    This collection contains BI's Discussion Papers series, published online from 2000. The series was terminated in 2009.
