Show simple item record

dc.contributor.author: Foss, Tron
dc.contributor.author: Stensrud, Erik
dc.contributor.author: Kitchenham, Barbara
dc.contributor.author: Myrtveit, Ingunn
dc.date.accessioned: 2008-05-28T12:10:51Z
dc.date.issued: 2002
dc.identifier.issn: 0807-3406
dc.identifier.uri: http://hdl.handle.net/11250/94042
dc.description.abstract: The Mean Magnitude of Relative Error, MMRE, is probably the most widely used evaluation criterion for assessing the performance of competing software prediction models. It seems obvious that the purpose of MMRE is to assist us to select the best model. In this paper, we have performed a simulation study demonstrating that MMRE does not select the best model. The consequences are dramatic for a vast body of knowledge in software engineering. The implications of this finding are that the results and conclusions on prediction models over the past 15-25 years are unreliable and may have misled the entire software engineering discipline. We therefore strongly recommend not using MMRE to evaluate and compare prediction models. Instead, we recommend using a combination of theoretical justification of the models we propose together with other metrics proposed in this paper.
dc.format.extent: 414807 bytes
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.relation.ispartofseries: Discussion Paper
dc.relation.ispartofseries: 03/2002
dc.title: A Simulation Study of the Model Evaluation Criterion MMRE
dc.type: Working paper
dc.subject.nsi: VDP::Mathematics and natural science: 400::Information and communication science: 420
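The abstract above concerns MMRE, the mean of the magnitudes of relative error over n predictions: MMRE = (1/n) * sum_i |y_i - yhat_i| / y_i. As a quick illustration of the kind of failure the abstract describes, the Python sketch below is not taken from the paper; the data-generating setup, noise level, and variable names are assumptions chosen only to show that, because relative error is capped at 1 for underestimates but unbounded for overestimates, a model that grossly underestimates can obtain a lower MMRE than the model that actually generated the data.

    import numpy as np

    def mmre(actual, predicted):
        # Mean Magnitude of Relative Error: mean of |actual - predicted| / actual
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return float(np.mean(np.abs(actual - predicted) / actual))

    # Toy data-generating process (an assumption for illustration, not the paper's design):
    # each project's actual effort scatters multiplicatively around a known prediction mu.
    rng = np.random.default_rng(42)
    n = 10_000
    mu = rng.uniform(100.0, 1000.0, size=n)             # predictions of the "true" model
    actual = mu * np.exp(rng.normal(0.0, 1.0, size=n))  # observed efforts with heavy multiplicative noise

    pred_true = mu        # the model that generated the data
    pred_low = 0.2 * mu   # a model that underestimates every project by a factor of five

    print(f"true model:     MMRE = {mmre(actual, pred_true):.2f}")  # roughly 1.1
    print(f"low-ball model: MMRE = {mmre(actual, pred_low):.2f}")   # roughly 0.7, so MMRE prefers it

Under this toy setup the grossly underestimating model scores lower (apparently better) on MMRE than the true model, which is consistent with the paper's conclusion that MMRE should not be used to select between competing prediction models.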


This item appears in the following Collection(s)

  • Discussion Papers [30]
    This collection contains BI's Discussion Papers series, published online from 2000. The series was terminated in 2009.
