Kybernetika 50 no. 5, 647-660, 2014

Relative cost curves: An alternative to AUC and an extension to 3-class problems

Olga Montvida and Frank Klawonn

DOI: 10.14736/kyb-2014-5-0647


Performance evaluation of classifiers is a crucial step for selecting the best classifier or the best set of parameters for a classifier. Receiver Operating Characteristic (ROC) curves and the Area Under the ROC Curve (AUC) are widely used to analyse the performance of a classifier. However, this approach does not take into account that misclassifications of different classes might have more or less serious consequences. On the other hand, it is often difficult to specify the consequences or costs of misclassifications exactly. This paper is devoted to Relative Cost Curves (RCC), a graphical technique for visualising the performance of binary classifiers over the full range of possible relative misclassification costs. This curve provides helpful information for choosing the best set of classifiers, or for estimating misclassification costs if those are not known precisely. In this paper, the concept of the Area Above the RCC (AAC) is introduced, a scalar measure of classifier performance under unequal misclassification costs. We also extend RCC to multicategory problems in which misclassification costs depend only on the true class.
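The idea of plotting classifier cost over the full range of relative misclassification costs can be illustrated with a minimal sketch. This is only an illustrative assumption of how such a curve might be computed, not the paper's exact RCC definition: we let a false negative cost c and a false positive cost (1 - c), and plot the expected cost of a fixed binary classifier as c ranges over [0, 1]; the area above that curve then serves as a scalar summary in the spirit of the AAC.

```python
import numpy as np

# Illustrative sketch only -- the exact RCC/AAC definitions in the paper
# may differ. A false negative costs c, a false positive costs (1 - c).

def relative_cost_curve(fnr, fpr, p_pos, n_points=101):
    """Expected misclassification cost at each relative cost value c."""
    c = np.linspace(0.0, 1.0, n_points)
    # Expected cost = c * P(FN) + (1 - c) * P(FP)
    cost = c * fnr * p_pos + (1.0 - c) * fpr * (1.0 - p_pos)
    return c, cost

def area_above_curve(c, cost, max_cost=1.0):
    """Scalar summary in the spirit of the AAC: trapezoidal area
    between the curve and the maximal cost level."""
    heights = max_cost - cost
    return float(np.sum(0.5 * (heights[:-1] + heights[1:]) * np.diff(c)))

# Hypothetical classifier: 10% false negatives, 20% false positives,
# positives make up 30% of the data.
c, cost = relative_cost_curve(fnr=0.10, fpr=0.20, p_pos=0.3)
aac = area_above_curve(c, cost)
```

Under this toy definition a larger area above the curve means lower expected cost across all cost ratios, so classifiers can be compared by a single number even when the true cost ratio is unknown.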


classifier, performance evaluation, misclassification costs, cost curves, ROC curves, AUC


93E12, 62A10

