
On the Bayes-optimality of F-measure maximizers

Abstract
The F-measure, which was originally introduced in information retrieval, is nowadays routinely used as a performance metric for problems such as binary classification, multi-label classification, and structured output prediction. Optimizing this measure is a statistically and computationally challenging problem, since no closed-form solution exists. Adopting a decision-theoretic perspective, this article provides a formal and experimental analysis of different approaches for maximizing the F-measure. We start with a Bayes-risk analysis of related loss functions, such as Hamming loss and subset zero-one loss, showing that optimizing such losses as a surrogate of the F-measure leads to a high worst-case regret. Subsequently, we perform a similar type of analysis for F-measure maximizing algorithms, showing that such algorithms are approximate, while relying on additional assumptions regarding the statistical distribution of the binary response variables. Furthermore, we present a new algorithm which is not only computationally efficient but also Bayes-optimal, regardless of the underlying distribution. To this end, the algorithm requires only a quadratic (with respect to the number of binary responses) number of parameters of the joint distribution. We illustrate the practical performance of all analyzed methods by means of experiments with multi-label classification problems.
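For readers unfamiliar with the setting: for binary label vectors, the F-measure is F(y, h) = 2 Σᵢ yᵢhᵢ / (Σᵢ yᵢ + Σᵢ hᵢ). The following minimal Python sketch (an illustration only, not the article's O(m²)-parameter algorithm; function names are ours) shows why maximizing expected F differs from thresholding marginal probabilities, which would minimize Hamming loss instead:

```python
from itertools import product

def f_measure(y, h):
    """F-measure (F1) of prediction h against truth y, both 0/1 tuples.
    Defined as 2*|y AND h| / (|y| + |h|); by convention 1 if both are all-zero."""
    num = 2 * sum(yi & hi for yi, hi in zip(y, h))
    den = sum(y) + sum(h)
    return 1.0 if den == 0 else num / den

def brute_force_f_maximizer(dist):
    """Expected-F maximizer by exhaustive search over all 2^m predictions.
    dist maps 0/1 label tuples to their probabilities under the joint distribution."""
    m = len(next(iter(dist)))
    best_h, best_val = None, -1.0
    for h in product((0, 1), repeat=m):
        val = sum(p * f_measure(y, h) for y, p in dist.items())
        if val > best_val:
            best_h, best_val = h, val
    return best_h, best_val

# Every marginal P(y_i = 1) = 0.3 < 0.5, so thresholding marginals at 0.5
# (the Hamming-loss optimum) predicts (0, 0, 0), yet the expected-F
# optimum turns out to be (1, 1, 1).
dist = {(1, 0, 0): 0.3, (0, 1, 0): 0.3, (0, 0, 1): 0.3, (0, 0, 0): 0.1}
h_star, value = brute_force_f_maximizer(dist)
```

Here `h_star` is (1, 1, 1) with expected F-measure 0.45, whereas the all-zero Hamming-optimal prediction attains only 0.1. The brute force enumerates all 2^m predictions and thus needs the full joint distribution; the point of the article is that the Bayes-optimal prediction can in fact be computed from only quadratically many parameters of that distribution.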
Keywords
structured output prediction, multi-label classification, consistency, trees, risk, minimization, classifiers, responses, bounds, regret, Bayes-optimal predictions, F-measure, statistical decision theory

Downloads

  • KERMIT-A1-347.pdf (full text, open access, PDF, 676.46 KB)

Citation

Please use this URL to cite or link to this publication:

MLA
Waegeman, Willem, et al. “On the Bayes-Optimality of F-Measure Maximizers.” Journal of Machine Learning Research, vol. 15, 2014, pp. 3333–88.
APA
Waegeman, W., Dembczyński, K., Jachnik, A., Cheng, W., & Hüllermeier, E. (2014). On the Bayes-optimality of F-measure maximizers. Journal of Machine Learning Research, 15, 3333–3388.
Chicago author-date
Waegeman, Willem, Krzysztof Dembczyński, Arkadiusz Jachnik, Weiwei Cheng, and Eyke Hüllermeier. 2014. “On the Bayes-Optimality of F-Measure Maximizers.” Journal of Machine Learning Research 15: 3333–88.
Vancouver
1. Waegeman W, Dembczyński K, Jachnik A, Cheng W, Hüllermeier E. On the Bayes-optimality of F-measure maximizers. Journal of Machine Learning Research. 2014;15:3333–88.
IEEE
[1] W. Waegeman, K. Dembczyński, A. Jachnik, W. Cheng, and E. Hüllermeier, “On the Bayes-optimality of F-measure maximizers,” Journal of Machine Learning Research, vol. 15, pp. 3333–3388, 2014.
@article{6846779,
  abstract     = {{The F-measure, which was originally introduced in information retrieval, is nowadays routinely used as a performance metric for problems such as binary classification, multi-label classification, and structured output prediction. Optimizing this measure is a statistically and computationally challenging problem, since no closed-form solution exists. Adopting a decision-theoretic perspective, this article provides a formal and experimental analysis of different approaches for maximizing the F-measure. We start with a Bayes-risk analysis of related loss functions, such as Hamming loss and subset zero-one loss, showing that optimizing such losses as a surrogate of the F-measure leads to a high worst-case regret. Subsequently, we perform a similar type of analysis for F-measure maximizing algorithms, showing that such algorithms are approximate, while relying on additional assumptions regarding the statistical distribution of the binary response variables. Furthermore, we present a new algorithm which is not only computationally efficient but also Bayes-optimal, regardless of the underlying distribution. To this end, the algorithm requires only a quadratic (with respect to the number of binary responses) number of parameters of the joint distribution. We illustrate the practical performance of all analyzed methods by means of experiments with multi-label classification problems.}},
  author       = {{Waegeman, Willem and Dembczyński, Krzysztof and Jachnik, Arkadiusz and Cheng, Weiwei and Hüllermeier, Eyke}},
  issn         = {{1532-4435}},
  journal      = {{Journal of Machine Learning Research}},
  keywords     = {{structured output prediction,multi-label classification,consistency,trees,risk,minimization,classifiers,responses,bounds,regret,Bayes-optimal predictions,F-measure,statistical decision theory}},
  language     = {{eng}},
  pages        = {{3333--3388}},
  title        = {{On the Bayes-optimality of F-measure maximizers}},
  volume       = {{15}},
  year         = {{2014}},
}
