
Benchmarking regression algorithms for loss given default modeling

Abstract
The introduction of the Basel II Accord has had a huge impact on financial institutions, allowing them to build credit risk models for three key risk parameters: PD (probability of default), LGD (loss given default) and EAD (exposure at default). Until recently, credit risk research has focused largely on the estimation and validation of the PD parameter, and much less on LGD modeling. In this first large-scale LGD benchmarking study, various regression techniques for modeling and predicting LGD are investigated. These include one-stage models, such as those built by ordinary least squares regression, beta regression, robust regression, ridge regression, regression splines, neural networks, support vector machines and regression trees, as well as two-stage models which combine multiple techniques. A total of 24 techniques are compared using six real-life loss datasets from major international banks. It is found that much of the variance in LGD remains unexplained, as the average prediction performance of the models in terms of R squared ranges from 4% to 43%. Nonetheless, there is a clear trend that non-linear techniques, and in particular support vector machines and neural networks, perform significantly better than more traditional linear techniques. Also, two-stage models built by a combination of linear and non-linear techniques are shown to have a similarly good predictive power, with the added advantage of having a comprehensible linear model component.
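The two-stage idea described in the abstract, a comprehensible linear component whose residuals are then corrected by a non-linear model, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual method or data: the study's loss datasets are proprietary, and a Nadaraya-Watson kernel smoother stands in here for the SVM/neural-network stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, deliberately non-linear target (illustration only).
x = rng.uniform(0.0, 1.0, 400)
y = np.sin(3.0 * x) + 0.1 * rng.normal(size=400)

# Train/test split
idx = rng.permutation(400)
tr, te = idx[:300], idx[300:]

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Stage 1: ordinary least squares -- the comprehensible linear component.
X_tr = np.column_stack([np.ones(tr.size), x[tr]])
X_te = np.column_stack([np.ones(te.size), x[te]])
beta, *_ = np.linalg.lstsq(X_tr, y[tr], rcond=None)
lin_tr = X_tr @ beta
lin_te = X_te @ beta

# Stage 2: a non-linear model fitted on the stage-1 residuals
# (a Gaussian kernel smoother here, as a stand-in).
def kernel_predict(x_train, resid, x_query, h=0.05):
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ resid) / w.sum(axis=1)

correction = kernel_predict(x[tr], y[tr] - lin_tr, x[te])

r2_linear = r_squared(y[te], lin_te)
r2_two_stage = r_squared(y[te], lin_te + correction)
print(f"one-stage OLS R^2: {r2_linear:.3f}")
print(f"two-stage     R^2: {r2_two_stage:.3f}")
```

On data with a genuine non-linear signal, the residual correction lifts out-of-sample R² well above the linear baseline, mirroring the trend the study reports for non-linear and two-stage models.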
Keywords
Credit risk, Basel II, Classifiers, Least-squares, Ratings, LGD, Data mining, Prediction, Rule extraction, Statistical comparisons, Support vector machines, Multiple data sets

Citation


Chicago
Loterman, Gert, Iain Brown, David Martens, Christophe Mues, and Bart Baesens. 2012. “Benchmarking Regression Algorithms for Loss Given Default Modeling.” International Journal of Forecasting 28 (1): 161–170.
APA
Loterman, G., Brown, I., Martens, D., Mues, C., & Baesens, B. (2012). Benchmarking regression algorithms for loss given default modeling. International Journal of Forecasting, 28(1), 161–170.
Vancouver
1. Loterman G, Brown I, Martens D, Mues C, Baesens B. Benchmarking regression algorithms for loss given default modeling. International Journal of Forecasting. 2012;28(1):161–70.
MLA
Loterman, Gert, et al. “Benchmarking Regression Algorithms for Loss Given Default Modeling.” International Journal of Forecasting 28.1 (2012): 161–170. Print.
@article{3152730,
  abstract     = {The introduction of the Basel II Accord has had a huge impact on financial institutions, allowing them to build credit risk models for three key risk parameters: PD (probability of default), LGD (loss given default) and EAD (exposure at default). Until recently, credit risk research has focused largely on the estimation and validation of the PD parameter, and much less on LGD modeling. In this first large-scale LGD benchmarking study, various regression techniques for modeling and predicting LGD are investigated. These include one-stage models, such as those built by ordinary least squares regression, beta regression, robust regression, ridge regression, regression splines, neural networks, support vector machines and regression trees, as well as two-stage models which combine multiple techniques. A total of 24 techniques are compared using six real-life loss datasets from major international banks. It is found that much of the variance in LGD remains unexplained, as the average prediction performance of the models in terms of R squared ranges from 4% to 43%. Nonetheless, there is a clear trend that non-linear techniques, and in particular support vector machines and neural networks, perform significantly better than more traditional linear techniques. Also, two-stage models built by a combination of linear and non-linear techniques are shown to have a similarly good predictive power, with the added advantage of having a comprehensible linear model component.},
  author       = {Loterman, Gert and Brown, Iain and Martens, David and Mues, Christophe and Baesens, Bart},
  issn         = {0169-2070},
  journal      = {INTERNATIONAL JOURNAL OF FORECASTING},
  keywords     = {Credit risk,Basel II,CLASSIFIERS,LEAST-SQUARES,RATINGS,LGD,Data mining,Prediction,RULE EXTRACTION,STATISTICAL COMPARISONS,SUPPORT VECTOR MACHINES,MULTIPLE DATA SETS},
  language     = {eng},
  number       = {1},
  pages        = {161--170},
  title        = {Benchmarking regression algorithms for loss given default modeling},
  url          = {http://dx.doi.org/10.1016/j.ijforecast.2011.01.006},
  volume       = {28},
  year         = {2012},
}
