Credit Scoring Models
- 1 Catalog of Credit Scoring Models
- 2 Scope
- 3 Model Classification Dimensions
- 3.1 Generative versus Discriminative Models
- 3.2 Parametric versus Non-Parametric Models
- 3.3 Linear versus Non-Linear Models
- 3.4 Model Outcomes: Classification versus Prediction (or Regression) versus Clustering
- 3.5 Supervised versus Unsupervised Models
- 3.6 Observed Variable versus Hidden (latent) Variable Models
- 3.7 Elementary versus Ensemble (Composite) Algorithms or Meta-Algorithms
- 3.8 Frequentist versus Bayesian Estimation Approaches
- 4 List of Credit Scoring Models
- 5 See Also
- 6 References
Catalog of Credit Scoring Models
This entry aims to be a comprehensive collection of publicly documented models and algorithms used for Credit Scoring.
The credit scoring model collection focuses on the classic one-period credit assessment / classification problem, which typically produces a credit score and/or a probabilistic estimate of credit risk on the basis of selected characteristics of a borrower.
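The link between the probabilistic estimate and the points-based score mentioned above can be sketched with the common "points to double the odds" (PDO) scaling. The base score, base odds and PDO values below are assumed for illustration only, not parameters prescribed by this catalog:

```python
import math

# Illustrative mapping from a probability of default (PD) to a credit score.
# Base score (600), base odds (50:1) and PDO (20) are assumed example values.
def score_from_pd(pd, base_score=600, base_odds=50, pdo=20):
    odds = (1 - pd) / pd                       # odds of non-default
    factor = pdo / math.log(2)                 # points per doubling of the odds
    offset = base_score - factor * math.log(base_odds)
    return offset + factor * math.log(odds)

print(round(score_from_pd(1 / 51)))   # a PD implying 50:1 odds maps to the base score, 600
```

Under this scaling a lower estimated PD always translates into a higher score, which is the usual sign convention for retail credit scores.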
Out of scope for this list and covered elsewhere in the Open Risk Manual are related credit risk modelling categories of:
- credit migration models that besides default aim to model an entire credit rating system and associated rating migrations
- multi-period default models (involving a full credit curve or credit term structure) and
- loss-given-default estimation models
The above excluded types of credit risk models are related to credit scoring but:
- involve different and more complex data sets
- are based, in general, on different algorithms (less related to classification problems, less standardized, and with fewer commonalities with statistical / machine learning algorithms)
- are used in different workflows (Regulatory Capital under Basel III, IFRS 9 reporting, Economic Capital etc.)
Common Elements of Credit Scoring Models
The following characteristics define the credit scoring model collection documented in this catalog:
- Credit scoring algorithms are statistical in nature: they use empirical evidence (historical Credit Event realizations) to formulate predictions about future events
- The algorithms do not explicitly embed prior theoretical rules driven by economic theory (rational expectations theory, no-arbitrage etc.). NB: economic theory insights may well be used in the selection and/or structuring of Characteristics
- The credit scoring models in scope consist essentially of a single algorithmic step, for example a statistical model estimated in a single, well-defined automated procedure. Out of scope are more complex modeling setups where several quantification stages are chained together using, e.g., expert knowledge or further assumptions. In this respect see Credit Scorecard Combination
- By and large the credit scoring algorithms covered here use machine learning techniques in the broad sense (with some notable exceptions)
Model Classification Dimensions
Credit scoring models have been used globally for decades and in a variety of contexts. The significant overlap of credit scoring methodology with other statistical disciplines means that essentially the entire arsenal of statistical methods has been available and has been tried, with varying degrees of success, usability and adoption. We identify key model attributes that help categorize this variety of models.
These attributes are focused on characterizing the models themselves, not the domain to which they are applied. For example, a logistic regression based credit score model applied to individuals might differ from one applied to SME Lending in the number and type of characteristics used; for the purposes of this catalog, the two belong to the same model category.
Generative versus Discriminative Models
Generative models produce distributions for the entire set of variables, that is, also for the population characteristics. In classic credit scoring the population characteristics are typically analyzed statistically but are not modeled jointly with the outcome variable. Examples of generative models: Hidden Markov Models, Naive Bayes. Examples of discriminative models: Linear / Logistic Regression, Random Forests, Support Vector Machines, Boosting (meta-algorithm), Conditional Random Fields, Neural Networks.
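The distinction can be illustrated with a minimal generative classifier. The sketch below hand-fits a Gaussian Naive Bayes model, estimating P(characteristic | outcome) and the class priors and applying Bayes' rule; a discriminative model would instead fit P(outcome | characteristic) directly. The single borrower characteristic and the labels are toy data invented for the example:

```python
import math

# Toy data: one borrower characteristic (e.g. a debt-to-income ratio) and
# observed default labels (1 = default). Both are illustrative.
x = [0.1, 0.2, 0.15, 0.6, 0.7, 0.8]
y = [0, 0, 0, 1, 1, 1]

def gaussian_pdf(v, mu, sigma):
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_class(values):
    # Generative step: estimate the class-conditional distribution P(x | y).
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, math.sqrt(var)

params = {c: fit_class([v for v, lab in zip(x, y) if lab == c]) for c in (0, 1)}
prior = {c: y.count(c) / len(y) for c in (0, 1)}

def posterior_default(v):
    # Bayes' rule: combine class-conditional likelihoods with the priors.
    lik = {c: gaussian_pdf(v, *params[c]) * prior[c] for c in (0, 1)}
    return lik[1] / (lik[0] + lik[1])

print(round(posterior_default(0.65), 3))  # close to 1 near the default cluster
```

Because the model specifies the full joint distribution, it could also be used to simulate new borrower characteristics, something a purely discriminative model cannot do.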
Parametric versus Non-Parametric Models
Parametric models posit explicit functional relations involving a finite number of parameters. Non-parametric models, by contrast, infer the functional form directly from the data, implicitly allowing an infinite number of parameters. There can also be mixtures (semi-parametric models, combining an explicit parametric treatment of some variables with a non-parametric treatment of others). Examples of non-parametric approaches: models employing Kernel Density Estimation, k-NN.
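A minimal k-NN classifier shows the non-parametric idea: there is no fixed functional form to estimate, the "model" is the training data itself. The single-characteristic toy data and the choice k = 3 are purely illustrative:

```python
# Non-parametric classification sketch: predict by majority vote among the
# k training points closest to the query. Toy data, one characteristic.
def knn_predict(train_x, train_y, query, k=3):
    ranked = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

x = [0.1, 0.2, 0.15, 0.6, 0.7, 0.8]   # illustrative borrower characteristic
y = [0, 0, 0, 1, 1, 1]                 # 1 = default observed
print(knn_predict(x, y, 0.65))         # → 1 (nearest neighbours all defaulted)
```

Note that prediction cost grows with the size of the training set, a typical trade-off of non-parametric methods versus the fixed evaluation cost of a parametric score.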
Linear versus Non-Linear Models
This is essentially a sub-category of parametric models. Linear models impose linear relations between the variables of the model. Generalized Linear Models (GLM) relax this constraint, but only in the relationship between the input and output variables, thereby retaining significant tractability versus a fully non-linear model. Examples: Logistic Regression (a GLM), Neural Networks (non-linear).
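The GLM structure of logistic regression can be made explicit in a few lines: the predictor is strictly linear in the characteristics, and only the link function (the sigmoid) between that linear score and the default probability is non-linear. The coefficients below are illustrative values, not estimates:

```python
import math

# GLM sketch: linear predictor plus a non-linear (sigmoid) link function.
# Weights and intercept are illustrative, not fitted to any data.
def default_probability(characteristics, weights, intercept):
    linear_score = intercept + sum(w * c for w, c in zip(weights, characteristics))
    return 1.0 / (1.0 + math.exp(-linear_score))   # logistic link

pd_est = default_probability([0.6, 0.3], weights=[2.5, 1.8], intercept=-2.0)
print(round(pd_est, 3))
```

Because the non-linearity is confined to the link, the fitted coefficients retain a direct interpretation as log-odds contributions per unit of each characteristic.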
Model Outcomes: Classification versus Prediction (or Regression) versus Clustering
Predictive (regression) models estimate a continuous variable, whereas classification models predict membership of a class (expressed as a category). In classic credit scoring the response variable is binary, hence most algorithms can be viewed as solving a classification problem even when they are technically regressions (example: Logistic Regression). Clustering algorithms provide as their primary output an identification of similarity classes.
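In practice the two outcome types are connected: a continuous regression output (a PD estimate) is turned into a classification by applying a cutoff. The 5% cutoff below is an arbitrary illustration, not a recommended policy:

```python
# Turning a regression outcome (a PD estimate) into a classification
# outcome via a cutoff. The cutoff value is illustrative only.
def classify(pd_estimate, cutoff=0.05):
    return "reject" if pd_estimate >= cutoff else "accept"

print(classify(0.02), classify(0.12))  # → accept reject
```

The choice of cutoff is a business decision (trading off approval rates against expected losses) and sits outside the statistical model itself.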
Supervised versus Unsupervised Models
Supervised models require the presence of labels (e.g. realized credit events) in the training data set. Unsupervised models do not require such information (and can therefore only indirectly classify or predict credit events). Unsupervised models are further sub-divided into clustering (identifying population groupings) and association rules. Example: K-means Clustering. Semi-supervised machine learning corresponds in credit scoring to the situation of a censored dataset.
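A minimal one-dimensional k-means (Lloyd's algorithm) illustrates the unsupervised case: no default labels are consumed, the algorithm only identifies groupings within the characteristic itself. The values and initial centers are toy inputs:

```python
# Unsupervised sketch: 1-D k-means (Lloyd's algorithm). No labels are used;
# the output is a set of cluster centers. Data and starting centers are toys.
def kmeans_1d(values, centers, iters=20):
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(c - v))
            clusters[nearest].append(v)
        # Move each center to the mean of its assigned points.
        centers = [sum(vs) / len(vs) if vs else c for c, vs in clusters.items()]
    return sorted(centers)

print(kmeans_1d([0.1, 0.2, 0.15, 0.6, 0.7, 0.8], centers=[0.0, 1.0]))
```

Whether the recovered groupings align with credit quality must then be checked separately, precisely because no credit event information entered the fit.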
Observed Variable versus Hidden (latent) Variable Models
In the first category all variables are in principle observable (manifest). In the second category there is an assumption that important dependencies between the observable variables are intermediated by latent (hidden, unobservable) variables. Such variables may represent an internal "state" with its own well-defined meaning (e.g. creditworthiness), unobserved inhomogeneity (Latent Variable Model), or hidden layers (sets of intermediate variables) as in neural network models.
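The hidden-layer case can be sketched with a tiny neural network forward pass: the hidden units act as intermediate, unobserved variables between the manifest characteristics and the credit outcome. All weights below are illustrative placeholders, not estimated values:

```python
import math

# Latent-variable sketch: a one-hidden-layer network. The hidden activations
# are intermediate variables that are never directly observed in the data.
# All weights are illustrative, not fitted.
def forward(x, w_hidden, w_out):
    hidden = [math.tanh(sum(w * v for w, v in zip(row, x))) for row in w_hidden]
    score = sum(w * h for w, h in zip(w_out, hidden))
    return 1.0 / (1.0 + math.exp(-score))   # probability-like output

p = forward([0.6, 0.3], w_hidden=[[1.0, -1.0], [0.5, 0.5]], w_out=[2.0, -1.0])
print(round(p, 3))
```

Unlike a latent "creditworthiness" state, these hidden units carry no pre-assigned meaning; any interpretation has to be extracted after fitting.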
Elementary versus Ensemble (Composite) Algorithms or Meta-Algorithms
Elementary algorithms consist of a single defined set of statistical relationships. Composite algorithms are instead constructed out of ensembles or averages of more elementary models (Ensemble Learning). There are various options for constructing the ensemble: Bootstrap aggregation (bagging), AdaBoost etc.
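Bootstrap aggregation can be sketched in a few lines: each elementary model (here a crude one-split "stump" placed at the midpoint between the class means, chosen only for brevity) is fit on a bootstrap resample, and the ensemble averages their votes. Data, stump rule and ensemble size are all illustrative:

```python
import random

# Bagging sketch: fit an elementary model on each bootstrap resample,
# then average the votes. Toy data; the stump rule is purely illustrative.
random.seed(0)
x = [0.1, 0.2, 0.15, 0.6, 0.7, 0.8]
y = [0, 0, 0, 1, 1, 1]

def fit_stump(xs, ys):
    # Split point: midpoint between the two class means (0 if a class is absent).
    m0 = sum(v for v, l in zip(xs, ys) if l == 0) / max(1, ys.count(0))
    m1 = sum(v for v, l in zip(xs, ys) if l == 1) / max(1, ys.count(1))
    return (m0 + m1) / 2

def bagged_predict(query, n_models=25):
    votes = 0
    for _ in range(n_models):
        idx = [random.randrange(len(x)) for _ in range(len(x))]   # bootstrap resample
        split = fit_stump([x[i] for i in idx], [y[i] for i in idx])
        votes += query > split
    return votes / n_models   # fraction of models voting "default"

print(bagged_predict(0.75))   # → 1.0 (every resampled stump splits below 0.75)
```

Averaging over resamples reduces the variance of the unstable elementary learner, which is the same rationale behind the Random Forest entries below.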
Frequentist versus Bayesian Estimation Approaches
In a frequentist approach, models are fit to the data without any use of prior knowledge about the model parameters (implicitly assuming uniform, or non-informative, priors). A Bayesian approach allows the systematic incorporation of prior information in the model estimation (Bayesian Inference). Example techniques: Markov Chain Monte Carlo.
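The simplest Bayesian update does not even need MCMC: with a conjugate Beta prior on a portfolio default rate, incorporating new observations is a closed-form calculation. The prior parameters and observed counts below are invented for illustration:

```python
# Bayesian estimation sketch: conjugate Beta-Binomial update of a default
# rate. Prior parameters and observation counts are illustrative.
prior_a, prior_b = 1.0, 19.0           # prior belief: mean default rate of 5%
defaults, non_defaults = 3, 97         # newly observed outcomes

post_a = prior_a + defaults            # conjugate update: add the counts
post_b = prior_b + non_defaults
posterior_mean = post_a / (post_a + post_b)
print(round(posterior_mean, 4))        # → 0.0333
```

The posterior mean sits between the prior mean (5%) and the raw observed rate (3%), weighted by the relative strength of the prior; a frequentist estimate would report the raw 3% alone. MCMC becomes necessary only when no such closed-form (conjugate) update exists.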
List of Credit Scoring Models
This is a live catalog of credit scoring models (algorithms). The granularity of both the model coverage and the documented model characteristics may increase over time.
| Model | Generative | Parametric | Outcome | Linear | Supervised | Observed Variables | Elementary | Frequentist |
|---|---|---|---|---|---|---|---|---|
| Linear Discriminant Analysis (LDA) | No | Yes | Regr. | Yes | Yes | Yes | Yes | Yes |
| Logistic Regression | No | Yes | Regr. | Yes (GLM) | Yes | Yes | Yes | Yes |
| Tobit / Probit Regression | No | Yes | Regr. | Yes (GLM) | Yes | Yes | Yes | Yes |
| Classification Tree | No | No | Clas. | No | Yes | Yes | Yes | Yes |
| Random Forest | No | No | Clas. | No | Yes | Yes | No | Yes |
| Support Vector Machine | No | No | Clas. | No | Yes | No | Yes | Yes |
| k-Nearest Neighbours (k-NN) | No | No | Clas. | No | Yes | Yes | Yes | Yes |
| Multilayer Perceptron | No | No | Clas. | No | Yes | No | Yes | Yes |
| k-Means Clustering | No | No | Clus. | No | No | Yes | Yes | Yes |
| Naive Bayes Classifier | Yes | Yes | Regr. | Yes (GLM) | Yes | Yes | Yes | Yes |
| Bayesian Network | Yes | Yes | Regr. | Yes (GLM) | Yes | Yes | Yes | Yes |
References
This is a list of references (academic and other publications). Preference is given to:
- openly accessible references (e.g. a downloadable PDF file)
- reviews that provide pointers to further references
- references that provide explicit and high quality documentation of the algorithms (no Word formulae)
- references that focus on credit scoring requirements / applications rather than general statistical / machine learning papers
NB: The list does not aim to establish academic priority but to provide sufficient documentation for each listed model; multiple references are fine if they complement each other.
The usual disclaimer applies: inclusion in the list does not imply any assurance about correctness, completeness or suitability.
- E. Altman, "Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy", Journal of Finance 23 (1968) 589-609
- J. Wiginton, "A Note on the Comparison of Logit and Discriminant Models of Consumer Credit Behavior", Journal of Financial and Quantitative Analysis 15 (1980) 757-770
- K. Roszbach, "Bank Lending Policy, Credit Scoring and the Survival of Loans" (1998)
- J. Galindo, P. Tamayo, "Credit Risk Assessment Using Statistical and Machine Learning: Basic Methodology and Risk Modeling Applications", Computational Economics 15 (2000) 107-143
- L. Breiman, "Random Forests" (2001), preprint
- C. Hsu, C. Chang, C. Lin, "A Practical Guide to Support Vector Classification" (2003), preprint
- W. Henley, D. Hand, "A k-Nearest-Neighbour Classifier for Assessing Consumer Credit Risk", The Statistician 45 (1996) 77-95
- D. West, "Neural Network Credit Scoring Models", Computers & Operations Research 27 (2000) 1131-1152
- T. Kanungo et al., "An Efficient k-Means Clustering Algorithm", IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (2002) 881-892
- N. Friedman, D. Geiger, M. Goldszmidt, "Bayesian Network Classifiers", Machine Learning 29 (1997) 131-163
- D.J. Hand, K.J. McConway, E. Stanghellini, "Graphical Models of Applicants for Credit", IMA Journal of Mathematics Applied in Business and Industry 8 (1996) 143-155