How to Build a Credit Scorecard

From Open Risk Manual


This article covers all the stages involved in building and deploying a general Credit Scorecard in a business context. It is thus considerably broader than the steps involved in the quantitative Model Development process (assuming that a statistical process is even applicable). To best understand the development process for a scorecard we place it in the context of the overall lifecycle of a Risk Model.


The Six Stages of the Credit Scorecard Lifecycle

First, a brief overview of all the distinct stages involved. Depending on the business and regulatory context[1], some stages might be more extensive than others, or several might be bundled into one stage.

  • Stage 1: Preliminary Considerations: This stage defines the scope and objectives of the credit scorecard project. The outcome of this stage may include a formal Model Origination or Requirements Document, a Project Plan, and/or other detailed documentation, depending on the organizational governance of the entity introducing the credit scorecard.
  • Stage 2: Development: This stage captures the main technical activities (Data Collection, Data Review, Data Cleansing, Model Development, Expert Analysis, Model Documentation etc.) that produce a complete credit scorecard specification.
  • Stage 3: Model Validation: This stage (sometimes bundled with development) provides a more or less formal review of the scorecard developed in Stage 2. When scorecards are used in a regulated / audited context, this stage may reject the scorecard specification produced in Stage 2, offering concrete reasons.
  • Stage 4: Deployment: This stage includes Production Implementation, Acceptance Testing, User Training etc. The outcome is an operating scorecard that processes actual client data and is fully embedded in the Credit Rating System of the organization and any related risk / management processes.
  • Stage 5: Monitoring: Scorecard performance is monitored throughout its active life to identify pathologies such as Model Decay. Typically there is a Model Monitoring Report that captures the essential performance indicators (including their historical development).
  • Stage 6: Adjustment or Decommissioning: If the monitoring report or other current information suggests it, the scorecard may need to be re-estimated using additional data, or decommissioned. In case of re-estimation / redevelopment the six stages are repeated, but now on the basis of the pre-existing implementation.

Stage 1: Preliminary considerations

This stage defines the scope and objectives of the credit scorecard project.

Indicatively, the documentation produced at this stage will address some or all of the following:

Selecting the type of scorecard

There is a very large variety of possible credit scorecards. Selecting the right type requires identifying the concrete needs of the project in terms of abilities and functionality but also the practicalities of implementation (availability of data, computer systems, human expertise, degree of automation). Some key decision points that are relevant are as follows:

Availability of Data

Statistical Models have minimum requirements on data availability and Data Quality. In the absence of available data (or for other reasons) one may opt instead for more judgemental or Expert Based Models.
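To make the expert-based alternative concrete, the following is a minimal sketch of a judgemental scorecard in Python. The characteristics, attribute bands and point values are purely illustrative assumptions, not a recommended calibration.

```python
# A minimal judgemental (expert-based) scorecard: points are assigned per
# characteristic by expert opinion rather than estimated from data.
# All names, bands and point values below are illustrative assumptions.

EXPERT_POINTS = {
    # (threshold, points): award the points of the highest threshold reached
    "years_in_business": [(0, 5), (2, 15), (5, 30)],
    "payment_history": {"clean": 25, "late": 10, "default": 0},
}

def score_years(value, bands):
    """Award the points of the highest threshold not exceeding value."""
    points = 0
    for threshold, pts in bands:
        if value >= threshold:
            points = pts
    return points

def expert_score(applicant):
    """Total scorecard points for one applicant record."""
    total = score_years(applicant["years_in_business"],
                        EXPERT_POINTS["years_in_business"])
    total += EXPERT_POINTS["payment_history"][applicant["payment_history"]]
    return total

print(expert_score({"years_in_business": 6, "payment_history": "clean"}))  # 55
```

In practice such a scorecard might simply live in a spreadsheet; the point is that no statistical estimation (and hence no historical dataset) is required.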

Regulatory Context

Credit Scorecards used by regulated financial institutions must comply with the requirements of all regulatory bodies involved. This may place constraints on data requirements, model explainability etc.

Stage 2: Scorecard Development

The specifics of the scorecard development process depend on the type of scorecard. The following is a list of activities that will generally be required for the most common types. We split the list in two: the practical side, which we might term the Data Engineering component, and the conceptual side, which we might term the Data Science component.

Practical Development Steps (Data Engineering)

The steps in this sequence aim to provide suitable resources for the development of the required scorecard.

  • Data Collection. This step establishes links with existing databases or files. Depending on the available systems, it involves writing and testing queries and filters and importing data.
  • Data Cleaning. This step involves reviewing and establishing the Data Quality of the collected data.
  • Missing Data. In this step (where appropriate) Missing Data may be remedied with Missing Data Imputation.
  • Creating a Master Data Table. This table of potential characteristics and outcomes (see Credit Event) is the basic input to quantitative estimation using common statistical models.
  • Setting up a machine learning estimation framework (if applicable). This can be achieved using either a commercial or an open source toolkit. Judgemental scorecards also need some form of implementation (e.g. spreadsheets).


The above steps are not necessarily sequential, nor do they strictly precede the data science component (for example, after pursuing a certain modelling approach it may transpire that there are additional data requirements).
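The data engineering steps above can be sketched in a few lines of Python. The field names and the choice of mean imputation are illustrative assumptions; real pipelines would query actual databases and apply more careful imputation.

```python
# Sketch of the data-engineering steps: collect records, impute missing
# values, and assemble a master data table of characteristics and outcomes.
# Field names and the mean-imputation choice are illustrative assumptions.

raw_records = [  # collected, e.g., via a tested database query
    {"id": 1, "income": 42000.0, "utilisation": 0.35, "default": 0},
    {"id": 2, "income": None,    "utilisation": 0.90, "default": 1},
    {"id": 3, "income": 58000.0, "utilisation": None, "default": 0},
]

def impute_mean(records, field):
    """Replace missing values of `field` with the mean of observed values."""
    observed = [r[field] for r in records if r[field] is not None]
    mean = sum(observed) / len(observed)
    for r in records:
        if r[field] is None:
            r[field] = mean
    return records

master = raw_records
for field in ("income", "utilisation"):
    master = impute_mean(master, field)

# `master` is now the table of characteristics and outcomes that feeds
# the quantitative estimation in the data science component.
```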

Conceptual Development Steps (Data Science)

Assuming the resources of the previous sequence are available, the conceptual development aims to identify a specific model to underpin the scorecard. There may be legal, regulatory or business (cost) limitations on the available paths. The relevant concepts for quantitative (statistical) development are:

  • Historical Sample Selection (the relevant population, temporal period, any exclusions). Both for achieving Model Stability and in a regulatory context, the Representativeness of the data is particularly important.
  • Portfolio Segmentation. It is possible that the scorecard will be applied to distinct sub-segments of the relevant population.
  • The precise Credit Event definition. This is essentially what the scorecard aims to predict. It may have implications for data availability, e.g. relaxing the definition may significantly increase the event rate.
  • Identification of Characteristics to include in the model. There is an enormous variety of possible characteristics, depending on the type of credit risk being evaluated.
  • Characteristic Selection. Narrowing down the list of characteristics, e.g. using Backward Selection.
  • Transformation Methodologies. Investigating the application of non-linear transformations to characteristics (wikipedia:Feature Engineering).
  • Selection of the model family (e.g. logistic regression or any of the catalog of Credit Scoring Models).
  • Performing the actual statistical fit, usually by running a statistical algorithm such as Maximum Likelihood estimation.
  • Reviewing model estimation outcomes (model accuracy, out-of-sample performance etc.)
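The statistical fit step can be illustrated with a self-contained logistic regression, here fitted by simple stochastic gradient ascent on the log-likelihood (production toolkits would use more robust maximum likelihood routines). The two characteristics and the tiny sample are illustrative assumptions.

```python
import math

# Minimal sketch of the statistical fit: logistic regression on a toy
# master data table. Characteristics and sample values are illustrative.

X = [[0.2, 1.0], [0.9, 0.0], [0.4, 1.0], [0.8, 0.0]]  # characteristics
y = [0, 1, 0, 1]                                      # Credit Event flags

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient ascent on the logistic log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)  # intercept + one weight per characteristic
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p            # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

w = fit(X, y)

def predict(xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

p_high = predict([0.85, 0.0])  # resembles the observed event cases
p_low = predict([0.3, 1.0])    # resembles the observed non-event cases
```

Reviewing estimation outcomes would then compare such predicted probabilities against out-of-sample observations, not just the training data as here.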

Stage 3: Model Validation

The Model Validation stage will include the following steps, depending on the rigour / independence required:

  • Review of conceptual methodology
  • Review of practical development steps
  • Independent replication of the model
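Part of a validation review is re-measuring the discriminatory power of the scorecard on data the validators obtain independently. A standard metric is the AUC (equivalently the Gini coefficient, Gini = 2·AUC − 1). The scores and outcomes below are an illustrative toy sample.

```python
# One validation check: discriminatory power of scorecard risk scores,
# measured as AUC via pairwise comparison of bads against goods.
# The sample scores and outcomes are illustrative assumptions.

def auc(scores, outcomes):
    """Probability that a random bad (outcome 1) scores above a random good."""
    pairs = concordant = ties = 0
    for si, oi in zip(scores, outcomes):
        for sj, oj in zip(scores, outcomes):
            if oi == 1 and oj == 0:
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

scores = [0.9, 0.35, 0.4, 0.3, 0.7]   # model risk scores
outcomes = [1, 1, 0, 0, 1]            # observed Credit Events
a = auc(scores, outcomes)             # 5 of 6 bad/good pairs are concordant
gini = 2 * a - 1
```

An independent replication would go further and re-derive the scores themselves from the documented specification before computing such metrics.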

Stage 4: Model Deployment

Depending on the systems of the entity using the scorecard, the following will be typical steps:

  • Implementation of the developed model as a scorecard inference system in production systems. (Production systems typically do not require the ability to re-estimate models on the fly)
  • Selection of operational parameters (like cutoffs) where applicable
  • Acceptance testing of the implementation by the operating unit
  • User training
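A production inference system only needs to score and decide, not re-estimate. The sketch below applies fixed, already-fitted weights and maps the resulting probability to a decision via a cutoff. The weights, characteristics and cutoff value are illustrative assumptions.

```python
import math

# Sketch of production inference: fixed fitted weights, a score, and a
# decision via an operational cutoff. All numbers here are illustrative.

WEIGHTS = {"intercept": -2.0, "utilisation": 3.0, "late_payments": 0.8}
CUTOFF = 0.25  # operational parameter chosen by the business, not the model

def probability_of_default(applicant):
    """Logistic score from the deployed (frozen) model weights."""
    z = WEIGHTS["intercept"]
    z += WEIGHTS["utilisation"] * applicant["utilisation"]
    z += WEIGHTS["late_payments"] * applicant["late_payments"]
    return 1.0 / (1.0 + math.exp(-z))

def decide(applicant):
    return "decline" if probability_of_default(applicant) > CUTOFF else "approve"

print(decide({"utilisation": 0.2, "late_payments": 0}))   # approve
print(decide({"utilisation": 0.9, "late_payments": 2}))   # decline
```

Acceptance testing would exercise exactly this kind of interface with known input/output pairs agreed between developers and the operating unit.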

Stage 5: Model Monitoring

Monitoring is performed at the appropriate timescale (e.g., from daily to quarterly), and various levels of monitoring might be used. Monitoring typically produces updated values for the set of metrics already used in the selection / validation of the model. These primarily include:

  • Portfolio evolution statistics
  • Model performance statistics
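As one concrete portfolio-evolution statistic, a widely used monitoring metric is the Population Stability Index (PSI), which compares the score distribution at development time with the current portfolio. The bucket shares below are illustrative; a common rule of thumb treats PSI above 0.25 as a significant shift warranting investigation.

```python
import math

# Population Stability Index between the development-time score-band
# distribution and the current portfolio. Shares below are illustrative.

def psi(expected, actual):
    """PSI = sum over bands of (actual - expected) * ln(actual / expected)."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

development = [0.25, 0.35, 0.25, 0.15]  # score-band shares at development
current = [0.20, 0.30, 0.30, 0.20]      # shares observed in production

value = psi(development, current)       # small positive drift, ~0.042
```

Model performance statistics (e.g. the AUC / Gini used at validation) would be recomputed on newly observed outcomes in the same monitoring report.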

Stage 6: Adjustment / Decommissioning

When a monitoring report or other insight suggests that the scorecard in production is no longer fit-for-purpose then, depending on the context, the scorecard must be adjusted or replaced. Model adjustment can be:

  • minimal (e.g. re-estimation using an adjusted dataset)
  • substantial (e.g. introduction of a new characteristic, changing segmentation)
  • significant intervention (e.g. changing the model family)

Depending on the context (e.g. regulation) any significant change may be classified as a new model and trigger a full validation / implementation cycle.


Issues and Challenges

Developing and using quantitative risk models such as credit scorecards is full of pitfalls:

  • Executive buy-in may be limited
  • Resource intensive (time, money, expertise, project management support)
  • Impact on existing (legacy) processes and systems
  • Very diverse range of possible implementations with varying degrees of suitability
  • Data scarcity and Data Quality issues
  • See also The Zen of Modeling for a high-level list of modelling pitfalls

References

  1. USAID: A handbook for developing credit scoring systems in a microfinance context

Contributors to this article

» Wiki admin