Validation of rating systems under the IRB approach

EBA (European Banking Authority)

The EBA must develop and maintain an up-to-date handbook on the supervision of financial institutions, which is to set out supervisory best practices and high-quality methodologies and processes.

The assessment methodology and objectives of the validation function are described in CRD/CRR. However, the EBA has identified some heterogeneity in the expectations of competent authorities (CA) relative to the validation function.



Executive summary

The EBA has published a Consultation Paper (CP) on the supervisory handbook on the validation of IRB rating systems, which provides some general guidance on the expectations relative to the validation function.

Main content

This Technical Note summarizes the main aspects of the CP:

  • Context of the validation function. The assessment of model performance is carried out by several functions: i) the credit risk control unit (CRCU), ii) the validation function and iii) the Internal Audit (IA). However, EU regulation requires institutions to set up a specific, independent validation function with its own responsibilities.
  • Initial validation vs. recurrent validation. The guidelines differentiate between initial and recurrent validations to determine the actions to be taken by entities within the validation process:
    • Initial validation of newly developed rating systems. It refers to the validation of either newly introduced models or of changes or extensions to existing models. In the first validation it is important to address the model design and risk quantification choices; in the case of a model change, the validation function is expected to compare the performance of the new models with the previous ones.
    • Recurrent validation. It refers to the validation of either an unchanged model or of the unchanged aspects of a model. The subsequent validation differs from the first validation in two ways: i) it benefits from additional data and observations; and ii) it has at its disposal the conclusions from the first validation.

In both the initial and the recurrent validation there are certain specificities relating to the performance of the core model and to the modelling environment.

  • Validation content. A distinction must be made between the tasks related to pure model performance assessment and those dealing with the modelling environment:
    • Assessment of the core model performance. One of the objectives of the validation function is to assess the core performance of the rating system, and this assessment can be broken down into:
      • Risk differentiation: the consistency and comprehensiveness of the rating assignment process; and the accuracy of the rating assignment in the model development.
      • Risk quantification: the accuracy of the best estimates; the conservatism of the risk estimates; and, for the Loss Given Default (LGD) and Credit Conversion Factor (CCF) parameters, the appropriateness of the estimates.
      • Specificities: relating to the validation of defaulted exposures; credit risk mitigation; and exposures risk-weighted according to the slotting approach.
    • Assessment of the modelling environment. To ensure a proper assessment of the data quality and maintenance, the data quality framework should clearly define policies, roles and responsibilities in data processing and data quality management. Furthermore, the validation function is expected to verify the adequacy of the implementation of internal ratings and risk parameters in IT systems.
  • Validation challenges.
    • Use of external data in the model development. The validation of a rating system built on external data is expected to follow five principles: i) representativeness, ii) access to data, iii) methodological choices’ assessment, iv) performance assessment, v) data quality.
    • Outsourcing of validation tasks. The institution is expected to perform a comprehensive analysis of its compliance with all regulatory requirements on outsourcing.
    • Validation in the context of data scarcity. The validation of rating systems in a context of data scarcity brings additional challenges, such as the adaptation of the validation policy.
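To make the distinction between risk differentiation and risk quantification concrete, the following is a minimal, illustrative sketch (not part of the EBA handbook) of two checks a validation function commonly runs on a PD model: discriminatory power via the Gini coefficient (2·AUC − 1), and calibration via a comparison of the average estimated PD with the observed default rate. The portfolio data and metric choices are hypothetical assumptions for illustration only.

```python
def auc(scores, labels):
    """Mann-Whitney AUC: probability that a defaulter is assigned
    a higher score than a non-defaulter (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical portfolio: estimated PDs and observed default flags (1 = default)
pd_estimates = [0.01, 0.02, 0.05, 0.10, 0.20, 0.30]
defaults     = [0,    0,    0,    1,    0,    1]

# Risk differentiation: how well the model ranks defaulters above non-defaulters
gini = 2 * auc(pd_estimates, defaults) - 1

# Risk quantification: average estimated PD vs. observed default rate
avg_pd = sum(pd_estimates) / len(pd_estimates)
odr = sum(defaults) / len(defaults)

print(f"Gini: {gini:.2f}")
print(f"Average PD {avg_pd:.3f} vs observed default rate {odr:.3f}")
```

In practice the validation function would apply such metrics per rating grade, over several observation periods, and against predefined thresholds set out in the validation policy; this sketch only illustrates the two assessment dimensions named above.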

Next Steps:

  • The draft supervisory handbook has been published for a three-month consultation period (until 28 October 2022). The responses received during the consultation period will be considered when finalising the handbook.

Download the technical note Validation of rating systems under the IRB approach.