Explainable Artificial Intelligence (XAI): Challenges of model interpretability


“Artificial intelligence (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable”.

This was the definition of AI offered by John McCarthy, professor at Stanford University, one of the founders of the discipline and the person credited with coining the term “artificial intelligence”.

However, as early as 1950, Alan Turing asked “Can machines think?” and formulated what would later become known as the “Turing test”: a test of a machine's ability to display intelligence indistinguishable from that of a human being. Turing proposed that a human evaluator judge natural language conversations between a person and a machine designed to generate human-like responses. If the evaluator was unable to distinguish the machine from the human, the machine would have passed the test.

Although there is controversy in this regard, many authors consider that there are already artificial intelligences that could pass the Turing test, such as OpenAI's GPT-4, although GPT-4 itself is not so sure about it. There are also more sophisticated tests, such as the Winograd schema challenge, which consists of resolving complex anaphora that require world knowledge and common sense, something that current AI does not yet seem able to do.

Conversation with GPT-4 about its ability to pass the Turing test

Even so, although the field of AI is not new, dizzying breakthroughs have been made in recent years, with applications ranging from self-driving cars to medical diagnostics, automated trading, facial recognition, energy management, cybersecurity, robotics and machine translation, to name a few.

A distinguishing feature of today's AI is linked precisely to McCarthy's definition quoted above: it is not limited to biologically observable methods and, once it reaches a certain level of complexity, it poses interpretability challenges. In other words, AI models tend to perform far better than traditional algorithms, but for any specific case it can be extremely difficult to explain why the model has produced a given result.

Although there are applications of AI where it is not so important to be able to understand or explain why the algorithm has returned a particular value, in many cases it is essential and is a regulatory requirement. For example, in the European Union, under the General Data Protection Regulation (GDPR), consumers have what is known as the “right to an explanation”:

“[...] not to be subject to a decision based solely on automated processing [...], such as automatic refusal of an online credit application [...] without any human intervention”, and [the data subject] has the right “to obtain an explanation of the decision reached [...] and to challenge the decision”.

All this has led to the development of the discipline of Explainable Artificial Intelligence (XAI), the field of study that aims to make AI systems understandable to humans, in contrast to the notion of the “black box”: algorithms in which only the results are observable and the inner workings of the model are unknown, or in which the basis for the results cannot be explained.

It can be concluded that an algorithm falls within the XAI discipline if it follows three principles: transparency, interpretability and explainability. Transparency occurs if the processes that calculate the parameters of the models and produce the results can be described and justified. Interpretability describes the ability to understand the model and present how it makes decisions in a human-understandable way. Explainability refers to the ability to decipher why a particular observation has received a particular value. In practice, these three terms are closely linked and are often used interchangeably, in the absence of a consensus on their precise definitions. 
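
To make the explainability principle concrete, here is a minimal sketch in Python (the credit-scoring features, coefficients and applicant values are hypothetical, chosen purely for illustration): in a linear model, the prediction for one observation decomposes additively into per-feature contributions, so it is possible to state exactly why that particular observation received its value.

    # Hypothetical linear scoring model; coefficients and applicant values are made up.
    intercept = -1.0
    coefficients = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.3}

    # One hypothetical applicant, with standardised feature values.
    applicant = {"income": 0.5, "debt_ratio": 1.2, "years_employed": -0.4}

    # Explainability: the score is the intercept plus the sum of per-feature contributions.
    contributions = {name: coefficients[name] * applicant[name] for name in coefficients}
    score = intercept + sum(contributions.values())

    print(f"score = {score:.2f}")
    for name, value in contributions.items():
        print(f"  contribution of {name}: {value:+.2f}")

Here the large negative contribution of debt_ratio (-1.80) would explain why this hypothetical applicant receives a low score, which is precisely the kind of per-decision explanation that the “right to an explanation” demands.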

These principles are achieved through essentially two strategies: either developing algorithms that are interpretable and explainable by nature (including linear regressions, logistic or multinomial models and certain types of deep neural networks, among others), or applying interpretability techniques as tools to achieve compliance with these principles.

XAI deals both with techniques that attempt to explain the behavior of opaque models (“black box”) and with the design of inherently interpretable algorithms (“white box”).
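
As a minimal sketch of the post-hoc strategy (the dataset, model and scikit-learn calls below are illustrative assumptions, not tools prescribed by the text), an interpretability technique such as permutation importance can be layered on top of an otherwise opaque model to reveal which features drive its results:

    # Illustrative post-hoc ("black box") interpretability sketch using scikit-learn.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque, high-performing model: individual predictions are hard to trace by hand.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Post-hoc technique: shuffle each feature and measure how much the test score drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranking = sorted(zip(X.columns, result.importances_mean),
                     key=lambda item: item[1], reverse=True)
    for name, importance in ranking[:5]:
        print(f"{name}: mean permutation importance {importance:.3f}")

By contrast, an inherently interpretable (“white box”) model, such as the linear or logistic regressions mentioned above, exposes its reasoning directly through its coefficients, so no additional technique is needed.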

XAI is essential for AI development, and therefore for professionals working in this area, due to at least three factors:

1) It contributes to building trust in decisions based on AI models; without this trust, users may be reluctant to adopt these models.

2) It is a regulatory requirement in certain areas (e.g. data protection, consumer protection, equal opportunities in the employee recruitment process, regulation of models in the financial industry).

3) It leads to improved and more robust AI models (e.g. by identifying and eliminating bias, understanding the relevant information to produce a certain result, or anticipating potential errors in observations not included in the model’s training sample). All of this helps to develop ethical algorithms and allows organizations to focus their efforts on identifying and ensuring the quality of the data that is relevant to the decision process.

Although the development of XAI systems is receiving a great deal of attention from the academic community, industry and regulators, it still poses numerous challenges.

This paper will review the context and rationale for XAI, including XAI regulations and their implications for organizations; the state of the art and key techniques of XAI; and the advances and unsolved challenges in XAI. Finally, a case study on XAI will be provided to help illustrate its practical application.

 

 

TABLE OF CONTENTS

Chapter 1. Introduction
Chapter 2. Executive summary
Chapter 3. Context and rationale for XAI
Chapter 4. Interpretability techniques: state of the art
Chapter 5. Interpretability use case
Chapter 6. Conclusions
Glossary
References


