
AI interpretability: what are the limits?

  • sarah61533
  • May 26
  • 4 min read

Interpretability enables data scientists to take a step back and examine the algorithm being used: it gives meaning to the choices made by an Artificial Intelligence system. But are those choices always fair? If AI is going to be more widely used in our society, it is essential for data scientists to understand their algorithms. In this interview, Erwan Scornet, Senior Lecturer at the Centre for Applied Mathematics at France’s École Polytechnique (CMAP) since 2016, talks about the limits of interpretability and the challenges that lie ahead.


How does the interpretability of algorithms have an influence on their performance?


Erwan Scornet: When you opt for interpretable algorithms, you lose out in terms of performance: the greater the understanding, the more accuracy data scientists lose in their predictions. You are limited to a less flexible class of algorithms, which are therefore less accurate.
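To make the trade-off concrete, here is a minimal Python sketch (not from the interview – the dataset and model choices are purely illustrative) comparing a depth-2 decision tree, whose handful of rules can be read directly, with a far more flexible random forest:

# Illustrative sketch of the interpretability/accuracy trade-off.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable but rigid: a depth-2 tree amounts to a handful of readable rules.
small_tree = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# Flexible but opaque: an ensemble of hundreds of deep trees.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("depth-2 tree accuracy:", small_tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))

On most tabular datasets the forest scores higher, and the gap is exactly the accuracy the shallow, readable tree gives up.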


Work is underway to create algorithms that are both predictive and interpretable. These algorithms are built from simple decision rules of the form ‘if …, and if …, then …’. Together, the decision rules create a process that is both predictive and interpretable, but it’s not stable. Currently, I’m working with a PhD student and other colleagues on a research project to create a set of stable rules. We start from random forests and identify the rules most regularly used by the trees in those forests, which enables us to establish a set of stable rules offering a good level of accuracy.
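As a rough illustration of the general idea (a hedged sketch only, not the team’s actual method), you can traverse the trees of a fitted scikit-learn forest and count how often each split rule recurs; rules shared by many trees are candidates for a stable set:

# Hypothetical sketch: mine the most frequently recurring split rules from a forest.
from collections import Counter
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

rule_counts = Counter()
for tree in forest.estimators_:
    t = tree.tree_
    for node in range(t.node_count):
        if t.children_left[node] != -1:  # internal (splitting) node, not a leaf
            # Round thresholds so near-identical splits are counted together.
            rule_counts[(t.feature[node], round(t.threshold[node], 1))] += 1

# Rules used by many trees are the "stable" candidates.
for (feature, threshold), count in rule_counts.most_common(5):
    print(f"feature {feature} <= {threshold}: appears in {count} nodes")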


But currently, the basic paradigm is that what you gain in interpretability, you lose in accuracy – and vice versa.


Can interpretability also help to combat biases?


Erwan Scornet: Interpretability enables you to understand what an algorithm will predict, so it becomes easier to identify any obvious biases – but you cannot always remove them entirely.


For example, the US justice system uses an algorithm called COMPAS, which predicts re-offending rates based on socio-economic data. In 2016, researchers published a paper indicating that the algorithm was negatively biased against black people. Other researchers then tried to make the algorithm interpretable: they looked for simple rules that could reproduce the way the algorithm behaved. What they found was a simple model based on two variables – the person’s age and the number of crimes committed previously. So, on the face of it, there was no reason to suspect that the algorithm was biased. Instead, it was the data that contained the bias: black people in the US are more likely to be arrested and convicted than white people. So, by including the number of previous convictions, the algorithm was indirectly incorporating a bias based on a person’s skin colour.
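What those researchers did – fitting a simple model that reproduces a black box’s behaviour – is often called surrogate modelling. Here is a minimal sketch on synthetic data (purely illustrative: the feature names are hypothetical stand-ins, not the real COMPAS variables):

# Hypothetical sketch of surrogate modelling on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))  # columns 0 and 1 stand in for "age" and "priors"
y = (0.8 * X[:, 1] - 0.5 * X[:, 0] + rng.normal(scale=0.3, size=2000)) > 0

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train a small, readable tree to imitate the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["age", "priors", "f2", "f3", "f4"]))

# Fidelity: how often does the simple model agree with the black box?
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())

A high fidelity score means the two-variable surrogate really does describe how the black box behaves – which is why the COMPAS finding redirected attention from the algorithm itself to the data.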

Bias and interpretability are therefore two different notions.


What’s the difference between the interpretability and transparency of algorithms?


Erwan Scornet: Transparency is a rather vague term. According to the General Data Protection Regulation, transparency means that information about how data is handled must be clear, intelligible and easily accessible. So, you can already see that interpretability and transparency are two different things.


Algorithms often belong to private companies, which publish nothing about their technology. For an algorithm to be transparent, the code has to be made public, although even that is not enough to make it easy to understand, and the task is harder still without access to the original data. The algorithm is trained on the data, which determines its parameters; however, for reasons of confidentiality, the data is not shared. You need access to both the algorithm (in other words, the process) and the data (the raw material) in order to understand how predictions are being made, and therefore to provide transparency. But making everything public doesn’t necessarily mean that the algorithm is interpretable.


To give you an example, the APB algorithm, an early version of Parcoursup, was used to assign school students to university courses. The algorithm was made public, and was therefore transparent. Later, it was studied by a number of scientists who wanted to understand and interpret the way it worked. This led to a new Parcoursup algorithm, which is both transparent and interpretable – but this doesn’t mean that it’s optimal.


So, you cannot interpret an algorithm without transparency – but transparency and interpretability remain two different things.


What can the interpretability of algorithms bring to our future understanding of AI?


Erwan Scornet: Interpretability allows you to discover new ideas and to innovate. Prediction is not an end in itself: interpretability will enable us to discover new information, to improve our knowledge and to guarantee a degree of algorithmic reliability that will give users greater confidence in its deployment.


Final question, which is more about predictability than interpretability. Often, people interested in AI think it will be able to predict the future. Why isn’t that the case?


Erwan Scornet: When you apply Machine Learning algorithms, you’re hypothesising that the link between input and output is the same for all the data. You can imagine that this link will change over time (and as more data is collected), but the basic paradigm of statistical learning is that the world stays as it is: there are no unexpected events. For example, Covid-19 could not have been predicted using AI. When such events happen, they change the link between the input and the output, and we have no way of predicting that. Clearly, you cannot expect miracles from an algorithm. To predict such a change, you would need models containing explicit hypotheses about how the world is going to evolve. In Machine Learning, we generally try to avoid making hypotheses and concentrate instead on the data. It’s a frustrating answer for many people, starting with the data scientists!
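A small synthetic sketch (again purely illustrative) shows what happens when that hypothesis fails: a model trained while the input-output link pointed one way keeps applying that link after an unexpected event reverses it.

# Hypothetical sketch: a model confronted with a change in the input-output link.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Before": the output is driven positively by the input.
X_before = rng.normal(size=(1000, 1))
y_before = (X_before[:, 0] + rng.normal(scale=0.2, size=1000)) > 0
model = LogisticRegression().fit(X_before, y_before)

# "After" an unexpected event, the relationship flips sign.
X_after = rng.normal(size=(1000, 1))
y_after = (-X_after[:, 0] + rng.normal(scale=0.2, size=1000)) > 0

print("accuracy before the shift:", model.score(X_before, y_before))
print("accuracy after the shift:", model.score(X_after, y_after))  # collapses far below chance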
