
Interpretability in AI, a key challenge for data scientists

  • sarah61533
  • May 26
  • 3 min read

How do you come to understand the workings of a ‘black box’? Can we trust a machine and do without the opinion of an expert? Senior Lecturer at the Centre for Applied Mathematics at France’s École Polytechnique (CMAP) since 2016, Erwan Scornet looks at the challenges interpretability poses for data scientists.

Why is interpretability so important for data scientists?


Erwan Scornet: It’s a topical question at the moment because we are starting to see AI applied in sensitive areas such as the justice system, defence and healthcare. In these areas, the consequences of the decisions being taken are significant, and they always involve people, who can call on the machine to help them reach their decision.


In healthcare, it could be about choosing whether or not to give a course of treatment, which would then influence a patient’s life expectancy. As citizens, in critical situations like these, we don’t want the decision to be taken solely by an algorithm, or for the algorithm to be completely hidden from whoever is using it. In healthcare, it’s unthinkable for an algorithm to make a prediction and for that prediction to be applied without recourse to the opinion of a medical expert. We can imagine a situation where the machine makes a recommendation to the doctor, and there is an interaction between the doctor and the algorithm, so that the doctor understands how its conclusion has been reached.


In sensitive areas, a human being always has to take the final decision. It’s an essential aspect of many applications, and one that raises questions for society at large. Do we really want to live in a society where everything is automated or robotised? It’s absolutely essential to understand the societal impact of the algorithms we are developing, which is why the École Polytechnique regularly organises seminars on ethics with guest speakers. As a scientist, you must never lose sight of your work’s impact on the environment, society and the economy.


And yet, wouldn’t a decision taken by an algorithm be fairer?


Erwan Scornet: As I mentioned earlier, when it comes to sensitive areas, the final decision will always have to be taken by a person. If we leave it to an algorithm and a patient dies, who is responsible? The company that created the algorithm? The engineer who designed it? The engineers responsible for collecting the data on which the algorithm was trained? It raises complex legal questions. There will always need to be someone who weighs up the pros and cons, and who accepts responsibility for a decision.


It’s also worth bearing in mind that an algorithm is no less biased than a person. It is trained on data taken from our society, and that data carries society’s biases. It’s illusory to think that an algorithm is objective. Allowing an algorithm to do what it wants will not help us reach fairer, more equitable decisions, or build fairer societies.


However, there are areas where interpretability is not essential. For example, if a video content platform recommends a film that turns out to be poorly targeted, it’s a shame, but it’s not really important. And for most applications, you don’t need to understand how the algorithm works in order to use it. It would be a problem for a company if its recommendations were constantly wrong, but from the point of view of societal impact, it’s not going to change our lives! As users, we don’t really want to understand how these types of algorithms work. We just want them to work effectively.

