AI & Ethics
- sarah61533
- May 26
Ethical AI: "Robotics experts, computer scientists, lawyers and philosophers need to join forces to deal with the complexity of the issues."

A lot of work is being carried out by national and international institutions on ethical artificial intelligence (AI), reflecting the growing role of AI in our daily lives, particularly through the use of connected objects. However, its growth also raises many issues around the protection of human rights. Jean-Gabriel Ganascia is an AI researcher at LIP6 (a joint research lab of Sorbonne University in Paris and France's national research body, the CNRS) and a former president of the CNRS Ethics Committee (COMETS). He shares his philosophical approach to the challenges of ethical AI with Homiwoo.
The idea that a machine can become autonomous has always inspired people in science and the arts. For Jean-Gabriel Ganascia, however, there has been a fundamental misunderstanding about AI from the very start: "The aim of this discipline is not, as some fear, to create an intelligent and autonomous machine that would be a double of a human being, but rather to study human intelligence by simulating the various cognitive functions (perception, reasoning, memory, learning, etc.) on machines."
The technology is certainly a dazzling success story: from telephones to cars, a whole 'Internet of Things' has been developed using AI to help us in our daily tasks. Systems based on AI can process data on a scale far beyond unaided human intelligence, drawing a conclusion or making a recommendation in just a few seconds from millions of parameters.
Regulating ethical AI: mission impossible?
While AI is delivering advances in many areas, the way its algorithms are created requires a proper framework to prevent existing inequalities from being perpetuated or even amplified, and to guard against security breaches, among other issues. Faced with these key challenges, many institutions are working towards a common goal for ethical AI, albeit a goal that is "all the more difficult to achieve as ethics are based on moral standards that usually reflect the customs and traditions of a given society. However, given the relatively recent nature of digital technology, we have no such traditions in this area," underlines Jean-Gabriel Ganascia.
As an example, the European Union published a draft regulation in April 2021 to harmonise the rules on artificial intelligence. In the preliminary work, the idea was to apply the principles of autonomy, non-maleficence (doing no harm) and justice, which form the basis of medical ethics, to the digital domain. "These values are difficult to establish as absolute principles in the digital environment," says Jean-Gabriel Ganascia. "Take the example of an aircraft pilot who falls ill in mid-flight: in the absence of a human alternative, if there is no co-pilot, for example, the machine has to take control to ensure passenger safety." The example speaks for itself: human autonomy cannot handle every situation. At the same time, it is risky to base all AI ethics on a single set of governing principles, as every situation is different and requires a specific assessment.
Ethical AI: best practice
So, what is best practice for developing an AI system that respects human rights? Jean-Gabriel Ganascia recommends an approach that takes into account the specific context and that systematically includes a human presence. While he points out that "all recommendations, however numerous, cannot prevent some applications being problematic from an ethical point of view", he also offers some advice:
Introduce moral imperatives in the programming of AI systems;
Subject the trained algorithms to statistical evaluation to check for any drift;
Improve the judgment of machines, so they can not only assess any situation, but can also defer to a human being should there be any risk or doubt;
Create a "supervisor" that can automatically prevent a planned course of action from being taken, if doing so would breach the accepted rules for a given situation;
As far as possible, promote transparency around the system by providing an accurate description, understandable by humans, of the logic that led to its decisions.
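Two of the recommendations above, the "supervisor" that blocks rule-breaching actions and the machine that defers to a human in case of doubt, can be illustrated in a few lines of Python. This is a minimal sketch under invented assumptions: the function names, the rule set and the confidence threshold are hypothetical and are not part of Ganascia's proposals.

```python
# Hypothetical confidence threshold: below this, the machine defers to a human.
DEFER_THRESHOLD = 0.9

# Hypothetical rules the "supervisor" enforces before any action is taken.
FORBIDDEN_ACTIONS = {"share_personal_data", "override_human_input"}


def supervisor_allows(action: str) -> bool:
    """Automatically block any planned action that breaches the accepted rules."""
    return action not in FORBIDDEN_ACTIONS


def decide_or_defer(action: str, confidence: float) -> str:
    """Carry out the action only if it is permitted and the system is
    confident enough; otherwise hand control back to a human operator."""
    if not supervisor_allows(action):
        return "blocked_by_supervisor"
    if confidence < DEFER_THRESHOLD:
        return "defer_to_human"
    return action


print(decide_or_defer("recommend_route", 0.95))      # recommend_route
print(decide_or_defer("recommend_route", 0.50))      # defer_to_human
print(decide_or_defer("share_personal_data", 0.99))  # blocked_by_supervisor
```

The point of the sketch is the ordering: the supervisor's rule check runs before any confidence test, so a forbidden action is never taken however certain the system is.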
Lastly, the researcher encourages the formation of multidisciplinary think tanks on the topic of AI ethics: "Robotics experts, computer scientists, lawyers and philosophers really need to join forces to deal with the complexity of these issues." The road to ethical AI is being built as different uses of the technology are developed: one step at a time.
