Portrait of Julien Chiaroni – Former director of the Grand Défi IA at the Secrétariat Général pour l’Investissement (SGPI)

29 Jan 2024

“I believe, in any case I hope, that the collaborations of the Confiance.ai program have made it possible to lay the foundations of, or at least to contribute to, this ecosystem of trust in France, in Europe, but also internationally.”

Can you introduce yourself and your background?

Julien Chiaroni: I would first like to thank you for this opportunity to express myself, bearing in mind that I left my position as director of the Grand Défi on “security, reliability and certification of AI-based systems” almost a year ago and therefore no longer hold that responsibility. I have since joined the CEA LIST teams.

Since the start of my career, I have held operational positions of increasing responsibility in technological research, and more particularly in digital technology. I started at LETI in Grenoble working on semiconductors, then moved to LIST in Saclay on themes such as AI and cybersecurity, before joining the General Secretariat for Investment (SGPI). My trajectory has therefore run from hardware (electronics) to software, giving me a cross-cutting view of the digital field, as illustrated by the pairing of AI and computing power.

The Confiance.ai program is one of the three pillars of the Grand Défi strategy, validated by the steering committee of the Innovation Council. It aims to bring together a large industrial and academic ecosystem, French but also international, to address the issues of trust in AI-based systems and to prepare for the implementation of the future European regulation, which was then under negotiation. Collaboration has therefore been one of the axes the collective has pursued for several years, and I am delighted to see the results today.


How would you define trustworthy AI?

Julien Chiaroni: It is very difficult to give a general definition of trustworthy AI, just as it is for AI itself. First of all, it seems important to me to clarify that we are talking about AI-based systems, and not just AI, which is a general-purpose technology, even if it has the particularity of carrying the function of the system. Once this has been clarified: we expect products to be valid, robust, resilient, fair, understandable, ethical, reliable, reproducible, safe and even certifiable. AI is no exception; if anything, the expectation is reinforced by a series of inherent risks, including the difficulty of accounting for the results produced by algorithms. We therefore refer to properties and values that the AI-based system must respect, in other words a set of trust characteristics. These obviously depend on the uses and the associated risks. They are either technical (for example robustness), related to the interaction between the system and humans (for example explainability), or ethical (for example bias).

The issue is twofold:

  • agree on all of these characteristics and their definitions: the standards bodies have worked on producing taxonomies, and the Confiance.ai program has proposed one as well;
  • ensure that these characteristics are “operationalizable” for manufacturers who want, or need, to refer to them and to guarantee compliance with the future European regulation, the AI Act; a minimal sketch of what this could look like follows this list.
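
To make the second point concrete, here is a minimal, purely illustrative sketch of how a taxonomy of trust characteristics could be turned into a machine-readable compliance checklist. The three categories and the example characteristics come from the answer above; the class names, fields and the use case are hypothetical and are not taken from the Confiance.ai taxonomy or the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class Category(Enum):
    TECHNICAL = "technical"        # e.g. robustness
    HUMAN_INTERACTION = "human"    # e.g. explainability
    ETHICAL = "ethical"            # e.g. bias


@dataclass
class Characteristic:
    name: str
    category: Category
    satisfied: bool = False        # set once evaluation/audit evidence exists


@dataclass
class TrustProfile:
    """A use-case-specific selection of trust characteristics."""
    use_case: str
    characteristics: list = field(default_factory=list)

    def compliance_gaps(self):
        """Names of characteristics not yet backed by evidence."""
        return [c.name for c in self.characteristics if not c.satisfied]


# The characteristics retained depend on the use and its associated risks.
profile = TrustProfile(
    use_case="aeronautics perception function",  # hypothetical example
    characteristics=[
        Characteristic("robustness", Category.TECHNICAL),
        Characteristic("explainability", Category.HUMAN_INTERACTION),
        Characteristic("absence of unjustified bias", Category.ETHICAL),
    ],
)
print(profile.compliance_gaps())  # all three remain to be evidenced
```

In practice each characteristic would be backed by evaluation or audit evidence rather than a simple boolean, but the structure shows how requirements can be tied to a given use case and its risks.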


When you were director of the Grand Défi on AI, you initiated several collaborations between Confiance.ai and other international initiatives. Which ones? What were your main motivations behind these collaborations?

Julien Chiaroni: I would like to point out that the collective initiated these international collaborations and that I contributed to them. To cite an example that illustrates my point: the presence of a Thales team both in France and in Quebec was the trigger for an ambitious collaboration between two ecosystems at the forefront of AI. Everyone then played a driving role in making it happen operationally. But to come back to your question: numerous international cooperations have been initiated by the Confiance.ai program. Some relate to research, for the moment with Quebec through the Confiance.ia program, while discussions are continuing with Germany and the United Kingdom. Others strengthen work on the norms, standards and labels currently being drafted to support the future European regulation, the AI Act, whether with Afnor, Positive.ai, VDE or IEEE. These actions are either generic or target a specific application sector, such as aeronautics with Eurocae and EASA.

The motivation is simple: to provide a scientific and technological response to a socio-economic issue that concerns us all, and to create digital commons to address it.


How do these collaborations contribute to resolving regulatory issues and removing scientific and technological obstacles around trustworthy AI?

Julien Chiaroni: I think the partners of the Confiance.ai program would be better placed than I am to speak precisely about the scientific and technological content of the collaborations that were put in place, so I will answer from another angle. From the moment we consider that the use of AI in systems, products and services presents risks, and that we agree on the risks linked to uses and on the values to be respected, it remains to define how to get there. We then necessarily think of regulation. And I would add norms, standards, solution offerings, evaluation and audit bodies, and so on. The outline of an ecosystem of trust then emerges, enabling operationalization, that is, concrete implementation, alongside private actors and in line with citizens’ expectations.

I believe, in any case I hope, that the collaborations of the Confiance.ai program have made it possible to lay the foundations of, or at least to contribute to, this ecosystem of trust in France, in Europe, but also internationally.