Portrait of Bertrand Braunschweig – Scientific coordinator of Confiance.ai

18 Oct 2023

“The Confiance.ai program highlights the synergy between actors from varied backgrounds. For me, this is the only possible way to tackle a problem of such magnitude: creating operational AI in contexts with high economic, social and societal stakes.”

Can you introduce yourself and your background?

Bertrand Braunschweig: I have spent most of my career in the field of artificial intelligence, although I started out on quite different subjects, in techno-economic modeling and simulation in the oil industry. Subsequently, for twelve years, I led an AI research group at IFP, now IFP Energies Nouvelles; then I joined the National Research Agency (ANR), where for several years I headed the STIC department, which funded collaborative research in digital technology and applied mathematics in France. I then spent around ten years at Inria, where I directed two great research centers, Rennes and Saclay, and coordinated the national research strategy on artificial intelligence, which the State had asked Inria to oversee. Even though I started in industry, over my career I have moved steadily closer to the world of research, out of personal interest and because it is a necessity for the development of the economy and society. Today my main activity is serving as scientific coordinator of the Confiance.ai program, which I have done since its beginnings at the end of 2020.


How would you define trustworthy AI?

Bertrand Braunschweig: Trust in artificial intelligence is an absolutely essential subject for what we call, in Confiance.ai, critical applications: applications critical to people’s lives, to security, to the economy and to the environment. The same goes for high-risk applications, in the European Commission’s terminology. Many factors contribute to trust:

  • Technological factors: those we mainly work on in the program, such as robustness, precision, safety and security;
  • Human–system interaction factors: transparency, explainability and keeping humans in control;
  • Sociological factors, so that the systems we develop are accepted by society: for example, being unbiased, resource-efficient and inclusive.

Trustworthy AI must answer all of these big questions; it is therefore the combination of technological factors, interaction factors and sociological factors.


What is your long-term vision for Confiance.ai as scientific coordinator of the program?

Bertrand Braunschweig: The Confiance.ai program addresses scientifically difficult topics, such as robustness, precision and explainability, all of which have been the subject of research for several years and are far from fully resolved today, especially as the complexity of AI systems keeps increasing. That complexity was already considerable with the arrival of deep learning in the 2010s, and it is becoming even greater with large language models and large vision models, which pose a new challenge due to their size and structure (we are talking about hundreds of billions of parameters). All the technologies that we are capable of applying to “small” systems are called into question when we work on large systems such as these, in particular when it comes to reasoning about their internal models. There are also many formalisms used in AI, both in numerical AI, with all the neural network architectures that have been imagined, and in other sub-domains of AI (symbolic AI, planning, constraints, etc.). In summary, many scientific challenges are posed, challenges that we have begun to address in the program with significant results for a number of them, even though many will remain subjects of research in the years to come.


How is the scientific aspect handled within Confiance.ai? What is a scientific challenge?

Bertrand Braunschweig: All of the work carried out in Confiance.ai has a very important scientific basis; science is therefore an essential element of the program. We approach it in several ways. First, through scientific monitoring: many Confiance.ai researchers closely follow the evolution of technologies, methods, algorithms and models, so we are able to provide a highly relevant state of the art on all subjects related to trust in AI.

We also run the program through a dedicated working group which organizes many collective events: scientific seminars every two weeks, given by members of the program or by external researchers, and quarterly scientific days where one or two subjects are generally treated in depth. The working group also contributes to the annual Confiance.ai event, Confiance.ai Day, which is open to the French and international communities and always devotes time to science. It encourages scientific publications by members of the program and manages the publications workflow: we verify a certain number of criteria so that the program’s scientific publications are clearly identified, carry the messages we want to convey to the outside world, and respect intellectual property constraints.

Finally, the working group maintains the list of the program’s scientific challenges. We fairly quickly identified the main categories of challenges on which Confiance.ai would work, such as the design, validation and deployment of trusted AI components; mastering the data used for learning, throughout its life cycle; and the methods of interaction between humans and systems. We then analyzed these scientific challenges in much more detail, which resulted in a list of around 70 individual challenges that we relate to all the developments made within the program’s action sheets. In the coming weeks, we will gradually publish texts summarizing these scientific challenges on the new Confiance.ai website, which will allow you to better understand what we are working on.