October 4
9am – 9:30am: Welcome
9:30am – 10am: Introduction by the program leadership
- David SADEK, President of the Executive Committee
- Paul LABROGÈRE, General Director of SystemX
- Rodolphe GELIN, President of the Steering Committee
- Yannick BONHOMME, Director of the Confiance.ai Program
- Julien CHIARONI, Director of the AI Grand Défi
10am – 10:30am: Batch 1 summary and feedback
- Loïc CANTAT, Technical coordinator of the Confiance.ai program
- Yves NICOLAS and Fabien TSCHIRHART, EC1 projects co-leaders
- Feedback by Jacques YELLOZ, Safran
10:30am – 11am: Break and visit of the villages
11am – 12:30 p.m.: Batch 2 general overview and detailed presentations
- Building Batch 2 with Loïc CANTAT
- New use cases integrated into the program with Loïc CANTAT and Amélie BOSCA
- The V2 for partners:
- End-to-end approach for trusted AI Systems with Boris ROBERT, Hugo Guillermo CHALE GONGORA and Fabien TSCHIRHART
- Bringing trust from ODD to Data with Morayo ADEDJOUMA, Angélique LOESCH and Flora DELLINGER
- RobustAI and Monitoring with Yassine TAHIRI and Rodolphe GELIN
- Explainability: Tools and Processes for understanding with Philippe DEJEAN and Antonin POCHE
- Trustworthy Embedded AI with Thomas WOUTERS and Jacques YELLOZ
- Compliance Activities, HMI Interaction with Christophe ALIX
- Q&A session
12:30 p.m. – 2 p.m.: Lunch and visit of the villages
2 p.m. – 3:30 p.m.: Workshops in parallel
Workshop registration takes place in your personal space, under the “Program” tab, once you have registered for Confiance.ai Days.
- Amphitheater – “Bringing trust from ODD to Data” by Morayo ADEDJOUMA and Flora DELLINGER: Introducing AI components into critical systems requires close attention to data. Producing quality data calls for novel processes, such as improving data engineering or taking the ODD (Operational Design Domain) as an input. In this village, we outline key elements for bringing trust into AI-based critical systems, from ODD to data.
- Classroom EE004 – “RobustAI & Monitoring” by Hatem HAJRI and Fateh KAAKAI: Robustness to outliers is an essential property of AI trustworthiness, ensuring that invalid input data will not lead to an unsafe system state. Robustness can be achieved “by design”, and it can also be monitored by a dedicated component, the monitor, running in parallel to the AI model. Robustness and monitoring are therefore closely related topics in the lifecycle of an AI product. In the Robustness & Monitoring village, we present methods and tools that are already integrated, or will be integrated, into the Confiance.ai environment.
- Classroom EC001 – “Trustworthy Embedded AI” by Thomas WOUTERS and Jacques YELLOZ: How to guide and handle the deployment of trustworthy AI components on target hardware in the context of industrial applications?
The objective of the “Trustworthy Embedded AI” village is to show you how to address these challenges through toolchain benchmarking, resource estimation and algorithm compression for system specification and design, as well as the compatibility of COTS frameworks with certification standards.
- Classroom EC002 – “Explainability Tools: Processes for understanding” by Philippe DEJEAN: The explainability of models is one of the foundations of trust. Beyond studying the usability of existing methods and algorithms on the use cases proposed by the Confiance.ai partners, explainability is studied in all its forms and throughout the data and model definition processes. We propose an overview of these explainability studies, which prepare the upcoming work on interpretability.
- Classroom EA008 – “End-to-end approach for Trusted AI systems” by Hugo Guillermo CHALÉ GONGORA and Boris ROBERT: In order to assist industrial members in the production of trusted AI-based systems, one of Confiance.ai’s main goals is to deliver:
- an end-to-end engineering process, covering the specificities of AI in system development activities and system IVVQ strategies,
- a method to characterize and evaluate a trust score for an AI-based system,
- and a configurable Trustworthy Environment supporting this engineering process and this evaluation of trust, by integrating tools developed or selected by Confiance.ai.
- Classroom EE005 – “Batch 3 perspectives” by Loïc CANTAT: Identification of the key challenges that might be addressed during Batch 3, and a proposed prioritization among them.
3:30 p.m. – 5 p.m.: Break and visit of the villages
5 p.m. – 6:30 p.m.: Workshops in parallel
Workshop registration takes place in your personal space, under the “Program” tab, once you have registered for Confiance.ai Days.
- Amphitheater – “Bringing trust from ODD to Data” by Morayo ADEDJOUMA and Flora DELLINGER
- Classroom EE004 – “RobustAI & Monitoring” by Hatem HAJRI and Fateh KAAKAI
- Classroom EC001 – “Trustworthy Embedded AI” by Thomas WOUTERS and Jacques YELLOZ
- Classroom EC002 – “Explainability Tools: Processes for understanding” by Philippe DEJEAN
- Classroom EA008 – “End-to-end approach for Trusted AI systems” by Hugo Guillermo CHALÉ GONGORA and Boris ROBERT
- Classroom EE005 – “Batch 3 perspectives” by Loïc CANTAT
7 p.m.: Afterwork
October 5
9am – 9:30am: Welcome
9:30am – 10:15am: Opening session
- Michel MORVAN, IRT SystemX president
- Bruno BONNELL, lead of France 2030 (on video)
- Romain SOUBEYRAN, President of CentraleSupélec
- Michel GUIDAL, Vice President for Science and Engineering, University of Paris-Saclay
10:15am – 10:45am: “From sensing to understanding a wild world”, keynote by Patrick PEREZ, Scientific Director (Valeo)
With the fast progress of machine learning, autonomous systems that can operate safely in the wild seem within reach. Yet the road is still strewn with major challenges concerning lifetime performance, robustness and reliability, data availability, and model interpretability, among others. Drawing on examples from the automation of road-vehicle driving, where multi-modal sensing aims to build a thorough understanding of the surrounding world in order to forecast and act, this presentation will discuss these challenges and promising research efforts toward solving them.
10:45am – 11:30am: Break and visit of the villages
11:30am – 12 p.m.: “TAILOR – Advancing the Scientific Foundations for Trustworthy AI”, Fredrik HEINTZ, TAILOR Coordinator
Europe has taken a clear stand: we want AI, but not just any AI. We want AI that we can trust. The TAILOR Network of Excellence is dedicated to developing the scientific foundations for Trustworthy AI by integrating learning, optimisation and reasoning, and thus to helping realize the European AI vision. This talk will present the TAILOR network and some of our latest research results.
12 p.m. – 12:30 p.m.: “Designing for Values in AI”, Stefan BUIJSMAN, Delft University
To develop AI responsibly, it is important to actively consider ethical values throughout the design process. Methodologies such as Design for Values, conceived at TU Delft, are available, but the major outstanding challenge is making the step from values to design requirements. This talk will look at recent interdisciplinary cooperation around meaningful human control and explainability on precisely this step from values to design.
12:30 p.m. – 2 p.m.: Lunch and visit of the villages
2 p.m. – 3:15 p.m.: Outcomes of projects
- Project “Process, methodologies and guidelines” with Hugo Guillermo CHALÉ GONGORA and Boris ROBERT
- Project “IVV&Q strategy toward homologation / certification” with Morayo ADEDJOUMA and Christophe ALIX
- Project “Integration and use cases” with Yves NICOLAS and Fabien TSCHIRHART
- Project “Data, information and knowledge engineering for trusted AI” with Flora DELLINGER and Angélique LOESCH
- Project “Design for Trustworthy AI” with Yassine TAHIRI and Hatem HAJRI
- Project “Characterization and qualification of trustworthy AI” with Philippe DEJEAN and Rodolphe GELIN
- Project “Target Embedded AI” with Thomas WOUTERS and Jacques YELLOZ
3:15 p.m. – 4 p.m.: Flash presentations: Posters & PhDs
4 p.m. – 5 p.m.: Break and visit of the villages
5 p.m. – 6:30 p.m.: Scientific workshops
Workshop registration takes place in your personal space, under the “Program” tab, once you have registered for Confiance.ai Days.
- Classroom EE004 – “HybridAI” (chair: Juliette MATTIOLI): Usually, the term “hybrid” refers to the combination of two different types or species. Applied to AI, it generally refers to the extension or optimization of AI models based on machine learning (ML) with expert knowledge. But Hybrid AI (HAI) is more than a combination of symbolic AI and ML approaches: HAI encompasses any synergistic combination of various AI techniques, as well as AI enhanced by mathematics or physics, such as knowledge-informed neural networks. This workshop is a real opportunity to hear about some of the latest advances in HAI. Program:
- “Physical simulation/Machine learning hybrid modelling”, Mouadh YAGOUBI (IRT SystemX) and Milad LEYLI-ABADI (IRT SystemX)
- “Nook, a world first for Hybrid AI”, Jean-Baptiste FANTUN (Nukkai)
- “Designing Rule-based Decision-Policy with Reinforcement Learning”, Nicolas MUSEUX (Thales Research & Technology – France)
- Q&A session and brainstorm to identify technical issues in addressing trustworthy Hybrid AI.
- Classroom EC001 – “Data for AI” (chair: Angélique LOESCH): The performance of AI modules is strongly correlated with the data available to train them, evaluate them or guarantee their reliability. In the “Data for AI” workshop, different ways of exploiting data will be addressed:
- “Approaches to practical Zero-Shot Learning”, Hervé LE BORGNE (CEA)
- “Multi-modality for autonomous vehicle perception”, Sonia KHATCHADOURIAN (Valeo) and Florian CHABOT (CEA)
- “Data anonymization – issues, constraints and feedback”, Souhaiel KHALFAOUI (Valeo) and Marc NABHAN (Air Liquide)
- Classroom EC002 – “Synthetic Data” (Chair: Guillaume OLLER): Access to quality data in terms of variety, quantity and labelling is crucial for building efficient and trusted AI. In this workshop we will present and discuss different use cases in which synthetic data can provide an answer to some of these challenges. Program:
- “Introduction on Synthetic Data for AI and AI for Synthetic Data”, Bertrand LEROY (Renault)
- “Validated and Spectrally Faithful Synthetic Data”, Pierre NOUBEL (OKTAL-SE)
- “Realistic Crowd and Traffic Simulation for Synthetic Data”, Stéphane DONIKIAN (GOLAEM)
- “Data augmentation with style transfer and object insertion”, Emmanuel BENAZERA (JoliBrain)
- “Synthetic-to-real style transfer”, Ricardo MENDOZA (CERVVAL)
- “Confiance.ai ongoing actions on Synthetic Data”, Amélie BOSCA (IRT SystemX)
- Q&A session
- Classroom EA008 – “ODD, Safety, Quality assurance” (Chair: Morayo ADEDJOUMA): The workshop will focus on methods and systematic approaches for verification and validation of trustworthy and resilient autonomous and/or AI systems in their context. Beyond the results of the Confiance.ai program, this workshop will be an opportunity to share the point of view developed in the United Kingdom within the framework of the UKRI Trustworthy Autonomous Systems program, thanks to our invited speaker, Prof Radu CALINESCU. The keynote will be followed by a Q&A session. Program:
- Introduction
- Keynote, Prof Radu CALINESCU, Principal Investigator at UKRI Resilience Node (University of York)
- Q&A session
- Classroom EE005 – “Recent advances and challenges on explainability” (Chair: Bertrand BRAUNSCHWEIG): The goal of the workshop is to highlight a few individual developments regarding XAI (explainable AI) and to generate discussions about what can be done in the remaining years of Confiance.ai. Four papers presenting various aspects of explainability will be followed by an invited talk by Gilles DOWEK, Inria, on fundamental issues regarding explanations and proofs. Program:
- “Design of explainable models”, Elodie GUASCH (Airbus Protect)
- “Diffusion models for Counterfactual Explanation”, Guillaume JEANNERET SAN MIGUEL (University of Caen GREYC)
- “Fairness and explainability”, Jean-Michel LOUBES (ANITI)
- “Explainability developments by the SINCLAIR Lab”, Christophe LABREUCHE (Thales)
- “Towards a definition of the notion of explanation”, Gilles DOWEK (Inria)
- Q&A session
8 p.m.: Networking dinner
Domaine de Quincampoix, route de Roussigny, 91470 LES MOLIÈRES (see practical information).
October 6
9am – 9:30am: Welcome
9:30am – 10am: Introduction
- Bertrand BRAUNSCHWEIG, Confiance.ai scientific coordinator
- Julien CHIARONI, Director of the AI Grand Défi
10am – 11:30am: Presentation of projects/programs and scientific conferences
- “Confiance.ai program”, Juliette MATTIOLI (Thales) and Rodolphe GELIN (Renault)
- “Safe.trAIn”, Marc ZELLER (Siemens) and Thomas WASCHULZIK (Siemens)
- “Confiance.ia program”, Françoys LABONTÉ (CRIM, Montreal)
- “Trusted-AI: a DFKI perspective”, Philipp SLUSALLEK (DFKI)
11:30am – 12 p.m.: Break and visit of the villages
12 p.m. – 1 p.m.: Presentation of projects/programs and scientific conferences
- “Assessment of AI-based system Trustworthiness”, Agnès DELABORDE (LNE) and Henri SOHIER (IRT SystemX)
- “Tools and Methods for assessing the Trustworthiness of AI Models, Components and Systems”, Dr Daniel BECKER (Fraunhofer IAIS & ZERTIFIZIERTE KI)
- “How standardization brings AI ethics into practice”, Sebastian HALLENSLEBEN (VDE)
1 p.m. – 2 p.m.: Lunch and visit of the villages
2 p.m. – 3 p.m.: In parallel: visit of the villages, or scientific talks from Quebec (by videoconference)
- “AI ethics”, Joé MARTINEAU, Associate Professor, HEC Montréal, Department of Management
- “Cybersecurity, safety and privacy”, Ulrich AIVODJYN, Professor, École de Technologie Supérieure du Québec, Département de Technologies Logicielles
- “Sustainable AI”, Soumaya CHERKAOUI, Full Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
3 p.m. – 4:45 p.m.: Standardization and labeling activities
- “European AI Standards”, Patrick BEZOMBES (CEN/CENELEC JTC21)
- “Current standardization activities within the project CERTIFIED AI”, Christine FUß (DIN)
- “EUROCAE AI standard”, Fateh KAAKAI (Thales)
- “How to assess AI-systems – a field report”, Bernd EISEMANN (CertAI and Munich Reinsurance Company)
- “AI Certification programme”, Agnès DELABORDE (LNE)
4:45 p.m. – 5:30 p.m.: Round table on international collaboration and the AI Act
- Philipp SLUSALLEK (DFKI)
- Françoys LABONTÉ (CRIM Montreal)
- Patrick BEZOMBES (CEN/CENELEC)
- Agnès DELABORDE (LNE)
5:30 p.m. – 6 p.m.: Conclusion
6 p.m. – 7 p.m.: Afterwork