October 5
9am – 9:30am: Welcome
9:30am – 10:15am: Opening session
- Michel MORVAN, IRT SystemX president
- Bruno BONNELL, pilot of France2030 (on video)
- Romain SOUBEYRAN, CentraleSupélec president
- Michel GUIDAL, University of Paris-Saclay vice president Science and Engineering
10:15am – 10:45am: “From sensing to understanding a wild world” conference, Patrick PEREZ, Scientific Director (VALEO)
With the fast progress of machine learning, autonomous systems that can operate safely in the wild seem within reach. Yet the road remains strewn with major challenges, including lifetime performance, robustness and reliability, data availability, and model interpretability. Drawing on examples from the automation of road-vehicle driving, where multi-modal sensing aims to build a thorough understanding of the surrounding world in order to forecast and act, this presentation will discuss these challenges and promising research efforts toward solving them.
10:45am – 11:30am: Break and visit of the villages
11:30 – 12 p.m.: “TAILOR – Advancing the Scientific Foundations for Trustworthy AI”, Fredrik HEINTZ, Coordinator TAILOR (TAILOR)
Europe has taken a clear stand: we want AI, but not just any AI. We want AI that we can trust. The TAILOR Network of Excellence is dedicated to developing the scientific foundations for Trustworthy AI by integrating learning, optimisation and reasoning, thereby helping to realize the European AI vision. This talk will present the TAILOR network and some of its latest research results.
12 p.m. – 12:30 p.m.: “Designing for Values in AI”, Stefan BUIJSMAN, Delft University
To develop AI responsibly, it is important to actively consider ethical values throughout the design process. Methodologies such as Design for Values, conceived at TU Delft, are available, but the major outstanding challenge is making the step from values to design requirements. This talk will look at recent interdisciplinary cooperation around meaningful human control and explainability that addresses precisely this step from values to design.
12:30 p.m. – 2 p.m.: Lunch and visit of the villages
2 p.m. – 3:15 p.m.: Outcomes of projects
- Project “Process, methodologies and guidelines” with Hugo Guillermo CHALÉ GONGORA and Boris ROBERT
- Project “IVV&Q strategy toward homologation / certification” with Morayo ADEDJOUMA and Christophe ALIX
- Project “Integration and use cases” with Yves NICOLAS and Fabien TSCHIRHART
- Project “Data, information and knowledge engineering for trusted AI” with Flora DELLINGER and Angelique LOESCH
- Project “Design for Trustworthy AI” with Yacine TAHIRI and Hatem HAJRI
- Project “Characterization and qualification of trustworthy AI” with Philippe DEJEAN and Rodolphe GELIN
- Project “Target Embedded AI” with Thomas WOUTERS and Jacques YELLOZ
3:15 p.m. – 4 p.m.: Flash presentations: Posters & PhDs
4 p.m. – 5 p.m.: Break and visit of the villages
5 p.m. – 6:30 p.m.: Scientific workshops
Registration for the workshops is done in your personal space, under the “Program” tab, once you have registered for the Confiance.ai Days.
- Classroom EE004 – “HybridAI” (chair: Juliette MATTIOLI): Usually, the term “hybrid” refers to the combination of two different types or species. Applied to AI, it generally refers to the extension or optimization of machine learning (ML) models with expert knowledge. But Hybrid AI (HAI) is more than a combination of symbolic AI and ML approaches: HAI encompasses any synergistic combination of AI techniques, as well as AI enhanced by mathematics or physics, such as knowledge-informed neural networks. This workshop is a real opportunity to hear about some of the latest innovative advances in HAI. Program:
- “Physical simulation/Machine learning hybrid modelling”, Mouadh YAGOUBI (IRT SystemX) and Milad LEYLI-ABADI (IRT SystemX)
- “Nook, a world first for Hybrid AI”, Jean-Baptiste FANTUN (Nukkai)
- “Designing Rule-based Decision-Policy with Reinforcement Learning”, Nicolas MUSEUX (Thales Research & Technology – France)
- Q&A session and brainstorming to identify technical issues to address in trustworthy Hybrid AI.
- Classroom EC001 – “Data for AI” (chair: Angelique LOESCH): The performance of AI modules is strongly correlated with the data available to train them, evaluate them or guarantee their reliability. In the “Data for AI” workshop, different ways of exploiting data will be addressed:
- “Approaches to practical Zero-Shot Learning”, Hervé LE BORGNE (CEA)
- “Multi-modality for autonomous vehicle perception”, Sonia KHATCHADOURIAN (Valeo) and Florian CHABOT (CEA)
- “Data anonymization – issues, constraints and feedback”, Souhaiel KHALFAOUI (Valeo) and Marc NABHAN (Air Liquide)
- Classroom EC002 – “Synthetic Data” (chair: Guillaume OLLER): Access to quality data, in terms of variety, quantity and labelling, is crucial for building efficient and trusted AI. In this workshop we will present and discuss different use cases in which synthetic data can help address some of these challenges. Program:
- “Introduction on Synthetic Data for AI and AI for Synthetic Data”, Bertrand LEROY (Renault)
- “Validated and Spectrally Faithful Synthetic Data”, Pierre NOUBEL (OKTAL-SE)
- “Realistic Crowd and Traffic Simulation for Synthetic Data”, Stéphane DONIKIAN (GOLAEM)
- “Data augmentation with style transfer and object insertion”, Emmanuel BENAZERA (JoliBrain)
- “Synthetic-to-real style transfer”, Ricardo MENDOZA (CERVVAL)
- “Confiance.AI ongoing actions on Synthetic Data”, Amélie BOSCA (IRT SYSTEMX)
- Q&A session
- Classroom EA008 – “ODD, Safety, Quality assurance” (chair: Morayo ADEDJOUMA): The workshop will focus on methods and systematic approaches for the verification and validation of trustworthy and resilient autonomous and/or AI systems in their operational context. Beyond the results of the Confiance.ai program, this workshop will be an opportunity to share the perspective developed in the United Kingdom within the framework of the UKRI Trustworthy Autonomous Systems programme, thanks to our invited speaker, Prof. Radu CALINESCU. The keynote will be followed by a Q&A session. Program:
- Introduction
- Keynote, Prof Radu CALINESCU, Principal Investigator at UKRI Resilience Node (University of York)
- Q&A session
- Classroom EE005 – “Recent advances and challenges on explainability” (chair: Bertrand BRAUNSCHWEIG): The goal of the workshop is to highlight a few individual developments in XAI (explainable AI) and to generate discussion about what can be done in the remaining years of Confiance.ai. Four papers presenting various aspects of explainability will be followed by an invited talk by Gilles DOWEK (Inria) on fundamental issues regarding explanations and proofs. Program:
- “Design of explainable models”, Elodie GUASCH (Airbus Protect)
- “Diffusion models for Counterfactual Explanation”, Guillaume JEANNERET SAN MIGUEL (University of Caen GREYC)
- “Fairness and explainability”, Jean-Michel LOUBES (ANITI)
- “Explainability developments by the SINCLAIR Lab”, Christophe LABREUCHE (Thales)
- “Towards a definition of the notion of explanation”, Gilles DOWEK (Inria)
- Q&A session
8 p.m.: Networking dinner
Domaine de Quincampoix, route de Roussigny, 91470 LES MOLIÈRES (see the practical information).
October 6
9am – 9:30am: Welcome
9:30am – 10am: Introduction
- Bertrand BRAUNSCHWEIG, Confiance.ai scientific coordinator
- Julien CHIARONI, AI Grand Défi Director
10am – 11:30am: Presentation of projects/programs and scientific conferences
- “Confiance.ai program”, Juliette MATTIOLI (Thales) and Rodolphe GELIN (Renault)
- “Safe.trAIn”, Marc ZELLER (Siemens) and Thomas WASCHULZIK (Siemens)
- “Confiance.ia program”, Françoys LABONTÉ (CRIM, Montreal)
- “Trusted-AI: a DFKI perspective”, Philipp SLUSALLEK (DFKI)
11:30am – 12 p.m.: Break and visit of the villages
12 p.m. – 1 p.m.: Presentation of projects/programs and scientific conferences
- “Assessment of AI-based system Trustworthiness”, Agnès DELABORDE (LNE) and Henri SOHIER (IRT SystemX)
- “Tools and Methods for assessing the Trustworthiness of AI Models, Components and Systems”, Dr Daniel BECKER (Fraunhofer IAIS & ZERTIFIZIERTE KI)
- “How standardization brings AI ethics into practice”, Sebastian HALLENSLEBEN (VDE)
1 p.m. – 2 p.m.: Lunch and visit of the villages
2 p.m. – 3 p.m.: In parallel: visit of the villages, or scientific conferences from Quebec (by videoconference)
- “AI ethics”, Joé MARTINEAU, associate professor, HEC Montreal, Department of Management
- “Cybersecurity, safety and privacy”, Ulrich AIVODJYN, Professor, École de Technologie Supérieure du Québec, Département de Technologies Logicielles
- “Sustainable AI”, Soumaya CHERKAOUI, Full professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
3 p.m. – 4:45 p.m.: Standardization and labeling activities
- “European AI Standards”, Patrick BEZOMBES (CEN/CENELEC JTC21)
- “Current standardization activities within the project CERTIFIED AI”, Christine FUß (DIN)
- “EUROCAE AI standard”, Fateh KAAKAI (Thales)
- “How to assess AI-systems – a field report”, Bernd EISEMANN (CertAI and Munich Reinsurance Company)
- “AI Certification programme”, Agnès DELABORDE (LNE)
4:45 p.m. – 5:30 p.m.: Round table on international collaboration and the AI Act
- Philipp SLUSALLEK (DFKI)
- Françoys LABONTÉ (CRIM Montreal)
- Patrick BEZOMBES (CEN/CENELEC)
- Agnès DELABORDE (LNE)
5:30 p.m. – 6 p.m.: Conclusion
6 p.m. – 7 p.m.: After-party