A health pass will be required to enter the venue, and barrier gestures (COVID-19 preventive measures) must be observed.

Follow the event live on YouTube.

Event program

9:00 – 9:30 – Welcome

9:30 – 10:30 – Keynote: Assessing the trustworthiness of AI systems | Live streaming

Maximilian Poretschkin, Director of the AI Certification Project, Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS
Artificial intelligence (AI) is penetrating more and more areas of business and society and taking on increasingly responsible tasks. It is clear that the potential of AI can only be fully exploited if its use is technically reliable and there is sufficient trust in the technologies concerned. In general, it can be expected that the requirements for trustworthiness of AI systems will be shaped by both legal regulations (high-risk areas) and market requirements.
This talk will first give an overview of the requirements for the trustworthiness of AI systems, addressing the recently published results of the German standardization roadmap on AI and the planned EU regulation. It will then discuss how trustworthiness can be systematically implemented and evaluated. For this purpose, we present an assessment catalog, which comprises a structured assessment of AI risks and an evaluation of their mitigation measures. Furthermore, we discuss examples of new testing tools that are needed to assess the technical quality of AI systems, and report first assessment experiences from practice. Finally, German lighthouse projects are presented that further advance the operationalization of trustworthiness.
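To make the idea of a structured assessment concrete, here is a minimal sketch of how such a catalog could be encoded: each trustworthiness dimension collects identified risks together with their mitigation measures and a residual-risk rating, and a dimension passes when no high residual risk remains. The dimension names loosely echo Fraunhofer IAIS's published catalog, but the classes, fields and pass criterion below are hypothetical illustrations, not the actual assessment scheme.

from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    mitigation: str
    residual_risk: str          # rating after mitigation: "low", "medium" or "high"

@dataclass
class Dimension:
    name: str
    risks: list[Risk] = field(default_factory=list)

    def acceptable(self) -> bool:
        # A dimension passes if no risk remains "high" after mitigation.
        return all(r.residual_risk != "high" for r in self.risks)

# Hypothetical catalog entries; dimension names echo the IAIS catalog.
catalog = [
    Dimension("Fairness", [
        Risk("Bias against subgroups", "Balanced training data, bias metrics", "low"),
    ]),
    Dimension("Reliability", [
        Risk("Failures under distribution shift", "Robustness tests, runtime monitoring", "medium"),
    ]),
]
print({d.name: d.acceptable() for d in catalog})  # {'Fairness': True, 'Reliability': True}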

Chair: Bertrand Braunschweig, IRT SystemX, scientific coordinator of Confiance.ai

10:30 – 11:10 – Scientific challenges of the program | Live streaming
Patrice Aknin, Scientific Director, IRT SystemX
Frédéric Jurie, AI expert, Safran
Georges Hébrail, Head of the "Data Science and Interaction" science team, IRT SystemX

11:10 – 11:50 – Formal Methods for AI | Live streaming
Zakaria Chihani, Researcher, CEA List

11:50 – 12:30 – Poster session | Live streaming
Each poster will be introduced in a 2-minute presentation.

12:30 – 14:00 – Lunch break and poster exhibition

14:00 – 15:00 – Keynote: GFlowNets for Generative Active Learning | Live streaming
Yoshua Bengio, Scientific Director, Mila – Quebec Artificial Intelligence Institute
We consider the following setup: an ML system can interact with an expensive oracle (the “real world”) by iteratively proposing batches of candidate experiments and then obtaining a score for each experiment (“how well did it work?”). The data from all the rounds of queries and results can be used to train a proxy for the oracle, a form of world model. The world model can then be queried (much more cheaply than the oracle) in order to train (in silico) a generative model which proposes experiments, to form the next round of queries. Systems which can do this well can be applied to interactive recommendation, crowdsourcing, the discovery of new drugs and new materials, the control of plants, or learning how to reason and build a causal model. They involve many interesting ML research threads, including active learning, reinforcement learning, representation learning, exploration, meta-learning, Bayesian optimization and black-box optimization. What should be the training criterion for this generative model? Why not simply use Markov chain Monte Carlo (MCMC) methods to generate these samples? Is it possible to bypass the mode-mixing limitation of MCMC? How can the generative model guess where good experiments might be before having tried them? How should the world model construct a representation of its epistemic uncertainty, i.e., where it expects to predict well or not? On the path to answering these questions, we will introduce a new and exciting deep learning framework called GFlowNets, which can amortize the very expensive work normally done by MCMC to convert an energy function into samples, and which opens the door to fascinating possibilities for probabilistic modeling, including the ability to quickly estimate marginalized probabilities and to efficiently represent distributions over sets and graphs.
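The propose-measure-retrain loop described above can be sketched compactly. The following is an illustrative Python sketch, not material from the talk: the toy oracle, the polynomial proxy and the softmax-weighted sampler are all hypothetical stand-ins, and the sampler only mimics the sample-proportionally-to-reward behaviour that a trained GFlowNet would amortize.

import numpy as np

rng = np.random.default_rng(0)

def oracle(x):
    # Stand-in for the expensive "real world" experiment (hypothetical toy reward).
    return np.exp(-((x - 0.7) ** 2) / 0.02) + 0.5 * np.exp(-((x - 0.2) ** 2) / 0.01)

def fit_proxy(xs, ys):
    # Train a cheap world model on all (experiment, score) pairs seen so far.
    # Here a simple polynomial fit; in practice this would be a neural network.
    coeffs = np.polyfit(xs, ys, deg=min(5, len(xs) - 1))
    return lambda x: np.polyval(coeffs, x)

def propose_batch(proxy, n_candidates=1000, batch_size=8):
    # Generative proposal step: sample candidates with probability roughly
    # proportional to the proxy's predicted reward. A trained GFlowNet would
    # amortize exactly this behaviour; the softmax over a random candidate
    # pool is only a cheap stand-in.
    cands = rng.uniform(0.0, 1.0, size=n_candidates)
    scores = np.maximum(proxy(cands), 1e-9)
    idx = rng.choice(n_candidates, size=batch_size, replace=False, p=scores / scores.sum())
    return cands[idx]

# Outer active-learning loop: propose a batch, pay the oracle, grow the dataset.
xs = rng.uniform(0.0, 1.0, size=8)   # initial random experiments
ys = oracle(xs)                      # initial (expensive) oracle calls
for round_ in range(5):
    proxy = fit_proxy(xs, ys)
    batch = propose_batch(proxy)
    xs = np.concatenate([xs, batch])
    ys = np.concatenate([ys, oracle(batch)])
    print(f"round {round_}: best score so far = {ys.max():.3f}")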

Chair: Juliette Mattioli, Thales, senior AI expert

15:00 – 17:00 – Scientific workshops in parallel
Each workshop will present the state of the art on the subject, show original work by the speakers and lead to an in-depth discussion with the participants on what can be done within the framework of Confiance.ai.

Workshop 1: Can artificial intelligence systems be made more robust?
Chair: Frédéric Jurie, AI expert, Safran
Stéphane Canu, INSA ROUEN
Elvis Dohmatob, FAIR
Jalal Fadili, ENSICAEN
Teddy Furon, Inria
Cédric Gouy-Pailler, CEA
Hatem Hajri, IRT SystemX

Workshop 2: Data for Machine Learning: how can we ensure good predictions by acting on training data?
Chair: Georges Hébrail, Head of the "Data Science and Interaction" science team, IRT SystemX
Johanna Baro, IRT SystemX
Laure Berti-Equille, IRD Espace-DEV Montpellier
Raphaël Braud, IRT SystemX
Flora Dellinger, Valeo
Camille Dupont, CEA
Stéphane Herbin, ONERA 

Workshop 3: Explainability of AI: recent research, technological achievements and what can be done within the Confiance.ai programme?
Chair: Bertrand Braunschweig, Scientific coordinator, Confiance.ai
Nicholas Asher, ANITI – CNRS
Thibaut Boissin, IRT Saint Exupéry
Fosca Giannotti, Information Science and Technology Institute
François-Marie Lesaffre, Sopra Steria / Aerospace Valley
Pierre Marquis, Université d’Artois / Institut Universitaire de France / CRIL
Michèle Sebag, CNRS / Académie des Technologies

17:00 – 18:00 – Closing session
With the participation of Prefect Renaud Vedel, coordinator of the French national AI strategy
Yannick Bonhomme (IRT SystemX), Program Director, Confiance.ai
Julien Chiaroni, Director of the Grand Challenge on “Trustworthy AI for Industry”, SGPI
Emmanuelle Escorihuela (Airbus), Chair of Steering committee, Confiance.ai
David Sadek (Thales), Chair of Executive committee, Confiance.ai

Registration 

To register for the online event, click here.

Practical information

Call for posters

The Confiance.ai days poster session presents an opportunity to publish late-breaking results, technical descriptions, smaller research contributions, works-in-progress and student projects in a concise and visible format. Accepted posters will be displayed in the forum venue, including a designated poster / demo session, providing presenters with an opportunity to engage in discussion with an expert audience of scientists and engineers from programme partners and other interested parties.
Posters can cover preliminary or exploratory work within the scope of the Confiance.ai programme, or present any other research that would generate discussion and benefit from this forum. Posters are encouraged on themes relevant to Confiance.ai, including, but not limited to, the programme's technical challenges.

Important dates

  • Poster Submission Deadline: September 17
  • Notification of acceptance: September 24
  • Poster session: October 6

All submission instructions and information for authors are available here.


An event organized with the support of:
