“Confiance.ai” programme: First significant results and an increasingly rich ecosystem of partners
Technology Pillar of the Grand Défi[1] “Ensuring the security, reliability and certification of systems based on artificial intelligence”, the Confiance.ai programme, launched in January 2021 and led by the SystemX Institute of Technological Research (IRT), is showing sustained momentum and will unveil its first results at its first Annual Day, to be held in Toulouse on 6 October 2021. Driven by a group of 13 French companies and research organisations, this four-year, €45 million programme aims to meet the challenge of industrialising artificial intelligence (AI) in critical products and services.
“Trustworthy AI is one of the key current issues in the industrial world, as AI is spreading rapidly in all sectors, especially those that do not tolerate decision errors. We can only be extremely satisfied with the progress that has been made in France within the Confiance.ai programme. Fully operational since January, it has sparked the enthusiasm of many SMEs and academic players, who have come to join our Common House and contribute their expertise and technologies in collaboration with the 13 structuring partners who are the driving force behind the programme”, commented Julien Chiaroni, Director of the Trustworthy AI Grand Défi within the General Secretariat for Investment.
A first version of the trustworthy environment, incorporating more than 20 technological and methodological components, by the end of the year
To address these challenges, the programme partners focused on six initial use cases: camera-based scene understanding, welding vision inspection, liquid air demand prediction, aerial photo interpretation, visual industrial control, and airborne collision avoidance for unmanned aircraft systems.
These practical use cases have made it possible to assess the relevance of the first 20 technological components and methodological building blocks (e.g. evaluation of neural network robustness, quantification of prediction confidence intervals, generation of models robust to certain disturbances, and methods for constructing and characterising datasets), which have already been incorporated into the first version of the trustworthy environment and which will be made available to partners by the end of the year. This environment will eventually offer a sovereign, open, interoperable and sustainable software tool platform for the design, validation, qualification, deployment and maintenance of critical AI-based products and services.
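The building blocks above are only named, not described. Purely as an illustrative sketch (not taken from the Confiance.ai environment), split conformal prediction is one standard way to quantify prediction confidence intervals around a regression model; the model, data and parameters below are hypothetical.

```python
# Illustrative sketch only: split conformal prediction intervals, one generic
# way to "quantify prediction confidence intervals". Data and model choices
# are hypothetical and not taken from the Confiance.ai toolset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic regression data standing in for an industrial sensor signal.
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(2000)

# Split into a training set (fit the model) and a calibration set (size the interval).
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Absolute residuals on held-out data give a distribution-free quantile
# that widens the point prediction into an interval with ~90% coverage.
alpha = 0.1
residuals = np.abs(y_cal - model.predict(X_cal))
n = len(residuals)
q = np.quantile(residuals, np.ceil((1 - alpha) * (n + 1)) / n)

# Prediction interval for new inputs.
X_new = np.array([[0.5], [2.0]])
pred = model.predict(X_new)
lower, upper = pred - q, pred + q
print(list(zip(lower.round(2), upper.round(2))))
```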
An ecosystem of over 40 partners by the end of the year
Confiance.ai is part of an open and integrative approach. The programme is developing and federating the national ecosystem around trustworthy AI and has opened up to many associated partners.
In addition to the 13 core contributors to the programme, 11 laboratories have recently joined the network, selected through a call for expressions of interest focused on scientific challenges: Institut de mathématiques de Toulouse – ANITI, CRIL – Université d’Artois, CRISTAL – CNRS, LAMIH/UPHF / CRISTAL – Université de Lille, Inria KAIROS, IRIT – Université de Toulouse, LIP6 – Sorbonne Université, LITEM – Medial Lab – IMT, LITIS – Insa Rouen, ONERA and U2IS – ENSTA Paris. They will contribute to maturing the scientific work and to removing upstream scientific obstacles, most often through doctoral theses. To date, the programme already counts 9 theses and 4 post-docs.
Other partners such as Apsys, LNE, Numalis and ONERA have also joined the programme, bringing their expertise and technologies.
Another call for expressions of interest (AMI), aimed at deeptech start-ups and innovative SMEs and launched last July, should bring around a dozen promising French companies into the programme from the fourth quarter of 2021.
In total, a powerful network of more than 40 partners will collaborate at the Paris-Saclay and Toulouse sites, two major hubs of trustworthy AI, to develop this trustworthy environment, from which every partner will be able to derive value. Confiance.ai is fortunate to be able to rely on this ecosystem, and in particular on the Franco-Quebec DEEL programme, the 3IA ANITI and the DataIA initiative.
A programme that contributes to the European Commission’s “AI Act” regulation project
The European Commission has proposed a legal framework to promote trustworthy AI. Its operational implementation, however, requires technical solutions that enable companies to comply. Confiance.ai will thus enable the creation of a technical environment (covering certification, reliability, evaluation, transparency and auditability of algorithms and systems) that will, in time, guarantee a high level of trust in AI technologies.
[1] The Grands Défis are public investment programmes that aim to develop disruptive technologies and innovations with high social and economic impact.