The Confiance.ai collective selects four winners following its call for expressions of interest (AMI) in the Humanities and Social Sciences (SHS)
In April 2022, the Confiance.ai program launched a call for expressions of interest aimed at the Humanities and Social Sciences research community, in particular researchers working on trustworthy artificial intelligence. Eleven proposals were examined, and four winners were ultimately selected.
The objective of this call for expressions of interest was to inform and supplement the technological developments of the Confiance.ai program with work, fully funded by the program, on the appropriation of trusted AI by those who will be its designers, users and customers. The call received eleven proposals, which were evaluated by a selection committee composed of program representatives and two external experts. The committee then held seven hearings before recommending four proposals to the Confiance.ai steering committee, which endorsed these recommendations. The winners will mainly work on use cases provided by the industrial partners of the Confiance.ai program. In most cases, work will begin at the start of the 2022 academic year, with recommendations and additional input delivered to the program in early 2023. Depending on the needs identified, some projects will then lead to further research in the form of doctoral theses or postdoctoral fellowships. The official launch event is expected to be held in September 2022. The four winning proposals are as follows.
Benoit Leblanc (ENSC) et coll., with ONERA and Institut Cognition: Experimenting with user trust in an AI system
The proposal explores how individuals react to AI-based systems, with trust as a cornerstone of these reactions. Studying these reactions yields scientific insights relevant both to the industrialization of anthropotechnical systems, such as transport devices, and to the deployment of these systems across application domains.
Enrico Panaï (EM Lyon) & Laurence Devillers (LISN-CNRS): Mapping the moral situation: use case analyses
To build trust, the authors propose delimiting the space of action and identifying its constituent elements. One of the most promising methods is to map the space in which the action takes place. This process positions moral situations at an appropriate level of granularity, making it possible to recognize sets of actions at the individual, social, organizational or human-machine interface levels.
Marion Ho-Dac (CDEP, Univ. Artois) et coll., with the AITI institute: Respect for European Union values by design in AI systems
The CDEP contributes expertise drawn specifically from the legal sciences, focusing on compliance with the EU legal framework broadly understood, including in particular the Union's values (within the meaning of Article 2 of the Treaty on European Union), the EU Charter of Fundamental Rights and the legal framework of the European area of justice.
Arnaud Latil et coll. (SCAI): Interfaces of algorithmic systems: what information should be communicated to build trust?
The authors propose to focus their analysis on the interfaces of algorithmic systems. The aim is to study how the legal notices communicated by producers of AI systems affect user trust.
Note that a proposal with strong humanities and social sciences content had already been selected last year by Confiance.ai as part of a general call for expressions of interest for academic laboratories. That team, made up of Christine Balagué (LITEM) and Dominique Cardon (Sciences Po), is working on post-mortem analyses of failed AI applications.