Overview of international initiatives for trustworthy AI

24 Jan 2024

If there is one point of consensus around artificial intelligence (AI) today, it is the way it has revolutionized how we live, work and think. However, this rapid transformation comes with its share of challenges, particularly when it comes to trust. In a world where crucial decisions are increasingly made by algorithms, it is imperative to ensure that AI is trustworthy, ethical and fair. This is why many initiatives, in France and around the world, are emerging to design trustworthy AI.

In this article, we present the international initiatives with which Confiance.ai collaborates to forge a more trustworthy future for AI.

How and why does Confiance.ai interact with these initiatives? What are the program's role and positioning in this ecosystem?

From the launch of the program, it was clear that trustworthy artificial intelligence extended far beyond the national framework within which Confiance.ai was born, for several reasons:

  • The main industrial players in the program have an international presence, conduct research and development in many countries in Europe and beyond, and serve global markets;
  • Although Saclay and Toulouse are among the leading hubs of trusted AI in France and more widely, there are various initiatives around the world working on similar themes with which we have an interest in collaborating, and the reverse is equally true;
  • Privileged relationships have been built with our Canadian (Quebec/Montreal) and German (at various sites) colleagues, and these should be continued and strengthened;
  • The labeling and standardization activities, which appear in the roadmap of the Grand National Challenge on verifiable and certifiable AI, take on their full meaning in the European context, which itself relies on the Franco-German axis.

Confiance.ai thus sits at the center of various European and international initiatives on trusted AI. By regularly inviting these international partners to our annual event, Confiance.ai Day, and being invited to their main events, by involving them in our initiatives such as the AITA 2023 symposium (AI Trustworthiness Assessment) and its follow-ups, by establishing a partnership agreement with major international players in AI labeling, and by maintaining regular contact with these partners, we are gradually building an international community of researchers and developers on the subject from which everyone can benefit in return.

The rest of this article presents the main initiatives with which we maintain regular relationships.

International players in trusted AI

Confiance.ia in Canada (Quebec)

Supported by CRIM, Confiance.ia is a Quebec consortium that brings together private and public actors to co-develop methods and tools that yield concrete solutions to the challenges of industrializing sustainable, ethical, safe and responsible AI. CRIM will develop, coordinate and lead projects in collaboration with the similar French program, Confiance.ai.

The Confiance.ia program targets, among others, regulated sectors such as aeronautics, transport and finance, for which it is not yet possible to demonstrate the conformity of AI-based solutions.

The Confiance.ai program has a privileged relationship with Confiance.ia, with which it has collaborated since the creation of its national proposal. This collaboration should expand in the coming months.

ZERTIFIZIERTE KI in Germany

The ZERTIFIZIERTE KI project, jointly led by Fraunhofer IAIS, the German Institute for Standardization (DIN) and the German Federal Office for Information Security (BSI), together with other research partners, is developing test procedures for the evaluation of artificial intelligence (AI) systems. The aim is to ensure technical reliability and responsible use of the technology. Industrial requirements are taken into account through the involvement of numerous companies and associated organizations representing sectors such as telecommunications, banking, insurance, chemicals and retail.

The objectives of ZERTIFIZIERTE KI are:

  • Developing AI assessment criteria and assessment tools;
  • Developing safeguards to mitigate AI-related risks;
  • Developing AI standards;
  • Investigating new business models and markets for AI assessments and certifications;
  • Taking a holistic view that includes legal and ethical issues.

One notable output is the project's AI assessment catalog, which presents a risk-based approach to evaluating AI systems.

Frequent exchanges have taken place between Confiance.ai and ZERTIFIZIERTE KI, which has participated in every edition of the Confiance.ai Day event and with which we launched the AI Trustworthiness Assessment (AITA) initiative, notably during the March 2023 symposium in San Francisco.

VDE in Germany

VDE, one of Europe’s leading technology organizations, has stood for innovation and technological progress for over 125 years. VDE is the only organization in the world to bring together science, standardization, testing, certification and application consulting under one roof. For more than 100 years, the VDE brand has been associated with the highest safety standards and consumer protection.

Its passion lies in the advancement of technology, the next generation of engineers and technologists, lifelong learning and on-the-job career development. Within the VDE network, more than 2,000 employees at more than 60 locations worldwide, more than 100,000 honorary experts and around 1,500 companies are dedicated to ensuring a future worth living: networked, digital, electric.

VDE is one of the partners in the labeling agreement that we signed with Positive.ai and IEEE. We regularly collaborate with VDE to define a common label.

CERTAIN (DFKI) in Germany

CERTAIN is a collaborative initiative of DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz, the German Research Center for Artificial Intelligence) involving various partners, focused on the research, development, standardization and promotion of trusted AI techniques, with the aim of providing guarantees and certification for AI systems.

The aim of the CERTAIN consortium is to work across the value chain, from fundamental research to society, focusing on the development, optimization and implementation of trusted AI techniques in order to provide guarantees and certifications for AI systems in specific use cases. CERTAIN initiates, coordinates and promotes research projects addressing key aspects of reliable AI systems and of the methods to create and verify them: models and explanations, causality and grounding, modularity and compositionality, human factors and monitoring. The consortium enables collaboration with internal and external partners. Beyond research, it works with industry, standards bodies, and policy and societal stakeholders to define certification requirements, establish AI trustmarks and foster AI literacy.

CERTAIN launched in September 2023, making it a more recent initiative. We participated in its launch (three representatives of Confiance.ai were present in Saarbrücken), and we communicate frequently with DFKI, which also has a strong collaboration with Inria, established several years ago.

TAILOR in Europe

The aim of the European TAILOR project is to strengthen Europe's capacity to provide the scientific basis for trustworthy AI by developing a network of research excellence centers exploiting and combining learning, optimization and reasoning. The goal is descriptive, predictive and prescriptive systems that integrate data-driven and knowledge-based approaches.

Artificial intelligence (AI) has developed at an unprecedented rate over the past decade. It has been applied to many industrial and service sectors, becoming omnipresent in our daily lives. Increasingly, AI systems are used to suggest decisions to human experts, propose actions and provide predictions. Because these systems can influence our lives and have a significant impact on how we decide, they must be trustworthy. How can a radiologist trust an AI system analyzing medical images? How can a financial investment broker trust an AI system providing stock price predictions? How can a passenger trust an autonomous car? These are fundamental questions that require in-depth analysis and fundamental research, as well as a new generation of AI talent who master the scientific foundations of trustworthy AI and know how to evaluate and design trustworthy AI systems. Some of the current lack of trust in AI systems is a direct consequence of the widespread use of data-only black-box methods. We need to lay the foundations for a new generation of AI systems that draw not only on data-driven approaches but on the full range of AI techniques, including symbolic AI methods, optimization, reasoning and planning.

We collaborated with TAILOR to organize a workshop during the ECML/PKDD conference in Grenoble at the end of 2022, and TAILOR co-sponsored the AITA event mentioned above in connection with ZERTIFIZIERTE KI.

RAI UK in the United Kingdom

RAI UK brings together researchers from across the four nations of the UK to understand how we should shape the development of AI for the benefit of people, communities and society and, recognizing the global nature of the challenge, to make connections with international partners pursuing similar research. It is an open and multidisciplinary network, drawing on a wide range of academic disciplines. This stems from RAI UK’s belief that developing responsible AI will require as much focus on humans and human societies as it does on AI. RAI UK will invest in the following four priority areas:

  • Ecosystem creation: the consortium will define a portfolio of thematic areas, translational activities and strategic partnerships with academia, business and government, as well as associated impact measures. This will expand and consolidate the network both nationally and internationally and also help identify course adjustments to national policy (such as the Industrial Strategy and the UK AI Strategy);
  • Research and innovation programs: RAI UK will carry out research into socio-technical and creative practices, complementing work led by other consortia, to address fundamental challenges from multidisciplinary and industrial perspectives. These integrative research projects will connect established research teams across the community with early-stage, industry-led research and innovation projects. In doing so, RAI UK will help expand the UK ecosystem and develop the next generation of research leaders;
  • A skills programme: RAI UK will translate its research into skills frameworks and training for AI users, customers and developers. The consortium will also contribute to the call for the UK AI Strategy’s Online Academy;
  • Public and policy engagement: RAI UK will work with the network of policy makers, regulators and key stakeholders to address emerging concerns and the need for new standards. The consortium will also build capacity on public accountability and provide evidence-based advice to the public and policy makers.

The programme launched in May 2023, and the first calls for impact acceleration and international partnership links were recently announced.

We do not yet have an established collaboration with RAI UK, which only started in mid-2023, but Thales is a member of both programs, and a representative from Confiance.ai was present at the program launch.

CSIRO’s Data61 in Australia

To address the challenge of responsible/trustworthy AI, a number of high-level principles (e.g. the Australian AI Ethics Principles, co-developed by Data61) and risk frameworks have been developed. A principles-based approach allows for technology-neutral, future-proof and context-specific interpretation and operationalization. However, without concrete technologies and tools for development and evaluation, practitioners are left with little more than truisms. Moreover, efforts have largely been devoted to model- and algorithm-level solutions that focus primarily on a subset of mathematical principles (such as privacy and fairness) and on correctness. Yet responsible AI issues can arise at any stage of the development lifecycle and can involve many AI components, non-AI components and data elements of a system, beyond the AI algorithms and models themselves. The goal of Data61's "Operationalizing Responsible AI" initiative is to develop innovative software and systems engineering technologies and tools that AI experts, system developers and other stakeholders can use to make AI systems and their development processes trustworthy. The initiative takes a risk-based approach spanning three main areas of research:

  • Human-centered AI: sector- or concern-specific guidelines and system-level risk assessment methods, complemented by metrics and measurement approaches;
  • Responsible AI engineering: tools and methods for responsible AI at the system level, covering complete lifecycles. Examples include the Responsible AI Model Catalog;
  • Knowledge engineering for responsible AI: a high-quality responsible AI knowledge base including knowledge graphs, incident databases and question banks. This knowledge is integrated into accessible methods and tools (e.g. chatbots) for a diverse set of stakeholders.

CSIRO/Data61 works closely with Australia’s National AI Centre (NAIC), where AI governance and responsible AI implementation practices are disseminated through Australia’s Responsible AI Network (RAIN).

Two CSIRO representatives were present at the AITA event, and relations have developed since then, with regular exchanges, a visit to Saclay, and participation in the program committee for the next Confiance.ai Day.

Initiatives gearing up for the AI Act

Europe has placed trust at the heart of its artificial intelligence regulatory policy. The proposed regulation (AI Act) and directive on AI liability (AI Liability Directive) are directly inspired by the work of the HLEG, the High-Level Expert Group on AI, which produced principles and recommendations on the subject. The requirements of the future regulations mainly concern high-risk and systemic-risk systems, and relate to various aspects of trust: robustness, explainability, human oversight, transparency, absence of bias, and so on, all in the service of the broad principles that govern us: security, health and respect for people's rights; environmental protection; democracy and respect for European law.

The work of Confiance.ai and of the projects with which we have established links provides concrete elements in support of these objectives: taxonomies, methodologies, technologies and tools, all of which benefit from the collaborations with the various initiatives mentioned in this article.

This presentation of the different international players in trusted AI shows the wealth of initiatives, expertise and ideas gathered around trustworthy artificial intelligence.

It is precisely because we believe in this collaboration and want to cultivate it that we have asked these international actors to contribute to our annual event, Confiance.ai Day. Join us on March 7, 2024 in Paris to exchange with this entire trusted AI ecosystem.