Trustworthy environment

An environment to ensure industrial companies can trust their AI systems.

Through the development and release of its trustworthy environment, the programme addresses the issue of AI trustworthiness. The trustworthy environment aims to be the simple, valid and efficient solution that industrial companies need in order to adopt trustworthy AI. It will enable the progressive integration of trustworthy AI into existing industrial engineering processes by revisiting existing concepts and methods.

Initial deployments of the trustworthy environment have already taken place at the programme's industrial partners. Feedback from these partners and their engineering teams is at the heart of the programme: it ensures that the components delivered match their needs and constraints. Ongoing developments are continuously made available to the programme's partners, and at the end of each year a new version is delivered against the objectives defined twelve months earlier.

A modular and interoperable trustworthy environment

The ultimate goal of the programme, the trustworthy environment aims to meet the needs of industrial companies, from the specification of the problem through to maintaining the AI-based system in operational condition and securing it against cyber threats. This involves controlling the system's behaviour and, where necessary, updating its components. The trustworthy environment will be instantiated as a tool chain that is both modular and interoperable. It will be modular because the attributes of trust to be covered vary with each use case and cannot be addressed by a single fixed set of tools; this modularity allows progressive integration within existing workshops and design environments. It will be interoperable, on the one hand, through the compatibility of the tool chain with the design tools already used in those environments and, on the other hand, through the agnosticism of the underlying operating environment with respect to the infrastructures likely to host it. As such, the adopted process will seamlessly complement existing engineering workshops rather than transform them.

A set of design tools will support the trustworthy environment: these will be available to users, whatever their role, and are designed to accompany them throughout the creation of a trustworthy AI-based system.

9 functional sets grouped in a trustworthy environment

The results of the programme will be assembled into functional sets, each addressing one thematic area of trustworthy AI by offering methods to handle it and tools to establish it. The functional sets are as follows:

  • End-to-end engineering
  • Data lifecycle
  • Model component lifecycle
  • Embedded and non-embedded deployment
  • Operation and monitoring
  • Evaluation of AI components
  • Robustness of AI components
  • Uncertainty of AI components
  • Explainability of AI components

Latest News

Look back at Day 2024, the must-attend event for trustworthy AI

Day, the must-attend event on trustworthy AI, was held on March 7, 2024 at the Direction Générale des Entreprises (DGE). For the 360 participants in attendance, it was a unique opportunity to meet the AI engineering and scientific communities and discover the methods and tools deployed by the collective. Read more

Our contribution in support of future European AI regulation


… produces technologies and methodologies that can be used to help companies meet the requirements of future … Read more

The design of robust AI algorithms and applications for industrial use from


In our article "8 scientific challenges on the global approach for AI components with controlled trust", we listed a … Read more