Upcoming events

Satellite event at JdS 2021 (Nice)

Launch of frENBIS (local branch of the ENBIS network), co-organized with the SFdS "Fiabilité et Incertitudes" (Reliability and Uncertainty) group

Date: Friday, 11 June 2021 (14:00–17:30)

Talks and discussions will be in French.

Scientific theme:

Stochastic approaches for the certification of machine learning algorithms

Organizers:

frENBIS steering committee (Yannig Goude, Bertrand Iooss, Jairo Cugliari, Anne Gégout-Petit, Jean-Michel Poggi) & the SFdS "Fiabilité et Incertitudes" group (Chair: Mitra Fouladirad)

This half-day session will mark the launch of the French network of ENBIS, whose purpose is to promote statistics for business and industry, to foster cross-fertilization between industry and academia, and to build and maintain ties between the French statistics community and ENBIS.

The proposed topic is the certification of machine learning models for their integration into critical systems. Two talks will be given, from industry (aeronautics, rail, or automotive) and from academia (with an example of the mathematical formalization of such a problem). The interpretability challenges of machine learning raised by regulation and risk management in the energy sector will also be addressed. Ample time will be left for discussion, so as to bring out topics and problems suited to collaboration between academia and industry.

Provisional schedule:

For each speaker: 25 minutes of presentation + 15 minutes of questions and discussion

14:00 – Introduction (SFdS / frENBIS)

14:10 – Grégory Flandin (IRT Saint Exupéry / DEEL project): Machine Learning in Certified Systems

14:50 – Jayant Sen Gupta (Airbus AI Research): On the necessity of building models that are robust to the training distribution

15:30 – Break

15:50 – Joseph Mikael (EDF R&D): Reinforcement Learning in Risk & Asset Management

16:30 – Freddy Lecue (Chief AI Scientist @Thales Canada, Research Associate @INRIA France): On the role of domain knowledge in explainable machine learning

17:10 – Open discussion

17:40 – End

Abstracts

  • Grégory Flandin

Machine Learning in Certified Systems - Machine learning seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech. However, ML techniques introduce new potential risks. As a result, they have so far been applied only in systems where their benefits are considered worth the increased risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices?

  • Jayant Sen Gupta

On the necessity of building models that are robust to the training distribution - In this presentation, I will try to show why the certification of machine learning models for safety-critical applications requires models that are robust to the training distribution. Indeed, as with conventional systems, systems embedding data-based models should perform safely even in the worst-case scenario. Making the model's error uniform over the support of the input distribution would solve this issue. How can we improve model training to get closer to this ideal? I will show what Airbus has started to work on within the DEEL project, and now ANITI, hopefully motivating the French statistics community to work further in this direction.
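As a minimal illustration of the "uniform error" idea (our sketch, not material from the talk; the data and model are invented), the snippet below contrasts a model's average error with its worst error over regions of the input support. For a model whose error is uniform over the support, the two numbers coincide:

    # Illustrative sketch: average vs. worst-region error of a model.
    # A model whose error is uniform over the support of the input
    # distribution would have these two numbers coincide.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Toy data: a mildly nonlinear target, so a linear model fits well
    # on average but degrades near the edges of the support.
    X = rng.uniform(-1.0, 1.0, size=(2000, 1))
    y = np.sin(2.5 * X[:, 0]) + 0.1 * rng.standard_normal(2000)

    model = LinearRegression().fit(X, y)
    sq_err = (model.predict(X) - y) ** 2

    # Partition the support into bins; compare mean vs. worst-bin error.
    bins = np.digitize(X[:, 0], np.linspace(-1.0, 1.0, 11))
    bin_mse = [sq_err[bins == b].mean() for b in np.unique(bins)]

    print(f"average MSE:   {sq_err.mean():.4f}")
    print(f"worst-bin MSE: {max(bin_mse):.4f}")  # the gap measures non-uniformity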

  • Joseph Mikael

Reinforcement Learning in Risk & Asset Management - In recent years, reinforcement-learning-based methods have been proposed by practitioners and academics to enhance traditional risk and asset management decisions. These algorithms show remarkably good performance. However, before these methods can be used in an operational process, many questions arise for both practitioners and regulators. We are stuck in a middle ground in which the methodology is ready but confidence remains to be built. These questions are shared with the automotive industry's work on autonomous vehicles: a framework certifying that we have given ourselves the means to build trust must be found. We will list some of these questions and some of the attempts proposed to answer them.

  • Freddy Lecue

On the role of domain knowledge in explainable machine learning - Machine learning (ML), as one of the key drivers of artificial intelligence, has demonstrated disruptive results in numerous industries. However, one of the most fundamental problems in applying ML, and particularly artificial neural network models, in critical systems is their inability to provide a rationale for their decisions. For instance, an ML system may recognize an object as a warfare mine through comparison with similar observations, but no human-transposable rationale is given, mainly because common-sense knowledge and reasoning are out of scope for ML systems. We present how domain knowledge, in the form of knowledge graphs, can be applied to expose more human-understandable machine learning decisions, and we present an asset combining ML and knowledge graphs that exposes a human-like explanation when recognizing an object of any class in a knowledge graph of 4,233,000 resources.
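As a toy illustration of this idea (ours, not the speaker's asset; the graph, relations, and labels below are invented), the sketch pairs a classifier's prediction with facts from a small knowledge graph to produce a human-readable rationale:

    # Illustrative sketch: grounding a bare class prediction in
    # knowledge-graph edges to produce a human-readable explanation.
    from typing import Dict, List, Tuple

    # Tiny hand-built knowledge graph: class label -> (relation, concept).
    KNOWLEDGE_GRAPH: Dict[str, List[Tuple[str, str]]] = {
        "mine": [("isA", "explosive device"), ("foundIn", "seabed"),
                 ("hasShape", "sphere with protruding horns")],
        "buoy": [("isA", "floating marker"), ("foundIn", "sea surface")],
    }

    def explain(predicted_class: str, confidence: float) -> str:
        """Attach graph facts to a prediction to justify it in words."""
        edges = KNOWLEDGE_GRAPH.get(predicted_class, [])
        facts = "; ".join(f"{rel} -> {concept}" for rel, concept in edges)
        return (f"Predicted '{predicted_class}' ({confidence:.0%}); "
                f"domain knowledge consistent with this class: {facts}")

    # A real system would take the class from an image classifier;
    # here it is hard-coded to show the shape of the explanation.
    print(explain("mine", 0.91))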

Registration form