Upcoming Events

Satellite event at JdS 2021 (Nice)

Opening event of frENBIS (the French local network of ENBIS), co-organized with the Reliability and Uncertainty group of the SFdS

Date: Friday, June 11, 2021 (2:00 – 5:30 p.m.)

Talks and discussions will be held in French

Scientific Topic:

Stochastic approaches for certification of machine learning algorithms.

Organizers: frENBIS Steering Committee (Yannig Goude, Bertrand Iooss, Jairo Cugliari, Anne Gégout-Petit, Jean-Michel Poggi) & SFdS “Reliability and Uncertainty” group (Chair: Mitra Fouladirad)

This half-day will be the opportunity to inaugurate the French network of ENBIS, which aims to promote statistics for business and industry, to foster cross-fertilization between industry and academia, and to maintain and develop links between the French statistical community and ENBIS.

The proposed topic is the certification of machine learning models for their integration into critical systems. Two presentations are planned, from industry (aeronautics, railways, or automotive) and from academia (with an example of mathematical formalization of the problem). The interpretability of machine learning, in connection with regulation and risk management issues in the energy sector, will also be addressed. Ample room will be left for discussion, in order to identify topics and problems suited to collaboration between academia and industry.

Provisional schedule:

For each speaker: 25 min of presentation + 15 min of questions and discussion

2:00 – Introduction SFdS / frENBIS

2:10 – Grégory Flandin (IRT Saint Exupéry / DEEL project): Machine Learning in Certified Systems

2:50 – Jayant Sen Gupta (Airbus AI Research): Of the necessity of building models that are robust to the training distribution

3:30 – Break

3:50 – Freddy Lecue (Chief AI Scientist @Thales Canada, Research Associate @INRIA France): On the role of domain knowledge in explainable machine learning

4:30 – Joseph Mikael (EDF R&D): Reinforcement Learning in Risk & Asset Management

5:10 – Open discussion

5:40 – End

Abstracts

  • Grégory Flandin

Machine Learning in Certified Systems - Machine Learning seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech. However, ML techniques introduce new potential risks. Therefore, they have so far only been applied in systems where their benefits are considered worth the increased risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices?

  • Jayant Sen Gupta

Of the necessity of building models that are robust to the training distribution - In this presentation, I will try to show why the certification of machine learning models for safety-critical applications requires robustness with respect to the training distribution. Indeed, as for standard systems, systems embedding data-based models should perform safely even in the worst-case scenario. Having a model whose error is uniform over the support of the input distribution would address this issue. How can we improve model training to get closer to this ideal? I will show what Airbus has started to work on within the DEEL project and now ANITI, hopefully motivating the French statistics community to work further in this direction.

  • Freddy Lecue

On the role of domain knowledge in explainable machine learning - Machine Learning (ML), as one of the key drivers of Artificial Intelligence, has demonstrated disruptive results in numerous industries. However, one of the most fundamental problems with applying ML, and particularly artificial neural network models, in critical systems is their inability to provide a rationale for their decisions. For instance, an ML system recognizes an object as a warfare mine by comparison with similar observations; no human-transposable rationale is given, mainly because common-sense knowledge and reasoning are out of scope for ML systems. We present how domain knowledge, captured in knowledge graphs, can be applied to expose more human-understandable machine learning decisions, and we present an asset combining ML and knowledge graphs to produce a human-like explanation when recognizing an object of any class in a knowledge graph of 4,233,000 resources.

  • Joseph Mikael

Reinforcement Learning in Risk & Asset Management - In recent years, reinforcement learning based methods have been proposed by practitioners and academics to enhance traditional risk and asset management decisions. These algorithms show very good performance. However, before using these methods in an operational process, many questions arise for both practitioners and regulators. We are stuck in a middle ground in which the methodology is ready but confidence remains to be built. These questions are shared with the automotive industry on autonomous vehicles: a framework certifying that we have given ourselves the means to build trust must be found. We propose to list some of these questions and some of the attempts proposed to answer them.

Registration Form