
ForMaL

DigiCosme Spring School on
Formal Methods and Machine Learning

4th-7th June 2019, ENS Paris-Saclay, Cachan, France

ForMaL is a Spring School under the auspices of Labex DigiCosme.

Nowadays, there is an increasing need to provide formal guarantees for machine-learning algorithms. Conversely, machine-learning techniques have been successfully applied in the realm of formal methods and, in particular, verification.

This school will bring together students and senior researchers from, and at the interface of, formal methods and machine learning, with the aim of creating synergies between them. It will feature invited lectures covering fundamental concepts in machine learning and formal methods, as well as more advanced topics highlighting current research challenges.

In addition to the invited talks, we will provide a platform for all participants to present their (published or ongoing) work. Submission instructions can be found below.

Registration is now closed.

Invited Speakers

Giovanni Cherubin
EPFL, Lausanne, Switzerland

Measuring the security of black-box systems via Machine Learning

We consider the problem of measuring the information leakage of a generic system, seen as a black-box. The black-box takes secret inputs and returns outputs according to unknown underlying distributions; we are interested in determining how well an adversary can predict the secret input given the corresponding output. This formulation captures a wide class of attacks, ranging from side channels and traffic analysis to membership inference.

The problem of measuring a black-box's leakage has historically been approached with the principles of classical statistics; unfortunately, the methods derived from these principles do not scale to most real-world systems (e.g., black-boxes whose output space is very large or even continuous), and have thus had limited applicability.

In this talk, we re-frame this problem within a Machine Learning framework. This: i) gives us access to a large set of practical tools for measuring a black-box's leakage, which scale to very large systems, and ii) allows us to better scope what can be done in the future (e.g., establishing impossibility results). The tools presented here were introduced in this context last year by the author, and they were used for deriving the first generic security bounds for a major class of traffic analysis attacks (Website Fingerprinting). We will give a formal introduction to these tools, discuss their applications and theoretical limitations, and highlight open questions.
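To make the machine-learning view of leakage estimation concrete, here is a minimal sketch (not the speaker's exact method): a classifier is trained to predict the secret from the observed output, and its test accuracy, compared with the prior guessing probability, gives an empirical estimate of how much the black-box leaks. The toy black-box sample() and all parameters below are invented for illustration.

    # Hypothetical sketch: estimating a black-box's leakage with a classifier.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    def sample(n, n_secrets=4, noise=0.5, rng=np.random.default_rng(0)):
        """Toy black-box: the output is the secret plus Gaussian noise."""
        secrets = rng.integers(0, n_secrets, size=n)
        outputs = secrets[:, None] + noise * rng.normal(size=(n, 1))
        return outputs, secrets

    X, y = sample(5000)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)   # adversary's success rate (prior guessing = 0.25 here)
    print("estimated attack accuracy:", acc)
    # The gap between acc and the prior guessing probability is an empirical
    # measure of how much the outputs leak about the secret input.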

Nathanaël Fijalkow
CNRS, LaBRI, Bordeaux, France
The Alan Turing Institute, London, UK

Program Synthesis

In this talk, I will survey recent approaches for tackling the notoriously difficult question of automatically generating programs from a specification. I will focus on one setting called Programming by Example, where the specification is given by a few inputs and outputs. More specifically, the focus will be on the use of machine learning models for improving program synthesis. The goal is to convince the audience that this is a fertile ground for research with a great need for formal methods tools.
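To illustrate the Programming by Example setting, here is a minimal sketch of an enumerative synthesizer over a toy string-transformation DSL; the DSL, the primitives, and the examples are invented for illustration and have nothing to do with any specific synthesis tool.

    # Minimal Programming-by-Example sketch over a toy string DSL (illustrative only).
    from itertools import product

    # The DSL: a program is a short sequence of primitive string transformations.
    PRIMITIVES = {
        "lower":  str.lower,
        "upper":  str.upper,
        "strip":  str.strip,
        "first3": lambda s: s[:3],
    }

    def run(program, x):
        for name in program:
            x = PRIMITIVES[name](x)
        return x

    def synthesize(examples, max_len=3):
        """Enumerate programs by increasing length; return the first one
        consistent with all input-output examples."""
        for length in range(1, max_len + 1):
            for program in product(PRIMITIVES, repeat=length):
                if all(run(program, i) == o for i, o in examples):
                    return program
        return None

    # The specification is given only by input-output examples.
    examples = [("  Hello", "HEL"), ("world  ", "WOR")]
    print(synthesize(examples))   # e.g. ('strip', 'first3', 'upper')

Machine-learning models enter the picture by guiding this enumeration, i.e., predicting which primitives are likely given the examples, instead of searching blindly.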

Guy Katz
The Hebrew University of Jerusalem, Israel

Verification of Deep Neural Networks

Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle to incorporating them as controllers in safety-critical systems is the great difficulty in providing formal guarantees about their behavior. In recent years, attempts have been made to address this obstacle by formally verifying neural networks. However, neural network verification is a computationally difficult task, and traditional verification techniques have often had only limited success, leading to the development of novel techniques. In this talk, we will survey the state of the art in neural network verification, focusing on Satisfiability Modulo Theories (SMT) based approaches, and discuss some of the open challenges in the field.
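As a rough illustration of how SMT-based verification queries look (this is not Reluplex or Marabou, just a toy encoding), the following sketch models a one-neuron ReLU network with fixed weights in z3 and asks whether a safety property can be violated on a bounded input range; all weights and bounds are made up for the example.

    # Illustrative sketch: encoding a tiny ReLU network and a safety property
    # as an SMT query with the z3 Python bindings.
    from z3 import Real, If, Solver, And, Not, sat

    x = Real("x")                          # network input
    h = If(2 * x + 1 > 0, 2 * x + 1, 0)    # hidden neuron: ReLU(2x + 1)
    y = 3 * h - 2                          # network output (fixed toy weights)

    # Property: for all inputs in [-1, 1], the output stays below 10.
    s = Solver()
    s.add(And(x >= -1, x <= 1))   # input constraint
    s.add(Not(y < 10))            # search for a counterexample to the property

    if s.check() == sat:
        print("property violated, counterexample:", s.model())
    else:
        print("property holds on [-1, 1]")

The piecewise-linear ReLU is what makes such queries hard: each neuron introduces a case split, and scaling to realistic networks is the core challenge the talk addresses.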

Daniel Neider
Max Planck Institute for Software Systems, Kaiserslautern, Germany

Deductive Verification, the Inductive Way

Deductive verification is one of the most well-established and effective methods to formally prove the correctness of software and hardware. It can be seen as consisting of two parts: 1) writing annotations in terms of inductive loop invariants, class invariants, and method contracts, and 2) establishing that these annotations indeed prove the program correct using theorem proving. Although automation of the latter has seen tremendous progress in the last two decades (mainly due to advances in the development of effective SMT solvers), the former remains a challenging, often manual task, which impedes the widespread adoption of automated verification techniques in practice.

In this tutorial, we will explore ICE learning, a framework for deductive verification that leverages inductive methods from machine learning to automate the verification process. We will give an overview of deductive verification, introduce ICE learning, and investigate various learning algorithms for this setting. Moreover, we will discuss applications of ICE learning beyond verification, such as the synthesis of reactive controllers.
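A rough sketch of the ICE data model may help fix ideas (a toy illustration, not the tutorial's actual algorithms): a candidate invariant must hold on positive (reachable) states, fail on negative (bad) states, and be closed under the implication pairs returned by the verifier. The program, hypothesis space, and data below are invented for the example.

    # Toy illustration of the ICE data model (not a full ICE learner).
    def consistent(inv, positives, negatives, implications):
        return (all(inv(s) for s in positives)
                and not any(inv(s) for s in negatives)
                and all(inv(t) for (s, t) in implications if inv(s)))

    # Program: x = 0; while x < 10: x += 1   -- prove that x <= 10 at exit.
    positives    = [0]                               # reachable states (values of x)
    negatives    = [11]                              # states violating the postcondition
    implications = [(i, i + 1) for i in range(10)]   # one loop iteration: x -> x + 1

    # Hypothesis space: candidate invariants of the form "x <= c".
    for c in range(0, 12):
        inv = lambda x, c=c: x <= c
        if consistent(inv, positives, negatives, implications):
            print("learned invariant: x <=", c)      # prints: x <= 10
            break

In full ICE learning, the verifier acts as a teacher: whenever the learned formula is not yet inductive, it returns a new positive, negative, or implication counterexample, and the learner revises its candidate.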

Reza Shokri
National University of Singapore

Trusting Machine Learning: Privacy, Robustness, and Interpretability Challenges

Machine learning algorithms have shown an unprecedented predictive power for many complex learning tasks. As they are increasingly being deployed in large-scale critical applications for processing various types of data, new questions related to their trustworthiness arise. Can machine learning algorithms be trusted to have access to individuals' sensitive data? Can they be robust against noisy or adversarially perturbed data? Can we reliably interpret their learning process and explain their predictions? In this talk, I will go over the challenges of building trustworthy machine learning algorithms in centralized and distributed (federated) settings, and will discuss the interrelation between privacy, robustness, and interpretability.

Reza Shokri is an Assistant Professor of Computer Science at the National University of Singapore (NUS), where he holds the NUS Presidential Young Professorship. His research is on adversarial and privacy-preserving computation, notably for machine learning algorithms. He is an active member of the security and privacy community, and has served as a PC member of IEEE S&P, ACM CCS, Usenix Security, NDSS, and PETS. He received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2018, for his work on analyzing the privacy risks of machine learning models, and was a runner-up in 2012, for his work on quantifying location privacy. He obtained his PhD from EPFL. More information: https://www.comp.nus.edu.sg/~reza/

Josef Urban
Czech Technical University in Prague, Czech Republic

TBA

TBA

Martin Vechev
ETH Zürich, Switzerland

Safe and Robust Deep Learning

This tutorial will introduce students to the latest automated reasoning methods for proving that deep neural networks behave as expected (satisfy a given specification, be it robustness against perturbations or general safety properties). These methods are based on abstract interpretation (specifically, the talk will introduce new, custom numerical domains which scale to large networks, e.g., ResNet-34) as well as combinations of these methods with classic mixed-integer linear solvers. In the process, we will also discuss how to apply these techniques not only for verification but also for training provably robust networks. Finally, the talk will discuss practical issues one must consider as well as existing verification systems (e.g., ERAN: https://github.com/eth-sri/eran) and demonstrate how students can become involved in improving these tools and trying out new types of networks.
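To give a flavour of abstract interpretation for neural networks, here is a minimal sketch using the simplest possible abstract domain (intervals/boxes), propagated through one affine layer and a ReLU; ERAN's actual domains (e.g., DeepPoly) are considerably more precise, and the network weights below are made up for the example.

    # Interval (box) bound propagation through an affine + ReLU layer, illustrative only.
    import numpy as np

    def affine_bounds(lo, hi, W, b):
        """Propagate an input box [lo, hi] through y = W x + b."""
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def relu_bounds(lo, hi):
        return np.maximum(lo, 0), np.maximum(hi, 0)

    # Toy one-layer network and an input x with an L-infinity perturbation of 0.1.
    W, b = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.0, -0.5])
    x, eps = np.array([1.0, 0.5]), 0.1

    lo, hi = affine_bounds(x - eps, x + eps, W, b)
    lo, hi = relu_bounds(lo, hi)
    print("certified output bounds:", lo, hi)
    # If the lower bound of the correct class exceeds the upper bounds of all other
    # classes, the network is provably robust for every input in the perturbation box.

The same bounds can be used at training time (by penalizing specification violations over the whole box), which is the idea behind training provably robust networks mentioned in the abstract.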



Tentative Schedule


Tuesday, 4th June

09:30 - 10:00 Welcome
10:00 - 12:00 Machine Learning & Deep Learning Tutorial (Part 1)
Guillaume Charpiat
12:00 - 13:30 Lunch break
13:30 - 16:00 Machine Learning & Deep Learning Tutorial (Part 2)
Guillaume Charpiat
16:00 - 16:30 Coffee break
16:30 - 16:50 Verification of RNN-Based Neural Agent-Environment Systems
Michael Akintunde
16:50 - 17:10 A security study of ODE Nets
Julien Girard-Satabin

Wednesday, 5th June

09:30 - 10:00 Welcome
10:00 - 12:00 Verification of Deep Neural Networks
Guy Katz
12:00 - 13:30 Lunch break
13:30 - 15:30 Program Synthesis
Nathanaël Fijalkow
15:30 - 16:00 Coffee break
16:00 - 18:00 Trusting Machine Learning: Privacy, Robustness, and Interpretability Challenges
Reza Shokri

Thursday, 6th June

09:30 - 10:00 Welcome
10:00 - 12:00 Safe and Robust Deep Learning
Martin Vechev
12:00 - 13:30 Lunch break
13:30 - 15:30 Deductive Verification, the Inductive Way
Daniel Neider
15:30 - 16:00 Coffee break
16:00 - 17:20 An Introduction to Process Mining and Conformance Checking
Thomas Chatain
17:20 - 17:40 Generalized Alignment-Based Trace Clustering
Mathilde Boltenhagen
17:40 - 18:00 Global PAC Bounds for Learning Discrete Time Markov Chains
Blaise Genest
19:00 - Reception

Friday, 7th June

09:30 - 10:00 Welcome
10:00 - 12:00 Measuring the security of black-box systems via Machine Learning
Giovanni Cherubin
12:00 - 13:30 Lunch break
13:30 - 15:00 Generating Optimal Privacy-Protection Mechanisms via Machine Learning
Catuscia Palamidessi
15:00 - 15:30 Coffee break
15:30 - 17:00 TBA
Josef Urban




Venue

ForMaL takes place at ENS Paris-Saclay, Cachan, France. Lectures take place in the Amphithéâtre Chemla (Institut d'Alembert).


Talks and Poster Sessions

We welcome proposals of talks and poster presentations from all participants. Talks will last around 15-20 minutes, depending on the available slots.

Submission is closed.

Registration


Participation is free (including coffee and lunch breaks), but registration is required. To register, please fill in the following form. The organizers will send a confirmation email after manual verification.

Registration is closed.

Accommodation

Here is a list of hotels that are close to the venue or can easily be reached by public transportation. Please do not hesitate to contact the organizers if you need any assistance.

Scientific Committee

Stefan Haar (chair)

Benedikt Bollig (co-chair)

Marc Aiguier

Florence d'Alché-Buc

Alexandre Allauzen

Sébastien Bardin

Guillaume Charpiat

Thomas Chatain

Zakaria Chihani

Stéphan Clémençon

Philippe Dague

Alain Finkel

Julien Girard

Alexandre Gramfort

Serge Haddad

Céline Hudelot

Igor Khmelnitsky

Catuscia Palamidessi

Marc Schoenauer

Michèle Sebag

Lina Ye

Contact

To contact the organizers, please use the following form:

Sponsors

© Copyright 2019 ForMaL