MATH FOR MACHINE LEARNING SUMMER SCHOOL


Maths for Machine Learning is a one-week summer school organized by École Polytechnique and EMINES-UM6P in the framework of the joint chair "Data Science and Industrial Process" between Mohammed VI Polytechnic University (UM6P), Ben Guerir, Morocco, and École Polytechnique. The first edition of the summer school will take place at EMINES, Mohammed VI Polytechnic University (UM6P), Ben Guerir, Morocco, from July 25 to 30, 2022, and will cover the theoretical aspects of machine learning. Lectures will present the mathematical foundations of different algorithms, with a strong focus on generative models. The week-long summer school will allow participants to learn about the latest research in the field from renowned professors, who will be available throughout the week to exchange with the students and answer their questions. In addition, students will present their own work in dedicated sessions and receive academic feedback to help them improve their future research.

Learn more about the organizers:

The summer school is organized by the joint chair between EMINES - Mohammed VI Polytechnic University (UM6P), Ben Guerir, Morocco, and École Polytechnique, held by Éric Moulines, Professor of Statistics at École Polytechnique, affiliate professor at EMINES, and member of the French Academy of Sciences.

PROGRAM:

| Time        | Sun 24/7             | Mon 25/7          | Tue 26/7            | Wed 27/7                    | Thu 28/7                          | Fri 29/7            |
|-------------|----------------------|-------------------|---------------------|-----------------------------|-----------------------------------|---------------------|
| 09h00-09h50 |                      | Giovanni Conforti | Valentin De Bortoli | Alexey Naumov               | Dmitry Kropotov                   | Dmitry Kropotov     |
| 10h00-10h50 |                      | Ali Idri          | Giovanni Conforti   | Giovanni Conforti           | Ali Idri / Fatima Ezzahrae Nakach | Valentin De Bortoli |
| 11h00-11h50 |                      | Alain Durmus      | Alain Durmus        | Dmitry Kropotov             | Alain Durmus                      | Valentin De Bortoli |
| 14h00-14h50 | Reception & check-in | Alexey Naumov     | Alexey Naumov       | Ali Idri / Hasnae Zerouaoui | Giovanni Conforti                 |                     |
| 15h00-15h30 |                      | Samsonov          | Plassier            | Tiapkin                     | Jimenez-Moreno                    |                     |
| 15h30-16h00 |                      | Puchkin           | Philippenko         | Clavier                     | Kodryan                           |                     |
| 16h00-16h30 |                      | Lecomte           | Allard              | Matthew                     | Janati                            |                     |
| 17h00-17h30 |                      |                   | Huix                | Abadie                      | Said                              | Sinilshchikov       |
| 17h30-18h00 |                      |                   | Rémi                |                             |                                   |                     |

Breakfast is served before the 09h00 lecture; short breaks separate the sessions, with lunch from 11h50 to 14h00 and an afternoon break from 16h30 to 17h00. Evening activities start at 20h30: ping-pong and chess tournaments, a jam session, a football tournament, and a volleyball tournament.

Student presentations last 45' and are structured as follows:

  • 15' of a conference-type presentation with slides + 30' of a more detailed presentation of a result, which can be theoretical or practical (software demonstration).
  • The schedule of the student presentations will be announced later.
  • All students who wish to present a result must attach a presentation proposal to their application and send a request to the organizers by email.

SPEAKERS:

Alain Durmus: Associate professor at ENS Paris-Saclay (formerly ENS Cachan) and member of the Borelli Centre
Valentin De Bortoli: CNRS researcher at the Center for Data Science, ENS Ulm, Paris
Alexey Naumov: National Research University Higher School of Economics (HSE), Moscow
Jamal Atif: Professor at the University of Paris-Dauphine, project manager for "Data Science and Artificial Intelligence" at the Institute of Information Sciences and their Interactions
Dmitry Kropotov: Research fellow at Lomonosov Moscow State University
Ali Idri: Affiliate professor at MSDA, UM6P, Ben Guerir
Giovanni Conforti: Assistant professor in probability at CMAP, École Polytechnique

COURSE ABSTRACTS:

Posterior sampling and Bayesian bootstrap: sample complexity and regret bounds - Alexey Naumov: In reinforcement learning (RL), an agent interacts with an environment with the objective of maximizing the sum of collected rewards. In order to fulfill this objective, the agent should balance exploring the environment against exploiting its current knowledge to accumulate rewards. We model the environment as an unknown episodic tabular Markov decision process (MDP) with S states, A actions, and episodes of length H. After T episodes, we measure the performance of the agent by its cumulative regret, which is the difference between the total reward collected by an optimal policy and the total reward collected by the agent during learning. In particular, we study the non-stationary setting, where rewards and transitions can change within an episode. In this course:
  • we provide an overview of the existing methods, in particular methods based on the principle of optimism in the face of uncertainty and on optimism by noise injection, and discuss their advantages and drawbacks;
  • we propose the Bayes-UCBVI algorithm, which uses a quantile of the Q-value function posterior as an upper confidence bound on the optimal Q-value function, and guarantee a high-probability regret bound of order at most Õ(√(H³SAT)) that matches the lower bound of Ω(√(H³SAT));
  • crucial to our analysis is a new fine-grained anti-concentration bound for a weighted Dirichlet sum, which may be of independent interest.
The course is based on joint work with D. Tiapkin, D. Belomestny, É. Moulines, S. Samsonov, Y. Tang, M. Valko, and P. Ménard, "From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses", ICML 2022, https://arxiv.org/abs/2205.07704.
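
The abstract above defines cumulative regret over T episodes. Below is a minimal sketch, not taken from the course, that instantiates this definition on a toy tabular MDP with an ε-greedy agent standing in for Bayes-UCBVI; all sizes and constants are illustrative choices.

```python
# Toy illustration of the cumulative-regret definition (not the Bayes-UCBVI
# algorithm): an epsilon-greedy agent in a hypothetical small episodic MDP.
import numpy as np

rng = np.random.default_rng(0)
S, A, H, T = 2, 2, 5, 2000          # states, actions, horizon, episodes

# Hypothetical MDP: P[s, a] is a distribution over next states, R[s, a]
# a mean reward (kept stationary within an episode for simplicity).
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(size=(S, A))

def value_iteration(P, R, H):
    """Optimal Q-values of the episodic MDP by backward induction."""
    Q = np.zeros((H + 1, S, A))
    for h in range(H - 1, -1, -1):
        Q[h] = R + P @ Q[h + 1].max(axis=1)
    return Q

Q_star = value_iteration(P, R, H)
V_star = Q_star[0, 0].max()          # optimal value from initial state 0

regret, eps = 0.0, 0.3
for _ in range(T):
    s, ep_reward = 0, 0.0
    for h in range(H):
        # epsilon-greedy around the true Q-values (a stand-in for estimates)
        a = rng.integers(A) if rng.random() < eps else Q_star[h, s].argmax()
        ep_reward += R[s, a]                 # expected reward of the step
        s = rng.choice(S, p=P[s, a])
    regret += V_star - ep_reward             # per-episode gap to optimality

print(f"cumulative regret after {T} episodes: {regret:.1f}")
```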
 
Generative modeling: from diffusion models to Schrödinger bridges - Valentin De Bortoli: Generative modeling is the task of synthesizing new samples from an unknown distribution given a set of examples. This challenge is ubiquitous in machine learning, with applications in image synthesis, audio synthesis, protein modeling, and forecasting.
In this short course, we will quickly review the main flavors of state-of-the-art (SOTA) generative modeling (EBMs, VAEs, GANs, normalizing flows, ...). Then, we will turn to a very recent contender for SOTA synthesis: score-based generative models (SGMs), also called diffusion models. We will explain the principles of SGMs and present some theoretical results on the topic.
In particular, we will provide quantitative bounds for the convergence of such models. Finally, we will discuss the links between SGMs, stochastic control, and optimal transport. More precisely, we will show that SGMs can be seen as one iteration of the celebrated Iterative Proportional Fitting algorithm.
Support: a long version of the course can be found at https://vdeborto.github.io/project/generative_modeling/
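
As a toy companion to this abstract (not course material), here is a minimal sketch of score-based generative modeling in one dimension. Because the data law is chosen Gaussian, the score of the noised marginal is available in closed form and no network needs to be trained; the constant-β VP-SDE, its parameters, and the step counts are illustrative assumptions.

```python
# Minimal 1-D score-based generative model: the forward VP-SDE noises a
# Gaussian data law, and samples are drawn by integrating the reverse-time
# SDE with the exact score (no learning needed in this toy case).
import numpy as np

rng = np.random.default_rng(1)
mu, sigma0 = 2.0, 0.5              # data distribution: N(mu, sigma0^2)
beta, T, n_steps, n = 1.0, 5.0, 500, 10000
dt = T / n_steps

def marginal(t):
    """Mean scale and variance of x_t under dx = -0.5*beta*x dt + sqrt(beta) dW."""
    a = np.exp(-0.5 * beta * t)
    return a, sigma0**2 * a**2 + (1.0 - a**2)

def score(x, t):
    """Exact score d/dx log p_t(x) of the Gaussian marginal."""
    a, v = marginal(t)
    return -(x - mu * a) / v

# Reverse-time SDE, integrated backward with Euler-Maruyama from pure noise.
x = rng.normal(size=n)             # x_T approximately follows the N(0,1) prior
for i in range(n_steps, 0, -1):
    t = i * dt
    drift = -0.5 * beta * x - beta * score(x, t)
    x = x - drift * dt + np.sqrt(beta * dt) * rng.normal(size=n)

print(f"sampled mean {x.mean():.2f} (target {mu}), "
      f"std {x.std():.2f} (target {sigma0})")
```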
 
An introduction to (stochastic) optimization on Riemannian manifolds - Alain Durmus: The main objective of this mini-course is to provide an introduction to deterministic and stochastic methods for optimizing an objective function defined on a Riemannian manifold. In the first part, I will present the necessary tools and concepts of Riemannian geometry and explain how they are natural extensions of well-known objects of the Euclidean setting. I will also take the opportunity to motivate the introduction of such a framework through machine learning and statistical problems that fit into it, such as principal component analysis and barycenter computation. The second session will focus on deterministic optimization methods, in particular gradient descent based on the exponential map and the Riemannian Newton scheme. In the last lecture, we will review some stochastic optimization algorithms and give theoretical guarantees for them. Finally, if time permits, we will conclude with a brief introduction to the problem of sampling from a target distribution on a Riemannian manifold.
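
As a toy companion to this abstract (not course material), the sketch below runs Riemannian gradient descent on the unit sphere for the PCA example mentioned above: minimizing f(x) = -xᵀAx over ‖x‖ = 1 recovers the leading eigenvector of A. The projection-based retraction, step size, and iteration count are illustrative choices.

```python
# Riemannian gradient descent on the sphere: project the Euclidean gradient
# onto the tangent space, take a step, then retract back onto the manifold.
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))
A = M @ M.T                          # symmetric PSD matrix (toy data covariance)

x = rng.normal(size=5)
x /= np.linalg.norm(x)               # start on the unit sphere
step = 0.01

for _ in range(1000):
    egrad = -2.0 * A @ x                     # Euclidean gradient of f(x) = -x'Ax
    rgrad = egrad - (x @ egrad) * x          # projection onto the tangent space at x
    x = x - step * rgrad
    x /= np.linalg.norm(x)                   # retraction: normalize back to the sphere

lead = np.linalg.eigh(A)[1][:, -1]           # reference leading eigenvector
print("alignment with leading eigenvector:", abs(x @ lead))
```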
 
Bayesian models in deep learning - Dmitry Kropotov: The Bayesian paradigm provides a mathematical framework for building machine learning models with nice properties. When the amount of data significantly exceeds the number of parameters to be estimated (or learned, in ML terminology), Bayesian modeling becomes equivalent to classical maximum likelihood estimation. But when the number of trainable parameters is comparable to the amount of available data, the two approaches differ. This is exactly the case for modern over-parameterized deep neural networks. Although closed-form Bayesian inference is impossible for DNNs, several techniques exist for approximate variational inference. In the course we will review the important tool of doubly-stochastic variational inference, which makes approximate Bayesian inference scalable enough to be applied to large models and datasets, and derive the variational auto-encoder (VAE) model. The latter is a flexible model which admits numerous generalizations. In particular, we will describe the state-of-the-art diffusion modeling framework through the prism of hierarchical VAEs.
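
To make the reparameterization idea behind doubly-stochastic variational inference concrete, here is a minimal sketch, not course material, for a toy model p(z) = N(0, 1), p(x|z) = N(θz, 1) with variational posterior q(z|x) = N(μ, σ²); all numerical values are illustrative assumptions.

```python
# One-sample Monte Carlo estimate of the ELBO using the reparameterization
# trick z = mu + sig * eps, the building block of doubly-stochastic VI and VAEs.
import numpy as np

rng = np.random.default_rng(3)

def log_normal(x, mean, var):
    """Log-density of N(mean, var) evaluated at x."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def elbo_estimate(x, theta, mu, sig, n_samples=1):
    """Reparameterized ELBO: E_q[log p(x|z) + log p(z) - log q(z|x)]."""
    eps = rng.normal(size=n_samples)
    z = mu + sig * eps                       # reparameterization trick
    val = (log_normal(x, theta * z, 1.0)     # reconstruction term log p(x|z)
           + log_normal(z, 0.0, 1.0)         # prior term log p(z)
           - log_normal(z, mu, sig ** 2))    # minus the variational term log q(z|x)
    return val.mean()

x, theta = 1.5, 2.0
# With more samples the stochastic estimate concentrates on the true ELBO.
for n in (1, 10, 10000):
    print(n, elbo_estimate(x, theta, mu=0.6, sig=0.4, n_samples=n))
```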
 
Small-noise convergence of the Schrödinger problem to optimal transport - Giovanni Conforti: One of the reasons for the success of recent applications of entropic optimal transport and Schrödinger bridges in data science and machine learning is that they provide a more regular, more convex, and more tractable version of the optimal transport problem, which is recovered in the small-noise limit. The aim of this mini-course is to illustrate some theoretical results that rigorously justify the convergence of the Schrödinger problem towards optimal transport. We shall begin by reviewing some fundamental results on both problems and their dual formulations, and then proceed to discuss the convergence of the Schrödinger potentials to the Kantorovich potentials and the convergence of the gradient of the Schrödinger potential to the Brenier map.
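
As a toy companion to this abstract (not course material), the sketch below runs Sinkhorn iterations for entropic optimal transport between two small discrete distributions and shows the regularized transport cost decreasing toward the unregularized optimal transport cost as ε shrinks, echoing the small-noise limit discussed in the course. The supports, marginals, and ε values are illustrative choices.

```python
# Entropic optimal transport via Sinkhorn fixed-point iterations; as the
# regularization eps -> 0, the plan approaches the monotone optimal coupling.
import numpy as np

x = np.linspace(0.0, 1.0, 5)             # support of the source measure
y = np.linspace(0.5, 1.5, 5)             # support of the target measure
mu = np.full(5, 0.2)                     # uniform source marginal
nu = np.full(5, 0.2)                     # uniform target marginal
C = (x[:, None] - y[None, :]) ** 2       # quadratic ground cost

for eps in (1.0, 0.1, 0.01):
    K = np.exp(-C / eps)                 # Gibbs kernel of the entropic problem
    u = np.ones(5)
    for _ in range(2000):                # Sinkhorn scaling iterations
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    plan = u[:, None] * K * v[None, :]   # entropic transport plan
    print(f"eps={eps:5.2f}  transport cost={(plan * C).sum():.4f}")
# The printed cost decreases toward the unregularized OT cost (here 0.25,
# attained by the identity coupling, since both supports are sorted).
```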
 
Ensemble Learning - Ali Idri: Ensemble learning is the process of strategically generating and combining multiple models, such as classifiers or experts, to solve a given computational intelligence problem. Ensemble learning is primarily used to improve the performance of classification and regression tasks. Other applications of ensemble learning include assigning a confidence to the decision made by the model, optimal feature selection and data fusion, incremental learning, non-stationary learning, and error correction. This seminar focuses on the use of ensemble learning in classification and regression.
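
As a toy companion to this seminar (not course material), the sketch below uses scikit-learn's BaggingClassifier, whose default base estimator is a decision tree, to combine 100 bootstrap-trained trees by majority vote and compares the ensemble against a single tree on a synthetic dataset; the dataset and hyperparameters are illustrative choices.

```python
# Ensemble learning by bagging: many decision trees trained on bootstrap
# resamples of the data, combined by majority vote, versus a single tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
bagged = BaggingClassifier(        # default base estimator: a decision tree
    n_estimators=100,              # vote over 100 bootstrap-trained trees
    random_state=0,
).fit(X_tr, y_tr)

print("single tree accuracy:   ", single.score(X_te, y_te))
print("bagged ensemble accuracy:", bagged.score(X_te, y_te))
```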
 
REGISTRATION
  • There is no fee to attend the school.
  • Accommodation and board on campus are provided. Click on the following link to learn more about accommodation.
  • Travel costs are not covered and remain the responsibility of the participants. A shuttle will be provided from Casablanca airport on July 24th. If a visa is required, an invitation letter will be sent to you.
  • Registration is open to PhD students in the field of machine learning; interested Master's students are also encouraged to apply.
  • Complete the following FORM to send your application; a full CV and a presentation summary are required for submission.
  • For more information on registration, please contact the organizers by email.