Sitemap

A list of all the posts and pages found on the site. For robots, an XML version is available for digestion as well.

Pages

Posts

Future Blog Post

less than 1 minute read


This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.

Blog Post number 4

less than 1 minute read


This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read


This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read


This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read


This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Local Identifiability of Deep ReLU Neural Networks: the Theory

Published in Advances in Neural Information Processing Systems 35, 2022

Is a sample rich enough to determine, at least locally, the parameters of a neural network? To answer this question, we introduce a new local parameterization of a given deep ReLU neural network by fixing the values of some of its weights. This allows us to define local lifting operators whose inverses are charts of a smooth manifold of a high-dimensional space. The function implemented by the deep ReLU neural network composes the local lifting with a linear operator which depends on the sample. We derive from this convenient representation a geometrical necessary and sufficient condition of local identifiability. Looking at tangent spaces, the geometrical condition provides (1) a sharp and testable necessary condition of identifiability and (2) a sharp and testable sufficient condition of local identifiability. The validity of the conditions can be tested numerically using backpropagation and matrix rank computations.
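The last sentence of the abstract hints at a numerical recipe: differentiate the network's outputs with respect to its parameters and check matrix ranks. The sketch below is an illustration only, not the paper's exact criterion. For a hypothetical two-layer ReLU network f(x) = W2·relu(W1·x), it assembles the Jacobian of the outputs at random sample points with respect to all weights and computes its rank; the per-hidden-unit positive-rescaling symmetry caps the rank at P − h, where P is the number of parameters and h the number of hidden units.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, m, N = 3, 4, 2, 30          # input dim, hidden units, output dim, samples
W1 = rng.standard_normal((h, d))
W2 = rng.standard_normal((m, h))
X = rng.standard_normal((N, d))
P = h * d + m * h                 # total number of parameters (no biases here)

rows = []
for x in X:
    z = W1 @ x                    # pre-activations
    a = np.maximum(z, 0)          # ReLU activations
    act = (z > 0).astype(float)   # active-unit indicators
    # analytic Jacobian of f(x) = W2 @ relu(W1 @ x) w.r.t. (W1, W2), flattened
    J_W1 = np.zeros((m, h, d))
    for j in range(h):
        J_W1[:, j, :] = np.outer(W2[:, j] * act[j], x)
    J_W2 = np.zeros((m, m, h))
    for k in range(m):
        J_W2[k, k, :] = a
    rows.append(np.concatenate([J_W1.reshape(m, -1), J_W2.reshape(m, -1)], axis=1))
J = np.vstack(rows)               # shape (N*m, P)

rank = np.linalg.matrix_rank(J)
# rank is at most P - h: rescaling row j of W1 by c > 0 and column j of W2
# by 1/c leaves f unchanged, giving h tangent directions in the kernel of J;
# generic weights and samples attain this bound
print(rank, P - h)
```

With enough generic samples, a rank of exactly P − h means no further tangent symmetries beyond rescaling, which is the flavor of local-identifiability test the abstract describes.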

Recommended citation: Bona-Pellissier, Joachim, François Malgouyres, and François Bachoc. "Local Identifiability of Deep ReLU Neural Networks: the Theory." Advances in Neural Information Processing Systems 35 (2022): 27549-27562. https://proceedings.neurips.cc/paper_files/paper/2022/hash/b0ae046e198a5e43141519868a959c74-Abstract-Conference.html

Parameter identifiability of a deep feedforward ReLU neural network

Published in Machine Learning, 2023

The possibility of recovering the parameters (weights and biases) of a neural network from knowledge of its function on a subset of the input space can be, depending on the situation, a curse or a blessing. On the one hand, recovering the parameters allows for better adversarial attacks and could also disclose sensitive information from the dataset used to construct the network. On the other hand, if the parameters of a network can be recovered, it guarantees the user that the features in the latent spaces can be interpreted. It also provides foundations to obtain formal guarantees on the performance of the network. It is therefore important to characterize the networks whose parameters can be identified and those whose parameters cannot. In this article, we provide a set of conditions on a deep fully-connected feedforward ReLU neural network under which the parameters of the network are uniquely identified, modulo permutation and positive rescaling, from the function it implements on a subset of the input space.

Recommended citation: Bona-Pellissier, J., Bachoc, F. & Malgouyres, F. Parameter identifiability of a deep feedforward ReLU neural network. Mach Learn (2023). https://doi.org/10.1007/s10994-023-06355-4

Teaching

Convex Optimization tutorial (in English)

Graduate course, Université Toulouse 1 Capitole, 2020

Convex functions, unconstrained optimization, optimization with equality and/or inequality constraints, the Lagrange theorem, KKT conditions.
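For reference, the KKT conditions covered in the last topic are the standard ones: for a problem min f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0, a candidate minimizer x* with multipliers λ_i, μ_j must satisfy (under a suitable constraint qualification):

```latex
\begin{aligned}
\nabla f(x^\star) + \sum_i \lambda_i \nabla g_i(x^\star)
                  + \sum_j \mu_j \nabla h_j(x^\star) &= 0
  && \text{(stationarity)}\\
g_i(x^\star) \le 0, \qquad h_j(x^\star) &= 0
  && \text{(primal feasibility)}\\
\lambda_i &\ge 0
  && \text{(dual feasibility)}\\
\lambda_i \, g_i(x^\star) &= 0
  && \text{(complementary slackness)}
\end{aligned}
```

For a convex problem satisfying Slater's condition, these conditions are both necessary and sufficient for global optimality.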

Deep Learning practical

Graduate (mathematics), Paul Sabatier University, Toulouse, 2022

Introduction to several architectures such as MLPs, CNNs, autoencoders, and U-Nets. Introduction to Keras, TensorFlow, and scikit-learn.

Optimization practical

Undergraduate (mathematics), Paul Sabatier University, Toulouse, 2022

First-order methods: gradient descent with Armijo/Wolfe conditions. Second-order methods: Newton, BFGS, and DFP. Constrained optimization: quadratic problems, the SQP method.
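As a minimal sketch of the first topic in this practical (my own illustration, not the course material): gradient descent where the step size is chosen by backtracking until the Armijo sufficient-decrease condition holds, applied to a small convex quadratic whose exact minimizer solves a linear system.

```python
import numpy as np

def armijo_gd(f, grad, x0, c1=1e-4, beta=0.5, tol=1e-8, max_iter=500):
    """Gradient descent with Armijo backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # backtrack until sufficient decrease: f(x - t g) <= f(x) - c1 t ||g||^2
        while f(x - t * g) > f(x) - c1 * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

# minimize f(x) = x^T A x / 2 - b^T x with A symmetric positive definite;
# the minimizer is the solution of A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = armijo_gd(f, grad, np.zeros(2))
print(x_star, np.linalg.solve(A, b))
```

The Wolfe conditions add a curvature requirement on top of the Armijo decrease; the backtracking variant above is the simpler scheme usually implemented first.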

Linear Algebra tutorial

Undergraduate course, Université Toulouse 3 Paul Sabatier, 2023

Euclidean and Hermitian spaces, isometries, the spectral theorem, quadratic forms, and more.

Machine Learning tutorial & practical

Undergraduate (mathematics), Paul Sabatier University, Toulouse, 2023

Introduction to regression, classification, and clustering problems. Linear regression, ridge and LASSO penalization, k-NN, logistic regression, SVMs, k-means.
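As one concrete example from this list (an illustration of mine, using synthetic data rather than any course dataset): ridge regression has the closed-form estimate w = (XᵀX + λI)⁻¹Xᵀy, with λ = 0 recovering ordinary least squares and larger λ shrinking the coefficients toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(n)   # noisy linear data

def ridge(X, y, lam):
    """Closed-form ridge estimate: w = (X^T X + lam I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

w_ols = ridge(X, y, 0.0)     # lam = 0 is ordinary least squares
w_reg = ridge(X, y, 10.0)    # larger lam shrinks the coefficient norm
print(w_ols, w_reg)
```

LASSO replaces the squared-norm penalty with an L1 penalty, which has no closed form but produces exactly sparse coefficients; that contrast is a standard talking point in this kind of tutorial.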