Habilitation thesis of Aurélien Bellet

Contributions to Decentralized and Privacy-Preserving Machine Learning

This manuscript presents, in a unified way, some of my contributions to the topic of decentralized and privacy-preserving machine learning. Decentralized learning, also known as federated learning, aims to allow a set of participants with local datasets to collaboratively train machine learning models while keeping their data decentralized. A key challenge in this context is to design decentralized algorithms that (i) can efficiently solve a variety of learning tasks on highly heterogeneous local datasets, and (ii) provide rigorous privacy guarantees while minimizing the impact on the utility of the learned models. To tackle these challenges, I describe three sets of contributions. First, I present a decentralized approach to collaboratively learn a personalized model for each user. Second, I address the problem of decentralized estimation and learning with pairwise loss functions. In both cases, privacy-preserving versions of these algorithms are introduced under the strong model of local differential privacy. Finally, to reduce the utility cost induced by local differential privacy, I propose two approaches to improve the privacy-utility trade-offs of decentralized learning through appropriate relaxations of the local model.

defended on 30/11/2021