Discover past and upcoming seminars held by the cybersecurity axis.
Date and time: TBD
Venue: TBD
Date and time: May 13, 2025, 14:00-16:00
Venue: IRCICA
Title: Designing Attacks on the Software Supply Chain, Jean-Yves Marion
Abstract:
Binary function classifiers are generic methods that play a crucial role in checking the software integrity of a system, notably by detecting backdoors, vulnerabilities, and malicious code. Many of these classifiers rely on machine-learning techniques and can be bypassed by constructing adversarial examples.
In this talk, we will show how to generate such adversarial examples without any prior knowledge of the classifier being bypassed. We will go further by showing how to insert a malicious payload into an arbitrary function chosen by the attacker, while ensuring that the compromised function is still identified as the target function by the classifier.
This approach illustrates how an attacker can mount a software supply-chain attack, compromising a system through the update of an application or a library.
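As a toy illustration of the kind of black-box evasion the abstract describes (my own sketch, not the speaker's method: the byte-histogram classifier and padding trick are invented for the example), the snippet below evades a hidden linear detector using only query access, by greedily appending semantically inert padding bytes:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=256)        # hidden weights: the attacker never sees these

def classify(code: bytes) -> float:
    # Score a byte string by its normalized byte histogram.
    h = np.bincount(np.frombuffer(code, dtype=np.uint8), minlength=256)
    return float(w @ (h / len(code)))   # score > 0 means "flagged malicious"

def evade(code: bytes, budget: int = 200) -> bytes:
    # Greedy hill climbing: at each step, try all 256 padding bytes and
    # keep the one that lowers the classifier's score the most.
    for _ in range(budget):
        if classify(code) <= 0:
            break                       # no longer flagged: done
        best = min(range(256), key=lambda b: classify(code + bytes([b])))
        code += bytes([best])
    return code

payload = bytes([int(np.argmax(w))]) * 64   # a blob the classifier flags
stealthy = evade(payload)                   # same prefix, no longer flagged
```

A real binary-function classifier is far more complex, but the principle is the same: the attacker only needs the classifier's verdicts, never its internals.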
Date and time: December 5, 2024, 10:00-11:00 AM
Venue: Amphi B, Inria building B
Talk 1: Which Online Platforms and Dark Patterns Should Be Regulated under Article 25 of the DSA?, Nataliia Bielova
Abstract:
On 17 February 2024, the Digital Services Act (DSA) became directly applicable across the EU, explicitly codifying and prohibiting dark patterns in online interfaces for the first time in its Article 25(1). Current enforcement investigations on dark patterns focus on Very Large Online Platforms (VLOPs) such as Meta, Temu, and X. Still, ambiguity remains about which dark patterns the DSA addresses that are not already regulated by the General Data Protection Regulation (GDPR) or the Unfair Commercial Practices Directive (UCPD). Through an interdisciplinary collaboration between experts in law and human-computer interaction (HCI), we conduct a thorough analysis of Article 25 and Recital 67 of the DSA to provide a comprehensive examination of the types of dark patterns encompassed within these legal provisions. We align the extracted dark patterns with the most comprehensive established ontology of dark patterns, which combines the existing definitions of dark patterns. Together with computer science experts, we analyse very popular services used by website publishers, such as Google Tag Manager and Google Analytics, detect dark patterns within the interfaces of these services, and demonstrate the importance of recognizing business users as potential subjects of dark patterns that may violate Article 25.
Talk 2: Browser Extension (In)Security, Aurore Fass
Abstract:
Browser extensions are popular tools for enhancing the browsing experience: they offer additional functionality to Web users, such as ad blocking, grammar checking, or password management. To operate, browser extensions need elevated privileges compared to web pages. Therefore, browser extensions are an attractive target for attackers and can pose a significant threat to Web users.
Specifically, how can extensions put the security and privacy of Web users at risk? How many dangerous extensions have been in the Chrome Web Store? How can we detect dangerous extensions?
In this presentation, I will answer these questions. To this end, I will first define classes of “Security-Noteworthy Extensions” (SNE) that can harm users. Through this talk, I aim to raise awareness about the risks posed by browser extensions and discuss some mitigation strategies.
Date and time: November 26, 2024, 14:00-15:00
Venue: Amphi B, Inria building B
Title: Libra: Dream of Secure Balanced Execution on High-End Processors? - Let’s Make it Real!, Lesly-Ann Daniel
Abstract:
Control-flow leakage (CFL) attacks enable an attacker to expose control-flow decisions of a victim program via side-channel observations. Linearization (i.e., elimination) of secret-dependent control flow is the main countermeasure against these attacks, yet it comes at a non-negligible cost. Conversely, balancing secret-dependent branches often incurs a smaller overhead, but is notoriously insecure on high-end processors. Hence, linearization has been widely believed to be the only effective countermeasure against CFL attacks. In this talk, I will challenge this belief and investigate an unexplored alternative: how can secret-dependent branches be securely balanced on high-end processors? Finally, I will take a step back and present some research challenges related to hardware/software co-design against microarchitectural attacks.
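To make the linearization idea concrete, here is a minimal sketch (mine, not the speaker's Libra design) of a secret-dependent branch and its linearized, branchless equivalent. Python is used only to show the shape of the transformation; real countermeasures operate on compiled code:

```python
def select_branchy(secret_bit: int, a: int, b: int) -> int:
    # Secret-dependent branch: which path executes depends on the secret,
    # so branch predictors, caches, and timing can leak secret_bit.
    if secret_bit:
        return a
    return b

def select_linear(secret_bit: int, a: int, b: int) -> int:
    # Linearized (branchless) selection: the same instructions execute
    # whatever the secret, at the cost of extra arithmetic.
    mask = -(secret_bit & 1)     # all-ones if the bit is set, zero otherwise
    return (a & mask) | (b & ~mask)
```

Balancing would instead keep both branches but pad them into identical-looking instruction sequences, which is cheaper than linearization but, as the talk discusses, hard to make secure on processors with deep microarchitectural state.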
Date and time: October 21, 2024, 13:30-15:00
Venue: Agora 2, ESPRIT building
Talk 1: Controlling False Positives of Deep Learning Detectors, Jan Butora
Abstract:
Deep convolutional neural networks are state-of-the-art detectors for various digital image forensics tasks, such as steganalysis, the detection of steganography. However, their performance is often measured only by empirical evaluation on a given testing set of images of a fixed size. Moreover, the so-called cover-source mismatch makes the detectors unusable on images with different noise properties. In this talk, we will follow the real-world requirements of a forensic analyst, demanding robustness to image size and image source, as well as theoretical guarantees on very small False Positive (FP) rates, such as 10^-4 or lower, which are hard to achieve empirically in practice. First, I will introduce the Reverse JPEG Compatibility Attack, which allows us to model cover (pristine) images in order to control the FP rate. Then we will use this technique in a deep learning classifier and demonstrate that, by carefully modifying the architecture and studying its soft outputs (the logits), we can accurately predict the distribution of the logits w.r.t. image size. Such a detector, used as a one-class classifier, accurately follows theoretically prescribed FP rates and still generalizes well enough to correctly detect unseen steganography.
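The idea of prescribing an FP rate analytically rather than empirically can be sketched in a few lines (my simplification, assuming the cover logits follow a known Gaussian, which is the property the talk's architectural modifications aim to guarantee):

```python
from statistics import NormalDist

# Assumed model of the detector's logit on cover (pristine) images.
mu, sigma = 0.0, 1.0
target_fp = 1e-4                 # prescribed false-positive rate

# Analytic threshold: a cover logit exceeds it with probability 1e-4,
# so no huge empirical test set is needed to calibrate the detector.
threshold = NormalDist(mu, sigma).inv_cdf(1 - target_fp)

def flag(logit: float) -> bool:
    # One-class decision: flag an image as steganographic iff its logit
    # is implausibly large under the cover model.
    return logit > threshold
```

The one-class structure is what allows the detector to catch unseen steganography: anything sufficiently unlike a cover image is flagged, regardless of the embedding scheme.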
Talk 2: An Overview of Automated Program Analysis, Raphaël Monat
Abstract:
This talk will present a brief overview of automated program analysis techniques such as fuzzing, symbolic execution, and conservative static analysis. These techniques can be used to detect various bugs in programs, including buffer overflows, which may create security vulnerabilities. I will then describe the methodology used by the abstract interpretation community to develop conservative static analyses, and comment on ongoing research efforts (and struggles!) we encounter within Mopsa, an open-source static analysis platform I am co-developing.
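As a flavour of conservative static analysis, here is a toy interval abstract interpretation (my sketch; Mopsa's abstract domains are far richer): every concrete value an expression can take lies within the computed interval, so a bounds check on the interval soundly proves the absence of an out-of-bounds access.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Abstract value: all integers between lo and hi, inclusive."""
    lo: int
    hi: int

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

def proves_in_bounds(index: Interval, length: int) -> bool:
    # Conservative: report "safe" only if every possible index fits.
    return 0 <= index.lo and index.hi < length

i = Interval(0, 9)        # e.g. a loop counter ranging over 0..9
idx = i + Interval(2, 2)  # buffer accessed at i + 2
assert proves_in_bounds(idx, 16)      # all indices 2..11 fit: proved safe
assert not proves_in_bounds(idx, 10)  # index may reach 11: alarm raised
```

Unlike fuzzing or symbolic execution, which can miss behaviours, such an analysis over-approximates: if it proves an access safe, no concrete execution can overflow.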
Date and time: April 12, 2023, 9:30 AM
Venue: Amphitheater Atrium, ESPRIT building
| Time | Session / Speaker | Title |
|---|---|---|
| 9h00 | Opening | |
| 9h15 | Session 1 | Multimedia security and privacy |
| | Imane Fouad, Inria | Security and Privacy at the Spirals team |
| | Patrick Bas, CNRS | Information securing activities at the Sigma team |
| 10h30 | Break | |
| 11h15 | Session 2 | Systems and software security |
| | Guillermo Polito, Inria | Empirical Detection of Software Vulnerabilities in the RMoD Team |
| | Thomas Vantroys, Polytech Lille | Connected objects: from securing to identification |
| | Clément Ballabriga, Université de Lille | Abstract interpretation to prove absence of stack overflow |
| 12h30 | Lunch | |
| 14h00 | Session 3 | Network security |
| | Virginie Deniau, Université Gustave Eiffel | Attack detection against wireless communication using radio frequency activity monitoring |
| | Valeria Loscri, Inria | How Machine Learning is changing the Cybersecurity landscape in Wireless Communication Networks |
| | Michaël Hauspie, IUT de Lille | Network security: cloud and IoT |
| 15h15 | Break & Poster session | |
| 16h15 | Session 4 | Privacy and AI security |
| | Deise Santana Maia, Université de Lille | 3D facial biometry |
| | Marc Tommasi, Université de Lille | Decentralized learning, privacy and security |
| | Debabrota Basu, Inria | The Privacy Game: Attacks with and Defenses for Online Learning Algorithms |
| | Soukaina Aji | Spiking Neural Networks and the struggle against adversarial attacks |