Events

Public defence in Computer Science, M.Sc. (Tech) Buse Gül Atli Tekgül

The title of the doctoral thesis is "Securing Machine Learning: Streamlining Attacks and Defenses Under Realistic Adversary Models".

Over the last decade, machine learning (ML) solutions have been widely adopted across many applications due to their remarkable performance in various domains. However, the rapid progress of ML has also given rise to new attacks that compromise the confidentiality, integrity, and availability of ML-driven systems. For example, model evasion attacks, which deliberately fool ML models, have become a significant threat to safety-critical applications. Adversaries are also motivated to steal ML models and illegally monetize them using model extraction attacks. Both attacks pose serious threats even when the ML model is deployed behind an application programming interface (API) and exposes no information about the model itself to end users.
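As background for readers unfamiliar with model evasion, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), a textbook evasion attack that perturbs an input in the direction that increases the classifier's loss. This is an illustrative example only, assuming a PyTorch image classifier; it is not one of the attacks developed in the thesis, and the function name and epsilon value are arbitrary choices for demonstration.

    # Illustrative FGSM sketch (not the thesis's method); assumes a PyTorch classifier.
    import torch
    import torch.nn as nn

    def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     eps: float = 0.03) -> torch.Tensor:
        """Return an adversarially perturbed copy of x aimed at flipping model(x)."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the loss-increasing direction; clamp to a valid pixel range.
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

A call such as fgsm_perturb(model, images, labels) yields inputs that look unchanged to a human but can be misclassified, which is why such attacks threaten safety-critical applications.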

This thesis investigates realistic security threats to ML systems posed by model evasion and extraction attacks. The thesis consists of three parts. In the first part, we develop model evasion attacks that simultaneously achieve high effectiveness and efficiency against image classifiers and deep reinforcement learning agents; in both applications, we operate within realistic adversary models. In the second part, we propose a novel ownership verification method that integrates ML model watermarking into the federated learning process. We also demonstrate that dataset watermarking approaches can reliably prove ownership of large datasets only when the adversary's capabilities are limited. In the last part, we show that the effectiveness of model extraction attacks depends on the adversary's capabilities and knowledge, and we develop alternative model ownership verification methods that survive model extraction attacks. The findings of this dissertation will help ML model owners evaluate potential vulnerabilities and remedies against model evasion and extraction attacks under different security requirements and realistic adversary models.

Opponent: Professor Simin Nadjm-Tehrani, Linköping University, Sweden

Custos: Professor N Asokan, Aalto University School of Science, Department of Computer Science

Contact details of the doctoral student: [email protected]

The public defence will be organised on campus (Maarintie 8, lecture hall AS1).

The thesis is on public display for 10 days before the defence in Aaltodoc, the publication archive of Aalto University.

Electronic thesis
