Analysis of Adversarial Examples (PhD Thesis)


I earned my PhD magna cum laude from Heidelberg University, Germany (the 2nd best-ranked German university), where I was an external PhD student advised by my former team lead Janis Keuper and supervised by Prof. Ullrich Köthe. My key research interest was the analysis of adversarial examples on deep neural networks (DNNs), in particular their robustness and trustworthiness.

I also had the opportunity to intern with Prof. Sijia Liu (MSU & MIT-IBM) and Pin-Yu Chen (MIT-IBM), focusing on adversarial machine learning. Our paper “Visual Prompting for Adversarial Robustness” was recognized among the top 3% of papers at the ICASSP conference.

Thesis Abstract

The rise of artificial intelligence (AI) has significantly impacted the field of computer vision (CV). In particular, deep learning (DL) has advanced the development of algorithms that comprehend visual data. On specific tasks, DL reaches human-level capabilities and shapes our everyday lives through applications such as virtual assistants, entertainment, and web search. Despite the success of visual algorithms, in this thesis we study the threat of adversarial examples: input manipulations crafted to cause misclassification.
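To give a concrete sense of what such an input manipulation looks like, here is a minimal sketch in the style of the fast gradient sign method (FGSM), one common way to craft adversarial examples. It uses a toy linear classifier; the weights, input, and perturbation budget below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Toy linear classifier: label 1 if w.x + b > 0, else 0.
# All values here are illustrative, chosen so the effect is easy to see.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return 1 if (w @ x + b) > 0 else 0

x = np.array([0.3, 0.1])  # clean input, classified as 1
y = 1                     # true label

# FGSM steps in the direction that increases the loss:
#   x_adv = x + eps * sign(d loss / d x)
# For the logistic loss, the gradient w.r.t. the input is (p - y) * w.
score = w @ x + b
p = 1.0 / (1.0 + np.exp(-score))   # predicted probability of class 1
grad_x = (p - y) * w               # d loss / d x
eps = 0.3                          # perturbation budget (L-infinity)
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # a small perturbation flips the prediction
```

Each pixel (here, each coordinate) moves by at most `eps`, yet the classifier's decision flips; the same principle scales to deep networks, where the gradient is obtained by backpropagation.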

Defense announcement

Slides

Thesis



