Peter Lorenz

NTU Singapore


Adversarial Machine Learning and Model Stealing

I am currently a Postdoctoral Research Fellow at Nanyang Technological University (NTU) Singapore, ranked 2nd worldwide in AI research, where I focus on trustworthy AI. I recently completed my Ph.D. magna cum laude at Heidelberg University, Germany, specializing in adversarial machine learning for classification and in generative diffusion models, with a focus on detecting harmful outliers to increase the trustworthiness of AI models and prepare them for open-world problems. During my Ph.D. (thesis), which I completed in 3.5 years, I interned at the MIT-IBM Watson AI Lab and achieved notable results, including a top 3% paper award at the ICASSP conference, a top-20 ranking in the CVPR 2022 Art-of-Robustness Challenge, and two acceptances to Oxford summer schools.

In addition to my research, I actively contribute to the machine learning community as a reviewer for top-tier conferences such as ICML, ICLR, and NeurIPS. My passion lies in advancing the field of machine learning, with a particular focus on its practical applications.

Previously, I worked in academia and industry on the CARLA simulator for pedestrian safety and on classification models for mobile robotics, including vehicles and drones. Before that, for my Bachelor's thesis, I contributed to the vision system of autonomous robots with Team TEDUSAR @ TU Graz, which won "Best in Autonomy" at RoboCup 2016.

Leave me an anonymous comment!

news

Jan 07, 2025 Check out my continuously updated reading list about model stealing!
Mar 18, 2024 I am happy to announce that I am a reviewer at the CVPR Workshop Robustness of Foundation Models 🎉
Jan 29, 2024 I have been accepted to the Oxford Summer School - Representation Learning
Oct 18, 2023 I am happy to announce that I am a reviewer at ICASSP on the topics of federated/split learning and quantum privacy 😄
Aug 26, 2023 Check out my write-ups from the Lakera Gandalf hackathon.

latest posts

Mar 28, 2025 Recommender Systems
Mar 28, 2025 Imbalanced Data
Mar 28, 2025 MLSD

selected publications

  1. ICML
    Deciphering the Definition of Adversarial Robustness for post-hoc OOD Detectors
    Peter Lorenz, Mario Fernandez, Jens Mueller, and 1 more author
    In ICML 2024 Workshop on the Next Generation of AI Safety, 2024
  2. IJCNN
    Adversarial Examples are Misaligned in Diffusion Model Manifolds
    Peter Lorenz, Ricard Durall, and Janis Keuper
    In IJCNN, 2024
  3. NeurIPS
    Visual Prompting for Adversarial Robustness (top 3% @ ICASSP23)
    Aochuan Chen*, Peter Lorenz*, Yuguang Yao, and 2 more authors
    In NeurIPS WS TSRML, Safety ML WS, ICASSP23, 2022
  4. AAAI
    Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?
    Peter Lorenz, Dominik Strassel, Margret Keuper, and 1 more author
    In The AAAI-22 Workshop on Adversarial Machine Learning and Beyond, 2022
  5. ICML
    Detecting AutoAttack Perturbations in the Frequency Domain
    Peter Lorenz, Paula Harder, Dominik Straßel, and 2 more authors
    In ICML 2021 Workshop on Adversarial Machine Learning, 2021