Peter Lorenz

NTU Singapore

Machine Learning and Cryptanalysis

I am currently a postdoctoral researcher at Nanyang Technological University (NTU), Singapore, ranked 2nd worldwide in AI research. My research focuses on adversarial machine learning and cryptanalysis, e.g., model stealing and new adversarial attacks.

Previously, I earned my PhD magna cum laude from Heidelberg University, Germany (2nd best-ranked German university), where I was an external Ph.D. student advised by my former team lead Janis Keuper and supervised by Prof. Ullrich Köthe. My key research interest was the analysis of adversarial examples on deep neural networks (DNNs), in particular their robustness and trustworthiness.

I also had the opportunity to intern with Prof. Sijia Liu (MSU & MIT-IBM) and Pin-Yu Chen (MIT-IBM), focusing on adversarial machine learning. Our paper “Visual Prompting for Adversarial Robustness” was recognized among the top 3% of papers at ICASSP 2023.

Leave me an anonymous comment!

news

Mar 18, 2024 I am happy to announce that I am a reviewer for the CVPR Workshop on Robustness of Foundation Models 🎉
Jan 29, 2024 I have been accepted to the Oxford Summer School - Representation Learning
Oct 18, 2023 I am happy to announce that I am a reviewer for ICASSP on the topics of federated/split learning and quantum privacy 😄
Aug 26, 2023 Check out my writeups from the Lakera Gandalf hackathon.

selected publications

  1. ICML
    Deciphering the Definition of Adversarial Robustness for post-hoc OOD Detectors
    Peter Lorenz, Mario Fernandez, Jens Mueller, and 1 more author
    In ICML 2024 Workshop on the Next Generation of AI Safety, 2024
  2. IJCNN
    Adversarial Examples are Misaligned in Diffusion Model Manifolds
    Peter Lorenz, Ricard Durall, and Janis Keuper
    In IJCNN, 2024
  3. NeurIPS
    Visual Prompting for Adversarial Robustness (top 3% @ ICASSP23)
    Aochuan Chen*, Peter Lorenz*, Yuguang Yao, and 2 more authors
    In NeurIPS WS TSRML, Safety ML WS, ICASSP23, 2022
  4. AAAI
    Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?
    Peter Lorenz, Dominik Straßel, Margret Keuper, and 1 more author
    In The AAAI-22 Workshop on Adversarial Machine Learning and Beyond, 2022
  5. ICML
    Detecting AutoAttack Perturbations in the Frequency Domain
    Peter Lorenz, Paula Harder, Dominik Straßel, and 2 more authors
    In ICML 2021 Workshop on Adversarial Machine Learning, 2021