Peter Lorenz

ITWM Fraunhofer + Heidelberg University


ITWM, Fraunhofer Institute for Industrial Mathematics

Fraunhofer-Platz 1

Kaiserslautern, Germany

Computer science Ph.D. stipend holder at the Fraunhofer Institute for Industrial Mathematics (ITWM) and the Fraunhofer Research Center Machine Learning. I am interested in the intersection of machine learning and computer security, in particular the robustness and trustworthiness of deep neural networks. My key research interest is the analysis of adversarial examples on DNNs.

I am advised by Janis Keuper and supervised by Ullrich Köthe. I am an external Ph.D. student at Heidelberg University (the second-best-ranked German university).

Leave me an anonymous comment!

News: I am looking for a research position.


Mar 18, 2024 I am happy to announce that I am a reviewer for the CVPR Workshop on Robustness of Foundation Models 🎉
Jan 29, 2024 I have been accepted to the Oxford Summer School on Representation Learning
Oct 18, 2023 I am happy to announce that I am a reviewer for ICASSP on the topics of federated/split learning and quantum privacy 😄
Aug 26, 2023 Check out my writeups from the Lakera Gandalf hackathon.
Aug 8, 2023 I am glad to share that I have been selected as a reviewer for the ICCV 2023 AROW workshop 🥳

selected publications

  1. IJCNN
    Adversarial Examples are Misaligned in Diffusion Model Manifolds
    Peter Lorenz, Ricard Durall, and Janis Keuper
    In IJCNN, 2024
  2. ICCV
    Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
    Peter Lorenz, Ricard Durall, and Janis Keuper
    In ICCV Workshop and Challenge on DeepFake Analysis and Detection, 2023
  3. NeurIPS
    Visual Prompting for Adversarial Robustness (top 3% @ ICASSP23)
    Aochuan Chen*, Peter Lorenz*, Yuguang Yao, and 2 more authors
    In NeurIPS WS TSRML, Safety ML WS, ICASSP23, 2022
  4. AAAI
    Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?
    Peter Lorenz, Dominik Strassel, Margret Keuper, and 1 more author
    In The AAAI-22 Workshop on Adversarial Machine Learning and Beyond, 2022
  5. ICML
    Detecting AutoAttack Perturbations in the Frequency Domain
    Peter Lorenz, Paula Harder, Dominik Straßel, and 2 more authors
    In ICML 2021 Workshop on Adversarial Machine Learning, 2021