Nils Philipp Walter

I am a third-year Ph.D. student at the CISPA Helmholtz Center for Information Security, supervised by Jilles Vreeken. I am broadly interested in robust and explainable machine learning for large-scale real-world applications. In my Ph.D., I aim to develop new approaches that are both descriptive and predictive. That is, the models should not only offer predictive capabilities but also enable practitioners to gain deeper insights into the problems they are addressing. Currently, I mostly work on understanding how neural networks process information and on building methods to describe when and how models make errors.

Before joining CISPA, I was a research assistant in the group of Bernt Schiele at the Max Planck Institute for Informatics, supervised by David Stutz. My research focused on the adversarial and out-of-distribution robustness of quantized neural networks. I also worked on the influence of batch normalization on the vulnerability and generalization capabilities of neural networks.

news

Jan '26 Our paper When Flatness Does (Not) Guarantee Adversarial Robustness got accepted to ICLR 26!
Oct '25 Our work on learning rule list classifiers in a fully differentiable manner has been accepted at NeurIPS 2025. 🎉
Sep '25 I started my research internship at Stanford University with Jure Leskovec.
Apr '25 I was invited to give a talk at the Institute for Artificial Intelligence in Medicine in Essen. The slides are available here.
Mar '25 Preprint of our paper Now you see me! A framework for obtaining class-relevant saliency maps is available on arXiv.
Nov '24 I was invited to give a talk at the Efficient Machine Learning Reading Group. The slides are available here and a recording can be found on YouTube.
Nov '24 A new preprint of our work on learning rule list classifiers in a fully differentiable manner is available on arXiv.
Sep '24 I gave a talk at the Gutenberg Workshop on AI for Scientific Discovery. The slides are available here.
Jun '24 Happy to announce that our paper Learning Exceptional Subgroups by End-to-End Maximizing KL-divergence received a spotlight at ICML 24! 🎉
May '24 Preprint of our paper The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective is available on arXiv.

publications

2026

  1. ICLR
    When Flatness Does (Not) Guarantee Adversarial Robustness
    Nils Philipp Walter, Linara Adilova, Jilles Vreeken, and Michael Kamp
    In The Fourteenth International Conference on Learning Representations, 2026

2025

  1. NeurIPS
    Neural Rule Lists: Learning Discretizations, Rules, and Order in One Go
    Sascha Xu, Nils Philipp Walter, and Jilles Vreeken
    In The Thirty-Ninth Annual Conference on Neural Information Processing Systems, 2025
  2. arXiv
    Soft Instruction De-escalation Defense
    Nils Philipp Walter, Chawin Sitawarin, Jamie Hayes, David Stutz, and 1 more author
    arXiv preprint arXiv:2510.21057, 2025
  3. arXiv
    Hidden in Plain Sight – Class Competition Focuses Attribution Maps
    Nils Philipp Walter, Jilles Vreeken, and Jonas Fischer
    arXiv preprint arXiv:2503.07346, 2025

2024

  1. arXiv
    The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective
    Nils Philipp Walter, Linara Adilova, Jilles Vreeken, and Michael Kamp
    arXiv preprint arXiv:2405.16918, 2024
  2. ICML
    Learning Exceptional Subgroups by End-to-End Maximizing KL-divergence
    Sascha Xu, Nils Philipp Walter, Janis Kalofolias, and Jilles Vreeken
    In Proceedings of the International Conference on Machine Learning (ICML), 2024
  3. AAAI
    Finding Interpretable Class-Specific Patterns through Efficient Neural Search
    Nils Philipp Walter, Jonas Fischer, and Jilles Vreeken
    In Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence, 2024

2022

  1. CVPR
    On Fragile Features and Batch Normalization in Adversarial Training
    Nils Philipp Walter, David Stutz, and Bernt Schiele
    arXiv preprint arXiv:2204.12393, 2022