Tobias Leemann

Position

Visiting Researcher

E-mail

tobias.leemann@tum.de

Address

Room B.359 (3rd Floor)
Richard-Wagner-Straße 1
80333 München

Office hours

By appointment


Short CV and Research Interests

I am currently pursuing a PhD at the University of Tübingen in Germany under the guidance of Prof. Dr. Gjergji Kasneci. Since June 2023, I have also been a visiting researcher at the newly established Chair for Responsible Data Science (RDS).

I obtained my Bachelor's and Master's degrees from the University of Erlangen-Nuremberg (FAU), where I had the opportunity to spend an enriching exchange semester in Montréal, Canada. During my Master's program, I delved into the intriguing field of Traffic Behavior Prediction for autonomous driving while conducting research for my thesis at AUDI AG.

Throughout my graduate studies, my primary research focus has been on advancing eXplainable Artificial Intelligence (XAI) and privacy-preserving machine learning. My goal is to develop explainable and private machine learning systems that are not only rooted in solid theory but also genuinely beneficial for end-users. Moreover, I hold a profound interest in the broader domain of trustworthy ML, encompassing critical aspects such as fairness and compliance with regulatory frameworks.

I am currently on leave while working as an Applied Science intern at Amazon Web Services (AWS) in New York City, United States. 

Invited Talks

  • "Objects, Colors, and Shapes: Constructing and Evaluating Conceptual Explanations": BIFOLD Summer School 2022: Ethics in Machine Learning & Data Management, Berlin, Germany, 2022
  • "When are Post-hoc Conceptual Explanations Identifiable?": 2nd Nice Workshop on Interpretability, Nice, France, 2023
  • "Rethinking Tabular Data Inference, Generation and Privacy in the Age of Foundation Models" at IBM Research, Yorktown Heights, USA (virtual), 2024


Selected Publications

The Language of Trauma: Modeling Traumatic Event Descriptions Across Domains with Explainable AI by Miriam Schirmer, Tobias Leemann, Gjergji Kasneci, Jürgen Pfeffer, and David Jurgens. Findings of EMNLP 2024 (Accepted).

I Prefer not to Say: Protecting User Consent in Models with Optional Personal Data by Tobias Leemann*, Martin Pawelczyk*, Christian Thomas Eberle, and Gjergji Kasneci. AAAI Conference on Artificial Intelligence (AAAI-24), 2024.

Gaussian Membership Inference Privacy by Tobias Leemann*, Martin Pawelczyk*, and Gjergji Kasneci. Advances in Neural Information Processing Systems (NeurIPS), 2023.

When are Post-hoc Conceptual Explanations Identifiable? by Tobias Leemann*, Michael Kirchhof*, Yao Rong, Enkelejda Kasneci, and Gjergji Kasneci. Conference on Uncertainty in Artificial Intelligence (UAI), 2023.

Language Models are Realistic Tabular Data Generators by Vadim Borisov, Kathrin Seßler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. International Conference on Learning Representations (ICLR), 2023.

On the Trade-Off between Actionable Explanations and the Right to be Forgotten by Martin Pawelczyk, Tobias Leemann, Asia Biega, and Gjergji Kasneci. International Conference on Learning Representations (ICLR), 2023.

Deep Neural Networks and Tabular Data: A Survey by Vadim Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, and Gjergji Kasneci. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022.

A Consistent and Efficient Evaluation Strategy for Attribution Methods by Yao Rong*, Tobias Leemann*, Vadim Borisov, Gjergji Kasneci, and Enkelejda Kasneci. International Conference on Machine Learning (ICML), 2022.

Please see Google Scholar for a full list of publications. An asterisk (*) denotes shared first authorship.