Ahmad Dawar Hakimi

ELLIS PhD Student
CIS @ LMU Munich · CopeNLU

Pushing the boundaries of Natural Language Processing through interpretability research, active learning, and large language model analysis. Co-supervised by Hinrich Schütze and Isabelle Augenstein.

Interpretability · Active Learning · LLMs · Summarization

Research Focus

🔍 Interpretability

Uncovering how neural networks process and represent language, making AI systems more transparent and trustworthy through mechanistic interpretability.

🎯 Active Learning

Developing efficient methods for training models with minimal labeled data, reducing annotation costs while maintaining strong performance.

🤖 Large Language Models

Analyzing and understanding the behavior of large-scale language models, exploring their capabilities and limitations.

📝 Summarization

Creating intelligent systems for condensing scientific papers and documents while preserving critical information and context.

Selected Publications

Time Course MechInterp: Analyzing the Evolution of Components and Knowledge in Large Language Models

Ahmad Dawar Hakimi, Ali Modarressi, Philipp Wicke, Hinrich Schütze
CoRR, 2025

On Relation-Specific Neurons in Large Language Models

Yihong Liu, Runsheng Chen, Lea Hirlimann, Ahmad Dawar Hakimi, et al.
CoRR, 2025

BlackboxNLP-2025 MIB Shared Task: Exploring Ensemble Strategies for Circuit Localization Methods

Philipp Mondorf, Mingyang Wang, Sebastian Gerstner, Ahmad Dawar Hakimi, et al.
CoRR, 2025

Citance-Contextualized Summarization of Scientific Papers

Ahmad Dawar Hakimi, Shahbaz Syed, Khalid Al Khatib, Martin Potthast
Findings of EMNLP, 2023

Latest Blog Posts

🧠
Dec 20, 2024 • Interpretability

Understanding Neural Circuits: A Deep Dive

How do we identify which parts of a neural network are responsible for specific behaviors? Exploring mechanistic interpretability techniques...

Read More →
🎯
Nov 15, 2024 • Active Learning

Efficient Training with Active Learning

Why label millions of examples when you can achieve similar performance with thousands? A practical guide to active learning strategies...

Read More →
🤖
Oct 28, 2024 • LLMs

What Do Language Models Really Know?

Investigating how large language models store and retrieve factual knowledge. Surprising findings about relation-specific neurons...

Read More →
📚
Sep 10, 2024 • Research Life

Lessons from My First Year as a PhD Student

Reflections on navigating research challenges, finding your niche, and balancing work with life as an ELLIS PhD student...

Read More →

Let's Connect

Interested in collaborating on interpretability or active learning? Let's push the boundaries of NLP together.