Dr. Dimitar Dimitrov


I work in the general area of programming languages. My past work focused mostly on program analysis for concurrent programs. During my PhD (at ETH Zürich) I explored two main topics: 1) efficient data race detection for structured-parallel programs, and 2) correctness criteria for the concurrent use of abstract data types. The first topic led to a modest generalization of Tarjan’s offline LCA algorithm to a wider class of structures, the two-dimensional lattices. The second topic led to a notion of data races based on commutativity, and to a conflict serializability criterion for eventually consistent systems. During and after my PhD (at PwC Switzerland and ChainSecurity), I investigated the analysis and verification of smart contracts.
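For context, the classical tree case of Tarjan's offline LCA (the starting point that the lattice result above generalizes) can be sketched as follows. This is a minimal illustration of the standard union-find-based algorithm, not the generalization from the thesis; the function name and input encoding are my own.

```python
# Tarjan's offline LCA: answer all (u, v) queries in near-linear time
# with one DFS over the rooted tree plus a union-find structure.

def tarjan_offline_lca(n, root, children, queries):
    """children: adjacency lists of a rooted tree on nodes 0..n-1;
    queries: list of (u, v) pairs. Returns the LCA of each pair."""
    parent = list(range(n))          # union-find parent pointers
    ancestor = [0] * n               # representative ancestor per set
    visited = [False] * n
    answers = [None] * len(queries)

    # Index each query at both of its endpoints.
    by_node = [[] for _ in range(n)]
    for qi, (u, v) in enumerate(queries):
        by_node[u].append((v, qi))
        by_node[v].append((u, qi))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def dfs(u):
        ancestor[u] = u
        for c in children[u]:
            dfs(c)
            parent[find(c)] = find(u)       # merge child subtree into u's set
            ancestor[find(u)] = u           # u represents the merged set
        visited[u] = True
        for other, qi in by_node[u]:
            if visited[other]:              # both endpoints seen: answer now
                answers[qi] = ancestor[find(other)]

    dfs(root)
    return answers
```

For example, on the tree 0 → {1, 2}, 1 → {3, 4}, the queries (3, 4) and (3, 2) resolve to 1 and 0 respectively. The key invariant is that when a query is answered, the set containing the earlier-visited endpoint is represented by the lowest common ancestor currently on the DFS stack.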


My current interests are the practical foundations of programming languages, and the intersection between programming languages and mathematical logic. I believe that the design and implementation of real-world programming languages is in dire need of a more systematic and rigorous approach, and that it can greatly benefit from the vast body of research, old and new, in programming language theory.

Awards:

  • 2020, ETH Medal for outstanding PhD thesis


Education:

  • 2020, PhD in Computer Science, ETH Zurich
  • 2012, MSc in Mathematical Logic, Sofia University
  • 2009, BSc in Informatics, Sofia University

2025

Maria Drencheva, Ivo Petrov, Maximilian Baader, Dimitar I. Dimitrov, Martin Vechev
GRAIN: Exact Graph Reconstruction from Gradients
In: International Conference on Learning Representations (ICLR 2025)

Hristo Venev, Thien Udomsrirungruang, Dimitar Dimitrov, Timon Gehr, Martin Vechev
qblaze: An Efficient and Scalable Sparse Quantum Simulator
In: OOPSLA (SPLASH 2025)

Csaba Dékány, Stefan Balauca, Robin Staab, Dimitar I. Dimitrov, Martin Vechev
MixAT: Combining Continuous and Discrete Adversarial Training for LLMs
In: Conference on Neural Information Processing Systems (NeurIPS 2025)

Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
SPEAR++: Scaling Gradient Inversion via Sparse Dictionary Learning
In: NeurIPS 2025 (Workshop)

2024

Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev
Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning
In: International Conference on Learning Representations (ICLR 2024)

Hristo Venev, Timon Gehr, Dimitar Dimitrov, Martin Vechev
Modular Synthesis of Efficient Quantum Uncomputation
In: Proceedings of the ACM on Programming Languages, Volume 8, Issue OOPSLA2, 2024

Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
SPEAR: Exact Gradient Inversion of Batches in Federated Learning
In: Conference on Neural Information Processing Systems (NeurIPS 2024)

Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
DAGER: Exact Gradient Inversion for Large Language Models
In: Conference on Neural Information Processing Systems (NeurIPS 2024)