I am a PhD student in Machine Learning (ML) at the Max Planck Institute for Human Cognitive and Brain Sciences and TU Berlin. I decipher object representations in deep neural networks (DNNs), try to make them more human-interpretable, and work on their out-of-distribution generalizability. I am mainly advised by Martin Hebart and Klaus-Robert Müller, but frequently collaborate with ML researchers in Zurich, Copenhagen, and Washington. Previously, I was an MSc student in IT & Cognition / Computer Science, advised by Isabelle Augenstein, Johannes Bjerva, and Maria Barrett, and mainly supervised by Johannes and Isabelle.
My main interests lie in the areas of representation learning and interpretability. In particular, I am interested in limited-data regimes, where robust representations and out-of-distribution generalization both appear hard to achieve. To that end, I use approximate Bayesian methods and other approaches from probability theory. Mostly, however, I am debugging, examining model behaviour, using print statements, and contemplating why things are not working. Have a look at the projects or publications section for more information about my work and interests. Feel free to reach out if you believe our research intentions are aligned and you are keen to collaborate on a project.