Lobentanzer team

ACCESSIBLE BIOMEDICAL AI RESEARCH

MAKING AI A RELIABLE COLLABORATOR FOR BIOMEDICAL SCIENCE

Sebastian Lobentanzer's Helmholtz AI young investigator group, located at Helmholtz Munich's Computational Health Center, works at the intersection of biomedical knowledge representation and trustworthy artificial intelligence. As foundation models grow more powerful, so does the challenge of understanding how they reason and whether their outputs can be trusted in high-stakes scientific contexts. The group addresses this dual problem by building AI systems that are not only capable but also transparent, well grounded, and genuinely useful to researchers in their everyday workflows.

A pharmacologist turned research software engineer, Sebastian brings a rare combination of biomedical expertise and computational depth to his work. His research focuses on disentangling causal relationships in molecular biology, developing the infrastructure needed to manage and connect biomedical knowledge at scale, and rigorously benchmarking how and why large language models behave as they do. The group maintains strong collaborative ties with Helmholtz AI, ELIXIR Germany, and the Open Targets Platform at EMBL-EBI, and co-leads the Computational Biology Unit at the German Center for Diabetes Research (DZD).

Visit Sebastian Lobentanzer's lab website →

Research lines

  • Knowledge representation and management: Biomedical data is abundant but fragmented. The group develops frameworks (including the open-source tool BioCypher) to automate the harmonisation of biomedical knowledge and make it accessible for machine learning and AI applications across research tasks.
  • Agentic systems for research: Rather than treating AI as a black box, the group designs modular, agentic systems that allow researchers to interact with AI as a genuine collaborator. This includes work on community-oriented infrastructure such as BioContextAI, enabling standardised access to validated scientific knowledge for AI pipelines.
  • Mechanistic interpretability and benchmarking: The group investigates model internals to understand how reasoning unfolds inside large language models and which components are most critical to performance. Their benchmarks go beyond accuracy, probing not just whether models work but why.
  • When not to use AI: An often-overlooked question in biomedical AI is when a large, opaque model is not the right tool. The group actively develops criteria and alternatives, advocating for thoughtful, sustainable AI practice over uncritical adoption.

Publications and projects

Join the Agentic AI community

Curious about agentic AI in scientific research? Agentic AI at Helmholtz is an open space for researchers across the Helmholtz Association to exchange ideas, share tools, and shape the conversation around autonomous AI systems in science.

Join on Mattermost