Albarqouni group

Deep federated learning in healthcare

Learning to distill and share knowledge among AI agents

Federated learning (FL) has recently been introduced to enable training deep learning (DL) models or AI agents without sharing the data. In other words, AI agents at local hubs, e.g. hospitals, are trained on their own data and only share the trained parameters with a centralized AI model or other AI agents. Leveraging such a massive amount of data in a privacy-preserving fashion, adhering to the General Data Protection Regulation (GDPR), would have a great impact on medical diagnosis, outbreak detection, and other healthcare services.
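The parameter-sharing scheme described above can be sketched in a few lines. The following is a minimal, illustrative example (not the group's actual code) of federated averaging: each client takes a local training step on its own data, and only the updated parameters, weighted by local dataset size, are averaged at the server. A single gradient step on a least-squares objective stands in for full local training of a deep model.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One round of local training at a hub (e.g. a hospital):
    a single gradient step on a least-squares objective."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_averaging(global_weights, client_data, rounds=10):
    """Clients train on their own data; only the updated parameters
    (never the data) are sent back and averaged, weighted by the
    size of each client's local dataset."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data in client_data:
            updates.append(local_update(global_weights.copy(), data))
            sizes.append(len(data[1]))
        global_weights = np.average(updates, axis=0,
                                    weights=np.array(sizes, dtype=float))
    return global_weights
```

In this sketch the raw data `(X, y)` never leaves the client; only the weight vectors are communicated, which is the privacy-preserving property the paragraph above refers to.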

Yet principal challenges remain to be overcome, concerning the nature of medical data: data heterogeneity, namely severe class imbalance, scarce annotated data, inter-/intra-scanner variability (domain shift), and inter-/intra-observer variability (noisy annotations); system heterogeneity; and explainability and robustness.

The mission of this Helmholtz AI young investigator group is to develop novel algorithms for a groundbreaking new generation of deep federated learning, which can learn to reCognize, AdapT, lEarn, Reason and exPlain, dIstiLl the knowledge and coLlAboRate with other AI agents (CATERPILLAR) in a robust and privacy-preserving fashion, to provide personalized healthcare services.

Visit Shadi Albarqouni's personal website




Research lines

  • Medical Imaging with Deep Learning: We will continue our research directions to develop fully automated, highly accurate solutions that save expert labor and effort, and mitigate the challenges in medical imaging, i.e. i) the scarcity of annotated data, ii) low inter-/intra-observer agreement, iii) high class imbalance, iv) inter-/intra-scanner variability and v) domain shift. Our research portfolio can be categorized into Learn to Recognize, Adapt, Learn, Reason and Explain, and Incorporate Prior Knowledge.
  • Federated Learning in Healthcare: We will focus our research on developing innovative deep federated learning algorithms that can distill and share knowledge among AI agents in a robust and privacy-preserving fashion. Research topics include, but are not limited to, i) handling distributed DL models with data heterogeneity, including non-i.i.d. data and domain shifts, ii) developing explainability and quality-control tools for distributed models, and iii) robustness to model poisoning.
  • Affordable AI and Healthcare: In addition, we are also interested in developing affordable AI solutions suitable for poor-quality data generated by low-resource infrastructure and point-of-care diagnostics.
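To make the data-heterogeneity challenge in the second research line concrete: a common way to simulate non-i.i.d. clients in federated learning experiments is a Dirichlet label split. The sketch below is a hypothetical illustration (names and parameters are our own, not the group's code): a small concentration parameter `alpha` yields highly skewed label distributions across clients, while a large `alpha` approaches a uniform (i.i.d.) split.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    """Split sample indices across clients so that each class is
    distributed according to a Dirichlet(alpha) draw: small alpha
    gives skewed, non-i.i.d. clients; large alpha near-uniform ones."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # fraction of this class assigned to each client
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients
```

Partitions generated this way are a standard benchmark setting for evaluating how distributed models cope with domain shift and skewed local data.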

Selected publications

A full list of publications can be found here


Community engagement

Coming soon.