Interview

FUTURE-AI: Making AI in Healthcare Trustworthy!

The project FUTURE-AI aims to bridge the gap between AI research and clinical adoption in healthcare. It provides guidelines for developing trustworthy AI tools, built on six guiding principles and 30 best practices. 

Helmholtz Munich experts Prof. Julia Schnabel and Dr. Georgios Kaissis work in a team of 117 experts from 50 countries on this international project. In this interview, Julia Schnabel describes the necessity and goal of the project.

"The FUTURE-AI guidelines will by design result in trustworthy, transparent and deployable medical AI tools, thereby providing a competitive advantage for regulatory approval."

Prof. Dr. Julia Anne Schnabel

Director of the Institute of Machine Learning in Biomedical Imaging

You’re pointing out that the FUTURE-AI guidelines will result in trustworthy, transparent and deployable medical AI tools. Can you tell us why, and what the consortium’s main goal is?

JS: The FUTURE-AI consortium unites 117 experts from 50 countries across all continents. Its aim is to define international guidelines for trustworthy healthcare AI in a structured and holistic manner. 

What are the biggest challenges in making AI trustworthy for healthcare?

JS: There has been a significant amount of research in AI for healthcare in recent years, but only a limited number of AI tools have made the transition to clinical practice. Major and persistent clinical, technical, socio-ethical and legal challenges remain, rooted in limited trust and ethical concerns. These concerns relate to potential errors resulting in patient harm, biases that increase health inequalities, a lack of transparency and accountability, and data privacy and security breaches.

What are the guiding principles of FUTURE-AI?

JS: FUTURE-AI offers a risk-informed framework that helps address some of these challenges and concerns by focusing on six guiding principles:

Fairness, Universality, Traceability, Usability, Robustness, Explainability.

This is what FUTURE-AI stands for. Our recommendations include continuous risk assessment and mitigation, addressing biases, data variations, and evolving challenges across the dynamic AI lifecycle – for example during the design, development, evaluation, and deployment phases.

“Novel and affordable solutions should empower clinics to make more accurate, fast and reliable decisions for early detection, treatment planning and improved patient outcome.”
Prof. Julia Schnabel, Director of the Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich

Prof. Julia Schnabel is Director of the Institute of Machine Learning in Biomedical Imaging at Helmholtz Munich and Professor of Computational Imaging and AI in Medicine at the Technical University of Munich, with a secondary appointment as Chair in Computational Imaging at King’s College London.

She holds degrees in Computer Science from TU Berlin (MSc equivalent, 1993) and University College London (PhD, 1998). After positions at the University of Oxford (2007–2015) and King’s College London (2015–2021), she joined Helmholtz Munich and TUM in 2021.

What is the key to developing and implementing FUTURE-AI?

JS: The key is to involve stakeholders early in the development phase of the medical AI lifecycle, and to build trust among patients and the public. The FUTURE-AI paper is an important first step: it offers a dynamic, living framework with insights for regulating medical AI, even though these recommendations have yet to be fully incorporated into regulatory procedures. Still, early adoption of the FUTURE-AI guidelines will by design result in trustworthy, transparent, and deployable medical AI tools, thereby providing a competitive advantage for regulatory approval.

How did international and interdisciplinary perspectives contribute to the FUTURE-AI guiding principles?

JS: The international – and highly interdisciplinary – collaboration resulted in a consensus guideline of many voices across different countries, effectively bringing together different disciplines (data science, medical research, clinical medicine, computer engineering, medical ethics, social sciences) and healthcare data domains (radiology, genomics, mobile health, electronic health records, surgery, pathology). Through a modified, highly iterative Delphi survey approach conducted over a period of 24 months, the six guiding principles (FUTURE-AI) and a set of recommendations were developed and refined until consensus was reached (less than 5% disagreement between experts). We thus adhered to our own proposed guiding principles – notably fairness, transparency, and robustness – in bringing such a diverse set of international experts together in our consortium and in seeking their consensus view.

In which real-world healthcare settings are the six guiding principles already contributing to reliable AI?

JS: Not all six guiding principles may be needed for a specific medical AI tool. As an example, the Fairness principle guides a medical AI development team in identifying potential sources of bias early in the development phase, to ensure that their medical AI tool performs equally well across individuals or groups of individuals – including under-represented and disadvantaged groups. These biases may be due to individual patient attributes such as age, gender, or ethnicity, or to differences in data origin, such as image scanners or hospital sites. It is particularly important that a medical AI tool works for the whole patient group it is intended for and does not fail on some under-represented subgroups. Fairness is in fact closely related to the Universality principle – the tool should be generalizable and not only work in a controlled training environment – as well as to the Robustness principle – the ability of a medical AI tool to maintain its performance and accuracy under expected or unexpected variations in the input data.

More About the Researchers

Find Out More About Prof. Julia Schnabel

julia.schnabel@helmholtz-munich.de

Find out more about Dr Georgios Kaissis

Dr Georgios Kaissis, MHBA is an adjunct assistant professor at the Technical University of Munich, serving as senior research scientist at the Institute of Artificial Intelligence in Medicine and as attending radiologist at the Institute of Radiology.

georgios.kaissis@helmholtz-munich.de

Latest update: March 2024.
