Interview FUTURE-AI: Making AI in Healthcare Trustworthy!
The project FUTURE-AI aims to bridge the gap between AI research and clinical adoption in healthcare. It provides guidelines for developing trustworthy AI tools, built on six guiding principles and 30 best practices.
Helmholtz Munich experts Prof. Julia Schnabel and Dr. Georgios Kaissis work in a team of 117 experts from 50 countries on this international project. In this interview, Julia Schnabel describes the necessity and goal of the project.
"Novel and affordable solutions should empower clinics to make more accurate, fast and reliable decisions for early detection, treatment planning and improved patient outcome."
Prof. Julia Schnabel, Director of the Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich
What is the aim of the FUTURE-AI Consortium?
JS: The FUTURE-AI consortium unites 117 experts from 50 countries across all continents. Our aim is to define international guidelines for trustworthy healthcare AI in a structured and holistic manner.
What are the biggest challenges in making AI trustworthy for healthcare?
JS: There has been a significant amount of research in AI for healthcare in recent years, but only a limited number of AI tools have made the transition to clinical practice. This is due to major and persistent clinical, technical, socio-ethical and legal challenges, rooted in limited trust and ethical concerns. These concerns relate to potential errors resulting in patient harm, biases and increased health inequalities, lack of transparency and accountability, as well as data privacy and security breaches.
What are the guiding principles of FUTURE-AI?
JS: FUTURE-AI offers a risk-informed framework that helps address these challenges and concerns by focusing on six guiding principles:
Fairness, Universality, Traceability, Usability, Robustness, Explainability.
This is what FUTURE-AI stands for. Our recommendations include continuous risk assessment and mitigation, addressing biases, data variations, and evolving challenges across the phases of the dynamic AI lifecycle: design, development, evaluation, and deployment.
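As a rough illustration of what such continuous, phase-by-phase risk management can look like in practice, here is a minimal sketch of a lifecycle risk register; the specific risks and mitigations listed are hypothetical examples, not FUTURE-AI recommendations.

```python
# Minimal sketch of a risk register spanning the AI lifecycle phases named
# above. The phases come from the interview; the risks and mitigations
# are hypothetical examples for illustration only.
risk_register = {
    "design":      [("selection bias in cohort definition", "engage clinicians early")],
    "development": [("scanner-specific shortcuts learned",  "multi-site training data")],
    "evaluation":  [("performance gap in subgroups",        "stratified test reporting")],
    "deployment":  [("data drift after rollout",            "continuous monitoring")],
}

for phase, risks in risk_register.items():
    for risk, mitigation in risks:
        print(f"{phase}: {risk} -> mitigate via {mitigation}")
```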
"The FUTURE-AI guidelines will by design result in trustworthy, transparent and deployable medical AI tools, thereby providing a competitive advantage for regulatory approval."
Prof. Julia Schnabel
What is the key to developing and implementing FUTURE-AI?
JS: The key is to involve stakeholders early in the development phase of the medical AI lifecycle, and to build trust in the patient community and the public. The FUTURE-AI paper is an important first step: a dynamic, living framework that offers insights for regulating medical AI, even though these recommendations have yet to be fully incorporated into regulatory procedures. Still, early adoption of the FUTURE-AI guidelines will by design result in trustworthy, transparent, and deployable medical AI tools, thereby providing a competitive advantage for regulatory approval.
How did international and interdisciplinary perspectives contribute to the FUTURE-AI guiding principles?
JS: The international – and highly interdisciplinary – collaboration resulted in an international consensus guideline of many voices across different countries, effectively bringing together the different disciplines (data science, medical research, clinical medicine, computer engineering, medical ethics, social sciences) and healthcare data domains (radiology, genomics, mobile health, electronic health records, surgery, pathology). Through a modified, highly iterative Delphi survey approach conducted over a period of 24 months, the six guiding principles (FUTURE-AI) and a set of recommendations were developed and refined before reaching consensus (less than 5% disagreement between experts). We thus adhered to our own proposed guiding principles, notably fairness, transparency, and robustness, in bringing such a diverse set of international experts together in our consortium and in seeking their consensus view.
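To make the consensus criterion concrete, the sketch below checks a per-recommendation disagreement rate against the 5% threshold. The vote counts, recommendation names, and re-survey logic are hypothetical, not the consortium's actual survey tooling.

```python
# Minimal sketch of the <5% disagreement criterion used in a Delphi-style
# survey. The votes and recommendation names are hypothetical.

CONSENSUS_THRESHOLD = 0.05  # consensus reached if fewer than 5% of experts disagree

def disagreement_rate(votes: list[str]) -> float:
    """Fraction of experts voting 'disagree' on one recommendation."""
    return votes.count("disagree") / len(votes)

# Hypothetical votes from a 117-expert panel on two draft recommendations.
survey_round = {
    "continuous risk assessment": ["agree"] * 114 + ["disagree"] * 3,
    "bias audit at design time":  ["agree"] * 109 + ["disagree"] * 8,
}

for recommendation, votes in survey_round.items():
    rate = disagreement_rate(votes)
    status = "consensus" if rate < CONSENSUS_THRESHOLD else "revise and re-survey"
    print(f"{recommendation}: {rate:.1%} disagreement -> {status}")
```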
"The FUTURE-AI framework provides the necessary dynamic flexibility and some more general recommendations, such as for early stakeholder engagement, data protection, risk management, evaluation planning, regulatory compliance, and in dealing with ethical, social and environmental issues."
Prof. Julia Schnabel
In which real-world healthcare settings are the six guiding principles already contributing to a reliable AI?
JS: Not all six guiding principles may be needed for a specific medical AI tool. As an example, the Fairness principle guides a medical AI development team in identifying potential sources of bias early in the development phase, to ensure the same performance of their medical AI tool across individuals or groups of individuals – including under-represented and disadvantaged groups. These biases may stem from individual patient attributes such as age, gender, or ethnicity, or from differences in data origin, such as image scanners or hospital sites. It is particularly important that a medical AI tool works for the whole patient group it is intended for and does not fail on underrepresented subgroups. Fairness is in fact closely related to the Universality principle – the tool should be generalizable and not only work in a controlled training environment – as well as to the Robustness principle – the ability of a medical AI tool to maintain its performance and accuracy under expected or unexpected variations in the input data.
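To illustrate how the Fairness principle can be operationalized, below is a minimal sketch of a subgroup performance audit. The attributes, records, flagging logic, and accuracy metric are hypothetical placeholders; a real audit would use clinically appropriate metrics and far larger cohorts.

```python
# Minimal sketch of a subgroup performance audit in the spirit of the
# Fairness principle. Attributes and data are hypothetical.
from collections import defaultdict

def per_subgroup_accuracy(records, attribute):
    """Accuracy of model predictions, stratified by one patient attribute."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        group = rec[attribute]
        totals[group] += 1
        hits[group] += int(rec["prediction"] == rec["label"])
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation records: model prediction, ground truth, and
# attributes such as sex and the scanner the image came from.
records = [
    {"prediction": 1, "label": 1, "sex": "F", "scanner": "vendor_A"},
    {"prediction": 0, "label": 0, "sex": "F", "scanner": "vendor_B"},
    {"prediction": 1, "label": 0, "sex": "M", "scanner": "vendor_B"},
    {"prediction": 1, "label": 1, "sex": "M", "scanner": "vendor_A"},
]

for attribute in ("sex", "scanner"):
    scores = per_subgroup_accuracy(records, attribute)
    gap = max(scores.values()) - min(scores.values())
    print(attribute, scores, f"gap={gap:.2f}")  # large gaps warrant review
```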
"Using advanced deep learning techniques, we work at the intersection of privacy-preserving artificial intelligence and AI safety. My team and I are developing the next generation of AI algorithms with a focus on data privacy, robustness, and safety for medical applications.”
Dr. Georgios Kaissis, Group Leader of the Research Group “Reliable AI”, Helmholtz Munich
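One canonical technique in privacy-preserving machine learning is differentially private SGD, which bounds each patient's influence on training by clipping per-example gradients and adding calibrated noise. The sketch below shows that clip-and-noise step only; all parameters and data are illustrative and not drawn from the Reliable AI group's actual implementations.

```python
# Minimal sketch of the per-example clip-and-noise step at the heart of
# differentially private SGD (Abadi et al., 2016). All values here are
# illustrative, not taken from the Reliable AI group's work.
import numpy as np

def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Clip each example's gradient, average, then add Gaussian noise."""
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Bound any single patient's influence on the update.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std scales as sigma * C / batch_size for the averaged gradient.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return mean_grad + noise  # noisy update limits what leaks about any one patient

# Hypothetical per-example gradients for a batch of three patients.
grads = [np.array([0.5, -2.0]), np.array([3.0, 1.0]), np.array([-0.2, 0.4])]
print(private_gradient(grads))
```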
Find Out More About Prof. Julia Schnabel, Dr. Georgios Kaissis & Related Research
Prof. Julia Schnabel is Director of the Institute of Machine Learning in Biomedical Imaging at Helmholtz Munich.
Contact: julia.schnabel@helmholtz-munich.de
Dr. Georgios Kaissis is the Group Leader of the Research Group “Reliable AI” at Helmholtz Munich.
Contact: georgios.kaissis@helmholtz-munich.de
Research & News:
- FUTURE-AI: International Consensus Guideline for Trustworthy and Deployable Artificial Intelligence in Healthcare
- Julia Schnabel to lead new Institute of Machine Learning in Biomedical Imaging
- Podcast: AI-Ready Healthcare
- Responsible Use of Artificial Intelligence in Medicine: New Ad Hoc Working Group of BAdW in Cooperation With Helmholtz Munich Launched
Latest update: March 2024.