
Dr. Georgios Kaissis – Reliable AI

The Reliable AI team develops next-generation trustworthy artificial intelligence algorithms for medical applications. We employ advanced deep learning techniques and work at the intersection of trustworthy and probabilistic machine learning.

Our Topics

Our group develops next-generation privacy-preserving and trustworthy artificial intelligence algorithms for medical applications.

AI in Medicine requires large, diverse, and representative datasets to train fair, generalisable and reliable models. However, such datasets contain sensitive personal information. Privacy-preserving machine learning bridges the gap between data utilisation and data protection by allowing the training of machine learning models on sensitive data while offering formal privacy guarantees. 
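
For illustration, the standard formal guarantee in this setting is (ε, δ)-differential privacy: a randomised mechanism M satisfies it if, for all datasets D and D' differing in a single record and all sets of outputs S,

    \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta

Smaller ε and δ bound more tightly how much any single record can influence the output, and thus give a stronger privacy guarantee.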

Our group focuses on applications of Differential Privacy to machine learning and deep learning, both on unstructured datasets such as images and on structured data such as tabular and graph databases. We also develop techniques for mitigating privacy-utility and privacy-performance trade-offs, and we investigate attacks against collaborative machine learning protocols (such as federated learning) and develop defences against them.
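
As an illustration of how Differential Privacy is typically applied to deep learning, the following minimal NumPy sketch performs one DP-SGD-style update: each per-example gradient is clipped to a fixed L2 norm, and Gaussian noise is added to the sum before averaging. Function names and hyperparameter values are assumptions chosen for the example, not code from our group.

    # Minimal sketch of one DP-SGD-style step (illustrative values and names).
    import numpy as np

    rng = np.random.default_rng(0)

    def dp_sgd_step(per_example_grads, params, clip_norm=1.0,
                    noise_multiplier=1.1, lr=0.1):
        # Clip each per-example gradient to L2 norm <= clip_norm.
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        summed = np.sum(clipped, axis=0)
        # Add Gaussian noise calibrated to the clipping norm, then average.
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
        noisy_mean = (summed + noise) / len(per_example_grads)
        return params - lr * noisy_mean

    # Toy usage: 8 per-example gradients for a 3-parameter model.
    grads = [rng.normal(size=3) for _ in range(8)]
    params = dp_sgd_step(grads, params=np.zeros(3))
    print(params)

The clipping bounds each individual's contribution to the update, which is what allows the added noise to translate into a formal (ε, δ) guarantee.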

Building trust in AI requires techniques to quantify the uncertainty of model outputs, to incorporate domain expertise, and to tackle training on small datasets effectively. A further focus of our group is therefore the development of probabilistic machine learning models that counteract the tendency of conventional models to produce overconfident and poorly calibrated predictions. We employ computational Bayesian techniques to train both statistical and deep learning algorithms and work at the intersection of probabilistic machine learning and privacy preservation.
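
To make "poorly calibrated" concrete: a common diagnostic is the expected calibration error (ECE), which compares a model's stated confidence with its empirical accuracy across confidence bins. The sketch below is a minimal NumPy illustration; the binning scheme and toy data are assumptions for the example, not code from our group.

    # Minimal expected-calibration-error (ECE) sketch for classifier outputs.
    import numpy as np

    def expected_calibration_error(confidences, predictions, labels, n_bins=10):
        confidences = np.asarray(confidences)
        predictions = np.asarray(predictions)
        labels = np.asarray(labels)
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.sum() == 0:
                continue
            acc = (predictions[mask] == labels[mask]).mean()
            conf = confidences[mask].mean()
            # Weight the |accuracy - confidence| gap by the bin's frequency.
            ece += mask.mean() * abs(acc - conf)
        return ece

    # Toy usage: an overconfident classifier (high confidence, mediocre accuracy).
    conf = [0.95, 0.90, 0.92, 0.88, 0.97, 0.91]
    pred = [1, 1, 0, 1, 1, 0]
    true = [1, 0, 0, 0, 1, 1]
    print(expected_calibration_error(conf, pred, true))

A well-calibrated model drives this gap towards zero: among predictions made with 90 % confidence, roughly 90 % should be correct.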


Publications


Collaboration Opportunities

We are always looking for talented group members who wish to work with us as part of their research project or thesis.

We are especially interested in collaborators with backgrounds in:

  • Applied or theoretical machine learning / deep learning
  • Cryptography
  • Signal processing and information theory
  • Pure and applied mathematics / theoretical computer science
  • Scientific/numerical computing and probabilistic programming

If you are interested in collaborating with us, please send us an email.

Contact Office

Sandra Mayer

Office Management

Building 35.33, Room 204