
International Conference Contributions

On this page, you will find the latest contributions from Computational Health Center researchers at international AI conferences:


Hyesu Lim, Jinho Choi, Jaegul Choo, Steffen Schneider
Sparse autoencoders reveal selective remapping of visual concepts during adaptation. arXiv


Rodrigo González Laiz, Tobias Schmidt, Steffen Schneider
Self-supervised contrastive learning performs non-linear system identification. arXiv


Massimo Bini, Leander Girrbach, Zeynep Akata
Decoupling Angles and Strength in Low-rank Adaptation. arXiv


Théo Uscidda, Luca Eyring, Karsten Roth, Fabian J Theis, Zeynep Akata, Marco Cuturi
Disentangled Representation Learning with the Gromov-Monge Gap. arXiv


Shuchen Wu, Mirko Thalmann, Peter Dayan, Zeynep Akata, Eric Schulz
Building, Reusing, and Generalizing Abstract Representations from Concrete Sequences. arXiv


Can Demircan, Tankred Saanum, Akshay K. Jagadish, Marcel Binz, Eric Schulz
Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models. arXiv


Alex Kipnis, Konstantinos Voudouris, Luca M. Schulze Buschoff, Eric Schulz
metabench -- A Sparse Benchmark to Measure General Ability in Large Language Models. arXiv


Georg Manten, Cecilia Casolo, Emilio Ferrucci, Søren Wengel Mogensen, Cristopher Salvi, Niki Kilbertus
Signature Kernel Conditional Independence Tests in Causal Discovery for Stochastic Processes. arXiv


Tristan Cinquin, Stanley Lo, Felix Strieth-Kalthoff, Alan Aspuru-Guzik, Geoff Pleiss, Robert Bamler, Tim G. J. Rudner, Vincent Fortuin, Agustinus Kristiadi
What Actually Matters for Materials Discovery: Pitfalls and Recommendations in Bayesian Optimization. OpenReview


Yasin Esfandiari, Stefan Bauer, Sebastian Stich, Andrea Dittadi
Sample Quality-Likelihood trade-off in Diffusion Models. OpenReview


Amir Mohammad Karimi Mamaghan, Samuele Papa, Karl H. Johansson, Stefan Bauer, Andrea Dittadi
Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models. arXiv


Alessandro Palma, Till Richter, Hanyi Zhang, Manuel Lubetzki, Alexander Tong, Andrea Dittadi, Fabian Theis
Multi-Modal and Multi-Attribute Generation of Single Cells with CFGen. arXiv


Jonas Schweisthal, Dennis Frauen, Maresa Schröder, Konstantin Hess, Niki Kilbertus, Stefan Feuerriegel
Learning Representations of Instruments for Partial Identification of Treatment Effects. arXiv


Ferdinand Kapl, Amir Mohammad Karimi Mamaghan, Max Horn, Carsten Marr, Stefan Bauer, Andrea Dittadi
Object-Centric Representations Generalize Better Compositionally with Less Compute. OpenReview


Kristina Ulicna, Rebecca Boiarsky, Eeshaan Jain, Till Richter, Giovanni Palla, Jason Hartford, Oren Kraus, Aleksandrina Goeva, Charlotte Bunne, Fabian Theis
Learning Meaningful Representations of Life (LMRL) Workshop @ ICLR 2025. OpenReview

Sanghwan Kim, Rui Xiao, Iuliana Georgescu, Stephan Alaniz, Zeynep Akata 
COSMOS: Cross-Modality Self-Distillation for Vision Language Pretraining. arXiv


Rui Xiao, Sanghwan Kim, Iuliana Georgescu, Zeynep Akata, Stephan Alaniz 
FLAIR: VLM with Fine-grained Language-informed Image Representations. arXiv


Sebastian Dziadzio, Vishaal Udandarao, Karsten Roth, Ameya Prabhu, Zeynep Akata, Samuel Albanie, Matthias Bethge 
How to Merge Your Multimodal Models Over Time? arXiv


Karsten Roth, Zeynep Akata, Dima Damen, Ivana Balazevic, Olivier J Henaff 
Context-Aware Multimodal Pretraining. arXiv


Steffen Schneider, Rodrigo González Laiz, Anastasiia Filippova, Markus Frey, Mackenzie W Mathis
Time-series attribution maps with regularized contrastive learning. arXiv


Xudong Sun, Nutan Chen, Alexej Gossmann, Yu Xing, Matteo Wohlrapp, Emilio Dorigatti, Carla Feistner, Felix Drost, Daniele Scarcella, Lisa Helen Beer, Carsten Marr
Multi-objective Hierarchical Feedback Optimization of Penalty Multiplier for Domain Invariant Auto-encoding. arXiv


Małgorzata Łazęcka, Ewa Szczurek
Factor Analysis with Correlated Topic Model for Multi-Modal Data. OpenReview

Yang An, Felix Drost, Adrian Straub, Annalisa Marsico, Dirk Busch, Benjamin Schubert
TCRGenesis: Generation of SIINFEKL-specific T-cell receptor sequences using autoregressive Transformer. OpenReview

Massimo Bini, Karsten Roth, Zeynep Akata, Anna Khoreva
ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections. arXiv


Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Rätsch, Vincent Fortuin
Improving Neural Additive Models with Bayesian Principles. arXiv


Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A Osborne, Tim G. J. Rudner
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI. arXiv


Julian Coda-Forno, Marcel Binz, Jane X. Wang, Eric Schulz
CogBench: A large language model walks into a psychology lab. arXiv


Johannes A. Schubert, Akshay K. Jagadish, Marcel Binz, Eric Schulz
In-context learning agents are asymmetric belief updaters. arXiv


Akshay K. Jagadish, Julian Coda-Forno, Mirko Thalmann, Eric Schulz, Marcel Binz
Ecologically rational meta-learned inference explains human category learning. arXiv


Theodore Papamarkou, Tolga Birdal, Michael M. Bronstein, Gunnar E. Carlsson, Justin Curry, Yue Gao, Mustafa Hajij, Roland Kwitt, Pietro Lio, Paolo Di Lorenzo, Vasileios Maroulas, Nina Miolane, Farzana Nasrin, Karthikeyan Natesan Ramamurthy, Bastian Rieck, Simone Scardapane, Michael T Schaub, Petar Veličković, Bei Wang, Yusu Wang
Position: Topological Deep Learning is the New Frontier for Relational Learning. arXiv


Jeremy Wayland, Corinna Coupette, Bastian Rieck
Mapping the Multiverse of Latent Representations. arXiv


Georgios Kaissis, Stefan Kolek, Borja Balle, Jamie Hayes, Daniel Rueckert
Beyond the Calibration Point: Mechanism Comparison in Differential Privacy. arXiv



Dominik Klein, Théo Uscidda, Fabian Theis, Marco Cuturi
GENOT: Entropic (Gromov) Wasserstein Flow Matching with Applications to Single-Cell Genomics. arXiv


Artur Szałata, Andrew Benz, Robrecht Cannoodt, Mauricio Cortes, Jason Fong, Sunil Kuppasani, Richard Lieberman, Tianyu Liu, Javier Mas-Rosario, Rico Meinl, Jalil Nourisa, Jared Tumiel, Tin M. Tunjic, Mengbo Wang, Noah Weber, Hongyu Zhao, Benedict Anchang, Fabian Theis, Malte Luecken, Daniel Burkhardt
A Benchmark for Prediction of Transcriptomic Responses to Chemical Perturbations Across Cell Types.


Sirine Ayadi, Leon Hetzel, Johanna Sommer, Fabian Theis, Stephan Günnemann
Unified Guidance for Geometry-Conditioned Molecular Generation. arXiv


Tristan Cinquin, Marvin Pförtner, Vincent Fortuin, Philipp Hennig, Robert Bamler
FSP-Laplace: Function-Space Priors for the Laplace Approximation in Bayesian Deep Learning. arXiv


Rayen Dhahri, Alexander Immer, Bertrand Charpentier, Stephan Günnemann, Vincent Fortuin
Shaving Weights with Occam’s Razor: Bayesian Sparsification for Neural Networks using the Marginal Likelihood. arXiv


Karsten Roth, Vishaal Udandarao, Sebastian Dziadzio, Ameya Prabhu, Mehdi Cherti, Oriol Vinyals, Olivier Henaff, Samuel Albanie, Matthias Bethge, Zeynep Akata
A Practitioner’s Guide to Continual Multimodal Pretraining. arXiv


Luca Eyring, Shyamgopal Karthik, Karsten Roth, Alexey Dosovitskiy, Zeynep Akata
ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization. arXiv


Elisabeth Ailer, Niclas Dern, Jason Hartford, Niki Kilbertus
Targeted Sequential Indirect Experiment Design. arXiv


Yashas Annadani, Panagiotis Tigas, Stefan Bauer, Adam Foster
Amortized Active Causal Induction with Deep Reinforcement Learning. arXiv


Christina Bukas, Harshavardhan Subramanian, Fenja See, Carina Steinchen, Ivan Ezhov, Gowtham Boosarpu, Sara Asgharpour, Gerald Burgstaller, Mareike Lehmann, Florian Kofler, Marie Piraud
MultiOrg: A Multi-rater Organoid-detection Dataset. arXiv


Can Demircan, Tankred Saanum, Leonardo Pettini, Marcel Binz, Blazej Baczkowski, Christian Doeller, Mona Garvert, Eric Schulz
Evaluating Alignment Between Humans and Neural Network Representations in Image Based Learning Tasks. arXiv


Thomas Altstidl, David Dobre, Arthur Kosmala, Bjoern Eskofier, Gauthier Gidel, Leo Schwinn
On the Scalability of Certified Adversarial Robustness with Generated Data.


Katharina Limbeck, Rayna Andreeva, Rik Sarkar, Bastian Rieck
Metric Space Magnitude for Evaluating the Diversity of Latent Representations. arXiv

