Interview on AI in Science: Ethical and Practical Challenges
The integration of Large Language Models (LLMs) into scientific workflows is growing. Diverse groups of scientists have reflected on and debated the question "How should the advent of large language models affect the practice of science?"
From Helmholtz Munich, Dr. Eric Schulz, Dr. Marcel Binz, Prof. Zeynep Akata, and Prof. Stephan Alaniz (all Computational Health Center) collaborated with an interdisciplinary team on this opinion piece.
The full impact of this development is not yet clear. In this interview, Dr. Marcel Binz, Postdoctoral Researcher at HCA, shares his thoughts on how LLMs could affect science.
"A shift in mindset is needed: We will spend less time gathering information and more time verifying it."
Dr. Marcel Binz
How should researchers navigate the ethical and practical implications of relying on AI for tasks such as drafting papers or interpreting data?
MB: A shift in mindset is inevitable. We may be able to spend less time gathering information, but we will need to spend more time verifying it.
What steps or policies should be put in place to help open-source models become more widely used in academic research?
MB: We cannot rely on the big companies to build these models; instead, we need to establish the right infrastructure to do it ourselves. Some good steps have already been taken in this direction. For academia, the biggest challenge, in my opinion, is to attract (and retain) the talent able to build such systems.
“Sensitive and confidential information requires the utmost care: handling it should rely on locally hosted open-source models where we have full control over the data.”
Dr. Marcel Binz (left), Helmholtz Munich, with PI Dr. Eric Schulz, Helmholtz Munich
How can we make AI-driven reviews more efficient while ensuring data security?
MB: Anything with sensitive and confidential information must rely on locally hosted open-source models where we have full control over the data.
Large Language Models are constantly evolving: How can AI remain a trustworthy tool?
MB: The main goal is to bring these technologies into education, teaching both how they work and how to use them responsibly. Students need to learn to use these models effectively while understanding their ethical implications.
Find Out More About Marcel Binz
About Marcel Binz
Contact: marcel.binz@helmholtz-munich.de
Research & News
arXiv: How should the advent of large language models affect the practice of science?
Latest update: April 2025