
The Ethics of AI-Generated Scientific Research – Can AI Replace Scientists?

Written by Ankita De, Gokul Bhaskaran and Nayera Nasser

This blog is part of an educational series on the role of AI in Science Communication by SciComm Made Easy. Read the previous blog, “The Role of Generative AI in Science and Sustainability Communications”.


Can AI replace Scientists? 

This is a question that brings both excitement and unease. With artificial intelligence (AI) growing at a rapid pace, it’s natural to envision a future where machines could replace researchers. Here’s the quick answer: not yet! Currently, AI excels at data-centric tasks such as processing large datasets, identifying patterns and automating routine steps (Dave & Patel, 2023). In drug discovery, for example, AI can sift through massive chemical libraries in record time. Tools like AlphaFold are revolutionizing our ability to predict protein structures, and astronomers rely on AI to process cosmic data faster than ever before. These breakthroughs are not just impressive; they’re game-changing. They enable researchers to dedicate more attention to interdisciplinary discovery and problem-solving (Paul et al., 2021).
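To make the “pattern finding” concrete, here is a toy sketch in Python. It is not taken from any of the projects above; the compounds, descriptors and labels are invented on the fly. A classifier learns from past assay results and ranks untested compounds for follow-up.

```python
# A minimal, hypothetical sketch of AI-style "pattern finding" in research data.
# Nothing here comes from a real project: the compounds, descriptors and labels
# are all synthetic, generated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is a compound described by 20 numeric descriptors,
# and the label records whether it showed activity in a past assay.
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns a pattern from past assay results...
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# ...and ranks untested compounds so researchers can prioritise follow-up work.
scores = model.predict_proba(X_test)[:, 1]
top_candidates = np.argsort(scores)[::-1][:10]
print("Compounds to test first:", top_candidates)
```

Even in this toy version, notice what the model never does: it doesn’t choose the research question, design the assay, or decide what counts as a meaningful hit. Those calls stay with the researcher.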

But here’s the catch: AI lacks something fundamental. It doesn’t possess creativity, intuition, or the ability to understand context the way humans do. Formulating hypotheses, designing experiments, and interpreting results within broader societal or ethical frameworks require human judgment (Ding & Li, 2025). AI works on existing data, which limits its ability to originate entirely novel concepts or navigate uncharted domains without human guidance. For instance, Einstein’s theory of relativity and CRISPR gene editing emerged from imaginative leaps, not data-driven algorithms (Gostimskaya, 2022). Even in recent iGEM projects, such as Munich Bioinformatics 2023’s use of hypergraphs and chemical language models to predict drug interactions, AI tools functioned within predefined parameters. They support the process, but they don’t drive it on their own (Munich Bioinformatics, 2023).

There’s also a human side to science; it’s inherently collaborative. Ethical questions, like the societal implications of gene editing or equitable access to climate solutions, require more than data. They demand empathy, cultural awareness and a shared sense of responsibility. AI can simulate outcomes, but it can’t weigh moral choices or navigate human relationships (Resnik & Elliott, 2016).

Rather than a replacement, AI acts as an aid to human researchers. It takes on repetitive tasks, accelerates discovery and helps reveal patterns hidden in data. This symbiosis is already reshaping fields like medicine, genomics and materials science. But we need to stay vigilant: biases in training data and the temptation to over-automate carry real risks (Xu et al., 2021).

AI is not the scientist, it’s the assistant. The future lies in collaboration, where human ingenuity directs AI’s analytical prowess, driving discoveries while preserving the ethical and creative core of science.

Ethical Considerations of Scientific Research in the Age of AI

As artificial intelligence becomes increasingly integrated into scientific processes, speeding up drug development, anticipating outbreaks of disease, and even cracking the code of our DNA, it has great potential. But with these abilities come intricate ethical dilemmas that require our consideration.

Private Today, Public Tomorrow?

AI, especially generative models like ChatGPT, introduces real concerns about data privacy and confidentiality. What happens when researchers input drafts of grant proposals, patient data, experimental protocols, or internal documents into these tools? Unless users actively opt out, this information could be retained and used to train future models (Eliot, 2023; Grad, 2023).

Bias Beneath the Surface

AI systems are only as unbiased as the data they learn from. Unfortunately, that data tends to carry racial, ethnic, political and gender biases. Since AI learns from this unbalanced information, it can become an extension of those biases, reinforcing or even amplifying them. The computer science maxim “garbage in, garbage out” applies here. In healthcare, these disparities can have life-or-death consequences. Biased training data have already influenced how AI systems interpret medical images or forecast disease risk. Likewise, genomic studies threaten to perpetuate historic injustices when AI ignores underrepresented groups in genomic databases (Resnik & Hosseini, 2024).
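To see how skewed inputs become skewed outputs, here is a small, fully synthetic sketch; it is not based on any real medical or genomic dataset. A classifier is trained on data dominated by one group and then evaluated on each group separately.

```python
# Entirely synthetic illustration of "garbage in, garbage out": a model trained
# on data dominated by one group performs worse for the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    """Synthetic records with 3 features; the feature-outcome link differs by group."""
    X = rng.normal(loc=shift, size=(n, 3))
    y = (X[:, 0] - shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A dominates the training set; Group B is barely represented.
X_a, y_a = make_group(4000, shift=0.0)
X_b, y_b = make_group(200, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("Group A", 0.0), ("Group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

The under-represented group gets noticeably worse predictions, not because the algorithm is malicious, but because it never saw enough of that group to learn its patterns.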

AI Said So (So It Must Be True?)

One of the most troubling aspects of AI in scientific work is its tendency to generate errors, sometimes subtly, sometimes egregiously. Even well-trained large language models (LLMs) like ChatGPT can fabricate citations, misinterpret facts, or confidently provide incorrect information (Chen et al., 2023). For example, Bhattacharyya et al. used ChatGPT 3.5 to generate 30 short papers (200 words or less) on medical topics: 47% of the references produced by the chatbot were fabricated, 46% were authentic but inaccurately used, and only 7% were correct (Bhattacharyya, Miller, Bhattacharyya, & Miller, 2023). Like bias, such errors can undermine the validity and reliability of scientific knowledge and have disastrous consequences for public health, safety and social policy (Shamoo & Resnik, 2009).
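Simple tooling can help reviewers catch the most blatant of these fabrications. As a hedged sketch, the snippet below checks whether each DOI in a reference list actually exists in Crossref’s public REST API; the DOIs shown are only examples.

```python
# A sketch of a sanity check for AI-drafted reference lists: look each DOI up
# in the public Crossref REST API (https://api.crossref.org). A DOI Crossref
# has never seen comes back as HTTP 404. The example DOIs below are illustrative.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

candidate_dois = [
    "10.1038/s41586-021-03819-2",  # a real DOI (the AlphaFold paper)
    "10.1000/entirely-made-up",    # the kind of string a chatbot might invent
]

for doi in candidate_dois:
    verdict = "found in Crossref" if doi_exists(doi) else "NOT found - flag for manual review"
    print(doi, "->", verdict)
```

Note the limits of such a check: it catches fully invented DOIs, but it cannot tell you whether a real reference is being used accurately, which, as the Bhattacharyya et al. figures show, is just as common a failure. That judgment remains with the human reader.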

Advancing Research with AI: The Need for Human Oversight

There’s no doubt that AI is transforming research by automating data analysis, identifying patterns, and optimizing experimental design. Yet, human intervention remains indispensable for interpretation, ethical considerations and creative problem-solving.

  • Machine learning models can process vast datasets at unprecedented speeds, detecting correlations that might elude human researchers (Jordan & Mitchell, 2015).

  • In fields like genomics and drug discovery, AI-driven algorithms accelerate the identification of potential therapeutic targets, while scientists validate findings through rigorous experimentation (Esteva et al., 2019).

  • Similarly, in behavioural sciences, AI-assisted image and video analysis enhances observational studies, but human oversight ensures contextual accuracy and nuanced interpretation (Mathis et al., 2018).

But AI is not a substitute for human intelligence. Scientists contribute critical thinking, ethical reasoning and creative intuition, attributes that are still far beyond the reach of AI. Researchers don’t merely examine data; they ask “why,” “what if,” and “should we?”

Human judgment is necessary when applying AI to screen scientific literature, evaluate study quality, or interpret behavioural data. Without it, even the most precise output can fall short of useful insight.
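One simple way teams build that judgment into a workflow is a triage step: the model is only allowed to decide on its own when it is very confident, and everything else is routed to a person. The sketch below is hypothetical (the papers, scores and thresholds are invented), but it shows the shape of the idea.

```python
# A hypothetical human-in-the-loop triage step: the model only auto-decides when
# it is very confident, and everything in between goes to a human reviewer.
# Titles, scores and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class ScreenedPaper:
    title: str
    relevance_score: float  # e.g. from a trained classifier, between 0 and 1

def triage(papers, include_above=0.9, exclude_below=0.1):
    """Auto-include confident hits, auto-exclude confident misses, route the rest to a human."""
    auto_include, auto_exclude, needs_human = [], [], []
    for paper in papers:
        if paper.relevance_score >= include_above:
            auto_include.append(paper)
        elif paper.relevance_score <= exclude_below:
            auto_exclude.append(paper)
        else:
            needs_human.append(paper)
    return auto_include, auto_exclude, needs_human

papers = [
    ScreenedPaper("CRISPR off-target effects in vivo", 0.95),
    ScreenedPaper("Public attitudes toward gene editing", 0.55),
    ScreenedPaper("Unrelated materials-science paper", 0.03),
]
included, excluded, for_review = triage(papers)
print(f"{len(included)} auto-included, {len(excluded)} auto-excluded, {len(for_review)} sent to a human")
```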

Above all, responsible use of AI requires human discernment. We need to continuously inquire: Are we using this technology responsibly? Are we clearly explaining its boundaries? Are we putting people first?

References:
