Welcome!

This blog is where we share stories, announcements, and insights from around the iGEM community.

The Role of Generative AI in Science and Sustainability Communications

Written by: Félicités Rapon, Mounia Boudjedri, Saleem Ullah
Edited by: Dewuni De Silva, Gokul Bhaskaran


Generative AI is increasingly recognized for its ability to distill specialized scientific knowledge into accessible narratives, potentially accelerating public engagement on topics ranging from climate policy to vaccine research. Recent advances in large language models (LLMs) suggest that AI-generated summaries can help simplify complex scientific research for broader audiences.

Current evidence suggests that generative AI can be strategically integrated into science communication and sustainability efforts, creating opportunities for heightened public participation and engagement (Humble & Mozelius, 2024; Peller, 2024; Worden & Richards, 2024). Humble and Mozelius (2024) found that AI-generated plain-language summaries were not only more comprehensible but also perceived more favorably than traditionally written abstracts. Similarly, Peller (2024) reported that machine-generated lay abstracts preserved the core findings of research with minimal loss of critical detail. Worden and Richards (2024) further suggest that well-structured AI-generated content improved comprehension among non-experts and bolstered trust in environmental research.

Bridging Language Barriers and Resource Gaps in Science Communication

Language barriers and resource constraints have historically limited the dissemination of high-impact research, disproportionately affecting underrepresented regions. AI tools that translate content instantly and adapt stories to local contexts might bridge these gaps, extending scientific insights beyond academic and policy circles. In doing so, generative AI aligns with the broader goals of open science, fostering more inclusive and constructive scientific dialogue.

Despite these benefits, concerns remain regarding the accuracy of AI-generated content, particularly its ability to convey nuanced concepts that require domain-specific expertise. To address this, researchers advocate for "traceable text" capabilities, which link AI-generated summaries directly to supporting data (Markowitz, 2024). This alignment enables readers to verify the accuracy of reinterpretations, mitigating potential oversimplifications or misrepresentations. Peer review has long been key to vetting research, and we’ll likely need similar checks to keep AI outputs accurate.

Trust in AI-driven science communication also depends on the integration of robust, frequently updated datasets, particularly in fields such as global health and environmental science, where new findings emerge rapidly (Turner et al., 2015).

The Impact of Generative AI on Sustainability Communication

Generative AI has the potential to transform complex scientific data into interactive and accessible formats. One notable example is Resource Watch, a platform developed by the World Resources Institute (WRI), which provides interactive dashboards that monitor environmental indicators. This tool aids policymakers and the public in understanding and addressing environmental challenges. Scientific terminology can often be difficult to comprehend, but visual representations enhance accessibility and understanding. Many organizations leverage AI-driven environmental awareness campaigns to create engaging video and image-based content, effectively translating complex issues into digestible formats for the public.

A key strength of generative AI lies in its ability to tailor sustainability messages for different target audiences. In Ghana, the agritech company Farmerline developed Darli, an AI-powered chatbot that provides farmers with advice on fertilizer use, harvesting, crop rotation, and market logistics. Accessible via WhatsApp, the tool is widely used across Africa, Asia, and South America and supports 27 languages, including local languages such as Twi, Swahili, and Yoruba.

Generative AI also empowers consumers to make sustainable choices. AI-driven chatbots can provide personalized recommendations for eco-friendly products, answer questions about environmental impact and suggest strategies for reducing carbon footprints. These tools enhance consumer engagement and encourage behavioral shifts toward sustainability.

Ethical Concerns in AI-Driven Sustainability Communication

While generative AI offers significant benefits in sustainability communication, it also presents ethical concerns. Some companies have been criticized for using AI-generated content to exaggerate their sustainability efforts without implementing substantial environmental practices. In the fashion industry, for instance, AI is employed to optimize design and production processes, potentially reducing waste. However, some brands may exploit AI-generated content to misleadingly enhance their sustainability image. A fast-fashion brand might, for example, use AI to create imagery of a non-existent “sustainable factory”, complete with solar panels and green landscapes, that bears no resemblance to its actual supply chain or sustainability practices.

To mitigate these ethical risks, companies should disclose the use of AI in their sustainability communications. Transparency enables stakeholders to critically assess the authenticity of environmental claims. The Partnership on AI advocates for such openness to maintain trust and accountability in AI applications.

Opportunities and Challenges in Science Communication with AI

Since the public release of ChatGPT by OpenAI, we have entered a new digital era in which chatbots can generate text, images, and even videos, often integrating these elements seamlessly.

This technological advancement raises important questions about the role of artificial intelligence in scientific communication. How can these tools make science more accessible while mitigating potential risks?

Generative AI has gained traction in the science communication community thanks to its capabilities in instant translation, summarization, tone adaptation, and vocabulary modification. Initial reactions, however, were mixed, particularly following the release of GPT-3 in 2020, which was criticized for generating inaccurate responses. Over time, with improvements in training models and access to broader datasets, ChatGPT has established itself as a valuable tool for science communicators (Schäfer, 2023).

Despite its advantages, generative AI presents challenges in maintaining scientific accuracy. These models generate content based on their training data, which remains largely opaque due to proprietary algorithms and undisclosed datasets. This "black box" nature raises concerns about the validity and reliability of AI-generated scientific information (Caliskan et al., 2024).

To address these biases, researchers have explored ways to improve AI-assisted science communication. One approach involves combining generative AI tools like Perplexity.ai with academic search engines such as Google Scholar to ensure that responses are grounded in credible sources (Caliskan et al., 2024). Additionally, ongoing research examines how interactions between users and LLMs influence response biases, further highlighting the need for critical evaluation of AI-generated content.

AI's Influence on Public Engagement with Science

From a sociological perspective, generative AI also changes how individuals engage with scientific information. Traditional communication barriers, such as a hesitancy to ask complex questions, are reduced, as AI allows users to rephrase and refine their queries until they receive a comprehensible response. This iterative interaction fosters a new form of science communication that accommodates diverse audiences.

Beyond text generation, AI-powered tools have the potential to revolutionize public engagement with science through interactive media. AI can be used to create educational videos, simulations, and even interactive games, making scientific concepts more engaging and accessible.

However, this advancement also comes with ethical concerns. The ability of generative AI to produce misleading or false information at scale underscores the need for vigilance. Without proper oversight, AI-generated content could contribute to misinformation rather than scientific literacy.


Generative AI's Future: A Balance of Innovation and Responsibility

Generative AI presents both opportunities and challenges for science communication. While it enhances accessibility and engagement, its reliability depends on how it is used. Ensuring responsible AI usage requires ongoing research, interdisciplinary collaboration, and transparency in AI development. As we navigate this evolving landscape, critical evaluation and ethical considerations will be essential in leveraging AI for the advancement of scientific communication.

References:

1)  Caliskan, A., West, J., & Kambhamettu, C. (2024). Science communication with generative AI. Nature Human Behaviour. Retrieved from https://jevinwest.org/papers/Caliskan2024NatureHumanBehaviour.pdf

2)  Markowitz, D. (2024). From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science. Retrieved from https://arxiv.org/pdf/2405.00706

3)  Worden, D., & Richards, D. (2024). Assessing the quality of generative artificial intelligence for science communication in environmental research. Retrieved from http://biorxiv.org/content/10.1101/2024.11.11.623072v1.full

4)  Humble, N., & Mozelius, P. (2024). Generative Artificial Intelligence and the Impact on Sustainability. Retrieved from https://papers.academic-conferences.org/index.php/icair/article/view/3024/2906

5)  Schäfer, M. S. (2023). The Notorious GPT: Science Communication in the Age of Artificial Intelligence. Retrieved from https://jcom.sissa.it/article/pubid/JCOM_2202_2023_Y02/

6)  Peller, J. (2024). ChatGPT: Two Years Later. Towards Data Science. Retrieved from https://medium.com/towards-data-science/chatgpt-two-years-later-df37b015fd8a

Other web resources:

  1. https://www.wri.org/initiatives/resource-watch

  2. https://beetroot.co/greentech/ai-chatbots-and-virtual-assistants-in-green-tech-practical-tools-for-smarter-sustainability/

  3. https://www.ibm.com/products/environmental-intelligence

  4. https://time.com/7094874/farmerline-darli-ai/

  5. https://www.voguebusiness.com/story/sustainability/why-fashion-should-think-carefully-about-using-generative-ai

  6. https://en.wikipedia.org/wiki/Partnership_on_AI


This blog post is part of a series of blog posts on Science Communication by the project SciComm Made Easy.
