Summary: AI-generated summaries make scientific studies more accessible and improve public trust in scientists.
While promising, using AI in science communication raises ethical concerns about accuracy, transparency, and potential oversimplification.
As AI continues to evolve, its role in science communication may expand, especially if using generative AI becomes more commonplace or sanctioned by journals.
While the benefits of AI-generated science communication are clear, the ethical concerns must also be weighed.
This article, published in PNAS Nexus, evaluated the effectiveness of using generative AI to simplify science communication and enhance the public’s understanding of science. Using GPT-4, researchers produced simplified summaries of scientific papers that were easier to read and comprehend than human-written summaries, and participants rated scientists whose work was explained in plainer language as more credible and trustworthy. Despite this potential, using AI in science communication raises ethical questions about accuracy, transparency, and possible oversimplification.
Key Facts:
Summaries produced by AI help the general public understand complex studies.
Scientists who use simpler language are seen as more credible and trustworthy.
Ethical concerns include the loss of nuance and the need for transparency about when AI is used.
Source: Michigan State University
Have you ever felt as though a scientific discovery was written in a language other than your own?
If so, you’re like most Americans: new scientific information can be difficult to comprehend, particularly if you attempt to read a scientific article published in a research journal.
In a time when making informed decisions requires scientific literacy, the ability to communicate and understand complex information is more vital than ever. For years, there has been a decline in public trust in science, and the difficulty of comprehending scientific jargon may be one of the causes.
Recent research by David Markowitz, an associate professor of communication at Michigan State University, points to a possible remedy: using artificial intelligence, or AI, to simplify science communication.
By simply making scientific content more accessible, his work shows how AI-generated summaries could contribute to the restoration of public confidence in scientists and, consequently, promote increased public involvement with scientific issues.
Since people frequently depend on science to guide their daily decisions—from deciding what to eat to making important health care decisions—the issue of trust is especially significant.
Markowitz’s responses below are excerpted from an article first published in The Conversation.
What was the impact of more straightforward, AI-generated summaries on the public’s understanding of scientific research?
According to a recent study by Markowitz published in PNAS Nexus, artificial intelligence can produce summaries of scientific papers that, when compared to human-written summaries, make complex information easier for the general public to understand.
AI-generated summaries improved public perceptions of scientists and increased public understanding of science.
Markowitz produced straightforward summaries of scientific papers using OpenAI’s well-known large language model, GPT-4; this type of writing is frequently referred to as a significance statement.
Compared to summaries written by the researchers who had completed the work, the AI-generated summaries used simpler language, with more common words like “job” instead of “occupation” and easier reading according to a readability index.
In one test, he discovered that readers of the AI-generated statements were more knowledgeable about the science and gave more thorough, precise summaries of the material than those who read the statements that were manually written.
What was the impact of more straightforward, AI-generated summaries on the public’s opinion of scientists?
In a different experiment, participants gave scientists who explained their work in straightforward terms higher ratings for credibility and trustworthiness than those who used more complicated language.
Participants in both experiments were unaware of who wrote each summary; AI always produced the simpler texts, and humans always produced the more complex ones. Interestingly, when asked who they thought wrote each summary, participants assumed that humans wrote the simpler ones and artificial intelligence (AI) wrote the more complex ones.
What is left to learn about science communication and AI?
As AI develops further, it may play a bigger part in science communication, particularly if generative AI becomes more widely adopted or is sanctioned by journals. Indeed, standards for the use of AI in academic publishing are still being established. By making scientific writing simpler, AI could encourage broader engagement with complex issues.
Even though there are undoubtedly advantages to AI-generated science communication, there are also moral issues to take into account. When using AI to simplify scientific material, there is a chance that subtleties will be lost, which could result in oversimplification or misunderstandings.
And without careful human review, errors can slip through. Transparency is also essential: readers should be aware when summaries are generated by AI so they can judge potential biases.
Simple scientific explanations are better and more useful than complicated ones, and artificial intelligence (AI) tools can assist. However, scientists could also accomplish the same goals by putting in more effort to communicate clearly and reduce jargon—no AI is required.
About this AI and science communication research news
Author: Alex Tekip
Source: Michigan State University
Contact: Alex Tekip – Michigan State University
Image: The image is credited to Neuroscience News
Original Research: Closed access.
The article “From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science” was written by David Markowitz and colleagues. PNAS Nexus.
Abstract
“From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science”
This study assessed the effectiveness of generative AI at simplifying science communication and improving the public’s understanding of science. It compared lay summaries of PNAS journal articles with summaries produced by AI, evaluating their linguistic simplicity and, in follow-up experiments, public perceptions.
Study 1a, which examined the simplicity features of PNAS significance statements (lay summaries) and abstracts (scientific summaries), found that lay summaries were indeed linguistically simpler, although the effect size differences were small.
Study 1b used a large language model, GPT-4, to generate significance statements based on paper abstracts; without any fine-tuning, this more than doubled the average effect size.
Study 2 experimentally demonstrated that simply written generative pre-trained transformer (GPT) summaries fostered more favorable perceptions of scientists than more complexly written human PNAS summaries (the scientists were viewed as more credible and trustworthy, but less intelligent).
Importantly, study 3 experimentally showed that participants who read simple GPT summaries as opposed to complex PNAS summaries were better able to understand scientific writing.
After reading GPT summaries as opposed to PNAS summaries of the same article, participants also described scientific papers in their own words in a more thorough and specific way.
Altogether, this work suggests that AI can engage scientific and public audiences through a simple-language heuristic, supporting its integration into science dissemination for a better-informed public.