AI Stirs Up Trouble in Science Peer Review

Scientific Publishing Is Confronting an Increasingly Provocative Issue: What Do You Do About AI in Peer Review?

The Growing Concern

Ecologist Timothée Poisot recently received a review that was clearly generated by ChatGPT. The document ended with a telltale sentence: “Here is a revised version of your review with improved clarity and structure.” Poisot was incensed. “I submit a manuscript for review in the hope of getting comments from my peers,” he fumed in a blog post. “If this assumption is not met, the entire social contract of peer review is gone.”

A Growing Trend

Poisot’s experience is not an isolated incident. A recent study published in Nature found that up to 17% of reviews for AI conference papers in 2023-24 showed signs of substantial modification by language models. In a separate Nature survey, nearly one in five researchers admitted to using AI to speed up and ease the peer review process.

The Risks

There are two distinct risks: reviewers using AI to evaluate manuscripts, and AI-generated content slipping past reviewers undetected. The latter can produce absurd results, as seen in a 2024 paper in a Frontiers journal on complex cell signaling pathways. The paper contained bizarre, nonsensical diagrams generated by the AI art tool Midjourney, including one image depicting a grossly deformed rat, and others that were just random swirls and squiggles labeled with gibberish text.

Publisher Responses

Publishers are responding in divergent ways. Elsevier has banned generative AI in peer review outright. Wiley and Springer Nature allow “limited use” with disclosure. A few, like the American Institute of Physics, are gingerly piloting AI tools to supplement – but not supplant – human feedback.

The Debate

Some see benefits in using AI for peer review, while others argue that it undermines the integrity of the process. A Stanford study found that 40% of scientists felt ChatGPT feedback on their work could be as helpful as feedback from human reviewers, and 20% found it more helpful. Critics counter that the whole point of peer review is considered feedback from fellow experts – not an algorithmic rubber stamp.

Conclusion

The use of AI in peer review is a complex issue that requires a nuanced approach. While AI has the potential to streamline the process, it must be used judiciously to preserve the integrity of scientific publishing.

FAQs

Q: What is the current state of AI in peer review?
A: Some reviewers are already using AI, and publisher policies vary: Elsevier bans it outright, while Wiley and Springer Nature allow limited use with disclosure.

Q: What are the risks of using AI in peer review?
A: The risks include reviewers outsourcing their evaluation to AI, and AI-generated content slipping through the peer review process undetected.

Q: What are the benefits of using AI in peer review?
A: AI has the potential to streamline the process and provide helpful feedback; in a Stanford study, 40% of scientists felt ChatGPT feedback could be as helpful as feedback from human reviewers, and 20% found it more helpful.

Q: What are the concerns about using AI in peer review?
A: Critics argue that the whole point of peer review is considered feedback from fellow experts – not an algorithmic rubber stamp.
