ChatGPT tends to ignore retractions on scientific papers
Study finds the chatbot doesn't acknowledge concerns with problematic studies
by Dalmeet Singh Chawla, special to C&EN, August 15, 2025

They found that two-thirds of the time, the tool either said yes or gave a positive response. "We were surprised that, at the time, ChatGPT didn't deal very well with retractions at all, so it didn't mention them and reported retracted information as true," says study coauthor Mike Thelwall, who is a metascience researcher at the University of Sheffield.
My letter to reps:
Not only does ChatGPT cite nonexistent science papers, it also treats retracted papers as legitimate, when they've been retracted for a reason. Isn't it obvious that this chatbot technology is dangerously unreliable? Do you really want doctors or scientists or anyone making decisions, some of them life-and-death decisions that could get people killed, using this shoddy technology? You need to put a stop to this madness.
Please feel free to copy or repurpose for your own letters to reps.
