AI errors are probably polluting healthcare records now.
For the past couple of years I’ve been finding gross mistakes entered into my medical records: claims that I said things I never said (or in fact had said the opposite of), or descriptions of things that weren’t true or never happened in the visit. And now I think I know why. It’s likely AI slop.
MedPage Today, “AI’s Paradoxical Gift to Primary Care: Advanced tech enables doctors to return to the fundamentals of care,” by R. Shawn Martin, MS, September 27, 2025:

“In a recent survey of more than 1,200 primary care physicians and clinicians conducted by the American Academy of Family Physicians and Rock Health, over a third reported using AI tools in their practice setting. Some use AI to generate first drafts of clinical notes, freeing them from hours of documentation. Others rely on it to sift through reams of patient data — including labs, imaging, and prior visits — to surface the most relevant information quickly. Additionally, a growing number are experimenting with AI-powered chatbots to streamline routine patient communication, from medication refills to appointment reminders. Each of these cases chips away at the mountain of administrative work that keeps clinicians from doing what matters most, which is listening to and caring for their patients.”
Have the people writing in the medical media not got the message that these AI computer programs are flawed, shoddy, and unreliable?
And why not just prioritize hiring and training more real people to be responsible for such important work?
Oh.

To me this is like saying: doctors don’t have enough time to spend with patients, so we’ll just have something prone to mistakes do the important work of documenting everything.
Then there are the people trying to defend the concept of AI therapists. That can’t be a thing, because therapy relies on, as its hinge, the relationship between the therapist and the patient, and there is no relationship with a computer program. So when these people claim that AI therapy is at least good enough as a “stopgap” for uninsured patients who can’t afford real actual therapy, it’s pretty much like saying: this person is starving because they can’t afford food, so let’s give them some rocks as a stopgap! It’s that nonsensical, because “AI therapy” just doesn’t exist; it isn’t possible. And that’s before you even get to the part where it’s been reported to be harmful for mental health and is really getting people into emotional trouble, so the whole idea is bonkers. Just think about that for a minute. There’s no reason to defend using a computer program that’s been known to urge people to commit suicide as a replacement for therapy with a trained clinician! That’s insane talk. And nobody would get away with saying such wackadoo stuff if jackaloon tech tycoons with billionaire-level money hadn’t paid for PR sanewashing and drummed it in, so much so that they’re getting otherwise sane, sensible, educated people to repeat kooky, nonsensical talking points defending these pathetic, shoddy, and dangerous computer program products.
And if it isn’t saving time for coding, the thing AI chatbots were supposedly best at, then how is it saving time in healthcare? I’ve heard for ages that using chatbots to write code just creates more work for the people who have to clean up the mess; the people who know what they’re doing wind up doing extra work to compensate for the people leaning on chatbots and leaving messes behind. That’s not even mentioning the huge security risks of chatbot “vibe coding” making it into shipped software. Reports keep coming out that it isn’t saving time, and the reason some people think it is comes down to the gambling aspect: they’re hooked, and they’ll rationalize it however they can to keep doing it. I noticed right away, over two years ago when I first tried this stuff, that it was gamified.
So should we really be trusting this crap for healthcare, where lives are on the line and patient safety and human health are at risk? I’ve been saying we should NOT for a while now… And I’m not the only one: when Republican Rep. Rich McCormick of Georgia said health should be handled by AI, he got booed at a town hall. People generally frown on others taking shortcuts with chatbots, probably because we all know the result is inferior to the actual service that’s supposed to be provided, and it suggests the person is slacking off and doing a slop job, no pun intended. And if the people vibe coding weren’t fact-checking their work, because that would defeat the purpose of saving time, how many healthcare providers are fact-checking and reviewing their AI-generated case notes?
This suggestion that cash-strapped hospitals and doctors can use chatbot AI tools to save corporate money by putting slop into patient records sounds an awful lot like “alternative medicine” for cancer patients: some worthless quack remedy offered as a replacement for cancer treatment.
That MedPage Today piece does get into warnings about inaccuracy and other drawbacks later in the article, but they ought to know better than to lead with the lie.
Most recently, my insurance and I were billed for a televisit that never took place because the provider’s internet didn’t connect. I was lucky to have a paper trail in the system documenting my attempts to reschedule the appointment, but I still had to appeal the bill. Obviously the people at the doctor’s office realized the visit hadn’t taken place, so I wondered whether the claim got sent to the insurance, and inappropriately paid, because of some automated system. Sure enough, that’s essentially the explanation I got, with some hand-waving that blamed a new automated system in Epic.
Oh…
