Hallucinating chatbot healthcare tech tools hopped up on medical conference hype. What could possibly go wrong?

AI hype is so normalized that of course tech tycoons and private equity think healthcare should run on hype-filled apps and downsized staffing.

Healthcare company executives might think replacing human practitioners with shoddy, lying chatbots[1] is a good idea, even though these large language model tools, as Emily Bender describes them, “make papier-mâché out of whatever is put into them.”[2] But according to a survey of nurses, it’s not a good idea from their professional perspective. A National Nurses United survey found that the medical tech tools currently inserting generated content into patient records still haven’t had the kinks worked out, including something as basic as healthcare workers being unable to edit wrongly auto-filled patient information.[3]

Michelle Mahon, Director of Nursing Practice with National Nurses United, was recently on the Mystery AI Hype Theater 3000 podcast with Emily Bender and Alex Hanna, discussing the many problems with the push to sell healthcare companies “generative AI” to replace various types of healthcare workers.[4] Something that stood out to me was how the Forbes article described a collaboration between Nvidia and Hippocratic AI to launch generative AI “agents,” and repeated company-released data asserting that their AI bots are better than nurses at a few tasks. The article also noted that, unlike IBM’s Watson, which was a tool based on curated medical records, this new AI healthcare-worker replacement reportedly “not only pulls from published, peer-reviewed medical journals and textbooks but also will be able to integrate real-time information from global health databases, ongoing clinical trials and medical conferences.”[5]

That alone would put this AI out of compliance with healthcare regulations anywhere. Using software in a clinical setting this way would be a clear-cut violation of medical safety regulations of various types. It’s the very epitome of bad science: using research results or data that haven’t been reproduced, or maybe haven’t even been peer reviewed yet. Just chucking unverified information into the mix and using it to make critical, life-and-death medical decisions flies in the face of medical ethics. I have a hard time understanding how a medical doctor could have written about this and not pointed that out.

Decisions in healthcare practice are not supposed to be based on unpublished research, unvetted databases, preliminary theories, or incomplete clinical trials. There’s a reason for peer review and for demonstrating reproducible safety and efficacy. None of those AI training sources would meet FDA approval or compliance with the Centers for Medicare & Medicaid Services. Michelle Mahon points out on the MAIHT3K podcast that conferences are where ideas are discussed, but “They’re not necessarily the standard of care” and “there’s lots of ideas and that they’re not all great.”[4] There are a lot of conferences, and a lot of things go on at them. You don’t want life-or-death decisions made for you at the hospital based on some industry convention where a panel of people were spit-balling, or saying something snazzy just to try to stand out in the crowd.

And that’s before you even get to the fake science conferences whose purpose is “to give studies an air of scientific credibility while cashing in on millions of dollars in the process,” according to a DEF CON 26 presentation from 2018.[6] Or consider for a moment the possible inclusion of who-knows-what from some weirdo medical conference held by anti-vaxxers, like the FLCCC convention, which featured presentations on so-called vaccine “shedding” (the preposterously false conspiracy fiction that vaccinated people transmit “gene therapy” to others) and doctors pushing ivermectin as a covid treatment,[7] even though that use has been discredited and is not a legitimate medical application.[8] Who knows what dangerous papier-mâché nonsense will come out of a healthcare LLM trained on bullshit from strange MAGA trucker-convoy politics and QAnon-inspired medical conferences.

It wouldn’t meet FDA approval, or any sensible standard to any thinking person, to include mere marketing materials or aspirational wish-casting predictions from some medical conference when evaluating lab values, making new diagnoses, managing chronic disease, and giving patients detailed explanations of advice – all things the Forbes article reports these “AI nurse-bots” are supposedly designed to do. And these AI nurse-bots are advertised at $9.00 per hour, which is mentioned as a selling point in contrast to the average salary of a nurse,[5] which is obviously higher. The point of that comparison is to sell healthcare companies on cutting staff, which is what happens when private equity takes over, and that is happening a lot.[9]

There are already problems with replacing doctors with nurse practitioners – not all of whom are well trained for the job situations they’re put in – all just to reduce healthcare companies’ payroll costs and supposedly alleviate staffing shortages. The Bloomberg Big Take podcast recently had an episode on the questionable training many NPs receive, describing a situation where a nurse practitioner didn’t want to scare a patient who was on vacation by telling her to go to the hospital, and the patient wound up dying of an ectopic pregnancy as a result.[10]

National Nurses United reported that University of Michigan Health-Sparrow in Lansing, Michigan, got rid of nurse shift-change reports and automated them, with no human-to-human communication. Despite Epic’s crowing about its AI, nurses have seen these hands-off reports omit critical information and overstate less important details.[11] Probably because these chatbots do not think – they auto-fill without regard to clinical relevance. We all know how auto-correct or speech-to-text works, or often doesn’t, and gets it wrong. It’s a running joke. Not only does it seem reckless to get rid of nurse-to-nurse shift-change reports, but there has been a patient-centered movement arguing for doing shift-change reports at the bedside,[12] with the assertion that it improves patient safety.[13] Cutting thinking people invested in patient outcomes out of this exchange entirely would seem to be moving in the wrong direction.

National Nurses United has a web page explaining the problems with AI in nursing and patient care,[14] and they’ve scheduled a webinar on the topic next week, where they say they’ll also discuss how healthcare workers and allies can work to hold executives accountable for patient safety.

References:

[1] Cats in Wonderland – the Uncanny Valley of lying AIs. It’s just a huge coincidence that AI chatbot services are very much like a lot of other tech products with problematic tradeoffs and just happen to be useful to a lot of the same questionable actors. Chloe Humbert, May 29, 2023. A memorable quote from The Hitchhiker’s Guide to The Galaxy trilogy is that of Marvin the Paranoid Android in the second book, The Restaurant at the End of the Universe: upon being accused of making stuff up, Marvin responds by saying, “Why should I want to make anything up? Life’s bad enough as it is without wanting to invent any more of it.” There are reports that chatbots get citations completely wrong, and that these made-up citations nevertheless sound very plausible. So today’s AI chatbots are not like the self-questioning and brooding Marvin the Paranoid Android at all. Quite the opposite. ChatGPT appears confident and gives prolific, convincing, made-up output with seeming bravado. Even the OpenAI CEO acknowledged their tool’s ability to generate false information that is persuasive.

[2] Mystery AI Hype Theater 3000: The Newsletter – March 18, 2024, 10:17 a.m. – US DHS attempts to use “AI” – Three more use cases where synthetic text is not appropriate, now paid for with tax dollars – By Emily M. Bender. So, in other words: they’re planning on putting synthetic text, which is only ever accurate by chance, into a) the information scanned by investigators working on fentanyl-related networks and child exploitation; b) the drafting of community emergency preparedness plans; and c) the information about the laws and regulations that immigration officers are supposed to uphold. I searched the roadmap linked to the press release for “accuracy”, “false”, “misleading”, and “hallucination” to see if there was any discussion of the fact that the output of synthetic text extruding machines is ungrounded in reality or communicative intent and therefore frequently misleading. None of those terms turned up any hits. Is the DHS even aware that LLMs are even worse than “garbage in garbage out” in that they’ll make papier-mâché out of whatever is put into them?

[3] National Nurses United survey finds A.I. technology degrades and undermines patient safety, May 15, 2024. Some 29 percent of nurses said they are unable to change assessments or categorizations that are software-generated by A.I., in facilities that use devices to capture images and sound information about patients, such as pain scores and wound assessments. In facilities that use a scoring system to predict a patient’s outcome, risk for a complication, or determine if patients are on schedule for discharge, 40 percent of nurses said they are unable to modify scores to reflect their clinical judgment and the individualized needs of the patient. “While our employers argue A.I. will help us, they’re using these technologies to erode our ability to practice our clinical judgment,” said Aretha Morgan, RN in emergency pediatrics at New York Presbyterian in Manhattan and a board member of the New York State Nurses Association, who also teaches nursing.

[4] Mystery AI Hype Theater 3000, Episode 37: Chatbots Aren’t Nurses (feat. Michelle Mahon), Emily M. Bender and Alex Hanna, recorded July 22, 2024, published August 02, 2024.

[5] Forbes – Nvidia’s AI Bot Outperforms Nurses, Study Finds. Here’s What It Means. Robert Pearl, M.D., Apr 17, 2024. When assessing the transformative potential of generative AI in healthcare, it’s crucial not to let past failures, such as IBM’s Watson, cloud our expectations. IBM set out ambitious goals for Watson, hoping it would revolutionize healthcare by assisting with diagnoses, treatment planning and interpreting complex medical data for cancer patients. I was highly skeptical at the time, not because of the technology itself, but because Watson relied on data from electronic medical records, which lack the accuracy needed for “narrow AI” to make reliable diagnoses and recommendations. In contrast, generative AI leverages a broader and more useful array of sources. It not only pulls from published, peer-reviewed medical journals and textbooks but also will be able to integrate real-time information from global health databases, ongoing clinical trials and medical conferences. It will soon incorporate continuous feedback loops from actual patient outcomes and clinician input. This extensive data integration will allow generative AI to continuously stay at the forefront of medical knowledge, making it fundamentally different from its predecessors.

[6] DEF CON 26 – Svea, Suggy, Till – Inside the Fake Science Factory – Sep 17, 2018. Fake News has got a sidekick and it’s called Fake Science. This talk presents the findings and methodology from a team of investigative journalists, hackers and data scientists who delved into the parallel universe of fraudulent pseudo-academic conferences and journals; Fake science factories, twilight companies whose sole purpose is to give studies an air of scientific credibility while cashing in on millions of dollars in the process. Until recently, these fake science factories have remained relatively under the radar, with few outside of academia aware of their presence; but the highly profitable industry is growing significantly and with it, so are the implications. To the public, fake science is indistinguishable from legitimate science, which is facing similar accusations itself. Our findings highlight the prevalence of the pseudo-academic conferences, journals and publications and the damage they can and are doing to society.

[7] Vaughn, what team is he actually on? A second opinion on the politics of the pandemic healthcare landscape. Chloe Humbert, May 8, 2024. Sharyl Attkisson interviews Pierre Kory and Jordan Vaughn together at the FLCCC conference in Phoenix, Arizona, in April 2024. In the interview Pierre Kory mentions ivermectin and says it has “20 positive mechanisms of action”. Yet ivermectin was shown to be ineffective as a covid treatment and has been linked to MAGA politics.[48] David Gorski criticized FLCCC as a group formed during the pandemic with ideological motivations for “covid protocols” in their opposition to public health measures, and referred to FLCCC’s “now repurposing ivermectin for cancer” as quackery.[49] One of the presentations at the FLCCC Winter 2024 conference was a “Shedding is Real” lecture.[41] People are NOT having “viral shedding” after covid vaccinations.[42] The spike protein is not produced indefinitely after vaccination,[43] it degrades.[44] The vaccines don’t alter DNA, and mRNA has a very short lifespan.[45] The only way viral shedding from a vaccine is theoretically possible is with a live virus vaccine, and the mRNA covid vaccines don’t contain live virus. Even the J&J and AstraZeneca vaccines, which contain live adenovirus, do not contain the coronavirus, and the adenovirus can’t replicate. And the spike protein can’t itself shed.[46] The Novavax vaccine also doesn’t contain live or inactivated virus.[47]

[8] Who What Why — Ivermectin: Dr. Pierre Kory and the Wonder Drug That Wasn’t — Ivermectin has fused with MAGA politics. — Karam Bales, 03/17/24. The American Medical Association and Wisconsin Medical Society had filed an amicus brief on behalf of the hospital system being sued, noting that the plaintiff had largely relied on Kory’s “opinion testimony” and that “the studies on which his opinion is based — including his own — have been thoroughly discredited.” They further highlighted that: “Additional research determined that meta-analyses touting ivermectin’s effectiveness, including Dr. Kory’s, had surveyed ‘largely poor-quality studies.’ Indeed, one of the studies on which Dr. Kory relied was ‘potentially fraudulent’ and included duplicated data. The journal that published Dr. Kory’s survey subsequently issued an expression of concern, which questioned Dr. Kory’s conclusions about ivermectin.”

[9] The Washington Post – When private equity buys a hospital, assets shrink, new research finds. The study comes as U.S. regulators investigate the industry’s profit-taking and its effect on patient care. By Peter Whoriskey, July 30, 2024 at 11:00 a.m. EDT. Federal Trade Commission chair Lina Khan said the agency will investigate “strip-and-flip tactics and other financial plays that can enrich executives but leave the American public worse off. … When private equity firms buy out health-care facilities only to slash staffing and cut quality, patients lose out.” The American Investment Council on Monday continued to defend the role of private equity in U.S. health care. “While we were unable to review this study before publication, the reality is that private equity plays a limited role in the health-care sector,” according to a statement from its spokesperson. “When it is used, private capital helps drive medical innovation, increase access to care, and improve local communities.” Private equity firms, which pool money from wealthy investors, financial firms and pension funds, buy up companies and typically seek to sell them again within about 10 years. They have spent hundreds of billions buying up hospitals, and today more than 450 U.S. hospitals are owned by private equity firms, according to the Private Equity Stakeholder Project, a watchdog group.

[10] Why You’ll Want to Know How Your Nurse Practitioner Was Trained. Bloomberg Big Take, July 24, 2024. Americans are more and more likely to get health care not from doctors, but from nurse practitioners. It’s one of the fastest-growing professions in the US — and the number of nurse practitioners in the country is expected to climb 45% by 2032. But training for the booming profession has never been standardized, and some students worry they’re not being set up for success.

[11] National Nurses United – Risky Business – June 5, 2024. With automated hand-offs, the electronic health record (EHR) system in which nurses chart pulls together sections of the chart and highlights certain data for the next shift’s nurse; no human-to-human communication happens. Sparrow’s EHR system, Epic, brags on its corporate website that “With Epic, Generative A.I. seamlessly integrates into your Electronic Health Record (EHR)… see how A.I. personalizes patient responses, streamlines handoff summaries, and provides up-to-date insights for your providers.” Breslin is not impressed. He has seen that the automated hand-off reports often omit critical information or overstate the importance of other data.

[12] Griffin T. Bringing change-of-shift report to the bedside: a patient- and family-centered approach. J Perinat Neonatal Nurs. 2010 Oct-Dec;24(4):348-53; quiz 354-5. doi: 10.1097/JPN.0b013e3181f8a6c8. PMID: 21045614.

[13] Washington, T. (2023). The Impact of Nurse-to-Nurse Change of Shift Report at the Bedside on Patient Satisfaction Scores and Patient Safety. Retrieved from https://hsrc.himmelfarb.gwu.edu/son_dnp/135

[14] National Nurses United – A.I.’s impact on nursing and health care. 2. Patient care cannot be ceded to A.I. technology, which has been demonstrably prone to serious inaccuracies and biases. In addition to a lack of well-designed research, clinical trials, and post-implementation evaluation, the makers of algorithmic and A.I. software are not required to disclose how their algorithms work or why they produce a given result. A.I. systems as currently deployed are “black boxes” without any transparency and without any input from RNs. 3. A.I. creates opportunities for exploitation and scapegoating of nurses. A.I. technologies enable mass surveillance of nurses and other health care workers at facilities, with disturbing opportunities for employers to violate individual privacy and union organizing rights. It also increases the risk of liability for RNs, whose licenses may be on the line for erroneous decisions made by the models, which hospital management and software companies might refuse to accept liability for.