Lying AI should not be doing the people’s business or science.

Lives are at stake, and the U.S. government and scientific scholars are buying into tech hype boondoggles. Is it corruption, incompetence, or sabotage?


My letter to the White House and my elected representatives on the use of synthetic text by government agencies:

I don’t pay taxes to have “AI” making gravely bad decisions that impact people’s lives. I’m supposed to have representation as a human in government, and lawmakers and government agencies are supposed to actually have people doing the people’s business. We know that these LLMs and chatbots are NOTORIOUS for giving bad information and creating disinformation. Why would FEMA, DHS, or any other agency tasked with protecting human lives use this crap technology at all? Is someone getting kickbacks? I want an investigation into this, with advice from experts who are not receiving money to hype this stuff all over the place.

These companies are selling boondoggles to the government, to people who ought to know better. But apparently they don’t.


Mystery AI Hype Theater 3000: The Newsletter – March 18, 2024, 10:17 a.m. – US DHS attempts to use “AI” – Three more use cases where synthetic text is not appropriate, now paid for with tax dollars – By Emily

So, in other words: they’re planning on putting synthetic text, which is only ever accurate by chance, into a) the information scanned by investigators working on fentanyl-related networks and child exploitation; b) the drafting of community emergency preparedness plans; and c) the information about the laws and regulations that immigration officers are supposed to uphold.

I searched the roadmap linked to the press release for “accuracy”, “false”, “misleading”, and “hallucination” to see if there was any discussion of the fact that the output of synthetic text extruding machines is ungrounded in reality or communicative intent and therefore frequently misleading. None of those terms turned up any hits. Is the DHS even aware that LLMs are even worse than “garbage in garbage out” in that they’ll make papier-mâché out of whatever is put into them?
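(For anyone who wants to replicate that kind of keyword audit on an agency document themselves, here is a minimal sketch. It assumes the roadmap PDF has already been dumped to plain text, e.g. with pdftotext; the term list is just the one from the quote above.)

import re
import sys

# Terms whose absence from a document about deploying LLMs is telling.
TERMS = ["accuracy", "false", "misleading", "hallucination"]

def scan(path):
    # Assumes the PDF was already converted to plain text (e.g. pdftotext).
    text = open(path, encoding="utf-8", errors="ignore").read().lower()
    for term in TERMS:
        hits = len(re.findall(r"\b" + re.escape(term), text))
        print(f"{term!r}: {hits} hit(s)")

if __name__ == "__main__":
    scan(sys.argv[1])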


The people making the decision to use this faulty technology don’t even know about the “hallucinations” and aren’t considering that chatbots are NOT reliable. It’s like they’ve all been conned by the chatbots’ ability to generate false content that seems persuasive.

Forbes – ChatGPT Could Leave Europe, OpenAI CEO Warns, Days After Urging U.S. Congress For AI Regulations – By Siladitya Ray, May 25, 2023

The OpenAI CEO acknowledged disinformation concerns surrounding AI while addressing an audience at University College London, specifically pointing to the tool’s ability to generate false information that is “interactive, personalized [and] persuasive,” and said more work needed to be done on that front.

It’s really hard for me to understand how this is slipping past the people in charge, because it’s clear most people don’t trust this technology to be used where our personal safety or health is at stake. But I guess this is just another case of ELITE PANIC, and of big shots who have very different goals than the rest of us.

Cats in Wonderland – the Uncanny Valley of lying AIs – It’s just a huge coincidence that AI chatbot services are very much like a lot of other tech products with problematic tradeoffs and just happen to be useful to a lot of the same questionable actors. CHLOE HUMBERT, MAY 29, 2023

A memorable quote from The Hitchhiker’s Guide to the Galaxy trilogy is that of Marvin the Paranoid Android in the second book, The Restaurant at the End of the Universe. Upon being accused of making stuff up, Marvin responds, “Why should I want to make anything up? Life’s bad enough as it is without wanting to invent any more of it.” There are reports that chatbots get citations completely wrong, and that these made-up citations nevertheless sound very plausible. So today’s AI chatbots are not like the self-questioning and brooding Marvin the Paranoid Android at all. Quite the opposite. ChatGPT appears confident and gives prolific, convincing, made-up output with seeming bravado. Even the OpenAI CEO acknowledged their tool’s ability to generate false information that is persuasive.


Rand Waltzman on LinkedIn – Strategies for Manufacturing Doubt (2) – “A”: unbiased studies based on scientific evidence. “B”: information generated to promote narratives that are favorable to the industry. Suppress Incriminating Information – hide information that runs counter to “B”. Contribute Misleading Literature – use literature published in journals or the media to deliberately misinform, either pro-“B”, anti-“A”, or to distract with peripheral topics. Host Conferences or Seminars – organize conferences for scientists or relevant stakeholders to provide a space for dissemination of only pro-“B” information. The image includes a cartoon by Harley Schwadron where someone holds up a sign that says “the world already ended, but the government hushed it up”, and another cartoon where a person in a lab coat stands outside a door labeled String Theory Lab, looking at a cat playing with a ball of yarn.

Lots of “scholarly” articles are now blatantly using chatbots, as evidenced by the many so-called authors who apparently didn’t even bother to edit out the chatbots’ standard qualifying statements, such as “as of my last knowledge update” and “I don’t have access to real-time data.”


Twitter post @LifeAfterMyPhD, 3:01 AM · Mar 18, 2024 – It gets worse. Apparently if you search “as of my last knowledge update” or “i don’t have access to real-time data” on Google Scholar, tons of AI generated papers pop up. This is truly the worst timeline. 897.7K Views 95 replies 2.4k retweets 78k hearts 1.7k bookmarks [The tweet contains screenshots of the Google Scholar search results containing those search-term sentences.]
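(The same telltale-phrase test works on any local pile of manuscripts, for instance as a crude screen during editing or peer review. A minimal sketch follows; it assumes the papers have already been converted to plain-text files, and the folder name and phrase list are just illustrative.)

from pathlib import Path

# Boilerplate disclaimers that LLMs emit and careless "authors" forget
# to delete. Illustrative, not exhaustive.
TELLTALES = [
    "as of my last knowledge update",
    "i don't have access to real-time data",
    "as an ai language model",
]

def flag_suspect_papers(folder):
    # Assumes papers were already converted to .txt files in `folder`
    # (the layout is hypothetical).
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        found = [phrase for phrase in TELLTALES if phrase in text]
        if found:
            print(f"{path.name}: {found}")

if __name__ == "__main__":
    flag_suspect_papers("papers/")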

And the answer to this… is more individualism?

Walters, W.H., Wilder, E.I. Fabrication and errors in the bibliographic citations generated by ChatGPT. Sci Rep 13, 14045 (2023). https://doi.org/10.1038/s41598-023-41032-5 – Students who are knowledgeable about fabricated citations will presumably be more likely to take the literature review process seriously and to do the work themselves—or to check and build on their ChatGPT citations in ways that lead them to accomplish many of the intended learning goals.
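(Checking whether a chatbot-supplied DOI even resolves to a real record is mechanically trivial, which makes the fabricated-citation problem that much less excusable. Here’s a minimal sketch against Crossref’s public REST API; the function name is mine, and a hit only proves the DOI exists, not that the paper says what the chatbot claims.)

import json
import urllib.parse
import urllib.request
from urllib.error import HTTPError

def doi_exists(doi):
    # Crossref returns the registered record, or HTTP 404 if the DOI
    # is fabricated, mistyped, or simply not registered with Crossref.
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url) as resp:
            record = json.load(resp)
        title = (record["message"].get("title") or ["(no title)"])[0]
        print(f"OK  {doi}: {title}")
        return True
    except HTTPError as err:
        print(f"BAD {doi}: HTTP {err.code}")
        return False

# e.g. the Walters & Wilder paper cited above:
doi_exists("10.1038/s41598-023-41032-5")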

Hoping that having this information available will lead students to “just fact check the chatbot” is not a solution, not when all the incentives and allowances obviously go in the other direction: toward flooding the zone with chatbot-written journal articles, peer-reviewed by chatbots and chock full of dangerously inaccurate bullshit. And that’s exactly what they’re proposing: not only allowing chatbot-written synthetic text to fill science journals as “honorary authors” like Chester the Cat, but actually letting AI do the peer review too.

This seems like the perfect recipe to undermine the entire scientific literature, and the practice and use of science itself.

Brandt AM. Inventing conflicts of interest: a history of tobacco industry tactics. Am J Public Health. 2012 Jan;102(1):63-71. doi: 10.2105/AJPH.2011.300292. Epub 2011 Nov 28. PMID: 22095331; PMCID: PMC3490543. – The industry campaign worked to create a scientific controversy through a program that depended on the creation of industry–academic conflicts of interest. This strategy of producing scientific uncertainty undercut public health efforts and regulatory interventions designed to reduce the harms of smoking. A number of industries have subsequently followed this approach to disrupting normative science. Claims of scientific uncertainty and lack of proof also lead to the assertion of individual responsibility for industrially produced health risks.

Listen, I don’t know what I’m saying here, but this all seems very hinky. And whether it’s deliberate sabotage or catastrophic incompetence, isn’t that a problem either way? The mushrooms issue alone is a huge safety hazard, on top of the rest of this nightmare. And I’d like to be able to stop thinking about this now, because I’m having anxiety dreams about it at this point.

I had a dream of soylent chatbots made out of people – Maybe the chatbots aren’t made out of human bodies or even people toiling away in some scam center, but the way this sausage is made, and served, is nevertheless going to sour everyone eventually. CHLOE HUMBERT, MAR 16, 2024

My blog post about a dream had 16 footnotes that are NOT fabricated, which is apparently more than you can say for some scholarly articles these days.

This information dumpster fire is such an easily preventable rolling catastrophe. Just don’t throw good money after bad on expensive and faulty glorified auto-correct. But I guess I shouldn’t be surprised after the unmitigated shitshow that is the failed response to the continuing pandemic.

Please join me in writing to your elected representatives to warn them about this stuff. Let them know if you don’t want faulty AI running your life, or potentially ruining it. Call now, before it’s too late and hundreds of government contracts have been solidified with more of these rich scammy tech clowns and their nonsense hype boondoggles.