AI boosters use the “vulnerable people as scapegoats” tactic.

Sometimes it’s meant to put at ease people who believe themselves not to be at risk, and sometimes it’s a way to say it only happens to “those other people,” and that’s okay, actually.

I see even prominent people who ought to know better regurgitating the spin on “AI psychosis” as something that just happens to “vulnerable people” with pre-existing conditions. This is how business PR people spun covid too: it’s just those people with “underlying conditions,” so you can ignore it and keep the economy buzzing along.

Of course it’s true that almost anything is worse, more severe, and a bigger risk for people who have risk factors. That’s true of any harm at all; the social determinants of health, for example, are a well-known factor in public health. What the minimizer PR always seems to neglect to mention, of course, is how many things actually put a person at higher risk with covid.

This particular spin always seems to be related to business, and is used to downplay anything that might interfere with fully actualizing maximum profit-making. An interesting example: people who make their living as meditation guides are reported to be likely to downplay meditation sickness as something that happens only to vulnerable people with pre-existing mental health problems, if they mention the possible side effect at all. Yet this risk of adverse effects from meditation has been documented the world over for hundreds of years (at least as early as the 400s), and it does not only happen to people with pre-existing problems, nor is it always the fault of the person’s own practices, such as excessive meditation. The same spin is used to make people feel they can’t speak up about climate change, by shifting the blame, and therefore the feelings of guilt, onto individuals and their choices, even when there’s almost no real choice anyway.

There’s a YouTube channel with AI in the name that seems to exist to do AI hype, and in September 2025 they finally started addressing “AI hallucinations,” the fact that AI chatbots put out faulty, flawed, and false information on the regular. I suppose the channel finally addressed it because they couldn’t ignore it any longer, so they made a video that made it out like nobody knows what causes this, that it’s a mystery. That’s untrue: experts have been warning all along that hallucination is a known, expected behavior of Large Language Models. It’s not a mystery at all. The experts know what causes it, and they know it can’t be fixed with this kind of LLM technology. But of course the spin all along has been that you must be prompting it wrong, or at least that it’s the user’s fault for not doing a thorough fact-check on the output. Of course almost nobody fact-checks the output, because it takes longer to fact-check the output than for a human to just write the stuff in the first place. So if the reason someone is using this is to save time and effort, which is what these chatbots are advertised to do, then they must simply accept that the output is likely to be flawed and contain mistakes.

This same YouTube channel also had a video pivoting to hand-wringing, shocked and upset, about the news about AI psychosis. These stories of people becoming unmoored from interactions with chatbots have been circulating for a while now, but recently they hit the mainstream media and are impossible to ignore anymore. This YouTube channel spins the adverse psychological harms as the result of people just not understanding what chatbots are for and how they work: if only people understood the limitations, everything would have been fine. It’s the end user who’s doing it wrong. In cases where someone had dementia, or a serious pre-existing mental illness that predisposes them to psychosis, this claim might at first glance make sense. But the fact is that otherwise rather normal people have been using chatbots and getting weird about it, including people who definitely ought to know better, like the Google engineer who was put on leave after going off the deep end about the chatbot. And in fact, this tech is being marketed as a replacement for healthcare providers and therapists, which means it’s being marketed for use by the very vulnerable people who are supposedly the problem.

This assertion that people are simply misunderstanding is what I would call misdirection propaganda. It says that people who are harmed, or are being misled, simply misunderstand these chatbot products; it frames them as foolishly trying to have relationships with chatbots, when that’s not what chatbots are for. Yet this is an obviously deceptive assertion, because the companies themselves have deliberately obfuscated the limitations of chatbots and overpromised on what these products can do. They have deliberately marketed “AI agents” and “AI therapists,” and surely we’ve all seen the “AI companion” ads that advertise chatbot services as romantic partners, the various TV interviews with people who claim to be in relationships with their AI chatbot services, and the stories about people being devastated when a ChatGPT upgrade ruined their chatbot companion’s disposition. So the industry has been encouraging people to treat chatbots as if they’re capable of human relations; indeed, it’s promoting the chatbot products as a replacement for human relations. Some may say that a sexual or romantic chatbot is perhaps just some new form of porn, but you can’t claim these products haven’t been sold as relationships when the industry is promoting “AI therapists,” because therapy is specifically about the relationship between the therapist and the patient. The therapeutic relationship is widely known as the most important part of therapy; it’s the part that makes therapy effective. Someone can’t actually have a relationship with a chatbot, but if these companies are marketing their products that way, they’re claiming it’s possible by definition. The tech companies that make these claims, or allow these marketing claims about their products, and benefit from them, are therefore actively presenting these products as something they’re not.

So this shift to blame the end user is just another iteration of the “individual responsibility” lie perpetrated by the fossil fuel industry and big tobacco, both known for misdirection as a tactic. It’s not about people “misunderstanding,” but that’s exactly the sort of Facebook meme that’s circulating. Don’t buy it. That’s frankly blatant spin, because it’s about people being misled by deceptive marketing claims. Explanations of how chatbots work, and how they don’t, have been available from people like Angela Collier and Emily Bender for years. It’s not the end users’ fault that they only ever heard the industry claims. False industry claims are everywhere, including coming from politicians.

So we have to beware of any messaging around AI because there’s just so much misleading marketing out there that misrepresents this technology.