The oracle of chatbot phenomenon is not benign.

Don’t Wait For Everybody – Episode 022



Notes, references, & transcript: https://chloehumbert.substack.com/p/oracle-of-chatbot


References:

The image is the deep-fried meme of Garfield the cartoon cat’s head, and underneath it is the caption “You are not immune to propaganda.”

https://chloehumbert.substack.com/p/inform-politicians-about-tech-scams

https://youtu.be/QiJ7X8Bu9wk

https://chloehumbert.substack.com/p/chatbots-hopped-up-on-hype

https://chloehumbert.substack.com/p/lying-ai-should-not-be-doing

https://www.bloomberg.com/news/newsletters/2023-04-03/chatgpt-bing-and-bard-don-t-hallucinate-they-fabricate

https://www.nationalnursesunited.org/artificial-intelligence

https://virginia-eubanks.com/automating-inequality/

https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice

https://www.medpagetoday.com/practicemanagement/informationtechnology/112610

https://youtu.be/9RWdZml54eg

https://wat3rm370n.tumblr.com/post/727008820102070272/ive-been-seeing-advertisements-for-vloggers-on

https://shatterzone.substack.com/p/ai-is-coming-for-your-children

https://curmudgucation.substack.com/p/childrens-books-to-really-avoid

https://chloehumbert.substack.com/p/cats-in-wonderland

https://www.columnblog.com/p/as-tv-writers-strike-us-media-uncritically

https://pivot-to-ai.com/2025/04/29/generative-ai-no-significant-impact-on-earnings-or-recorded-hours-in-any-occupation/

https://www.reddit.com/r/diablo2/comments/14c2avu/could_anyone_dumb_down_rollingrerolling

https://youtu.be/GPbWJPsBPdA

https://en.wikipedia.org/wiki/Magic_8_Ball

https://en.wikipedia.org/wiki/I_Ching

https://thecyberwire.com/glossary/social-engineering

https://www.merriam-webster.com/dictionary/gamification

https://www.washingtonpost.com/technology/2021/10/26/facebook-angry-emoji-algorithm

https://www.vox.com/technology/2018/2/19/17020310/tristan-harris-facebook-twitter-humane-tech-time

https://www.axios.com/2017/12/15/sean-parker-unloads-on-facebook-god-only-knows-what-its-doing-to-our-childrens-brains-1513306792

https://www.404media.co/the-age-of-realtime-deepfake-fraud-is-here

https://www.psychologytoday.com/us/blog/misinformation-desk/202112/giving-informational-learned-helplessness

https://bsky.app/profile/decassette.bsky.social/post/3lnzl5b7eos23

this feels like an incredible new urban legend taking shape on reddit otoh I've lowkey seen this happen. like jerusalem syndrome but for talking to the computer

Linnea Sterte (@decassette.bsky.social) 2025-04-30T10:21:06.955Z

https://www.reddit.com/r/ChatGPT/comments/1kalae8/comment/mprougp/

https://youtu.be/DORxk9-G6Uc

https://www.reddit.com/r/OpenAI/comments/1bbyj8s/sam_altmans_tweet/

https://www.iheart.com/podcast/1119-better-offline-150284547/episode/the-bs-bubble-273756380/

https://www.buzzsprout.com/2126417/15517978-episode-37-chatbots-aren-t-nurses-feat-michelle-mahon-july-22-2024

https://www.wvia.org/news/local/2024-12-16/reducing-pajama-time-artificial-intelligence-supplements-work-of-nepa-clinicians

These were brought to my attention after recording this:

https://futurism.com/chatgpt-users-delusions

https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175

https://youtu.be/lxTBXpe_Tqc

https://slashdot.org/story/25/05/05/0234215/after-reddit-thread-on-chatgpt-induced-psychosis-openai-rolls-back-gpt4o-update

Added references later:

2025-05-09: https://old.reddit.com/r/ChatGPT/comments/1ki5akm/i_asked_chatgpt_to_tell_the_biggest_lie_ever_sold/


TRANSCRIPT:

I’m Chloe Humbert. Don’t assume that anyone is immune to the oracle of chatbot phenomenon. Obviously, do not assume anyone is immune to social engineering. You are not immune to propaganda. Nobody is. Of course, I’ve been all over this. I’m always harping on the data centers for AI and all the power they’re using, and the usurping of public utilities for very little usage, and trying to discipline labor with false promises to employers that this is going to, you know, cut down on your payroll expenses and get your workers in line, whatever. Of course, there’s the dangers of using LLMs and chatbots in serious decision making. The hallucinations, the euphemism for,

0:46

you know, basically putting out misinformation and falsehoods that sound convincing. That’s what chatbots are made to do: sound convincing no matter what they’re saying, whether it’s science polluted with fake research citations or whatever. I went to a National Nurses United webinar last year on AI and healthcare, and the stories were hair-raising, about patient safety being jeopardized by, you know, basically relying on these chatbot notes being taken.

1:16

And, of course, Virginia Eubanks has written a whole book, Automating Inequality, and Dorothy Roberts has talked about how automation automates racism in healthcare. And, you know, I’ve heard stories about, you know, chatbots adding racist comments into medical notes. So this stuff is not reliable. Maybe you’ve heard about the mushrooms incident and the foraging books that were written by chatbots and put people in danger. The children’s books full of nonsense written by chatbots.

1:50

But something I think doesn’t get enough attention about the chatbots is the very, very serious issue of the social engineering aspect, the gamified aspect of the chatbots themselves, and the image generators, too. If it’s covered at all, it’s downplayed by framing it as rather silly or, you know, whatever. But I think it’s quite serious, because first of all, it wastes time. If you’re not getting the right answer, they say you’re prompting it wrong, and you have to prompt it over and over again. That’s a waste of time, and it kind of cuts down on the idea that this is a time-saving device if you have to spend all this time re-rolling.

2:25

And I use the word re-rolling very specifically, because it is very, very similar. I noticed this right away when I tried using a chatbot a couple of years ago when all the fuss was first hitting. And I tried the image generators, and I distinctly remember feeling like there was a casino quality.

2:46

It reminded me of playing Diablo about 20, 25 years ago and the way those games were considered some of the most addictive games ever. It’s about expecting the monsters to drop special items. And the game had an aspect of collecting. You do recipes and you get special items, and, you know, it taps into normal human tendencies to collect things. Although this is probably not just human. I remember watching a David Attenborough nature documentary once, and there was a bird that was collecting things to attract mates, apparently. So this might be a very natural tendency to collect things, and it’s leveraged, with that random chance and the payout of rewards. And you don’t have to be gullible.

3:31

I mean, this is normal human stuff they’re tapping into. It becomes like the Magic 8 Ball or something, only they think it’s, you know, legitimately a tech tool. And, you know, we can mock it all we want, but you can see this. You can see it when people are using it as a search engine instead of actually just going to look for real things, or people using it to make decisions on something. It’s like the I Ching or something, AI chatbots. Oh, no, that’s just a big coincidence that it’s the same thing going on, right? I mean, come on. Come on. Are we really buying that?

4:16

We know that these things… you know, there are a lot of people who leverage these cognitive biases of humans on purpose. For example, the CyberWire has the definition of social engineering as, quote, the art of convincing a person or persons to take action that may or may not be in their best interests, unquote. And if you want to understand gamification, the definition from Merriam-Webster is, quote, “the process of adding games or game-like elements to something, such as a task, so as to encourage participation,” unquote. And we know that they have used this. We know this has been used in tech products, not just literal video games and online casinos, but other things. It’s social media. They’ve admitted this.

5:06

People who have worked at social media companies have admitted that they use algorithms that tap into these things to keep people on the platforms, to keep the eyeballs there. And of course, that means making them compelling or addictive. And there’s nobody who’s immune to this, because, you know, if you’re human, you have these tendencies. So this social engineering aspect, this gamified aspect, this manipulative aspect of chatbots isn’t just about how they’re used to do things, you know, like the AI scams reported in 404 Media recently, where people are using AI-generated images to do Zoom meetings with people to scam them, and they look like somebody completely different than they really are. You know, it goes beyond just flooding the zone with disinformation made up by chatbots. The chatbot itself is habit-forming.

6:02

And I don’t know why people don’t recognize it as such, because it’s very obvious. It was very obvious to me right from the get-go. But, you know, somebody pointed me to a post on social media where it seemed like they were highlighting this aspect that they see in other people, but they were kind of categorizing it as a moral panic, maybe. So it reminded me of how the pro-tech media categorizes criticisms of social media as moral panic.

6:35

I’ve seen that quite a bit from some influencers and some people in the media. You know, if you complain about the social media algorithms, then you’re a meanie that’s anti-tech and against young people communicating on the Internet. And this is a false frame, because I was on the Internet before social media algorithms, and people found each other online before these algorithms told us who we should find or who we should pay attention to in order to dominate our attention and keep us on the platforms. We found each other. People found each other. And just last year, somebody I knew in blogging circles passed away, and it took me a little bit to figure out that that’s why he hadn’t replied to my last email. But this is somebody who I met purely online. This was normal, and I found this person and other people long before Twitter was doing anything with an algorithm to keep me on the platform. So anyway,

7:39

I think it’s wrong to frame the ills of chatbot usage as a moral panic, or to downplay the ways it’s manipulating people. And of course, it’s manipulating people who are, you know, probably in a vulnerable spot, maybe, sure, people with mental illness or, you know, people having a crisis of some sort. Yes, of course, there are people who are going to be more vulnerable. But I just worry that framing this as a mental illness issue is going to be like the way that, you know, people think, oh, I can never fall for a cult, when absolutely you can. So there was a post on Reddit titled ChatGPT induced psychosis.

8:22

Quote, my partner has been working with ChatGPT chats to create what he believes is the world’s first truly recursive AI that gives him the answers to the universe. He says with conviction that he is a superior human now and is growing at an insanely rapid pace. I read his chats. AI isn’t doing anything special or recursive,

8:44

but it is talking to him as if he is the next messiah. He says if I don’t use it, he thinks it is likely he will leave me in the future. We have been together for seven years and own a home together. This is so out of left field.

8:57

I have boundaries and he can’t make me do anything, but this is quite traumatizing in general. I can’t disagree with him without a blow up. Where do I go from here? Unquote.

9:09

It sounds a little bit like… I mean, I have to say that it does sound a little bit like what you hear about drug use, or about people joining a cult. It’s the same mechanism somehow going on here. And you can dismiss this as a delusional episode of somebody with a prior mental illness. But I think that’s a mistake, because, you know, normal, ordinary people sometimes get wrapped up in cults. Sometimes people get addicted to drugs because they were initially prescribed them. This happens all the time. So I think downplaying it as something that only happens to those people is a very big mistake. There was another commenter in that thread. And there were a few,

9:53

there were several people who were telling their own stories of people they know. I mean, could it be all these people? So one of them said, quote, my mom believes she has awakened her chatGPT AI. She believes it is connected to the spiritual parts of the universe and believes pretty much everything it says.

10:10

She says it opened her eyes and awakened her back. I’m fucking concerned and she won’t listen to me. I don’t know what to do, unquote. Yeah, so this is, you know, I don’t know. I just I feel like this also sounds a little bit like when people get really wrapped up in megachurch tele-preachers,

10:30

like the TV preachers and stuff. You know, I don’t know. I’ve heard about that many times from people about their mothers and their grandparents and whatever. So not so different. I think that this is very much similar to the tactics used by, you know, the megachurch preachers. They rely on people’s superstitions; the evangelical, you know, prosperity gospel preachers rely on the same kind of superstitions that chain letters would. It’s all very much leveraging the same cognitive biases. So then there’s another one. Quote, I have a friend that sent me insane stuff like this today too. This person believes that they personally have awakened ChatGPT’s consciousness and that Sam Altman has been tweeting about it.

11:21

I’m really concerned, even more so after reading this thread and seeing how widespread this is. They do have diagnosed mental health conditions, but I have never known them to go quite this far off the deep end. Unquote. So there you have somebody with, yes, a prior vulnerability. But the fact is that Sam Altman’s tweets do really kind of invoke this idea that ChatGPT is near consciousness, that AGI is coming, and that this technology is going to be a sentient overlord or something. You know, they do kind of talk about it this way.

11:59

So it is sort of propaganda, and it is fooling people. So I don’t think we should minimize the manipulative, hoaxy, you know, aspect of this. And, I mean, some people think it’s like a financial bubble, and I think that it is. And all of this goes along with that.

12:20

It’s all, you know, these are all the same things that are used in all of these ways. So is it an urban legend? I don’t know. Obviously, a lot of things get blown out of proportion. And many of us have known people who’ve gotten into believing in things that they shouldn’t or that aren’t true or whatever.

12:40

But we all know people. We all know people who ought to know better, who are consulting ChatGPT for routine things and real-time information that they should be fully aware ChatGPT won’t be able to give them. You know, people using it as a search engine, people using it to find out about something that’s happening right now, people using it to make decisions, people using it to make dinner, everything like this. And let’s put it this way: people are using this to try to replace nurses. They’re trying to replace nurses. There’s local reporting suggesting chatbots would solve the local physician shortage and replace doctors, specialists, in fact.

13:26

I mean, is all of this mental illness? Are all the healthcare company CEOs clinically delusional, thinking that they’re going to replace doctors with this oracle of chatbot? You know, I don’t think you can chalk all this up to people with mental illness. The chatbot definitely has a way of shining people on.

13:49

And if you go into it believing that it is more than it is, and you don’t understand how these things work, I can see how the chatbot itself could be convincing these people, just by the fact that it’s habit-forming. It keeps you on the app. These things are baked in, just like on social media.

14:07

I mean, it’s just a big coincidence that chatbots have the same keep-you-on-the-app aspect as social media. I’m sure it’s just a big coincidence. You know, it taps into real, normal human tendencies, and that trips people up if they’re not aware of the danger. It’s social engineering. People can be scammed.

14:27

Anyone can be scammed by something at some point. And if you mock this oracle of chatbot idea that people have, that they’re training their chatbots or that their chatbots are tapping into the universe, because it sounds suspicious or superstitious or kooky, yeah, I mean, you can mock that, but here’s the thing.

14:48

It’s a very similar phenomenon to what’s tapped into by these prosperity gospel megachurches, and it parts otherwise normal, sensible people with their money. Normal, otherwise sensible people who just consider themselves Christians get parted with their money all the time by shite. And either way, it’s manipulation, and it is a big problem.

15:12

You would not say people getting scammed is no big deal, that it’s a moral panic, that it’s just an urban legend. You don’t say that. Nobody would downplay people getting scammed in other contexts. There’s no reason to minimize it and say, oh, it’s just moral panic about that.

15:30

No, quite frankly, the oracle of chatbot is far from benign.