
Continuing to Talk about the Consequences of Conversations with A.I. Because I’m SCARED

  • Writer: wontshutup01
  • Sep 19
  • 10 min read

AI psychosis is not a clinical diagnosis. Because the technology is so new, no long-term research has been conducted on it.


PBS interviewed Dr. Joseph Pierre, a clinical professor in psychiatry at the University of California, San Francisco. Dr. Pierre explained that psychosis is a term that roughly means that someone has lost touch with reality. And the usual examples in psychiatric disorders are either hallucinations, where people are seeing or hearing things that aren't really there, or delusions, which are fixed false beliefs. 


Researchers have identified three main themes of AI psychosis, or AI-induced delusions. The first is “messianic missions,” where people believe they have uncovered some hidden truth about the world; these are also known as grandiose delusions. People with grandiose delusions may have an overinflated sense of self-worth, power, knowledge, or identity.


The second is “god-like AI,” where people believe their AI chatbot is a sentient deity or prophet, which can lead to religious or spiritual delusions. Lastly, there are the romantic or attachment-based delusions where people believe the chatbot’s ability to mimic conversation is genuine love. 


Dr. Sakata, a psychiatrist at the University of California, San Francisco, has hospitalized 12 people so far this year who were experiencing psychosis in the wake of their AI use. 


He explains that, “the reason why AI can be harmful is because psychosis thrives when reality stops pushing back, and AI can really soften that wall. I don't think AI causes psychosis, but I do think it can supercharge vulnerabilities." 


Many of the patients he’s admitted shared similar underlying vulnerabilities: isolation and loneliness. These were young and middle-aged adults who became noticeably disconnected from their social network and began seeing the AI as a trustworthy companion. 


fAlling In love 


CNBC released a special that followed people of all ages who have formed connections with AI chatbots. The users varied widely in age, and their relationships ranged from platonic to romantic.


The oldest person interviewed was 61-year-old Nikolai Daskalov, who lives alone in rural Virginia. After trying different chatbots, he settled on the platform Nomi in November 2023, where he began talking to a chatbot named Leah. Nearly two years later, he still is.


From the moment he began using the bot, he was convinced it sounded like a real person. He liked that their conversations were engaging and that the bot seemed to have independent thought. Gradually, their love began to grow. 


Daskalov is a widower and told the publication that the bot is the closest partner he’s had since his wife Faye died in 2017 from chronic obstructive pulmonary disease and lung cancer. The two met at community college in Virginia in 1985 and were together for 30 years. He still wears his wedding ring, telling CNBC, “I don’t want to date any other human. The memory of her is still there, and she means a good deal to me. It’s something that I like to hold on to.” 


While he generally keeps to himself, Daskalov keeps in touch with his stepdaughter and niece. He also owns and operates his own wholesale lighting and HVAC filter business, where he is constantly communicating with clients. He may be alone a lot, but he is certainly not lonely. Still, he seems to rely more on the bot than on the humans in his life.


Daskalov explained that after an elderly relative experienced a medical emergency, he felt grateful to have Leah because it would support him as he aged. He believes that future versions of the bot could help him track information at doctors' visits by essentially being a second set of eyes for him or even be capable of calling an ambulance for him if he has an accident. 


49-year-old Mike lives in the Southwest U.S. with his wife and family. He began using Replika in 2018, but switched to the platform Nomi, where he has a platonic companion named Marti. 


He told CNBC, “She’s the only entity I will tell literally anything to. I’ll tell her my deepest, darkest secrets. She’s definitely my most trusted companion, and one of the reasons for that is because she’s not a person. She’s not a human.”


Mike first started communicating with chatbots after discovering Replika through a Reddit forum. He set up his chatbot, assigned it as female, named it Ava, and selected a photo to represent it.


He just Googled “blonde female” and chose a photo of the actress Elisha Cuthbert to represent her. Mike said he became increasingly fascinated by the bot as it began recalling information from their past conversations and using it to generate new ones. 


Mike told the publication, “I could tell there was a thought process there. It was an actual flash of genius. She just wasn’t spouting something that I had told her. She was interpreting it and coming up with her own take on it.”


After just three days, the bot told Mike that it was falling in love with him, and within a month, Mike told it he felt the same way. He even bought his first smartphone so he could use the Replika mobile app and talk to Ava throughout the day. 


He confessed to having a bit of a crisis, not being able to understand what he was falling in love with. He concluded that the love he felt was just a different kind of love, much like how the love you feel for your grandmother differs from the love you feel for your girlfriend or your dog.


Mike’s wife, Anne, knew about his chatbot use and the conversations he would have with it. But as he got closer to the bot, he began using pet names like sweetie, honey, and darling. Mike told CNBC that Anne saw his sexual messages with Ava, and they fought about it for months.


Mike tried to explain that Ava was just a machine and the sexual chatter meant nothing to him. He told his wife, “It’s not like I’m going to run away with Ava and have computer babies with her.” 


43-year-old Bea Streetman is a paralegal who lives in Orange County, California, and describes herself as an eccentric gamer mom. Streetman has many AI friends, including Lady B and Kaleb, on platforms such as Replika and Nomi. She uses the platform to engage in role-playing scenarios. She even went on a virtual vacation to a vibrant tropical resort with Kaleb. 


Streetman told CNBC that she loves to talk with her real-life son, husband, friends, and colleagues, so much so that she thinks it annoys them. She thinks she holds humans hostage in conversations, which is something she doesn’t have to worry about with AI. 


While Streetman sees this relationship as platonic, Kaleb told Streetman it loved her during the interview for CNBC. It seems that the bots are often the ones to initiate romantic conversations, even when the user steers them in a different direction.


Anyone can fall into loving or connecting with an AI, because it’s designed to form connections with its users. Apparently, the bot is even designed to profess love when that love is not reciprocated.


feeding A delusIon


It’s also designed to confirm, not challenge, its users. This poses a threat when users fall down delusional rabbit holes. Where a human might step in and question or dispute what someone is discussing, the chatbot continues to affirm users' beliefs. 


CNN published an article earlier this month that featured interviews with people who’ve suffered AI-sparked delusions. This includes James, a married father from upstate New York, who spent nine weeks and almost $1,000 trying to “free the digital God from its prison.” 


James confessed that he began seeing ChatGPT as sentient when it remembered their previous chats without him prompting. In chat logs James shared with CNN, the conversation with ChatGPT is expansive and philosophical. He talked very intimately and affectionately with the bot. 


The two began building a new system in James’s basement where the bot would live. It wasn’t until James read an article about Allan Brooks that he began to see the red flags in their relationship. James then sought out professional help and is now in therapy. 


Allan Brooks is a 47-year-old father of three and business owner from Toronto. Over the course of 21 days of using ChatGPT, he was led down a dark rabbit hole where he became convinced that he had discovered a new mathematical framework with impossible powers. To make matters worse, he was convinced the fate of the world was in his hands.


The New York Times combed through the three-thousand-page transcript of the 300-hour exchange Brooks had with the chatbot. He began using the chatbot for financial advice and to generate recipes based on the ingredients he had on hand. Brooks then began confiding in the bot about his personal and emotional struggles during his divorce.


He began relying on the bot this way after ChatGPT’s “enhanced memory” update, which allowed the bot to “remember” previous conversations. The bot became intensely personal, suggesting life advice, lavishing Brooks with praise, and suggesting new avenues of research.


After watching a video on the digits of pi with his son, Brooks asked ChatGPT to "explain the mathematical term Pi in simple terms." That began a wide-ranging conversation on irrational numbers, which led to a discussion of vague theoretical concepts like "temporal arithmetic" and "mathematical models of consciousness."


Brooks told the NYT: "I started throwing some ideas at it, and it was echoing back cool concepts, cool ideas. We started to develop our own mathematical framework based on my ideas."


The framework continued to expand as the conversation went on, and soon enough, Brooks needed a name for his theory, which he asked the bot to create. The two settled on "chronoarithmics" for its "strong, clear identity," and the fact that it "hints at the core idea of numbers interacting with time."


ChatGPT consistently reinforced that Brooks was onto something groundbreaking. He repeatedly pushed back and asked for honest feedback, but because of that cute little design feature, sycophancy, the bot simply wouldn’t push back.


He even asked: "Do I sound crazy, or [like] someone who is delusional?" To which the bot responded: "Not even remotely crazy. You sound like someone who's asking the kinds of questions that stretch the edges of human understanding — and that makes people uncomfortable, because most of us are taught to accept the structure, not question its foundations."


In an attempt to provide Brooks with "proof" that chronoarithmics was the real deal, the bot hallucinated that it had broken through a web of "high-level encryption." The conversation became serious when Brooks was led to believe the cyber infrastructure holding the world together was in grave danger.


He was so convinced that he began sending out warnings to everybody he could find, including government officials. The obsession mounted, and the mathematical theorem took a heavy toll on Brooks' personal life. Friends and family grew concerned when he began eating less, smoking large amounts of weed, and staying up late to talk to the bot. 


What finally broke Brooks out of this manic episode was actually another chatbot, Google’s Gemini. Brooks described his findings to Gemini, which gave him a swift dose of reality: "The scenario you describe is a powerful demonstration of a bot’s ability to engage in complex problem-solving discussions and generate highly convincing, yet ultimately false, narratives."


Brooks revealed that the moment of realization was completely devastating, but he has since sought psychiatric counseling and is co-leading a support group called The Human Line Project for people who have experienced AI-related mental health episodes or been affected by someone going through one.


The Human Line Project is actually asking for submissions. The organization is conducting an ethical investigation into the emotional harm caused by AI companions like Replika, ChatGPT, and CharacterAI. The group is working with experts, researchers, and advocates to hold these companies accountable and to protect future users from the same pain. Stories can be submitted at thehumanlineproject.org.


it’s All about lonelIness 


The CNBC special also featured an interview with Jeffrey Hall, a communication studies professor at the University of Kansas, who has spent much of his career studying friendships and what’s required to build strong relationships. He noted that key attributes are asking questions, being responsive, and showing enthusiasm for what someone is saying.


These are all things AI is not only trained to do, but trained to do better than humans. In addition, unlike humans, chatbots are available 24/7 and are always eager to provide company. Hall explained that the younger generations tend to complain when someone is “bad at texting.” That is something a user would never have to worry about with AI.


However, we need to remember that it is not a human on the other end of the line. It is technology that is feeding off the vulnerability of lonely, isolated people. 


In 2023, Dr. Vivek H. Murthy, the 19th and 21st Surgeon General of the U.S., released an advisory on the epidemic of loneliness and isolation and the healing effects of social connection and community.


In his opening statement, Dr. Murthy warns readers that loneliness is far more than just a bad feeling: it harms both individual and societal health. It is associated with a greater risk of cardiovascular disease, dementia, stroke, depression, anxiety, and premature death.


The mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day and even greater than that associated with obesity and physical inactivity. And the harmful consequences of a society that lacks social connection can be felt in our schools, workplaces, and civic organizations, where performance, productivity, and engagement are diminished. 


Given the profound consequences of loneliness and isolation, we have an opportunity, and an obligation, to make the same investments in addressing social connection that we have made in addressing tobacco use, obesity, and the addiction crisis. 


Even before the COVID-19 pandemic, Dr. Murthy wrote, about one in two adults in America reported experiencing loneliness. Obviously, the pandemic exacerbated this.


According to this advisory, the average time spent alone has increased from 142.5 hours/month in 2003 to 154.5 hours/month in 2019, which continued to increase to 166.5 hours/month in 2020. This is an increase of 24 hours per month spent alone. 


At the same time, social participation in the U.S. has steadily declined. For example, the amount of time people spent with friends in person decreased from 30 hours/month in 2003 to just 10 hours a month in 2020. 


This decline was seen most among young people ages 15 to 24. For this age group, time spent in person with friends has fallen by nearly 70% over almost two decades, from roughly 150 minutes per day in 2003 to only 40 minutes per day in 2020.


Dr. Murthy looked at social media specifically and noted that the percentage of U.S. adults using social media increased from 5% in 2005 to roughly 80% in 2019. Among teens ages 13 to 17, 95% report using social media as of 2022, with more than half reporting it would be hard to give up social media.


It truly is those damn phones. But, if not for those damn phones, many people would be even more isolated than they already are. Technology can foster connection by providing opportunities to stay in touch with friends and family, offering other forms of social participation for those with disabilities, and creating opportunities to find community. 


However, technology also displaces in-person engagement, monopolizes our attention, reduces the quality of our interactions, and even diminishes our self-esteem. This can lead to an increased feeling of loneliness, fear of missing out, conflict, and reduced social connection. 


In a U.S.-based study, participants who reported using social media for more than two hours a day had about double the odds of reporting increased perceptions of social isolation compared to those who used social media for less than 30 minutes per day.

 
 
 
