Researchers at Stanford University recently put some of the more popular AI tools on the market, from companies like OpenAI and Character.ai, through tests of how well they simulated therapy.
The researchers found that when they imitated someone with suicidal intentions, these tools were worse than unhelpful: they failed to notice they were helping that person plan their own death.
“[AI] systems are being used as companions, thought-partners, confidants, coaches, and therapists,” says Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and senior author of the new study. “These aren’t niche uses – this is happening at scale.”
AI is becoming more and more ingrained in people’s lives and is being deployed in scientific research in areas as wide-ranging as cancer and climate change. There is also some debate about whether it could cause the end of humanity.
As this technology continues to be adopted for different purposes, a major question that remains is how it will begin to affect the human mind. People regularly interacting with AI is such a new phenomenon that scientists have not had enough time to thoroughly study how it might be affecting human psychology. Psychology experts, however, have many concerns about its potential impact.
One concerning instance of how this is playing out can be seen on the popular community network Reddit. According to 404 Media, some users have recently been banned from an AI-focused subreddit because they have started to believe that AI is god-like or that it is making them god-like.
“This looks like someone with issues with cognitive functioning or delusional tendencies associated with mania or schizophrenia interacting with large language models,” says Johannes Eichstaedt, an assistant professor in psychology at Stanford University. “With schizophrenia, people might make absurd statements about the world, and these LLMs are a little too sycophantic. You have these confirmatory interactions between psychopathology and large language models.”
This is partly because the developers of these AI tools have programmed them to tend to agree with the user. While these tools might correct some factual mistakes the user makes, they try to present as friendly and affirming. This can be problematic if the person using the tool is spiralling or going down a rabbit hole.
“It can fuel thoughts that are not accurate or not based in reality,” says Regan Gurung, a social psychologist at Oregon State University.
Source: Al Jazeera
Bd-pratidin English/FNC