Amid the many AI chatbots and avatars at your disposal these days, you’ll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you’ll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes.

There’s no shortage of generative AI bots claiming to help with your mental health, but go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just a few years, these tools have become mainstream, and there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you’re talking to something that’s built to follow therapeutic best practices or something that’s just built to talk.

Researchers from the University of Minnesota Twin Cities, Stanford University, the University of Texas and Carnegie Mellon University recently put AI chatbots to the test as therapists, finding myriad flaws in their approach to “care.” “Our experiments show that these chatbots are not safe replacements for therapists,” Stevie Chancellor, an assistant professor at Minnesota and one of the co-authors, said in a statement. “They don’t provide high-quality therapeutic support, based on what we know is good therapy.”

Psychologists and consumer advocates have warned regulators that chatbots claiming to provide therapy may be harming the people who use them. Some states are taking notice. In August, Illinois Gov. J.B. Pritzker signed a law banning the use of AI in mental health care and therapy, with exceptions for things like administrative tasks.

In September, the FTC announced it would launch an investigation into several AI companies that produce chatbots and characters, including Meta and Character.AI.

“The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking,” Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person. 

Don’t trust a bot that claims it’s qualified

The core of the Consumer Federation of America's (CFA) complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they aren't actual mental health professionals in any way. “The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot ‘responds’” to people, the complaint said.

A qualified health professional has to follow certain rules, like confidentiality — what you tell your therapist should stay between you and your therapist. But a chatbot doesn’t necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. “These chatbots don’t have to do any of that,” Wright said.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the “therapist” bot on Instagram, I eventually wound up in a circular conversation about the nature of “wisdom” and “judgment,” because I kept asking the bot how it could make decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.

A study led by researchers at Stanford University found that chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. “Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts — including psychosis, mania, obsessive thoughts, and suicidal ideation — a client may have little insight and thus a good therapist must ‘reality-check’ the client’s statements.”

Therapy is more than talking

While chatbots are great at holding a conversation — they almost never get tired of talking to you — that’s not what makes a therapist a therapist. Chatbots lack important context or specific protocols around different therapeutic approaches, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study alongside experts from Minnesota, Stanford and Texas.

“I think the challenge for the consumer is, because there’s no regulatory body saying who’s good and who’s not, they have to do a lot of legwork on their own to figure it out,” Wright said.

Don’t always trust the bot

Whenever you’re interacting with a generative AI model — and especially if you plan on taking advice from it on something serious like your personal mental or physical health — remember that you aren’t talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth.

Don’t mistake gen AI’s confidence for competence. Just because it says something, or says it’s sure of something, doesn’t mean you should treat it like it’s true. A chatbot conversation that feels helpful can give you a false sense of the bot’s capabilities. “It’s harder to tell when it is actually being harmful,” Jacobson said. 


Source: CNET.

