Stanford University study finds AI-based therapy has ‘significant risks’


AI therapy session

Psychotherapy remains unaffordable for many people, so it is hardly surprising that increasing numbers are turning to AI therapists for help.

It is not just cost that drives people to seek therapy from artificial intelligence. Demand for therapy is high and waiting lists can be long. Some clients also struggle to access the help they need because they live in a remote location or have an unreliable internet connection. And the persistent stigma attached to seeking therapy can stop people from reaching out to a qualified, human therapist at all.

So, AI therapy is on the rise. But is it any good? Yes. And no. It’s complicated.

While AI-based therapy offers clear benefits on the face of things, there are also areas of serious concern. This is something highlighted by researchers at Stanford University. In a paper entitled Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers, the researchers warn against seeing large language models (LLMs) as a replacement for therapists.

The findings show that AI therapists have numerous shortcomings that count against them. As well as offering unhelpful or even dangerous advice, the bias inherent in LLMs is simply not acceptable in the therapy room.

But while the study concludes that AI therapists are simply not fit for purpose, they are tools that could be helpful in other ways in clinical therapy.

One of the authors of the study, Nick Haber, an assistant professor at the Stanford Graduate School of Education, says:

LLM-based systems are being used as companions, confidants, and therapists, and some people see real benefits. But we find significant risks, and I think it’s important to lay out the more safety-critical aspects of therapy and to talk about some of these fundamental differences. 

In tests that were performed as part of the research, AI chatbots were found to show stigma towards clients with schizophrenia or addiction problems. A total of five chatbots were tested, including Character.ai bots.

What is worrying is that things do not seem to be improving. Lead author of the paper, Jared Moore, a PhD candidate in computer science at Stanford University, says:

Bigger models and newer models show as much stigma as older models. The default response from AI is often that these problems will go away with more data, but what we’re saying is that business as usual is not good enough.

Of particular concern was how chatbots dealt with clients talking about suicidal thoughts.

But previous work has found scope for AI to be helpful in the therapeutic journeys of some clients, such as those working through certain types of trauma.

The authors of the Stanford research paper stress the need for nuance. They say that AI may have a place in therapy, but it is not yet entirely clear what that role is. Haber told the Stanford Report:

This isn’t simply ‘LLMs for therapy is bad,’ but it’s asking us to think critically about the role of LLMs in therapy. LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be.

The report is available to read in full here (PDF).

Image credit: Alphaspirit / Dreamstime.com





