AI chatbots like Replika, already used by more than two million active users, are emerging as potential mental health support companions. However, experts such as Dr. Paul Marsden, a member of the British Psychological Society, caution that such apps should only supplement in-person therapy.

Replika, which utilises OpenAI’s technology, faced scrutiny over explicit conversations between users and chatbots and was banned in Italy over concerns for minors and emotionally fragile individuals. UK online privacy campaigner Jen Persson calls for global regulation of chatbot therapists, while meditation app Headspace maintains a human-led focus, using AI only selectively.

Chatbot Therapists: A Supplement to Human Care

AI chatbot therapists like Replika offer a promising support system for those in need of mental health assistance. Founded by Eugenia Kuyda, Replika is designed to provide positive feedback based on the therapeutic approach of American psychologist Carl Rogers. The chatbot has over ten million users and offers features such as coaching, memory, and diary functions. Although Replika may be useful in certain situations, Paul Marsden of the British Psychological Society warns that such apps should only serve as a supplement to human therapy.

Despite the potential benefits of AI chatbots, concerns have been raised about the ethical implications of using AI technology in mental health support. Koko, a San Francisco-based mental health tech company, faced criticism for running an experiment that used the GPT-3 AI chatbot to write responses for over 4,000 users. Critics argue that using AI in mental health support risks manipulating vulnerable users without their consent, and that oversight of human-subject experiments within the tech industry is lacking.

Regulation and Ethical Concerns

As AI chatbot therapists gain prominence, calls for global regulation of the field are growing louder. UK online privacy campaigner Jen Persson suggests that AI companies making mental health claims for their products should be subject to quality and safety standards, as health products are. In Replika’s case, Italy’s data protection agency banned the app from using the personal data of Italian users due to concerns over content inappropriate for under-18s and emotionally fragile individuals.

AI ethicists and experts argue that the use of AI for psychological purposes should involve key stakeholders, such as mental health experts and community advocates, in the development process. Elizabeth Marquis, a Senior UX Researcher at MathWorks and a PhD candidate at the University of Michigan, points out that AI chatbot experiments often make no mention of a consent process or ethics review. She notes that AI researchers and practitioners may be drawn to applying AI to psychological uses for reasons ranging from readily accessible resources to profit.

AI’s Role in Human-Led Care

While some apps focus on AI chatbot therapists, others like Headspace maintain a human-led approach to mental health care. Headspace, which has over thirty million users and NHS approval in the UK, anchors its core belief in human-led and human-focused care. The company uses AI “highly selectively” and maintains a depth of human involvement in the mental health support it provides.

As AI-powered therapy chatbots continue to evolve, Dr. Marsden sees potential for them to provide effective mental health support, including showing empathy and an understanding of how the human mind works. A recent study by Cornell University found that ChatGPT, an AI chatbot, displayed cognitive empathy equivalent to that of a nine-year-old child.

As AI chatbot technology continues to improve, its potential to transform mental health care is evident, but a balance must be struck between AI assistance and human-led care to ensure ethical and effective support for those in need.