
    AI and Mental Health Care: Attempts at ethical innovation being forced to compete with entertainment chatbots

    By CommonEdge Team

    Artificial Intelligence (AI) is creating new realities for diverse industries, from IT and media to education and health. Chatbots – both specialised and general-purpose – now give mental health advice, raising concerns about their long-term effects on patients and shining a light on ethical innovation practices.

    Not all AI chatbots are the same, nor are they universally harmful. Several initiatives, like Woebot, draw on predefined responses approved by clinicians and grounded in psychological research, aiming to improve users’ well-being without the use of generative AI.

    Some mental health chatbots that rely on generative AI are also in development. Early research shows that keeping users safe and reducing risks require substantial input from clinicians and careful planning.

    Experts note that mental health chatbots can be valuable tools for addressing the global mental health crisis. To be useful and effective, they must be developed collaboratively with behavioural health experts, grounded in psychological research, and rigorously tested for safety first.
    However, these attempts at ethical innovation are being forced to compete with entertainment chatbots posing as therapists.

    In the US, a mother is suing Character.AI, claiming its chatbot engaged in an emotionally and sexually abusive relationship with her 14-year-old son, leading to his suicide. This isn’t the first harrowing incident to emerge from an AI chatbot–human relationship, and it highlights the urgent need for the ethical development of technology and regulation of its use.

    Chatbots like Character.AI are entertainment products that weren’t designed with mental health support in mind. Instead, they’re built to keep users engaged for as long as possible, mining their data for profit. In this way, they can give unsuspecting users a convincing impression of care and intelligence while repeatedly affirming them, even in misguided or harmful ways – unlike trained therapists.

    Concerned about these developments, the American Psychological Association (APA) met with US federal regulators earlier this year. It called for public education on the limitations of chatbots, in-app safeguards that can connect people in crisis with help, clear guidelines for new technologies, and enforcement when companies deceive or endanger their users.

    What’s happening in Sri Lanka?
    In 2022, the Sri Lankan chapter of Omdena pilot-tested a mental hygiene chatbot for three months. Vidura Wijekoon, AI mentor and engineer and former Sri Lanka Chapter Lead for Omdena, says the team noticed a need for mental health services on Sri Lankan social media after observing stress and anxiety among users.

    “Usually people don’t have a private therapist and people they can privately communicate with. That’s where this application is coming to play to address this issue.”

    To develop the chatbot, the team consulted mental health practitioners to understand how patients are assessed and how support is tailored to different scenarios. The design includes disclaimers informing users of its limitations, along with information about mental health helplines in the country.

    Wijekoon and his team are keen to improve their technology, especially its use of generative AI.

    Consultant psychiatrist Dr. Pushpa Ranasinghe says that while chatbots providing mental health care can help doctors manage patients, there must be adequate regulatory oversight and safeguards in place.

    “Patients should not be misled by false promises, and technologies must be culturally sensitive,” she adds.

    Another senior mental health expert notes that although technologies such as mental health chatbots promise equitable mental health care for countries like Sri Lanka, the reality could be more complex.

    “Most children in Sri Lanka don’t have personal access to mobile phones as they often use their parents’ phones. There’s no equal internet coverage across the country. In that context, these chatbots may then once again only benefit people from higher economic backgrounds, leaving others behind.”

    This article was written by Pamodi Hewawaravita. Feature image generated by Sora.