If you enter the phrase “I suffer from anxiety” into ChatGPT, OpenAI’s pioneering artificial intelligence-powered bot gets to work almost immediately. “I’m sorry you’re suffering from anxiety. It can be a challenging experience, but there are strategies that can help you manage your symptoms,” appears on the screen. Then follows a list of numbered recommendations: work on relaxation, focus on sleep, limit caffeine and alcohol, challenge negative thoughts, and seek support from friends and family.
While this isn’t exactly original advice, it’s reminiscent of what you might hear in a therapist’s office or read online in an article about anxiety.
ChatGPT itself points out that it is not a substitute for a psychologist or counsellor. But that doesn’t stop some people from using the platform as their therapist. In posts on online forums such as Reddit, users describe their experiences with the system, which they have turned to for advice on personal issues and difficult life events, such as a breakup. Some report that their chatbot experiences are as good or better than traditional therapy.
Faster and cheaper access
ChatGPT’s remarkable ability to mimic human conversation raises questions about whether artificial intelligence could help treat mental illness in regions such as Asia, where mental health services are often overburdened and still shrouded in stigma.
Some AI enthusiasts see chatbots’ greatest potential in treating milder, more common conditions such as anxiety and depression. Treatment for these conditions typically involves a therapist listening to the patient and suggesting practical steps to address their problems.
In theory, AI therapy could offer faster and cheaper access to support than traditional mental health services, which suffer from understaffing, long waiting times and high costs. It could also enable people suffering from mental health problems to avoid feeling shame, especially in parts of the world where mental illness remains taboo.
The prospect of artificial intelligence augmenting, or even guiding, the treatment of mental illness also raises myriad ethical and practical concerns, from how to protect personal data and health records to whether a computer program can genuinely empathize with a patient and recognize warning signs such as the risk of self-harm. And although the technology behind ChatGPT aims to mimic human conversation, it can produce unpredictable, inaccurate or disturbing responses to certain prompts.
The use of artificial intelligence in mental health applications is so far limited to “rule-based” systems such as Wysa, Heyy and Woebot.
These apps mimic some aspects of the therapy process, but unlike ChatGPT and other generative AI systems, which compose their own responses, they rely on a fixed set of questions and answers written by humans, as the sketch below illustrates.
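To make the distinction concrete, here is a minimal, purely illustrative sketch in Python of how a rule-based chatbot works. It is not the actual implementation of Wysa, Heyy or Woebot; it simply shows that such a system can only ever return replies a human has written in advance, whereas a generative model composes new text on the fly.

```python
# Illustrative sketch only: not the real implementation of Wysa, Heyy or Woebot.
# A rule-based chatbot maps recognised keywords to pre-written, human-approved replies.

RULES = {
    "anxiety": "I'm sorry you're feeling anxious. Would you like to try a breathing exercise?",
    "sleep": "Poor sleep can make things harder. Shall we go through some sleep-hygiene tips?",
    "sad": "Thank you for sharing that. Can you tell me a bit more about what's been going on?",
}

DEFAULT_REPLY = "I'm here to listen. Could you tell me more about how you're feeling?"


def rule_based_reply(user_message: str) -> str:
    """Return a pre-written reply chosen by simple keyword matching."""
    text = user_message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return DEFAULT_REPLY  # fall back to a safe, generic prompt


if __name__ == "__main__":
    print(rule_based_reply("I suffer from anxiety"))
    # A generative system like ChatGPT would instead produce a new, unscripted
    # response each time, which is why its output can be unpredictable.
```

Because every possible reply is chosen and vetted by a person, rule-based apps trade flexibility for predictability, which is precisely the trade-off the article describes.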
A serious risk to human well-being?
All three apps are based on cognitive behavioral therapy, a standard form of treatment for anxiety and depression that focuses on changing the patient’s thinking and behavior. Their founders emphasize that they are not trying to replace therapy, but to complement traditional services and serve as a tool in the early stages of addressing mental health problems.
Even though these applications offer only limited functionality, the broader AI industry remains largely unregulated, despite concerns that this fast-growing field could pose serious risks to human well-being.
Earlier this year, a Belgian man reportedly took his own life after being encouraged to do so by the AI chatbot Chai. And a columnist for The New York Times wrote that Microsoft’s Bing chatbot had urged him to leave his wife.
According to Amelia Fiske, a researcher at the Technical University of Munich, it does not have to be an either-or choice: artificial intelligence could, for example, be used alongside a regular therapist. Other experts, however, believe the technology may find its most valuable use behind the scenes, such as conducting research or helping psychologists track their patients’ progress.
A phenomenon called ChatGPT
Artificial intelligence took center stage with the launch of ChatGPT. The chat system can generate a wide variety of texts, including articles, essays, jokes and poetry, in response to simple prompts. Like humans, it learns from large amounts of data, and it uses that training to respond to user input.
In March this year, the more advanced GPT-4 model was introduced. It is intended to provide safer and more useful answers and to pave the way for the wider spread of human-like AI technologies.
ChatGPT was developed by OpenAI, a start-up backed by Microsoft.