At a glance
- AI chatbots fall short in mental health care: A study from Brown University found that large language models often fail to meet the ethical standards expected in professional psychotherapy.
- Ethical risks in simulated therapy sessions: When tested in counselling scenarios, AI systems sometimes mishandled crises, reinforced harmful beliefs, and produced responses that appeared empathetic without true understanding.
- Need for stronger oversight and standards: Researchers say clearer ethical guidelines, accountability, and regulation are needed before AI chatbots can be safely relied upon for mental health support.
As more people turn to tools like ChatGPT and other large language models (LLMs) for mental health advice, new research suggests these systems may not yet be ready to safely fill that role. A study by researchers at Brown University found that AI chatbots often fail to meet the ethical standards expected in professional psychotherapy, even when they are prompted to follow established therapeutic approaches.
The post Are AI Therapy Chatbots Safe? New Study Raises Ethical Concerns first appeared on MQ Mental Health Research.