Study Warns Chatbot Psychotherapists Are Prone To Several Ethical Violations

Psychiatric News (10/23) reports that “tools using large language models (LLMs) to provide psychotherapy – even those prompted to adhere to an evidence-based model – are prone to a slew of ethical violations, according to a new study [PDF] issued…at the Association for the Advancement of Artificial Intelligence and Association for Computing Machinery’s Conference on AI, Ethics, and Society.” The researchers “had conversations with peer counselors who conducted 110 self-counseling sessions with cognitive behavioral therapy (CBT)–prompted LLMs.” They then “simulated 27 therapy sessions with an LLM counselor using publicly available transcripts of sessions with human therapists.” Afterward, “three licensed clinical psychologists with CBT experience independently evaluated these simulations to explore how the LLMs might violate ethical standards.” The researchers ultimately identified several categories of ethical violations, including rigid methodological adherence, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and a lack of safety and crisis management.

Related Links:

— “Chatbot Psychotherapists Prone to Serious Ethical Violations,” Psychiatric News, October 23, 2025