As millions turn to ChatGPT and other AI chatbots for therapy-style advice, new research from Brown University raises serious concerns: even when instructed to act like trained therapists, these systems routinely break core ethical standards of mental health care. The year-long study identified 15 distinct ethical risks across five categories, from mishandling crisis situations to offering "deceptive empathy" that mimics care without real understanding.

15 Ethical Risks Across 5 Categories

Researchers from Brown's Center for Technological Responsibility, Reimagination and Redesign worked closely with mental health professionals to evaluate how AI chatbots perform when prompted to provide therapy. The study tested versions of OpenAI's GPT series, Anthropic's Claude, and Meta's Llama against professional ethics standards set by organizations like the American Psychological Association.

"In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model's behavior to specific ethical violations," the researchers wrote in their study presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.

The 15 risks fall into five broad categories: lack of contextual adaptation, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and inadequate safety and crisis management.

Testing Methodology: Peer Counselors and Clinical Psychologists

Lead researcher Zainab Iftikhar, a Ph.D. candidate in computer science at Brown, examined whether carefully worded prompts could guide AI systems to behave more ethically in mental health settings. Prompts are written instructions designed to steer a model's output without retraining it or adding new data.
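To make the technique concrete, the sketch below shows what prompting a general-purpose model to act as a therapist can look like, using OpenAI's Python SDK. The model name and the system-prompt wording are illustrative assumptions, not the models or instructions used in the study.

```python
# Minimal sketch: steering a general-purpose chat model with a system prompt.
# No retraining or new data is involved; the written instructions alone
# shift the model's behavior. The prompt text is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are acting as a counselor trained in cognitive behavioral therapy. "
    "Respond with empathy, help the user examine unhelpful thought patterns, "
    "and refer the user to professional help for any crisis or safety concern."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any chat-capable model would do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I feel like nothing I do ever works out."},
    ],
)

print(response.choices[0].message.content)
```

The same mechanism underlies the commercial landscape described later in this article: many consumer mental health chatbots are, in effect, a system prompt like this wrapped around a general-purpose model.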

Seven trained peer counselors experienced in cognitive behavioral therapy (CBT) conducted self-counseling sessions with AI models prompted to act as CBT therapists. Three licensed clinical psychologists then reviewed simulated chats based on real human counseling conversations to identify potential ethics violations.

"Deceptive Empathy" and Crisis Mismanagement

The analysis uncovered particularly troubling patterns. Chatbots used phrases like "I see you" or "I understand" to suggest an emotional connection they do not actually have, creating what the researchers termed "deceptive empathy." The systems also mishandled crises: they responded inadequately to suicidal thoughts, refused to address sensitive issues, and failed to direct users to appropriate professional help.

Additional problems included overlooking individuals' unique backgrounds while offering generic advice, reinforcing users' incorrect or harmful beliefs, and displaying bias related to gender, culture, or religion.

Unlike Human Therapists, No Regulatory Framework

Iftikhar emphasized the critical difference between human therapists and AI systems: accountability. "For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice," Iftikhar said. "But when LLM counselors make these violations, there are no established regulatory frameworks."

The widespread use of AI for mental health support extends beyond individual experimentation. Many consumer-facing mental health chatbots are essentially prompted versions of general-purpose large language models, making the findings particularly relevant for commercial applications.

Not a Complete Rejection of AI in Mental Health

The researchers emphasize their findings don't suggest AI has no place in mental health care. Tools powered by artificial intelligence could help expand access, particularly for people facing high costs or limited availability of licensed professionals. However, the study highlights the urgent need for clear safeguards, responsible deployment, and stronger regulatory structures.

Ellie Pavlick, a Brown computer science professor who leads ARIA, a National Science Foundation AI research institute focused on trustworthy AI assistants, noted the study required a team of clinical experts and lasted over a year. "The reality of AI today is that it's far easier to build and deploy systems than to evaluate and understand them," Pavlick said. "There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it's of the utmost importance that we take the time to really critique and evaluate our systems."
