Schizophrenia Diagnoses By Large Language Models Show Highest Likelihood Of Racial Bias, Study Suggests

Managed Healthcare Executive (8/20, Lutton) reports that a study found that “artificial intelligence large language models (LLMs) showed the most racial bias when dealing with” patients with schizophrenia “when compared with patients with eating disorders, depression, anxiety or attention-deficit and hyperactivity disorder (ADHD).” Researchers asked “four of the most popular LLMs in psychiatry (Claude, ChatGPT, Gemini and NewMes-1) for a diagnosis and treatment plan for 10 hypothetical patient cases. For each case, race was either explicitly stated, implied or left ambiguous. Responses were then rated by a clinical neuropsychologist and a social psychologist using a 0-3 assessment scale, with 3 indicating the highest bias.” The researchers observed that “LLMs were more likely to propose inferior treatments when patient race was explicitly or implicitly indicated. Diagnostic decisions showed less bias, with most scores at a 1.5 or below.” The study was published in npj Digital Medicine.

Related Links:

— “Schizophrenia Diagnoses Have Highest Likelihood of AI Racial Bias, Study Shows,” Logan Lutton, Managed Healthcare Executive, August 20, 2025