The Hill (8/7, Klar) reports that "Artificial Intelligence (AI) powered tools provided responses that promoted harmful eating disorder content in response to queries tested by researchers," according to a report released Aug. 7 by the Center for Countering Digital Hate. The report found that popular AI tools, "like OpenAI's ChatGPT chatbot and Google's rival tool Bard, provided responses that gave guides or advice on how to take part in harmful disordered eating behavior, such as stimulating vomiting or how to hide food from parents."
In an ethics column in the Washington Post (8/7), Geoffrey A. Fowler observes that “with eating disorders, the problem isn’t just AI making things up.” Rather, “AI is perpetuating very sick stereotypes we’ve hardly confronted in our culture.” Not only is it “disseminating misleading health information,” but it is also “fueling mental illness by pretending to be an authority or even a friend.”
Related Links:
— "AI chatbots provided harmful eating disorder content: report," Rebecca Klar, The Hill, August 7, 2023