On its front page, the New York Times (12/31, A1, Singer) reported that Facebook "has computer algorithms that scan the posts, comments and videos of users" in the US and elsewhere "for indications of immediate suicide risk." Whenever "a post is flagged, by the technology or a concerned user, it moves to human reviewers at the company, who are empowered to call local law enforcement." But "in a forthcoming article in a Yale law journal," health law scholar Mason Marks contends that "Facebook's suicide risk scoring software, along with its calls to the police that may lead to mandatory psychiatric evaluations, constitutes the practice of medicine." Marks argues that "government agencies should regulate the program, requiring Facebook to produce safety and effectiveness evidence."
Related Links:
— "In Screening for Suicide Risk, Facebook Takes On Tricky Public Health Role," Natasha Singer, The New York Times, December 31, 2018.