Study: Meta failing to moderate Instagram, circulating self-harm content to teens
A new study has found that self-harm content is thriving on Instagram due to parent company Meta’s “extremely inadequate” moderation efforts.
The report, conducted by Danish watchdog organization Digitalt Ansvar (whose name translates to Digital Accountability), used fake teen profiles to share 85 images of graphic self-harm content. Researchers then found that, within the span of a month, not a single image shared by their test accounts had been flagged or removed.
“We thought that when we did this gradually, we would hit the threshold where AI or other tools would recognize or identify these images,” shared Ask Hesby Holm, chief executive of Digitalt Ansvar, in an interview. “But big surprise – they didn’t.
“That was worrying because we thought that they had some kind of machinery trying to figure out and identify this content.”
Using its own AI tool, the Digitalt Ansvar team automatically identified 38% of the self-harm content it had posted, including 88% of the most severe material.
Researchers found that, rather than being blocked by Instagram’s technology, their posts were surfaced to teens by the app’s recommendation algorithm.
Though a Meta spokesperson disputed the study, saying, “we remove this content when we detect it,” the social media giant may face repercussions in the European Union, whose Digital Services Act legally requires large digital platforms to identify risks such as those exposed by Digitalt Ansvar.
Self-harm is on the rise among vulnerable populations such as adolescents, with emergency room data revealing a 50% increase in reported self-injury among young women since 2009. Experts say social media has played a major role in exacerbating the crisis.
“It’s a huge epidemic going on in the world, where young teens, 7-24% of them have had some incidence of self-harm – cutting, or something like that,” explained Dr. Marc Siegel, senior medical contributor at Fox News. “That’s really, really disturbing. A study in Nature last year showed that over three hours of social media each day contributes to this.”
Siegel went on to note that consumption of self-harm content on social media is “contagious,” since platforms are built around intentionally shareable content.
Instagram, however, is not the only social media platform whose algorithm pushes dangerous self-harm content to teens. In 2023, Digitalt Ansvar published a similar analysis of dark material circulating on TikTok, noting that half of its test profiles were shown feeds consisting of 70-80% self-harm content after only 15 minutes of viewing related posts.