Instagram's 'deliberate design choices' make it unsafe for teens despite Meta promises, report finds

1:40 PM on Thursday, September 25
By BARBARA ORTUTAY
Despite years of congressional hearings, lawsuits, academic research, whistleblowers and testimony from parents and teenagers about the dangers of Instagram, Meta's wildly popular app has failed to protect children from harm, with “woefully ineffective” safety measures, according to a new report from former employee and whistleblower Arturo Bejar and four nonprofit groups.
Meta’s efforts to address teen safety and mental health on its platforms have long drawn criticism that the changes don’t go far enough. Now the report, published Thursday by Bejar along with Cybersecurity for Democracy (a research project based at New York University and Northeastern University), the Molly Rose Foundation, Fairplay and ParentsSOS, claims Meta has chosen not to take “real steps” to address safety concerns, “opting instead for splashy headlines about new tools for parents and Instagram Teen Accounts for underage users.”
Meta said the report misrepresents its efforts on teen safety.
The report evaluated 47 of Meta's 53 safety features for teens on Instagram, and found that the majority of them are either no longer available or ineffective. Others reduced harm but came with some “notable limitations,” while only eight tools worked as intended with no limitations. The report's focus was on Instagram's design, not content moderation.
“This distinction is critical because social media platforms and their defenders often conflate efforts to improve platform design with censorship,” the report says. “However, assessing safety tools and calling out Meta when these tools do not work as promised, has nothing to do with free speech. Holding Meta accountable for deceiving young people and parents about how safe Instagram really is, is not a free speech issue.”
Meta called the report “misleading, dangerously speculative” and said it undermines “the important conversation about teen safety.”
“This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today. Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls,” Meta said. “The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback — but this report is not that.”
Meta has not disclosed what percentage of parents use its parental control tools. Such features can be useful for families in which parents are already involved in their child’s online life and activities, but experts say that’s not the reality for many people.
New Mexico Attorney General Raúl Torrez — who has filed a lawsuit against Meta claiming it fails to protect children from predators — said it is unfortunate that Meta is “doubling down on its efforts to persuade parents and children that Meta’s platforms are safe—rather than making sure that its platforms are actually safe.”
To evaluate Instagram's safeguards, the authors created teen test accounts, along with malicious adult and teen accounts that attempted to interact with them.
For instance, while Meta has sought to limit adult strangers from contacting underage users on its app, adults can still communicate with minors “through many features that are inherent in Instagram’s design,” the report says. In many cases, adult strangers were recommended to the minor account by Instagram features such as Reels and “people to follow.”
“Most significantly, when a minor experiences unwanted sexual advances or inappropriate contact, Meta’s own product design inexplicably does not include any effective way for the teen to let the company know of the unwanted advance,” the report says.
Instagram also pushes its disappearing messages feature to teenagers with an animated reward as an incentive to use it. Disappearing messages can be dangerous for minors and are used for drug sales and grooming, and they “leave the minor account with no recourse,” according to the report.
Another safety feature, which is supposed to hide or filter out common offensive words and phrases in order to prevent harassment, was also found to be “largely ineffective.”
“Grossly offensive and misogynistic phrases were among the terms that we were freely able to send from one Teen Account to another,” the report says. For example, a message that encouraged the recipient to kill themselves — and contained a vulgar term for women — was not filtered and had no warnings applied to it.
Meta says the tool was never intended to filter all messages, only message requests. The company on Thursday expanded its teen accounts to users worldwide.
As it sought to add safeguards, Meta has also promised it wouldn't show teens inappropriate content, such as posts about self-harm, eating disorders or suicide. The report found that its teen test accounts were nonetheless recommended age-inappropriate sexual content, including “graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity.”
“We were also algorithmically recommended a range of violent and disturbing content, including Reels of people getting struck by road traffic, falling from heights to their death (with the last frame cut off so as not to see the impact), and people graphically breaking bones,” the report says.
Instagram also recommended a “range of self-harm, self-injury, and body image content” on teen accounts that the report says “would be reasonably likely to result in adverse impacts for young people, including teenagers experiencing poor mental health, or self-harm and suicidal ideation and behaviors.”
The report also found that children under 13 — and as young as six — were not only on the platform but were incentivized by Instagram’s algorithm to perform sexualized behavior such as suggestive dances.
The authors made several recommendations for how Meta could improve teen safety, including regular red-team testing of messaging and blocking controls, providing an “easy, effective, and rewarding way” for teens to report inappropriate conduct or contact in direct messages, and publishing data on teens' experiences on the app. They also suggest that the content recommended to a 13-year-old's teen account should be “reasonably PG-rated,” and that Meta should ask kids about their experiences with sensitive content they have been recommended, including its “frequency, intensity, and severity.”
“Until we see meaningful action, Teen Accounts will remain yet another missed opportunity to protect children from harm, and Instagram will continue to be an unsafe experience for far too many of our teens,” the report says.