Children and teenagers are still at risk from online harm on Instagram despite the rollout of “woefully ineffective” safety tools, according to research led by a Meta whistleblower.
Almost two-thirds (64%) of Instagram’s new safety tools were found to be ineffective, according to a comprehensive review led by Arturo Béjar, a former senior engineer at Meta who testified against the company before the US Congress, alongside academics from New York University and Northeastern University, the UK’s Molly Rose Foundation and other groups.
Meta – which also owns and operates several other prominent social media platforms and communication services, including Facebook, WhatsApp, Messenger and Threads – introduced mandatory teen accounts on Instagram in September 2024, amid growing regulatory and media pressure in the US and the UK to tackle online harm.
However, Béjar said although Meta “consistently makes promises” about how its teen accounts protect children from “sensitive or harmful content, inappropriate contact, harmful interactions” and give control over use, these safety tools are mostly “ineffective, unmaintained, quietly changed, or removed”.
He added: “Because of Meta’s lack of transparency, who knows how long this has been the case, and how many teens have experienced harm in the hands of Instagram as a result of Meta’s negligence and misleading promises of safety, which create a false and dangerous sense of security.
“Kids, including many under 13, are not safe on Instagram. This is not about bad content on the internet, it’s about careless product design. Meta’s conscious product design and implementation choices are selecting, promoting, and bringing inappropriate content, contact and compulsive use to children every day.”
The research drew on “test accounts” imitating the behaviour of a teenager, a parent and a malicious adult, which the researchers used to analyse 47 safety tools in March and June 2025.
Using a green, yellow and red rating system, it placed 30 tools in the red category, meaning they could be easily circumvented with less than three minutes of effort or had been discontinued. Only eight received a green rating.
Findings from the test accounts included that adults were easily able to message teenagers who did not follow them, despite this supposedly being blocked in teen accounts – although the report notes that Meta fixed this after the testing period. The report also found that minors can still initiate conversations with adults on Reels, and that it is difficult to report sexualised or offensive messages.
The researchers also found the “hidden words” feature failed to block offensive language as claimed: they were able to send “you are a whore and you should kill yourself” without any prompt to reconsider, and without any filtering or warning provided to the recipient.
Meta says the feature applies only to messages from unknown accounts, not from followers, whom users can block.
Instagram’s algorithms surfaced inappropriate sexual or violent content, the “not interested” feature failed to work effectively, and autocomplete suggestions actively recommended search terms and accounts related to suicide, self-harm, eating disorders and illegal substances, the researchers established.
The researchers also noted that several widely publicised time-management tools intended to curb addictive behaviours appeared to have been discontinued – although Meta said the functionality remained under a different name – and spotted hundreds of reels from users claiming to be under 13, despite Meta’s claim that it blocks under-13s.
The report said Meta “continues to design its Instagram reporting features in ways that will not promote real-world adoption”.
In a foreword to the report, co-authored by Ian Russell, the founder of the Molly Rose Foundation, and Maurine Molak, the co-founder of David’s Legacy Foundation and Parents for Safe Online Spaces – both of whose children died by suicide after being bombarded by hateful content online – the two parents said Meta’s new safety measures were “woefully ineffective”.
As a result, they believe the UK’s Online Safety Act must be strengthened to “compel companies to systematically reduce the harm their platforms cause by compelling their services to be safe by design”.
The report further asks that the regulator, Ofcom, become “bolder and more assertive” in enforcing its regulatory scheme.
A Meta spokesperson said: “This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today. Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls.
“The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback – but this report is not that.”
An Ofcom spokesperson said: “We take the views of parents campaigning for children’s online safety very seriously and appreciate the work behind this research.
“Our rules are a reset for children online. They demand a safety-first approach in how tech firms design and operate their services in the UK. Make no mistake: sites that don’t comply should expect to face enforcement action.”
A government spokesperson said: “Under the Online Safety Act, platforms are now legally required to protect young people from damaging content, including material promoting self-harm or suicide. That means safer algorithms and less toxic feeds. Services that fail to comply can expect tough enforcement from Ofcom. We are determined to hold tech companies to account and keep children safe.”