Meta rolls out new Instagram safety features to protect teens and children

The update builds on Meta’s ongoing efforts to curb exploitation and strengthen its platform’s privacy controls.

afaqs! news bureau

Meta has announced a fresh set of safety tools for Instagram, aimed at protecting teens and child-focused accounts from harmful interactions and unwanted content.

New Protections for Teen Accounts

Teen users on Instagram will now see enhanced safety features in their direct messages (DMs). These include contextual tips about who they are chatting with, the ability to view when an account was created, and quick-access options to block and report users.

Meta has also introduced a combined block-and-report feature in DMs, simplifying the process and ensuring potentially harmful accounts are flagged for review. The company says these additions complement existing safety notices that encourage teens to block or report any behaviour that makes them uncomfortable.

In June 2025 alone, Meta says, teens blocked accounts more than 1 million times after seeing these safety prompts, and filed a further 1 million reports.

Another notable feature is the Location Notice, which warns users when they are chatting with someone in another country. It is designed to combat sextortion scams, a growing threat on social media platforms. Meta says over 1 million users saw the notice in June, with 10% engaging with it to learn more.

The nudity protection tool, which automatically blurs suspected nude images in DMs, has also seen wide adoption. Meta reports 99% of users, including teens, have kept the feature turned on since its global rollout. In June, more than 40% of blurred images stayed hidden, and 45% of users chose not to forward nude content after seeing warning prompts.

Safeguards for Adult-Managed Accounts Featuring Children

Meta is also expanding protections to accounts run by adults that primarily feature children, such as parent-managed profiles or accounts representing young talent. These accounts will now default to Instagram’s strictest messaging settings and enable Hidden Words to filter offensive comments automatically.

Notifications will prompt account managers to review privacy settings. Additionally, Meta will avoid recommending these accounts to adults flagged as potentially suspicious—such as those previously blocked by teens—and restrict their visibility in search results.

This builds on earlier measures that stopped such accounts from offering subscriptions or receiving gifts.

Cracking Down on Harmful Accounts

Meta’s specialist teams removed nearly 135,000 Instagram accounts earlier this year for leaving sexualised comments or soliciting images from accounts featuring children. Another 500,000 linked Facebook and Instagram accounts were also taken down.

The company is sharing data on these accounts with other tech firms via the Tech Coalition’s Lantern program, reinforcing cross-platform safety efforts.
