Instagram has rolled out new safety measures to better protect accounts that frequently feature children, responding to mounting concerns about online exploitation and inappropriate interactions.
Highlights
- Default Privacy Restrictions: Accounts that frequently feature children now have the most restrictive comment and message settings enabled by default, including offensive content filtering and limited DM permissions.
- Algorithmic User Blocking: Suspicious or previously blocked adult users will be prevented from discovering or interacting with child-focused accounts through recommendations or content feeds.
- In-App Safety Prompts: Instagram will guide account managers to review their privacy settings via proactive notification reminders.
- Enforcement at Scale: Meta has removed nearly 135,000 Instagram accounts and an additional 500,000 associated Facebook and Instagram accounts linked to child exploitation behaviors.
- Lantern Coalition Collaboration: Instagram is working with the Tech Coalition’s Lantern program to combat child exploitation across multiple platforms in real time.
- Enhanced Teen Tools: Teen accounts now include safety tips during chats, quicker access to block/report functions, and visibility into suspicious users’ metadata (e.g., account creation date and location).
- Nudity Protection Engagement: In June, more than 40% of blurred explicit images remained unopened, reflecting strong user engagement with the nudity protection filter.
The update includes a series of default settings and algorithmic changes designed to limit how unknown or potentially suspicious users can engage with such profiles.
The changes are targeted primarily at adult-managed accounts—such as those run by parents, family vloggers, or managers representing child influencers—that regularly share content involving minors.
According to Meta, while many of these accounts are maintained with positive intent, they remain vulnerable to misuse by malicious actors.
Default Restrictions on Comments and Messages
Instagram will now automatically apply its most restrictive privacy settings to accounts that primarily showcase children:
- “Hidden Words” filter enabled by default – blocking offensive or inappropriate comments and message requests.
- Strict DM limitations – making it more difficult for strangers to initiate private conversations.
- In-app notification prompts – encouraging account managers to review and customize their privacy settings.
Limited Discoverability for Suspicious Users
To further minimize risk, Meta is deploying algorithmic updates that limit how potentially harmful users interact with child-focused accounts. Specifically:
- Adults who have been blocked by teen users or flagged as suspicious will be excluded from account discovery features.
- These individuals will no longer see or be recommended content from relevant child-focused profiles—and vice versa.
Account Removals
- Nearly 135,000 Instagram accounts found to be sexualizing content involving children have been removed.
- An additional 500,000 related accounts were taken down across Instagram and Facebook.
- These removals are being coordinated with other platforms via the Tech Coalition’s Lantern program, enabling broader efforts to tackle cross-platform abuse.
Expanded Safety Tools for Teen Users
Alongside protections for child-focused accounts, Instagram is enhancing its safety features for teen users. New tools and updates include:
- In-chat safety prompts (e.g., “Safety Tips”) that appear when suspicious behavior is detected.
- A combined “block and report” button for immediate action.
- Visibility into user metadata such as account creation date and country—helping teens assess the legitimacy of unfamiliar profiles.
- Continued support for Instagram’s nudity protection filter, which remains enabled for over 99% of users. In June, more than 40% of blurred images remained unopened, indicating a high level of user engagement with the feature.