Meta has introduced a new artificial intelligence system on Instagram to identify teen users who may be misrepresenting their age.
Highlights
The platform now uses AI signals to detect when a user who claims to be an adult might actually be underage. If flagged, these users are automatically shifted into a restricted version of the app designed specifically for teens.
This initiative is part of Meta’s broader safety framework known as Teen Accounts, which was launched last year to enhance protections for users between the ages of 13 and 17.
These accounts include a range of restrictions: limited visibility of personal details, restricted communication with unknown users, and parental oversight of certain settings.
AI-Driven Age Detection
At the core of this effort is an AI tool called the Adult Classifier, designed to assess whether an account belongs to a user aged 18 or older.
This system considers various data points, including profile details, account activity, social interactions (such as birthday messages), and follower relationships.
If the system identifies discrepancies suggesting the account belongs to a minor, it overrides the self-declared date of birth and assigns the account to the teen-specific experience.
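As a rough illustration only (not Meta's actual implementation, and every name and threshold here is a hypothetical placeholder), the flow described above — combine weak age signals into an estimate and, if that estimate contradicts a self-declared adult birth date, assign the teen experience — could be sketched as:

```python
from dataclasses import dataclass, field

ADULT_THRESHOLD = 0.5  # hypothetical confidence cutoff, not a Meta value


@dataclass
class Account:
    declared_age: int
    # Hypothetical per-signal scores in [0, 1]; higher = more adult-like.
    # Keys might correspond to profile details, activity, birthday
    # messages, follower relationships, etc.
    signals: dict = field(default_factory=dict)


def estimated_adult_score(account: Account) -> float:
    """Average the available signal scores; fall back to the
    self-declared age when no signals are present."""
    if not account.signals:
        return 1.0 if account.declared_age >= 18 else 0.0
    return sum(account.signals.values()) / len(account.signals)


def assign_experience(account: Account) -> str:
    """Override the self-declared birth date when signals disagree."""
    if account.declared_age >= 18 and estimated_adult_score(account) < ADULT_THRESHOLD:
        return "teen"  # flagged: claims to be an adult, signals suggest a minor
    return "teen" if account.declared_age < 18 else "adult"
```

For example, an account declaring age 21 whose signals all score low would be routed to the teen experience despite its stated birth date.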
This marks a shift in how Instagram handles age verification. Previously, users were subject to age estimation techniques only during onboarding. The current update allows Instagram to continually assess age signals and take action when needed.
Features of Teen Accounts
Teen Accounts on Instagram include several safety-oriented features designed to protect younger users:
- Default Privacy Settings: Accounts are set to private automatically.
- Restricted Messaging: Only existing connections can send direct messages.
- Content Filters: Sensitive material, such as violent content or posts related to cosmetic procedures, is restricted.
- Time Management Alerts: Users receive notifications if they exceed 60 minutes of daily use.
- Sleep Mode: Between 10 p.m. and 7 a.m., notifications are silenced and automated responses are enabled.
These features aim to provide a more controlled digital environment for teens, reducing exposure to potentially harmful interactions or content.
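The two time-based rules in the list above are simple to pin down. As an illustrative sketch (the 10 p.m.–7 a.m. window and 60-minute limit come from the list; the function names are hypothetical), note that the Sleep Mode window crosses midnight, so the check is an OR rather than a simple range test:

```python
from datetime import time

SLEEP_START = time(22, 0)       # 10 p.m.
SLEEP_END = time(7, 0)          # 7 a.m.
DAILY_LIMIT_MINUTES = 60


def in_sleep_mode(now: time) -> bool:
    """Sleep Mode spans midnight: it is active when the current time is
    at or after 10 p.m. OR strictly before 7 a.m."""
    return now >= SLEEP_START or now < SLEEP_END


def should_send_time_alert(minutes_used: int) -> bool:
    """Notify the user once daily usage exceeds the 60-minute mark."""
    return minutes_used > DAILY_LIMIT_MINUTES
```

With these rules, 11:30 p.m. and 3:00 a.m. both fall inside Sleep Mode, while noon does not, and the usage alert fires only after the 60th minute.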
Parental Controls and Oversight
For users under 16, any changes to safety settings require parental approval. In addition, parents can access tools to view recent interactions, monitor screen time, and apply custom restrictions.
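The approval rule above reduces to a small age-gated check. A minimal sketch (hypothetical function name; the under-16 threshold is from the article):

```python
def can_change_safety_setting(age: int, parent_approved: bool) -> bool:
    """Users under 16 need a parent's approval to change safety
    settings; 16- and 17-year-olds can change them on their own."""
    if age < 16:
        return parent_approved
    return True
```

So a 14-year-old's change attempt is blocked until a parent approves it, while a 16-year-old's goes through directly.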
Instagram has also introduced a feature that notifies parents if an account appears to have been set up with a false age, encouraging further communication between families.
Addressing Age Misrepresentation
Misrepresentation of age remains a known issue on social media platforms, particularly among younger users who claim to be adults to bypass age-based restrictions.
Meta’s approach involves proactively identifying and correcting such cases through AI analysis rather than relying solely on user-reported data or static account information.
Context and Implementation
This development follows ongoing public and legal scrutiny over the mental health effects of social media on teenagers.
Meta has faced lawsuits and policy pressure related to how its platforms handle teen safety. The latest updates are part of its broader strategy to meet growing regulatory expectations and societal concerns.
In a continuation of its safety rollout, Meta recently expanded the same protections found on Instagram to its Facebook and Messenger platforms.
According to the company, over 54 million users have been placed in Teen Accounts globally, and 97% of users aged 13 to 15 remain within the restricted environment.