Meta Adds New Safety Features to Protect Child-Focused Instagram Accounts

Abhi Soni

Meta is expanding its efforts to protect children on Instagram by rolling out new safety features designed specifically for child-focused accounts managed by adults. Although users under 13 are officially not allowed to create personal Instagram accounts, the platform permits adults—such as parents and talent managers—to run accounts showcasing children. These accounts, while often created for innocent purposes like sharing family moments or managing young influencers, have unfortunately become targets for online predators.

To combat this, Meta is applying its strictest safety measures to these accounts. The company says the changes will roll out in the coming months and include:

  • Stricter message settings: DMs from unknown or potentially suspicious users will be heavily restricted.
  • Auto-enabled Hidden Words: This feature filters out inappropriate or harmful comments, automatically shielding account managers from seeing offensive or dangerous content.
  • Reduced discoverability: Meta will make it harder for predators and suspicious users to find these accounts, including limiting search visibility and stopping recommendations to users who have been blocked by teens.
  • Comment hiding: Comments from adults flagged as suspicious will be hidden automatically from the posts on these accounts.

Meta emphasized that it has already taken strong action, removing over 135,000 Instagram accounts this year for inappropriate behavior toward children’s accounts and taking down an additional 500,000 linked accounts across Instagram and Facebook.

These measures arrive amid growing scrutiny of online child safety, especially following reports that some algorithms were inadvertently promoting harmful content. Meta says it continues to take “aggressive action” against users and accounts that violate its platform rules.

This move builds on Meta’s rollout of privacy-focused teen accounts last year for users aged 13–18, which automatically enable stricter safety settings. Additional protections are also in testing, including AI-based age detection designed to spot users who misstate their age to bypass safety checks.

Meta has further introduced features to fight sextortion scams, such as nudity protection in DMs that blurs explicit images and Location Notices that alert teens if they are speaking with someone from another country—a common tactic used by scammers.

In addition, the company is enhancing how young users can protect themselves. Teens can now access an updated “Safety Tips” section in their DMs, giving them quick access to block, report, and restrict options. Meta has also combined blocking and reporting into a single one-tap action, making it quicker and easier for users to respond to harmful behavior.

These updates aim to reinforce Instagram as a safer space for younger users and child-focused content, ensuring protection is prioritized in environments that may otherwise attract exploitation.
