Meta is deploying advanced artificial intelligence to strengthen safety measures for young users on Instagram, focusing on identifying and restricting accounts where minors misrepresent their age.
The initiative aims to reduce the risks minors face online by ensuring age-appropriate experiences and safeguarding children from harmful interactions.
Meta's system uses machine learning to analyze behavioral signals, such as posting habits and engagement patterns, to detect accounts likely operated by users under 18 who claim to be adults.
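Meta has not published the details of its age-estimation model, so the following is only a minimal sketch of what a classifier of this kind could look like: a logistic regression over hypothetical behavioral features, trained on synthetic labels. Every feature name and data point below is an assumption for illustration, not one of Meta's actual signals.

```python
# Illustrative sketch only: Meta has not published its age-estimation model.
# All feature names and training data below are hypothetical assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical behavioral features per account:
# [posts_per_day, follower_following_ratio, emoji_rate, school_hours_activity]
rng = np.random.default_rng(seed=0)
n_accounts = 1000
X = rng.random((n_accounts, 4))
# Synthetic labels: 1 = likely under 18, 0 = likely adult (a toy rule, not real data).
y = ((X[:, 0] > 0.6) & (X[:, 3] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# Score unseen accounts and flag only high-probability cases for review,
# keeping the threshold conservative to limit false positives.
probs = clf.predict_proba(X_test)[:, 1]
flagged = probs > 0.8
print(f"flagged for review: {flagged.sum()} of {len(flagged)} test accounts")
```

In a real deployment, scores like these would more plausibly trigger age-verification prompts or human review rather than automatic restriction, which dovetails with the false-positive concerns noted later in this piece.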
According to Forbes, once identified, these accounts face restrictions, including limited visibility in public searches and reduced access to adult-oriented content.
This builds on Meta’s existing tools, like age verification prompts and parental supervision features, to create a safer digital environment.
The move responds to growing concerns about online safety for minors, particularly after reports highlighted predatory behavior on social platforms.
Meta’s latest AI-driven approach also includes enhanced content moderation to flag inappropriate messages or comments targeting young users.
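Meta's moderation models for messages and comments are likewise proprietary, so the sketch below substitutes a deliberately simple pattern matcher just to show the flag-and-review shape of such a pipeline; the patterns and messages are placeholders, not Meta's actual signals.

```python
# Toy illustration of flagging risky messages sent to young users. Meta's real
# system uses ML classifiers, not keyword lists; the patterns here are placeholders.
FLAGGED_PATTERNS = ("send pics", "don't tell your parents", "our secret")

def flag_message(text: str) -> bool:
    """Return True if the message matches any placeholder risk pattern."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in FLAGGED_PATTERNS)

inbox = [
    "Great goal in the match yesterday!",
    "This is our secret, ok?",
]
for msg in inbox:
    status = "FLAGGED" if flag_message(msg) else "ok"
    print(f"[{status}] {msg}")
```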
By proactively addressing age misrepresentation, the company aims to balance user privacy with robust protections for minors.
Critics, however, argue that AI-based detection raises privacy concerns of its own and risks false positives that could restrict legitimate adult accounts.
Meta acknowledges these challenges but emphasizes ongoing refinements to improve accuracy.
The company also plans to expand these AI tools to other platforms, like Facebook, to further protect vulnerable users.
This initiative reflects a broader industry push to prioritize child safety amid increasing regulatory scrutiny.
As Meta rolls out these changes, it seeks to rebuild trust with parents and policymakers while navigating the complexities of AI-driven moderation.