Social media firms will use facial recognition age checks to “drive out” under-age children from their sites, under plans to be announced next month by Ofcom.

In an interview with The Telegraph, Jon Higham, the regulator's head of online safety policy, said platforms would be expected to remove millions of children from their sites by using "highly accurate and effective" age checks.

The largest tech firms will face multi-billion-pound fines under the Online Safety Act if they fail to protect children and instead allow them to access harmful content such as porn, child sex abuse images and violence.

Ofcom estimates as many as 60 percent of eight to 11-year-olds have social media profiles – equivalent to 1.6 million children in the UK – despite major sites like Facebook, Instagram, TikTok, and Snapchat having minimum age limits of 13. A third of five to seven-year-olds are said to use social media unsupervised.

Mr Higham said the watchdog's research had exposed the "big" problem that more than one-fifth of under-age children on social media sites claimed to be adults, giving them access to content intended for adults.

It found that 25 percent of eight-year-olds with a social media profile on at least one platform had a user age of 16-plus, and 14 percent had a user age of 18-plus. Most users have never been asked to verify their age.

Mr Higham said: “What we see is 22 percent of children are online with a profile which suggests they’re an adult because, at the moment, all too many platforms basically let children self-certify how old they are.

“It doesn’t take a genius to work out that children are going to lie about their age. So we think there’s a big issue there.”

Mr Higham said Ofcom would set out next month what the platforms would be expected to do to ensure users were not under-age.

“We will expect the technology to be highly accurate and effective. We’re not going to let people use poor or substandard mechanisms to verify kids’ age,” he said.

“The sort of thing that we might look to in that space is some of this facial age estimation technology that we see companies bringing in now, which we think is pretty good at determining who is a child and who is an adult.

“So we’re going to be looking to drive the use of that sort of technology, so platforms can determine who’s a child and who isn’t, and then put in place extra protections for kids to stop them seeing toxic content.”

Under the Online Safety Act, Ofcom has powers to fine tech firms that fail to protect children from online harm up to 10 percent of their global turnover – which would be £10 billion for Meta, owner of Facebook, Instagram and WhatsApp – and to jail executives for up to two years for persistent breaches.

Technology companies say they have introduced more stringent age checks in recent years. These include scanning personal IDs, estimating facial age, and asking a parent to confirm an age.

Checks are often triggered when a user attempts to change their birth date, or when systems automatically detect indicators that someone might be lying about their age, such as the age of their frequent contacts.

However, in the Ofcom research, most children said they had never been asked to confirm their age. Only 18 percent of Instagram users, 19 percent of TikTok users, and 14 percent of Snapchat users said they had ever been asked to verify their date of birth.
