May 2026
by Lim May-Ann
As more digital natives are born into the world each year, governments are confronting a difficult digital policy question: how can children be protected online without compromising privacy, freedom, and cybersecurity itself?
From Australia’s under-16 social media age restrictions to China’s strict gaming regulations and South Korea’s proposed rules governing social media algorithms for children, age verification technologies have emerged as some of the most controversial cybersecurity control tools of recent years.
Policymakers are increasingly arguing that platforms must verify user ages to shield minors from harmful content, cyber predators, addictive algorithms, online gambling, and exploitative advertising. Yet critics warn that many age verification systems are invasive, ineffective, and surprisingly easy to circumvent.
The challenge is particularly acute in Asia Pacific, where digital adoption among children and teenagers is among the highest in the world. The region is home to some of the world’s youngest internet populations, highly mobile-first societies, and rapidly expanding social media ecosystems. For governments already hard-pressed to address digital challenges such as cybercrime, disinformation, and online exploitation, child safety is an important social imperative.
What are the tools available for cybersecurity policymakers to deploy?
Age verification technologies generally fall into four categories:
- Self-declaration
- Identity document verification
- Biometric age estimation
- Device- or account-based parental controls
Self-Declaration
The simplest method remains self-declaration, where users merely enter a birthdate to access a service. This approach is inexpensive and minimally invasive, but it is also largely symbolic. Most children can bypass these systems within seconds simply by entering a false age. Many social media platforms continue to rely heavily on this approach despite its obvious weaknesses.
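The weakness of self-declaration can be seen in a minimal sketch of how such a gate typically works. This is an illustrative example only, not any platform's actual implementation: the check trusts whatever birthdate the user supplies, so a false entry defeats it instantly.

```python
from datetime import date

def self_declared_age_check(birthdate: date, minimum_age: int = 16) -> bool:
    """Return True if the self-declared birthdate meets the minimum age.

    Illustrative sketch: the gate has no way to verify the date it is
    given, which is why self-declaration is largely symbolic.
    """
    today = date.today()
    # Subtract one year if this year's birthday has not yet occurred.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= minimum_age

# An underage user simply types in a false birthdate and passes the gate.
print(self_declared_age_check(date(2020, 1, 1)))  # honest entry: blocked
print(self_declared_age_check(date(1990, 1, 1)))  # false entry: allowed
```

Nothing in the logic distinguishes a truthful entry from a fabricated one, which is the structural flaw policymakers point to.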
Identity Document Verification
More advanced systems require government-issued identity documents such as passports or national IDs. China has aggressively implemented such mechanisms for online gaming platforms, linking user access to national identity databases. Companies such as Tencent and NetEase use real-name authentication systems tied to state identity records to enforce youth gaming restrictions.
Supporters argue this has significantly reduced excessive gaming among minors and improved accountability for platforms. However, identity-linked verification raises practical concerns as well as major cybersecurity and privacy ones. In the first instance, not every country has a national ID system, and not every country issues one to all citizens at birth, so this approach is not suitable for all markets. And where a centralised database exists, a single location storing identification documents creates an attractive “honeypot” target for cybercriminals. A breach involving children’s identity data could have lifelong consequences, including identity theft and financial fraud.
A number of identity verification solutions are now available and are possible options for governments and platforms to consider, such as Yoti https://www.yoti.com, Jumio https://go.jumio.com, Onfido/Entrust https://www.entrust.com/products/identity-verification, Au10tix https://www.au10tix.com, and Veriff https://www.veriff.com. These companies offer various identity and age assurance services (some AI-enabled) using document scanning, selfies, and biometric matching.
Biometric Age Estimation
Biometric age estimation has emerged as an alternative to self-declaration and identity verification. It works by initiating a video or photo scan that analyses facial features or behavioural patterns to estimate whether users are adults or minors. Yoti has promoted facial age estimation technologies for social media and restricted online services, while firms such as FaceTec https://www.facetec.com provide liveness detection tools designed to prevent spoofing attacks.
Biometric age estimation therefore avoids collecting or connecting to full government identity systems, and is a more sophisticated solution than simple self-reporting. However, age estimation technologies can produce inaccurate results; some systems have been reported to carry systemic bias and are less accurate for marginalised groups, such as people of indigenous or non-white ethnicities. False positives may wrongly block legitimate users, while false negatives can still allow minors to bypass protections.
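Because estimates carry error, deployed systems typically do not treat the estimated age as exact. The sketch below shows one hypothetical decision rule (not any vendor's actual logic): users whose estimated age falls within a buffer of the legal threshold are routed to a fallback check, such as document verification, rather than being flatly allowed or blocked.

```python
def decide_access(estimated_age: float, minimum_age: int = 18,
                  buffer_years: float = 2.0) -> str:
    """Toy decision rule for a facial age-estimation result.

    Hypothetical policy for illustration: estimates near the threshold
    are uncertain, so they trigger a stronger fallback check instead of
    an outright allow/block decision.
    """
    if estimated_age >= minimum_age + buffer_years:
        return "allow"                    # clearly above the threshold
    if estimated_age < minimum_age - buffer_years:
        return "block"                    # clearly below the threshold
    return "fallback-verification"        # too close to call from a scan

print(decide_access(25.0))   # allow
print(decide_access(12.0))   # block
print(decide_access(18.5))   # fallback-verification
```

Widening the buffer reduces the chance a minor slips through, but sends more legitimate adults into the slower, more invasive fallback path – the accuracy trade-off described above, expressed as a parameter.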
These systems could also create new cybersecurity risks. If biometric information – such as facial scans – is stored and the database is compromised, the damage is irreversible, because biometric identifiers cannot simply be changed after a breach. You can change a password, but not your cranial features (or at least, not without great difficulty and expense!)
Device- or Account-Based Parental Controls
Device- or account-based parental controls represent a less intrusive approach to age assurance by shifting responsibility from centralised identity verification systems to families and trusted devices. Rather than requiring children to upload government IDs or biometric data, these systems rely on operating system settings, app store restrictions, and supervised family accounts to manage access to age-restricted content and services. Major technology companies including Apple, Google, and Microsoft provide parental control ecosystems such as Apple Screen Time, Google Family Link, and Microsoft Family Safety, allowing parents to restrict app downloads, monitor screen time, filter content, and approve purchases or account access. Gaming platforms such as Sony Interactive Entertainment and Nintendo also offer child safety settings tied to user accounts and consoles.
Supporters argue these tools are more privacy-conscious because they avoid collecting large amounts of sensitive identity data. However, their effectiveness depends heavily on parental engagement, device ecosystem compatibility, and technical literacy. So while parental controls can form an important layer of child online safety, they are rarely sufficient as standalone age verification mechanisms.
Circumventing Age Restrictions
Despite the availability of these solutions, there remains the uncomfortable reality that determined users often circumvent age restrictions anyway. Australia’s implementation of its under-16 social media ban has revealed that many children find ways to bypass the restrictions, using methods such as VPNs, borrowed identities, secondary devices, prepaid SIM cards, AI-generated faces, and fake credentials to undermine age verification systems.
Therein lies the dilemma facing policymakers crafting age restriction policies: the stronger the verification method, the greater the privacy and cybersecurity risk to the data collected. Yet alternative self-declared or technology-enabled (AI) enforcement mechanisms may either fail technically or require surveillance measures that create new problems (e.g. “who’s watching the watchers?”).
Online Child Safety – A Multifaceted Issue
As a cybersecurity issue, online child safety intersects with broader questions around protecting digital rights and controlling surveillance and monitoring data, including behavioural tracking. Cybersecurity analysts warn that these same tools could be – and in some cases already are being – repurposed for political monitoring and surveillance of adults.
As a policy issue, it is multifaceted and cannot be solved through authentication alone – or indeed through any single technology. Governments, users, parents, platforms themselves, and other stakeholders all play a part in developing a safe cyberspace for children. It requires safer platform architecture, stronger moderation, more careful deployment of recommendation algorithms, digital literacy education from an early age, parental involvement, regional cooperation, and the sharing of best practices.