
Deepfakes and the Collapse of Digital Trust in Asia Pacific

Nov 2025
by Lim May-Ann

With the introduction of readily available generative artificial intelligence (genAI) platforms for consumers, many policymakers are realising that deepfakes are no longer merely a misinformation problem; they are a full-spectrum cybersecurity threat. What began as experimental AI-generated videos has evolved into a sophisticated ecosystem of synthetic identities, cloned voices, manipulated livestreams, and hyper-realistic impersonation attacks. Across Asia Pacific, governments, businesses, banks, and ordinary citizens are confronting a new cybersecurity reality — seeing is no longer believing.

Implications beyond social media ennui

Beyond social media users complaining that “AI slop” is turning that arena of cyberspace into a digital wasteland, a more dangerous threat has emerged: financial fraud. According to the World Economic Forum (WEF), deepfake-enabled fraud surged in 2025. Criminal groups now routinely use cloned executive voices to authorise fraudulent wire transfers, impersonate company leadership during video calls, and bypass biometric identity verification systems. In February 2025, a multinational firm in Hong Kong reportedly lost millions of dollars after attackers used AI-generated video conferencing impersonations of the company’s CFO to deceive employees into approving financial transactions. One estimate puts losses from deepfakes at USD 1.65 billion in 2025.

Financial institutions across the region are scrambling to adapt. Banks that once promoted voice authentication and facial recognition as secure verification tools are now confronting the new reality that artificial intelligence can accurately replicate these human traits. Cybersecurity systems built on these “human-centric” trust pillars have become increasingly vulnerable in the age of generative AI and deepfake-enabled fraud.

From financial fraud to erosion of geopolitical trust mechanisms

The challenge for cybersecurity professionals and analysts is to see beyond the current trend of financial losses to the deeper impact of the synthetic escalation threat. Asia Pacific sits at the center of some of the world’s most sensitive geopolitical flashpoints, including tensions in the South China Sea, the Taiwan Strait, and the Korean Peninsula. In such an environment, a fabricated video or manipulated audio recording showing a political leader making a controversial announcement, or a fake emergency broadcast, could spread like wildfire on social media, triggering panic before authorities can respond.

The challenge for policymakers is therefore one of trust – how can governments and authorities respond in a timely manner when the speed of misinformation now rivals the speed of official verification? The malaise of deepfakes and fake news has indeed reached the point of being a national security and critical information infrastructure issue, rather than simply a technology or financial fraud issue.

The Asia Pacific region is particularly vulnerable because of its digital density: large swathes of its population are young, making them among the world’s most digitally adept and mobile-native generations. Applications susceptible to misinformation, deepfakes, and fake news — messaging apps, livestreaming platforms, and short-video ecosystems — are widely used, and are fertile ground for synthetic media campaigns designed to manipulate public opinion or destabilise institutions.

Digital Trust Technologies

The deeper challenge for policymakers and authorities is that deepfakes erode trust itself. Modern digital societies depend on trusted communications systems. Citizens must trust emergency alerts, financial transactions, public institutions, and news reporting. Once widespread doubt takes hold, every image, video, and statement becomes contestable. Ironically, even authentic evidence can be dismissed as fabricated — a phenomenon researchers increasingly describe as the “liar’s dividend.”

Technology companies are responding with trust mechanisms, for example: AI watermarking systems, provenance standards, and synthetic media detection tools. Firms such as Microsoft, Google, and Adobe are supporting initiatives like the Coalition for Content Provenance and Authenticity (C2PA), which aims to authenticate digital media origins.

Governments are also stepping up the fight. Countries such as Indonesia and Singapore have set up verified channels on widely used commercial platforms such as WhatsApp and Telegram to distribute official news through trusted conduits; ASEAN has also released the Guideline on Management of Government Information in Combating Fake News and Disinformation to aid governments in establishing structures for digital trust communication.

Moving into 2026, Asia Pacific governments face a difficult balancing act. Policymakers must strengthen legal frameworks against malicious synthetic media while avoiding overreach that could undermine privacy, innovation, or free expression. Public education will also become critical, and digital literacy can no longer focus solely on phishing emails and password hygiene; citizens must now learn to navigate an information environment where synthetic reality is increasingly indistinguishable from truth.

Concerned about the same thing? CCAPAC would be happy to work with you to raise further awareness of this trust deficit issue in the context of cybersecurity policy, and work together to preserve digital trust before synthetic deception becomes impossible to contain.