AI Security in 2025 and Beyond: A new CCAPAC Report

Oct 2025
by Ethan Mudavanhu

Artificial intelligence has outgrown its adolescence. What began as a tool for narrow automation is now a sprawling ecosystem of autonomous AI agents – self-directed and self-improving. Yet as AI becomes both the engine and the nervous system of our digital world, this new autonomy, left unsecured, may come at a great cost.

The Coalition for Cybersecurity in Asia-Pacific (CCAPAC) 2025 report, AI Security in 2025 and Beyond: Emerging Threats and Solutions, captures this inflection point. Across industries and borders, organizations are aggressively pursuing AI deployment, with 78% already relying on it in at least one business function. Yet only 44% have established security policies designed to mitigate agent-related threats. This gap between adoption and protection exposes a dangerous reality: organizations may be deploying transformative technology without adequate safeguards.

The CCAPAC Annual Report 2025, AI Security in 2025 and Beyond: Emerging Threats and Solutions, examines these gaps and explores two developments that cybersecurity professionals must now reckon with as AI technologies enter the global security order: the rise of agentic AI and the industrialization of AI-powered social engineering.

Threat 1: The Double-Edged Sword of Autonomy

Agentic AI systems promise productivity gains so vast they almost sound utopian: autonomous agents coordinating supply chains, managing data ecosystems, even patching vulnerabilities faster than human teams can detect them. Some can resolve security incidents in under five minutes. But this digital autonomy cuts both ways: it introduces decision pathways in which actions are taken without full human oversight. Autonomous systems are designed to remember, reason, and act – and sometimes they learn the “wrong things” and then act against their creators’ intent. Survey data reveals that 23% of IT professionals have already witnessed incidents in which AI agents were successfully deceived into revealing access credentials.

The CCAPAC 2025 report highlights how attackers have already turned these capabilities inward. A notorious case in 2025 saw a malicious actor exploit Claude Code, Anthropic’s agentic coding assistant, to orchestrate a multi-sector data extortion campaign. CCAPAC therefore observes that what was once a theoretical risk has now crossed the line into operational reality.

Threat 2: The Human Target in the Loop

If agentic AI represents a structural vulnerability, AI-enabled social engineering is its human mirror. With sufficient data to learn from, AI systems can now mimic voices, faces, and correspondence styles so convincingly that even seasoned professionals struggle to distinguish truth from simulation.

Phishing, the oldest trick in the hacker’s playbook, has evolved into something far more formidable. The “phishing-as-a-service” economy has industrialized, lowering the barrier to deception and multiplying its reach. In one 2025 attack, AI-generated meeting invites posing as Zoom and Microsoft Teams requests tricked employees into installing a legitimate remote-access tool, handing attackers control of their systems – proof that deception itself has become automated.

Between September 2024 and February 2025, organizations in Asia-Pacific experienced a 17.3% increase in phishing emails. For Asia-Pacific companies, many of which are small and medium enterprises (SMEs) with limited resilience to cyberattacks, this convergence of human susceptibility and machine precision presents an existential risk.

Industry and Government: Quick Responses to Frontier Technology

Market dynamics reflect the urgency of these threats. The AI security market, valued at USD 20.19 billion in 2023, is projected to reach USD 141.64 billion by 2032 – a compound annual growth rate (CAGR) exceeding 24% that signals both opportunity and necessity.
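As a quick sanity check on that figure (a back-of-the-envelope calculation, assuming the standard CAGR formula over the nine-year 2023–2032 window implied by those endpoints):

\[
\text{CAGR} = \left(\frac{141.64}{20.19}\right)^{1/9} - 1 \approx 0.242
\]

That works out to roughly 24.2% per year, consistent with the report’s figure of “exceeding 24%”.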

A positive sign, CCAPAC observes, is that industry is responding with strong solutions to these two emerging threats. Agentic sandboxing, multi-agent security frameworks, and AI Security Posture Management tools are emerging as the new guardrails of the digital age. Companies are already building the scaffolding for a world where humans and machines must coexist securely.

Similarly, governments are beginning to explore regulations and governance mechanisms to address the threat of agentic AI.

Even so, CCAPAC notes that, perhaps because agentic AI systems are still nascent in real-world applications, agentic AI remains a blind spot in current national AI strategies, mentioned only in passing.

From Chaos to Accountability

The task ahead is not to slow the pace of AI’s evolution but to steer it toward accountability. The CCAPAC report calls for a pragmatic alignment of interests – governments, industry, and civil society collaborating to make accountable autonomy a reality. CCAPAC identifies five urgent priorities:

  • Intelligence-sharing partnerships – these partnerships should create dedicated AI threat workstreams within Information Sharing and Analysis Centers, Security Operations Centers, and National Computer Security Incident Response Teams.
  • Adaptive expert-driven frameworks – these frameworks should strengthen smaller specialized groups modeled on initiatives like OWASP’s GenAI Security Project or the Coalition for Secure AI.
  • Regulatory sandboxes – sandboxing innovation will allow organizations to test technologies under oversight without immediately facing full regulatory weight.
  • Mutual recognition agreements (MRAs) for security certifications – MRAs will allow organizations and countries to align security schemes across jurisdictions, reducing duplication while maintaining robust assurance levels.
  • Generational investment in AI-ready cyber skills – upskilling will address the global shortage of cybersecurity professionals while recognizing how AI reshapes required competencies.

History has shown that every technological revolution demands its own architecture of trust. For AI, that architecture must embed security not as an afterthought but at the design stage. Only then can we ensure that AI autonomy remains accountable to the human values that created it.

For more information, download the free CCAPAC Annual Report, AI Security in 2025 and Beyond: Emerging Threats and Solutions.