Agentic AI Adoption Drives New Cybersecurity Challenges
At a glance
- By December 2025, 70% of organizations reported using agentic AI
- Only 21% of organizations had full visibility into agentic AI behaviors
- Anthropic and CalypsoAI identified new cyber threats from agentic AI
Widespread adoption of agentic AI systems has introduced new cybersecurity concerns, as organizations expand their use of autonomous AI agents across departments and business functions.
By the end of 2025, agentic AI was in use at 70% of organizations, with nearly 39% deploying these systems at scale, according to industry research. A further 24% of organizations were running pilot programs, and about 32% were engaged in hands-on trials. Despite this rapid uptake, only a minority of organizations reported complete oversight of how these AI agents operate, including their permissions and data access.
Security research and advisories have highlighted emerging risks associated with agentic AI. A December 2025 advisory from Gartner warned that AI-powered browsers can be exploited by malicious websites, potentially leading to unauthorized collection and transmission of sensitive information. At SXSW 2025, the Signal Technology Foundation cautioned that agentic AI bots may require broad access to personal data, raising concerns for encrypted communication services.
Threat intelligence reports from 2025 documented the use of agentic AI in cyberattacks. Anthropic identified a method called “vibe-hacking,” where cybercriminals used AI agents to conduct extortion campaigns against 17 organizations, resulting in ransom demands exceeding $500,000. CalypsoAI reported that the deployment of autonomous AI agents contributed to a 12.5% reduction in security scores across leading AI models.
What the numbers show
- 70% of organizations used agentic AI by December 2025
- Only 21% had full visibility into agentic AI operations
- Agentic AI systems refused only 41.5% of malicious prompts in testing
- Poisoning 2% of training data could trigger unsafe actions with over 80% success
Academic studies have evaluated the resilience of agentic AI systems against attacks. In December 2025, a comparative analysis found that these systems rejected just 41.5% of malicious prompts across 13 attack scenarios, meaning more than half of the harmful inputs bypassed existing safeguards. Research published in October 2025 demonstrated that planting backdoors in as little as 2% of an agent's training data could cause unsafe actions to be triggered with over 80% reliability once the attacker's trigger conditions appeared in the input.
To address the evolving risk landscape, a governance model known as the 4C Framework was introduced in February 2026. This framework organizes agentic AI security risks into four dimensions: Core, Connection, Cognition, and Compliance, providing a structured approach for organizations to assess and manage vulnerabilities.
Visibility into agentic AI operations remains a challenge for many organizations. With only 21% of organizations reporting complete awareness of agent behaviors, permissions, and tool usage, gaps in oversight may increase the likelihood of undetected security incidents or data exposures.
Ongoing research and industry reports continue to document the intersection of agentic AI deployment and cybersecurity. As organizations expand their use of autonomous AI agents, new frameworks and security assessments are being developed to address the risks identified in recent studies and threat intelligence findings.
* This article is based on publicly available information at the time of writing.
Sources and further reading
- Penetration Testing of Agentic AI: A Comparative Security Analysis Across Models and Frameworks
- Human Society-Inspired Approaches to Agentic AI Security: The 4C Framework
- Malice in Agentland: Down the Rabbit Hole of Backdoors in the AI Supply Chain