AI Is Transforming Security Operations Centers With New Tools and Approaches
At a glance
- AI-specific attacks often evade traditional SOC detection methods
- IBM launched ATOM and QRadar Investigation Assistant for AI-driven security
- Over one third of organizations have experienced AI system compromise
Security operations centers (SOCs) are adapting their processes and technologies as artificial intelligence introduces new types of threats and detection challenges. This shift is prompting organizations to integrate AI-focused tools and cross-functional teams to address risks unique to AI systems.
Conventional SOCs are structured to identify threats such as data breaches, system outages, and network disruptions. However, these centers are not designed to detect attacks that target AI models, such as manipulations that degrade decision-making while leaving systems operational. As a result, new monitoring approaches are being developed to address these gaps.
AI-specific monitoring requires the collection and analysis of data related to model behavior, inference patterns, and the AI supply chain. These methods aim to uncover attacks that do not produce obvious signs like data exfiltration or service interruptions but still compromise AI system integrity. Integrating these capabilities into existing security platforms is a key part of the transition.
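Monitoring model behavior can be pictured with a small drift check on a model's output-confidence distribution. The sketch below is purely illustrative: the function names, the histogram binning, and the 0.2 threshold are assumptions, not part of any vendor's monitoring product.

```python
# Illustrative sketch: flag drift in a model's output-confidence
# distribution, one model-behavior signal an AI-specific monitor might
# collect alongside traditional telemetry. Names and thresholds are
# hypothetical.
from collections import Counter
import math

def confidence_histogram(confidences, bins=10):
    """Bucket prediction confidences (0.0-1.0) into a normalized histogram."""
    counts = Counter(min(int(c * bins), bins - 1) for c in confidences)
    total = len(confidences)
    return [counts.get(i, 0) / total for i in range(bins)]

def population_stability_index(baseline, current, eps=1e-6):
    """PSI between two histograms; larger values suggest behavioral drift."""
    return sum(
        (c - b) * math.log((c + eps) / (b + eps))
        for b, c in zip(baseline, current)
    )

def check_inference_drift(baseline_conf, recent_conf, threshold=0.2):
    """Return an alertable finding when recent inference behavior drifts
    from an established baseline, even though the service stays up."""
    psi = population_stability_index(
        confidence_histogram(baseline_conf),
        confidence_histogram(recent_conf),
    )
    return {"signal": "model_confidence_drift", "psi": round(psi, 3),
            "alert": psi > threshold}
```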
The evolution toward an AI-enabled SOC involves enhancing, rather than replacing, traditional detection and response tools. By incorporating AI-specific detection logic into platforms such as Security Information and Event Management (SIEM) and Extended Detection and Response (XDR), organizations can respond to both conventional and AI-related threats.
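One way to picture that integration is an AI-specific finding being wrapped in a structured event and forwarded to an existing SIEM. The sketch below assumes a SIEM that ingests JSON events over syslog; the hostname, port, and field names are illustrative, not a specific vendor schema.

```python
# Illustrative sketch: forward a model-layer finding (such as the drift
# signal above) to a SIEM as a structured syslog event, so AI-specific
# detections flow through the same pipeline as conventional alerts.
# The endpoint and event fields are assumptions.
import json
import logging
from logging.handlers import SysLogHandler

def build_siem_logger(host="siem.example.internal", port=514):
    """Create a logger that ships events to the SIEM's syslog listener."""
    logger = logging.getLogger("ai_soc_detections")
    logger.setLevel(logging.INFO)
    logger.addHandler(SysLogHandler(address=(host, port)))
    return logger

def forward_ai_finding(logger, finding, model_id, severity="medium"):
    """Wrap a model-layer finding in a SIEM-friendly envelope and send it."""
    event = {
        "source": "ai-model-monitor",
        "model_id": model_id,
        "severity": severity,
        "finding": finding,  # e.g. {"signal": "model_confidence_drift", ...}
    }
    logger.info(json.dumps(event))
```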
What the numbers show
- Over one in three organizations reported AI system compromise as of 2025
- More than one in four AI initiatives did not scale due to security concerns
- 76% of executives expect operational improvements from AI agents within two years
- AI agents are projected to increase workflow automation by 45% in three years
Agentic AI frameworks, including multi-agent orchestration systems, are being introduced to enable autonomous security operations. These systems allow SOCs to function with minimal human intervention, while analysts maintain oversight. IBM’s Autonomous Threat Operations Machine (ATOM) automates threat detection, investigation planning, and remediation using multiple AI agents to support security teams.
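The multi-agent pattern can be sketched in a few lines: specialized agents enrich a shared case, and a human approval step gates remediation. This is a generic illustration of the orchestration idea, not IBM's ATOM; every name below is hypothetical.

```python
# Illustrative sketch of multi-agent orchestration with human oversight.
# "Agents" are plain callables here; a real framework would wrap models,
# tools, and playbooks behind each role.
from dataclasses import dataclass, field

@dataclass
class Case:
    alert: dict
    findings: list = field(default_factory=list)
    plan: list = field(default_factory=list)

def detection_agent(case):
    case.findings.append(f"correlated indicators for alert {case.alert['id']}")

def investigation_agent(case):
    case.plan.append("collect endpoint and model-inference logs")
    case.plan.append("scope affected AI pipeline components")

def remediation_agent(case, approved):
    # Only act on the plan once an analyst has signed off.
    return case.plan if approved else ["escalate to analyst"]

def run_pipeline(alert, analyst_approves):
    case = Case(alert=alert)
    for agent in (detection_agent, investigation_agent):
        agent(case)  # each agent enriches the shared case
    actions = remediation_agent(case, approved=analyst_approves(case))
    return case, actions
```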
Generative AI co-pilots are also being deployed to assist analysts by triaging alerts, prioritizing incidents, and automating responses. These tools help reduce false positives and streamline workflows, while ensuring that human analysts retain authority over critical decisions. IBM’s QRadar Investigation Assistant, launched in May 2025, uses large language models to improve investigation efficiency within SOC environments.
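A co-pilot's triage step might look like the sketch below, with the language-model call abstracted behind a caller-supplied `complete` function since no particular vendor API is assumed; the prompt wording, priority labels, and auto-close rule are illustrative only.

```python
# Illustrative sketch: an LLM drafts a summary and a suggested priority,
# while anything above low severity is routed to a human analyst.
def triage_alert(alert, complete):
    """`complete` is any callable that sends a prompt to an LLM and
    returns its text response (assumed, not a specific product API)."""
    prompt = (
        "Summarize this security alert in two sentences and suggest a "
        f"priority (low, medium, high):\n{alert}"
    )
    draft = complete(prompt)
    lowered = draft.lower()
    if "high" in lowered:
        suggested = "high"
    elif "medium" in lowered:
        suggested = "medium"
    else:
        suggested = "low"
    return {
        "summary": draft,
        "suggested_priority": suggested,
        "auto_close": suggested == "low",  # only low-risk alerts close automatically
        "needs_analyst_review": suggested != "low",
    }
```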
Research indicates that large language models are used by SOC analysts primarily for sensemaking and context-building, rather than for making high-stakes decisions. This approach helps reduce analyst workload while preserving human judgment for critical tasks. Frameworks for human-AI collaboration in SOCs recommend tiered autonomy, where the level of AI involvement is adjusted based on the importance of the task and trust calibration.
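Tiered autonomy can be expressed as a simple policy that maps task criticality and a calibrated trust score to an allowed level of AI involvement. The tiers and thresholds below are assumptions for illustration, not values taken from the cited study.

```python
# Illustrative sketch of a tiered-autonomy policy: high-stakes tasks stay
# suggest-only, routine tasks may be automated when trust is high enough.
def autonomy_tier(task_criticality, trust_score):
    """task_criticality: 'low' | 'medium' | 'high'; trust_score: 0.0-1.0."""
    if task_criticality == "high":
        return "suggest_only"  # human makes every decision
    if task_criticality == "medium":
        return "act_with_approval" if trust_score >= 0.7 else "suggest_only"
    return "act_autonomously" if trust_score >= 0.5 else "act_with_approval"
```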
Implementing AI-enabled SOCs requires collaboration between security operations, platform teams, data science, and governance functions. This cross-functional alignment ensures shared responsibility and clear accountability for AI system security. As organizations adopt these new practices, the focus remains on extending existing capabilities to address the evolving landscape of AI-driven threats.
* This article is based on publicly available information at the time of writing.
Sources and further reading
- Why security operations must evolve for the AI era | IBM
- IBM Delivers Autonomous Security Operations with Cutting-Edge Agentic AI
- IBM has launched QRadar Investigation Assistant: A New Era of AI-Powered Security
- LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres (arXiv:2508.18947)
- Agentic AI enables an autonomous SOC | IBM
- How AI-driven SOC co-pilots will change security center operations | IBM