CISO Advisory: How To Use Agentic AI In Security

Agentic AI has huge potential in cybersecurity, helping to reduce alert fatigue and find vulnerabilities more quickly. It is therefore no surprise that CISOs are upbeat about the potential of AI tools in security operations, according to Splunk’s annual CISO Report.

The report’s findings show AI adoption is a key priority, with 68% highlighting investment in the area as a leading focus. But only 6% have fully deployed agentic AI in security operations.

This cautious approach is understandable. Take the example of Anthropic’s Claude Mythos, which the AI firm deems too powerful to release to the public, despite its ability to find software flaws at speed.

There’s no doubt agentic AI can open up companies to new attack threats. So, how can CISOs start to use agentic AI in security, while being aware of the risks?

Careful Assessment

It always makes sense to be careful about the deployment of new technology in the business. Security leaders are “carefully assessing the operational risks of introducing greater autonomy into security workflows,” says Mandy Andress, CISO at Elastic. “Agentic AI systems are designed to plan and execute tasks with limited human input. They are also non-deterministic, meaning their outputs can vary depending on context and data inputs, which means behaviour is not fully predictable.”

Agentic AI presents increased security risks compared to generative AI due to its “autonomy and scale,” says Sam Peters, chief product officer at IO. “Leaders may have concerns around agents acting outside of scope or simply being tricked by malicious actors.”

In addition, many leaders have concerns around regulatory compliance and alignment, for example compliance with the EU AI Act and the General Data Protection Regulation (GDPR), according to Peters. “Both regulations require meaningful human oversight, and under the EU AI Act, providers and deployers of high-risk AI systems – a category agentic AI may fall into – will need to monitor the operation of the high-risk AI systems.”

Firms also need to establish AI literacy within their organisation, as well as ensure robust risk management, align with transparency requirements, retain logs for accountability and ensure human oversight – “at a minimum,” warns Peters.

Agentic AI Benefits 

The risks are clear, but agentic AI offers multiple benefits in security. Experts say the immediate and highest-value opportunity is in reactive processes: triage, investigation and containment.

“These are high-volume, highly repeatable, and extremely resource-intensive,” says Martin Riley, CTO at Bridewell. He says phishing investigation and suspicious login analysis are “good examples of processes where agentic platforms have mature, deployable solutions today”.

Agentic AI can give CISOs leverage where security teams are most stretched, says Dom Glavach, CISO at Black Duck. “It can reduce repetitive analysis, accelerate investigations, improve prioritisation, enrich context, and help turn large volumes of technical data into faster, more actionable decisions. The value goes beyond efficiency. It is a force multiplier for resilience, adaptability, and security that scales with the threat.”

CISO AI Strategy

With these benefits in mind, it’s possible for CISOs to start cautiously dipping their toes into agentic AI. Caution should be applied to scope and expectations, not to adoption itself, says Riley. “Agentic AI introduced without clear integration into the wider management stack, without governance over what the agent can and cannot do autonomously, and without a plan for the processes it does not cover, will create complexity rather than resolve it.”

The most effective approach is to introduce agentic capabilities gradually, starting with assistive tasks such as alert triage, investigation support, or data correlation, before expanding into areas where agents may take more autonomous action, says Andress. “This means CISOs and security leaders can take advantage of the benefits while maintaining strong guardrails and human oversight.”

Start by working with your privacy and legal teams to understand the boundaries achievable with your current tooling or natural extensions, says Riley. “If you are a Microsoft house, understand what Copilot for Security can deliver today, and what extending into Azure AI Foundry would look like as a next step.”

Look at your highest-volume, most repeatable reactive processes first, Riley advises. “Phishing and suspicious login investigation are the right starting points. The return on investment is clearer, the risk is more contained, and the market has usable solutions.”

CISOs should approach agentic AI as “a secure enablement challenge,” says Glavach. “Start with clear use cases, tightly scoped permissions, strong logging and auditability, and human accountability for high-impact actions.”
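The guardrails Glavach describes can be sketched in code. The following is a minimal, hypothetical illustration (the action names, `GuardedAgent` class and allow-lists are invented for this example, not taken from any vendor product): low-impact read-only actions run autonomously with an audit trail, high-impact actions are queued for human sign-off, and anything out of scope is denied.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("agent-audit")

# Hypothetical allow-lists: the agent may run read-only enrichment actions
# autonomously; anything high-impact waits for a human decision.
LOW_IMPACT = {"enrich_alert", "lookup_ip", "fetch_logs"}
HIGH_IMPACT = {"isolate_host", "disable_account"}

@dataclass
class GuardedAgent:
    pending_approvals: list = field(default_factory=list)

    def request(self, action: str, target: str) -> str:
        if action in LOW_IMPACT:
            audit.info("AUTO action=%s target=%s", action, target)
            return "executed"
        if action in HIGH_IMPACT:
            audit.info("QUEUED action=%s target=%s", action, target)
            self.pending_approvals.append((action, target))
            return "awaiting human approval"
        # Default-deny keeps the agent inside its defined scope.
        audit.info("DENIED action=%s target=%s", action, target)
        return "denied: out of scope"

agent = GuardedAgent()
print(agent.request("lookup_ip", "203.0.113.7"))   # executed
print(agent.request("isolate_host", "srv-042"))    # awaiting human approval
print(agent.request("delete_volume", "vol-9"))     # denied: out of scope
```

The design choice here is default-deny: an unrecognised action is refused rather than attempted, which directly addresses the "excessive agency" risk the OWASP guidance highlights.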

He advises using frameworks such as the OWASP LLM Top 10 and Agentic Applications Top 10 as implementation guides. “They address deployment realities like prompt injection, excessive agency, and weak identity boundaries directly.”

Chas Clawson, VP of security strategy at Sumo Logic, says the biggest strategic mistake is “giving AI too much open-ended control too early.”

“A better model is to use AI first to build and improve deterministic workflows before asking it to run them autonomously,” Clawson advises. “In other words, use AI to write the playbook, test the script, recommend the change, or correlate and provide context before you let it directly operate the environment. That approach is much more consistent with NIST’s guidance on governance and monitoring, and with where the market itself seems to be heading: assistive first, autonomous later.”
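Clawson's "write the playbook first" model can be made concrete: the AI's output is a deterministic playbook expressed as data, which is validated and human-approved before anything executes. This is a hypothetical sketch (the step names, `validate` and `run` helpers are invented for illustration, not a real SOAR API):

```python
# Hypothetical: an AI assistant drafts a deterministic playbook as data,
# which is validated and signed off by a human before it may run.

PROPOSED_PLAYBOOK = [  # what an assistant might draft for phishing triage
    {"step": "extract_urls", "params": {"source": "email_body"}},
    {"step": "check_reputation", "params": {"provider": "internal_feed"}},
    {"step": "quarantine_message", "params": {"mailbox": "reported"}},
]

ALLOWED_STEPS = {"extract_urls", "check_reputation", "quarantine_message"}

def validate(playbook) -> list[str]:
    """Deterministic checks a reviewer can rely on before approval."""
    errors = []
    for i, step in enumerate(playbook):
        if step["step"] not in ALLOWED_STEPS:
            errors.append(f"step {i}: '{step['step']}' not allow-listed")
        if not isinstance(step.get("params"), dict):
            errors.append(f"step {i}: missing params")
    return errors

def run(playbook, approved: bool):
    """Execute only after explicit human sign-off."""
    if not approved:
        raise PermissionError("playbook requires human sign-off")
    for step in playbook:
        print(f"running {step['step']} with {step['params']}")

issues = validate(PROPOSED_PLAYBOOK)
print("validation issues:", issues or "none")
run(PROPOSED_PLAYBOOK, approved=True)
```

Because the playbook is plain data rather than free-form agent behaviour, the same validation runs identically every time, which is what makes the workflow auditable before any autonomy is granted.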

It’s key that CISOs strategically identify, assess and address the risks related to AI agents and tools, Peters says. “They need to enforce least privilege access controls, ensure traceability, define thresholds for human oversight and monitor agentic AI activity to ensure safety.”

The starting point should be CISOs’ teams, according to Andress. “Many organisations already have capable tools in place, but don’t always get full value from them. Before introducing new autonomous technologies, organisations should ensure their teams fully understand existing workflows, data sources, and detection capabilities. Strengthening skills and awareness within teams often delivers more impact than simply adding another layer of tooling.”

Kate O'Flaherty, cybersecurity and privacy journalist
