Why Firms Can’t Ignore Agentic AI

How big a threat does agentic AI pose to businesses currently? And what should security leaders be doing to address the risk?

Since OpenAI’s ChatGPT burst into the mainstream nearly three years ago, warnings around artificial intelligence (AI) have been coming thick and fast. But as the technology is increasingly integrated into the workplace and employees start to use it under the radar, concerns are growing about the risks posed by shadow agentic AI.

The figures show a worrying trend. In around 5.5% of organizations, employees are running AI agents created with frameworks such as open-source application builder LangChain or the OpenAI Agent Framework, according to Netskope research.

Meanwhile, one executive has warned that, even when used with permission, AI agents need training, just like humans.

Making Its Own Decisions

The danger posed by agentic AI stems from its ability to carry out specific tasks with limited oversight. “When you give autonomy to a machine to operate within certain bounds, you need to be confident of two things: That it has been provided with excellent context so it knows how to make the right decisions – and that it is only completing the task asked of it, without using the information it’s been trusted with for any other purpose,” James Flint, AI practice lead at Securys, said.

Mike Wilkes, enterprise CISO, Aikido Security, describes agentic AI as “giving a black box agent the ability to plan, act, and adapt on its own.”

“In most companies that now means a new kind of digital insider risk with highly privileged access to code, infrastructure, and data,” he warns.

When employees start to use the technology without guardrails, shadow agentic AI introduces a number of risks. These can be malicious or accidental, according to Angus Allan, AI innovation lead, Version 1.

“Because agentic systems can take independent, multi-step actions across an enterprise stack, employees could unintentionally exfiltrate data, corrupt sensitive records, or support network penetration,” he explains. “Even where intentions are good, misalignment with company policy can still create cybersecurity or reputational harm.”

Inadvertently Creating Vulnerabilities

Even when deployed by well-meaning teams aiming to enhance efficiency, unvetted agents can “inadvertently create vulnerabilities,” agrees Bharat Mistry, field CTO at Trend Micro. “They might access or alter sensitive data, integrate with external applications that fall outside policy guidelines, or operate with excessive permissions that bypass established security controls.”

For example, a marketing team might connect an unsanctioned AI assistant to Salesforce to streamline lead management, says Mistry. However, this could unintentionally result in the overwriting or exposure of critical business data. “Without centralised visibility into who created, configured, or authorised these agents, organizations face an increased risk of compliance failures, data loss and operational disruptions,” Mistry warns.

Adding to the risk, agentic AI is becoming easier to build and deploy. This will allow more employees to experiment with AI agents, often outside IT oversight, creating new governance and security challenges, says Mistry.

Agentic AI can be coupled with the recently released Model Context Protocol (MCP), an open standard from Anthropic for orchestrating connections between AI assistants and data sources. By streamlining the work of development and security teams, this can “turbocharge productivity,” but it comes with caveats, says Pieter Danhieux, co-founder and CEO of Secure Code Warrior.

If not carefully controlled, it can “introduce new vulnerabilities and amplify existing ones,” leading to prompt injection attacks, the generation of insecure code and exposure to unauthorized access and data leakage, says Danhieux. The interconnected nature of these tools inevitably expands the attack surface, he says.
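One way to control the expanded attack surface Danhieux describes is to gate every tool an agent can reach behind an approved allowlist, so unsanctioned or destructive tools are refused before they execute. The sketch below illustrates the idea in plain Python; the names (`ToolGate`, `APPROVED_TOOLS`) are hypothetical and not drawn from MCP or any real framework.

```python
# Illustrative sketch: gate an agent's tool calls against an approved allowlist.
# All names here are hypothetical, not from MCP or any real agent framework.

APPROVED_TOOLS = {"search_docs", "read_ticket"}  # sanctioned, read-only tools

class ToolGate:
    """Wraps a registry of tool functions and refuses unapproved calls."""

    def __init__(self, tools, approved):
        self.tools = tools
        self.approved = approved
        self.audit_log = []  # record every attempt, allowed or not

    def call(self, name, **kwargs):
        # Log the attempt first so blocked calls are still visible to security teams
        self.audit_log.append((name, sorted(kwargs)))
        if name not in self.approved:
            raise PermissionError(f"tool '{name}' is not on the approved list")
        return self.tools[name](**kwargs)

tools = {
    "search_docs": lambda query: f"results for {query!r}",
    "delete_records": lambda table: f"deleted {table}",  # dangerous, unsanctioned
}
gate = ToolGate(tools, APPROVED_TOOLS)

print(gate.call("search_docs", query="offboarding policy"))  # allowed
try:
    gate.call("delete_records", table="customers")  # blocked before it runs
except PermissionError as e:
    print(e)
```

Note that the gate logs the attempt before checking the allowlist, so even refused calls leave an audit trail, which matters when the goal is spotting shadow agents probing beyond their remit.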

MCP and other similar protocols allow AI to connect safely to outside services, but future versions will “go considerably further”, says Randolph Barr, CISO at Cequence Security. “We will soon see new frameworks that make cross-domain automation more advanced. Such tools will let an individual's AI plan meetings, shift money, approve workflows, or change systems on different platforms without any problems.”

As these systems get better, it will become increasingly important to keep track of non-human identities, Barr warns.

How To Regain Control Over Agentic AI 

The threat of agentic AI is real and growing, but firms can take steps now to ensure they are on top of the risk. Going forward, organizations must ensure that every AI agent operating within their environment is assigned an owner accountable for its actions, Barr says. “Each agent should be governed through a least-privilege access model and a defined lifecycle, from enrolment and monitoring to eventual deactivation – much like how human users are managed today.”
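The owner-plus-lifecycle model Barr describes can be captured as a small data structure: every agent identity carries an accountable human owner, a narrow set of least-privilege scopes, and an active flag that deactivation flips off. This is a minimal sketch under those assumptions; the class and scope names are illustrative, not from any identity product.

```python
# Minimal sketch of a governed agent identity: accountable owner,
# least-privilege scopes, and a lifecycle from enrolment to deactivation.
# All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    owner: str                                 # human accountable for the agent
    scopes: set = field(default_factory=set)   # least-privilege grants only
    active: bool = True                        # flipped off at deactivation

    def authorize(self, scope: str) -> bool:
        """Allow an action only while the agent is enrolled and the scope granted."""
        return self.active and scope in self.scopes

# Enrolment: grant only what the agent's task requires
bot = AgentIdentity("crm-summarizer", owner="jane.doe", scopes={"crm:read"})
assert bot.authorize("crm:read")
assert not bot.authorize("crm:write")   # never granted, so refused

# Eventual deactivation revokes everything at once
bot.active = False
assert not bot.authorize("crm:read")
```

The point of the single `active` flag is that offboarding an agent mirrors offboarding an employee: one revocation step closes every permission, rather than hunting down scattered credentials.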

At the same time, companies must maintain “continuous visibility” into every agent’s activity, understanding “what it is doing, where it is connected and what data it is accessing at all times”, he says.
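The “continuous visibility” Barr calls for amounts to an append-only record of which agent did what, where, and when, which can then be queried per agent. The sketch below shows the shape of such a trail in plain Python; in practice this would be a tamper-evident log store, and every name here is an illustrative assumption.

```python
# Hedged sketch of continuous visibility: record each agent action with
# actor, verb, and resource, so teams can answer "what is this agent doing?"
# In production this would be an append-only, tamper-evident log store.
import time

AUDIT_TRAIL = []

def record(agent: str, action: str, resource: str) -> dict:
    """Append one structured entry describing an agent action."""
    entry = {"ts": time.time(), "agent": agent, "action": action, "resource": resource}
    AUDIT_TRAIL.append(entry)
    return entry

record("crm-summarizer", "read", "salesforce://leads")
record("crm-summarizer", "call", "https://api.example.com/enrich")

# Visibility query: every resource a given agent has touched
touched = [e["resource"] for e in AUDIT_TRAIL if e["agent"] == "crm-summarizer"]
print(touched)
```

Structured entries (rather than free-text log lines) are what make the follow-up questions cheap: filtering by agent, by resource, or by time window is a list comprehension or a database query away.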

To mitigate the risks posed by agentic AI, organizations need to enforce “strong governance policies” and “robust technical controls,” says Mistry. “Clear usage policies should define who can create, deploy and connect AI agents, what data they can access, and how their actions are monitored. Training programs must ensure employees understand the risks, treating AI agents like digital coworkers that require oversight and accountability.”

An effective AI use policy should set out a clear framework for how staff should approach and use AI, balancing the risks to the organization against the opportunities of the new technology, says Iain Simmons, consultant lawyer at Arbor Law. “Perhaps most important, for most use cases, is the retention of human control, often referred to as a human-in-the-loop. AI can support decision-making but does not replace accountability.”

Privacy, data protection and security together form another cornerstone, says Simmons. “Provide guidance around the use of approved AI tools as opposed to unapproved ones. A good policy does more than state ‘don’t input sensitive data’. It explains why, and points out the difference between, for example, an enterprise AI licence and a free tool where information could be reused in ways you cannot control.”

Agentic AI is here to stay, so its risks can’t be ignored. As frontier capabilities increase, agentic AI will be able to operate autonomously for longer and complete more complex actions, says Allan.

The best defence is “strong cybersecurity hygiene” and “clearly defined guardrails,” he says. “At a policy level, clarity is king. Clear AI policies reduce shadow use and ensure employees understand what tools can, and cannot, be used safely.”

Kate O'Flaherty
Kate O'Flaherty Cybersecurity and privacy journalist
