New research shows one in four CISOs believe losing a single security engineer could directly lead to a breach. With rising tool sprawl, AI-driven complexity and a shrinking talent pool, many teams are far more fragile than they realise.
Your lead security engineer has handed in their notice. They’ve spent years building the defences, wiring together dozens of tools from different vendors, and patching vulnerabilities before anyone even noticed. In two weeks, they’re gone. And with them goes everything they know about your entire security setup. All the tribal knowledge so common in the security domain disappears, leaving you exposed.
New data from Aikido Security’s 2026 State of AI in Security & Development report shows that 1 in 4 CISOs say the loss of a single security engineer would likely directly result in a serious attack or breach. A higher proportion say that incident response times would be significantly delayed (43%), that key security tools or workflows would break or go unmanaged (39%), or that vulnerability fixes would be delayed or deprioritised (36%). More than a third say that compliance and audit readiness would be at risk.
The impact of a security engineer leaving is clearly significant, but why?
Dependency on People

When tools are hard to manage – or when numerous tools from different vendors need to be stitched together – tribal knowledge builds up in one engineer’s head, and it’s unrealistic to expect them to document all of it. Gartner recently said that organisations use an average of 45 cybersecurity tools and called for security leaders to optimise their toolsets.
And in that context, losing the one person who understands how these systems actually work is not just inconvenient: it's a structural risk.
That impact is visible in the data from the State of AI in Security & Development report: using numerous vendors for security tools correlates with more incidents, more time spent prioritising alerts and slower remediation. In short, security engineers have too much on their plate, and most security tools aren’t making the job any easier.
This also helps explain why CISOs and Heads of SecOps set extremely high expectations: when the environment is this complex, only a handful of people can operate it successfully.
“This demonstrates the slim thread which at times holds systems together,” says Kevin Curran, Professor of Cybersecurity at Ulster University.
“It highlights the need to properly allocate resources to cybersecurity,” he adds.
The Unicorn Problem
But hiring a replacement security engineer isn’t straightforward: the skill sets demanded, and the expectations attached to them, are very high.
“Organisations tend to be all looking for the same blend of technical cloud, integration, SecOps, IAM experience but with extensive knowledge in each pillar,” says James Walsh, National Lead for Cyber, Data & Cloud UK&I at Hays.
“Everyone wants the unicorn security engineer whose experience spans all of this, but it comes at too high a price for lots of organisations,” he adds.
Walsh notes that hiring is often driven by teams below the CISO — such as Heads of SecOps — which can create inconsistent expectations of what a ‘fully competent’ engineer should look like.
He also points to a structural imbalance in the market: organisations demand deep expertise across cloud, architecture, monitoring and integration, yet “aren’t paying enough to acquire that, with little compromise on their side.”
Walsh believes businesses need to think more strategically about cyber talent pipelining and the programmes they have in place to develop new talent.
“This can be through solid methods of knowledge transfer to wider team members by their security engineers, and building this into deliverables of the role alongside documenting the ‘tribal knowledge’ in their heads succinctly,” he says.
Walsh calls this the difference between “talent consumers” (organisations that compete for the same fully formed unicorns) and “talent creators” who invest in apprenticeships, entry-level roles, internal mobility and clear cyber career paths.
The AI Paradox
We already know that AI accelerates development and code production, but it’s causing problems too: 1 in 5 CISOs say they have suffered a serious security incident tied to AI-generated code, while AI has given attackers cheaper and easier methods to breach organisations.
On the flipside, security tools are now often using AI to help thwart attackers or prevent security gaps through code reviews, vulnerability fixes, prioritising alerts, penetration testing and more. So could organisations rely on AI to help them with their overreliance on senior security engineers?
According to Igor Andriushchenko, CISO of Lovable, the vibe-coding platform and fastest-growing SaaS company ever, it’s quite the opposite.
“It's a bit ironic that the industry talks so much about replacing people with AI, but in security, we worry much more about not having enough security people,” he says.
“With AI, not only development velocity and code quantity have exploded. What’s also increased by orders of magnitude is attack surfaces, threats, ability of attackers to quickly weaponise vulnerabilities. So security has much more to deal with nowadays on a much shorter timeline with higher stakes, and I could see this situation only becoming more severe with time. Security needs to get into AI adoption ahead of attackers and find ways to multiply work velocity, coverage of controls, robustness of defences to always stay ahead of attackers.”
Igor’s point underscores a growing paradox: AI can automate tasks, but it expands threat evolution even faster. It widens the asymmetry between attacker and defender, increasing the pressure on already thinly stretched security engineers.
One way of fixing this, according to Walsh, is to hire people who aspire to become security engineers and offer them a stepping-stone role that makes the path attainable.
“The decline in entry-level roles or the increased outsourcing of these roles is leaving a big gap that will only widen if hiring is limited to finding the unicorn candidate,” he says.
Modern Security Culture
There’s a link between security engineers’ cognitive overload and negative security outcomes: Aikido’s report found that teams using more vendors took longer to prioritise security alerts, deal with false positives and remediate critical vulnerabilities. Those teams also suffered more incidents.
And so the vicious cycle starts up again: another security vendor is added to the mix to “be more secure”. The burden placed on the security engineer also makes them more likely to look for opportunities elsewhere. After all, why would they stay somewhere the developer experience, standards and security posture are not up to scratch?
Engineers don’t just leave because they’re poached; they leave because the environment makes success impossible. And when success requires personal heroics, burnout isn’t a bug; it’s a feature of the system.
The Bigger Issue
The problem isn’t the engineer who leaves, it’s the system that made their exit a single point of failure. Many organisations chase unicorn hires instead of developing talent, and still overlook basics like documentation, knowledge transfer and reducing tool sprawl. As Walid Mahmoud, Head of DevSecOps at the UK Cabinet Office says, “Security engineers are the backbone of the organisation.” When that backbone is fragile, exposure grows — especially as AI expands the attack surface.
The fix is simple to state: build teams and tools that stay resilient when one person leaves. No organisation should be a resignation away from a breach.
Written by
Sooraj Shah
Content Marketing Lead
Aikido Security
Sooraj Shah is Content Marketing Lead at Aikido Security. He has a background as a journalist for publications such as the BBC, the FT, Infosecurity Magazine and SC Magazine, and as a content marketer for B2B tech companies and start-ups.