Discussion of the power and capabilities of AI is rife, so where do humans fit into the loop?
Cyber threats faced by UK-based organisations are more diverse, frequent, and fast-evolving than ever before. As of November 2024, the National Cyber Security Centre had responded to approximately 50 percent more incidents that it deemed "nationally significant" than in 2023.
This, combined with widespread reports of understaffing and a lack of resources, has led to increasing emphasis being placed on new defensive technologies, many of which are enabled by artificial intelligence (AI).
These tools are gaining prominence due to their ability to increase the productivity of cybersecurity professionals, address skill gaps, and leverage large language models (LLMs) to conduct predictive analysis and vulnerability management. However, they have their shortfalls, which raises the question: can organisations effectively bridge the cybersecurity skills gap with technology?
AI Alone Is Not the Solution
The growing dependence on AI as a solution for cybersecurity is evident in the recent AI-led discovery of a Google zero-day vulnerability, yet there are substantial limitations to relying solely on AI in this fight.
While AI can process vast amounts of data and identify patterns faster than any human (65 percent of businesses say AI helps to reduce false positives and improve efficiency), it lacks the human capacity for critical, creative, and ethical thinking, as well as strategic oversight.
LLMs and other AI systems are constrained by the quality of the data used for training. This limitation makes it challenging for AI to provide the same nuanced analysis as a seasoned cybersecurity professional, who can draw on a wider range of information sources when assessing a flagged security alert.
It is very unlikely that, in the foreseeable future, AI will be deployed without human oversight in IT security environments, particularly given the anticipated challenges surrounding the implementation of human ethics.
Organisations should recognise that while AI can enhance cybersecurity measures when working with massive datasets, such as spotting changes in malicious behaviour through extended detection and response (XDR), it cannot fully replace the nuanced understanding and analytical capabilities of human experts.
The Importance of Human Intelligence Analysts
The ultimate decision maker behind every IT security assessment is still, and will always be, a human being, not AI. This is because behind every ransomware campaign lies a human: someone at a computer demanding a ransom, or a malware gang preparing to publish the personally identifiable information (PII) of countless individuals on dark web forums.
When high-stakes, close-call scenarios arise in a security war room, they demand quick thinking, sound judgement, and an understanding of what is in the company's best interest. These are not moments to have a logical animatronic in the pilot's seat.
This is partly why only 12 percent of security professionals believe AI will fully supplant their roles. Combating human-generated cyber threats requires human expertise, as AI cannot think and act like a human. It is innately human to think beyond the algorithm, understand an adversary's intent, and employ a strategic mindset, all of which can make the difference when identifying and mitigating cyber threats.
Combining Human and External Threat Intelligence
To effectively counter these threats, organisations should integrate AI-driven threat intelligence solutions with human expertise. This unified approach is critical for:
Navigating the grey space: Cyber threats aimed at businesses often emerge beyond the firewall, where digital services, customer interactions on social media and eCommerce sites, and threat actors converge. AI can collect threat data and provide elementary analysis that helps time-short IT security teams; however, organisations still need skilled analysts to prioritise which threats should be handled immediately.
Empowering IT security teams with best-in-class threat technologies and accountability: Organisations should empower their internal teams with the right external threat detection technologies. Even with world-class technology, AI can inadvertently introduce bias into its outputs, which can have significant ethical implications. Human intelligence operatives can better navigate ethical considerations and ensure unbiased decision-making.
Adopting a holistic approach to cyber risk management: Leveraging the synergy between human insight and advanced technologies can significantly enhance an organisation's cybersecurity posture within a short space of time. It also creates a more resilient framework by shifting from a reactive cybersecurity approach to a proactive one, with automation helping the human operative to locate the signals in the noise.
Cyber intelligence activities require specialised knowledge and tradecraft that cannot be solely fulfilled by technology. In the rush to adopt AI solutions, it is crucial not to overlook the irreplaceable value of human intelligence.
Organisations should therefore cultivate a balanced approach that embraces both advanced technologies and human capabilities, ensuring a robust, proactive defence against the evolving cyber threat landscape.
Written by
Lewis Shields
Director, Dark Ops
ZeroFox