Defending against a new AI threat-scape requires understanding how the dynamic security environment is changing.
Whether it’s the soaring number of cybersecurity memes or real-life incidents, the past year has been another tough one for our industry.
The volume and sophistication of cyber-attacks keeps on rising. Take the number of password attacks detected by Microsoft: during the past 12 months, they more than tripled – from 1,287 per second to more than 4,000 per second.
Ransomware, malware, phishing and email compromise attacks are getting ever more targeted and difficult to detect. The number of attack surfaces also keeps sprawling, driven by the need for remote and hybrid work. Many security teams, meanwhile, are short-staffed and under-resourced.
By now, the global shortfall of cybersecurity staff already runs to 4 million people according to an estimate in the Cybersecurity Workforce Study by ISC2 (PDF), while Gartner predicts that lack of security staff and human failure will be responsible for more than half of all cybersecurity incidents.
What already feels like an asymmetric battle now faces an even greater threat: cyber criminals that supercharge their operations by deploying a new generation of artificial intelligence tools. No wonder that any security breach now rapidly gets C-Suite level attention.
The rapid rise of AI means that we face a pivotal moment for the cyber threat landscape – but one that the good guys can win, provided they have the right tools.
The AI threat scape
To defend against the new AI threat-scape, security professionals first have to recognise how much this incredibly dynamic security environment is changing.
We expect that in time more and more modern apps will be powered by the Large Language Model (LLM) that underpin Generative AI. These apps will have an increased threat surface, which means that they will be vulnerable to both inadvertent and deliberate misalignments, such as command injection attacks. The cyber criminals, meanwhile, will use their own AI tools to find new vulnerabilities and exploit them.
With AI, phishing attacks will also get cleverer, more interactive and can easily target multiple countries by using sophisticated translation engines. AI also makes it extremely easy to clone or imitate real websites, and can be used as a weapon to refine phishing messages and improve influence operations with synthetic imagery.
Security teams also have to brace for polymorphic malware that undergoes rapid and dynamic iterations that are harder to detect. In fact, AI will be able to automatically create new malware without much human intervention. Prompt attacks, meanwhile, can help broadcast security vulnerabilities to attackers.
In short, AI tools make it possible to automate increasingly sophisticated attacks, which will increase threat volumes exponentially. Little wonder that the annual cost of cyberattacks continues to grow. According to the latest research by the FBI’s Internet Crime Complaint Center, reported total losses in the United States alone grew from $9 billion in 2021 to more than $10.2 billion in 2022. On a global scale, the losses will be even greater.
The AI counterstrike
Security teams that face this new generation of attacks the traditional way will feel like fighting with one hand tied behind their backs. The good news, though, is that given the right tools, they can fight fire with fire – and win.
Successful cyber defence uses AI as a platform that integrates all security tools. AI can also improve and automate the detection and analysis of threats, suggest incident responses and predict the attackers’ next moves. Even better, combining AI systems dedicated to cyber security with generative AI models makes it possible to turn complex data into easy to understand insights and recommendations. This makes security analysts more effective and responsive, and helps senior executives in an organisation to better understand the situation.
This is not theory, but practice. AI-powered cyber-defence systems are already successfully defending against large-scale cyberattacks, for example in Ukraine.
A key advantage of using AI for cybersecurity is the ability to bring real-time monitoring and analysis to the daily flood of incident reports. Humans are simply neither able to respond as quickly as machines, nor can they work 24/7 giving their full attention. Security Copilot can help catch what other approaches might miss and augment an analyst’s work. In a typical incident, this boost translates into gains in the quality of detection, speed of response and ability to strengthen security posture.
At Microsoft, we call this new generation of AI security tools “cognitive cyber”. These AI tools are trained on security logs, attack telemetry and threat intelligence, and combined with self-learning algorithms, natural language processing and big data mining. The result is something that’s greater than the sum of its parts – the Microsoft Security Copilot. Currently still in Early Preview, the Security Copilot is real-life proof that AI can turn the tide in today’s rapidly changing cyber threat landscape.
And maybe, just maybe, we can also use Generative AI to produce more, but at long last positive memes about the life and times of working in cyber security.
Written by
Paul Kelly
Director, Security Business Group, Microsoft UK
Microsoft
Paul leads the Cyber Security business group for Microsoft UK. Paul’s team works with government, partners, industry bodies and customers to help keep their data safe.