Master AI or Fall Behind

Safeguarding your company in the Age of Generative AI

Generative AI poses challenges to security teams, but also creates huge opportunities.

Eight to ten years ago, security teams feared few things more than IT-savvy staff who brought their own devices to the office and connected them to the corporate network, or tried to install their own software, bypassing corporate legacy equipment and systems to become more productive. This backdoor consumerisation of IT was understandable, but it triggered seemingly endless security nightmares – and very real losses of data, intellectual property and money.

Today, security teams are seeing a new wave of unauthorised IT deployments. Once again, people who want to excel are trying to boost their productivity, this time by experimenting with the new generation of artificial intelligence tools. In many instances they use unauthorised AI tools, which once again risks data leakage and security failures.

Security teams also face another – external – threat that is AI-enabled: attackers who use AI for increasingly sophisticated attacks – whether that’s highly polished phishing in multiple languages, polymorphic malware and ransomware, or clever prompt attacks that manipulate AI engines.

AI is changing the world of computing at a speed never seen before. Let’s not forget: it is little more than a year since OpenAI launched the first public version of ChatGPT, marking the start of a wave of generative AI innovation. Already, AI is establishing itself as a vital tool to augment human productivity. In most organisations, people are – officially or unofficially – experimenting with AI tools to gauge their potential.

For security teams, this poses two serious challenges – which they can, in fact, turn into two huge opportunities:

  • They must prepare their organisation’s defences for the age of AI.
  • They must ensure staff have the right tools – sandboxed ones that make it possible to use AI responsibly, safely and to its full effect.

Defenders know that they are fighting an asymmetrical battle. Attackers often seem to be better skilled, resourced, and organised than many security teams. And attackers, of course, are not playing by the same rules as those imposed on corporate teams. Making things worse, incident response teams usually receive far more security alerts than they can realistically manage.

That doesn’t diminish the scale of the threat: we’ve seen a tenfold increase in password-related attacks, from 3 billion to 30 billion over the past year, and it takes little more than an hour from a successful phishing email to bad actors accessing private data. In fact, at Microsoft we now process and analyse more than 65 trillion signals each day. The increased use of AI by threat actors is set to increase both the volume and the speed of attacks.

Fighting fire with fire – for posture, response and reports

It’s time to fight fire with fire and use AI to build more dynamic and adaptive – cognitive – cyber defences. Currently in early preview, our Microsoft Security Copilot combines traditional and generative AI systems trained on security logs, attack telemetry and threat intelligence. Given a simple natural-language query such as “How can I improve my security posture?”, it will identify potential vulnerabilities and give evidence-based guidance on how to protect your systems.
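To make that interaction pattern concrete, here is a minimal, hypothetical sketch of how a script might put a natural-language question, plus some telemetry context, to an AI security assistant. The endpoint, payload shape and function name are illustrative assumptions, not the actual Security Copilot API.

```python
# Illustrative only: a generic natural-language security-assistant call.
# The URL, payload fields and response shape are hypothetical; this is
# NOT the Security Copilot API.
import requests

ASSISTANT_URL = "https://ai-assistant.example.internal/query"  # placeholder

def ask_security_assistant(question: str, context_signals: list[str]) -> str:
    """Send a question plus recent telemetry excerpts to an AI assistant."""
    payload = {
        "question": question,
        "context": context_signals,  # e.g. recent alerts, log excerpts
    }
    response = requests.post(ASSISTANT_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["answer"]

if __name__ == "__main__":
    print(ask_security_assistant(
        "How can I improve my security posture?",
        ["failed-login spike on vpn-gw-01", "3 hosts missing latest patches"],
    ))
```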

We all know the dangers of alert fatigue, which is why it is so useful to have an AI that is working 24/7 to identify real incidents, provide context for signals, assess scale and impact, and help identify their sources. The AI then supports the incident response with actionable recommendations.
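As a loose illustration of what automated triage looks like in principle (not of how Security Copilot is implemented), the sketch below scores alerts so that the most likely real incidents surface to analysts first; every field name and weight is invented for the example.

```python
# Toy alert-triage scorer: rank alerts so likely real incidents are
# seen first. Fields and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str                  # e.g. "EDR", "SIEM"
    severity: int                # 1 (low) .. 5 (critical)
    asset_criticality: int       # 1 .. 5, value of the affected asset
    corroborating_signals: int   # related alerts in the same window

def triage_score(alert: Alert) -> float:
    """Combine severity, asset value and corroboration into one score."""
    return (
        0.5 * alert.severity
        + 0.3 * alert.asset_criticality
        + 0.2 * min(alert.corroborating_signals, 10)
    )

alerts = [
    Alert("SIEM", severity=2, asset_criticality=1, corroborating_signals=0),
    Alert("EDR", severity=5, asset_criticality=4, corroborating_signals=6),
]
for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{a.source}: score={triage_score(a):.1f}")
```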

To keep top management and other stakeholders in the loop, the Security Copilot also delivers easy-to-understand incident reports in natural language.

To put it simply: where security teams today may have to sift through a deluge of signals and monitor multiple dashboards, Security Copilot offers an integrated platform that provides insight across the whole range of XDR (Extended Detection and Response) and SIEM (Security Information and Event Management) tooling. Security Copilot already works across all Microsoft products and will ultimately integrate a broad range of third-party products as well.

We know from our early preview customers that Security Copilot can save up to 40 percent of the time spent on core security operations tasks. It helps security operations centre (SOC) teams to enhance and grow their capabilities and skills, while supporting workflow and collaboration across teams. This, in turn, means Security Copilot helps to address the significant shortage of cybersecurity talent.

Making your organisation AI-ready

Security leaders must also work closely with top business executives and the IT team to ensure a safe, reliable rollout of generative AI for use by the organisation itself. Once the security team has embraced AI, it must carry out due diligence and help choose the generative AI tools best suited to the organisation’s needs and goals.

They must define clear roles and responsibilities, both for users who consume AI tools and for in-house developers who build new things with AI. Both groups have to be taught how to use AI effectively, ethically and securely. The security team must provide clear guardrails for AI usage to ensure the organisation uses this powerful technology safely – for example by providing AI tools like Microsoft Copilot, which are sandboxed and not trained on the organisation’s own data. A policy sketch along those lines follows below.
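One lightweight way to express such guardrails is an explicit allow-list that onboarding documents and tooling can share. The sketch below is a hypothetical illustration of that idea; the tool names, policy fields and rules are assumptions, not any Microsoft policy format.

```python
# Hypothetical AI-usage guardrail policy: an allow-list of sanctioned
# tools plus a simple check a proxy or onboarding script could apply.
ALLOWED_AI_TOOLS = {
    # tool name -> policy attributes (illustrative values)
    "Microsoft Copilot": {"sandboxed": True, "trains_on_company_data": False},
}

def is_permitted(tool: str) -> bool:
    """Permit a tool only if it is sanctioned, sandboxed and does not
    train on the organisation's own data."""
    policy = ALLOWED_AI_TOOLS.get(tool)
    return bool(policy) and policy["sandboxed"] and not policy["trains_on_company_data"]

print(is_permitted("Microsoft Copilot"))    # True
print(is_permitted("random-ai-notetaker"))  # False: not on the allow-list
```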

A feedback mechanism helps to measure error rates and user satisfaction, and quickly flag any security incidents. All this needs constant monitoring and updating, so that the security posture stays ahead of the rapid evolution of the quality and power of AI tools.
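In practice, such a feedback mechanism can be as simple as aggregating per-interaction records into a few headline metrics. The minimal sketch below assumes invented record fields and thresholds, purely for illustration.

```python
# Toy feedback aggregator: track error rate and user satisfaction for
# AI tool usage, and flag records marked as security incidents.
from statistics import mean

feedback = [
    # (tool, user_rating 1-5, had_error, security_incident)
    ("copilot", 4, False, False),
    ("copilot", 2, True, False),
    ("copilot", 5, False, True),
]

error_rate = mean(1.0 if err else 0.0 for _, _, err, _ in feedback)
satisfaction = mean(rating for _, rating, _, _ in feedback)
incidents = [record for record in feedback if record[3]]

print(f"error rate: {error_rate:.0%}, satisfaction: {satisfaction:.1f}/5")
print(f"incidents to escalate: {len(incidents)}")
```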

Getting an organisation AI-ready – both in terms of security and its operations – is arguably the biggest security challenge since the arrival of the internet and the need to protect corporate networks against malicious actors. This time, however, SOCs know that they can have AI on their side to help them succeed.

Paul Kelly, Director, Security Business Group, Microsoft UK

Paul leads the Cyber Security business group for Microsoft UK. Paul’s team works with government, partners, industry bodies and customers to help keep their data safe.


Brought to you by:

Microsoft
