Dave Harcourt, BT's Chief Security Authority and Fellow, talks about the hype and threats surrounding AI.
How do we sort the ‘hype’ from the reality of AI and its real-world impacts today?
The hype around AI has fuelled a heightened sense of fear around its potential risks, but the reality is that we have already been using AI for many years: it's only the roll-out of GenAI that has captured everyone's attention.
Security teams have long been employing machine learning and AI to prevent distributed-denial-of-service (DDoS) attacks, for example. While the technology and the thinking behind it are nothing new, the scale and the compute capacity involved are.
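As a purely illustrative sketch of that long-standing approach, volumetric DDoS detection often amounts to flagging traffic rates that deviate sharply from a learned baseline. The window size, threshold and traffic figures below are assumptions chosen for the example, not anything drawn from a real deployment:

```python
from collections import deque
import statistics

class RateAnomalyDetector:
    """Flags request rates that deviate sharply from a rolling baseline.

    A toy stand-in for the statistical models security teams have long
    used for volumetric DDoS detection; the window and threshold values
    are illustrative assumptions, not tuned production figures.
    """

    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)  # recent per-second request counts
        self.z_threshold = z_threshold       # std-devs above baseline that count as anomalous

    def observe(self, requests_per_second: float) -> bool:
        """Record a sample; return True if it looks like an attack spike."""
        anomalous = False
        if len(self.history) >= 10:  # need some baseline before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0  # avoid divide-by-zero
            anomalous = (requests_per_second - mean) / stdev > self.z_threshold
        if not anomalous:
            # Only fold normal-looking traffic back into the baseline, so a
            # sustained flood cannot teach the detector that floods are normal.
            self.history.append(requests_per_second)
        return anomalous

detector = RateAnomalyDetector()
for rate in [100, 110, 95, 105, 98, 102, 97, 108, 99, 104, 5000]:
    if detector.observe(rate):
        print(f"Possible DDoS: {rate} req/s is far above the baseline")
```

Production systems use far richer features and models, of course; the point is simply that statistical baselining of this kind has been routine in security for years.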
It's important to remember that the dangers of AI are certainly real, especially when we think about its ability to lower the barrier to entry for cybercrime. Consider, for example, how cyber-criminals can use GenAI to create an extremely authentic-looking and personalised phishing email.
At the same time, there are huge upsides for security teams too, especially when trying to educate others and provide advice around how to get security right.
There are arguments on both sides as to whether AI will prove a bigger threat or a bigger opportunity, but I think the key is not to get caught up in the scare-mongering headlines and instead to look at how you can learn more about where AI can help your business. Find opportunities to learn, share knowledge and collaborate with others across the industry to understand how they are using AI.
This will be vital for staying grounded in the reality of the technology and informed about how it can benefit your business the most.
What advice would you offer organisations to prepare for the risks and benefits of AI?
If threat actors are going to use AI to their advantage, so should we. It's so important that we make sure AI is never the ultimate decision-maker and that we instead focus on 'assisted AI'.
As much as AI can support and assist humans, having a human decision-maker in control at all times is vital.
Also, we can't assume that AI is a fix-all solution and that by implementing it we are 100% covered. It's not the "silver bullet" we've been waiting for in cybersecurity. In reality, it is a matter of "when" an attempted compromise takes place rather than "if" one will.
When people ask how cyber teams can sleep at night, it's less about worrying that we will get hacked, and more about wanting to make sure we've done everything possible to minimise the impact when it happens, which might mean integrating AI alongside existing cybersecurity practices. All you can do is prepare.
Rather than relying solely on AI to protect an organisation, the most important cybersecurity defence is a workforce that has bought into cybersecurity ideals: a team that thinks of security as another area of its responsibility and feels empowered to identify threats.
By working collaboratively, we can share knowledge more effectively so the industry stops falling victim to the same issues, especially in the face of the heightened risk associated with AI. This extends to best practice across the company: don't fall into the trap of thinking security is security's problem alone.
How is BT using AI for the opposite – to boost cyber resilience?
We certainly recognise its potential, and we're using AI to help detect and neutralise threats from hackers targeting business customers. So much so that BT now holds 725 AI patents across Europe, the US and China. This patented technology uses AI to analyse attack data so that companies can protect their tech infrastructure.
For example, our Eagle-i platform combines BT’s industry-leading network insight with advances in AI and automation to predict, detect and neutralise security threats before they get a chance to inflict damage. It has been designed to self-learn from the intelligence provided by each intervention, so that it constantly improves its threat knowledge and dynamically refines how it protects customers across a multi-cloud environment.
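Eagle-i's internals aren't public, so purely as a generic illustration, the kind of self-learning loop described above can be sketched as a detect-intervene-learn cycle in which the confirmed outcome of each intervention adjusts how future events are scored. Every indicator name, weight and threshold below is a hypothetical assumption, not BT's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoopDetector:
    """A generic detect -> intervene -> learn cycle. All indicator names,
    weights and thresholds are hypothetical, for illustration only."""

    # Per-indicator weights the loop adjusts as interventions are
    # confirmed to be true or false positives.
    weights: dict = field(default_factory=lambda: {
        "known_bad_ip": 0.9,
        "unusual_port": 0.4,
        "off_hours_access": 0.3,
    })
    block_threshold: float = 0.8

    def score(self, indicators: list) -> float:
        return sum(self.weights.get(i, 0.0) for i in indicators)

    def detect(self, indicators: list) -> bool:
        """Decide whether to intervene (e.g. block or quarantine)."""
        return self.score(indicators) >= self.block_threshold

    def learn(self, indicators: list, was_real_threat: bool, lr: float = 0.1) -> None:
        """Fold an intervention's confirmed outcome back into the weights,
        so every hit or false alarm refines how the next event is scored."""
        for i in indicators:
            if i in self.weights:
                delta = lr if was_real_threat else -lr
                self.weights[i] = min(1.0, max(0.0, self.weights[i] + delta))

loop = FeedbackLoopDetector()
event = ["unusual_port", "off_hours_access"]
print(loop.detect(event))                # False: score 0.7 is below the 0.8 threshold
loop.learn(event, was_real_threat=True)  # an analyst confirms it was malicious
print(loop.detect(event))                # True: the loop now scores this pattern at 0.9
```

The design point the sketch tries to capture is the feedback: a confirmed hit or false alarm doesn't just close a ticket, it refines the model that judges the next event.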
We're also able to advise customers on best security practices when they come to us with queries around the solutions they're building and the right security to build into them. By bringing AI in at this stage, we help people get security right from the start.
Following the latest ChatGPT update from OpenAI, what are your biggest concerns about how GenAI could pose a cybersecurity risk?
The next problem is always just around the corner. Right now, I see the biggest concern around GenAI as its ability to amplify one of the industry's largest risks: the insider threat. The false authenticity GenAI can bring to phishing emails will create far more potential for devastating spear-phishing and whaling campaigns that will inevitably weaken the first wall of defence within organisations.
Cybersecurity can sometimes be viewed as a mysterious 'dark art' that requires incredibly technical skills. Truthfully, it's far more about psychology and curiosity, and there is a huge human element to working in this industry. We are a human workforce being targeted by human hackers, with both sides using technology like AI to support their efforts.
How we use AI can make us one of the biggest risks to our organisations, but it can also make us the best asset security teams have to protect against evolving attacks.
Written by
Dave Harcourt
Chief Security Authority and Fellow
BT