Last week saw the AI Safety Institute renamed as the AI Security Institute, but what does this mean for government attitudes towards AI?
Last week we covered the announcement that the UK’s AI Safety Institute had changed its name to the AI Security Institute.
It’s only a name change, so does it really matter? Well actually, it’s quite a significant step. Look back at the original announcement from just over a year ago and it states that the then ‘safety institute’ “will advance the world’s knowledge of AI safety by carefully examining, evaluating, and testing new types of AI, so that we understand what each new model is capable of.”
Safe from Progress
The intention was also to conduct fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI. The then Secretary of State for Science, Innovation and Technology, Michelle Donelan MP, said the institute was “the first state-backed organisation focused on advanced AI safety for the public interest” and that it would work towards this by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance.
Of course, since this announcement the UK has had a change of government, and with that comes this change of name. Technology Secretary Peter Kyle said the new name “will reflect its focus on serious AI risks with security implications.” These include its use for cyber-attacks, the development of chemical and biological weapons, and enabling crimes such as fraud and child sexual abuse.
Speaking at the Munich Security Conference, Kyle also said that the AI Security Institute will collaborate with the MoD’s Defence Science and Technology Laboratory, the NCSC and the Home Office.
Grabbing Your Attention
That is why this is important. A new name doesn’t really grab the attention, but efforts to create conversation and policy around AI do. AI has been a persistent trend in conversation and research over the past couple of years, and now it seems that government is preparing to take a proactive stance too.
In an email to SC UK, a government spokesperson said: “The first job of any government is keeping its citizens safe, and ensuring our national security is something we will never compromise on. The Institute’s mandate is now crystal clear - to focus on identifying and mitigating the most serious risks posed by AI – putting the Institute precisely where the British public expects it to be.
“This renewed focus ensures our citizens, and those of our allies, are protected from those who would look to use AI against our institutions, democratic values, and way of life.”
AI Engagement
As we said previously, this effort to address AI is about more than just cybersecurity and AI’s use by cyber-criminals. A recent report from TeamViewer found 82 percent of UK-based decision makers engage with AI on a weekly basis, with 72 percent of respondents considering their organisations’ AI adoption to be mature.
Suzanne Button, EMEA Field CTO at Elastic, said it is reassuring to see the government taking AI security more seriously, calling this a pragmatic approach that focuses on tangible, immediate risks.
“As AI infrastructure evolves, it is critical that governments acknowledge that the same tools enabling innovation are also readily available to bad actors for malicious purposes,” she said. “We have already seen AI being exploited to spread misinformation, automate cyber-attacks, and manipulate public discourse. So establishing a body to safeguard national security against AI-driven threats is a logical and necessary step.”
She welcomed the addition of ‘security’ to the institute’s name, saying it signals a step in the right direction, “but its impact will depend on how well this framework translates into actionable safeguards, industry collaboration, and regulatory clarity.” After all, “safety and security must work in tandem.”
However, Joseph Carson, chief security scientist and advisory CISO at Delinea, was more tepid in his embrace of the announcement, saying that while the Institute’s efforts to expand AI security knowledge are invaluable, “true resilience requires organisations to take proactive steps to strengthen their own security measures.”
He claimed that the risks of unchecked AI operations, such as those involving privileged access or data integrity, can be catastrophic, and that businesses using AI should demand a security-by-design and security-by-default approach from their vendors.
Perhaps the work of the AI Security Institute will deliver that outcome, driving policy to ensure better standards from vendors and for practitioners. For now, it’s more than a new name: it’s a chance for government to join the conversation and join up departments for a collective understanding.
Written by Dan Raywood, Senior Editor, SC Media UK
Dan Raywood is a B2B journalist with more than 20 years of experience, including covering cybersecurity for the past 16 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.
He has spoken at events including 44CON, Infosecurity Europe, RANT Conference, BSides Scotland, Steelcon and ESET Security Days.
Outside work, Dan enjoys supporting Tottenham Hotspur, managing mischievous cats, and sampling craft beers.