
AI in the new Government: King Talks 'Appropriate Legislation'

The King's Speech acknowledged the need for requirements for the most powerful AI models.

As part of the King’s Speech earlier today, marking the start of the new parliamentary session and the first since the election of Sir Keir Starmer’s Labour Party, guidance on AI was highlighted.

Specifically, the speech announced that the government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

A Step Forward?

Whilst this announcement is not part of any Bill or legislation, it does mark a step forward in AI being taken seriously. The UK government has previously been accused of playing catch-up on AI, particularly on regulation.

Whilst we saw the likes of an AI white paper, NCSC guidelines and a safety summit launched last year, is AI now about to be taken seriously at a national level?

Previously the UK government said it remains “committed to a context-based approach that avoids unnecessary blanket rules that apply to all AI technologies, regardless of how they are used”, noting “this is the best way to ensure an agile approach that stands the test of time.”

Regulation and Confidence

AI regulation is a tricky subject, though, as it can cover everything from chatbots to in-office tools, many of which are legitimate and work efficiently.

Adoption of AI is increasing: a survey of 612 IT decision-makers in the UK and US by JumpCloud released this week found 34% of UK respondents plan to implement AI technology in the next six months, compared to 19% in the last survey. Additionally, 75% view AI as a net positive, versus 71% in Q1 2024. 

Curtis Wilson, staff data scientist at the Synopsys Software Integrity Group, said the previous government’s whitepaper on AI regulation highlighted the importance of interoperability with EU and US AI regulation, and he hoped that this is something the Labour government commits to as well.

“With companies operating in global markets, the burden of complying with multiple inconsistent regulatory frameworks would be onerous,” he said. “This is especially true for smaller companies and start-ups that might lack the requisite resources to comply. The EU AI Act was able to take this into account and I would hope to see a UK act containing similar provisions.”

Fear and Trust

Maybe the Skynet-style fear of AI is unwarranted: confidence amongst practitioners, regulatory steps and government leadership all seem to be in place. But can we expect complete trust?

Wilson said the greatest problem facing AI developers is not regulation but a lack of trust in AI: for an AI system to reach its full potential, it needs to be trusted by the people who use it. Regulatory frameworks, he argued, are an essential component in building that trust.

He said he hopes that the government relies on industry experts when creating legislation, so that it can be both abstract and overseen by competent regulatory bodies - which will be able to more quickly react to a changing technological landscape. “It’s important to remember that AI is a complex subject and in a stage of rapid improvement,” he said.

“When even many developers don’t fully understand the technology and its implications, what chance do policymakers have?”

This poses an interesting point: who actually understands the technology and its capabilities well enough to regulate it? Adam Pilton, senior cyber security consultant at CyberSmart and a former Detective Sergeant, says that just as the CEO of a car manufacturer does not need to be a mechanic, a CISO does not need to know the inner workings of AI. “They must however be sufficiently confident in understanding the fundamentals of artificial intelligence and keep updated on how AI can be used to enhance their daily operations both now and in the future."

That is true, and therein lies one of the major problems of AI: who knows it well enough to regulate it, let alone trust it and work with it? We’ve moved on from the basic concept of AI as an automation tool, and its use is becoming more and more widespread.

However, this mention today, at the start of a new parliament, could be a positive step forward, and one that goes a long way towards resolving these issues.

Dan Raywood Senior Editor SC Media UK

Dan Raywood is a B2B journalist with more than 20 years of experience, including covering cybersecurity for the past 16 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.

He has spoken at events including 44CON, Infosecurity Europe, RANT Conference, BSides Scotland, Steelcon and ESET Security Days.

Outside work, Dan enjoys supporting Tottenham Hotspur, managing mischievous cats, and sampling craft beers.
