
The EU AI Act: What UK Firms Need To Know

How does the EU AI Act affect UK firms following Brexit, and what do CISOs need to do to protect their organisations?

At the start of August, the EU AI Act came into force, outlining explicit requirements and responsibilities for those who develop and deploy AI systems.

As the threat from AI grows, the Act aims to ensure AI systems are safe and transparent, and that they respect fundamental rights within the EU. As part of this, it stipulates that AI systems should be overseen by people, rather than automation, to prevent “harmful outcomes”.

Many experts think the regulation has been a long time coming. Today, adversaries are able to use AI in attacks or exploit vulnerabilities in the AI systems currently in use.

The vast amounts of data collected by AI systems are also an issue – not least due to the risk of falling foul of strict data protection regulation such as the EU’s General Data Protection Regulation (GDPR).

Under scrutiny

AI has already come under intense regulatory scrutiny. Last year, OpenAI’s ChatGPT chatbot got in trouble with Italian regulators, while Meta and Google have faced barriers to launching AI technology in EU markets.

The new EU AI Act takes a risk-based approach: systems deemed to pose an “unacceptable risk”, such as those offering biometric emotion recognition or “social scoring”, face a ban from operating in the bloc. High-risk systems, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements.

Generative AI, including ChatGPT, isn’t classified as high-risk but will have to comply with transparency requirements and EU copyright law. In addition, the Act says AI-generated content should be clearly labelled as such.

The first-of-its-kind regulation is seen as groundbreaking and wide-reaching, with an impact on other markets. So, how does the EU AI Act affect UK firms following Brexit, and what do CISOs need to do to protect their organisations from AI-based risks?

What does the EU Act mean for UK firms?

While the UK isn’t a member of the EU following Brexit, the Act still impacts UK companies that deal with firms in the EU. The regulation extends accountability to “any AI system that impacts an EU citizen”, so UK-based and global companies operating within the EU must comply, says Julian Brownlow Davies, VP of advanced services at Bugcrowd.

Additionally, the UK may choose to align some of its own AI regulations with the EU standards to “facilitate smoother trade and cooperation”, he says.

Currently, the UK doesn’t have binding AI regulation of its own. However, on July 17, the King’s Speech proposed a set of binding measures on AI, a departure from the government’s previous approach. Specifically, the government plans to establish “appropriate legislation to place requirements on those working to develop the most powerful [AI] models”.

The EU has chosen to be “very prescriptive”, but the UK’s approach to regulation will be lighter touch, seeking as much as possible to “foster innovation while mitigating the risks”, says Adam Biddlecombe, co-founder of Mindstream.

The UK has traditionally favoured an “agile, principles-based approach to AI regulation, focusing on innovation and flexibility”, says Brownlow Davies. While the light-touch approach has encouraged rapid AI development, he says “more robust regulations are thought to be necessary to ensure ethical standards, mitigate risks, and provide clear guidance for developers and users”.

AI issues

There’s no doubt AI poses multiple risks from a number of angles. The technology can be used by attackers to create malware, with generative AI tools lowering the barrier to entry. Adversaries are also using AI tools to craft phishing emails, aid social engineering and produce more convincing deepfakes.

At the same time, the technology presents “significant data protection challenges”, particularly in terms of safeguarding sensitive company information put into AI systems, says Derreck Van Gelderen, head of AI strategy at PA Consulting. 

The risks posed by AI – such as inadvertently exposing sensitive company information by feeding it into AI systems – are “significant”, agrees Matt Aldridge, principal solutions consultant at OpenText Cybersecurity. These risks are heightened by the increasing sophistication of attackers, who are “adept at exploiting any vulnerabilities in AI systems to gain unauthorised access to valuable data”, he says. 
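One concrete control that follows from this advice is screening prompts for sensitive material before they leave the organisation. The sketch below is purely illustrative and not drawn from any guidance quoted in this article: the `redact_sensitive` helper and the small set of patterns are hypothetical examples, simply stripping email addresses, API-key-like strings and National Insurance numbers from text before it is passed to an external AI service.

```python
import re

# Hypothetical example patterns; a real deployment would need a much richer
# set (customer IDs, internal hostnames, contract terms, and so on).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # UK National Insurance format
}

def redact_sensitive(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarise the email from jane.doe@example.com (API key sk-abcd1234abcd1234abcd)."
    print(redact_sensitive(prompt))
    # Summarise the email from [REDACTED-EMAIL] (API key [REDACTED-API_KEY]).
```

A filter like this is a stopgap rather than a guarantee: pattern matching misses context-dependent secrets, which is why the experts quoted here pair technical controls with staff training and clear policy.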

The UK government has issued some guidance on the safe use of AI, focusing on ethical principles and data protection. However, there is “a notable gap” in actionable guidance specifically tailored for CISOs and companies using AI at scale, says Van Gelderen.

Stance on AI

Taking this into account, it’s up to CISOs to ensure their organisation's stance on the safe use of AI is “clearly communicated, understood and where possible, enforced”, says Adam Pilton, senior cybersecurity consultant at CyberSmart.

“Making sure staff using these tools are sufficiently trained to do so and building the core knowledge of your organisation will ensure you have a solid foundation in the safe use of AI.”

Additionally, organisations working with EU partners “can and should be getting up to speed with this legislation, understanding its impact upon them and taking appropriate steps to comply”, Pilton says. However, before any significant investments or decisions are made, organisations will need to understand what the UK government is going to do, he says.

For now, it’s important to strike a balance between AI-driven innovation and safety. CISOs and organisations should be proactive in their approach, staying ahead of regulatory changes by implementing strong data protection measures and fostering a culture of cybersecurity awareness, says Aldridge. “By doing so they can mitigate risks and ensure compliance with future regulations, positioning themselves as leaders in the safe and innovative use of AI.”

Kate O'Flaherty Cybersecurity and privacy journalist