European reaction to Biden’s Executive Order on AI

SC Media UK sourced comments from a range of cybersecurity experts on the world's biggest AI regulation announcement. Here we give you a taste of the consensus...

US President Biden's freshly signed Executive Order on AI (EO on AI) has been generally welcomed around the world. The cybersecurity community has commended its robust commitment to AI safety, rigorous testing and security by design.

The directive mandates developers to share safety test results with the US government, ensuring AI systems are extensively vetted before public release. The order also champions the development of standards, tools, and tests for AI's safety and security.

The order underscores a robust commitment to safety, cybersecurity, and rigorous testing, says Casey Ellis, CTO and founder of Bugcrowd:

“Overall, the order reflects a proactive approach to manage AI's promise while mitigating its potential risks.”

The order is the first US AI act with substance, says Neil Serebryany, CEO and founder of CalypsoAI:

“This is the first AI act with teeth in the West. It’s a big move and signals that the US is serious about leading AI globally. It’s also a gamechanger as it offers very specific guidance for cybersecurity and building it into design. It offers massive opportunities for cybersecurity vendors globally and plays into the fact that every country needs to balance AI progress and risk accordingly.”

More alignment between the US and Europe would be beneficial, posits Richard Starnes, veteran SC Europe Awards judge and CISO at Six Degrees:

“President Biden’s Executive Order takes a sensible policy direction on a number of fronts; however, AI represents global challenges and global benefits. A joint release at the UK’s AI Safety Summit 2023 would have better demonstrated our shared thinking on the AI security landscape and how best to address those challenges moving forward together.”

With the exception of vendors that sell AI solutions in the US, the UK’s cybersecurity market will not be directly impacted, says Dr Ilia Kolochenko, CEO and founder of ImmuniWeb:

"The indirect consequences may, however, be quite palpable and long-lasting. For instance, given that the majority of leading AI vendors are incorporated in the US, their approach to the development and maintenance of AI/ML technologies (which are broadly defined in the EO) may become quite different.

“Resultingly, and almost inevitably, AI technology may become either less sophisticated at certain tasks or become significantly more expensive."

The US directive will generate vendor opportunities, says Azeem Aleem, managing director, northern Europe, Sygnia:

“It’s great news that the EO on AI is acknowledging the use of AI for malicious purposes, in great detail, such as the use of deep fakes. This EO will naturally have consequences for Europe as many US companies are affected and operate globally.

“While security by design is not a new concept, the US EO will evolve the concept and present opportunities for European companies in areas like software development, incident monitoring and threat development.”

The EO is a remarkable effort to provide guardrails for the future of humanity, says Dr. Martin Kraemer, security awareness advocate, KnowBe4:

“It follows development efforts across the globe, where the G7 jointly, as well as major influences like the US and the EU, work on ensuring a future with AI.

“The specifics between the US and EU, of course, differ somewhat. Across these societies and their respective legislations, the expectations of people towards trustworthy AI will be different. Hence, aiming at developing responsible AI with such a comprehensive approach is a critical first step.”

The EO is the most comprehensive approach to AI we’ve seen to date, says Michael Covington, VP of Strategy at Jamf:

“As much as we may want to encourage organic and unconstrained innovation, it is imperative that some guardrails be established to ensure developers are mindful of any downstream effects, and that regulators are in place to help monitor for potential damages.

“Biden’s executive action is broad-based and takes a long-term perspective, with considerations for security and privacy, as well as equity and civil rights, consumer protections, and labour market monitoring. The intention is valid: ensuring AI is developed and used responsibly.”

Gary Barlet, federal field chief technology officer at Illumio, would have liked to see more positivity in the directive:

“There's too much doom and gloom [in Biden’s EO] and not enough focus on how we can use AI for good. Where’s the call to leverage AI to improve cybersecurity? How are we going to train people to learn and use AI more effectively?  

“It’s also important to remember that while it’s good to get some safeguards put in place, AI is a global issue. Our safety checks are limited to what we can control. This is not going to be a blanket solution for AI – it's targeted to solve specific AI problems that are top of mind in the US. I do think it could be an important precedent for other more global AI regulation."
