Access to both AI models granted to US agency.
OpenAI and Anthropic have signed an agreement with the National Institute of Standards and Technology’s (NIST) AI Safety Institute (AISI) to grant the government agency access to the companies’ AI models.
The agreement will provide a framework for the AISI to access new models both before and after their public release, reports SC US.
NIST will leverage this access to conduct testing and research, evaluating the capabilities and potential safety risks of major AI models. The institute will also offer feedback to the companies on how to improve the safety of their models.
The US AISI is housed within NIST, which is part of the US Department of Commerce. The institute was established in 2023 under President Joe Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
AISI Director Elizabeth Kelly said safety is essential to fuelling breakthrough technological innovation, and that these agreements are just the start, “but they are an important milestone as we work to help responsibly steward the future of AI.”
Written by
Dan Raywood
Senior Editor
SC Media UK
Dan Raywood is a B2B journalist with more than 20 years of experience, including covering cybersecurity for the past 16 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.
He has spoken at events including 44CON, Infosecurity Europe, RANT Conference, BSides Scotland, Steelcon and ESET Security Days.
Outside work, Dan enjoys supporting Tottenham Hotspur, managing mischievous cats, and sampling craft beers.