Leading generative AI developers OpenAI and Anthropic have agreed to give the US government access to their new models for safety testing as part of agreements announced on Thursday.
The agreements were made with the US AI Safety Institute, which is part of the National Institute of Standards and Technology (NIST), a federal agency.
Regulation of AI has been a major concern since the advent of OpenAI's ChatGPT, with tech companies pushing for a voluntary approach to opening their technology to government oversight.
The agency said it would provide feedback to both companies on potential safety improvements to their models before and after their public release, working closely with its counterpart at the UK AI Safety Institute.
"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.
The agency said the evaluations would aim to support the voluntary commitments made by leading AI model developers, such as OpenAI and Anthropic, as they innovate.
"Our collaboration with the US AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment," said Jack Clark, co-founder and head of policy at Anthropic.
"This strengthens our ability to identify and mitigate risks, advancing responsible AI development," he added.
The collaboration is part of work connected to a White House executive order on AI announced in 2023 that was designed to provide a legal backdrop for the rapid deployment of AI models in the United States.
Washington is eager to give tech companies free rein to innovate and experiment with AI, in contrast to the European Union, where lawmakers passed an ambitious AI Act to regulate the industry more closely.