U.S. Government to Receive AI Models from OpenAI and Anthropic

OpenAI and Anthropic have announced plans to collaborate with the U.S. government by sharing their AI models.

OpenAI and Anthropic, two leading developers in the generative AI space, have reached agreements with the U.S. government to provide access to their new models for safety testing, as announced on Thursday.

These agreements are with the U.S. AI Safety Institute, which operates under the National Institute of Standards and Technology, a federal agency.

Calls for AI regulation have grown louder since the launch of OpenAI's ChatGPT, with technology companies advocating a voluntary framework that gives the government oversight of their innovations.

The agency intends to offer feedback to both companies regarding possible safety enhancements to their models, both prior to and following their public release, while collaborating closely with the UK AI Safety Institute.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the U.S. AI Safety Institute.

The evaluations conducted will support the voluntary efforts of major AI model developers, including OpenAI and Anthropic, as they push the boundaries of innovation.

"Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment," noted Jack Clark, co-founder and head of policy at Anthropic.

"This strengthens our ability to identify and mitigate risks, advancing responsible AI development," he added.

This collaboration aligns with a White House executive order on AI, signed in 2023, which seeks to establish a framework for the safe development and deployment of AI models across the country.

While Washington seeks to allow tech firms the freedom to innovate and experiment with AI, the European Union has taken a different approach by enacting an ambitious AI Act to impose stricter regulations on the industry.

In a contrasting move, pro-regulation lawmakers in California passed a state AI safety bill on Wednesday, which awaits the governor's signature to become law. The bill includes penalties for violations, a provision OpenAI has opposed on the grounds that it could hinder research and innovation.

In a social media statement regarding the new agreement with the U.S. government, OpenAI CEO Sam Altman emphasized the importance of having national-level regulation, subtly critiquing California's state law.

Ian Smith contributed to this report for TROIB News