U.S. Government Expands AI Safety Testing to Google DeepMind, Microsoft, and xAI Before Models Go Public



Wednesday, May 7, 2026 | 8:30PM ET

The federal government announced Tuesday it has struck new agreements with three of the most powerful artificial intelligence companies in the world — Google DeepMind, Microsoft, and xAI — giving government evaluators access to advanced AI models before they are released to the public, as well as after deployment. The move marks a significant expansion of the U.S. government’s effort to vet cutting-edge AI technology for national security and public safety risks.

The agreements were announced by the Center for AI Standards and Innovation, known as CAISI, which is housed within the Department of Commerce’s National Institute of Standards and Technology, or NIST. Under Commerce Secretary Howard Lutnick, CAISI has been formally designated as the federal government’s primary point of contact with the private AI industry — a central hub for testing, research, and best-practice development related to commercial AI systems.

The new deals build on earlier voluntary agreements — the first of their kind — that NIST struck in 2024 with Anthropic and OpenAI. The current agreements with Google DeepMind, Microsoft, and xAI have been renegotiated to align with directives from the Commerce Secretary and with President Trump’s America’s AI Action Plan. Chris Fall said independent, rigorous testing is essential to understanding frontier AI and its national security implications, and that the expanded partnerships allow the agency to scale its work at a critical moment.

A key feature of the agreements is that companies will provide CAISI with versions of their models that have reduced or removed safety guardrails — allowing evaluators from across the federal government to probe capabilities and risks that would not be visible in standard public releases. Testing can take place in classified environments, and the agreements are drafted with flexibility to adapt quickly as AI technology continues to advance rapidly. CAISI has already completed more than 40 such model evaluations, including reviews of systems that had not yet been released to the public at the time.

The announcement comes as the Trump administration shifts its posture on AI regulation. The administration initially prioritized an accelerated, largely unregulated approach to AI development in its first year, focused on building domestic infrastructure and advancing U.S. leadership over China in the field. That approach is now being recalibrated, according to reporting by The New York Times, as national security officials grow increasingly concerned about the risks posed by rapidly advancing AI models. The new oversight framework stops short of mandatory pre-clearance but establishes a standing federal review channel that could be formalized further by future policy action.

The practical stakes extend beyond government. For businesses selecting AI vendors — particularly companies with federal contracts or aspirations to win them — the new agreements carry commercial weight. Analysts note that a model’s relationship with the Department of Commerce and NIST is becoming a meaningful signal of long-term viability in the enterprise market. A vendor that has not secured a favored position within the federal testing framework carries what one analyst described as a “massive contagion risk” for any business tied to government work.

The Business Software Alliance backed the announcement, with Aaron Cooper saying CAISI has the right institutional expertise to work with private sector partners on evaluating frontier models. The voluntary structure of the current agreements leaves open the question of whether Washington will eventually move toward more enforceable standards — but for now, the government has established the architecture it would need to do so.

JBizNews Desk
© JBizNews.com. All rights reserved. This article is original reporting by JBizNews Desk. Unauthorized reproduction or redistribution is strictly prohibited.
