The White House might stop freely promoting artificial intelligence technology, with reports from both The Wall Street Journal and The Washington Post indicating that the US government will adopt a more cautious stance after witnessing the capabilities of Anthropic’s Mythos, the latest generation of AI models.
According to Thursday's WSJ report, US Vice President JD Vance was “alarmed” after a call with the heads of the biggest artificial intelligence companies, with the Mythos model among the most worrying because of its ability to find software vulnerabilities on its own.
The main concern, according to the WSJ, is that these new models could target critical infrastructure administered by local authorities rather than the national government, and that local governments lack the tools to disrupt such attacks when they occur.
US National Economic Council Director Kevin Hassett said the Trump administration was working on a way to regulate how high-tech companies introduce new AI models to the market, with the main proposal being a system similar to the FDA’s for testing new drugs.
This would, according to Hassett, guarantee that “they’re released to the wild after they’ve been proven safe,” while an official working on the project told The Washington Post that the details of how it would work are “still being hashed out.”
From ‘safety’ as a taboo to a necessity
Nathan Calvin, general counsel and vice president of state affairs at Encode, a nonprofit AI advocacy group, told The Washington Post that officials have begun pairing the words “safety” and “AI” in the White House, a combination seen as taboo for the Trump administration until now.
“We just heard a bunch of top Cabinet officials saying the words ‘safety’ and ‘AI’ in the same sentence, which is not how the admin was talking about these issues even a few months ago,” said Calvin.
The White House addressed the topic, saying that it was “exploring the balance between advancing innovation and ensuring security” alongside the top AI developers in the US.
Israel’s use of AI
In December 2023, the government of Israel introduced an “AI Policy on Artificial Intelligence Regulation and Ethics” that aims to apply “soft regulations” to the sector without impeding the development of these technologies.
“These principles we have published facilitate development and responsible innovation, enabling the use of AI, while safeguarding basic rights and the public interest,” said Ofir Akunis, the then-Innovation, Science and Technology Minister.
In September 2024, the Innovation Ministry launched a national expert forum on AI, with experts from academia, industry, and leading civil society organizations to help develop a government strategy and policy to promote the safe use of artificial intelligence.
At the military level, the IDF established during the recent wars a unit responsible for integrating and relaying artificial intelligence and “big data” intelligence. Its commander, Col. Rotem Beshi, told The Jerusalem Post that the unit played a critical role in transforming the air force’s effectiveness during the recent war with Iran.
A new system managed by Matzpen, known as the LOCHEM system, handled all the planning for attacks on Iran, beginning with its work alongside the air force’s special, relatively new Iran unit, said Beshi.
Yonah Jeremy Bob contributed to this report.