Despite the Ban, Federal Agencies Use Anthropic AI Behind the Scenes

Federal agencies are reportedly testing Anthropic’s advanced AI despite restrictions. Discover what’s behind the move and its implications for AI regulation.

2026-04-15 06:52:53 - Mycashmate

"Trump Banned Anthropic… But US Agencies Are Secretly Testing Its Super-Powerful AI Hacker Anyway!"

Even though President Donald Trump banned federal agencies from working with AI company Anthropic, some of those agencies are quietly finding ways around the restriction. They’re especially interested in the company’s brand-new AI model, which can spot serious software weaknesses better than most human experts.

The model, called Claude Mythos, was unveiled just last week. It has left researchers both impressed and nervous because of how good it is at finding hidden flaws in computer systems — the kind that even top cybersecurity pros have missed for years.

In the past few days, people from at least two big federal agencies have contacted Anthropic directly. They want to bring this new model into their cyber defense work, according to a former senior U.S. technology official who knows about the conversations.

The Commerce Department’s Center for AI Standards and Innovation (CAISI) is also actively putting Mythos through its paces. They’re testing how well it can hack into systems, according to four people with knowledge of the situation. That group includes one current and one former cybersecurity official, a former Trump administration official, and a former senior national security official.

On top of that, staff from at least three different congressional committees have asked for or already held briefings with Anthropic over the last week. They want to understand exactly what Mythos can do when it comes to scanning for cyber weaknesses, according to three congressional aides who work on AI policy matters.

All the people mentioned here asked to stay anonymous so they could talk about these behind-the-scenes government discussions with Anthropic.

This quiet push to get access to Mythos — even while the government is still fighting the company in court — shows just how much some officials want to use this powerful new tool to strengthen America’s cyber defenses. It also highlights how the Trump administration’s effort to cut off Anthropic is being carefully worked around by people inside the government.

Back in late February, President Trump and Defense Secretary Pete Hegseth ordered all federal agencies to stop using Anthropic’s technology. The reason? Anthropic’s CEO, Dario Amodei, took a strong stand against letting the Pentagon use their AI models for fully autonomous lethal attacks or for mass surveillance on American citizens. Last month, Hegseth went further and officially labeled Anthropic a “supply chain risk” — something rarely done to an American company. That move basically blocks Anthropic’s AI from being used on any Defense Department contracts.

Charlie Bullock, a lawyer and senior research fellow at the Institute for Law and AI, called the situation ironic. “It’s ironic that the U.S. government tried to ban U.S. government use of Anthropic products — and then a few weeks later, there’s this revolutionary Anthropic product that’s very important for cybersecurity, and has very important national security implications,” he said.

Three congressional aides told reporters they’re frustrated. They feel the White House campaign against Anthropic is stopping the government from using some of the best new technology available to protect its own networks — networks facing more attacks than ever from countries like Russia and China, which are also racing to build their own powerful AI tools.

One of the aides put it bluntly: “The Pentagon has shot itself in the foot by giving the middle finger to the most capable AI provider.”

The Pentagon declined to comment when asked about the situation.

The White House released a statement saying the Trump administration “continues to work and engage with AI companies to ensure their models help secure critical software vulnerabilities.” They added that the White House is “proactively engaging across government and industry to ensure the United States and Americans are protected.”

When Anthropic introduced Mythos last week, the company said it was only making the model available to a small group of trusted tech and cyber organizations. They explained that it was simply too dangerous to release to the general public because of how well it can discover and exploit unknown software weaknesses.

One Anthropic official, speaking anonymously, confirmed that the company had given briefings to U.S. government officials about what Mythos can do in hacking and cyber defense. But the company’s public announcement didn’t name any specific federal agencies.

Bloomberg also reported on Tuesday that IT teams at the Treasury Department are looking for ways to use Mythos to find and fix hidden problems in their own networks.

Anthropic has pushed back against the government’s actions. Last month the company filed lawsuits challenging the supply chain risk designation in two different federal courts, a consequence of how jurisdiction over such cases is divided under federal law. The rulings were split: a judge in Northern California temporarily put part of the government’s decision on hold, but the D.C. Circuit Court of Appeals later temporarily upheld it.

Charlie Bullock pointed out that if the government had won both court cases, it would have been much harder for agencies to get their hands on Mythos. He believes federal agencies “would not have been allowed” to test the model if the California judge had ruled completely in the government’s favor.

Even so, one former senior national security official said Trump’s very public attacks on Anthropic have made it tougher for government teams to work closely with the company now that Mythos is ready. In February, the president posted on social media calling the people running Anthropic “Leftwing nut jobs” because of their refusal to let the government use the AI for mass domestic surveillance or autonomous lethal weapons.

That same official said the strong public statements from the administration created a “chilling effect.” Many agencies became hesitant to openly work with Anthropic or use its model to hunt for and fix cybersecurity holes in government systems. Doing that kind of work properly needs big teams of software engineers and serious money — things that could easily run into trouble with Trump’s directives.

Still, there are clear signs that the Trump administration understands it can’t completely ignore what Mythos might mean for national security — whether that’s good or bad.

The four people familiar with the situation said cybersecurity experts at CAISI — which is part of the National Institute of Standards and Technology — started testing Mythos’s hacking abilities even before Anthropic officially announced it. CAISI was created in 2024 and got a new name and focus last year under the Trump administration. Right now, researchers there are “red teaming” the model — basically attacking it to see what it can really do and what risks it might create for national security.

This whole situation shows the tricky spot the government is in. On one hand, the White House wants to punish Anthropic for its positions on military use and surveillance. On the other hand, some officials inside the system clearly see Mythos as a powerful tool that could help protect the country from growing cyber threats.

For now, the testing and quiet conversations continue behind the scenes. Whether this leads to more open cooperation or creates even more tension between the Trump administration and Anthropic remains to be seen. But one thing is clear: when it comes to cutting-edge AI that can strengthen America’s defenses, not everyone in government is willing to let politics get completely in the way.