PALM BEACH, Fla.—After weeks of back-and-forth with AI company Anthropic, the Pentagon is actively talking with all four major U.S. AI players—Anthropic, OpenAI, Google, and xAI—to ensure the companies and the Defense Department are at “the same baseline” regarding Pentagon expectations, the undersecretary of defense for research and engineering said Tuesday.
“We actually signed contracts with all four of them over the summer without a lot of specificity,” Emil Michael told a group of venture capital investors during an Amazon Web Services event. “Now we want to deploy [them] on our system so other people can build agents and pilots, and deploy it,” he said.
In other words, after months of exercises and experiments, the Pentagon is looking to allow different command elements and business entities to build AI agents that can perform a wider variety of tasks with minimal human oversight.
The discussions between Anthropic and the Pentagon have grown increasingly tense. Sources inside the company told Reuters the Defense Department was pushing to use Anthropic’s AI models for domestic surveillance and autonomous weapons targeting, and Axios reported the Pentagon is “close” to cutting ties with the company over Anthropic’s refusal to give the Pentagon unrestricted access to its models. Some Pentagon officials, speaking anonymously, have even vowed to make Anthropic “pay a price” for its perceived lack of cooperation.
Anthropic, which is heavily backed by AWS, is “having productive conversations, in good faith” with the Pentagon, according to a company spokesperson.
Michael struck a far more conciliatory note than other Pentagon officials who have spoken on the spat, and appeared at the event beside AWS Vice President of Worldwide Public Sector Dave Levy.
However, Michael did not budge on the Pentagon’s red line. “We want all four of them,” he said, describing OpenAI, Google, xAI, and Anthropic as America’s “AI champions” with the financial staying power for long-term partnership.
Still, Michael noted there are a wide variety of roles the companies might be able to play, and the Pentagon wants different business and command elements to determine what to do with the models, rather than have the companies tell the military what they can and cannot do.
“We’re wanting all four companies to hear the same principle, which is: we have to be able to use any model for all lawful use cases.”
Of the four companies the Pentagon has contracted with, Michael said Anthropic is the only holdout on the question of whether its ethical safeguards or the Pentagon’s take precedence.
“Some of these companies have sort of different philosophies about what they want it to be used for or not, but they’re selling to the Department of War. We do Department of War-like things.”
The Pentagon’s own safety or ethical safeguards must overrule company safeguards, he said. He described an “extremely dangerous” hypothetical in which the U.S. military could be using an AI agent that suddenly stopped functioning due to embedded company safeguards. “That’s a risk I cannot take.”
The Pentagon has its own safeguards, a list of ethical principles enacted during the first Trump administration that governs everything from development to testing to deployment of AI systems. While the Pentagon’s newest AI acceleration strategy questions the very meaning of “responsible AI,” Michael said adherence to the AI ethics principles is still very much in place.
“The good news/bad news about a hierarchical department is when there’s a set of secretary-validated guidance directive memos that lays out the policies and procedures, people follow them. So that’s not an issue.”