
Anthropic Defies Pentagon Over AI Safeguards Deadline

Newslooks/ WASHINGTON/ J. Mansour/ Morning Edition/ Anthropic is refusing Pentagon demands to loosen its AI safeguards ahead of a 5:01 p.m. ET deadline. Defense officials warn the company could be labeled a supply chain risk if it does not comply. The dispute has drawn support for Anthropic from tech leaders, including OpenAI CEO Sam Altman.

FILE – Dario Amodei, CEO and co-founder of Anthropic, attends the annual meeting of the World Economic Forum in Davos, Switzerland, Jan. 23, 2025. (AP Photo/Markus Schreiber, File)

Quick Look: Pentagon vs. Anthropic

  • Deadline set for 5:01 p.m. ET Friday.
  • Pentagon demands unrestricted use of Anthropic’s AI model.
  • Anthropic refuses to drop safeguards.
  • Risk of “supply chain risk” designation.
  • Defense Production Act mentioned as possible tool.
  • OpenAI CEO Sam Altman backs Anthropic.
  • Debate centers on autonomous weapons, mass surveillance.
  • Silicon Valley divided over “woke AI” accusations.
FILE – Defense Secretary Pete Hegseth stands outside the Pentagon during a welcome ceremony for the Japanese defense minister at the Pentagon in Washington, Jan. 15, 2026. (AP Photo/Kevin Wolf, File)

Deep Look: Anthropic Defies Pentagon Over AI Safeguards Deadline

WASHINGTON — A high-stakes standoff between the Trump administration and artificial intelligence firm Anthropic is nearing a Friday deadline, with the company refusing to weaken its ethical safeguards despite pressure from the Pentagon.

Anthropic CEO Dario Amodei said Thursday that the company “cannot in good conscience accede” to the Defense Department’s demand that it allow unrestricted use of its AI systems.

The Pentagon has given Anthropic until 5:01 p.m. ET Friday to comply or face consequences that could extend beyond canceling its defense contract.

Pentagon Ultimatum

Defense officials have warned that if Anthropic refuses to adjust its policies, it could be labeled a “supply chain risk” — a designation typically associated with foreign adversaries. Such a move could jeopardize the company’s broader partnerships across the federal government and private sector.

Defense Secretary Pete Hegseth and Pentagon spokesperson Sean Parnell have said the military will not allow a private company to dictate operational terms.

“We will not let ANY company dictate the terms regarding how we make operational decisions,” Parnell wrote on social media, emphasizing that the department wants to use Anthropic’s AI model “for all lawful purposes.”

Anthropic, which develops the AI chatbot Claude, says it has sought narrow guarantees that its technology would not be used for mass surveillance of Americans or deployed in fully autonomous weapons systems without human oversight.

In a statement Thursday, the company said the Pentagon's proposed contract revisions, framed as compromise language, would still allow the department to disregard those safeguards.

Industry Reaction

The dispute has sparked debate across Silicon Valley. Employees from OpenAI and Google — both of which also hold Pentagon AI contracts — signed an open letter expressing support for Anthropic’s position.

In a notable development, OpenAI CEO Sam Altman publicly sided with Anthropic during a CNBC interview Friday.

“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman said. He questioned the Pentagon’s “threatening” approach and suggested that many AI firms share similar ethical red lines.

Anthropic was founded in 2021 by former OpenAI leaders, including Amodei, who left the company citing safety concerns.

Elon Musk, whose company xAI also supplies AI systems to the military, criticized Anthropic on social media, accusing it of ideological bias.

The debate has revived broader tensions about “woke AI,” a term some Trump-aligned tech figures use to criticize safeguards embedded in large language models. Anthropic refers to its guiding AI principles as a “constitution,” which shapes how its chatbot behaves and responds.

Lawmakers and Military Voices Weigh In

Some lawmakers from both parties and former defense officials have voiced unease with the Pentagon’s approach.

Retired Air Force Gen. Jack Shanahan, who previously led the Defense Department’s AI initiatives, expressed sympathy for Anthropic’s position. Shanahan had overseen Project Maven, a controversial effort to use AI to analyze drone footage, which prompted protests from Google employees in 2018.

Shanahan wrote that Anthropic’s safeguards appeared reasonable and cautioned that current AI systems are not yet mature enough for fully autonomous national security applications.

“They’re not trying to play cute here,” he said.

What’s at Stake

The Pentagon has suggested it could invoke the Defense Production Act, a Cold War-era law that gives the government broad authority to compel companies to prioritize national defense needs.

Amodei argued that labeling Anthropic a security risk while simultaneously considering use of its technology under emergency powers presents a contradiction.

Amodei said the company still hopes to reach an agreement. “If not, we will work to enable a smooth transition to another provider,” he said, signaling that Anthropic is prepared to lose the contract rather than compromise its safety standards.

The outcome could shape how artificial intelligence firms engage with the U.S. military going forward — and test the balance between national security priorities and corporate ethical commitments at a pivotal moment in the AI race.
