
OpenAI CEO Sam Altman announced late Friday that the company has reached an agreement with the Department of Defense over the use of its artificial intelligence models. The announcement follows President Donald Trump’s statement that the government will not work with AI rival Anthropic. “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman said in a post on X. “Throughout our conversations, the DoW showed a deep commitment to safety and a willingness to work together to achieve the best outcomes.” Altman’s post caps a tumultuous week for the AI industry, which has become the center of a political fight over how its models may be used.

Earlier in the day, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security” after weeks of tense negotiations. The designation is typically reserved for foreign adversaries, and it requires Department of Defense vendors and contractors to certify that they do not use Anthropic’s models. President Trump also ordered every federal agency to “immediately cease” all use of Anthropic’s technology. Anthropic was the first lab to deploy its models on the Department of Defense’s classified network, and it had been negotiating the terms of its contract with the agency before talks broke down. The company sought guarantees that its models would not be used for fully autonomous weapons or for mass surveillance of American citizens, while the Department of Defense wanted Anthropic to agree that the military could use the models for any lawful purpose.

In a memo to employees on Thursday, Altman said OpenAI and Anthropic adhere to the same “red lines.” In his Friday post, he said the Department of Defense had agreed to its restrictions. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman said. “The DoW aligns with these principles, incorporates them into law and policy, and we embed them within our agreement.” Why the Department of Defense accommodated OpenAI but would not do the same for Anthropic remains unclear, though government officials have repeatedly voiced concerns about what they see as Anthropic’s excessive focus on AI safety. Altman said OpenAI will build “technical safeguards to ensure its models behave as they should” and will assign personnel to “assist with our models and to ensure their safety.”

“We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept,” Altman said. “We have made clear our strong desire to see a shift away from legal and governmental action toward amicable resolutions.” Anthropic said in a statement on Friday that it was “deeply saddened” by the Pentagon’s decision to designate the company a supply-chain risk, and that it intends to challenge the designation in court.