Altman Says OpenAI Can’t Make Decisions About Military Operations
Sam Altman, the CEO of OpenAI, told employees that the company cannot control how the Pentagon uses its AI systems. He made the remarks during internal discussions about how the military could deploy OpenAI's technology, and they heightened concerns about the role of AI tools in warfare.
According to reports, Altman said that technology providers do not make operational decisions during military conflicts. He stressed that the government retains full authority over strikes and military campaigns. Bloomberg and CNBC both reported on the remarks after they were shared internally.

Source: Fortune
Pentagon Pressure On AI Firms Raises Ethical Concerns
The debate comes as the Pentagon increases pressure on AI companies to support military capabilities. Officials have reportedly asked for the removal of safety guardrails that restrict how AI systems can be used, changes that could open the door to broader use of artificial intelligence in military operations.
These requests have strained relations between the US Department of Defense and technology companies. Many AI workers worry that powerful systems could be drawn into dangerous or controversial military missions, and the dispute feeds into a broader debate about the ethics of using AI in warfare.
AI Technology Already Used In Military Operations
Reports indicate that US forces have already used artificial intelligence systems in a number of military operations. AI tools reportedly assisted with targeting decisions during recent operations related to the Iran conflict, and similar technologies are said to have been used in the capture of Venezuelan leader Nicolás Maduro.
These developments show how quickly AI is being folded into modern warfare planning. Military planners increasingly rely on data analysis and automated decision-support systems, and critics argue that the growing use of AI in conflict raises serious questions of accountability and ethics.
Anthropic Refused Pentagon Deal Over Safety Concerns
Anthropic, an OpenAI competitor, recently declined a Pentagon deal over ethical concerns. The company feared its AI systems could be used for domestic mass surveillance or for autonomous weapons, risks its executives said exceeded the bounds of responsible AI deployment.
After the refusal, the US Secretary of Defense labeled Anthropic a potential supply chain risk, a designation previously applied to companies with ties to foreign governments. Analysts said the move could significantly damage Anthropic's finances and partnerships.
Pentagon Turns To OpenAI After Anthropic Dispute
Shortly after the dispute with Anthropic, the Pentagon announced a partnership with OpenAI. The deal appeared intended to replace Anthropic's technology in certain military applications, and the timing quickly drew criticism from technology workers and outside observers alike.
Critics argued that the agreement showed OpenAI was willing to accept military conditions its competitors had rejected on ethical grounds. The backlash prompted internal discussions among employees about how their technology might be used, along with growing public criticism across the artificial intelligence community.
Altman Admits Deal Created Negative Perception
Sam Altman later acknowledged that the Pentagon deal had been announced too hastily. He said the episode made OpenAI appear opportunistic and unprepared, an admission that formed part of a broader effort to respond to criticism of the company's military partnership.
OpenAI has tried to reassure employees and the public by insisting that its technology will be used in accordance with the law. Executives stressed that government agencies remain responsible for operational decisions during military missions, and the company says it cannot dictate how governments use the technologies they purchase.
Rival CEO Criticizes OpenAI And Political Connections
Dario Amodei, the CEO of Anthropic, sent employees an internal memo sharply critical of Altman. He accused OpenAI's leadership of cooperating with government officials while publicly projecting a commitment to safety, and said his own company had chosen to hold itself to stricter ethical standards.
The memo also pointed to political donations by OpenAI leaders who support President Donald Trump, which Amodei argued shaped how the company dealt with the Pentagon. The dispute highlights widening political and ethical divides within the AI industry.