The chief executive of OpenAI, Sam Altman, has acknowledged that the company cannot fully control how artificial intelligence technologies may ultimately be used by the United States Department of Defence, highlighting growing concerns about the role of AI in modern warfare and national security.
Altman made the remarks while discussing the expanding use of artificial intelligence by governments and military institutions. He noted that once powerful AI technologies become widely available, companies that develop them have limited ability to dictate how state actors deploy them.
“There are limits to what any company can control once technologies are in the world and governments begin using them,” Altman said. “Ultimately, decisions about national security tools are made by governments, not technology providers.”
His comments come as the Pentagon accelerates efforts to integrate artificial intelligence into defence systems, including intelligence analysis, battlefield logistics, cybersecurity and autonomous technologies. The military views AI as a critical capability that could shape the future of global security and defence strategy.
Altman stressed that while OpenAI has established internal policies designed to guide responsible use of its technologies, enforcement becomes more complex when national governments adopt similar capabilities or develop their own systems.
“We can set policies for how our services are used, but governments will make their own decisions about defence applications,” he said. “That is why broader governance and international cooperation around AI are so important.”
The remarks highlight the growing debate over who should be responsible for regulating powerful AI systems as they become embedded in sensitive sectors such as defence. Technology companies have increasingly called for clearer regulatory frameworks to ensure that advanced AI tools are used safely and ethically.
Read also:
- OpenAI Revises Military Deal Following Public Backlash
- Trump Orders US Federal Agencies to Halt Use of Anthropic AI
OpenAI has previously said its technologies are intended for civilian and beneficial uses, though it has also acknowledged that governments are among the institutions exploring how AI can support national security operations.
The United States Department of Defence has invested heavily in artificial intelligence research in recent years, launching multiple initiatives aimed at strengthening military capabilities through data analysis, automation and decision-support systems.
Defence officials have argued that AI will play a decisive role in maintaining strategic advantage, particularly as other global powers also accelerate their investment in emerging technologies.
Altman said the rapid development of AI underscores the importance of cooperation between governments, industry and international organisations to establish safeguards.
“This technology is going to affect national security, economics and society broadly,” he said. “It’s essential that governments, companies and researchers work together to ensure it is deployed responsibly.”
The discussion reflects a broader global debate over how to govern artificial intelligence as it becomes more powerful and widely adopted, particularly in areas with significant geopolitical consequences.

Senior Reporter/Editor
Bio: Ugochukwu is a freelance journalist and Editor at AIbase.ng, with a strong professional focus on investigative reporting. He holds a degree in Mass Communication and brings extensive experience in news gathering, reporting, and editorial writing. With over a decade of active engagement across diverse news outlets, he contributes in-depth analytical, practical, and expository articles exploring artificial intelligence and its real-world impact. His seasoned newsroom experience and well-established information networks provide AIbase.ng with credible, timely, and high-quality coverage of emerging AI developments.
