What ‘OpenAI for Government’ means for US AI policy


OpenAI maintains several government-facing initiatives, including testing partnerships with the National Labs and ChatGPT Gov. Last week, the company announced it is rolling them all into a single umbrella initiative: OpenAI for Government.

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Pilot program with the DOD

The initiative’s first priority will be a pilot program with the US Department of Defense (DOD), capped at $200 million, to “identify and prototype how frontier AI can transform [DOD] administrative operations, from improving how service members and their families get health care, to streamlining how they look at program and acquisition data, to supporting proactive cyber defense,” OpenAI’s announcement states.


The company added that all use cases in the contract must comply with OpenAI policies and guidelines. 

In April 2024, Microsoft reportedly pitched DALL-E to the DOD as a battlefield training tool, which sparked debate over OpenAI’s usage policies. Since its founding, OpenAI’s policies page had prohibited use of its models for “military and warfare” purposes, but in January 2024 the company removed those terms from the usage language.

The page now forbids using the service “to harm yourself or others,” including to “develop or use weapons, injure others or destroy property.”

By naming use cases ranging from administration to cyber defense, the announcement keeps a certain amount of flexibility — and, to some degree, opacity — in how OpenAI’s tech will be used in practice.


“The DOD will be looking for tools that can integrate into their complex and highly secure infrastructure,” Ben Van Roo, CEO and co-founder of Legion Intelligence, told ZDNET. “OpenAI’s new government offering might start with something like ChatGPT Gov, but the real value will be realized when models can be embedded into actual applications and workflows, and operate within a variety of environments — including classified, degraded, and disconnected networks.”

Access to ChatGPT Enterprise and Gov 

OpenAI for Government isn’t just setting its sights on the DOD. 

Under the new initiative, OpenAI is making its “most capable models within secure and compliant environments” available to federal, state, and local government workers via ChatGPT Enterprise and Gov. Those workers will also receive dedicated support from OpenAI, custom models for national security, and previews of forthcoming OpenAI products and features so teams can plan how best to integrate them.

“Our goal is to unlock AI solutions that enhance the capabilities of government workers, help them cut down on the red tape and paperwork, and let them do more of what they come to work each day to do: serve the American people,” the announcement reads.

Van Roo observed that OpenAI’s move is part of a larger sea change in the tech sector. “Major AI organizations are increasingly engaging with the DOD,” he said. “Until recently, many kept their distance from the federal space, but are now actively building toward it.”


In practical terms, this is partly because the DOD has much more funding at its disposal than other government agencies. Proprietary AI tools aren’t cheap, especially when they need to be tailored for hyper-secure environments. “It needs to work within legacy systems, across a variety of networks, and meet its guidelines and restrictions,” Van Roo noted. He added that the DOD has also been the most “forward-leaning” among agencies when it comes to AI adoption.

President Trump’s AI Action Plan

More broadly, however, this move is occurring against the backdrop of the Trump administration’s approach to AI policy. The administration is set to deliver its AI Action Plan by July 22, a deadline President Donald Trump set when he rolled back former President Joe Biden’s executive order on AI in January.

Since taking office, Trump has reduced AI safety guardrails within government and cut AI research funding — to the alarm of industry leaders — while expanding partnerships with AI companies including OpenAI and Anthropic.


Meanwhile, Trump’s so-called “big, beautiful bill,” or H.R. 1, his administration’s primary piece of legislation, includes a 10-year moratorium — eons in AI acceleration years — on state AI legislation, which would leave all regulation and enforcement to the federal level if passed.

As the July deadline inches closer, it appears the Trump administration is outsourcing the bulk of AI policy to contracts with private AI companies rather than cultivating independent regulation.
