President Donald Trump displays a signed executive order during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, DC. Chip Somodevilla/Getty Images

Trump’s order to remove ‘woke’ AI from government may have ‘downstream impacts,’ experts worry

Some experts and advocates warned that last week's executive order will have a chilling effect on free speech.

Days after the Trump administration issued a flurry of artificial intelligence executive orders and an AI Action Plan meant to accelerate uses of the technology, experts are questioning how the government will implement the new policies — especially those meant to ban “woke” AI — and warning about their potential implications for free speech.

“There are many laudable and logical aspirations such as encouraging open-source models and building world-class scientific datasets,” one former senior AI official in a federal agency told Nextgov/FCW. 

But “the Action Plan is also dotted with inherent contradictions,” they said, noting that one recommendation “suggests proactive removal of specific topics framed as protecting free speech.” They requested anonymity for fear of retribution.

One of the new executive orders Trump signed last week directed agencies to consider “ideological neutrality” as they procure large language models. 

OMB has 120 days to issue guidance on the new terms that agencies should include in contracts buying large language models. Given the size of the government’s procurement base, the new policy also stands to impact the industry writ large as AI companies seek to sell their models to the government.

Agencies should consider if large language models are “truth-seeking” and have “ideological neutrality,” the order says. 

The focus of the executive order is a marked change from the Biden administration’s concerns about AI, which targeted the harms of potential biases, such as those based on race or ethnicity, encoded into AI systems.

Republican concerns about AI have focused more on free speech and content moderation. The new executive order cites an incident where “one major AI model changed the race or sex of historical figures.” Google’s AI image generator last year produced pictures that showed the founding fathers of the U.S. and Nazi soldiers as Black.

How OMB will set up guidance that a range of federal agencies can implement as they procure large language models, though, remains to be seen. The Office of Science and Technology Policy didn’t respond to a request for comment.

Big-picture, the ambiguity “pulls all the power to decide what’s acceptable into the administration, which effectively gives them power and control over the outputs of the LLMs that companies are producing,” said Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination and Redesign at Brown University.

On a call with reporters last week, a senior White House official said that DEI is the “main” focus of the order, but offered few details on how the government would be screening for it. 

The executive order itself offers the “manipulation of racial or sexual representation in model outputs” and “incorporation of concepts like critical race theory” as examples of how DEI shows up in AI models. 

“We expect [the General Services Administration] to put together some procurement language that would be contractual language requiring that again that LLMs procured by the federal government would abide by a standard… of seeking accuracy and truthfulness and not sacrificing those things due to an ideological bias,” the official said. 

The new executive order follows the Pentagon contracting with Grok from xAI, Elon Musk’s company, earlier this month. The week prior, the AI chatbot called itself “MechaHitler” and posted antisemitic comments following an update instructing the bot not to shy away from politically incorrect claims. xAI has since apologized and said it updated the model.

Trump also signed two other executive orders meant to promote the export of American AI and speed up the building of AI data centers, including by streamlining permitting. 

The administration’s AI Action Plan comes with a list of recommendations for how agencies can accelerate the use of AI. 

Among them are the formalization of the Chief AI Officer Council, the creation of a new talent exchange program to detail data scientists and software engineers to agencies needing AI talent, and the establishment of an AI procurement toolbox at GSA. That toolbox would be meant to help agencies choose a model that’s already in line with various privacy and data laws. 

GSA is also tasked with implementing a program that enables it to quickly share advanced AI capabilities with other agencies. The administration’s plan recommends that agencies give employees access to large language models to the maximum extent possible and directs the Office of Management and Budget to pilot the use of AI in agencies that deliver benefits and services to the public. 

The National Institute of Standards and Technology will also have to publish guidelines that agencies can use to evaluate AI systems for their missions. 

The plan recommends that NIST take references to misinformation, diversity, equity and inclusion and climate change out of its AI Risk Management Framework to “ensure that frontier AI protects free speech and American values.”

NIST debuted that voluntary framework, meant to help organizations manage the risks of AI, in early 2023 after years of public comments and workshops meant to drive consensus across the public and private sector. Agencies aren’t required to use the framework, although some lawmakers have introduced bills that would require them to do so.

“One of the principles on truth-seeking [in the executive order on “woke” AI] prioritizes historical accuracy, scientific inquiry, and objectivity. Yet, the Action Plan directs NIST to eliminate reference to misinformation, DEI, and climate change,” the former official said. “This principle seems to suggest proactive removal of those data from model training or retrieval, which ironically will almost certainly introduce biases.”

Cody Venzke, senior policy counsel with the American Civil Liberties Union, said in a statement that the AI action plan “may have downstream impacts on free speech, potentially censoring how AI can talk about race, gender, climate, or inequality.”

The new policies may also have a chilling effect on what agencies do to evaluate AI and manage risks, said Venkatasubramanian, who previously served in OSTP during the Biden administration.

The executive order could also potentially lead to “different classes of models that could create divergent outputs — one that’s compliant with EO for federal agency use and one that [is for] commercial/public use,” the former official said. 

Agencies’ capacity to scale AI across the government and implement these new policies as the administration shrinks the federal workforce is another open question, they said.

Many in industry have praised the new policies, however.

Victoria Espinel of the Business Software Alliance said in a statement that the plan “reinforces the roles of the Center for AI Standards and Innovation (CAISI) and NIST in the development of standards and evaluation tools, a foundation for both domestic AI governance and in promoting international collaboration on AI.” 

“The administration’s vision takes essential steps to ensure the U.S. can win the global AI race by prioritizing U.S. energy production and infrastructure development to power AI’s growth, promoting U.S. AI leadership internationally by supporting the export of the full stack of American AI technologies to partners and allies, and accelerating adoption of AI across the public and private sectors,” Jason Oxman, president and CEO of trade group ITIC said in a statement.