We've seen attorneys land in hot water over their apparent use of artificial intelligence to draft their pleadings. Now, it's a pair of government agencies.
On Thursday, a federal judge handed down a decision that's notable not so much for clipping the wings of the Trump administration (that's to be expected in so many of the cases challenging the administration's actions) as for the major warning it includes about the use of artificial intelligence in government decision-making.
The decision, issued by Judge Colleen McMahon of the U.S. District Court for the Southern District of New York, actually involves two consolidated cases against the National Endowment for the Humanities (NEH) and the United States DOGE Service (the entity behind the Department of Government Efficiency, commonly "DOGE"), along with several of their individual administrators and employees. The plaintiffs are organizational entities and individuals who sued to challenge the April 2025 termination of more than 1,400 grants, which the administration maintains:
were lawful efforts to implement presidential directives, eliminate grants associated with “diversity, equity, inclusion, and accessibility” (“DEIA”), “diversity, equity, and inclusion” (“DEI”), “environmental justice,” and “gender ideology,” and reduce discretionary spending in accordance with the priorities of the new administration.
A key factor in McMahon's ruling is the allegation that federal officials used ChatGPT-assisted reviews to help identify humanities grants connected to “DEI” themes for termination. In finding the process unconstitutional and unlawful, McMahon notably observes:
The record reflects that these ChatGPT determinations were generated without any additional context beyond the cursory spreadsheet descriptions themselves. Given what courts now know about the hallucinatory propensities of ChatGPT and similar generative-AI tools, it would hardly be surprising if ChatGPT inferred, from DOGE’s repeated requests, that [DOGE employees Justin Fox and Nate Cavanaugh] were looking for reasons why grants could be characterized as DEI – and therefore terminable – and supplied “rationales” simply in order to satisfy the user’s perceived demand. The utter lack of reasoning behind so many of its “rationales” certainly suggests as much.
In other words, McMahon posits that the AI was essentially guessing at what reviewers wanted to find in order to flag grants as ripe for termination. To be clear, the problem isn't that AI was used in the process, but that it was apparently used with limited human oversight.
Ultimately, McMahon concludes that the administration's mass termination of NEH grants violated the First Amendment and the equal protection component of the Fifth Amendment because the grants were allegedly targeted on the basis of viewpoint, race, sex, religion, national origin, and other constitutionally sensitive criteria. She further found that DOGE officials lacked lawful statutory authority to direct or influence the terminations and that the decision-making process itself was arbitrary, opaque, and infected by improper AI-assisted classifications.
Unlike much of our coverage regarding suits against the administration, this decision involves a judgment on the merits. The administration will almost certainly appeal to the Second Circuit and will likely seek a stay of McMahon's ruling pending appeal — particularly given the sweeping nature of the injunction and the potentially significant financial consequences attached to restoring the grants. But while it's one thing for an appellate court to review an emergency injunction entered early in a case on a limited factual record, it's quite another to review a final merits determination following discovery, a developed evidentiary record, and detailed constitutional findings by the district court. And McMahon's opinion leaves very little ambiguity about what she believes actually happened behind the scenes, which means the administration has a significant uphill climb at the appellate level.
And beyond the ultimate outcome of this case, there's a broader issue: Government agencies — at every level — are increasingly experimenting with AI-assisted systems for everything from benefits determinations to immigration processing to fraud detection and regulatory enforcement. Courts are now having to grapple with a question that's only going to become more pressing: How much human oversight is legally required when artificial intelligence helps shape government decisions?
How much do we want there to be?