Grassley Demands Accountability After Judges Release AI Errors


Federal judges in New Jersey and Mississippi admitted this summer that staffers used artificial intelligence in drafting court orders that contained mistakes, and those orders were withdrawn after problems were spotted. Senator Chuck Grassley pressed for answers, calling the rulings “error-ridden.” Both judges said the drafts bypassed normal review and that practices have been tightened to avoid repeats.

The core issue is simple: courts relied on AI-assisted drafting where human checks fell short. When a draft lands in the public record with factual mistakes, litigants suffer and confidence in the system erodes. This isn’t a tech novelty excuse; it is an accountability problem that needs direct fixes from judges and their chambers.

U.S. District Judge Julien Xavier Neals acknowledged a June 30 draft decision in a securities case “was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers.” He said a law school intern used OpenAI’s ChatGPT for legal research without authorization or disclosure, violating chamber policy and the law school’s rules. That combination of unauthorized tools and weak oversight produced a document that had to be pulled.

Neals spelled out his prior reliance on verbal instructions and moved to fix that. “My chamber’s policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders,” Neals wrote. “In the past, my policy was communicated verbally to chamber’s staff, including interns. That is no longer the case. I now have a written unequivocal policy that applies to all law clerks and interns.”

In Mississippi, U.S. District Judge Henry Wingate reported a similar lapse when a law clerk used Perplexity “as a foundational drafting assistant to synthesize publicly available information on the docket.” The judge said a July 20 draft decision was released in error because of that lapse in oversight. Wingate removed and replaced the original order in a civil rights case after identifying “clerical errors.”

Wingate accepted responsibility and promised changes. “This was a mistake. I have taken steps in my chambers to ensure this mistake will not happen again,” the judge wrote. Those steps matter, but they are reactive. The bigger question is whether the judiciary will adopt uniform, enforceable standards rather than leaving policy inconsistently applied across chambers.

Grassley pressed both judges for explanations and framed the problem as institutional, not accidental. He said the judiciary must prevent lax habits that let generative AI slip into legal work without clear rules or transparency to the parties involved.

Grassley made a direct appeal for decisive action from the courts. “Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again,” Grassley said in a statement. “Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law,” the senator continued. “The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy. As always, my oversight will continue.”

Republican oversight in this context is straightforward: insist on clear rules and consequences so litigants are protected and judges are accountable. When technology is used, disclosure and verification must be mandatory. Courts should require that any use of AI in research or drafting be documented, reviewed by a human lawyer, and disclosed to the parties before publication.

This episode follows other instances where lawyers and clerks faced scrutiny or sanctions for improper AI use in filings. Judges across the country have fined attorneys or imposed other penalties after errors tied to generative tools surfaced. That trend shows the judiciary is already grappling with the consequences, but inconsistent responses undermine predictable justice.

Practical reforms can be precise and enforceable: written chamber policies, mandatory reporting of AI use, training for clerks, and disciplinary steps for violations. Those are not radical demands; they are basic steps to protect litigants and the integrity of rulings. The judiciary must move from ad hoc fixes to uniform standards that deter carelessness.

Courtroom integrity depends on accurate, reviewed work. AI can help, but it cannot replace human responsibility and facts. If the judiciary does not adopt clear, permanent safeguards, mistakes will keep surfacing, and public trust will keep eroding. Lawmakers and judges should treat this as a governance problem that requires firm, consistent solutions.
