Technology

US federal judges discuss the intersection of emerging technology, including AI, with the legal system

U.S. federal judges convened to examine how emerging technologies, including AI, intersect with and challenge existing legal frameworks.

Image: Smartphone with AI text in a jeans pocket. GlobalBeat / 2026

AI in the legal system: Federal judges map courtroom limits for machine evidence

Sarah Mills | GlobalBeat

Federal judges from 12 circuits met in San Francisco on Thursday to draft the first national guidelines on admitting AI-generated evidence and algorithmic sentencing tools.

The draft rules would require prosecutors to disclose training data, error rates and any post-deployment updates before judges allow predictive software to influence bail, sentencing or parole decisions.

Courts have wrestled with AI since a 2023 Wisconsin case where COMPAS software recommended a six-year prison term that a judge later called “opaque arithmetic” after the defendant appealed. Similar disputes have erupted in at least 47 federal cases since January, clogging dockets and forcing emergency tutorials on neural networks, according to a Berkeley Law review filed with the committee.

Judge Jeremy Fogel, who heads the Federal Judicial Center’s technology division, told reporters the proposals are “an attempt to keep the gatekeeper role human” while still harnessing tools that can spot fraud patterns across millions of filings. The committee circulated a 42-page draft that sets a four-step test: disclosure, accuracy review, bias audit and ongoing monitoring. Any party hoping to introduce an AI report must file a “model card” 30 days before trial listing data sources, known failure modes and calibration drift since training. Failure triggers automatic exclusion, the draft states.
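
The draft reportedly leaves the filing format open, but the required contents map naturally onto a structured record. Below is a minimal Python sketch of what such a disclosure could look like; every field name is an assumption inferred from the requirements described above, not the committee's actual template.

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Hypothetical pretrial disclosure mirroring the draft's model-card
    requirement: data sources, known failure modes, and calibration drift.
    All field names are illustrative assumptions, not an official schema."""
    model_name: str
    version: str
    data_sources: list[str]           # provenance of training data
    error_rates: dict[str, float]     # e.g. false-positive rate per subgroup
    known_failure_modes: list[str]    # documented situations where output degrades
    calibration_drift: float          # change in calibration since training
    last_bias_audit: str              # ISO date of the most recent audit
    post_deployment_updates: list[str] = field(default_factory=list)

    def is_filing_complete(self) -> bool:
        """Crude completeness check: under the draft, a missing disclosure,
        not a bad score, is what triggers automatic exclusion."""
        return bool(self.data_sources and self.error_rates
                    and self.known_failure_modes and self.last_bias_audit)
```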

Public defenders welcomed the transparency push. “We’ve seen pretrial risk scores label Black defendants high-risk for the same profile that gets white defendants low,” said Avery Jackson, federal defender for the Northern District of California. Jackson cited an internal analysis of 1,800 Bay Area cases where the algorithm flagged 28 percent of Black respondents for detention versus 9 percent of white respondents with similar prior records. “Numbers look neutral until you dig into the dataset,” she added.
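
The disparity Jackson describes comes down to simple arithmetic on flag rates. The counts below are hypothetical, chosen only to reproduce the 28 percent and 9 percent figures from her analysis; the underlying case-level data from the 1,800 Bay Area cases is not public.

```python
# Hypothetical counts that reproduce the rates Jackson cited.
flagged = {"black": 140, "white": 45}
total = {"black": 500, "white": 500}

rates = {group: flagged[group] / total[group] for group in flagged}
print(rates)  # {'black': 0.28, 'white': 0.09}

# Rate ratio: how many times more often Black respondents were flagged
# for detention than white respondents with similar prior records.
ratio = rates["black"] / rates["white"]
print(f"Black respondents flagged at {ratio:.1f}x the white rate")  # 3.1x
```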

Prosecutors warned tighter rules could slow white-collar probes that rely on machine-learned transaction graphs. “We’re chasing shell companies that mutate daily,” said Miriam Delgado, a Securities and Exchange Commission trial attorney. Delgado estimated her team uses network-analysis AI in 30 pending fraud cases; a month-long disclosure window, she argued, gives suspects time to move assets overseas. The committee compromised by allowing sealed submissions and ex parte review for sensitive trade data.

Tech firms face new paperwork. LexisNexis, which sells litigation-analytics software to more than 2,000 law firms, would have to supply error-rate tables for every predictive model under the draft, general counsel Robert Koons confirmed. “We’re rebuilding 18 products,” he said during a public comment session. Smaller vendors objected that compliance costs could push them out of the government market. “A five-person startup can’t afford a third-party bias audit on every tweak,” said Dana Rao, general counsel at Adobe and former chair of the AI trade group TechNet.

Academics testified that current accuracy metrics miss real-world drift. “An evidence model trained on 2020 filings already undercounts crypto scams,” said Stanford computer-science professor Percy Liang, who audits legal algorithms. Liang recommended quarterly recertification tied to model performance rather than calendar dates. Judges adopted the suggestion, inserting a clause that requires parties to rerun bias tests after any “material update” including new training data or threshold changes.
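
Liang's performance-based recertification implies some quantitative measure of drift. One common choice in model-risk monitoring, offered here purely as an illustration rather than anything the committee endorsed, is the population stability index, which compares the score distribution at audit time against the distribution at training. A minimal sketch, assuming risk scores on a 0-to-1 scale:

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Population stability index between two score samples on [0, 1].

    PSI is a standard drift measure in credit-risk model monitoring;
    roughly, 0.1 reads as minor drift and 0.2 as material drift.
    """
    edges = np.linspace(0.0, 1.0, bins + 1)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, flooring at a tiny value to avoid log(0).
    base_p = np.maximum(base_counts / base_counts.sum(), 1e-6)
    curr_p = np.maximum(curr_counts / curr_counts.sum(), 1e-6)
    return float(np.sum((curr_p - base_p) * np.log(curr_p / base_p)))

rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, size=10_000)  # risk scores at training time
live_scores = rng.beta(3, 5, size=10_000)   # shifted post-deployment scores

psi = population_stability_index(train_scores, live_scores)
# The 0.2 trigger is a credit-risk rule of thumb, not the draft's threshold.
print(f"PSI = {psi:.3f}, rerun bias audit: {psi > 0.2}")
```

Tying the audit to a statistic like this, rather than to the calendar, is the substance of Liang's suggestion: retraining on new data shifts the score distribution and trips the check.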

The draft also bars AI-only testimony. Expert witnesses must explain outputs in “plain language intelligible to an average juror,” mirroring a 2022 Fifth Circuit ruling that tossed a damages calculation generated by undisclosed code. Analysts called the plain-language rule the biggest shift. “Lawyers can’t just wave at a screen anymore,” said Adam Feldman, editor of the Empirical SCOTUS blog. “They’ll need humans who actually read the matrix.”

Civil liberties groups want tighter limits on sentencing. The American Civil Liberties Union submitted comments urging an outright ban on predictive-risk tools that include zip code, employment history or juvenile records, arguing these variables act as “proxies for race.” The committee left the door open, stating courts must weigh probative value against unfair prejudice under existing Rule 403. “We’re not banning math, we’re channeling it,” Fogel said.

Several judges pushed back, warning over-regulation could freeze adoption of tools that reduce incarceration. “If an algorithm tells me a veteran with PTSD will likely succeed on supervised release, I want to hear it,” said Judge Patricia McInerney, an Army veteran who sits in the Eastern District of Pennsylvania. The final vote was 8-3 to allow risk-assessment evidence under strict disclosure, with dissenting judges calling for a moratorium until Congress funds federal studies.

Background

Federal evidence rules have not had a major overhaul since the 1975 adoption of the Federal Rules of Evidence, themselves based on centuries of common-law gatekeeping that began with England’s 1670 Bushell’s Case. Each new technology, from ballistics to DNA, forced courts to decide what juries can trust. The 1993 Daubert standard required judges to verify scientific methods, yet software slipped through because code was treated as a trade secret. Early computer forensics in the 1990s simply matched spreadsheet hashes; today’s neural nets learn patterns no human programmed, making the Daubert factors harder to apply.

The catalyst came in 2020 when the Marshals Service deployed a face-recognition algorithm that falsely matched 42 suspects to open warrants, leading to a mistaken arrest in Detroit captured on body-camera footage. A Government Accountability Office study found 64 percent of federal agencies using AI had no documented accuracy standards. The Supreme Court has yet to rule directly on algorithmic evidence, though Justice Sonia Sotomayor warned in a 2021 lecture that “opaque tools risk replacing the rule of law with the rule of data.”

What’s Next

The committee will accept public comments until July 15 and release a revised draft in September, aiming for a Judicial Conference vote in March 2027. If approved, the guidelines would take effect on December 1, 2027, under the Rules Enabling Act procedure, unless Congress intervenes. A pilot program in six districts will test compliance starting in January, with clerks instructed to flag any AI-assisted filing for audit. Litigators are already stockpiling model-card templates and hiring data scientists, a recruiting surge that one legal recruiter called “the Daubert boom all over again.”

The bigger fight looms in Congress, where Senators Josh Hawley and Jon Ossoff plan separate bills to codify or tighten the committee’s draft. Hawley’s office told GlobalBeat he wants criminal penalties for hiding algorithmic bias, while Ossoff favors civil fines and mandatory transparency registries. Neither bill has cleared committee, but both parties agree some guardrails are inevitable as AI-generated exhibits multiply on dockets nationwide.

Sarah Mills
Technology & Science Editor

Sarah Mills is GlobalBeat’s technology and science editor, covering artificial intelligence, cybersecurity, public health, and climate research. Before joining GlobalBeat, she reported for technology desks across Europe and North America. She holds degrees in computer science and journalism.