
In brief
- The Los Angeles Superior Court is piloting Learned Hand's AI to help judges prepare for cases without influencing their decisions.
- The company's CEO warns that AI-generated legal filings will flood the courts with "bots versus bots" litigation if left unchecked.
- The system relies on closed legal sources and verification checks designed to catch hallucinations before a judge ever sees the results.
Courts around the world are struggling with heavy caseloads, and a pilot program in Los Angeles hopes to change that by testing whether AI can help judges without making decisions for them.
The Los Angeles Superior Court is testing an AI tool called Learned Hand that summarizes filings, organizes evidence, and helps prepare cases for hearing.
The goal is to reduce the time spent on administrative tasks so that judges can focus on the aspects of a case that require legal analysis and human judgment, Learned Hand founder and CEO Shlomo Klapper told Decrypt.
"We're at a point where the courts are in serious trouble," Klapper said. "Their caseloads are going up, but no help is coming." He added that the technology could "significantly reduce the cost of cases."
AI is also adding to the pressure on the courts by making documents easier to produce: filings rose 49% last year, from 4,100 to 6,400, according to a February 2026 report from the international law firm Fisher Phillips.
The Los Angeles Superior Court pilot gives a small group of judges the opportunity to use Learned Hand's AI system and test its effectiveness across the entire court workflow, from intake to sentencing.
A former law clerk at the U.S. Court of Appeals and a former senior consultant at Palantir, Klapper said Learned Hand, founded in 2024 and named after the famed federal judge, is designed to give overburdened courts "purpose-built" AI tools that cut down on "burdensome work" by highlighting key facts and legal issues while leaving judgment to the bench and a human judge.
"With this agreement, we are carefully evaluating emerging technologies to see how they can help court administrators work more efficiently and effectively," Presiding Judge Sergio C. Tapia said in a statement. "Let me be clear: while this tool may enhance the way the courts review and engage with case files and information, it will not replace, or undermine, the sanctity, independence, and impartiality of judicial decision-making."
Klapper said that the most difficult part of building courtroom AI is not generating text but verifying the AI's output against the case record and legal sources.
"A lot of what ails large language models is a verification problem, not a generation problem," Klapper said. "Generation is easy. Anyone can make something, but how do you make sure it's reliable?"
AI hallucinations have already surfaced in high-profile cases.
In 2023, the defense team for Prakazrel "Pras" Michel, a founding member of the hip-hop group Fugees, used AI to help write a closing argument that Michel later said contained baseless and weak responses to the government's claims against him.
That same year, a federal judge ordered Donald Trump's former lawyer Michael Cohen to provide printed copies of cases cited in a filing after the court could not verify them.
Klapper said Learned Hand is built around a narrower pool of sources to reduce the risk of AI hallucinations. Instead of drawing on the open web or arbitrary datasets, the system works within established legal sources.
The reason, Klapper said, is that large language models can reproduce biases in their training data, pointing to examples of AI surfacing advice scraped from platforms like Reddit. Learned Hand counters this by breaking tasks into steps and assigning each step to a model with a specific function.
Learned Hand is also designed so judges don’t need technical training to use it.
"It's just a click," Klapper said. "They don't have to do anything."
Klapper said that too much of judges' time goes to routine tasks rather than legal analysis, and that the AI aims to let them "spend more time on judicial work and less time on heavy work."
Klapper said that judges should not take AI products at face value, and that the tools and the companies behind them should have to prove their reliability.
"I like to say, don't believe it, prove it," he said. "They don't have to believe anything; it has to show its value."