Trial by Artificial Intelligence? How AI Navigates Judicial Systems and Why It Is a Bad Idea

Is the U.S. moving toward an AI-driven "smart court" like China's digitized court proceedings, as the Center for Strategic and International Studies has described them? Not exactly, experts say, but such forecasts are not entirely wrong.

AI Impacting Legal Field

Wayne Cohen, a managing partner at Cohen & Cohen and a law professor at the George Washington University School of Law, noted that AI increasingly impacts various areas of the legal field.

AI's involvement in the U.S. legal sector is expanding from its predominantly behind-the-scenes role to more active participation in courtroom proceedings.

AI Clerical Aid in Expediting Resolutions

According to Cohen, AI now contributes significantly to trial preparation, including research, writing, creating jury exhibits, and office tasks such as trial summaries and translations. It also expedites litigation itself, and Cohen predicts that the time from filing to resolution will keep shrinking as a result.


Capability to Produce Transcripts from Audio Recordings

Judges can now produce searchable PDF transcripts from audio recordings and render well-informed decisions on the same day. With the assistance of AI, discrepancies in the record can be flagged, potentially affecting the credibility of either the prosecution or the defense.

According to Jackie Schafer, a former assistant attorney general for the state of Washington, judges can now make rulings with a high degree of accuracy, supported by the evidence presented in court.
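
As a rough illustration of the transcription step, the sketch below uses OpenAI's open-source Whisper library to turn an audio file into a searchable text transcript. This is an assumption for illustration; the article does not say which tool courts actually use, and the file names are hypothetical.

```python
# Illustrative sketch only: assumes OpenAI's open-source "whisper"
# package (pip install openai-whisper) and a hypothetical audio file.
import whisper

model = whisper.load_model("base")               # small general-purpose model
result = model.transcribe("hearing_audio.wav")   # speech-to-text

# Save as plain text; this is the searchable layer a PDF would embed.
with open("hearing_transcript.txt", "w") as f:
    f.write(result["text"])
```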

In 2020, Schafer founded Clearbrief, a platform that uses AI to analyze documents, detect citations, and generate hyperlinked timelines of every date referenced in the documents, giving readers rapid access to the underlying information.
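
Clearbrief's internals are proprietary, but one core piece of that feature, pulling every date out of a document and ordering it into a timeline, can be sketched in a few lines. The regex, function name, and sample text below are invented for illustration; this is not Clearbrief's implementation.

```python
# A minimal, invented sketch of date extraction and timeline building.
import re
from datetime import datetime

MONTHS = ("January|February|March|April|May|June|July|"
          "August|September|October|November|December")
DATE_PATTERN = re.compile(rf"\b(?:{MONTHS}) \d{{1,2}}, \d{{4}}\b")

def build_timeline(text):
    """Return (date, surrounding context) pairs in chronological order."""
    events = []
    for m in DATE_PATTERN.finditer(text):
        when = datetime.strptime(m.group(0), "%B %d, %Y")
        context = text[max(0, m.start() - 40):m.end() + 40]
        events.append((when, context))
    return sorted(events)

brief = ("The agreement was signed on March 3, 2021. "
         "The alleged breach occurred on July 9, 2022.")
for when, context in build_timeline(brief):
    print(when.date(), "|", context.strip())
```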

Analysis of Legal Contracts

Jason Boehmig, a former corporate attorney and the CEO and co-founder of Ironclad, a digital contract company, highlighted AI's ability to analyze a company's legal contracts. He noted that AI can familiarize itself with the preferred language in an organization's contracts, enabling it to draft and negotiate agreements in the company's established legal tone.
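
One plausible way to get that behavior, though not necessarily Ironclad's, is few-shot prompting: show a model clauses written in the house style and ask it to draft a new one. The sketch below uses the OpenAI Python client; the model name, example clauses, and prompt wording are all assumptions.

```python
# Illustrative only: one plausible few-shot approach, not Ironclad's
# actual system. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical examples of the company's preferred clause style.
company_clauses = [
    "Either party may terminate this Agreement upon thirty (30) "
    "days' prior written notice to the other party.",
    "All notices hereunder shall be delivered by certified mail "
    "to the addresses set forth above.",
]

prompt = (
    "Here are clauses written in our company's preferred style:\n"
    + "\n".join(f"- {clause}" for clause in company_clauses)
    + "\n\nDraft a confidentiality clause in the same style."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```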

Boehmig emphasized that business contracts are the forefront of legal innovation, where experimentation is feasible: compared with individuals whose basic freedoms may be at stake in the legal system, businesses entering contracts arguably have less to lose. Even so, experts stress the importance of human review in all of these AI applications. That principle is not exclusive to the legal industry, but the profound consequences of legal proceedings make human oversight especially necessary.

Can AI Really Be Trusted?

AI in legal systems can be problematic because it functions as one of two things: an expert system or a machine learning system. Expert systems encode rules into a decision model within the software, often called a decision tree. They were widely used in law during the 1980s but ultimately failed to consistently produce satisfactory outcomes at scale.
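
To make the distinction concrete, here is a toy expert system in that 1980s style: the "knowledge" is nothing more than hand-written branching rules, which is also why such systems broke down on cases the rule authors never anticipated. The rules below are invented for illustration, not drawn from any real statute.

```python
# A toy expert system: legal "knowledge" hand-encoded as branching
# rules. Invented for illustration only.
def assess_negligence_claim(filed_in_time: bool,
                            duty_of_care_owed: bool,
                            breach_caused_harm: bool) -> str:
    if not filed_in_time:
        return "Dismiss: statute of limitations has run."
    if not duty_of_care_owed:
        return "Dismiss: defendant owed no duty of care."
    if not breach_caused_harm:
        return "Dismiss: no causation shown."
    return "Proceed: prima facie negligence claim."

# Any fact pattern the rule authors failed to anticipate simply has
# no branch -- the source of such systems' brittleness.
print(assess_negligence_claim(True, True, False))
```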

Machine learning can be highly effective, but its output is essentially a well-educated guess. One of its strengths is uncovering correlations and patterns in data that exceed human capacity for calculation. Its weakness is that it can reach incorrect conclusions, and it errs in ways that do not resemble human error patterns, which makes its mistakes harder to anticipate.
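
The "educated guess" point can be shown in a few lines of scikit-learn: a model trained on clean, clustered data will still assign a confident label to an input unlike anything it has seen, rather than admitting it does not know. The data below is synthetic and purely illustrative.

```python
# Synthetic illustration: confident predictions on unfamiliar input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),    # class 0 cluster
               rng.normal(5, 1, (100, 2))])   # class 1 cluster
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

# A point far from both clusters: the model does not say "I don't
# know" -- it extrapolates and reports near-certainty.
print(model.predict_proba([[50.0, 50.0]]))    # ~[[0.0, 1.0]]
```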

Series of AI Fails

In a prominent case, an AI mistakenly identified a turtle as a gun. Facial recognition systems frequently struggle to accurately identify women, children, and individuals with dark skin. 

This raises concerns that AI could falsely place individuals at crime scenes they never attended. And because machine learning models are too complex for humans to fully inspect, incorrect outcomes are difficult to verify, a limitation known as the "black box problem."

The repercussions can be severe when AI is integrated into legal procedures and fails. Large language models, which underpin AI chatbots like ChatGPT, have been observed to generate entirely false text. The phenomenon is termed "AI hallucination," though the label is misleading: it suggests the software is thinking, when it is really just statistically determining its next word.
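
A toy bigram sampler makes that last point concrete: the program below knows nothing about law or facts, it simply draws the next word from a probability table. The words and probabilities are made up for illustration; real language models do the same thing at vastly larger scale.

```python
# A made-up bigram table: given the last two words, pick the next one
# by weighted random choice. No understanding, just statistics.
import random

next_word_probs = {
    ("the", "court"): {"ruled": 0.5, "held": 0.3, "found": 0.2},
    ("court", "ruled"): {"that": 0.7, "against": 0.2, "for": 0.1},
    ("court", "held"): {"that": 0.8, "a": 0.2},
    ("court", "found"): {"that": 0.6, "no": 0.4},
}

words = ["the", "court"]
for _ in range(2):
    dist = next_word_probs.get((words[-2], words[-1]))
    if dist is None:   # no statistics for this context
        break
    words.append(random.choices(list(dist), weights=list(dist.values()))[0])

print(" ".join(words))  # e.g. "the court ruled that"
```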

Recently, a lawyer in New York was revealed to have used ChatGPT to draft court submissions, only to discover that it had cited nonexistent cases. The episode underscores that such tools cannot currently substitute for lawyers, and may never be able to.

