Have you ever thought about how for thousands of years we have maintained the same dispute resolution mechanism? Well, I have. The existing system works like this: two people come and tell their stories to a third person, who then makes a decision. It worked this way in Ancient Rome, during the Crusades, and yesterday in Brighton District Court. Will generative AI change this approach?
One can easily imagine a potential self-enclosed judicial system powered by AI. An attorney loads the key case documents into ChatGPT, together with relevant precedent and, perhaps, a sampling of her own work for a little personal touch. She then tasks the machine with preparing a motion to dismiss written in her personal style. One imagines the system could easily assemble exhibits that support the necessary argument.
With minor tweaks, if any, this document would then make it to the opposing counsel, who would feed the motion and her own cocktail of caselaw, exhibits, and pleadings into a similar algorithm and charge it with preparing an opposition. The machine would then prepare the opposition, together with the supporting exhibits.
Once these are filed in court and land on a judge’s desk, it would be very tempting to load them into the same algorithm, pre-primed with the judge’s prior decisions and, perhaps, some writings of authors the judge holds in esteem. The machine would then make a decision, which the judge can read and endorse. Of course, with time, one can imagine a future where the lowest-stakes decisions are simply made automatically. What could go wrong? After all, many decisions are already made without any human involvement. We rely on autopilot and cruise control, and traffic lights have no trouble directing cars without much human intervention.
There are some obvious arguments for a ChatGPT courtroom.
First, a computer is much faster than any human. ChatGPT can read and synthesize thousands of documents in a matter of minutes. It can also perform legal research at a speed unmatched by any human lawyer. And it can prepare grammatically correct, mostly lucid text quite rapidly.
Second, with the speed comes cost-efficiency. Automating part or all of judicial decision-making could yield dramatic savings, making the justice system more accessible to all. The constant concerns about the cost of legal services and the lack of legal aid funding could become a thing of the past.
Third, one can expect consistency and freedom from bias, to an extent. ChatGPT’s decision cannot be influenced by whether it has had lunch or whether the Red Sox won their game (both examples come from real studies that showed differences in judicial outcomes based on these events). ChatGPT would not be prone to the emotional manipulation that underpins every jury trial. If it deems the facts and legal principles similar, one can expect the same decision time and time again.
On the other hand, isn’t justice supposed to be imperfect and at least somewhat biased? A jury of one’s peers is supposed to be a mixed bag. Such peers are bound to match one's own level of incompetence.
Some scholars argue that imperfect justice is a built-in feature of the system. In “The Idea of Justice,” for example, Amartya Sen argues that justice is a continuum rather than a binary idea: just because something is imperfect does not mean it is inherently unjust. Thus, justice can be driven by irrational, emotional perspectives that are not (yet?) achievable by ChatGPT.
Further studies show that AI systems are not free from bias because they are trained on pre-existing data, which historically favors certain groups. A ChatGPT courtroom may thus become an echo chamber of existing prejudice. If ChatGPT’s consistency is the result of reinforcing stereotypes, then it might not be a good idea after all.
There is also something about a human judge, with their empathy (or lack thereof), that cannot (again, cannot yet?) be replicated by AI. ChatGPT may be able to analyze the law, but it cannot understand the emotional impact of a case or the nuances of human behavior, which frequently defies rationality. There is something profound about having a human, sitting on a throne in a black robe, determine the outcome of a case. Decades of popular culture, from My Cousin Vinny to Law & Order, have instilled awe and respect for the judicial figure.
One of the purposes of the judicial system is to maintain the illusion of a social order rooted in justice (a separate discussion altogether). This goal is probably better served by the current model, legitimized by thousands of years of existence. After all, a courtroom without a human touch might feel less like "To Kill a Mockingbird" and more like "The Terminator." An AI-based judicial system would need many years of good promotion and advertising to reach the same level of recognition and reverence as the traditional system.
So, for now, a fully automated judicial system seems to belong to a somewhat distant future. Still, AI tools may excel at augmenting judicial tasks, for instance by summarizing lengthy documents or surfacing legal precedent targeted to a particular set of motion papers. Ultimately, striking a balance between AI-assisted decision-making and human intuition could be the key to unlocking a more efficient and fair justice system.
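For the technically curious, the "summarize lengthy documents" idea can be illustrated with a deliberately toy sketch: score each sentence by how frequent its words are in the document and keep the top-scoring sentences. A real legal tool would, of course, rely on a large language model rather than word counts; the function name and parameters below are purely illustrative.

```python
# Toy extractive summarizer: a stand-in for the kind of document
# summarization an AI legal assistant might perform. Purely
# frequency-based; no ChatGPT or external service involved.
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Document-wide word frequencies.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        # Average frequency of the sentence's words.
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Re-emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)
```

The design choice here mirrors the essay's point: even this crude heuristic compresses a filing into its most "central" sentences, but it has no grasp of legal meaning, which is precisely the gap an LLM-based tool aims to fill.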
By Pavel Bespalko, attorney, arbitrator, legal tech aficionado.