It has been almost a year since I wrote about the potential use of ChatGPT (and other, similar models, of course) as a judge to resolve conflicts of all kinds. As AI systems advance at a meteoric pace, that hypothetical is nearing reality.
Today, ChatGPT can convincingly emulate anyone’s writing and reasoning, explain the meaning of a statute, and review and synthesize documents of virtually any format. With proper priming (i.e., adequate context) and prompting (i.e., precise instructions), ChatGPT will happily analyze motion papers and issue a reasoned decision declaring someone the winner. In fact, I uploaded a motion and an opposition into ChatGPT and, after some further priming, asked it to issue a decision. After correctly identifying the issues in contention, the machine declared victory for my client. In reality, our side lost twice, including on appeal. To paraphrase an old adage: any two judges will have at least three different opinions, even if one of them is a machine. The analysis of the technicalities of machine justice is outside the scope of this particular note (but stay tuned).

It is perhaps gratifying that the courts are moving fast to issue guidelines for AI justice. One suspects the purpose is to show that the judicial branch is up to the task and to reassure the public that ChatGPT is not sending anyone to jail by itself. At least, not yet. On December 12, 2023, the UK’s Courts and Tribunals Judiciary published the trailblazing Artificial Intelligence (AI) Guidance for Judicial Office Holders (available at https://www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf). The six-page document opens with an introduction signed by the Lady Chief Justice, the Master of the Rolls, the Senior President of Tribunals, and the Deputy Head of Civil Justice. The guidelines were developed by an anonymous “cross-jurisdictional judicial group” after a “consultation with all judicial office holders.” One hopes the selection included technologists as well as their judicial counterparts.

Perhaps intentionally, the Guidelines speak in generalities. They begin with the now unremarkable reminder that AI tools are unreliable. Indeed, over the last year, US courts addressed a number of cases in which unwitting lawyers submitted non-existent case citations, which ChatGPT had made up in a process called “hallucinating.” Judges are thus cautioned to “trust, but verify” any content created by AI.

The admonition to check the work twice is followed by the warning that AI tools offer little privacy or confidentiality, so a judge is to be careful when passing any data to the machine. This warning is directed at free systems; judges are encouraged to use the paid versions, which are “generally more secure.” This may or may not be true, as paid services routinely share data, which the Guidelines readily admit. It is probably just a matter of time until one of the usual industry suspects offers a proprietary tool with adequate protections. Or until one of the judicial players inadvertently uploads an intimate description of some closely guarded national security matter, which will resurface on the interwebs at the least opportune moment. After a brief scandal, the loophole will be closed.

The Guidelines clarify that judges remain “personally responsible for material which is produced in their name.” Thus, for instance, where judges use AI to “summarize large bodies of text,” they are encouraged to “ensure the summary is accurate.” This appears to suggest that a judge would need to read said large body of text to verify the summary, which would defeat the purpose of using AI.
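For the technically curious, the “priming and prompting” described at the outset is not exotic. Below is a minimal sketch of how such an experiment might look in Python, assuming the OpenAI SDK; the model name, file names, and prompt wording are my own illustrative assumptions, not a recommended recipe for adjudication.

```python
# A minimal sketch of priming and prompting an LLM to act as a judge.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable. Model name, file names, and
# prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

with open("motion.txt") as f:
    motion = f.read()        # the moving papers
with open("opposition.txt") as f:
    opposition = f.read()    # the opposing papers

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model will do
    messages=[
        # Priming: give the model a role and the governing context.
        {
            "role": "system",
            "content": "You are a trial judge. Identify the issues in "
                       "contention, apply the governing law, and issue "
                       "a reasoned written decision.",
        },
        # Prompting: supply the papers and a precise instruction.
        {
            "role": "user",
            "content": f"MOTION:\n{motion}\n\nOPPOSITION:\n{opposition}\n\n"
                       "Declare a winner and explain your reasoning.",
        },
    ],
)

print(response.choices[0].message.content)
```

As my own experiment showed, the output will read like a judgment; whether it decides like one is another question entirely.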
One of the more interesting passages is buried in the Take Responsibility section, which seemingly advocates the opposite. The passage starts with the premise that judges do not routinely disclose “the research and preparatory work … done … to produce a judgment.” It then postulates that because AI is mainly such a “research and preparatory work” tool, its use does not need to be disclosed as part of the resulting decision. This is an interesting approach, which appears to equate AI tools with a typewriter or, perhaps, a caselaw database. ChatGPT, of course, is more of a law clerk than a quill pen. With enough inputs, AI will produce a fully formed judicial opinion, citations and all. Indeed, it may emulate the preferred linguistic constructions and turns of phrase of the chosen jurist so as to make the result indistinguishable from her other works.

If a judge is happy with the outcome after having read the opinion, do the parties have a right to know that the entire opinion, or most of it, was authored by a machine, especially if the judge is confident in the citations and the rest of it? Law clerks frequently author entire opinions, which are then issued without much modification. If the parties are comfortable with that approach, why does it matter if the clerk is replaced by a machine? On the other hand, how many criminal defendants would be comfortable with this approach?

The Guidelines allow a judge to rely on summaries prepared by ChatGPT, so long as the reliability of such summaries can be reasonably assured. Again, this is something commonly done by a clerk or other assistants. I frequently wonder whether the parties actually expect a judge to read thousands of pages of documents, or whether they are happy to accept the illusion of analysis. Even today, a judge may use an e-discovery tool, for instance, to review and summarize vast amounts of data, and, under the Guidelines’ logic, the parties are not entitled to know whether that was the case. Is ChatGPT any different? Is machine decision-making scarier in a courtroom than in an airplane run by autopilot, for instance? What makes it so special? Whatever the answer, it is clear that the public is not ready to hear the words “Send him down” from a machine. For all their human foibles, judges seem to offer intangibles that exceed computer decision-making. At least, so far.

The fundamental question is whether ChatGPT and its brethren function as tools or as usurpers. In other words, if ChatGPT is just an augmentation, it needs no more supervision than a Xerox machine: one wants to ensure the pages are in order but is hardly expected to compare each to the original. If, however, ChatGPT is relied upon for analysis and for rendering the final opinion, perhaps the parties would like to know. The latter eventuality raises obvious questions of due process. Is being sentenced by a machine cruel and unusual? What about deciding a purchase and sale agreement? A division of assets in a divorce? The answer seems clear. For now.
By Pavel Bespalko, attorney, arbitrator, legal tech aficionado.