Regulating Generative Artificial Intelligence in Domestic and International Arbitration: A Content-Neutral Blueprint for Action
S. I. Strong
ABSTRACT
In 2023, ChatGPT – an early form of generative artificial intelligence (AI) capable of creating entirely new content – took the world by storm. The first shock came when ChatGPT demonstrated its ability to pass the U.S. bar exam. Soon thereafter, lawyers and judges were found to have used ChatGPT in actual litigation, raising questions about the extent to which generative AI is being or will be used in domestic and international arbitration.
Some within the legal community find technology like ChatGPT untroubling. Others believe generative AI is problematic as a matter of due process and procedural fairness due to its propensity not only to misinterpret legitimate legal authorities but also to create fictitious sources through a process known as hallucination. These phenomena suggest that participants in arbitration cannot rely on anything contained in a document created by generative AI.
Thus far, the legal response to generative AI has been partial, piecemeal, and panicked. No consensus exists as to what can or should be done, let alone who should be responsible for regulating the use of generative AI by arbitrators, arbitral secretaries, practitioners, and unrepresented parties in arbitration.
This Article analyzes the narrow issue of how best to address the problems associated with generative AI in domestic and international arbitration. Rather than proposing specific solutions to the issue, this Article focuses on identifying who can and should act in the short, medium, and long terms. In so doing, this Article provides the arbitral community with a content-neutral blueprint for immediate action.
I. Introduction
Early 2023 saw a quantum shift in the world of dispute resolution. In January, ChatGPT shocked the global legal community by not only passing the U.S. bar exam but subsequently acing it. At the same time, ChatGPT was wending its way into courts, used not only by lawyers in drafting legal submissions but also by at least one judge in drafting a legal opinion. Though there have not yet been any reports of ChatGPT being used in arbitration, it is not only possible but likely that ChatGPT or some other form of generative artificial intelligence (AI) is already being used, or soon will be, by arbitrators, arbitral secretaries, practitioners, or unrepresented parties in domestic and international arbitral proceedings.
Although the arbitral community has been considering the role of technology in alternative dispute resolution for years, generative AI presents an unprecedented challenge to arbitral legitimacy due to its ability to create entirely new content. While some lawyers have lauded generative AI as a useful, cost-saving device, ChatGPT and its competitors give rise to numerous concerns, including those relating to due process and procedural fairness. While it is beyond the scope of this Article to consider all the problems associated with generative AI, three issues rise to the fore.
First, generative AI has no safeguards requiring it to produce information that is true and correct, resulting in legal documents replete with fictitious legal authorities known as hallucinations. Second, generative AI misinterprets and misapplies source material, thereby casting doubt on references to legitimate legal authorities. Third, it is difficult or impossible to tell, simply by looking at the face of the document, that it has been created by a computer rather than a human.
Taken together, these factors require participants in arbitration to consider any document created by generative AI with skepticism. Though users of generative AI may claim they are making the arbitral process more efficient by reducing time and expense, they are actually requiring arbitrators and opposing parties to double-check their work, thereby shifting the time, cost, and burden of legal analysis to other participants in the arbitration. This approach is not only highly inefficient but also risks eroding public confidence in domestic and international arbitration.
The technology industry has already called for immediate regulation of AI, and the legal community is responding. For example, individual judges in the United States are amending their rules to indicate the extent to which generative AI is permitted in party submissions, while the Supreme Court of Canada is considering adopting a practice note concerning use of AI. Legislatures in the United Kingdom, the European Union (EU), and China are drafting statutes concerning AI, including the use of generative AI in courts. Arbitral organizations have begun drafting soft law guidelines on the use of AI in arbitration, while researchers are pursuing empirical studies of the use of AI in arbitration as a matter of urgency.
As welcome as these initiatives may be, they are piecemeal, partial, and, to a certain extent, panicked. The arbitral community urgently needs to consider whether and to what extent ChatGPT and similar types of generative technology can be relied upon in arbitral proceedings, but the path forward needs to be charted sensibly and intelligently, with an eye towards creating a holistic response that is flexible enough to take changing circumstances into account while still providing robust safeguards for parties and other participants in the process.
As tempting as it is to jump straight to substantive analyses, the first question surely must be which legal measures are best suited to respond to the current dilemma. This Article therefore analyzes the narrow, preliminary issue of which type of legal authority should be used to respond to the challenges of generative AI in domestic and international arbitration (Section III). Rather than proposing a content-based solution to the issues facing the arbitral community, this Article focuses on identifying the various methods of response and evaluating which approaches can or should be used in the short, medium, and long terms. In so doing, this Article considers the possible use of generative AI in both (1) party submissions drafted by either a lawyer or an unrepresented party, with the latter most likely to arise in consumer or employment arbitration, and (2) arbitral awards drafted by either an arbitrator or an arbitral secretary, with the latter most likely to arise in investor-state arbitration.
The only way to properly evaluate the various options is to compare each mechanism to a standard set of criteria. The Article therefore begins with a short discussion of the factors used to identify a fair, effective, and appropriate response to generative AI in arbitration (Section II). The Article concludes by tying together the various strands of argument and recommending how the arbitral community should proceed (Section IV).
II. Factors Used to Evaluate Different Methods of Response
Though there are many ways to evaluate the relative merits of different methods of responding to the challenges of generative AI, this Article focuses on four key factors. The first is consistency, meaning consistency between parties in a particular arbitral proceeding; consistency between different arbitrations within a single jurisdiction or subject-matter specialty as well as arbitrations proceeding under the same institutional rules; and consistency between arbitrations on both the interstate and international levels. Consistency addresses due process and procedural fairness concerns relating to the equal treatment of parties and improves efficiency by providing advance notice of what is required of arbitrators, parties, and practitioners. Consistency also implicitly includes an element of transparency, thereby promoting public confidence in arbitration as an institution.
The second factor is speed. The longer generative AI remains unregulated in arbitration, the more likely that injustices will arise in individual proceedings, possibly damaging the reputation of arbitration as a legitimate form of dispute resolution. Delay is also likely to promote cognitive distortions (such as the status quo bias or the anchoring bias) that make it harder to regulate problematic conduct in the future.
The third factor is flexibility. The arbitral community often discusses flexibility in the context of procedural autonomy, but here the focus is on regulatory flexibility. Technology and AI are changing rapidly, and it is important to avoid calcifying the law in an immature or undesirable state. However, those concerns cannot excuse inaction; they simply mean that the techniques used to address generative AI need to be agile enough to respond to changing circumstances, now and in the future.
The fourth and final factor is accountability. Any rule or law regulating generative AI needs to include provisions outlining the sanctions that will result from non-compliance. Furthermore, any sanctions need to narrowly target the party responsible for the wrongful use of generative AI and avoid injuring other participants in the process.
III. Possible Methods of Addressing Generative AI in Arbitration
Methodologically, the best way to determine the optimal method of addressing the challenges of generative AI is to consider various types of arbitral authority and evaluate whether those authorities can be developed speedily, flexibly, consistently, and with sufficient accountability. There are seven standard types of arbitral authority in domestic and international proceedings: agreements between the parties; procedural orders from arbitral tribunals; institutional rules of arbitration; national statutes on arbitration; judicial decisions; arbitral awards; and international treaties and conventions. Each is considered in turn below, along with two additional sources of authority: rules of professional responsibility promulgated by licensing authorities, and scholarship and soft law generated by policymaking bodies.
Rather than assuming that advocates are the only ones who might use generative AI in arbitration, the analysis below recognizes that unrepresented parties, arbitral secretaries, and even arbitrators might seek to rely on generative technology at some point during an arbitration. Regulation of arbitral behavior may seem somewhat inapt, given the longstanding desire to protect arbitral independence, but some arbitrators – like some judges – have been known to engage in questionable behavior when drafting decisions. Since calls have been made to regulate judges’ and judicial clerks’ use of generative AI, it is appropriate to do so with respect to arbitrators and arbitral secretaries as well.
—from Regulating Generative Artificial Intelligence in Domestic and International Arbitration: A Content-Neutral Blueprint for Action, 34 American Review of International Arbitration (forthcoming 2024)