The legal profession has witnessed the rapid rise of artificial intelligence (AI) technologies, particularly generative AI, which shows immense potential across many areas of legal practice. From legal research to drafting, generative AI is a promising tool. Its use in litigation, however, requires careful consideration and oversight.
Earlier this year, we published a blog post explaining how judges were beginning to step in to put safeguards on the use of generative AI in their courtrooms. As we predicted, that trend has continued, and this updated post reports on how it is unfolding.
AI gone awry in litigation. The problem of fake cases being generated by ChatGPT and marshalled into court as legal authority entered the public eye earlier this year with the now infamous case of Mata v. Avianca, in which a legal brief submitted by Mata's lawyers was found to contain fictitious judicial decisions. That case continues to serve as a cautionary tale, underscoring the need to verify output from AI tools by cross-referencing it against traditional legal databases or seeking expert human review.
AI continues to go awry in litigation. Since news of the Mata v. Avianca case first broke, other lawyers have followed suit in becoming examples of lawyering gone wrong in the age of generative AI. Of late, however, the problem has become particularly pronounced among pro se litigants. Last month alone there was a decision out of New Hampshire highlighting the problem, as well as a particularly pointed opinion from a judge in New Mexico. Perhaps acknowledging the growing ubiquity of pro se litigants using generative AI, and the concern of what this could mean for court dockets, the New Mexico judge warned that “nor will the Court look kindly upon any filings that unnecessarily and mischievously clutter the docket.” The problem has even reached the federal appellate courts, as noted in a recent Fifth Circuit opinion. The repercussions of such deception include wasted time and resources and reputational harm to the legal system.
Courts set rules governing generative AI use in litigation. Earlier this year we reported on first-of-their-kind judicially imposed restrictions on the use of generative AI in courts, which emerged in Texas and Illinois. Both of those examples were standing orders from individual judges. Now, however, we are starting to see amendments to rules that govern conduct across entire districts. Recently, the Eastern District of Texas announced changes to its Local Rules, effective December 1, 2023. In the comments to these local rules, the court noted that:
Recent advancements in technology have provided the legal profession with many useful tools for daily practice. Ultimately, however, the most valuable benefit a lawyer provides to a client is the lawyer’s independent judgment as informed by education, professional experiences, and participation in the legal and professional community in which the lawyer practices. Although technology can be helpful, it is never a replacement for abstract thought and problem solving.
Another comment noted that this problem is manifesting in filings by pro se litigants:
Recent advancements in technology have provided pro se litigants access to tools that may be employed in preparing legal documents or pleadings. However, often the product of those tools may be factually or legally inaccurate.
The local rules in the Eastern District of Texas were amended to provide cautionary language for pro se litigants and lawyers alike. Here is the amendment directed at lawyers:
If the lawyer, in the exercise of his or her professional legal judgment, believes that the client is best served by the use of technology (e.g., ChatGPT, Google Bard, Bing AI Chat, or generative artificial intelligence services), then the lawyer is cautioned that certain technologies may produce factually or legally inaccurate content and should never replace the lawyer’s most important asset – the exercise of independent legal judgment. If a lawyer chooses to employ technology in representing a client, the lawyer continues to be bound by the requirements of Federal Rule of Civil Procedure 11, Local Rule AT-3, and all other applicable standards of practice and must review and verify any computer generated content to ensure that it complies with all such standards.
Emerging trends in the use of AI in litigation. If you don’t litigate in Texas or Illinois, you may think these developments don’t affect you. Don’t disregard them so quickly. Other individual judges will follow suit, and additional districts will inevitably amend their local rules following the example set in the Eastern District of Texas. Changes to rules of civil procedure often take months or years to roll out, so don’t look for those immediately, but they are likely coming. More broadly, by explicitly invoking Federal Rule of Civil Procedure 11, these rules and standing orders present a cautionary tale for all litigators: verify and cross-verify the output of generative AI. In practice, that means cross-referencing any AI-produced citations or data against established legal databases to confirm accuracy and reliability, and seeking expert human review where appropriate, before relying on that output in a filing.
Conclusion. While generative AI has swept into the profession at explosive speed, we are only beginning to see controls put in place by judges, courts, and governing bodies. Learning from these recent safeguards in Texas and Illinois is prudent, as is maintaining a continued awareness of emerging orders, rules, and restrictions.
The Between the Lines blog is made available by Mitchell, Williams, Selig, Gates & Woodyard, P.L.L.C. and the law firm publisher. The blog site is for educational purposes only, as well as to give general information and a general understanding of the law. This blog is not intended to provide specific legal advice. Use of this blog site does not create an attorney-client relationship between you and Mitchell Williams or the blog site publisher. The Between the Lines blog site should not be used as a substitute for legal advice from a licensed professional attorney in your state.