The emergence of ChatGPT comes with tremendous promise of increased automation and efficiency. But at what cost? In this blog post, we’ll explore the potential ethical time bomb of using ChatGPT and examine the responsibility of lawyers in the age of AI.
How could lawyers possibly use ChatGPT?
Before getting to the ethical questions, it is worth brainstorming some of the many ways that lawyers can, or could at some point in the future, employ ChatGPT. In many ways, ChatGPT can function as a virtual legal assistant, paralegal, and associate. Promising to upend discovery, it purports to quickly analyze large volumes of legal documents and surface relevant information for lawyers. ChatGPT can identify legal theories, research applicable law, and assist with persuasive tasks like streamlining and refining a closing statement to be delivered at trial. It could also take on heavy drafting work, such as writing letters and preparing legal documents. These are just some of the ways it could be put to work for those practicing law.
What ethical concerns does the use of ChatGPT raise?
While the benefits of using ChatGPT are promising, there are also concerns about the ethical implications for those seeking to rely on AI for legal work. This blog post focuses on ChatGPT, the currently free tool that anyone can go online and use.
- Privilege: Everyone seems to think that ChatGPT is a wonderful free service. It is currently free, but that is because the AI is still in training mode: the prompts and information you feed it can be used to train the model further. So when using ChatGPT, it is critical to ensure that the data being inputted is not confidential or privileged, because everything you feed it becomes part of ChatGPT's collective brain. Lawyers, of course, have an ethical duty to maintain client confidentiality.1 Put simply, if the information is privileged, do not feed it to ChatGPT. Lawyers must ensure that neither the input provided to, nor the output generated by, ChatGPT contains confidential or privileged information.
- Truth and Accuracy: Clients demand quality legal representation, and the ethical rules command that we provide it. There may come a day when a machine lawyer can give more accurate advice than a human lawyer, but that day is not today. ChatGPT makes mistakes. A lawyer who simply repeats those errors in representations made elsewhere, without independently verifying them, risks violating any number of ethical duties.2 For example, there have been reports of ChatGPT inventing a case out of whole cloth to support a proposition for which no authority exists. Just as "the dog ate my homework" never flew in grade school, it would be a rough day in court to have to explain to a judge that "a machine wrote my brief."
- Competence: Lawyers have a duty to provide competent representation to their clients.3 When using any new tool, lawyers must ensure that they have the necessary knowledge and skills to use the tool effectively and responsibly.
- Zealous representation: A lawyer must also act with zeal in advocacy upon the client's behalf.4 Some of the drafting output generated by ChatGPT is polite, tepid, and balanced. That style has its place, and in some instances a letter with that tone may be exactly the representation a client needs. But striking the correct tone is a matter of professional judgment, not machine default.
- Billing considerations: If technology can shorten or automate a task, at some point a lawyer needs to use it to keep client costs down.5 That is to say, this ethical consideration cuts against, or must at least be balanced with, those above. Lawyers should not be pressured into using free online tools like ChatGPT to cut costs when doing so would compromise their ethical duties. And if a lawyer does use AI, billing raises significant ethical considerations of its own. For example, if ChatGPT writes a 10-page brief and the lawyer spends an hour cleaning it up, the lawyer should bill for one hour of time, not the time it would have taken to write the brief the old-fashioned way.
- Bias: AI language models like ChatGPT are only as good as the data they are trained on, and they replicate whatever bias that data contains. Lawyers are specifically prohibited from engaging in harassment and discrimination in conduct "related to the practice of law."6 While it remains to be seen whether bias baked into ChatGPT's output could constitute a violation of this rule, there is a growing impetus to put ethical safeguards on the practice of law to curtail the bias, harassment, and discrimination that the ABA acknowledges "are still alive and well in business, government, and the practice of law."7 Any use of AI, including ChatGPT, must include checking the work so that bias is not perpetuated.
Is there a way for lawyers to ethically and responsibly use ChatGPT today?
There may be a present use for ChatGPT if it can be employed without compromising privilege and while remaining mindful of the other ethical considerations raised above. At present, however, those are rather large and difficult hurdles to clear. As the technology continues to develop and as safeguards are put in place, there may come a day when ChatGPT can be ethically used in the practice of law.
Conclusion
While the use of ChatGPT in the practice of law presents tremendous opportunities, there are also serious ethical implications that must be considered. Lawyers have a responsibility to ensure that they are using ChatGPT ethically and responsibly, to the extent they feel confident using it at all. By understanding the limitations of the technology and navigating the ethical minefield summarized above, lawyers will likely one day be able to use this tool in a way that is both innovative and ethical.
1. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/
2. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_3_diligence/
3. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/
4. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_3_diligence/comment_on_rule_1_3/
5. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_5_fees/
6. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_8_4_misconduct/
7. https://www.americanbar.org/groups/construction_industry/publications/under_construction/2019/spring/model-rule-8-4/