ChatGPT: the Legal Assistant?

By Frishta Abdul Wali

On November 30, 2022, OpenAI showcased a demo of ChatGPT and its capabilities.[1] Within days of its launch, users were flooding OpenAI’s platform as the tool spread across social media.[2] ChatGPT is a program built on a large language model (LLM), which processes massive amounts of text data in order to generate responses to users.[3] ChatGPT is not the only program with such capabilities; as new models and approaches are introduced into the market, generative artificial intelligence (AI) has taken center stage in all aspects of modern daily life. With generative AI, users can now “generate new content based on a variety of inputs. Inputs and outputs to these models can include text, images, sounds, animation, 3D models, or other types of data.”[4] Tools like ChatGPT do offer benefits and may be able to “assist attorneys and law office staff with some tasks—and even to provide advice and direction to clients directly–in the relatively near future.”[5] However, with new generative AI platforms accessible to the public and a lack of external oversight or governance, public misuse of generated output will continue to grow as a notable issue.

In Mata v. Avianca, we already see ChatGPT being misused directly by legal professionals: attorneys for the plaintiff filed an Affirmation in Opposition that cited fabricated cases generated by ChatGPT.[6] Generating ‘false’ cases, or in this instance misinformation, is only one of many issues that can arise from unsupervised use of generative AI in legal research. Further issues include (1) privacy concerns, (2) bias and discrimination, (3) misinformation and manipulation, (4) autonomous decision-making, and (5) intellectual property, a list that ChatGPT itself generated when prompted.[7] For example, when ChatGPT is prompted to explain ethical issues such as privacy concerns, it responds, “ChatGPT and other language models are trained on large amounts of data, which can include sensitive information such as personal details and private conversations.”[8] We must ask ourselves: where does such sensitive information and personal data come from? One source of that information lies in user queries. As Matt Coatney[9] explains, “OpenAI reserves the right to review anything you send and can use the content to train its model, potentially exposing your information to other users.”[10] In other words, using the platform is itself training the model, and in the context of legal research, queries that include sensitive facts or proprietary information could expose confidential client information. The potential for misuse indicates a clear need for regulatory intervention in the use of generative AI in certain sectors of the industry.


[1] Bernard Marr, A Short History Of ChatGPT: How We Got To Where We Are Today, Forbes (May 19, 2023, 1:14 AM), https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/?sh=46d3bd03674f.

[2] Id.

[3] Chat GPT: What is it?, Univ. Cent. Ark., https://uca.edu/cetal/chat-gpt/#:~:text=Chat%20GPT%20is%20an%20artificial,generate%20responses%20to%20user%20inquiries (last visited Sept. 15, 2023).

[4] What is Generative AI?, Nvidia, nvidia.com/en-us/glossary/data-science/generative-ai/ (last visited Sept. 15, 2023).

[5] Dan Pinnington & Reid Trautz, A Present Look at ChatGPT in Your Future, 49 A.B.A. Sec. L. Prac. 64, 65 (2023).

[6] Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 4114965, at *3 (S.D.N.Y. June 22, 2023). 

[7] Aviva Meridian Kaiser, Ethical Obligations When Using ChatGPT, Wis. Law., Feb. 2023, at 44.

[8] Id.

[9] Author of Navigating the Legal Landscape of Generative Artificial Intelligence: The Risks and Opportunities of ChatGPT.

[10] Matt Coatney, Navigating the Legal Landscape of Generative Artificial Intelligence: The Risks and Opportunities of ChatGPT, 49 A.B.A. Sec. Litig. 11, 13 (2023).