The Uncharted Territory of AI in Legal Practice: A Cautionary Tale
In a groundbreaking legal event, a New York lawyer, Steven A. Schwartz, recently faced disciplinary action for employing an AI tool, ChatGPT, for legal research in a personal injury lawsuit involving Avianca Airlines. This case serves as a stark reminder of the potential pitfalls of integrating AI technologies into legal practice.
The plaintiff’s legal team submitted a brief that cited several precedent cases to support their arguments. However, the airline’s legal team raised a red flag when they couldn’t locate several of the referenced cases. Upon investigation, it was revealed that six of the cited cases were non-existent, leading Judge P. Kevin Castel to demand an explanation from the plaintiff’s legal team.
At the heart of the controversy was the AI tool ChatGPT, which had generated the problematic case references. This AI system creates original text upon request and has been used by millions since its launch in November 2022. It can answer questions in a natural, human-like language, mimic other writing styles, and even generate case references, as seen in this lawsuit.
Schwartz, who had been an attorney for over 30 years, was using ChatGPT for the first time. He had asked the AI tool to confirm the veracity of the cases it provided, and ChatGPT assured him of their authenticity. The realization that the AI had supplied false case references came only after the airline's lawyers disputed the citations.
This incident has brought several important considerations to the fore for the legal profession. AI tools like ChatGPT are increasingly used across a variety of applications, but they are not infallible. They can inadvertently spread misinformation and exhibit bias, and their use demands a high degree of caution.
While AI can bring significant benefits to legal research by improving efficiency and reducing time spent on routine tasks, its use in critical applications like case citation should be treated with caution. Information from AI tools must be independently verified, and reliance on them should never substitute for professional judgment and expertise.
The incident has left the legal community with several important questions. What are the ethical implications of using AI in legal practice? How can we ensure the accuracy of AI-generated legal information? And how can we best integrate AI tools into legal practice to leverage their benefits without falling prey to their limitations?
The legal community will have to grapple with these and many more questions as AI becomes more pervasive in the legal sector. The case serves as a reminder that while AI holds vast potential, its application in areas like law needs careful consideration and robust checks and balances.
The disciplinary hearing for the lawyers involved in this case is scheduled for 8 June. As we await the outcome, it is clear that the incident has already catalyzed a much-needed dialogue on the role of AI in law.