Colorado Springs Attorney Surprised to Discover AI-Generated Cases in Motion Filing

Colorado Springs attorney Zachariah Crabill found himself in an unexpected predicament when he discovered that several cases cited in a motion he filed for a client had been generated by ChatGPT, an artificial intelligence (AI) chatbot. Crabill, who had been practicing law in Colorado for about a year and a half, was handling his first civil litigation case, defending a client accused of breaching a car payment agreement.

In an admission of his mistake documented in court, Crabill revealed that he had turned to AI to aid his legal research and writing. Having heard about ChatGPT, a chatbot developed by OpenAI, he saw an opportunity to speed up the time-consuming process of researching relevant case law to strengthen his client's argument. Crabill believed that leveraging AI could make him more efficient at drafting legal documents and ultimately benefit his clients.

Some see AI as a potential time-saver in a field notorious for its demanding research requirements. Ramsey Lama, a former judge turned defense attorney, acknowledged the time-intensive nature of legal research, with some cases taking 20 to 30 hours of research to complete. Though he has not used ChatGPT himself, Lama said AI tools could significantly reduce the time spent on research and ultimately deliver quicker results for clients.

Crabill’s initial interactions with ChatGPT were positive: he received accurate responses to his inquiries about existing Colorado laws, and this success bolstered his trust in the technology. Problems arose, however, when Crabill started searching for cases to cite in his motion. Unbeknownst to him, ChatGPT generated fictitious cases that seemed to match his client’s situation. Such fabrications, often called "hallucinations," are a known pitfall of AI chatbots, and users who have built confidence in a tool through earlier accurate answers are especially prone to accepting its output without verification.

One of the fabricated cases ChatGPT produced was Gonzales v. Allstate Ins. Co. from 2014. According to ChatGPT, the defendant failed to appear at a hearing, but the court deemed her absence excusable. No such case exists in the form described. A real Gonzales v. Allstate Ins. Co. does exist, but it dates back to 2002 and concerned an insurance policy dispute following an accident that occurred outside the country.

Crabill unknowingly included these fake cases in his motion and only realized his mistake on the day of the hearing. He immediately recognized the problem and sent a message to a legal aide at the Baker Law Firm, saying he believed the case citations from ChatGPT were unreliable and could not be found in LexisNexis, a reputable legal database. The judge presiding over the hearing was likewise unable to locate the referenced cases and denied the motion because of the false citations. The judge also threatened to file a complaint against Crabill for the misleading filing. It remains unclear whether a formal complaint was lodged against the attorney with the Office of Attorney Regulations.

Lama noted that Crabill violated the "duty of candor to the tribunal" by submitting misleading information. While he acknowledged the potential benefits of AI in the legal profession, he also highlighted the evident dangers of relying on software that can generate inaccurate information. Lama playfully remarked that he did not want AI to render him obsolete, alluding to the job displacement that could result from the adoption of AI across industries.

Despite the risks, some experts, like Sean Williams, the director of the Technical Communication and Information Design Program at the University of Colorado-Colorado Springs, remain optimistic about the future of AI in the legal field. Williams acknowledged that while AI technology threatens existing jobs, it also has the potential to create new opportunities. He cited examples such as marketing companies using ChatGPT to draft copy, emphasizing that AI can provide fresh perspectives and open new possibilities that might not have been previously considered.

Nevertheless, Williams cautioned that AI's generative abilities can also propagate falsehoods. In Crabill's case, the software distorted existing data and information and fabricated cases outright. The incident underscores the need for users to verify the accuracy of AI-generated information, even when previous interactions with the tool seemed reliable.

As the legal industry continues to explore the benefits and challenges of AI integration, Crabill’s experience serves as a cautionary tale about the potential pitfalls and the importance of human oversight and critical evaluation when utilizing AI in legal research and writing.