When a lawyer submits a legal brief, the expectation is that the cases cited within it are real. They should have real docket numbers, real judges, and real holdings. But in a growing number of high-profile court cases, those citations are turning out to be entirely fictional, conjured out of thin air by large language models like ChatGPT. The blame game over who is responsible for these “hallucinations” has officially begun, and it is creating a messy, and sometimes embarrassing, liability crisis for the legal profession.
Who dropped the ball: The lawyer or the tool?
At the center of the debate is a fundamental question of professional responsibility. Attorneys have a duty of competence under the rules of professional conduct, which the comments to ABA Model Rule 1.1 extend to understanding the benefits and risks of the technology they use. Yet the typical argument from a lawyer caught in a hallucination scandal is simple: “I didn’t know it would make things up.” In several recent cases, including the infamous Mata v. Avianca case in the Southern District of New York, lawyers have admitted to using ChatGPT to draft briefs without verifying the cases it cited. In Mata, the result was a $5,000 sanction and a very public shaming. But is the fault entirely on the human?
Proponents of AI argue that these models are not search engines. They are statistical machines that predict the next most likely word. They do not “know” the law. The user, they say, has an absolute duty to read the output critically. But the tech companies behind the models are not entirely free from blame. OpenAI, for instance, has acknowledged that GPT-4 can hallucinate, yet it markets the model as a productivity tool for professionals. This creates a dangerous gray area in which the promise of efficiency drowns out the warning of inaccuracy.
The hidden cost of speed
Law firms are under immense pressure to bill hours and deliver results quickly. The allure of generative AI is obvious: a motion that once took three hours to draft can now be produced in ten minutes. But the hidden cost is the time required to verify every single citation. In the rush to adopt AI, many firms have skipped the crucial step of training their staff on its limitations. We are now seeing the fallout. In a recent federal case in Texas, a lawyer argued that they believed the AI was “reliable” because it sounded confident. That is a dangerous conflation of fluency with accuracy: a model’s confident tone says nothing about whether its output is true.
The blame game is also spreading to the vendors. Several legal tech companies now offer AI tools trained specifically on legal databases. These tools are supposed to be safer, but they are not immune to hallucinations. When a tool promises “hallucination-free” results and still produces a fake case, at least part of the liability argument shifts back to the software provider. This is where the lawsuits will likely start. If a client suffers damages, say a lost case or a sanctions fine, the client will look to the law firm, which will in turn look to the software developer for indemnification.
The judiciary’s response
Judges are growing tired of these excuses. Many courts, beginning with Judge Brantley Starr’s widely copied 2023 standing order in the Northern District of Texas, now require lawyers to certify that they have verified any AI-generated content. Some judges are asking for affidavits explaining the exact process used to generate a brief. The message is clear: ignorance of how the AI works is no longer a valid defense. The judiciary is putting the burden squarely on the attorney’s shoulders. “The technology is an assistant, not a replacement for your judgment,” one federal magistrate wrote in a recent order.
This approach, while strict, makes sense from a procedural standpoint. Courts cannot sanction a chatbot. They can only sanction a person. But this puts small firms and solo practitioners in a tough spot. They may lack the budget for expensive, low-hallucination legal AI tools and rely on free versions of ChatGPT. They are the most vulnerable to these errors and the most likely to face professional discipline.
Is there a path forward?
The blame game will continue until there is a cultural shift in the legal profession. The solution is not to ban AI; that ship has sailed. It is to mandate a “human-in-the-loop” verification process: every case an AI cites gets checked against a reputable legal database before the filing is signed. Furthermore, law schools must start teaching AI literacy as part of the core curriculum. The next generation of lawyers needs to understand that a large language model is a tool for brainstorming, not for final legal research.
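What would that verification step look like in practice? Here is a minimal sketch in Python, assuming the firm licenses some citation database with a searchable HTTP API; the endpoint, the query parameter, and the verify_citations helper are all hypothetical placeholders for illustration, not a real vendor product.

```python
import requests

# Hypothetical endpoint: a stand-in for whatever citation database the
# firm actually licenses. Not a real API.
SEARCH_URL = "https://legal-db.example.com/api/cases"

def verify_citations(citations):
    """Sort AI-suggested citations into confirmed and unverified lists.

    Anything in the unverified list must be run down by a human
    before the brief is filed.
    """
    confirmed, unverified = [], []
    for cite in citations:
        resp = requests.get(SEARCH_URL, params={"citation": cite}, timeout=10)
        if resp.ok and resp.json().get("results"):
            confirmed.append(cite)
        else:
            unverified.append(cite)
    return confirmed, unverified

# Two real-looking strings: the first refers to a real docket; the second
# is one of the fabricated citations from the Mata v. Avianca brief.
draft_citations = [
    "Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023)",
    "Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)",
]
confirmed, unverified = verify_citations(draft_citations)
for cite in unverified:
    print(f"NOT FOUND - verify by hand before filing: {cite}")
```

The design point is that nothing the model cites reaches the signature page until the unverified list is empty and a human has accounted for everything the database could not find.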
Ultimately, the responsibility for a hallucinated case rests with the person who signs the filing. The AI does not have a bar license. It cannot be disbarred. But the tech companies have a responsibility to be clearer about the risks. A simple warning pop-up that says, “This case may not be real,” would go a long way. Until then, the blame game will only get louder as more fake cases appear in court dockets across the country.
The legal profession is at a crossroads. It can either embrace AI with rigorous safeguards or continue to pay the price for blind trust. For now, the blame is landing on the lawyers, but the tech companies should not expect to escape scrutiny for long.
Ahmed Abed – News journalist