Using AI? Read This First!

Generative AI is a tool that can help write or explain things, but it often makes mistakes. When used wisely, AI tools can help streamline research and save time, advancing access to justice. As this NBC News article highlights, AI can help litigants navigate court procedures and simplify legal jargon. However, these tools are not infallible. Artificial intelligence can hallucinate, misinterpret nuance, and reinforce bias. Using incorrect legal information can damage your claims, harm your case, or limit your ability to protect your rights.

AI tools can produce incorrect information, often known as “hallucinations,” in response to legal questions or prompts. When it comes to online legal resources, much of the most trustworthy information is locked behind a paywall or subscription that free AI tools can’t access. For this reason, legal questions are especially susceptible to hallucinations (Mignanelli, 2024). Legal researcher Damien Charlotin maintains a public database tracking court cases involving AI hallucinations.

A recent article in the ABA Journal discusses one such hallucination, cited in a motion filed by an Iowa attorney who was suspended as a result. In May, The Guardian reported that a Utah attorney was sanctioned after it was discovered that he had cited a ChatGPT-hallucinated case in his brief. This article by Thomson Reuters offers another example: attorneys in Wyoming were caught including nine AI-hallucinated cases in their argument. And in Nevada, attorneys caught citing 14 fabricated cases were referred to the state bar and directed to write explanatory letters to their law school deans.

These are just a few recent examples that highlight the importance of using AI carefully in legal contexts. If you decide to use AI as part of your legal work, double-check the results to make sure that what was generated is real, accurate, and up to date. Keep reading for tips on using AI critically.

1. Look up citations in a trusted legal database to confirm their existence.

Jenkins Law Library offers access to legal databases like Westlaw, Lexis, Bloomberg Law, and LLMC, which you can use to confirm that a generated citation refers to a real case or code section. If you can’t find your generated case in any trusted legal databases, it may be a hallucination. If you’re unable to visit Jenkins, check out our blog post on open-access caselaw for freely accessible databases online.

2. Use citator tools. 

Citator tools such as Shepard’s on Lexis and KeyCite on Westlaw help verify that the case you are reviewing is still "good law" (that is, that the case is still valid and citable).

3. Make sure any AI-generated content, forms, or information pertains to your jurisdiction.

Legal information often varies by jurisdiction and court type. If you use AI, verify that the generated results apply to your jurisdiction.

4. Review secondary sources. 

Legal research tends to involve two types of sources. Primary sources are authorities such as statutes and judicial opinions. Secondary sources, by comparison, interpret and explain the law; examples include law review articles, legal encyclopedias, and treatises.

If you’re struggling to find a case or source on point for your legal argument, secondary sources can guide you to primary sources you can cite.

Remember:

        • AI is not a lawyer.
        • AI may give wrong answers, even if it claims to be accurate.
        • AI may give advice that does not fit your case.
        • Do not share personal or private information when using AI.

Have any questions? Ask us

This blog is for information only. It is a public service to help you learn. It is not legal advice and does not replace help from a lawyer or court staff.

Referenced: Mignanelli, N. (2024). The Legal Tech Bro Blues: Generative AI, Legal Indeterminacy, and the Future of Legal Research and Writing. The Georgetown Law Technology Review, 8(2), 298.