A Legal Practitioner's Guide to AI and Hallucinations
Data privacy and cybersecurity attorney Mark Francis co-authored a guide on artificial intelligence (AI) for lawyers, focusing on how generative AI works, what it does (and does not) do well, and how it can be used responsibly. AI tools have the potential to transform legal work: whether scanning hundreds of cases in seconds, analyzing case outcomes to predict litigation results or automating contract creation, chatbots, virtual AI assistants, machine learning, natural language processing and large language models all stand to reshape how legal professionals practice day to day and how ordinary people access the law. With the promise of efficiency and cost savings, however, comes the challenge of errors, in particular AI hallucinations, a phenomenon in which an AI program produces content that is distorted, misleading or altogether fictitious, such as a citation to a made-up case. To prevent such errors, and to maintain trust and confidence in the justice system, lawyers must understand how these tools work, where they are error-prone and what basic measures can prevent fatal mistakes. Mr. Francis' piece offers several practical steps, from implementing risk rating systems to requiring multiple reviews, that can help safeguard work product. Above all, he emphasizes, the cardinal rule when relying on AI-generated content is "never trust, always verify."
The guide was published by the Thomson Reuters Institute/National Center for State Courts (TRI/NCSC) AI Policy Consortium for Law & Courts, a partnership that offers education, training and recommendations to help courts respond to the opportunities and challenges arising from advances in AI and generative AI solutions.