Responsible AI Use for Courts: Minimizing and Managing Hallucinations and Ensuring Veracity
Cybersecurity and data privacy attorney Mark Francis was featured in a Thomson Reuters report analyzing how courts should address artificial intelligence (AI) hallucinations and uphold responsible AI use. AI hallucinations occur when an AI tool produces content that sounds accurate and authoritative but is in fact inaccurate or entirely fabricated. Stories of hallucinations in the legal industry, such as fake cases cited in briefs, have only become more common as more lawyers turn to the emerging technology to increase operational efficiency.
In the report, Thomson Reuters interviews experienced practitioners about the risks AI poses in the courtroom and what judges and legal professionals should do to mitigate them. Mr. Francis emphasized that understanding these technologies is the first step to managing their deployment. Generative AI (GenAI), he explained, is designed to predict likely language rather than to verify facts, which makes it inherently susceptible to error. Knowing this, users should exercise heightened vigilance when reviewing GenAI-generated material so that inaccuracies are caught before they undermine the integrity of a filing. Overall, the report offers a comprehensive look at the challenges of AI adoption by lawyers, underscores that judicial skepticism is a matter of professional responsibility, and shares concrete steps for recognizing the issues at play and working through them in a way that protects clients and counsel.
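To make the prediction point concrete, the toy sketch below (in Python, and vastly simpler than any production GenAI system) completes a prompt by always choosing the statistically most common next word from a small, invented training snippet. It illustrates why purely predictive text can read as fluent and confident while asserting nothing that has been verified; the training text and every phrase in it are hypothetical, chosen only for the example.

```python
# Toy illustration only -- not any vendor's actual model. A bigram "language
# model" that, like GenAI at far larger scale, picks statistically likely next
# words. It optimizes for plausibility, not truth, which is why fluent output
# can still be wrong ("hallucinated").
from collections import Counter, defaultdict

# Hypothetical training snippet, invented for this example.
training_text = (
    "the court held that the motion was denied . "
    "the court held that the claim was dismissed . "
    "the motion was granted in part ."
)

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
words = training_text.split()
for prev_word, next_word in zip(words, words[1:]):
    bigrams[prev_word][next_word] += 1

def complete(prompt: str, length: int = 8) -> str:
    """Greedily extend the prompt with the most frequent next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        # Most probable continuation, regardless of whether it is true.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# The completion is grammatical and plausible-sounding, but the model has no
# notion of whether "the court held that the motion was denied" is accurate
# for any real case -- it is simply the highest-probability continuation.
print(complete("the court"))
```

Running the sketch produces a grammatical but circular completion ("the court held that the court held that the court"), a miniature version of how predictive fluency can outrun factual accuracy, and a small demonstration of why GenAI output warrants the heightened review the report recommends.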
READ: Responsible AI Use for Courts: Minimizing and Managing Hallucinations and Ensuring Veracity (viewing the report requires filling out a form)