The AI Red Flags in DOJ Fraud Investigations
Litigation attorneys Jessica Sievert and Megan Mocho were quoted in Law360 on the federal government's growing use of artificial intelligence (AI) and data analytics in healthcare fraud enforcement. In July, the U.S. Department of Justice (DOJ) announced a sweeping set of cases against more than 300 defendants accused of violating the False Claims Act (FCA), an effort made possible by cross-agency cooperation and new digital tools.
Ms. Sievert warned that AI heightens confidentiality and patient privacy risks and said she expects a measured rollout. She added the government will have to answer questions about data governance, Health Insurance Portability and Accountability Act (HIPAA)-compliant vendor oversight and defensible analytics.
"With the implementation of AI, confidentiality is a huge concern, especially in the healthcare context where patient and billing data is at issue," she said.
Ms. Mocho advised providers to proactively analyze their own billing data, anticipating that AI tools may be applied to it, in order to spot outliers and demonstrate good-faith compliance. She also recommended expanding monitoring and analytics to stay ahead of scrutiny from the DOJ or the U.S. Department of Health and Human Services (HHS).
"It creates almost an enhanced obligation for clients to consider use of AI in their data analytics so that they too can be on top of compliance in a proactive way," she said.
READ: The AI Red Flags in DOJ Fraud Investigations (Subscription required)