April 16, 2020

Navigating Artificial Intelligence and Consumer Protection Laws in Wake of the COVID-19 Pandemic

Holland & Knight Alert
Kwamina Thomas Williford | Anthony E. DiResta

Highlights

  • The Federal Trade Commission's (FTC) Bureau of Consumer Protection director issued a statement on Using Artificial Intelligence and Algorithms, providing added insight into how the FTC assesses a company's use of Artificial Intelligence and Algorithms (collectively AI).
  • This statement comes in the midst of the COVID-19 pandemic, during which there has been a wave of ingenuity unleashed, much of which implicates AI. COVID-19 tracking mechanisms, disinfecting robots, smart helmets, thermal camera-equipped drones and advanced facial recognition software are being considered and deployed in the fight against COVID-19.
  • The FTC statement brings attention to the potential consumer protection exposure for companies – reaffirming that the consumer protection laws in place for traditional human activity and automated decision-making technology will apply equally to sophisticated AI.

The Federal Trade Commission's (FTC) Bureau of Consumer Protection Director Andrew Smith issued a statement on Using Artificial Intelligence and Algorithms, providing added insight into how the FTC assesses a company's use of Artificial Intelligence and Algorithms (collectively AI). This statement comes in the midst of the COVID-19 pandemic, during which we have seen a wave of ingenuity unleashed, much of which implicates AI. COVID-19 tracking mechanisms, disinfecting robots, smart helmets, thermal camera-equipped drones and advanced facial recognition software are being considered and deployed in the fight against COVID-19.1

These solutions may help save lives, but they also have consumer protection implications that must be considered. This FTC statement is timely and reminds us of the potential consumer protection exposure for companies – reaffirming that existing consumer protection laws covering traditional human activity and automated decision-making technology will apply equally to sophisticated AI. It further highlights how companies can manage the risk, emphasizing that the use of AI tools should be transparent, explainable, fair and empirically sound while fostering accountability.

Consumer Protection Risks Presented by AI

The FTC has long experience addressing the consumer protection risks presented by the use of data and algorithms to make decisions about consumers, and the statement reinforces that those protections will be enforced in connection with AI technology. Front and center in the assessment will be traditional concepts of fairness, accuracy and transparency implicated by Section 5 of the FTC Act's prohibition against unfair and deceptive acts, equal opportunity laws such as the Equal Credit Opportunity Act (ECOA), and laws governing consumer access to credit, employment and insurance such as the Fair Credit Reporting Act (FCRA).

Unfair and Deceptive Acts. Section 5(a) of the FTC Act prohibits "unfair or deceptive acts or practices in or affecting commerce," and is often used to hold companies to fair and transparent privacy and security standards. For example, in this time of crisis, people may be more willing to share personal information related to COVID-19 status and location for certain uses. This triggers numerous privacy concerns for consumers providing their sensitive information as well as responsibilities for companies collecting consumer data.

Nondiscrimination Laws. Equal opportunity laws, such as the ECOA and Title VII of the Civil Rights Act, protect consumers from being discriminated against on the basis of race, national origin, sex and other protected characteristics. With AI, we know that objective data (such as zip codes) may serve as a proxy for race, resulting in actionable disparate impact claims. In 2019, the federal government charged a social media and technology company with violating fair housing laws by enabling discrimination on its advertising platform under a disparate impact analysis.2 Data now surfacing suggest that COVID-19 hospitalization and death rates are disproportionately high among black and Latino people.3 If COVID-19-related data is used in connection with extrapolating, predicting or determining access to healthcare, the utilization of such an algorithm could result in a disparate impact on black and Latino communities if such disparities are not accounted for.

Fair Credit Reporting Act (FCRA). The FCRA protects information collected by consumer reporting agencies (CRAs) and sets strict notice, disclosure and investigation requirements around the use of such information. Companies should be aware of whether their activities and use of AI could cause them to be deemed a CRA or otherwise trigger obligations under the FCRA. For example, if the AI is being utilized to provide data about consumers or make decisions about consumer access to credit, employment, insurance, housing, government benefits or check-cashing, the company may be viewed as a CRA that must comply with the FCRA. This means taking diligent measures to ensure information is accurate, including providing consumers an opportunity to challenge inaccurate information. Similarly, if the company makes automated decisions based on data from a third party, an adverse action notice may be needed if the company's actions implicate the FCRA.

Managing Consumer Protection Risks Presented by AI

The FTC highlights several key principles that can help companies manage this risk. While not every use of AI will warrant strict adherence to these principles, they should be considered when managing risk.

  1. Be Transparent. Don't deceive consumers about how you use automated tools. When collecting information from consumers, use clear messaging and conspicuous disclosures about what information is being collected, how it is going to be secured and stored, and how it is going to be used. If automated tools lead you to change the terms of a deal or the way information impacts a score, make sure to tell consumers.
  2. Explain Your Decision to the Consumer. Understand that AI could trigger the FCRA if it involves compiling information and making decisions related to consumer credit, employment or insurance. For example, if AI is used to assign risk scores to consumers, you should also disclose the key factors that affect the score, ranked in order of importance. If you deny consumers something of value based on algorithmic decision making, be prepared to explain why. Under this law, consumers must also have an opportunity to correct information used to make decisions about them.
  3. Ensure Decisions Are Fair. Be sensitive to the disparate impact that your AI (or your products and services integrating AI) may have on protected classes. For example, given the disproportionate COVID-19-related hospitalizations and deaths that appear to be occurring in black and Latino communities, the use of COVID-19-related information in AI must be assessed and controlled for as a potential proxy for race. You should be mindful of such disparities on the front end, and tests should be done on the back end to assess whether there is a disparate impact on a protected class. If there is a disparate impact, you must ensure the practice is narrowly tailored to address a legitimate need.
  4. Ensure Data and Models Are Robust and Empirically Sound. Make sure the AI models are validated and revalidated to ensure they work as intended. Use acceptable statistical principles and methodology, and adjust as necessary to maintain predictive accuracy.
  5. Be Accountable. The FTC suggests that the development of AI comes with a responsibility to be accountable for compliance, ethics, fairness and nondiscrimination. It suggests four key questions to ask to help with such an assessment:

     a. How representative is your data set?

     b. Does your data model account for biases?

     c. How accurate are your predictions based on big data?

     d. Does your reliance on big data raise ethical or fairness concerns?

Perspective is key. Consider your accountability mechanisms, and the prudence of using independent standards or outside expertise to step back and take stock of new AI developments. Finally, you should protect your algorithms from unauthorized use. This includes making clear for what purposes, and in what manner, the algorithm should be used.
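For companies with data science teams, the back-end testing described above can start with something as simple as comparing favorable-outcome rates across demographic groups. The sketch below (hypothetical data and group labels) applies the "four-fifths rule" long used by U.S. agencies as a conventional screening threshold for adverse impact; it is an illustration of the concept, not a legal standard or a substitute for counsel.

```python
# Minimal sketch of a back-end disparate impact screen using the
# "four-fifths rule": an adverse impact ratio below 0.8 is a
# conventional red flag warranting closer review. All data hypothetical.

def selection_rate(outcomes):
    """Fraction of favorable (True) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# Hypothetical model decisions (True = approved) for two groups
reference = [True] * 80 + [False] * 20   # 80% approval rate
protected = [True] * 55 + [False] * 45   # 55% approval rate

ratio = adverse_impact_ratio(protected, reference)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.55 / 0.80 ≈ 0.69
if ratio < 0.8:
    print("Potential disparate impact - review model inputs for proxies.")
```

A screen like this does not establish or refute liability; it simply flags where the "narrowly tailored" analysis discussed above should begin.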

Conclusion: Takeaways

Innovation and AI will be needed to help our nation navigate these unprecedented times. While doing so, it is important to keep consumer protection laws in mind. The FTC has made clear that traditional consumer protection laws will apply. Importantly, the statement does not specifically account for the differences between automated decision making and more sophisticated AI, the latter of which relies on machine learning and black-box inputs that may be unknown. We will continue to follow developments in this area and whether the FTC's approach to such AI evolves over time. For now, however, companies should take heed of the lens and expectations the FTC will apply when assessing AI. When it comes to consumer protection: understand the technology, understand its impact, understand your disclosure obligations and be accountable for what you put into the marketplace.

How Holland & Knight Can Help

Holland & Knight's Consumer Protection Defense and Compliance Team and Data Strategy, Security & Privacy Team work collaboratively to offer the full range of solutions our clients need to operate in today's data- and consumer-driven marketplace. Our seasoned professionals are committed to anticipating the risk management challenges our clients confront, developing appropriate compliance management systems, and advocating before regulatory bodies and courts with the touch that comes from former roles in government agencies and credible reputations before decision-makers. For questions or more information about AI and consumer protection during this unprecedented COVID-19 pandemic, contact the authors.

Notes

1 See BBC March 3, 2020, article, "Coronavirus: China's Tech Fights Back." See also NPR's April 10, 2020, article, "Apple and Google Build Smartphone Tool to Track COVID-19."

2 See HUD v. Facebook.

3 For example, in New York City, preliminary data from the Bureau of Communicable Disease Surveillance System shows that COVID-19 is killing black and Latino people at twice the rate it is killing white people.


DISCLAIMER: Please note that the situation surrounding COVID-19 is evolving and that the subject matter discussed in these publications may change on a daily basis. Please contact your responsible Holland & Knight lawyer or the authors of this alert for timely advice.


Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem. Moreover, the laws of each jurisdiction are different and are constantly changing. If you have specific questions regarding a particular fact situation, we urge you to consult competent legal counsel.

