July 14, 2022

FTC: Companies Should Use Caution When Relying on AI Tools to Curtail Online Harms

Holland & Knight Alert
Anthony E. DiResta | Kwamina Thomas Williford | Wanqian Zhang

Highlights

  • The Federal Trade Commission (FTC) is critical of the use of artificial intelligence (AI) to combat "online harms." In a recent report issued in response to a congressional inquiry, the FTC found that the use of AI has not significantly curtailed online harms overall.
  • The FTC raised significant concerns that AI tools can be ineffective, inaccurate, biased and discriminatory, given the lack of transparency and accountability among large technology companies and platforms regarding their use of AI algorithms and datasets.
  • Platforms, decision-makers, policymakers, and companies that use AI to identify harmful content must exercise great caution in either mandating the use of or overrelying on AI tools.

In a report released on June 16, 2022, the Federal Trade Commission (FTC) is critical of the use of artificial intelligence (AI) to combat online harms, finding that the use of AI has not significantly curtailed those harms overall. The "online harms" of particular concern include online fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, illegal drug sales and other illegal activities, sexual exploitation, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.1

While the FTC Report Recognizes AI's Use in Combating Harmful Content and Other Positive Outcomes, It Cautions Against "Overreliance"

The FTC report finds AI to be effective in combating harms whose detection requires no context, such as illegal items sold online and child pornography, and recognizes that AI systems can be effective in preventing the inadvertent release of harmful information. AI can also be used to apply "interventions" or "frictions" before harmful content is released, including labeling content, adding interstitials and sending warnings. The FTC does not believe, however, that these strategies prevent maliciously spread information.2

Platforms can also use AI tools to address online harms by finding the networks and actors behind them. AI tools can facilitate cross-platform mapping of communities that spread harmful content. However, these strategies can also inadvertently ensnare marginalized communities using protected methods to communicate about authoritarian regimes.

Notwithstanding the inevitability of AI use, the FTC is concerned about using AI to combat online harms and cautions against overreliance for several reasons.

First, AI tools have built-in imprecision. The report cautions that the datasets used to train AI systems are often not sufficiently large, accurate or representative, and that the resulting classifications can be problematic. It explains that AI tools are generally deficient at detecting and accounting for new phenomena, and that the operation of AI tools is subject to platform moderation policies that may themselves be substantially flawed.

Second, the report suggests that AI tools are often unreliable at understanding context and therefore typically cannot effectively detect fraud, fake reviews and other implicitly harmful content.

Third, the report suggests that the use of AI cannot solve, and instead can exacerbate, bias and discrimination. It explains that inappropriate datasets and a lack of diverse perspectives among AI designers can exacerbate discrimination against marginalized groups. It cautions that big technology companies can influence institutions and researchers and set the agenda for what AI research the government funds. It cautions further that AI tools used to uncover the networks and actors behind harmful content may inadvertently stifle minority groups.

Fourth, the report suggests that AI development can incentivize invasive consumer surveillance because improving AI systems requires amassing large amounts of accurate, representative training data.

Fifth, the report cautions that bad actors can easily evade AI detection by hacking, by developing their own AI technology or simply by using typos and euphemisms.

Finally, the report cautions that the massive volume of ordinary, pervasive posts that express discriminatory sentiments cannot be detected effectively by AI, even with human oversight.

FTC Report's Proposed Recommendations

The report identifies increasing the transparency and accountability of those deploying AI as a top priority. It stresses the importance of diversifying datasets, AI designers and moderators to combat bias and discrimination. The report also finds that human oversight is a necessity.

Transparency

The FTC report stresses that to increase transparency, platforms and other entities should do the following:

  1. Make sufficient disclosures to consumers about their basic civil rights and how those rights are affected by AI. The report points out that consumers have the right to be free from inaccurate and biased AI, the right to be free from pervasive or discriminatory surveillance and monitoring, and the right to meaningful recourse if the use of an algorithm harms them.
  2. Give researchers access to sufficient, useful, intelligible data and algorithms for them to properly analyze the utility of AI and the spread and impact of misinformation.
  3. Keep auditing and assessment independent while protecting auditors and whistleblowers who report illegal AI use.

Accountability

The FTC report stresses that to increase accountability, platforms and other entities should conduct regular audits and impact assessments, be held accountable for the outcomes and impacts of their AI systems, and provide appropriate redress for erroneous or unfair algorithmic decisions.

Diversity – Assess Through a Diverse Lens

The FTC report recommends increasing diversity among datasets, AI designers and moderators. Firms should retain people with diverse perspectives and strive to create and maintain diverse, equitable and inclusive cultures. AI developers should be aware of the context in which data is being used and the potential discriminatory harm it could cause, and should mitigate any such harm in advance.

Human Oversight

The FTC stresses the importance of proper training and workplace protections for AI moderators and auditors. The training should correct human moderators' implicit biases and their tendency to be overly deferential to AI decisions. The FTC encourages platforms and other internet entities to use Algorithmic Impact Assessments (AIAs) and audits, and to document the assessment results in a standardized way. AIAs allow for the evaluation of an AI system's impact before, during or after its use, enabling companies to mitigate bad outcomes in a timely manner and giving the FTC and other regulators information for investigations into deceptive and unfair business practices. An audit, by contrast, focuses on evaluating an AI model's output.

Two Commissioners Criticize the Report

Commissioner Noah Joshua Phillips issued a dissenting statement, and Commissioner Christine S. Wilson listed several disagreements with the report in a concurring statement. The two commissioners based their criticisms on three grounds.

First, the agency did not solicit sufficient input from stakeholders. The commissioners perceive the FTC report as a literature review of academic articles and news reports on AI. They note that the authors did not consult any internet platforms about how they view AI's efficacy, and they find that the report frequently cites the work and opinions of current FTC employees, arguing that this quantity of self-reference calls the report's objectivity into question.

Second, they believe that the report's recommendations might produce the countereffect of leaving compliant entities vulnerable to FTC enforcement actions.3

Third, they conclude that the report's negative assessment of AI use in combating online harms lacks foundation. They find that conclusions of AI's inefficacy are sometimes based on the fact that AI tools do not completely eliminate harmful content, and they note that the report lacks a cost-benefit analysis of whether the time and money saved by using AI tools to combat harmful content outweigh the costs of those tools missing some percentage of that content.

The Takeaways

AI has tremendous benefits that companies leverage every day, but companies doing so would be prudent to be mindful of the FTC's cautions and take steps to fortify their AI-related practices:

  1. Collect only the information necessary to provide the service or product. The FTC is not against implementing innovative AI tools to prevent fraud or fake reviews, but it encourages data minimization. Companies should tailor data collection to what they need to render their services or products.
  2. Be transparent. The FTC may require social media platforms and other internet entities to disclose enough information for consumers to make informed decisions about whether and how to use certain platforms. The FTC may also require entities to grant researchers a degree of access to information and algorithms.
  3. Be accountable. The FTC may hold platforms and other internet entities responsible for the impact of their AI tools, especially if the AI harms the rights of marginalized groups, even when the tools are intended to combat harmful content.
  4. Enhance human oversight. The FTC may encourage standardized training for AI moderators and auditors, as well as enhanced workplace protections for them.
  5. Refrain from invasive consumer surveillance. In the FTC's view, consumer privacy interests outweigh the accuracy and utility of AI tools.
  6. Be cautious about potential free speech disputes when prebunking "misinformation."
  7. Expect further developments. The FTC may conduct more research on using AI to combat online harms, and its conclusions may change significantly based on the sources it decides to consult.

How We Can Help

Holland & Knight's Consumer Protection Defense and Compliance Team includes a robust FTC practice, with experienced attorneys that are recognized as thought leaders in the field. The firm has represented dozens of companies and individuals in federal and state investigations concerning advertising, marketing practices, privacy and data security, consumer credit, telemarketing and debt collection, saving clients from significant financial loss, public scrutiny and having to make changes to their core business operations.

For more information or questions about the FTC report or the legal implications of AI use, contact authors Anthony DiResta or Kwamina Williford.

Notes

1 In legislation enacted in 2021, Congress directed the FTC to examine ways that AI "may be used to identify, remove, or take any other appropriate action necessary to address online harms." See Statement of Commissioner Alvaro M. Bedoya Regarding Report to Congress on Combatting Online Harms Through Innovation, FTC (June 16, 2022) (acknowledging that in the 2021 Appropriations Act, Congress asked the Commission to report on the use of AI to detect or address harmful online content including fake reviews, opioid sales, hate crimes and election-related disinformation).

2 Commissioners Christine S. Wilson and Noah Joshua Phillips are concerned about "prebunking misinformation" recognized as effective in the report. Both point out in their statements that prebunking information that is not verifiably false but may be false might create free speech issues. See Dissenting Statement of Commissioner Noah Joshua Phillips Regarding the Combatting Online Harms Through Innovation Report to Congress and Concurring Statement of Commissioner Christine S. Wilson Report to Congress on Combatting Online Harms Through Innovation, FTC Public Statements (June 16, 2022).

3 In 2021, the FTC brought a case against an ad exchange company for violations of the Children's Online Privacy Protection Act (COPPA) and Section 5 of the FTC Act. The company claimed to take a unique human and technological approach to traffic quality and employed human review to ensure compliance with its policies and to classify websites. The company's human review failed, but it was only that human review that provided the "actual knowledge" needed for the Commission to obtain civil penalties under COPPA; had the company relied entirely on automated systems, it might have avoided monetary liability. U.S. v. OpenX Technologies, Inc., Civil Action No. 2:21-cv-09693 (C.D. Cal. 2021).


Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem, and it should not be substituted for legal advice, which relies on a specific factual analysis. Moreover, the laws of each jurisdiction are different and are constantly changing. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. If you have specific questions regarding a particular fact situation, we urge you to consult the authors of this publication, your Holland & Knight representative or other competent legal counsel.

