Podcast - Where the FTC Stands on AI: Evidence Over Speculation
As artificial intelligence (AI) becomes embedded in daily life, the Federal Trade Commission (FTC) has signaled it has no immediate plans to implement AI-specific rules. In this episode, consumer protection attorney Anthony DiResta analyzes recent statements by FTC Bureau of Consumer Protection Director Chris Mufarrige and compares the agency's current enforcement outlook with past regulatory actions. According to Mr. DiResta, the FTC appears focused on targeting bad actors rather than the technology they are using, and on avoiding rules that could slow AI industry growth. That shift is evident in the commission's case against AI writing assistant Rytr, which alleged review-generation abuses; the FTC later reopened and set aside its order for lack of evidence of actual consumer harm. The outcome aligns with the White House AI Action Plan's emphasis on avoiding regulatory overreach. Overall, Mr. DiResta concludes, AI use that misleads consumers or violates existing laws will still draw federal scrutiny, but the FTC is signaling a more supportive posture toward technological innovation.
Anthony DiResta: Welcome to another podcast of "Clearly Conspicuous." As we've noted in previous sessions, our goal in these podcasts is to make you succeed in this current environment, make you aware of what's going on and give you practical tips for success. It's a privilege to be with you today.
Today we discuss a timely policy issue, the Federal Trade Commission's current posture toward regulating artificial intelligence. We'll discuss recent comments by the director of the Bureau of Consumer Protection, how the FTC has engaged with AI issues to date and what this means for innovators, consumers and the law. This episode will cut through the noise to give you clarity on where the FTC stands and where it might be headed.
Bureau of Consumer Protection Director: "No Appetite" for AI Rulemaking
So let's start with the director's comments. Earlier this year, the FTC's director of the Bureau of Consumer Protection, Chris Mufarrige, made headlines by stating that the agency currently has "no appetite for anything AI-related" in terms of new rulemaking. Those comments came at an industry event focused on privacy and enforcement strategies. What he meant was straightforward. While the FTC retains authority to police deceptive, unfair or harmful conduct involving AI, creating a broad, standalone AI rule through the rulemaking process is not on the near-term agenda. These remarks are consistent with his regulatory stance. Mufarrige has emphasized that the FTC should not create "bureaucratic red tape" that might stifle the growth of the AI industry.
So first, he is targeting conduct, not technology. The agency's focus has moved from judging technologies based on their potential for misuse to targeting specific bad actors who use AI for fraud, scams or false earnings claims. Then there's ordered liberty. Mufarrige famously stated that "condemning a technology or service, simply because it potentially could be used in a problematic manner, is inconsistent with the law and ordered liberty." Then there's avoiding overreach. He has criticized previous "top-down" regulatory approaches as overreach, arguing that the FTC should "stay in its lane" by enforcing laws as directed by Congress, rather than acting as a "roving legislature."
Thus, this sentiment reflects a broader shift in emphasis: FTC leadership appears cautious about broad, proactive AI regulation, opting instead for targeted, case-by-case enforcement grounded in established consumer protection laws rather than novel, AI-specific rules.
What the FTC Has Done on AI
So let's talk about what the FTC has done on AI. Although general AI-specific rulemaking isn't a priority, the FTC has actively engaged with AI issues through enforcement actions and policy guidance that leverage its existing authority.
Notable past and recent actions include the following:
- Enforcement based on deception and unfairness: The FTC uses Section 5 of the FTC Act to pursue companies that misrepresent capabilities or harm consumers through AI-powered services.
- AI-related privacy and data practices: The FTC has taken actions addressing privacy harms tied to AI data use, such as alleged misuse of consumer voice and video data in training algorithms.
- Voice cloning and fraud initiatives: Recognizing AI-enabled voice cloning as a fraud vector, the FTC's Voice Cloning Challenge encouraged development of safeguards against AI-enabled deception.
- Operation AI Comply and the Rytr correction: In a high-profile case, the FTC originally brought an action against an AI writing assistant, Rytr (R-Y-T-R), based on alleged review-generation abuses. Under new policy direction aligned with the White House AI Action Plan, the FTC reopened and set aside its prior order, signaling a shift toward requiring actual evidence of consumer harm rather than speculative risk.
This pattern suggests the FTC is pragmatic and not doctrinaire in applying its existing statutory powers to AI-related conduct.
Shifting Enforcement Standards
So what we have here, folks, is shifting enforcement standards. A key doctrinal shift the FTC appears to be embracing is the distinction between theoretical risks and actual consumer harm. Rather than penalizing AI tools merely because they could be misused, the agency is focusing on cases where there's concrete evidence that the conduct is causing deception or injury.
This emphasis on demonstrable harm reflects longstanding FTC enforcement principles. The agency's core mission is to protect consumers from unfair or deceptive practices, regardless of the specific technology involved. AI, in this view, is governed through traditional legal frameworks, not sui generis rules.
At the same time, the FTC has made clear it will hold companies accountable when AI is used to mislead consumers or violate existing laws. So while broad AI regulation is not a priority, AI-related enforcement will continue under established authorities.
Key Takeaways
So, folks, what does this mean going forward? For innovators and legal practitioners, several implications stand out.
- First, compliance with existing consumer protection laws remains essential, even when a product uses AI.
- Second, speculative, abstract harms are less likely to drive enforcement than real-world deception or injury.
- Third, businesses should be particularly careful about statements regarding AI capabilities. Misleading claims remain fertile ground for FTC action.
- And fourth, the FTC's posture reflects a broader regulatory philosophy. Protect competition, protect consumers, but avoid stifling innovation with overly broad regulations.
So that concludes our look at the FTC's current stance on AI: a measured, evidence-based approach rooted in traditional consumer protection law and a practical recognition of the value of innovation. Thank you for joining. Stay tuned for further programs as we identify and address the key issues and developments and provide strategies for success. As always, I wish you continued success and a meaningful day. Thank you.