Podcast - Unlawful Discrimination by Artificial Intelligence
In this episode of his "Clearly Conspicuous" podcast series, "Unlawful Discrimination by Artificial Intelligence," consumer protection attorney Anthony DiResta explores the emerging world of artificial intelligence (AI). He focuses on a joint statement issued by the heads of several key federal agencies, including the Federal Trade Commission (FTC), that covers the social, economic, political and legal implications of AI and explores bias and discrimination in its use by businesses. Mr. DiResta breaks down several aspects of the statement and provides key takeaways for companies looking to incorporate AI into their platforms.
Good day and welcome to another podcast of Clearly Conspicuous. As we've noted in previous sessions, our goal in these podcasts is to help you succeed in this current environment that's very aggressive and progressive, to make you aware of what's going on with the federal and state agencies, and to give you practical tips for success. As always, it's a privilege to be with you today.
The Implications of AI Socially, Economically, Politically and Legally
Today, we talk about artificial intelligence, or AI. Artificial intelligence technology not only exists, but it has enormous implications socially, economically, politically and, yes, legally. Congress just held hearings about it, and we could spend hours, if not days and weeks, talking about the policy implications of AI. But today, let's just focus on the topic of consumer protection. So the topic is what the federal government is doing about discrimination and bias in AI. Just a few weeks ago, the director of the CFPB, the assistant attorney general of the Civil Rights Division in the Department of Justice, the chair of the EEOC and the chair of the FTC issued a joint statement on enforcement efforts against discrimination and bias in automated systems. The statement discusses the potential harms of AI, the government's ability to enforce the laws governing AI and the issue of potential unlawful discrimination, and it offers useful information on many, many levels. I'd like to read some portions of the statement to you today. It's quite educational.
Is AI Responsible Innovation?
So let's begin with the opening of the statement.
America's commitment to the core principles of fairness, equality and justice is deeply embedded in the federal laws that our agencies enforce to protect civil rights, fair competition, consumer protection, and equal opportunity. These established laws have long served to protect individuals, even as our society has navigated emerging technologies. Responsible innovation is not incompatible with these laws. Indeed, innovation and adherence to the law can complement each other and bring tangible benefits to people in a fair and competitive manner, such as increased access to opportunities as well as better products and services at lower costs. Today, the use of automated systems, including those sometimes marketed as artificial intelligence or AI, is becoming increasingly common in our daily lives. We use the term automated systems broadly here to mean software and algorithmic processes used to make decisions. Private and public entities use these systems to make critical decisions that impact individuals' rights and opportunities, including fair and equal access to a job, housing, credit opportunities and other goods and services. These automated systems often are advertised as providing insights and breakthroughs, increasing efficiencies and cost savings and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination and produce other harmful outcomes.
So there you have it from these federal agencies, outlining, big picture, the issue of AI, the use of the technology and the potential harms it can cause. And importantly, they set the scope for what they can do legally.
Federal Agency Authority to Address and Enforce AI Regulations
The statement then goes on to discuss the agencies' enforcement authorities and what they can do. Specifically, the CFPB, the Consumer Financial Protection Bureau, recently published a circular confirming that the federal consumer financial laws can apply to this kind of technology. The Department of Justice Civil Rights Division enforces federal statutes prohibiting discrimination and recently filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services. The EEOC enforces federal laws that make it illegal for an employer, union or employment agency to discriminate against an applicant or an employee because of a person's race, color, religion, sex, gender identity or sexual orientation, as well as national origin, age (40 or older), disability or genetic information, including family history. The FTC, as we know, deals with unfair and deceptive business practices, and it has issued a report evaluating the use and impact of AI in combating online harms identified by Congress. The FTC's report outlines significant concerns that AI tools can be inaccurate, biased and discriminatory by design and can incentivize reliance on increasingly invasive forms of commercial surveillance. The FTC has also warned market participants that they may violate the FTC Act when they use automated tools. And we'll talk in our next podcast about some of these issues raised by the FTC.
Data Sets, System Access, and Design and Use
Finally, the statement by these agencies outlines some buckets of concern. First, data and data sets, about which the agencies say the following:
Automated system outcomes can be skewed by unrepresentative or imbalanced data sets, data sets that incorporate historical bias, or data sets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.
The statement also talks about model opacity and access, saying the following:
Many automated systems are black boxes whose internal workings are not clear to most people and, in some cases, even the developer of the tool. This lack of transparency also makes it all the more difficult for developers, businesses and individuals to know whether an automated system is in fact fair.
So finally, the statement says the following:
Design and use. Developers do not always understand or account for the contexts in which private or public entities will use their automated systems. Developers may design a system on the basis of flawed assumptions about its users' relevant context, or the underlying practices or procedures that it may replace.
So these agencies are basically saying in this statement that while they want to promote responsible innovation, we have to be very aware of the laws, at least the federal laws, that govern AI.
The Key Takeaway
So here is the key takeaway. Again, artificial intelligence technology not only exists, but it has enormous implications socially, economically, politically and legally. There are calls for immediate regulation, and the government is not being silent about AI's harms and consumer implications. Therefore, we must pay close attention to what the regulators and legislators are saying about this emerging technology. So please stay tuned to further programs as we identify and address the key issues and developments and provide strategies for success and awareness. I wish you continued success and a meaningful day. Thank you.