Consider the philosophical concept known as infinite regress. Every belief or proposition requires justification, but each justification itself requires further justification, leading to an endless chain.
The idea is usually illustrated — somewhat humorously and somewhat alarmingly — with gargantuan turtles. We imagine that the turtle holding our world on its back must itself be supported by another, larger turtle. And so on. As the saying goes, it is turtles all the way down.
Infinite regress challenges the foundation of knowledge. If every belief needs another belief to justify it, we may never reach solid ground or a starting point. This vicious regress is a fundamental challenge in epistemology.
There are similar challenges in artificial intelligence governance. For one, the inherently opaque nature of many AI systems makes it incredibly difficult to determine, with any certainty, the origin of an output or the factors that contributed to it. Does an AI system include personal data when it is made up of probabilities all the way down?
AI assurance also presents a recursive challenge: If we rely on audits and external assessments to solve for issues like bias, do we also audit the auditors? Should we audit the fact that audits were completed? Should we audit the audit of the audits?
The need for external validation at multiple stages of AI development and deployment is reflected in a new bill introduced by U.S. Sens. Ed Markey, D-Mass., and Mazie Hirono, D-Hawaii, who drafted the bill with the support of the Lawyers’ Committee for Civil Rights Under Law and the Leadership Conference on Civil and Human Rights.
In fortuitous timing, Lawyers’ Committee President and Executive Director Damon Hewitt received an award this week at the Electronic Privacy Information Center's Champions of Freedom Awards. In his acceptance speech, Hewitt echoed a popular adage among civil rights advocates, "anything about us without us can never be for us."
The draft AI Civil Rights Act includes at least six types of mandatory assessments, many of which take the form of third-party audits required to be submitted within 30 days to the U.S. Federal Trade Commission. The bill also has a significantly broader scope, stronger data rights and enhanced transparency obligations compared to similar legislation such as the Colorado AI Act.
Preliminary evaluations for developers and deployers
First, developers and deployers of AI systems that affect "consequential actions" must each undertake a "preliminary evaluation of the plausibility that any expected use of the covered algorithm may result in a harm."
To understand this and later requirements, we must review the types of actions and harms within the scope of the bill.
Broader in every way than similar legislative efforts at the state level, the bill appears to cover both commercial and public sector actions and decisions facilitated by AI systems. Covered consequential actions would include those that have a material effect on employment, education, housing, utilities, health care, financial services, insurance, criminal justice, elections, government benefits and services, public accommodations, and anything with a "comparable legal, material, or similarly significant effect on an individual’s life," as determined by FTC rulemaking.
Covered harms from a consequential action are also broader than in similar legislation such as the Colorado AI Act. In their preliminary evaluations, developers and deployers would look for plausible evidence of any "non de-minimis adverse effect on an individual or group of individuals:
- (A) on the basis of a protected characteristic;
- (B) that involves the use of force, coercion, harassment, intimidation, or detention; or
- (C) that involves the infringement of a right protected under the Constitution of the United States."
Full pre-deployment evaluations for developers and deployers
After the preliminary evaluation, if the developer or deployer uncovers a plausible harm from the expected use of the AI system, it must engage an independent auditor to conduct a pre-deployment evaluation.
The bill requires auditors to be nonconflicted third parties who can review systems to assess the design, training and mitigation measures that went into their development and deployment.
Deployer annual impact assessments
Whether or not harms are expected at the outset, deployers would also be required to undertake preliminary impact assessments each year to identify any harm that resulted from the use of the covered algorithm. If none is found, the "no harm" finding would be recorded along with information about the ongoing uses of the system and reported to the FTC.
If a harm is uncovered, the deployer must again engage an independent auditor to conduct a full impact assessment. This next layer of audit is focused on issues like disparate impact, provenance, model validity and mitigation measures. Once the auditor reports back to the deployer, the deployer would have 30 days to submit a summary of the report to the developer.
Developer annual review of assessments
Finally, developers would be required to collect all deployer impact assessments as part of an annual review of the use and impacts of their covered AI systems. This would provide an opportunity for the developer to review deployers’ compliance with contracts, which are also prescribed in the AI Civil Rights Act. Additionally, the annual review would require developers to consider possible modifications to ensure the ongoing safety and effectiveness of the system.
An opening volley
With its broad and multi-layered approach to AI governance, this bill is likely meant to kick-start the policy conversation about appropriate mitigations to reduce harmful bias and related harms from AI systems. After narrower automated decision-making bills were spurned this year by Democratic governors in California, Colorado and Connecticut, there is no doubt this legislation will have a long path to consideration as a federal model.
The new bill does not directly grapple with the challenge of auditing the auditors. But other U.S. policy efforts have acknowledged the need for more robust and consistent standards for third-party assessments. For example, the VET AI Act would require the U.S. National Institute of Standards and Technology to develop detailed specifications, guidelines, and recommendations for third-party evaluators to provide independent external assurance and verification of how AI systems are developed and tested.
Policymakers can always look to philosophy for tools they can use to avoid vicious epistemic regresses.
For example, the idea of coherentism suggests that beliefs are justified by their coherence with a larger system of beliefs. If the system as a whole is coherent, individual beliefs are considered justified. Or, rather than rejecting the infinite chain of justifications, a separate theory called infinitism embraces it, arguing it’s not necessarily vicious so long as each step in the chain adds a degree of justification.
As the policy community continues to right-size our expectations for AI governance, we will hopefully seek structures with internal coherence, building on the iterative best practices that are emerging today. Each step toward accountability and transparency fortifies the foundations of trust in AI systems.
And as activists will continue to remind us, as we choose those steps, we must take care to include the perspectives of those who would be affected by the policies we embrace.
Here's what else I’m thinking about:
- California has a new generative AI law. Gov. Gavin Newsom, D-Calif., signed the California AI Transparency Act, Senate Bill 942. The law requires developers of large AI systems to include embedded "latent disclosures" and user-facing "manifest disclosures" in generative AI systems by 1 Jan. 2026. It also requires developers of such systems to provide AI detection tools to allow users to assess whether content was created by the platform's AI. Other recent AI legislation is still awaiting the governor's signature before the end of the month.
- A new model state privacy bill from civil society. Consumer Reports and EPIC jointly released a new draft bill called the State Data Privacy Act. The bill builds on the Connecticut Data Privacy Act and even includes a helpful redline of that existing law, with explanations of recommended changes. With endorsements from the Center for Democracy and Technology and Public Knowledge, the draft bill is a helpful preview of the goals these groups will embrace for their efforts to pass new comprehensive state privacy laws in 2025.
- A workshop on kids' attention. The U.S. Federal Trade Commission scheduled a public workshop for 25 Feb. 2025 on "The Attention Economy: Monopolizing Kids’ Time Online." The agency is looking for panelists.
- While we're on the topic of panelists. The IAPP Global Privacy Summit submission deadline is this Sunday, 29 Sept. Please submit your panel ideas for privacy’s largest annual gathering.
Please send feedback, updates and turtles to cobun@iapp.org.
Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director in Washington, D.C., for the IAPP.