Topic: Regulations - Digital Governance
The California Assembly Bill 2930 (AB2930) aims to regulate the use of automated decision tools powered by artificial intelligence. The bill establishes a framework for transparency, accountability, and fairness, requiring impact assessments, notifications, and accommodations for individuals subject to consequential decisions. It places responsibilities on deployers and developers, mandates governance programs, prohibits algorithmic discrimination, and provides enforcement mechanisms. AB2930 seeks to ensure the responsible and ethical use of automated decision tools while protecting individuals from potential algorithmic discrimination.
Introduced by Assembly Member Bauer-Kahan on February 15, 2024, AB2930 is an important step towards regulating automated decision tools powered by artificial intelligence. The bill recognizes the growing influence of these tools in many aspects of daily life and seeks to establish a framework that promotes transparency, accountability, and fairness while protecting individuals from algorithmic discrimination.
At the core of AB2930 lies the requirement for deployers and developers of automated decision tools to conduct annual impact assessments. These assessments evaluate the tool's purpose, intended benefits, uses, outputs, data collected, potential adverse impacts, and safeguards against algorithmic discrimination. By requiring that these assessments be submitted to the Civil Rights Department upon request, the bill builds oversight and accountability into both the deployment and the development of these tools.
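To make the required contents concrete, an impact assessment can be pictured as a structured record covering each element the bill names. The sketch below is only an illustration of that structure; the class name, field names, and the renewal check are assumptions, not language from the bill:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """Illustrative record mirroring the elements AB2930 asks an impact
    assessment to cover; all names here are hypothetical."""
    tool_name: str
    assessment_date: date
    purpose: str                           # what the tool is for
    intended_benefits: list[str]           # expected benefits of use
    uses: list[str]                        # contexts in which it is deployed
    outputs: list[str]                     # what the tool produces
    data_collected: list[str]              # categories of data processed
    adverse_impacts: list[str]             # reasonably foreseeable harms
    discrimination_safeguards: list[str]   # mitigations against algorithmic discrimination

    def is_due_for_renewal(self, today: date) -> bool:
        # The bill requires assessments annually; flag records older than a year.
        return (today - self.assessment_date).days >= 365
```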
One of the key provisions of the bill is the obligation for deployers to notify individuals who are subject to consequential decisions made by automated decision tools. This notification must include the tool's purpose and a plain-language description, giving individuals knowledge of how these tools are used to make decisions that significantly affect their lives. The bill goes a step further: when a consequential decision is made solely on the basis of an automated decision tool's output, deployers must honor an individual's request for an alternative selection process or accommodation, if technically feasible. This provision gives individuals the right to opt out of automated decision-making and to seek human intervention when necessary.
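A rough way to picture this notify-then-accommodate flow is sketched below. Everything here is an assumption made for illustration; the statute specifies the obligations, not any particular data shape or routing logic:

```python
from dataclasses import dataclass

@dataclass
class ConsequentialDecision:
    subject_id: str
    tool_purpose: str            # purpose of the automated decision tool
    plain_language_summary: str  # plain-language description of the tool
    solely_automated: bool       # True if the decision rests solely on the tool's output

def build_notice(decision: ConsequentialDecision) -> str:
    """Notice content AB2930 requires: the tool's purpose plus a
    plain-language description."""
    return (
        f"An automated decision tool will be used for: {decision.tool_purpose}.\n"
        f"How it works: {decision.plain_language_summary}"
    )

def handle_opt_out(decision: ConsequentialDecision, alternative_feasible: bool) -> str:
    """Route a request for an alternative selection process or accommodation."""
    if decision.solely_automated and alternative_feasible:
        return "route_to_human_review"   # accommodate the request
    return "document_infeasibility"      # record why no alternative was offered
```

The feasibility flag matters because the bill conditions the accommodation on technical feasibility; a real system would need to document that determination rather than simply return a string.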
The bill also places responsibilities on developers of automated decision tools. They must provide deployers with a statement regarding the intended uses of the tool, its known limitations, data used for programming or training, and evaluation methods. This requirement fosters transparency and ensures that deployers are well-informed about the capabilities and potential risks associated with the tools they are using.
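The developer's statement likewise has a natural structure, since the bill enumerates what it must disclose. A minimal sketch, with field names that are assumptions rather than statutory terms:

```python
from dataclasses import dataclass

@dataclass
class DeveloperStatement:
    """Illustrative shape of the statement a developer provides to a deployer."""
    intended_uses: list[str]       # uses the tool was designed and evaluated for
    known_limitations: list[str]   # conditions under which the tool may underperform
    training_data: list[str]       # categories of data used for programming or training
    evaluation_methods: list[str]  # how performance and fairness were measured
```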
To manage the risks of algorithmic discrimination, the bill mandates that deployers and developers establish and maintain a governance program with administrative and technical safeguards. This program should be designed to identify, measure, and mitigate the reasonably foreseeable risks of algorithmic discrimination associated with the use of automated decision tools. The governance program must be appropriate to the tool's intended use, the deployer's or developer's role, the size and complexity of the organization, and the technical feasibility and cost of available tools and assessments.
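One way to picture the identify-measure-mitigate cycle such a program implies is a small risk register. This sketch is an assumption about structure, not anything the bill prescribes; in particular, scoring risks by likelihood times severity is just one plausible measure, and the bill leaves the choice of method to the governance program:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Risk:
    description: str
    likelihood: float             # 0..1, estimated probability of occurrence
    severity: float               # 0..1, estimated impact if it occurs
    mitigation: Optional[str] = None  # documented safeguard, if any

def prioritize(risks: list[Risk]) -> list[Risk]:
    """Order reasonably foreseeable risks by a simple likelihood x severity score."""
    return sorted(risks, key=lambda r: r.likelihood * r.severity, reverse=True)

def unmitigated(risks: list[Risk]) -> list[Risk]:
    """Surface risks that still lack a documented safeguard."""
    return [r for r in risks if r.mitigation is None]
```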
Importantly, AB2930 explicitly prohibits deployers from using automated decision tools that result in algorithmic discrimination. This provision is a clear statement against the use of these tools in a manner that perpetuates unjustified differential treatment or disparate impacts based on protected characteristics such as sex, race, color, ethnicity, religion, age, national origin, disability, and others.
To enforce the provisions of the bill, the Civil Rights Department is empowered to investigate reports of algorithmic discrimination or other violations. Additionally, public attorneys, including the Attorney General, can bring civil actions against deployers or developers for violations, with potential civil penalties of $25,000 per violation involving algorithmic discrimination. This enforcement mechanism ensures that there are consequences for non-compliance and serves as a deterrent against the misuse of automated decision tools.
The bill also addresses the balance between transparency and the protection of trade secrets. When complying with public records requests, the Civil Rights Department or any entity with which an impact assessment was shared must redact any trade secrets from the impact assessment. This provision ensures that sensitive business information is protected while still allowing for public scrutiny and accountability.
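The redaction step itself is mechanically simple once the trade-secret material has been identified. The helper below is purely illustrative; which terms count as trade secrets is a legal determination the bill leaves to the parties, not something code can decide:

```python
import re

def redact_trade_secrets(assessment_text: str, secret_terms: list[str]) -> str:
    """Replace identified trade-secret terms before an impact assessment
    is released in response to a public records request."""
    for term in secret_terms:
        assessment_text = re.sub(
            re.escape(term), "[REDACTED]", assessment_text, flags=re.IGNORECASE
        )
    return assessment_text
```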