California’s Automated-Decision System Regulation
How to comply with yet another AI-specific rule.
California Civil Rights Department regulatory action 2025-0515-01 on Automated-Decision Systems in employment goes into effect October 1, 2025.
The key section is § 11009 (f):
It is unlawful for an Employer or Other Covered Entity to use an Automated-Decision System or selection criteria (including a qualification standard, employment test, or Proxy) that discriminates against an Applicant or Employee or a class of Applicants or Employees on a basis protected by the Act, subject to any available defense.
Most importantly:
Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
But what is "anti-bias testing"?
Not defined.
But StackAware has some suggestions, which we document in an actionable compliance procedure (below).
The regulation also does not mention ISO 42001 or similar standards, but external certification of an AI Management System (with relevant Annex A controls) could address the criteria of:
Quality
Efficacy
Response
And the ADS regulation could be an “external issue” which Clause 4 of the ISO 42001 standard requires you to consider.
Notes on the procedure:
It assumes you are an “Employer” per the regulation (which has a complex definition but basically covers all businesses with five or more employees if any of them are in California).
It is not a comprehensive human resources policy that covers all anti-discrimination requirements, nor is it legal advice.
Capitalized terms are defined in the California regulation.
Purpose
Ensure the responsible and ethical use of Automated-Decision Systems as well as compliance with California Civil Rights Department regulatory action 2025-0515-01.
Scope
All information systems[1] COMPANY_NAME develops[2] or uses that impact (or could in the future impact) employees or legal equivalents located in California.[3]
Requirements
Data Owners must:
NOT use Automated-Decision Systems to discriminate against an employee or legal equivalent on the basis of (or proxy for) any legally protected class[4], or attempt to identify an employee or legal equivalent on such a basis (§ 11009 (f)).
Annually, conduct bias mitigation[5] of any Automated-Decision Systems, using one of the following methods or an alternative approved by the General Counsel (see the sketch after this list):
A "Bias Audit" as defined by NYC Local Law 1446
A/B testing of outputs from the ADS
Review of ADS training data
AI Red-teaming
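To make this concrete, here is a minimal sketch of what one such effort could look like: an adverse-impact calculation over ADS selection outcomes, in the spirit of the Local Law 144 impact ratio. The group labels, sample data, and four-fifths threshold are illustrative assumptions, not requirements of the California regulation.

```python
# Minimal sketch of an adverse-impact calculation for ADS outputs.
# Group labels, sample data, and the 4/5 threshold are illustrative
# assumptions, not requirements of the California regulation.
from collections import defaultdict

FOUR_FIFTHS = 0.8  # classic EEOC "four-fifths rule" heuristic

def impact_ratios(decisions):
    """decisions: iterable of (group, selected) pairs, where selected
    is True if the ADS advanced the Applicant."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        advanced[group] += int(selected)
    rates = {g: advanced[g] / totals[g] for g in totals}
    top = max(rates.values())
    # Impact ratio: each group's selection rate vs. the highest rate.
    return {g: r / top for g, r in rates.items()}

# Hypothetical sample data for illustration only.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]

for group, ratio in impact_ratios(sample).items():
    status = "REVIEW" if ratio < FOUR_FIFTHS else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{status}]")
```

A group falling below the four-fifths threshold would not automatically mean a violation, but documenting the result and your response speaks directly to the “results” and “response” evidence the regulation describes.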
Retain, for 4 years from the date of the making of the record or the date of the personnel action involved, whichever is later, all of the following (§ 11013 (c)) (see the retention-window sketch after this list):
applications
personnel records
membership records
employment referral records
selection criteria
Automated-Decision System Data
California Employer Information Reports (CEIR)
Applicant Identification Records
other records created or received by the Employer or Other Covered Entity dealing with any Employment Practice and affecting any Employment Benefit of any Applicant or Employee.
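A minimal sketch of how the “whichever is later” retention window could be computed; the 4-year period comes from the regulation, but the function and field names are assumptions for illustration.

```python
# Sketch of the § 11013 (c) retention window: 4 years from the later
# of record creation and the personnel action involved. Function and
# field names are illustrative, not from the regulation.
from datetime import date

RETENTION_YEARS = 4

def retention_deadline(record_created: date,
                       personnel_action: date | None = None) -> date:
    """Earliest date the record may be destroyed."""
    anchor = record_created
    if personnel_action and personnel_action > anchor:
        anchor = personnel_action
    # Naive year arithmetic; a Feb 29 anchor would need special handling.
    return anchor.replace(year=anchor.year + RETENTION_YEARS)

print(retention_deadline(date(2025, 10, 1), date(2026, 3, 15)))  # 2030-03-15
```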
Keep records as to the sex, race, or national origin of any individual accepted for employment separate from the employee's main personnel file or other records available to those responsible for personnel decisions (§ 11013 (c)(2)).
If an online application technology[7] limits, screens out, ranks, or prioritizes Applicants based on:
schedule or time availability (§ 11016 (c)(3))
skill, dexterity, reaction time, and/or other abilities or characteristics (§ 11016 (c)(5))
tone of voice, facial expressions, or other physical characteristics or behavior (§ 11016 (d)(1))
then:
Document and retain for 4 years the business necessity and job-related nature of the restriction, screen, ranking, or preference (see the record sketch below); and
Ensure the online application technology offers a method for the Applicant to request an accommodation.
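One possible way to structure the documentation this requirement calls for is sketched below; the record type and its fields are assumptions about a reasonable format, not prescribed by the regulation.

```python
# Illustrative record for documenting the business necessity of an
# online application screen under § 11016. The dataclass and its
# fields are assumptions, not prescribed by the regulation.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ScreeningCriterionRecord:
    criterion: str              # e.g., "must be available weekends"
    regulation_section: str     # e.g., "§ 11016 (c)(3)"
    business_necessity: str     # written justification
    job_related_rationale: str  # why the criterion relates to the role
    accommodation_method: str   # how an Applicant can request one
    documented_on: date = field(default_factory=date.today)

record = ScreeningCriterionRecord(
    criterion="Weekend shift availability",
    regulation_section="§ 11016 (c)(3)",
    business_necessity="Location operates seven days per week.",
    job_related_rationale="Role requires staffing weekend shifts.",
    accommodation_method="HR accommodation request form linked in the application flow.",
)
print(record)
```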
1. “Information system” is a StackAware term covering “any digital interface with which one can interact.” This is intentionally broad, to avoid missing an ADS in scope.
2. The regulation creates a new definition of “Agent,” which is:
any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity, which may include applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an automated decision system. An agent of an employer is also an “employer” for purposes of the Act.
This means ADS developers or operators (if deployed as-a-Service) could also be subject to the California regulation.
3. “Employees or legal equivalents” is another StackAware term encompassing a broad array of definitions for potentially impacted people across all relevant jurisdictions. It covers Employees and Applicants as defined in this California regulation. This makes the procedure and definition flexible enough to cover multiple regulations.
4. Similarly, “legally protected class” is a catch-all to cover all relevant categorizations protected across all jurisdictions, not just California.
5. “Bias mitigation” is yet another StackAware term that encompasses “anti-bias testing” but also substantially more.
6. NYC Local Law 144 and its supporting regulation clearly define “Bias Audit.” I speculate (although don’t have any evidence) that California would look favorably on anti-bias efforts that mirror requirements from another blue jurisdiction.
7. “Online application technology” is NOT defined in the regulation.