Why AI Bias Is a Specific Legal Exposure
Title VII of the Civil Rights Act applies to AI interviewer platforms as it applies to any other selection tool. The EEOC's 2023 technical assistance document makes the employer liability position explicit: using an AI hiring tool does not reduce the employer's responsibility for adverse impact outcomes. The employer is the liable party, even where the AI tool is entirely vendor-built.
The 4/5ths rule (80% rule) applies: if a selection procedure yields a selection rate for a protected group that is less than 80% of the rate for the highest-selected group, the disparity is evidence of adverse impact. This rule applies to AI interviewers as much as to any written test or structured interview.
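The 4/5ths computation itself is simple arithmetic, which is why regulators expect employers to run it routinely. A minimal sketch (the function name and the selection rates are illustrative, not from any real audit):

```python
def adverse_impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-selected group's rate.

    A ratio below 0.80 is evidence of adverse impact under the 4/5ths rule.
    """
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Illustrative numbers only: 30% of group A advanced, 21% of group B.
ratios = adverse_impact_ratios({"group_a": 0.30, "group_b": 0.21})
flagged = {g: r for g, r in ratios.items() if r < 0.80}  # group B at 0.70
```

Note that the rule is evidence of adverse impact, not proof of liability; a flagged ratio triggers scrutiny and a business-necessity defense, not automatic violation.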
Key EEOC 2023 position on AI tools
The EEOC's position: employers are responsible for ensuring that their AI hiring tools do not create unlawful disparate impact, regardless of whether the tool was developed in-house or by a third party, and this responsibility cannot be ceded to a vendor. Source: EEOC technical assistance, "Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII" (May 2023).
The Four Types of Bias in AI Interviewers
Training-data bias
The model was trained on historical successful-hire data. If those hires skewed toward a particular demographic group (common in pre-2020 enterprise hiring), the model encodes the historical pattern. The model learns to prefer candidates who resemble past successful hires, which may reflect historical biases in hiring rather than genuine performance predictors.
Proxy-variable bias
The model uses variables that correlate with protected characteristics without being explicitly protected themselves. ZIP code, name, speech accent, and vocabulary patterns can act as demographic proxies. Removing the explicit protected variable (race, gender, age) does not remove the proxy if the model learns from correlated features.
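To make the proxy mechanism concrete, here is a toy sketch (all data synthetic, ZIP codes and group labels invented): a scorer that never receives the protected attribute still produces divergent selection rates, because the feature it does use correlates with group membership.

```python
# Synthetic candidate pool. The scorer only ever sees the ZIP code, but in
# this toy data ZIP correlates strongly with demographic group.
candidates = (
    [{"group": "a", "zip": "10001"}] * 80 + [{"group": "a", "zip": "60601"}] * 20 +
    [{"group": "b", "zip": "10001"}] * 20 + [{"group": "b", "zip": "60601"}] * 80
)

def score(candidate: dict) -> int:
    # A "blind" model: no protected attribute in the input, yet it has
    # learned to weight a ZIP-derived feature.
    return 1 if candidate["zip"] == "10001" else 0

def selection_rate(group: str) -> float:
    pool = [c for c in candidates if c["group"] == group]
    return sum(score(c) for c in pool) / len(pool)

rate_a, rate_b = selection_rate("a"), selection_rate("b")  # diverge: 0.8 vs 0.2
```

This is why "we removed race and gender from the model" is not, by itself, a meaningful compliance claim from a vendor.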
Facial-analysis bias (historical)
HireVue and others used facial-expression analysis to score candidates' emotional states during video interviews. Research found that facial-analysis models trained on majority-group data performed less accurately on minority-group faces. HireVue discontinued facial analysis for new customers in 2021; this is a historical issue but relevant for any vendor still using facial analysis.
Score-calibration bias
Different demographic groups may use different vocabularies, speech patterns, or communication styles that are equally valid for job performance but score differently under a model calibrated primarily on one group's patterns. This is the most technically subtle form of bias and the hardest for employer audits to detect without demographic data on candidate assessments.
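A toy illustration of why calibration bias is hard to see without per-group data (scores and cutoff are invented): two groups with equal job performance can have shifted score distributions under a model calibrated on one group's communication patterns, so a single cutoff yields very different pass rates.

```python
from statistics import mean

# Synthetic assessment scores: both groups perform the job equally well,
# but the scoring scale is calibrated on group A's speech patterns,
# shifting group B's scores downward.
scores = {
    "group_a": [72, 75, 78, 80, 83, 85, 88, 90],
    "group_b": [60, 63, 66, 68, 71, 73, 76, 78],
}
CUTOFF = 75  # one cutoff applied to both groups

pass_rates = {
    g: sum(s >= CUTOFF for s in vals) / len(vals) for g, vals in scores.items()
}
mean_gap = mean(scores["group_a"]) - mean(scores["group_b"])  # systematic shift
```

The aggregate pass rate can look healthy while one group's pass rate collapses, which is exactly what an audit without demographic breakdowns will miss.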
Documented Cases
Amazon internal AI recruiting tool (2014-2018)
Amazon built an AI resume-screening tool trained on historical Amazon hire data. Because Amazon had historically hired disproportionately male engineers, the model learned to penalize resumes that included the word "women's" (as in "women's chess club"). Amazon scrapped the tool in 2018 after internal audits revealed the pattern. Reported by Reuters (October 2018).
HireVue EPIC FTC complaint (2019)
The Electronic Privacy Information Center filed an FTC complaint alleging that HireVue's facial-expression-based scoring constituted unfair and deceptive practices and relied on pseudoscientific features (micro-expressions) to make employment decisions. HireVue commissioned an independent audit from O'Neil Risk Consulting & Algorithmic Auditing (ORCAA) and subsequently discontinued facial analysis for new customers in 2021.
iTutorGroup EEOC settlement (2023)
The EEOC reached a $365,000 settlement with iTutorGroup after the company's AI screening tool automatically rejected applications from women over 55 and men over 60. This is the first major EEOC enforcement action specifically targeting an algorithmic hiring tool for age discrimination. Primary citation: EEOC v. iTutorGroup, settlement announced August 2023.
Workday class action (2023, ongoing)
A class action (Mobley v. Workday) was filed in California federal court alleging that Workday's AI-powered hiring tools discriminate against Black candidates, older workers, and candidates with disabilities. The case was ongoing as of April 2026. It is significant because it targets an ATS+AI platform rather than a standalone AI interviewer, broadening the potential liability scope.
US State Law Landscape (2026)
As of April 2026, at least 10 US states have enacted or are actively considering legislation addressing AI in employment. The landscape is moving rapidly; verify current status with qualified employment counsel for your specific jurisdictions.
Illinois AIVIA (Artificial Intelligence Video Interview Act, 2020)
In force. Applies to all Illinois-based positions. Employers must: (1) notify candidates before using AI analysis of video interviews; (2) obtain consent; (3) explain what characteristics the AI evaluates; (4) destroy all copies of a video recording within 30 days of the candidate's deletion request. Applies regardless of whether the video interview is async or live.
NYC Local Law 144 / AEDT (effective July 5, 2023)
In force. The most significant US AI employment law in force. See the dedicated page at /nyc-aedt-compliance for full analysis. Summary: annual independent bias audit required, published summary required, 10-day candidate notice required. $500 first-violation penalty; $500-$1,500 per subsequent violation per day.
Colorado AI Act (SB 205, effective Feb 2026)
In force. Applies to "high-risk AI systems" including those used in employment decisions. Requires: developer disclosure to deployers, deployer disclosure to consumers, impact assessment, complaint and appeal process. Penalties enforced by the Colorado Attorney General.
California SB 1001 (2019)
In force (limited scope). Requires disclosure when a bot interacts with a person online in California if designed to mislead. Applies to some AI chatbot recruiting interfaces. AB 2930 (broader AI bill covering high-risk AI in employment) was pending as of April 2026; verify current status.
Maryland HB 1202 (2020)
In force. Requires the candidate's signed consent before using facial recognition technology in an employment interview. Applies to Maryland-based positions.
Texas, New York (state-level), others
Pending / under consideration. Multiple states have introduced AI employment bills as of April 2026. The regulatory landscape is moving faster than in previous years. Consult employment counsel for your specific states.
12-Point Buyer's Compliance Checklist
Bias audit published and recent (within 12 months of your go-live)
Bias-audit methodology transparent (auditor name, methodology, data scope)
Candidate notice and consent flows compliant per your jurisdictions
Data-retention and deletion terms match Illinois AIVIA and EU requirements
Vendor indemnification on bias claims (typically absent; note this and factor into risk)
Human oversight architecture documented: who reviews AI scores before decision?
ADA accommodations for disabled candidates documented in vendor contract
Data portability terms on contract exit (format and timeline)
Demographic monitoring capability: can you pull selection-rate data by protected class?
Candidate appeal process: can candidates challenge an AI interview outcome?
Jurisdiction coverage: does the vendor's compliance posture cover your operating jurisdictions?
Sub-processor list and location: where is candidate data processed and stored?
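The demographic-monitoring item above is the one most buyers under-specify. A sketch of what "can you pull selection-rate data by protected class?" means in practice (field names, stage names, and counts are all illustrative): per-stage counts by self-reported group, with automatic 4/5ths flagging.

```python
# Hypothetical funnel export: applicants and advancements per self-reported
# group at each hiring stage. All names and numbers are illustrative.
funnel = {
    "ai_interview": {
        "applied":  {"a": 200, "b": 150, "c": 50},
        "advanced": {"a": 90,  "b": 45,  "c": 20},
    },
}

def flag_adverse_impact(stage: dict, threshold: float = 0.80) -> dict[str, float]:
    """Return groups whose selection rate falls below `threshold` of the top group."""
    rates = {g: stage["advanced"][g] / stage["applied"][g] for g in stage["applied"]}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items() if r / top < threshold}

flags = flag_adverse_impact(funnel["ai_interview"])  # only group "b" is flagged
```

If a vendor cannot produce the raw counts this sketch consumes, you cannot run this check, and under the EEOC's position the monitoring gap is your exposure, not theirs.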
Disclaimer
This page contains general information about legal requirements as of April 2026. It is not legal advice. The AI hiring legal landscape is evolving rapidly. Consult qualified employment counsel for jurisdiction-specific advice before procuring or deploying any AI-driven hiring tool.