
Mobley v. Workday and the new era of AI hiring liability

The Mobley v. Workday class action has turned the risk of AI-driven hiring discrimination from a theoretical concern into a concrete legal exposure for both software providers and employers. In Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal., filed Feb. 21, 2023), the plaintiff alleges that Workday’s artificial intelligence–based screening tools discriminated against applicants on the basis of age, race, and disability, initially framing Workday as an “employment agency” under federal law. In a July 2024 order on Workday’s motion to dismiss the amended complaint, Judge Rita F. Lin allowed the core disparate impact claims to proceed on a related theory: that an AI hiring platform can be held liable as an agent of the employers that use it, directly linking automated screening tools to the Age Discrimination in Employment Act (ADEA), Title VII, and the Americans with Disabilities Act. The court reasoned that because Workday’s customers allegedly delegate traditional hiring functions, such as screening and rejecting applicants, to its tools, the agency allegations were sufficient at the pleading stage to move forward.

This framing matters because it opens the door for both the AI vendor and the deploying employers to face responsibility for discriminatory hiring decisions generated by automated tools. In the Workday lawsuit, the court signaled that a platform offering algorithmic screening, background-check-style filters, and automated ranking can fall under the same legal standards as traditional labor-market intermediaries. The order reasoned that Workday’s tools allegedly perform hiring functions that employers have traditionally exercised themselves, which is enough at the motion-to-dismiss stage to treat the vendor as the employers’ agent rather than a passive software supplier. For CHROs, that shifts AI from a neutral third-party technology to a regulated actor in the employment relationship, with potential liability that can extend to every job requisition, every class of applicants, and every automated rejection.

Legal teams now read the Mobley case as a template for future class action claims that target both AI platforms and employers for disparate impact in hiring outcomes. When a vendor like Workday or SAP SuccessFactors is treated as an agent of its employer customers, plaintiffs can argue that biased tools create systemic discrimination against protected classes, which raises the risk of class litigation under the ADEA and other equal employment protections. A hypothetical example illustrates the exposure: if an automated screener rejects 35% of applicants under age 40 but 70% of applicants over age 55 for the same role, the older group’s effective selection rate (30%) is less than half the younger group’s (65%), well below the four-fifths benchmark that the EEOC’s Uniform Guidelines use as a rule of thumb for adverse impact; the case only grows stronger if the pattern repeats across multiple requisitions. The Equal Employment Opportunity Commission and similar civil rights enforcement bodies are watching these developments closely: the EEOC filed an amicus brief supporting the plaintiff’s theory in Mobley, and its technical guidance on algorithmic decision-making and its strategic enforcement plan both emphasize automated hiring. That enforcement posture will shape how liability for AI screening tools evolves across industries and geographies.
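To make that arithmetic auditable, the short Python sketch below applies the four-fifths rule to the hypothetical rejection rates above. The age bands and rates are this article’s illustration, not data from the litigation, and a real analysis would of course use actual applicant-flow records.

```python
# Adverse impact check using the article's hypothetical rejection rates.
# The four-fifths rule (EEOC Uniform Guidelines) flags a protected group's
# selection rate below 80% of the highest group's rate as evidence of
# potential adverse impact. All figures here are illustrative.

def selection_rate(rejection_rate: float) -> float:
    """Convert a rejection rate into a selection (pass-through) rate."""
    return 1.0 - rejection_rate

under_40 = selection_rate(0.35)   # 65% of applicants under 40 advance
over_55 = selection_rate(0.70)    # 30% of applicants over 55 advance

impact_ratio = over_55 / under_40
print(f"Selection rate, under 40: {under_40:.0%}")
print(f"Selection rate, over 55:  {over_55:.0%}")
print(f"Adverse impact ratio:     {impact_ratio:.2f}")

if impact_ratio < 0.8:
    print("Below the four-fifths threshold: potential adverse impact.")
```

On these numbers the ratio comes out to roughly 0.46, far short of the 0.8 benchmark, which is why repeated patterns of this kind across requisitions make attractive targets for class claims.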

Shared responsibility between vendors and employers in AI hiring

Mobley makes it clear that responsibility for AI-based hiring decisions is not just a vendor problem or an employer problem, but a shared obligation across the full recruitment lifecycle. Employers that rely on Workday, Oracle HCM, BambooHR, Personio, or other talent platforms for faster hiring still retain responsibility for compliance with federal, state, and international discrimination laws, even when a third-party algorithm performs the initial screening. At the same time, the court’s reasoning means a vendor that designs and markets automated decision tools for hiring can be sued directly, which changes how both sides must think about legal risk and about where the “human in the loop” actually sits in recruitment.

For HR leaders, this dual liability reshapes how to evaluate AI tools that promise faster time-to-fill and lower cost per hire in sales, engineering, or specialized AI recruiter roles. A Workday deployment that automates résumé screening, identity and background verifications, and interview scheduling now sits inside the same legal frame as a traditional staffing agency, so risk must be assessed with the same rigor as any labor or employment intermediary. That means mapping where human review occurs, how fairness and bias audits are conducted, and whether the vendor can demonstrate that its models do not create disparate impact against older workers or other protected groups in real-world hiring outcomes, rather than only in sandbox tests or marketing case studies.

Contract structures must catch up with this reality, especially for global employers that run integrated HRIS stacks across Workday, SAP SuccessFactors, and niche AI tools for sourcing, assessments, or video interviewing. Indemnification clauses that once focused primarily on data breaches now need explicit language about algorithmic hiring risk, discrimination claims, and class action exposure tied to automated screening or ranking decisions. HR and legal teams should also align with marketing and employer-brand leaders who shape public commitments to fair hiring, because misaligned messaging about diversity, equity, and inclusion can be used against a company in a Workday-style lawsuit or similar AI hiring case, as explored in analyses of modern HR tech leadership roles at the HR Tech Institute. A simple internal case study—such as tracking how an AI-enabled requisition for a customer success team changed the age or gender distribution of applicants who reached the interview stage—can help leadership understand where accountability truly sits.
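As a sketch of that internal case study, the snippet below computes interview-stage pass-through rates by age band from candidate-level records. The records, band labels, and thresholds are invented for illustration; a real audit would pull anonymized decision data from the HRIS or applicant tracking system rather than hard-coding it.

```python
# Hypothetical internal case study: measure how applicants in different
# age bands fared at the interview stage of one AI-enabled requisition.
# All records below are invented for illustration only.
from collections import Counter

# (age_band, reached_interview) pairs for a single requisition.
applicants = [
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("under_40", True), ("40_to_54", True), ("40_to_54", False),
    ("40_to_54", False), ("55_plus", False), ("55_plus", False),
    ("55_plus", True),
]

totals = Counter(band for band, _ in applicants)
interviewed = Counter(band for band, ok in applicants if ok)

rates = {band: interviewed[band] / totals[band] for band in totals}
best = max(rates.values())

for band, rate in sorted(rates.items()):
    flag = " <-- below four-fifths of top rate" if rate < 0.8 * best else ""
    print(f"{band:>9}: {rate:.0%} reached interview{flag}")
```

Running the same comparison before and after an AI screening tool is switched on, and across gender as well as age, gives leadership a concrete view of where the funnel narrows and who is accountable for it.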

How to renegotiate AI hiring contracts after Mobley

Mobley has elevated AI hiring risk to a board-level topic, and CHROs now need a concrete checklist for vendor selection and contract negotiation. First, every AI hiring platform contract should include detailed audit rights that allow employers to commission independent bias audits, review model documentation, and test for disparate impact across age, gender, race, disability, and other protected characteristics. At a minimum, contracts should specify the frequency of audits, the format of data access, and the standards used to evaluate fairness, so that internal compliance teams and external experts can replicate and challenge vendor claims. Second, HR leaders should require vendors to share clear documentation on their human-in-the-loop controls, including where human reviewers can override automated screening, how background checks are integrated, and how the platform supports anti-discrimination practices in daily employment decisions.

Indemnification language must evolve from generic boilerplate into specific allocations of liability for AI-driven hiring decisions and discrimination claims. Employers should push for clauses that require the vendor to maintain compliance with relevant employment law, including the Age Discrimination in Employment Act, Title VII of the Civil Rights Act, state-level fair employment statutes, and emerging AI regulations in California, Colorado, New York City, and the European Union, while also defining how responsibility is shared if a Mobley-style lawsuit alleges systemic bias. For example, a negotiated clause might state that the vendor will defend and indemnify the customer against third-party claims “to the extent arising from the design, training, or operation of Vendor’s algorithmic screening models,” while the employer accepts responsibility for “job descriptions, candidate data inputs, and final hiring decisions.” In parallel, HR teams should align AI governance with broader hiring process optimization efforts, such as those used in large-scale recruitment programs where every step from requisition to offer is mapped, timed, and audited for both fairness and efficiency.

Finally, global employers should treat AI-related hiring exposure as part of their enterprise risk management framework, not just an HR compliance issue. That means tracking regulatory developments from the Equal Employment Opportunity Commission and other enforcement agencies, monitoring litigation trends in AI hiring class actions, and building internal capabilities to evaluate automated decision tools beyond vendor marketing claims. A sample audit clause can help operationalize this approach: “Vendor shall provide Customer, upon reasonable notice, with access to anonymized candidate-level decision data, model documentation, and impact assessments sufficient to enable Customer or its independent auditor to evaluate potential disparate impact and compliance with applicable employment and AI regulations.” The real test of any AI hiring platform will be its performance under legal scrutiny and its impact on actual employment outcomes over time, not the elegance of the demo or the promise of faster hiring in a sales pitch.
