Workday’s AI rejected a qualified MBA 100 times. A judge ruled the algorithm is now the employment agency.
A federal judge ruled that Workday functions as an 'employment agency' when its AI tools screen job applicants, ending the era of vendor-immunity defenses for biased algorithms.
If you have applied for a corporate job in the last decade, you have likely encountered the sleek, purple-accented interface of Workday. It is the enterprise software equivalent of a high-security checkpoint: a gatekeeper that processes millions of resumes with cold, unblinking efficiency. For most applicants, the experience is a sterile void where data goes in and silence comes out. But for Derek Mobley, that silence was documented, logged, and, eventually, weaponized in a courtroom. Mobley, a Black man over the age of 40 with a finance degree and an MBA, applied for more than 100 positions at companies that used Workday’s automated screening tools. He was rejected from every single one, often within hours of submission, regardless of how neatly his credentials aligned with the job descriptions.
The July 2024 ruling in Mobley v. Workday is the first legal confirmation that AI software providers can be treated as 'employment agencies' when their tools perform screening functions, a shift that effectively terminates the industry’s reliance on vendor-immunity defenses for algorithmic discrimination. The precedent has teeth: industry observers expect compliance and auditing costs across the HR technology sector to rise sharply as "trade secret" claims over proprietary algorithms give way to federal discovery obligations. For years, the HR technology sector has operated under a convenient legal fiction: that vendors are merely providers of "tools," and that responsibility for how those tools behave lies solely with the employers who buy them. This ruling suggests that when the tool starts making the decisions—or even just narrowing the field—it inherits the legal baggage of a human recruiter.
The 100-Rejection MBA: Derek Mobley vs. The Machine
Derek Mobley’s experience is a case study in the "Black Box" wall that defines modern recruitment. Holding a finance degree and an MBA from prestigious institutions, Mobley was, on paper, the kind of candidate companies claim to be "desperately seeking." Yet, between 2018 and 2023, his interactions with Workday’s Applicant Tracking System (ATS) were a repetitive cycle of instant disqualification (Reuters). Mobley’s lawsuit alleges that this was not a coincidence or a lack of "cultural fit," but a direct result of algorithmic bias—systematic errors in computer systems that create unfair outcomes, typically by favoring one group over another based on training data or code design.
Mobley represents what could be called a "triple threat" in the eyes of biased training data: he is Black, he is over 40, and he has a documented disability. In the world of predictive hiring, these traits are often treated as negative signals. This does not happen because the code is explicitly told to "reject Black applicants," but because the historical data used to train the machine is saturated with the prejudices of human recruiters from decades past. When a machine is told to find candidates who "look like our top performers," and those top performers are historically white men under 30, the algorithm learns that diversity is a bug, not a feature. This phenomenon feeds the growing population of "hidden workers"—qualified individuals filtered out by automated systems before a human ever sees them (Harvard Business Review).
The scale of Mobley's rejection—over 100 documented instances—provided the statistical "receipts" necessary to challenge the machine. It wasn't just one bad day or one biased recruiter; it was a consistent, automated exclusion. The complaint filed in the Northern District of California argues that Workday’s tools do not merely assist recruiters; they perform the initial, most critical "screen-out" phase. This effectively turns the software into a digital bouncer that prevents protected classes from ever reaching a human set of eyes (Bloomberg Law). By acting as a primary filter, the software assumes the role of a recruiter, which is the core of the employment agency definition under Title VII of the Civil Rights Act (EEOC Official Statement).
Technical Breakdown: How Ethical AI Learns to Discriminate
Workday often touts its commitment to "Ethical AI," a term that has become increasingly popular in Silicon Valley as a way to preempt regulation. However, the technical reality of algorithmic screening often turns on disparate impact—a legal doctrine under which a facially neutral policy has a disproportionately adverse effect on a protected group, regardless of intent (Brookings Institution). You don't need a line of code that says `if race == 'Black': reject()`. You only need the machine to optimize for "success" based on proxy variables. These variables are the "hollow points" of algorithmic discrimination.
Proxy variables are data points that appear demographically neutral but correlate strongly with protected characteristics. Zip codes can serve as a proxy for race in segregated housing markets. University names can filter for socioeconomic status or historical legacy-admission patterns. Employment gaps disproportionately penalize women who took parental leave and people with disabilities (Forbes). When these proxies are fed into a machine learning model, the system identifies patterns that confirm existing biases without ever mentioning race or age.
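To make the mechanism concrete, here is a minimal, self-contained sketch in Python. The data, features, and cutoff are invented for illustration (this is not Workday's model or training pipeline), but it shows how a screener that never sees race can still learn a racial disparity through a zip-code proxy, and how the resulting selection rates fail the EEOC's four-fifths rule of thumb for disparate impact.

```python
import random

random.seed(0)

# --- Synthetic world (illustrative only; not real applicant data) ---
# Zip code correlates with group membership, echoing segregated housing.
def make_applicant():
    race = random.choices(["group_a", "group_b"], weights=[0.7, 0.3])[0]
    if race == "group_b" and random.random() < 0.8:
        zip_code = random.randint(900, 999)   # group_b concentrated here
    else:
        zip_code = random.randint(100, 899)
    qualified = random.random() < 0.5         # qualification independent of race
    return {"race": race, "zip": zip_code, "qualified": qualified}

history = [make_applicant() for _ in range(10_000)]

# Historical recruiters hired qualified people, but rarely from zips 900+.
for a in history:
    chance = 0.15 if a["zip"] >= 900 else 1.0
    a["hired"] = a["qualified"] and random.random() < chance

# --- "Training": learn hire rates per zip bucket; race is never an input ---
def bucket(z):
    return z // 100

stats = {}
for a in history:
    hires, total = stats.get(bucket(a["zip"]), (0, 0))
    stats[bucket(a["zip"])] = (hires + a["hired"], total + 1)

screen_in = {b for b, (h, t) in stats.items() if h / t > 0.2}  # learned cutoff

# --- Apply the learned rule to a fresh pool and audit outcomes by race ---
pool = [make_applicant() for _ in range(10_000)]

def selection_rate(group):
    members = [a for a in pool if a["race"] == group]
    passed = [a for a in members if bucket(a["zip"]) in screen_in]
    return len(passed) / len(members)

rate_a, rate_b = selection_rate("group_a"), selection_rate("group_b")
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"impact ratio B/A: {rate_b / rate_a:.2f} (four-fifths rule flags < 0.80)")
```

Nothing in the learned rule mentions race, yet the audit ratio lands far below 0.80: exactly the facially neutral, disproportionately adverse pattern the disparate impact doctrine describes.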
Proprietary algorithms are frequently "black boxes" by design. Vendors claim their code is a trade secret, which prevents applicants—and often the employers themselves—from understanding why a specific "Fit Score" was assigned.
The "Fit Score" fallacy is perhaps the most insidious element of Workday’s screening process. By assigning a numerical value to a candidate’s potential, the software creates an illusion of objective mathematical certainty. However, if the training set consists of resumes from a company’s current high-earners, the AI will confidently replicate the biases that led to that specific demographic makeup. If the machine learns that "playing lacrosse" is a predictor of success, it will prioritize those signals while discarding candidates like Mobley. His MBA and finance degrees are overshadowed by the lack of "matching" proxy data that the machine has associated with high performance.
Historical Context: From Keyword Stuffing to Algorithmic Gatekeeping
To understand how we reached the Mobley watershed, we must look at the evolution of HR technology. In the late 1990s and early 2000s, recruitment tech was little more than a digital filing cabinet. Applicants "keyword stuffed" their resumes, hoping to match the specific terms a recruiter might search for. It was clunky, but there was usually a human at the end of the search string. The shift toward predictive AI changed the power dynamic by automating the evaluation process itself.
We have seen this disaster movie before. In 2018, Reuters revealed that Amazon had scrapped an internal AI recruitment tool after discovering it penalized resumes that included the word "women's" (Reuters 2018 Archive). Amazon’s machine had studied ten years of resumes submitted to the company—most of which came from men—and concluded that being male was a prerequisite for technical competence. The failure demonstrated that even with the best engineering talent, historical bias is a persistent contaminant in training data. Despite this, the industry doubled down on automation, moving from simple keyword matching to complex neural networks.
For a decade, SaaS providers like Workday have shielded themselves behind "platform" defenses. While some AI platforms have looked to Section 230 for protection, Workday’s defense centered on arguing that it is not a covered entity under Title VII and that it is merely a "conduit" for its customers' decisions (Fast Company). Vendors operated as the ultimate middlemen: providing the infrastructure for discrimination while claiming zero liability for the outcomes. The middleman defense was simple: "We just provide the box; if the box is biased, it's because the customer put biased data in it." This legal strategy let vendors profit from efficiency while externalizing litigation risk to their clients.
The EEOC’s Pivot: Software is the New Gatekeeper
The involvement of the Equal Employment Opportunity Commission (EEOC) marked a significant turning point in the Mobley case. In April 2024, the agency filed an amicus brief arguing that AI vendors must be held accountable as "employment agencies" (EEOC Official Statement). This was not an isolated incident but part of the EEOC’s broader Strategic Enforcement Plan for 2023-2027. The plan specifically targets the use of automated systems, including AI and machine learning, in employment decisions (EEOC SEP 2023-2027).
The EEOC’s argument is grounded in the functional reality of modern hiring. If a human recruiter at a traditional agency threw out resumes based on race, they would be liable under Title VII of the Civil Rights Act. The agency contends that replacing that human with a script does not change the legal classification of the activity. This stance aligns with the 2023 settlement involving iTutorGroup, which paid $365,000 to resolve age discrimination claims after its software automatically rejected older applicants (EEOC iTutorGroup Settlement).
Industry Response: The Chilling Effect Argument
It is important to represent Workday’s position fairly, as it reflects the standard industry viewpoint. Workday argues that it is merely a technology provider and that final hiring decisions are always made by human recruiters employed by their customers. In their view, the software is an efficiency tool, much like a calculator or a spreadsheet. They argue that assigning liability to the software creator for how a user interprets the results is a dangerous overreach that will stifle the development of beneficial tools.
"Workday is not an employment agency," a spokesperson confidently stated after the ruling Reuters. They contend that by holding software vendors liable, the court is creating a chilling effect on innovation. If every SaaS provider is liable for the statistical outcomes of their users, the cost of providing these tools will skyrocket. This could lead companies to revert to even less transparent, purely human-driven processes which are equally prone to bias SHRM Report. Supporters of this view argue that the focus should be on educating users rather than penalizing the tool-makers.
However, Judge Lin’s rebuttal was grounded in the functional reality of the software. She observed that if a third-party company were hired to physically sit in an office and throw out 90% of resumes, that company would be an employment agency. The fact that the person has been replaced by an algorithm does not change the legal nature of the function being performed. Screening is screening, whether it's done by a person in a suit or a script in the cloud. The "calculator" analogy fails because a calculator does not decide which numbers are "fit" to be added.
The Regulatory Tsunami: NYC, the EU, and Beyond
The Mobley ruling does not exist in a vacuum; it is part of a global movement toward algorithmic accountability. In New York City, Local Law 144 now requires employers to conduct independent bias audits on any "automated employment decision tool" used for hiring or promotion (NYC Local Law 144). Failure to comply can result in daily fines per violation. This local regulation has effectively turned the "audit economy" into a mandatory business expense for companies operating in the city.
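The core arithmetic of such an audit is simple. The sketch below is a minimal illustration, assuming a pass/fail screening outcome and hypothetical category labels; Local Law 144's published rules frame the result as an "impact ratio," each category's selection rate divided by the rate of the most-selected category.

```python
from collections import Counter

def impact_ratios(records):
    """records: iterable of (category, selected) pairs, selected a bool.
    Returns {category: (selection_rate, impact_ratio)}, where the impact
    ratio divides each category's rate by the highest category's rate."""
    chosen, totals = Counter(), Counter()
    for category, selected in records:
        totals[category] += 1
        chosen[category] += selected
    rates = {c: chosen[c] / totals[c] for c in totals}
    top = max(rates.values())
    return {c: (rates[c], rates[c] / top) for c in rates}

# Hypothetical audit data: (self-reported category, passed the screen?)
data = (
    [("cat_1", True)] * 480 + [("cat_1", False)] * 520
    + [("cat_2", True)] * 210 + [("cat_2", False)] * 790
)

for cat, (rate, ratio) in sorted(impact_ratios(data).items()):
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
# cat_1: selection rate 0.48, impact ratio 1.00
# cat_2: selection rate 0.21, impact ratio 0.44
```

The hard part of a real audit is not this division; it is collecting trustworthy demographic data and deciding which intersectional categories to report.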
Across the Atlantic, the European Union's AI Act classifies AI used in "employment, workers management and access to self-employment" as high-risk (EU AI Act Text). This classification carries strict requirements for transparency, data quality, and human oversight. Even in the United States, the White House has released a "Blueprint for an AI Bill of Rights," which explicitly calls for protection against abusive data practices and algorithmic discrimination (White House Blueprint). These frameworks suggest that the "wild west" era of unregulated HR tech is rapidly closing.
The move toward transparency is already forcing vendors to change their products. For example, HireVue announced it would drop facial analysis from its video interview platform following intense criticism and legal pressure (NPR Report). This retreat shows that when faced with the choice between "innovation" and legal liability, vendors will often prune features that cannot be statistically justified. The Mobley case simply moves this pruning process from the features to the core logic of the platform.
What This Means: Radical Transparency or Radical Litigation
The implications of the Mobley v. Workday ruling for the future of HR technology are profound and likely expensive for the status quo. In May 2025, the court granted conditional certification of the ADEA claim, allowing it to move forward as a collective action (Law and the Workplace). Other applicants over 40 who were rejected by Workday’s systems can now join Mobley’s suit.
- The End of the Black Box: Vendors can no longer hide behind "trade secrets" when their software is accused of discrimination. If they are "employment agencies," they are subject to the same record-keeping and audit requirements as traditional agencies. They must be able to prove their math in a way that satisfies a judge, not just a marketing department.
- Mandatory Third-Party Audits: We are moving toward a landscape where AI tools will require independent bias audits as a standard business prerequisite. Much like a financial audit, these will need to verify that the tool does not produce disparate impact. However, the effectiveness of these audits is still debated, as many auditors focus on technical metrics rather than systemic outcomes (MIT Technology Review).
- The Wave of Litigation: This ruling paves the way for a massive shift from individual complaints to class-action liability. By defining the vendor as the agency, plaintiffs can now sue the "source" of the bias rather than filing hundreds of separate suits against the individual employers using the tool. This aggregate liability is the nightmare scenario for SaaS providers.
| Defense Strategy | Previous Legal Status | Post-Mobley Status |
|---|---|---|
| Middleman Defense | Shielded vendors from customer data bias. | Effectively dead if tool performs screening. |
| Trade Secret Defense | Prevented discovery of algorithm logic. | Weakened by transparency requirements. |
| Section 230 Defense | Shielded platforms from third-party content. | Inapplicable to active algorithmic decision-making. |
Conclusion: The Algorithm is No Longer Above the Law
The Mobley v. Workday ruling is the first major crack in the legal shield protecting AI vendors. If the "employment agency" classification holds, the middleman defense is dead, forcing a total re-evaluation of how algorithms are trained and audited. The evidence presented in the case so far—the scale of rejections, the EEOC’s amicus brief, and Judge Lin’s methodical rejection of the "just a tool" argument—strongly supports the thesis that the industry’s reliance on vendor-immunity is coming to an end. This shift is not just a legal technicality; it is a structural change in how risk is distributed in the tech economy.
For too long, AI companies have enjoyed the benefits of human-like decision-making without the burden of human-level accountability. They have confidently sold "efficiency" that was, in many cases, just automated exclusion. Derek Mobley’s 100 rejections were the glitch in the matrix that forced the legal system to look under the hood. The resulting discovery process will likely reveal that the "objective" Fit Scores were built on a foundation of historical inequities.
We are moving from an era of algorithmic "move fast and break things" to one of "explain the code or pay the fine." For Derek Mobley, it took an MBA and a federal lawsuit to get a human response from the system. For the rest of the tech industry, the response has arrived in the form of a court order. The machine is now the agency, and the agency is liable. The evidence suggests that while the algorithm may be cold and unblinking, it is finally within the reach of the law.