Class action questions whether using AI to “score” job applicants violates the FCRA and similar California law
In this post, we cover a new class action recently filed in California against Eightfold AI, Inc. (Eightfold). Eightfold represents itself “[a]s pioneers of the world’s most innovative AI-native Talent Intelligence Platform.” It markets that it “combine[s] enterprise data, market insights and user interactions to create a complete picture of talent across the entire employment life cycle, providing an end-to-end experience that includes hiring, development and retention.” The company’s core offering is allegedly built on “the world’s largest, self-refreshing source of talent data,” which promises to enable organizations to make real-time talent decisions based on skills, capabilities and potential rather than traditional resumes.
This is not just another AI lawsuit. Unlike prior AI litigation that we have seen over the past year, this lawsuit does not attack the use of AI in hiring decisions for alleged discrimination but rather seeks to establish that using an AI tool could violate the federal Fair Credit Reporting Act (FCRA) and similar state laws.
Below, we provide a high-level summary of the case at issue and an in-depth discussion of the following: (1) the key claims and allegations against Eightfold AI; (2) how Eightfold's AI technology allegedly operates; (3) how the plaintiffs used the Fair Credit Reporting Act and California Investigative Consumer Reporting Agencies Act to form the basis of their claims; and (4) practical takeaways for companies using AI in hiring processes.
The key claims and allegations against Eightfold AI
On January 20, two job applicants filed suit against Eightfold in the Superior Court of California in Contra Costa County, alleging that the company violated federal and state consumer protection laws by creating “hidden credit reports” on job applicants that were then used to score them from 0 to 5 based on their supposed “likelihood of success” for potential employers – all without complying with the statutory requirements imposed on consumer reporting agencies.
The two class representatives applied for positions with companies in 2025 that were allegedly using Eightfold's AI-powered hiring platform. Both plaintiffs allege they were subjected to Eightfold's AI evaluation tools without proper disclosure, consent or the opportunity to review and dispute the information used to assess their candidacy before those reports were used to inform employment decisions. Based on that, the plaintiffs contend Eightfold's practices violate longstanding federal and California consumer protection laws.
The plaintiffs seek to represent two classes: (1) a Nationwide Class of all United States residents who applied to jobs and were subjected to Eightfold's Evaluation Tools within applicable statutes of limitations; and (2) a California Class of all California residents who applied to jobs and were subjected to Eightfold's Evaluation Tools within applicable statutes of limitations.
The plaintiffs allege that Eightfold operates as a Consumer Reporting Agency (CRA) under federal and California law by assembling and evaluating information on job applicants for the purpose of furnishing consumer reports to employers and scoring and ranking applicants based on their potential for success. The plaintiffs allege that employers rely on these scores to filter candidates, often discarding lower-ranked applicants before a human ever reviews their application. Critically, the plaintiffs contend that Eightfold fails to disclose this process to applicants, obtain proper certifications from employers or give applicants the opportunity to review and dispute the information used in their evaluations.
The complaint asserts three claims for relief:
- Violation of the Fair Credit Reporting Act due to Eightfold allegedly furnishing consumer reports for employment purposes without obtaining required certifications from employers regarding disclosure, authorization, and dispute procedures and without notifying employers of their responsibilities under the statute
- Violation of the California Investigative Consumer Reporting Agencies Act due to Eightfold allegedly providing consumer reports without ensuring reports would only go to permitted users for permissible purposes, without obtaining required certifications and without complying with notice and dispute procedures
- Violation of California's Unfair Competition Law due to Eightfold's alleged violations of the Fair Credit Reporting Act and the California Investigative Consumer Reporting Agencies Act, which, according to plaintiffs, constitute unlawful and unfair business practices
How Eightfold's AI technology allegedly operates
The complaint details how Eightfold's AI technology allegedly operates and violates the relevant statutes. According to the plaintiffs, Eightfold's "Talent Intelligence Program" uses a proprietary large language model (LLM) trained on over 1.5 billion global data points, 1.6 billion career trajectories and 1.6 million skills. The system allegedly assembles and evaluates (1) the candidate's submitted resume and application data; (2) supplemental data gathered from public sources about the candidate's professional history (blogs, publications, conferences and job application history); (3) data about comparable employees at other companies; (4) AI-generated predictions and inferences about the candidate's personality and future career trajectory and (5) data used to train Eightfold's AI models.
The complaint alleges that Eightfold's Match Score algorithm involves three steps: first, an LLM evaluates semantic similarities between job descriptions and candidate profiles. Second, the AI extracts additional features including skill overlap, title progression, seniority fit, industry similarity and comparison to "ideal candidates" and hiring managers. Third, the AI blends these features into a proprietary calibrated prediction that ranks candidates by likelihood of success.
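For readers less familiar with how such scoring pipelines work in general, the three steps the complaint describes can be sketched in simplified form. Everything in the sketch below is hypothetical: the function names, features, weights and the token-overlap stand-in for an LLM similarity score are invented for illustration and do not reflect Eightfold's actual system.

```python
# Hypothetical illustration of a three-step candidate-scoring pipeline of the
# general shape alleged in the complaint. Not Eightfold's actual code or model.

def semantic_similarity(job_description: str, resume: str) -> float:
    """Step 1 (stand-in): a crude token-overlap proxy for the semantic
    similarity an LLM might compute between a job description and a resume."""
    job_tokens = set(job_description.lower().split())
    resume_tokens = set(resume.lower().split())
    if not job_tokens:
        return 0.0
    return len(job_tokens & resume_tokens) / len(job_tokens)

def extract_features(candidate: dict, job: dict) -> dict:
    """Step 2: derive additional features such as skill overlap and
    seniority fit (both heuristics here are invented placeholders)."""
    skills = set(candidate.get("skills", []))
    required = set(job.get("required_skills", []))
    skill_overlap = len(skills & required) / len(required) if required else 0.0
    seniority_fit = 1.0 if candidate.get("seniority") == job.get("seniority") else 0.5
    return {"skill_overlap": skill_overlap, "seniority_fit": seniority_fit}

def match_score(candidate: dict, job: dict) -> float:
    """Step 3: blend the features into a single prediction, rescaled to the
    0-to-5 range the complaint describes (weights are arbitrary)."""
    sim = semantic_similarity(job["description"], candidate["resume"])
    feats = extract_features(candidate, job)
    blended = 0.5 * sim + 0.3 * feats["skill_overlap"] + 0.2 * feats["seniority_fit"]
    return round(blended * 5, 2)

candidate = {
    "resume": "python engineer with machine learning experience",
    "skills": ["python", "machine learning"],
    "seniority": "senior",
}
job = {
    "description": "senior python engineer machine learning role",
    "required_skills": ["python", "machine learning", "sql"],
    "seniority": "senior",
}
print(match_score(candidate, job))  # a single 0-to-5 score per applicant
```

The legal significance of a pipeline of this shape, as the plaintiffs frame it, is that the single blended score functions as a communication bearing on the applicant's characteristics that an employer may use to filter candidates before any human review.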
Along with producing these reports for employers, the complaint contends that, once an applicant applies to a job through an employer using Eightfold, Eightfold retains and uses that applicant's data for its own purposes, including to evaluate other applicants and to train its AI models.
How the plaintiffs allegedly used the Fair Credit Reporting Act and California Investigative Consumer Reporting Agencies Act to form the basis of their claims
The complaint asserts that the Fair Credit Reporting Act (FCRA), 15 U.S.C. § 1681 et seq., and the California Investigative Consumer Reporting Agencies Act (ICRAA), Cal. Civ. Code § 1786 et seq., were enacted specifically to address concerns about secretive third-party reports affecting employment decisions and that, while Eightfold’s AI may be new technology and its uses may be novel, its actions and outputs are still subject to these statutes.
As background, the FCRA broadly defines "consumer reports" to include any communication bearing on a consumer's "character, general reputation, personal characteristics or mode of living" used for employment purposes.
Under the FCRA, consumer reporting agencies must: (a) furnish reports only for permissible purposes; (b) verify the identity and purposes of report users; (c) follow reasonable procedures to ensure maximum possible accuracy; (d) make report information available to consumers upon request and (e) investigate and correct disputed information. For employment-specific reports, the FCRA requires employers to provide standalone disclosures and obtain written authorization before procuring reports, and to provide applicants with copies of reports and dispute opportunities before taking adverse action. Critically, consumer reporting agencies may only furnish employment reports after receiving certification from employers that they will comply with these requirements.
California's ICRAA imposes similar requirements for investigative consumer reports containing information about consumers' character, reputation and personal characteristics. The plaintiffs allege Eightfold failed to comply with either statute’s requirements, leaving job applicants uninformed of their rights and unable to review or correct information used in their evaluations.
Conclusion: Practical takeaways for companies using AI in hiring processes
This case represents an attempt to apply longstanding consumer protection laws to emerging AI technologies. Companies using AI-powered hiring and recruitment tools should be aware of the potential issues and risks in doing so.
We have previously pointed out that other employment-related tools are the subject of litigation, and that regulators are expanding employers’ obligations. The first step is for companies to learn which AI tools they are using and whether those tools are “decision makers” or “substantial factors” in decisions, or whether they merely “facilitate” decisions. Companies should then determine which laws apply and how to comply with them. This latter point can be especially tricky because hundreds of AI laws are currently pending in Congress and the state legislatures. This active legislative environment means that activities that are compliant today may not be compliant tomorrow.
It is also critical that companies have a clear understanding of what their AI vendor is actually doing so that they can properly assess whether the FCRA (or analogous state and local laws) is implicated. Moreover, companies should closely review their vendor agreements to ensure that the vendor is complying with its own legal obligations.
Our Team has significant experience dealing with AI from both a governance and a litigation perspective. For any questions regarding issues relating to AI governance or litigation matters, please contact us.
AI in litigation series
About the series
As companies increasingly integrate and use artificial intelligence (AI) in their daily operations, novel litigation based on those integrations and uses will arise as well. Tracking the sheer volume of cases in all relevant jurisdictions can be time-consuming. This AI in Litigation series aims to alleviate some of the hard work by highlighting relevant AI-related litigation and the key developments in these litigations that companies need to know.