As the comprehensive federal Algorithmic Accountability Act—first introduced in 2019—slowly progresses (or not) through Congress, individual states are actively responding to reports of algorithmic bias in employment decisions, swiftly enacting their own laws and guidance to combat AI‑enabled bias in recruiting, screening, and hiring job applicants. Most recently, in January 2025, the New Jersey Division on Civil Rights published a Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination (the “NJLAD Guidance” or the “Guidance”). At the same time, the State of New York introduced legislation (S.B. 1169) (the “New York AI Act” or the “Act”), which would require companies using AI in recruiting and hiring processes to independently audit and evaluate their systems for algorithmic bias in employment practices. These state-level developments may create increased risks for employers that use AI tools to recruit, screen, and hire applicants.

The NJLAD Guidance Puts Employers on Explicit Notice That They Are Liable for AI‑Driven Employment Discrimination

The Guidance mirrors the NJLAD’s prohibition of discrimination in employment decisions based on race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. Consequently, covered Employers may be held liable for discriminatory employment decisions arising from the use of AI tools, even if they did not develop the tool or are unaware of its discriminatory mechanisms.

This is likely to affect more than half of New Jersey Employers; a recent Rutgers University survey cited in the Guidance found that 63% of the 233 Employers surveyed—averaging 400 employees and representing various sectors such as manufacturing, wholesale and retail trade, finance, healthcare, education, and government—use one or more AI tools in recruiting and selecting employees. The most commonly used AI tools include applicant-tracking software from LinkedIn, Workday, ADP, ZipRecruiter, iCIMS, and Greenhouse for drafting job postings, organizing resumes, and tracking candidates through the hiring process. Additionally, Employers frequently use AI tools to make decisions about promotions, demotions, or terminations.

The NJLAD Guidance aligns with, but is more expansive than, the federal EEOC Guidance issued in May 2023, which cautioned that Employers may be liable for discriminatory practices in recruitment, monitoring, transferring, and evaluation resulting from AI tools. However, unlike the NJLAD Guidance, the EEOC Guidance is limited to disparate-impact claims and does not encompass disparate-treatment claims. Moreover, the long-term viability of the EEOC Guidance is uncertain under the new Administration.

Covered AI and Algorithmic Discrimination Under the NJLAD Guidance

While AI and AI tools are enormously broad terms, the NJLAD Guidance specifically covers “any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision‑making process.” Further, the Guidance emphasizes that Employers cannot escape liability by blaming an AI tool vendor for a discriminatory outcome that forms the basis of a lawsuit.

Disparate Treatment, Disparate Impact, and Reasonable Accommodations Under the NJLAD Guidance

Additionally, the Guidance explains that Employers may be liable under either a disparate-treatment or a disparate-impact theory of discrimination. An Employer may be liable for disparate-treatment discrimination where it uses AI tools to treat members of a protected class differently, whether directly on the basis of a protected characteristic or indirectly on the basis of a close proxy for one. An example of disparate-treatment discrimination would be an applicant-screening tool designed to prefer applicants who provide individual taxpayer identification numbers instead of Social Security numbers because the Employer wants to hire immigrants based on its belief that immigrants will be less likely to challenge unlawful working conditions.

On the other hand, an Employer may be liable for disparate-impact discrimination where it relies on the recommendations of an AI tool that are facially neutral and not motivated by discriminatory intent but that result in employment decisions disproportionately harming members of a protected class, unless the AI tool serves a substantial, legitimate, nondiscriminatory interest. By way of example, an applicant-screening tool for a delivery service that screens applicants for physical strength, on the theory that stronger applicants are more suitable for the delivery job, and that as a result disproportionately screens out applications from women, could run afoul of disparate-impact prohibitions. However, even where the AI tool serves a substantial, legitimate, nondiscriminatory interest, an Employer may still be liable if a less discriminatory alternative exists.

Finally, the Guidance covers AI‑driven discrimination arising from the failure to provide a reasonable accommodation for an applicant’s or employee’s known or reasonably apparent disability, religion, pregnancy, or breastfeeding. This may happen where an AI tool is inaccessible to individuals with a protected characteristic (e.g., an Employer uses an AI tool to measure applicants’ typing speed, and the tool cannot measure the typing speed of an applicant with a disability who must use a non‑traditional keyboard) or where an AI tool makes recommendations without factoring in a reasonable accommodation (e.g., an Employer uses an AI tool to measure employees’ productivity based on break times and adopts its recommendations for discipline or promotion without considering reasonable accommodations that allow protected employees additional break time).

The New York AI Act Creates Novel Employer Obligations and Private Rights to Sue

Unlike the NJLAD Guidance, which extends existing anti‑discrimination protections to Employers’ use of AI tools, the New York AI Act would establish new obligations for Employers to actively prevent AI‑driven employment discrimination. It also broadens the definition of protected characteristics under State law for the purposes of the Act to include height and weight.

Employers Would Need to Provide Notice and Opt‑Out Options Before Utilizing AI Tools

Most notably, the Act would require Employers to provide employees and applicants with at least five (5) business days’ notice, in clear, consumer‑friendly terms, before using AI tools to make employment decisions. Applicants and employees would also be able to opt out of the AI process and request that a human representative make the decision instead. If an employee or applicant waives the five‑day notice, Employers would still need to provide one (1) business day of notice before using an AI tool, along with the opportunity to opt out. Employers would also be required to allow appeals of any decision made entirely by, or with the assistance of, an AI tool. The Act specifies that Employers are liable for any bias resulting from their use of AI tools.

Employers Would Be Required to Commission AI-Discrimination Auditors and File Reports With the Attorney General

The Act would also impose significant third‑party audit and reporting obligations before Employers could use AI tools to make employment decisions. Employers would have to conduct audits six (6) months after deployment and every eighteen (18) months thereafter. These audits would have to analyze discriminatory disparate impacts, and Employers would be prohibited from hiring any third‑party auditor that has provided services to them in the past twelve (12) months. Additionally, Employers would be required to file reports with the Attorney General and establish comprehensive risk-management policies to prevent AI‑driven employment discrimination.
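The Act does not spell out an audit methodology, but disparate-impact analyses of selection tools commonly begin by comparing selection rates across demographic groups, often against the EEOC’s long-standing “four-fifths” rule of thumb from the Uniform Guidelines on Employee Selection Procedures. The sketch below is a minimal illustration of that kind of check run on an AI screening tool’s output; the group labels, sample data, and 0.8 threshold are assumptions for illustration only, not requirements drawn from the Act or the NJLAD Guidance.

```python
# Illustrative only: a simplified adverse-impact check of the kind an independent
# auditor might run on an AI screening tool's output. The data, group labels, and
# the 4/5 (0.8) threshold are hypothetical assumptions, not requirements of the
# New York AI Act or the NJLAD Guidance.
from collections import defaultdict

# Hypothetical audit sample: (applicant group, whether the tool advanced the applicant)
screening_outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(outcomes):
    """Return the share of applicants the tool advanced, per group."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        advanced[group] += int(passed)
    return {g: advanced[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

rates = selection_rates(screening_outcomes)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "review for possible disparate impact" if ratio < 0.8 else "within the rule-of-thumb threshold"
    print(f"{group}: selection rate {rates[group]:.0%}, impact ratio {ratio:.2f} ({flag})")
```

In practice, an independent auditor would work from far larger samples and whatever statistical tests it considers appropriate; the snippet only shows the shape of the comparison an audit would document and report.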

Employers Could Be Liable Absent Actual Injury

The Act would empower the Attorney General to seek injunctions in New York State Supreme Court to enforce its provisions, imposing civil penalties of up to $20,000 for each violation without requiring proof of injury. New York residents could also bring private actions for harm caused by violations of the Act, entitling them to compensatory damages and legal fees if they prevail. Moreover, the Act includes whistleblower protections, granting employees and applicants rights to injunctive relief, reinstatement, compensatory damages, and punitive damages.

Potential Consequences: Increased Litigation Against NJ and NY Employers for AI-Driven Employment Discrimination

The NJLAD Guidance and, if passed, the New York AI Act may result in an increase in AI‑based employment-discrimination lawsuits, particularly in New York. To date, there have been few precedential cases involving AI‑driven employment-discrimination claims. However, two recent cases provide some insight into how courts may treat such actions.

In August 2023, the EEOC secured its first AI-bias settlement in New York after charging iTutorGroup, a tutoring services company, with age discrimination. The agency alleged that the company programmed its application review software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older.

Then, on July 12, 2024, a California federal judge declined to dismiss a putative class action against Workday, a major human capital management platform, that claims Workday is directly liable under Title VII for discrimination caused by its applicant-screening AI tools. The lawsuit alleges that Workday’s screening tools discriminated based on race, age, and disability. The EEOC filed a friend-of-the-court brief urging the court not to dismiss the action while Workday’s motion to dismiss was pending. The case has since moved into discovery and remains pending.

In the last year, class actions related to algorithmic discrimination beyond employment decisions have also emerged. In December 2023, the Federal Trade Commission (FTC) announced a settlement with Rite Aid in a lawsuit arising from the company’s use of AI-based facial recognition technology. Additionally, in April 2024, tenant-screening service SafeRent Solutions entered into a $2.28 million settlement agreement in a class action after Massachusetts rental applicants alleged that the company’s AI‑based screening tool discriminated on the basis of race.

Takeaways

If enacted, the Act’s onerous requirements and significant litigation risks will force New York Employers—including those that employ remote workers residing in New York—to reconsider whether the benefits of AI tools in recruiting, hiring, and other employment practices justify the costs and risks under the Act. Additionally, New Jersey Employers should immediately revisit their use of, and policies on, AI tools in employment processes and decisions to protect themselves from liability.

Buchanan’s Labor and Employment, Class Actions, Advanced Technology, Cybersecurity & Data Privacy, and Government Relations teams continue to monitor the latest developments and stand ready to assist Employers with ensuring that their policies and practices address federal and State governments’ evolving position on prohibited AI‑driven employment discrimination.