AI Hiring Lawsuits: Risks and How Employers Can Stay Compliant


The use of AI-based hiring tools is on the rise, with approximately 37% of organizations now actively integrating or experimenting with Gen AI tools.

These tools promise enhanced efficiency, consistency, and the ability to evaluate large volumes of job applicants rapidly.

From initial screening to final employment recommendations, AI helps recruiters make data-driven employment decisions.

However, this widespread adoption is sparking both excitement and concern across industries.

With the surge in AI-driven hiring tools, there’s also been a rise in discrimination lawsuits and heightened legal scrutiny.

Allegations range from age discrimination claims to disability discrimination, emphasizing the tools’ potential to unintentionally violate anti-discrimination laws.

Courts and regulators are increasingly evaluating whether these algorithmic tools create disparate impact or amount to intentional discrimination.

Employers now face increased legal exposure if their systems result in adverse impact on protected classes.

Recent AI Hiring Lawsuits and Backlash

Recent AI hiring lawsuits and backlash include the Workday class action lawsuit, HireVue's legal challenges, and public backlash over AI interviews.

Workday Class Action Lawsuit

In a notable case, Workday Inc. faces a nationwide class action alleging that its AI-based screening tools discriminated against applicants over the age of 40.

A federal district court recently granted preliminary certification, allowing the lawsuit to proceed as a collective action.

Plaintiffs argue that Workday’s AI-based hiring tools had a disparate impact on older job applicants, potentially violating age discrimination laws.

This case underscores how even well-intentioned algorithmic hiring tools can lead to serious legal consequences.

HireVue Legal Challenges

The ACLU and partners have filed a complaint against Intuit and its AI hiring vendor HireVue, alleging that their technology discriminates against disabled and non-white applicants, violating multiple anti-discrimination laws.

The case centers on D.K., an Indigenous and Deaf woman who was denied a promotion after being forced to use HireVue’s video interview platform, which uses automated speech recognition known to underperform for non-white and deaf speakers.

The complaint argues that companies like Intuit knowingly used biased tools and failed to provide accommodations, calling for accountability and reform in AI-based hiring practices.

These issues raise red flags about legal risk and the need for fairness in automated assessments.

Public Backlash Against AI Interviews

Job seekers have increasingly spoken out against one-way AI interviews, calling them impersonal and unfair.

Many cite the lack of human judgment and inability to ask clarifying questions as barriers to gainful employment.

Concerns also focus on the impact on applicants from marginalized communities, who may experience higher rejection rates due to flaws in AI-based hiring tools.

This growing discontent reflects a broader distrust in algorithmic discrimination and opaque employment practices.


Legal and Ethical Risks of AI Hiring Tools

The legal and ethical risks of AI hiring tools include algorithmic bias and discrimination, transparency and accountability, and data privacy concerns.

Algorithmic Bias and Discrimination

Despite being designed for objectivity, algorithmic hiring tools can still replicate human biases if trained on skewed data.

This can result in disparate impact claims and potential violations of laws like the Americans with Disabilities Act and the Age Discrimination in Employment Act.

Employers may inadvertently engage in algorithmic discrimination, exposing themselves to discrimination claims.

As a result, mitigating legal exposure is crucial when deploying these technologies.

Transparency and Accountability

A key challenge with AI-based hiring tools is the lack of transparency in how decisions are made.

When employment decisions are based on complex algorithms, it’s often unclear what factors led to a candidate’s rejection.

This makes it harder to detect disparate impact or to hold systems accountable for discriminatory impact.

Employers must prioritize explainability to avoid accusations of intentional discrimination.

Data Privacy Concerns

The use of AI in hiring also raises serious concerns about data security and privacy.

Candidate information must be handled in compliance with privacy laws, especially when gathered and processed by algorithmic tools.

Breaches or misuse of personal data could add another layer of legal risk for employers.

Responsible data governance is essential to maintaining trust and avoiding legal pitfalls.

Regulatory Landscape and Compliance Requirements

The regulatory landscape and compliance requirements include New York City's Local Law 144, state-level regulations, and international frameworks.

New York City’s Local Law 144

Effective July 5, 2023, New York City’s Local Law 144 requires independent bias audits of automated employment decision tools.

These audits aim to identify and correct disparate impact in AI systems used during the hiring process.

Employers must also notify job applicants about the use of these tools and provide information on how they work.
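Local Law 144's bias audits center on the "impact ratio": each category's selection rate divided by the selection rate of the most-selected category. Below is a minimal sketch of that calculation, using hypothetical category names and counts rather than any mandated audit format:

```python
# Hypothetical applicant/selection counts by demographic category;
# a real audit would use the employer's actual historical data.
counts = {
    "category_a": {"applied": 400, "selected": 120},
    "category_b": {"applied": 250, "selected": 50},
}

# Selection rate = selected / applied for each category.
rates = {c: v["selected"] / v["applied"] for c, v in counts.items()}

# Impact ratio = each category's rate divided by the highest rate.
top = max(rates.values())
impact_ratios = {c: r / top for c, r in rates.items()}

for c in sorted(impact_ratios):
    print(c, round(impact_ratios[c], 2))  # category_a 1.0, category_b 0.67
```

An independent auditor would report these ratios for each category and intersection of categories; a low ratio signals disparate impact that the employer should investigate.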

State-Level Regulations

Several states are enacting laws to regulate AI-based hiring tools.

Illinois (with its Artificial Intelligence Video Interview Act) and Maryland (which requires consent before using facial recognition in interviews) have already passed legislation, while Colorado's Artificial Intelligence Act will require disclosure and impact assessments for high-risk AI systems, including those used in hiring.

These laws focus on minimizing discriminatory impact and ensuring that human judgment remains part of hiring decisions.

International Frameworks

Globally, the EU’s AI Act aims to regulate algorithmic tools, especially those used in high-risk areas like employment.

The framework mandates transparency, fairness, and safeguards against discriminatory outcomes for systems that influence hiring decisions, even where the bias is unintended.

These developments signal a growing international consensus on the need for responsible AI governance.

Best Practices for Employers to Mitigate Risks

Best practices for employers to mitigate risks include conducting regular AI bias audits, ensuring human oversight, enhancing transparency, and staying informed and compliant.

Conduct Regular AI Bias Audits

Employers should conduct routine evaluations of their AI-based screening tools to uncover any patterns of disparate impact.

These audits help identify potential sources of algorithmic discrimination and ensure compliance with anti-discrimination laws.

By addressing issues early, organizations can reduce legal exposure and improve fairness.
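A common screening heuristic in such audits is the EEOC's "four-fifths rule," under which a group's selection rate below 80% of the highest group's rate is treated as evidence of possible adverse impact. The sketch below applies that threshold to illustrative selection rates (group names and figures are hypothetical):

```python
# Illustrative selection rates per group; a real audit would compute
# these from the screening tool's actual decision history.
selection_rates = {"group_x": 0.30, "group_y": 0.21, "group_z": 0.27}

def four_fifths_flags(rates, threshold=0.8):
    """Return groups whose selection rate falls below `threshold`
    times the highest group's rate (the EEOC four-fifths rule)."""
    top = max(rates.values())
    return sorted(g for g, r in rates.items() if r / top < threshold)

print(four_fifths_flags(selection_rates))  # group_y: 0.21/0.30 = 0.70 < 0.80
```

In practice, an audit would pair this ratio check with statistical significance testing before drawing conclusions, since small applicant pools can produce misleading ratios.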

Ensure Human Oversight

Despite the efficiency of algorithmic methods, it’s critical to integrate human judgment into the hiring process.

Human reviewers can validate AI-driven employment decisions and ensure context-sensitive evaluations.

This balance can reduce the likelihood of disparate impact claims and bolster the credibility of the process.

Enhance Transparency

Employers must clearly inform job applicants when AI-driven hiring tools are used.

This includes explaining how screening tools function and how employment decisions are made.

Transparency builds trust and helps mitigate concerns about discriminatory impact or hidden biases.

Stay Informed and Compliant

Employers should regularly review federal, state, and international developments affecting AI-based hiring tools.

Partnering with an experienced employment agency or legal counsel can help ensure adherence to best practices.

Proactive compliance is key to avoiding class action lawsuits and maintaining ethical employment practices.

AI Hiring Lawsuits and Staying Compliant

To avoid costly discrimination lawsuits, employers must address the potential pitfalls of algorithmic hiring tools head-on.

Implementing audits, fostering human judgment, and improving transparency can significantly reduce legal risk.

Ethical use of AI also promotes fair access to gainful employment for all candidates.

Now is the time to prioritize equity, accountability, and ethical innovation in your recruitment strategy.

Looking to hire top-tier Tech, Digital Marketing, or Creative Talent? We can help.

Every year, Mondo helps to fill thousands of open positions nationwide.
