by Charles Gallaer & Mike McMahan

Artificial intelligence is reshaping the way companies approach different facets of their businesses. From scanning resumes to conducting initial interviews, AI promises to make hiring faster, cheaper, and more efficient. And management consultants cannot wait to sell you generative AI for HR.

The smiling management consultants in the stock photo above can stand in for the several dozen “articles” written by management consultants that we found while researching this post. Before their great dental work fools you into a big purchase, read on.

Before implementing AI for hiring and employment decisions at your dealership, you need to understand the risks to your organization. Here are five to consider, and how to navigate them.

1. Bias in Disguise

Proponents of AI claim it eliminates human bias. In reality, AI is only as unbiased as the data used to train it. If the historical hiring data fed into an AI model reflects societal or organizational biases, the AI will replicate and amplify those patterns.

For example, if past hiring decisions favored male candidates for leadership roles, an AI system trained on that data might unfairly rank male applicants higher than equally qualified women. This bias can also extend to factors like race, age, or socioeconomic background—often in subtle ways, such as associating certain zip codes or schools the applicant attended with “success.” Studies have shown this bias occurs in many AI models.

A biased model can open up a dealership to discrimination claims, even when a seemingly “neutral” AI was used. A federal court in California recently allowed a disgruntled worker to proceed with a class action based on disparate impact theories.

2. Lack of Transparency

Many AI tools operate as “black boxes,” meaning they provide little to no insight into how they make decisions. Nor are AI vendors willing to share the contents of these “black boxes” with their customers. This lack of transparency raises critical questions:

• Why was one candidate chosen over another?

• Is the AI evaluating qualifications fairly?

• Can candidates challenge or appeal these decisions?

When companies can’t explain how their hiring algorithms work, they risk losing trust from applicants and opening themselves up to legal scrutiny. Transparency isn’t just a technical issue—it’s a fundamental requirement for fairness and accountability.

3. Legal Risks

Many state and local governments are trying to catch up with the rapid adoption of AI. Laws already exist to prohibit discriminatory hiring practices, and some governments are enacting laws directly addressing the use of AI in hiring. For example, in 2021, the New York City Council passed Local Law 144, which created obligations for employers that use AI in hiring, but only when the AI plays a major role in the decision. Local Law 144 requires annual, independent bias audits for employers that use AI in this way. Employers that ensure human managers make the decisions are not covered by Local Law 144.

The New York Legislature is considering two bills (S7623 and A9315), which, if passed, would close this loophole. Under either bill, having a human as the ultimate decision-maker would not exempt an employer from the law’s reach. Instead, a company would face liability as long as AI played any role in assisting a human with the employment decision. In addition, these bills would give applicants and employees a private right of action not only against the employer but also against the developers that created the tools and the vendors that sold them.

4. Ethical Dilemmas

AI in hiring often raises ethical red flags. Consider this: Is it fair to let a machine, rather than a human, decide someone’s career opportunities?

AI systems make decisions based on patterns and probabilities, not on the nuance of individual circumstances. This can dehumanize the hiring process, reducing applicants to data points instead of people with unique skills, potential, and experiences.

Moreover, many AI systems require massive amounts of personal data to function, from resumes to behavioral assessments. This raises concerns about privacy and whether candidates are giving truly informed consent when they submit their information.

5. Practical Challenges

There are practical challenges to consider beyond ethical and legal ones. Poorly designed AI systems can make inaccurate predictions, overlook qualified candidates, or recommend unfit ones.

Relying too heavily on AI also risks sidelining human judgment. AI might efficiently rank resumes, but it can’t assess qualities like cultural fit, leadership potential, or creativity as effectively as a seasoned recruiter can.

Moreover, AI systems need constant monitoring and updates to ensure they stay relevant and unbiased. This ongoing maintenance requires time, expertise, and resources—costs that can add up quickly.

What Can Employers Do?

Dealerships interested in using AI in their hiring practices should take steps to mitigate these risks, including the following:

1. Audit for Bias: Regularly test AI tools to identify and address any discriminatory patterns. Use diverse datasets to train and validate models.

2. Ensure Transparency: Work with AI vendors who can explain their systems’ decision-making processes. Provide candidates with clear explanations of how their applications are evaluated.

3. Combine AI with Human Oversight: Use AI as a tool to support, not replace, human recruiters. Humans should make the final hiring decisions.

4. Stay Compliant: Keep up to date with laws regarding AI and hiring, and have your AI hiring practices reviewed by legal counsel for compliance.

5. Adopt an Ethical Framework: Develop internal guidelines for the ethical use of AI in hiring. These could include respecting candidate privacy, ensuring fairness, and prioritizing inclusivity.
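To make the bias-audit step concrete: the audits contemplated by Local Law 144 center on metrics like selection rates and impact ratios, and a common screening heuristic is the EEOC’s “four-fifths rule.” The sketch below is purely illustrative, with made-up group names and results, and is not a substitute for a formal, independent audit:

```python
# Illustrative sketch of a disparate-impact ("four-fifths rule") check on
# hypothetical AI screening results. Group names and outcomes are made up.
from collections import Counter

# (group, was_selected) pairs produced by a hypothetical screening tool
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in results)
selected = Counter(group for group, was_selected in results if was_selected)

# Selection rate per group: selected / applied
rates = {group: selected[group] / applied[group] for group in applied}

# Impact ratio: each group's selection rate versus the highest group's rate.
# Under the EEOC's four-fifths guideline, a ratio below 0.8 may indicate
# adverse impact worth investigating further.
best_rate = max(rates.values())
impact_ratios = {group: rate / best_rate for group, rate in rates.items()}

for group, ratio in impact_ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

In this toy data, group_b’s impact ratio falls below 0.8 and would be flagged for review. A real audit would use actual applicant-flow data, appropriate demographic categories, and statistical testing, and should be designed with counsel.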

Final Thoughts

AI has the potential to transform hiring, but you can’t take a hands-off approach when using it. Without careful design, implementation, and oversight, these tools can introduce more problems than they solve, such as bias, discrimination, and potential liability.

As companies increasingly turn to AI, it’s crucial to strike a balance between leveraging technology and preserving fairness, transparency, and human judgment. By addressing the risks head-on, businesses can create hiring processes that are not only efficient but also ethical and inclusive.

Be smart and stay up to date on the latest Auto Intel by subscribing to our newsletter below.

Questions? Ask the authors: Charles and Mike have contact information in their bios.
