AI is reaching into nearly every aspect of business, and the HR department is no exception. Because AI’s impact on hiring practices is evolving quickly, it’s important to stay up to speed on how AI can and does lead to inadvertent discrimination against job seekers.

In this two-part series, we’ll examine two forms of AI-driven policy concerns present in today’s hiring environment: 1) algorithmic discrimination and 2) biometric discrimination. First, let’s look at algorithmic discrimination.

What is algorithmic discrimination and how does it impact the talent acquisition process? 

Algorithmic discrimination happens when the AI – which learns from, among other things, our interactions with it – violates federal, state, or local anti-discrimination laws. How the selection criteria are programmed is critically important, but so is how the AI “learns.”

Most recruiting platforms use AI to reduce the number of applications that need to be reviewed before they are presented to the company. The criteria used to reduce applications are generally selected by checking a box that the software presents. The programming behind that criteria selection is an algorithm.

Algorithms are only as good as the programming that creates them. If the programming has flaws, the outcome can be a problem. In Talent Acquisition (or TA), if the outcome excludes candidates with particular characteristics, then the company is potentially liable for “algorithmic discrimination.”
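To make this concrete, here is a minimal, purely hypothetical sketch of how a seemingly neutral screening rule can exclude qualified candidates. The candidate data, field names, and the 12-month gap cutoff are all illustrative assumptions, not taken from any real platform:

```python
# Hypothetical screening rule that looks neutral but acts as a proxy
# for a protected characteristic (illustrative data only).
candidates = [
    {"name": "A", "years_experience": 8, "employment_gap_months": 0},
    {"name": "B", "years_experience": 9, "employment_gap_months": 18},  # caregiving gap
    {"name": "C", "years_experience": 3, "employment_gap_months": 0},
]

def passes_screen(candidate):
    # Flawed criterion: any employment gap over 12 months is disqualifying,
    # even though long gaps often reflect caregiving rather than ability.
    return (candidate["years_experience"] >= 5
            and candidate["employment_gap_months"] <= 12)

shortlisted = [c["name"] for c in candidates if passes_screen(c)]
print(shortlisted)  # ['A'] — candidate B is excluded despite having the most experience
```

The rule never mentions gender, yet because long employment gaps correlate with caregiving, it can systematically screen out one group – which is exactly the pattern described above.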

How does algorithmic discrimination affect women in the workforce?

Gender is clearly a concern in algorithmic discrimination and women should be aware of it.

An example of this comes from Amazon, which, in the early days of AI in 2018, performed an experiment. The company continued to use humans for TA but also ran an AI program for TA in parallel, to see how it performed and whether the company should use AI for its recruitment. The criteria the AI was programmed to use gave candidates scores related to their engineering degrees and job history. The AI then made recommendations based on who had higher scores.

Amazon found that the AI learned that when hiring engineers, there were more men than women who had engineering degrees. That, along with other characteristics such as job history – which can be spottier for women who leave the workforce for blocks of time due to caregiving duties – “taught” the AI to score women lower than men and resulted in a recruitment bias against women. This is a classic example of algorithmic discrimination.

How can printing companies avoid algorithmic discrimination in their HR practices and hiring processes?

The way to avoid algorithmic discrimination is to make sure that the vendor screening candidates verifies that the AI software was specifically designed to avoid discrimination, and that the programming that allows the AI to learn was also designed to avoid it.

A human from the company should review the recommended candidates and take note of whether any groups of people are being excluded. This can be done by having the job posting platform run reports providing the demographics of the recommended and excluded candidates.

In addition, when choosing the screening criteria for a posted position, do not overshoot with requirements. If a position does not require knowledge gained from a college degree, do not make a college degree a requirement for the job. Base criteria on skills that are needed for each position rather than having a blanket requirement for all of the company’s job postings.

How can job seekers in the printing industry protect themselves from algorithmic discrimination?

Job seekers may be given the chance to opt out of AI screening tools, which is one way of making sure that algorithmic discrimination does not hold you back. If the option exists to apply directly through a company’s website, often a human will be screening your application rather than AI (depending on the size of the company).

Also, following up with the company after applying through messaging on the platform or directly to the hiring manager can bring attention to your application.

If you believe you may have been rejected by AI due to a discriminatory algorithm, reach out directly to the company and explain why you believe you may have been overlooked unfairly. The company may pull reports to make sure their recruitment platform has AI software that is nondiscriminatory.

If you’d like to dive deeper into HR policy topics like AI, pay transparency, and DEI, or to view my webinars, articles, and more, please visit PRINTING United Alliance’s Center for Human Resources Support or follow me on LinkedIn.

This is part 1 of a 2-part series on how AI is impacting employment policy. In the next Women in Print Alliance newsletter, we’ll focus on the issue of biometric discrimination.