Effects of Bias in Hiring Algorithms and Possible Solutions

Ana Herrera

Personal Statement

For my English Composition II class, I had to pick a topic on a social justice issue that interested me for my final research paper. Since I am studying Computer Science, I decided to learn more about algorithms and the biases within them that can lead to unfair decisions. This is a topic I should keep in mind once I enter the workforce, because whatever we build, we should focus on implementing the best and fairest solutions for everyone.

Abstract

Algorithm bias occurs when a computer program leads to an unfair decision by giving one group or population an advantage over another. This paper discusses the different places where this happens in the hiring process. Several algorithmic tools are used to make hiring decisions, including algorithms for sourcing, screening, prediction, and performance assessment. All of these tools can introduce bias at any point in the decision-making process, and these biases can cost people job opportunities. Therefore, this paper proposes several solutions to prevent them.

Keywords: algorithm bias, hiring algorithms, transparency in algorithms, race-aware algorithms, impact assessments.

Effects of Bias in Hiring Algorithms and Possible Solutions

Bias is the prejudice that causes one to favor a person or group over another (“Bias,” n.d.). Historically, biases have been a persistent problem in decision-making, and they continue to appear in new forms. Algorithm bias occurs when a computer program unfairly favors one population over another (Le, 2021). Nowadays, algorithms are a crucial part of decision-making, from search engine recommendations to programs that suggest which stocks to buy. Moreover, algorithms are used in the hiring process. These programs have made the process easier for job seekers and companies; however, they have also opened a new space for bias. Bias in hiring algorithms exacerbates inequalities; therefore, laws and constant supervision should govern how these programs are created and used.

Tools Used in the Hiring Process

To be selected for a job, a person passes through several stages of the hiring process. At each stage, an algorithm exists to help the recruiter find the right applicant; therefore, bias can be introduced at any point.

The first point of interaction with an applicant is the recruiting stage, where sourcing algorithms reach out to job seekers who match a desired profile. These algorithms notify people of new job openings; some target the people most likely to click on a posting, while others are designed to recreate certain decisions a recruiter has made before. Bogen (2019) argues that these designs are not always ideal, since the ultimate goal should be to find the candidates most suitable for the position. Related to this argument, Facebook was sued in 2016 because its job advertisement tool excluded African Americans and Latinos while favoring white Americans. This happened because the tool built its candidate pool from the profiles of current employees, which served to “replicat[e] racial or gender disparities” (Ajunwa, 2019, para. 6).

Because many applicants may apply for a single position, recruiters have to narrow the applicant pool by setting certain criteria. To accomplish this, many screening algorithms are designed to recreate previous screening decisions. Recreating past decisions, however, can repeat the prejudices and biases embedded in them (Bogen, 2019). For example, Amazon discontinued its recruitment tool because it penalized resumes identified as belonging to women. This happened because the tool was designed to “recognize word patterns in the resumes, rather than relevant skill sets, and these data were benchmarked against the company’s predominantly male engineering department to determine an applicant’s fit” (West, 2019, para. 14).
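
To make this mechanism concrete, the following simplified sketch (the resumes, words, and scoring rule are invented for illustration and do not come from Amazon’s actual tool) shows how a screening model that only learns word patterns from past hiring decisions absorbs whatever bias those decisions contained.

```python
from collections import Counter

# Hypothetical past screening decisions: each resume is a list of words plus
# whether that applicant was hired. The historical decisions skew toward one
# profile, mirroring a predominantly male department.
past_resumes = [
    (["java", "chess", "club", "captain"], 1),   # hired
    (["java", "chess", "club"], 1),              # hired
    (["java", "women's", "chess", "club"], 0),   # rejected
    (["java", "volunteer"], 0),                  # rejected
]

# "Train" by counting how often each word appears among hires vs. rejections.
hired_words, rejected_words = Counter(), Counter()
for words, hired in past_resumes:
    (hired_words if hired else rejected_words).update(words)

def screening_score(resume_words):
    """Score a new resume by how closely its words resemble past hires."""
    return sum(hired_words[w] - rejected_words[w] for w in resume_words)

# Two applicants with the same skills differ by a single word that was
# associated with past rejections, and the model ranks them differently.
print(screening_score(["java", "chess", "club"]))             # 2
print(screening_score(["java", "women's", "chess", "club"]))  # 1
```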

During the interviewing stage, some employers use tools that analyze a person’s gestures and word choice in an interview to evaluate soft skills. However, these tools have been found to be less accurate at analyzing the faces of people of color or the speech of people with certain accents (Yang & See, 2019). Finally, selection algorithms are used to choose among the remaining candidates. Some of these algorithms are programmed to determine which candidate is most likely to accept the job offer. According to an article in the Harvard Business Review, such algorithms “could subvert laws banning employers from asking about salary history directly, locking in… longstanding patterns of pay disparity” (Bogen, 2019, para. 17).

Lastly, besides deciding whom to hire, companies may also use algorithms to decide whom to fire. Some tools assess a person’s performance and whether they are meeting certain goals within the company. According to a November 2023 article in Forbes, Uber failed to “comply with the European Union’s algorithmic transparency requirements, which prohibits using AI or similar technology from being the sole decision-maker on actions that have ‘legal or other significant effects’ on people” (Kelly, 2023, para. 17). To create all these different hiring tools, a company first assesses its current needs and communicates them to a developer, who builds the program or works alongside the company to create it (Institute for the Future of Work, 2022).

Ways Bias Is Introduced into an Algorithm

Bias can be introduced into algorithms in different ways, whether intentionally or unintentionally. One way this happens is when the inherent biases of the people training or using the tools manifest themselves in the program (Jackson, 2021, p. 12). Moreover, the people using these tools have to choose adequate criteria that yield accurate predictions. If a chosen criterion does not apply equally to all the demographics it will be used on, disparities will appear in the results; for example, using the number of hours worked as a variable in a performance tool will unfairly evaluate women, who sometimes have to leave work to care for their children (Le, 2021). Besides reflecting the internal biases of the people creating or using the program, bias can also arise from an unrepresentative data collection: a demographic group may be dismissed because the dataset fails to accurately represent that group or population (Jackson, 2021, p. 12).
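
As a small illustration of the hours-worked example above, the following sketch (all names and numbers are hypothetical and not drawn from any real tool) shows how a performance score built on such a proxy variable can rank two employees with identical output differently.

```python
# Hypothetical performance records: two employees complete the same amount of
# work, but one logs fewer hours at the office because of caregiving duties.
employees = [
    {"name": "Employee A", "tasks_completed": 40, "hours_logged": 45},
    {"name": "Employee B", "tasks_completed": 40, "hours_logged": 38},
]

def performance_score(employee):
    # The tool rewards hours logged, a proxy variable that tracks personal
    # circumstances rather than actual output.
    return 0.5 * employee["tasks_completed"] + 0.5 * employee["hours_logged"]

for employee in employees:
    print(employee["name"], performance_score(employee))
# Identical output (tasks_completed), yet Employee B scores lower purely
# because of the hours-logged proxy.
```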

Proposed Solutions to Mitigate Algorithm Bias

Companies should ensure that they notify and provide equal opportunities to everyone. This requires constant supervision of their decisions and their tools to make sure no one is unfairly denied an opportunity. While one might argue that companies do not want to invest time and money into setting proper standards and verifying that their tools make fair decisions, keeping a biased model can prove even more costly for the company (Ebert, 2022). Biased algorithms lead to bad decisions that hurt a company in the long term, for example, by producing a less diverse team, which leads to products that are not proven to work for everyone, and by eroding positive public perception (Ebert, 2022).

One possible solution for companies to ensure they are making fair decisions is to incorporate race-conscious algorithms. Currently, algorithms are designed to ignore demographic attributes such as gender or race, largely so that companies can avoid discrimination lawsuits; nevertheless, these programs still manage to be biased. This design choice should be reconsidered, because allowing algorithms to be aware of race makes it easier to correct for circumstances that are not the same for everyone (Le, 2021). For example, a dataset can be adjusted to adequately represent all groups by collecting more data from underrepresented groups or by removing variables that do not apply equally across demographics. Although these strategies are considered race-conscious, laws should not prevent their implementation because they can ultimately lead to fair decisions (Kim, 2022, p. 39).
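
As an illustration of the dataset adjustment described above, the following simplified sketch (the group labels and counts are hypothetical) rebalances a training set so that an underrepresented group contributes as many records as the majority group; here, oversampling with replacement stands in for the preferable step of collecting more real data.

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility

# Hypothetical training records labeled with a demographic group.
# Group "B" is underrepresented, so a model fit to this data would mostly
# reflect group "A".
records = [{"group": "A"}] * 90 + [{"group": "B"}] * 10

counts = Counter(record["group"] for record in records)
target = max(counts.values())  # bring every group up to the largest group's size

balanced = []
for group in counts:
    members = [record for record in records if record["group"] == group]
    balanced.extend(members)
    # Oversample the smaller group with replacement until the groups are
    # equal in size.
    balanced.extend(random.choices(members, k=target - len(members)))

print(Counter(record["group"] for record in balanced))  # Counter({'A': 90, 'B': 90})
```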

Moreover, transparency in the process behind an algorithm should be encouraged. Companies should strive to be transparent about how they use their algorithms so the public can understand the process behind the final decisions. One way to maintain transparency is through impact assessments, which “would require public agencies and other algorithmic operators to evaluate their automated decision systems and their impacts on fairness, justice, bias, and other community concerns, and to consult with affected communities” (Le, 2021, p. 27). These impact assessments promote evaluation of, and reflection on, the possible biases and harms that algorithms might cause, which keeps their use transparent and their decisions fair and accountable. New York University’s AI Now Institute has created algorithmic impact assessments (AIAs) for government agencies to use (West, 2019). This framework consists of three types of review: internal, external, and public. It first looks for potential biases, then identifies biases that have already occurred, and lastly encourages federal agencies to “challenge algorithmic decisions that feel unfair” (West, 2019, Operators of algorithms must develop a bias impact statement section, para. 3).

To make sure that agencies and companies carry out impact assessments, laws requiring this form of evaluation are necessary. In addition, government agencies should hire data regulators who have the technical knowledge to understand the algorithms and who “can penalize companies for unfair and illegal practices” (Le, 2021, p. 28). To keep decision-making transparent, companies have to follow these laws and evaluate the impacts of their algorithms from the start of the hiring process and throughout it. Government agencies should also perform these impact assessments regularly to make sure no member of the public is treated unfairly and that companies are making good use of their tools. Knowing how the algorithms work also allows applicants to file a complaint if they were not hired for a position because of an unfair decision. Transparency in the hiring process is a necessity because it ensures that everyone will be treated fairly.

One commonly proposed solution to mitigate algorithm bias is to require more human supervision in the overall process. However, involving more humans in supervising algorithmic decisions might not be the right approach, because humans are inherently biased. Also, assigning a person to check the final decision would be difficult because the data are too extensive for a human to verify (Solove & Matsumi, 2023, p. 17). Even though humans are biased and are already part of developing the algorithms, the idea that adding more people could be beneficial should not be dismissed. Constant supervision is always worthwhile, because even with modern solutions like race-aware algorithms, bias can still occur to some extent. To make sure that adding more supervision results in improvement, laws could specify where each supervisor should be assigned and establish the goals expected from these additions (Solove & Matsumi, 2023, p. 17). Furthermore, algorithms should only complement human decisions (West, 2019); therefore, involving more humans in the process allows more control over each case and decision.

Overall, algorithm bias in hiring tools can reduce people’s employment opportunities because the program fails to evaluate each applicant correctly. It can happen at any stage of the hiring process, whether because the dataset used to train the algorithm is incomplete or because a chosen criterion does not fairly evaluate people from all the demographics being analyzed. Just like humans, algorithms can be biased; nonetheless, they are useful tools that can help us reach informed decisions if we use them wisely. With job postings receiving thousands of applications, the need for tools that simplify the process is understandable. Therefore, to prevent or reduce algorithm bias, several solutions may be effective, such as race-conscious algorithms, human supervision, and impact assessments. Rather than relying on a single one, all of these solutions should be implemented together to ensure transparent and fair decisions.

References

Ajunwa, I. (2019, October 8). Beware of automated hiring [Opinion]. The New York Times. https://www.nytimes.com/2019/10/08/opinion/ai-hiring-discrimination.html

Bias. (n.d.). In Merriam-Webster.com dictionary. https://www.merriam-webster.com/dictionary/bias

Bogen, M. (2019, May 6). All the ways hiring algorithms can introduce bias. Harvard Business Review. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias

Ebert, A. (2022, February 9). 7 Reasons for bias in AI and what to do about it. Inside Big Data. https://insidebigdata.com/2022/02/09/7-reasons-for-bias-in-ai-and-what-to-do-about-it/

Institute for the Future of Work. (2022, September 27). Algorithmic hiring systems: What are they and what are the risks? https://www.ifow.org/news-articles/algorithmic-hiring-systems

Jackson, M. C. (2021). Artificial intelligence & algorithmic bias: The issues with technology reflecting history & humans. Journal of Business & Technology Law, 16(2), 19. https://digitalcommons.law.umaryland.edu/jbtl/vol16/iss2/5

Kelly, J. (2023, November 4). How companies are hiring and reportedly firing with AI. Forbes. https://www.forbes.com/sites/jackkelly/2023/11/04/how-companies-are-hiring-and-firing-with-ai/

Kim, P. T. (2022, January 30). Race-aware algorithms: Fairness, nondiscrimination, and affirmative action. California Law Review, 110, 58. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4018414

Le, V. (2021, February 18). Algorithmic bias explained: How automated decision-making becomes automated discrimination. The Greenlining Institute. https://greenlining.org/publications/algorithmic-bias-explained/

Solove, D. J., & Matsumi, H. (2023, October 16). AI, algorithms, and awful humans. Fordham Law Review, 96 (forthcoming 2024), 19. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4603992

West, D. M. (2019, May 22). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms | Brookings. Brookings Institution. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/

Yang, J. R., & See, R. (2019, November 9). The promise and threat of artificial intelligence in combating (or worsening) employment discrimination. NAPABA convention session #506. https://cdn.ymaws.com/www.napaba.org/resource/resmgr/2019_napaba_con/cle/cle_506.pdf

License

The Lion's Pride, Vol. 17 Copyright © 2024 by Ana Herrera is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
