Big data’s moment has arrived, and it’s already redefining business processes, overhauling decision-making, and shaping predictions about the future of hiring. Although big data promises to give recruiters newer and better tools to find the right applicants, it can also raise new ethical questions.
Big data can be “the great equalizer” because it doesn’t care about what college someone went to or who their parents are; it also doesn’t factor in gender, race, or disability status. Whereas companies once relied on gut feelings about applicants when making hiring decisions, they can now use artificial intelligence (AI) and analytics to parse data on current employees and apply what they learn to the candidates in their applicant pool.
In theory, it’s a great solution. But in practice, it can introduce unintentional and harmful bias into the hiring process.
No One-Size-Fits-All Approach
Using algorithms to inform hiring decisions comes with the risk that the datasets they’re built on lack the necessary information and produce false results. Beyond that, the systems themselves could be poorly designed or outdated.
All of this is to say that employers should be careful when integrating big data into their hiring processes. Data has terrific potential, but it isn’t a foolproof tool. The goal is to find ways to use it in pre-employment testing while staying alert to the biggest challenges big data can introduce.
The most important thing to remember is not to go all-in without understanding the risks. Things can go wrong quickly if HR departments aren’t mindful. For instance, geolocating applicants with big data can be an incredibly valuable tool in candidate selection. But a business that recruits primarily in the suburbs, far from city centers, can quickly filter out what could have been a diverse workforce. Removing bias from AI may not be an immediate option, but it’s crucial to be mindful of the ramifications that bias can create.
Many projects to develop AI based on big data have resulted in biased and discriminatory algorithms. Why? The answer lies in how the algorithms are created. When the people who construct algorithms and evaluate data come from similar backgrounds, their personal experiences and biases can influence their work. An algorithm’s results can only be as good as the data and instructions used to build it, so developers must account for potential implicit bias in AI algorithms.
With all this in mind, here are three specific things HR teams must know to reap the benefits of using big data responsibly moving forward:
1. The more selection depends on data, the more analysis is necessary to prevent adverse impact.
While the Uniform Guidelines on Employee Selection Procedures allow validity to trump adverse impact, Section 15(C)6 states that alternate selection procedures should be investigated and analyzed in light of their impact, with the scope, method, and findings documented. This is why content validity is so important in pre-employment testing. If an employer can show that its test simulates essential tasks from the job and can justify including those tasks in the test, it has a strong defense if adverse impact is found; but if all it can say is that it analyzed data on which applicants preferred one color over another, or some other big data construct, the risk remains a serious threat.
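To make that kind of analysis concrete, here is a minimal sketch in Python of the four-fifths rule check described in the Uniform Guidelines (Section 4D): compare each group’s selection rate to the highest group’s rate and flag anything below 80%. The group names and counts are purely hypothetical illustration data, not results from any real selection process.

```python
# Minimal sketch of an adverse impact check based on the four-fifths rule
# (Uniform Guidelines, Section 4D). Group names and counts are hypothetical.

applicants = {"Group A": 200, "Group B": 150}   # candidates assessed
hires      = {"Group A": 60,  "Group B": 30}    # candidates selected

# Selection rate for each group: selected divided by assessed.
selection_rates = {g: hires[g] / applicants[g] for g in applicants}
highest_rate = max(selection_rates.values())

# Flag any group whose rate falls below four-fifths (80%) of the highest rate.
for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    status = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")
```

The same arithmetic can be repeated for every demographic category an employer tracks; the point is that any data-driven selection step deserves this scrutiny before, and after, it goes live.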
2. Work with vendors that focus on diversity.
Be intentional about working with data companies that have an excellent track record on diversity. Data on its own isn’t enough to prevent bias in the hiring process, so it’s essential to have a partner and a solution that work with you toward that goal. Scrutinize what prospective vendors are actually doing to combat bias, and pay close attention to what you find. If they have not been proactive on this front, it might be time to find a new partner.
3. Don’t set it and forget it.
Finally, work with your technical colleagues to ensure any data vendors understand what kinds of interventions could be necessary to address algorithmic bias. Conscientious vendors share validation results for their algorithms, highlighting whether their results introduce any bias, and conscientious HR leaders make sure the technology they implement has been updated and will continue to be.
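As a rough illustration of what “not setting it and forgetting it” can mean in practice, here is a hedged Python sketch that re-runs the same impact-ratio check on each new cycle of algorithm-assisted decisions so that drift shows up early. The cycle data and the 0.8 threshold are assumptions for illustration; a real audit would use whatever validation data your vendor provides.

```python
# Sketch of a recurring audit: recompute impact ratios every hiring cycle so
# drift in an algorithm's behavior is caught early. All data is hypothetical.
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: list of (group, was_selected) pairs for one hiring cycle."""
    assessed = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        assessed[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / assessed[g] for g in assessed}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

def audit(cycle, decisions, threshold=0.8):
    """Flag any group whose impact ratio falls below the threshold."""
    for group, ratio in impact_ratios(decisions).items():
        if ratio < threshold:
            print(f"{cycle}: {group} impact ratio {ratio:.2f} is below {threshold}; raise it with the vendor")

# Example: audit two quarterly batches of algorithm-assisted decisions.
q1 = [("Group A", True), ("Group A", True), ("Group A", False),
      ("Group B", True), ("Group B", False), ("Group B", False)]
q2 = [("Group A", True), ("Group A", True), ("Group A", True),
      ("Group B", True), ("Group B", False), ("Group B", False)]
audit("Q1", q1)
audit("Q2", q2)
```

Running a check like this on a schedule, and again whenever the vendor ships an updated model, keeps the “will continue to be updated” part of the bargain honest.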
Big data will likely continue to play an influential role in the future of every industry, and HR is no exception. That data can be incredibly useful, but it doesn’t come without risks. As an employer, be mindful of the potential pitfalls and intentional about avoiding them to keep bias in AI algorithms out of your hiring process.
Do you want to learn more about how to use big data in your hiring processes? Contact us today!