Regular readers have seen my posts on how automation has impacted the skills required for jobs among hourly workers. Since the beginning of the industrial age, technology has been used to reduce physical labor and repetitive tasks. Whether it is in fast food or warehouses, technology has changed how humans fit into the labor equation.
The COVID pandemic has accelerated this process. While futurists can disagree about how fast technology changes were coming to work, employees being forced away from the office has accelerated the pace at which companies have implemented robotic process automation (RPA) and artificial intelligence (AI) in order to meet customer needs and improve efficiency. What is different now is that this technology is being applied more to salaried positions than before. Whether the new jobs created by this technology will equal the number of people displaced by it is an important question for future college graduates. But these changes should also get companies thinking about their recruitment, validated selection, and training processes.
One of my clients does back office processing of financial information. This is exactly the kind of work where RPA can eventually take over some of the tasks currently done by their analysts. Currently, we test job candidates for their willingness to follow procedures and their detail orientation (among other characteristics). If RPA were applied to this job, we would need to analyze which skills, abilities, and personal characteristics remained valid and which new ones would be needed. This would likely lead to eliminating certain parts of the current test and emphasizing others. It would likely impact recruiting and training as well. In a broader sense, when organizational change comes, the updating of recruitment, selection, and training of employees is usually done (seemingly) as an afterthought. As companies apply RPA and AI, and nearly all of them will in one way or another, they should be prepared for the impact on how employees do their work, not just whether they will still have a job.
Using artificial intelligence (AI) in hiring has grabbed the attention of lawmakers in California, New York, and other states. The gist of the proposed California law is that any AI used in hiring would have to show no adverse impact before implementation, adverse impact would have to be reviewed annually, and the AI could not be used if found to have adverse impact. Exceptions for business necessity would still be made. Also, a test could still be used if it had adverse impact but the impact was less than that of a previously used process. The proposed law would represent an extension of the federal anti-bias laws that apply to pre-employment tests.
The proposed New York law requires greater transparency in the use of AI in selection. Like the California bill, AI assessments would have to demonstrate a lack of bias before being used. Companies would have to disclose to candidates when they use AI technology for hiring and the specific job qualifications or characteristics the AI is evaluating. I have no idea how the latter helps reduce discrimination.
There are some important impacts and nuances to these proposed laws:
- The California law uses the 4/5ths rule as the determiner of adverse impact. Federal agencies are moving away from this standard to one that focuses more on the score differences between groups. I would advise using the latter standard when assessing adverse impact.
- The laws ask that the AI be pre-tested for adverse impact, but interestingly, not for validity, unless adverse impact is found.
- Speaking of the pre-tests, it may be a challenge for employers to demonstrate lack of adverse impact on applicant groups since applicants are not required by law to provide information on their race, sex, or age when taking a test. Of course, if video interviewing is part of the AI, then the system can gather that information.
- I found it interesting that companies could use systems that had adverse impact as long as they had less bias than the previous test. This is not exactly an incentive for developers to create fair tests.
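The two adverse-impact standards mentioned above (the 4/5ths rule and group score differences) can be sketched in a few lines of code. The numbers below are hypothetical and purely illustrative:

```python
# Sketch of two adverse-impact checks (all numbers hypothetical):
# 1. The 4/5ths rule compares the selection rates of two groups.
# 2. A standardized mean score difference (Cohen's d) compares scores directly,
#    which is closer to the standard federal agencies are moving toward.

def four_fifths_ratio(selected_a, applied_a, selected_b, applied_b):
    """Ratio of the lower selection rate to the higher one.
    A value below 0.8 flags potential adverse impact under the 4/5ths rule."""
    rate_a = selected_a / applied_a
    rate_b = selected_b / applied_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference between two groups' test scores."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / pooled_var**0.5

# Hypothetical example: 50 of 100 group-A applicants hired vs. 30 of 100 group-B.
ratio = four_fifths_ratio(50, 100, 30, 100)
print(f"4/5ths ratio: {ratio:.2f}")  # 0.60 -> below 0.8, flags adverse impact

# Hypothetical score distributions for the same two groups.
d = cohens_d(mean_a=75, sd_a=10, n_a=100, mean_b=70, sd_b=10, n_b=100)
print(f"Cohen's d: {d:.2f}")  # 0.50 -> a half standard deviation difference
```

Note that the two checks can disagree: a test with a small score difference can still fail the 4/5ths rule when the hiring bar sits near one group's mean, which is one reason the score-based standard is gaining favor.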
There is some common sense and science to applying to any selection system, especially one that is strictly data driven, including:
- Know what it is scoring. If the AI algorithm doesn’t make sense, then it is likely taking advantage of something unusual in the data from which it was developed. This means that it is unlikely to be a valuable predictor in the future. And, I don’t think you want to tell a jury why you are hiring people based on how many times they tug their ear lobe during an interview.
- Look closely at missing demographic data. As I mentioned above, providing this information is optional for applicants and plenty of people will not give it. A vendor should be able to tell you about their experience with missing cases and how it affects your adverse impact.
- Machine-based scoring models are geared toward finding very small, but consistent, differences among people. This is valuable if you hire 100,000 people a year (or are trying to get someone to buy a different brand of soap), but less so in most circumstances. Be sure to evaluate the business impact of the scoring model, particularly those elements that may have adverse impact.
- You can always ask the vendor to use only scoring elements that do not have adverse impact. If it also turns out that the assessment doesn’t have any validity, then it’s time to find a new test.
There’s no inherent reason to be suspicious of AI in selection. But we should not rely on “black boxes” either.