Using artificial intelligence (AI) in hiring has grabbed the attention of lawmakers in California, New York, and other states.  The gist of the proposed California law is that any AI used in hiring would have to show no adverse impact before implementation, adverse impact would have to be reviewed annually, and the AI could not be used if it were found to have adverse impact.  Exceptions for business necessity would still be made.  Also, the AI could still be used if it had adverse impact, as long as that impact was less than that of a previously used process.  The proposed law would represent an extension of the federal anti-bias laws that apply to pre-employment tests.

The proposed New York law would require greater transparency in the use of AI in selection.  Like the California bill, AI assessments would have to demonstrate a lack of bias before being used.  Companies would have to disclose to candidates when they use AI technology for hiring and the specific job qualifications or characteristics the AI is evaluating.  I have no idea how the latter helps reduce discrimination.

There are some important impacts and nuances to these proposed laws:

  1. The California law uses the 4/5ths rule as the determiner of adverse impact.  Federal agencies are moving away from this standard toward one that focuses more on the scored differences between groups.  I would advise using the latter standard when assessing adverse impact; a quick sketch of both calculations follows this list.
  2. The laws ask that the AI be pre-tested for adverse impact but, interestingly, not for validity unless there is adverse impact.
  3. Speaking of the pre-tests, it may be a challenge for employers to demonstrate a lack of adverse impact on applicant groups since applicants are not required by law to provide information on their race, sex, or age when taking a test.  Of course, if video interviewing is part of the AI, then the system can gather that information.
  4. I found it interesting that companies could use systems that had adverse impact as long as they had less bias than the previous test.  This is not exactly an incentive for developers to create fair tests.
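For readers who want to see the difference between the two standards mentioned in point 1, here is a minimal sketch.  The group labels, counts, and scores are hypothetical, and the standardized mean difference (Cohen's d) is just one common way to express "scored differences between groups."

```python
# Hypothetical example: two ways to quantify adverse impact for one assessment.
import math
import statistics

def adverse_impact_ratio(focal_selected, focal_total, ref_selected, ref_total):
    """4/5ths rule: ratio of the focal group's selection rate to the
    reference group's rate; values below 0.80 are flagged."""
    return (focal_selected / focal_total) / (ref_selected / ref_total)

def standardized_mean_difference(ref_scores, focal_scores):
    """Cohen's d on raw assessment scores, using the pooled within-group SD.
    This is the 'scored differences between groups' style of analysis."""
    n1, n2 = len(ref_scores), len(focal_scores)
    s1, s2 = statistics.stdev(ref_scores), statistics.stdev(focal_scores)
    pooled_sd = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (statistics.mean(ref_scores) - statistics.mean(focal_scores)) / pooled_sd

# Made-up applicant data
air = adverse_impact_ratio(focal_selected=42, focal_total=100,
                           ref_selected=60, ref_total=100)
print(f"Adverse impact ratio: {air:.2f} (below 0.80 fails the 4/5ths rule)")

ref_scores = [72, 75, 80, 68, 77, 83, 71, 79]
focal_scores = [70, 69, 74, 66, 72, 75, 68, 71]
d = standardized_mean_difference(ref_scores, focal_scores)
print(f"Standardized mean difference: {d:.2f}")
```

The two numbers can lead to different conclusions: a selection-rate ratio can pass or fail the 4/5ths threshold somewhat arbitrarily at small sample sizes, while the score-based difference describes how far apart the groups actually are on the assessment itself.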

There’s some common sense and science to using any selection system, especially one that is strictly data-driven, including:

  1. Know what the AI is scoring.  If the algorithm doesn’t make sense, then it is likely taking advantage of something unusual in the data from which it was developed, which means it is unlikely to be a valuable predictor in the future.  And I don’t think you want to explain to a jury why you are hiring people based on how many times they tug their earlobe during an interview.
  2. Look closely at missing demographic data.  As I mentioned above, providing this information is optional for applicants, and plenty of people will not give it.  A vendor should be able to tell you about their experience with missing cases and how those cases affect your adverse impact figures (a simple sensitivity check is sketched after this list).
  3. Machine-based scoring models are geared towards finding very small, but consistent, differences among people.  This is valuable if you hire 100,000 people a year (or are trying to get someone to buy a different brand of soap), but less so in most circumstances.  Be sure to evaluate the business impact of the scoring model, particularly those elements that may have adverse impact.
  4. You can always ask the vendor to use only scoring elements that do not have adverse impact.  If it also turns out that the assessment doesn’t have any validity, then it’s time to find a new test.
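On point 2, one simple way to understand what missing demographic data can do to an adverse impact analysis is to bound the result: compute the adverse impact ratio as observed, then again under best- and worst-case assumptions about who the non-reporters were.  The sketch below does this with made-up counts; a real analysis would be more careful, but the idea is the same.

```python
# Hypothetical sensitivity check: how much could non-reported demographics
# swing an adverse impact ratio?  All counts are made up.

def air(focal_selected, focal_total, ref_selected, ref_total):
    """Adverse impact ratio: focal selection rate / reference selection rate."""
    return (focal_selected / focal_total) / (ref_selected / ref_total)

# Applicants who reported a group
ref_selected, ref_total = 55, 90      # reference group
foc_selected, foc_total = 35, 70      # focal group

# Applicants who left demographics blank
mis_selected, mis_total = 20, 40

observed = air(foc_selected, foc_total, ref_selected, ref_total)

# Worst case for the focal group: every rejected non-reporter was focal,
# every selected non-reporter was in the reference group.
worst = air(foc_selected, foc_total + (mis_total - mis_selected),
            ref_selected + mis_selected, ref_total + mis_selected)

# Best case for the focal group: the reverse assignment.
best = air(foc_selected + mis_selected, foc_total + mis_selected,
           ref_selected, ref_total + (mis_total - mis_selected))

print(f"Observed adverse impact ratio: {observed:.2f}")
print(f"Possible range once missing cases are assigned: {worst:.2f} to {best:.2f}")
```

If even the worst-case figure clears your threshold, the missing cases are not the deciding factor; if the assignment of non-reporters flips the conclusion, the vendor needs a better answer than "some people didn't report."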

There’s no inherent reason to be suspicious of AI in selection.  However, we do not want to rely on “black boxes” either.