Ah, so much for the summer blogging break.  I have so much to say about WFH and the delta variant whiplash.  But, that is for another time.  What I have been thinking about lately is how there is so much available talent (people looking for work and to change jobs) and what employers are doing about it.

Typically, this kind of environment is great news for companies that are hiring.  That is not entirely the case now: with so much employee movement, job candidates seem to have the upper hand in terms of salary and WFH flexibility.  However, employers do have a lot of options as well.  The only problem is that some are overplaying their hand.

Case in point is the use of resume screening software.  Don’t get me wrong—this type of tech is something that companies need to use.  It is an efficient and objective way to go through resumes.  However, as this article (thanks to Denis Adsit for sending this my way; the link requires a subscription) points out, employers are likely missing out on lots of good candidates.  It is not because the algorithms don’t work.  It’s because they are filled with untested assumptions provided by the hiring companies.

It is amazing how many myths companies have about who they hire.  For instance, each time I have done a validation study in a contact center, line managers insist that previous experience is a plus.  And each time the data does not support that assumption.1  If you attempted to validate similar assumptions, I am sure that you would find that fewer than 50% were actually good predictors of performance.
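If you want to test one of these assumptions yourself, the math is not exotic.  Below is a minimal sketch in Python, assuming you can match a resume attribute (say, prior contact-center experience) to a performance measure for past hires; the column names and the tiny dataset are hypothetical.

```python
# Minimal sketch of a validation check: does a resume attribute actually
# predict performance?  The column names and the tiny dataset below are
# hypothetical; in practice you would pull matched data on real hires.
import pandas as pd
from scipy import stats

hires = pd.DataFrame({
    # 1 = had prior contact-center experience, 0 = did not
    "prior_experience": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    # e.g., a quality-assurance score after six months on the job
    "performance":      [62, 78, 55, 70, 81, 74, 60, 79, 66, 72],
})

# Point-biserial correlation: a binary predictor against a continuous outcome.
r, p = stats.pointbiserialr(hires["prior_experience"], hires["performance"])
print(f"correlation = {r:.2f}, p-value = {p:.3f}")

# If r is near zero (or negative, as I keep finding in contact centers),
# "experience is a plus" does not belong in your screen.
```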

When these myths are plugged into resume screening algorithms, they screen out people essentially at random.  This means that you have fewer resumes to read, but it also means that the ones you are reading are no better or worse than the ones you don’t.

Another problem with the data companies give to the algorithms is that the choices are draconian because they are used as a thumbs-up or thumbs-down screen.  A better approach would be one where certain elements are given points (again, based on a validation study) and a cut-score is used.  For instance, let’s say that your algorithm screens out people who have had more than 3 jobs in 5 years.  You might be missing out on people who have several other very attractive things in their resumes.  And you would potentially be interviewing people with fewer attractive things on their resume who stayed at a job for 5 years.  Is that one thing really such a deal breaker at this stage of the process?
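To make that concrete, here is a minimal sketch of what a points-plus-cut-score screen could look like.  The elements, weights, and cut-score are invented for illustration; in a real screen, each of them should come out of a validation study.

```python
# Sketch of a points-plus-cut-score resume screen instead of hard knockout
# rules.  Every element, weight, and the cut-score itself is hypothetical;
# in practice they should be set from a validation study, not gut feel.
from dataclasses import dataclass

@dataclass
class Resume:
    jobs_last_5_years: int
    longest_tenure_years: float
    has_relevant_certification: bool
    bilingual: bool

def score(resume: Resume) -> int:
    points = 0
    if resume.jobs_last_5_years <= 3:
        points += 2   # stability earns points, but job hopping is not fatal
    if resume.longest_tenure_years >= 3:
        points += 2
    if resume.has_relevant_certification:
        points += 3
    if resume.bilingual:
        points += 3
    return points

CUT_SCORE = 5  # hypothetical threshold

# Four jobs in five years would fail a knockout rule, but the other
# strengths carry this resume past the cut-score.
candidate = Resume(jobs_last_5_years=4, longest_tenure_years=2,
                   has_relevant_certification=True, bilingual=True)
print(score(candidate), score(candidate) >= CUT_SCORE)  # 6 True
```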

The other hurdle that companies build into the algorithms is unnecessary educational requirements.  I’ve written about this before, so I won’t get into it again here.  However, if you are going to validate other assumptions about what on a resume is predictive of success, you should do the same for educational requirements.  This will widen your candidate pool and likely leave you with one that is equally, if not more, qualified, and more diverse as well.

Resume screening software is a very useful tool for pre-screening resumes.  Like any other computer program, it is only as good as the data that goes into it.  By ensuring that what you feed the algorithms is based on fact rather than myth, you will get a lot more out of the screens.

1 If anything, my experience shows that for contact centers, the opposite is true—those who worked in one before do worse in their next job.  Why?  Well, if they were good at the job, they would not have made a job change in the first place.  Also, there is a lot of unlearning that has to happen when training veterans on the customer management software.