Taking Mythology Out of Resume Screening

Ah, so much for the summer blogging break.  I have so much to say about WFH and the delta variant whiplash.  But, that is for another time.  What I have been thinking about lately is how there is so much available talent (people looking for work and to change jobs) and what employers are doing about it.

Typically, this kind of environment is great news for companies that are hiring.  That is not entirely the case now because, with so much employee movement, job candidates seem to have the upper hand in terms of salary and WFH flexibility.  However, employers do have a lot of options as well.  The only problem is that some are overplaying their hand.

Case in point is the use of resume screening software.  Don’t get me wrong—this type of tech is something that companies need to use.  It is an efficient and objective way to go through resumes.  However, as this article (thanks to Denis Adsit for sending this my way and the link requires a subscription) points out, employers are likely missing out on lots of good candidates.  It is not because the algorithms don’t work.  It’s because they are filled with untested assumptions provided by hiring companies.

It is amazing how many myths companies have about who they hire.  For instance, each time I have done a validation study in a contact center, line managers insist that previous experience is a plus.  And each time the data does not support that assumption.1  If you attempted to validate similar assumptions, I am sure that you would find that fewer than 50% were actually good predictors of performance.

When these myths are plugged into resume screening algorithms, they screen out people essentially at random.  This means that you have fewer resumes to read, but it also means that the ones you are reading are no better or worse than the ones you don’t.

Another problem with the data companies give to the algorithms is that the choices are draconian because they are used as a thumbs-up or thumbs-down screen.  A better approach would be one where certain elements are given points (again, based on a validation study) and a cut-score is used.  For instance, let’s say that your algorithm selects out people who have had more than 3 jobs in 5 years.  You might be missing out on people who have several other very attractive things in their resumes.  And you would potentially be interviewing people with fewer attractive things on their resumes who stayed at a job for 5 years.  Is that one thing really such a deal breaker at this stage of the process?
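A minimal sketch of the difference between the two approaches, with made-up resume elements, weights, and a cut-score (a real system would derive all of these from a validation study):

```python
# Sketch: binary knockout screen vs. weighted points with a cut-score.
# The features, point values, and cut-score here are hypothetical;
# each weight should come from a validation study, not intuition.

def knockout_screen(resume):
    # Thumbs-up/thumbs-down: one draconian rule rejects the candidate outright.
    return resume["jobs_last_5_years"] <= 3

def points_screen(resume, cut_score=3):
    # Each validated element earns points; no single item is a deal breaker.
    points = 0
    points += 2 if resume["relevant_certification"] else 0
    points += 2 if resume["years_relevant_experience"] >= 2 else 0
    points += 1 if resume["jobs_last_5_years"] <= 3 else 0
    points += 1 if resume["promoted_in_prior_role"] else 0
    return points >= cut_score

candidate = {
    "jobs_last_5_years": 4,          # fails the knockout rule
    "relevant_certification": True,
    "years_relevant_experience": 5,
    "promoted_in_prior_role": True,
}

print(knockout_screen(candidate))  # False: rejected on one item alone
print(points_screen(candidate))    # True: other strengths outweigh the one flag
```

The same candidate is rejected by the knockout rule but passes the points screen, which is exactly the distinction the paragraph above draws.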

The other hurdles that companies place in front of candidates in the algorithms are unnecessary educational requirements.  I’ve written about this before, so I won’t get into it again here.  However, if you are going to validate other assumptions about what is predictive of success on resumes, you should do the same regarding educational requirements.  This will widen your pool and likely lead to an equally, if not more, qualified pool and one that is more diverse.

Resume screening software is a very useful tool in pre-screening resumes.  Like any other computer program, it is only as good as the data that goes into it.  By ensuring that what you provide the algorithms is based on fact rather than myth, you will get a lot more out of the screens.

1 If anything, my experience shows that for contact centers, the opposite is true—those who worked in them before do worse in the next job.  Why?  Well, if they were good at the job, they would not have made a job change in the first place.  Also, there is a lot of unlearning that has to go on when training veterans on the customer management software.

Removing Unnecessary Employment Barriers

Let’s play some DE&I trivia!  As many of you know, the landmark case in employment discrimination is Griggs v. Duke Power.  But, what was the aspect of Duke Power’s hiring that got them into court?

If you said their use of pre-employment tests, you’d only be partially right.  The decision was also based on the use of discriminatory educational requirements (in this instance, a high school diploma).  Interestingly, after that, tests got a bad name, but companies continued to use school credentials with little or no problem.

As the US economy and culture have pushed more and more students toward college, racial disparities in educational attainment have persisted.  Yet, companies rarely question whether asking for high school or college degrees for certain jobs really gets them better candidates.  In some cases, this requirement is a classic “like me” bias.

Of course, the only way to see if a high school or college degree is necessary for a job is to conduct a job analysis and compare the knowledge, skills, and abilities it requires with a high school or college curriculum.  Yes, I want my surgeon to have an MD, thank you very much. Far too often companies have used degrees as a de facto job requirement without ever thinking about their impact on organizational performance (are we turning away qualified people?) or fairness.  This is particularly true in IT, where there are many self-taught people in the field.

Due to a confluence of factors, some big companies have rethought their use of degrees as qualifications.  Besides leading to potentially more diverse hiring, this will also save them money (though it will be an economic boon to the new hires).  It could also lead to lower college enrollment and lower higher education costs.  More importantly, it would mark a paradigm shift away from associating all white collar jobs with college degrees.

One can argue that getting a college degree shows tenacity and commitment over a long period of time.  And I would agree.  But, there are other ways to show this as well.

Change only comes when we do things in a different way.  And solutions to long term problems often require big actions.  Removing high school or college degrees as job qualifications when they are unnecessary removes a significant barrier to employment for racial minorities that could have an impact at your company.

Hiring For and Developing Resilience

I frequently hear clients talk about how they need to hire people who are resilient.  When I press them on what that means to them, they come up with words and phrases like:

Bends, but doesn’t break

Learns from adversity

Performs well under stress

Doesn’t take work setbacks personally

These things are all true and part of the personality trait Emotional Stability, which can be a good predictor for some jobs.  But the aspect of resilience that gets overlooked, yet can be equally important for employee selection and development, is sociability.  While the stereotype of the resilient person is one who swallows his/her/their emotions and hunkers down, there is scientific evidence that we build resilience when we reach out to, and accept help from, others.

This is useful in selection in that it can alter the types of tests that we give and what we look for in responses to interview questions.  For instance, when asking, “Tell me about when you had to meet a tight deadline,” an answer like, “I reached out to my team and asked how they could help” shows more resilience than, “I put everything aside and worked by myself until I completed the assignment.”  This additional dimension is useful in interpreting validated personality tests as we can then look for people who score high on Emotional Stability and willingness to work with others.

For development, we can teach people the power of reaching out to others during difficult times.  For managers, this means offering assistance to those who are struggling rather than waiting for them to pull themselves up by the bootstraps.  For individual contributors, this requires a message that reaching out to others when you need help builds resilience and is not a sign of weakness.  Of course, these messages require reinforcement by senior management so that they become part of the culture.

To see if resilience is a key part of jobs at your company, you can do the following:

  1. Conduct a job analysis.  Whether you use surveys or interviews, you can gather data about how much stress or pressure people feel in their jobs.
  2. Find out what effective resilience looks like.  During the job analysis, have people describe critical incidents where resilience was (and wasn’t) shown.

This data can be used for selecting future hires by:

  1. Sourcing a validated and reliable instrument that measures Emotional Stability and willingness to communicate with others (and other tests which measure important aspects of the work as found in your job analysis). 
  2. Administering the test(s) to incumbents.
  3. Gathering measures of how participants are performing on the job, including resilience.
  4. Analyzing the data to see if the measures of Emotional Stability and communication are correlated with measures of resilience and/or performance.
  5. Using the results to screen future candidates.
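The correlation check in step 4 can be sketched in miniature; the scores and ratings below are invented for illustration, and a real validation study needs a far larger sample plus significance testing:

```python
# Sketch of step 4: do incumbents' Emotional Stability scores track
# supervisor ratings of resilience? Data is made up for illustration.
from statistics import mean, pstdev

def pearson_r(xs, ys):
    # Pearson correlation from population covariance and standard deviations.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

emotional_stability = [62, 55, 78, 81, 47, 70, 66, 90, 58, 73]    # test scores
resilience_ratings  = [3.1, 2.8, 4.0, 4.2, 2.5, 3.6, 3.3, 4.5, 2.9, 3.8]

r = pearson_r(emotional_stability, resilience_ratings)
print(f"r = {r:.2f}")  # a meaningful positive r supports using the test to screen
```

If the correlation is weak or absent, step 5 never happens: the measure does not go into the screening process.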

The job analysis data can be used for developing employees by:

  1. Sourcing or designing training materials that address the critical incidents described in the job analysis.  The more behavior/role playing in the training, the better.
  2. Gathering measures of how participants are performing on the job, including resilience.
  3. Conducting the training and gathering feedback from participants.
  4. Measuring performance, including resilience, after enough time to see the impact of the training.
  5. Making adjustments to the training material based on the feedback and performance data.

Note that in each case you are seeking to demonstrate the impact of improving resilience in your organization. Just as importantly, you are establishing its importance in the company and taking steps to weave it into your culture.

Let Your Exit Interviews Leave the Building

One of the most intuitively appealing HR concepts is that of the exit interview.  If we only knew what was going through the mind of those who chose to leave our company, we could fix our turnover problems.  The thing is that there is more than enough research data to show that exit interviews are not useful in predicting or fixing turnover.  Yet, just the other day, I got a newsletter e-mail from a reputable management publication with suggestions on how to make my exit interviews better.

Exit interviews are not effective for several reasons, including:

  1. Low response rates. There really is not an upside for the leaving employee to participate, so why go through the stress and confrontation?  So, whatever data that you get is unlikely to be representative of the people who leave.

  2. Lack of candor.  Most people who would be willing to participate are also not very willing to burn bridges.  So their responses are going to be more about them than your organization.

  3. What do you think the leavers are going to tell you that you should not already know?  If a particular manager has higher turnover than the organization at large, it is probably because he/she/they is treating people poorly.  You do not need an exit interview to figure that out.

It is the last point that deserves a bit more attention.  The biggest problem with the concept of exit interviews is that they are reactive, trying to put the horses back in the barn, so to speak.  To keep turnover down, organizations should be addressing those things that lead to turnover before they become significant issues.  Identifying and addressing the causes of turnover requires a commitment to gathering data and acting upon it.  Two steps you can take include:

  1. Using turnover as a performance measure when validating pre-employment tests.  You can lower churn for many entry level jobs by understanding which people are more likely to stay in the position and use that information in screening candidates.
  2. If you think you are going to get good information from people who are no longer engaged with your organization during the exit interview, why not get it from those who still are engaged and more likely to be candid? When you gather employee engagement data through short surveys over time, you can determine what the leading indicators of turnover are.  It takes commitment to view surveys as a process rather than events, but doing so can provide a high level of insight into employee turnover.
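A toy illustration of the second point, with invented pulse-survey data (a real analysis would use survival models or regression on far more observations over time):

```python
# Sketch: treating pulse-survey engagement as a leading indicator of turnover.
# Each record is (average engagement over recent pulses, left within the
# following year). The numbers are invented for illustration only.
from statistics import mean

history = [
    (4.2, False), (3.9, False), (2.1, True), (4.5, False), (2.8, True),
    (3.5, False), (2.4, True), (4.0, False), (3.1, False), (2.6, True),
]

stayers = [score for score, left in history if not left]
leavers = [score for score, left in history if left]

print(f"mean engagement, stayers: {mean(stayers):.2f}")
print(f"mean engagement, leavers: {mean(leavers):.2f}")
# A persistent gap like this flags flight risk while there is still time
# to act, which an exit interview never can.
```

The point of the sketch is the timing: the engagement signal arrives while the employee is still candid and still reachable.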

There will also be macro-economic factors that drive voluntary turnover that organizations may not be able to impact.  But, as the light at the end of the COVID tunnel becomes brighter and companies return to new-normal staffing levels, it provides a fresh opportunity to be proactive in understanding turnover.  This is a better approach than relying on failed techniques of the past.

A View From the Other Side

As long as there are pre-employment tests, there will be people who do not like taking them. That is fair—most people would like to get the job of their choosing without jumping through hoops, or through some process that perfectly recognizes their unique (and superior) skills compared to other applicants. But, that is not the reality that we live in (or one that would be fair to all applicants). Still, we must be mindful of how those who take tests and go through interviews see them. We want their experience to be one that would encourage them to recommend that others apply at a particular company, and to use these tools themselves when they are in a position to hire or promote others.

This article is not atypical of what industrial psychologists hear about tests. 

“The test was stupid.” 

“It did not measure skills or abilities that I would actually use on the job.” 

“I did not have an opportunity to show my true skills during the process.”

But, the author does more than complain.  He offers suggestions that he (and the singular is important, because all of the comments on the article do not support his positions) thinks would improve the hiring process.  Listening to test takers who want to improve the process, and not just get a free pass, can lead to valuable improvements in your systems.

In my experience, the top 3 things that candidates want from a testing experience are:

  1. Convenience.  The industry has gone a long way towards this by adapting to mobile technology and shortening personality and aptitude tests.
  2. Something that looks like the job or their expectations of it.  Sometimes this means interacting with others rather than just solving a problem individually.  Or, answering questions where the process is as important as the answer (since many real life work problems have more than one solution).  When a portion of the assessment does not feel like the job, candidates are more likely to exit the process.
  3. Not feeling as if they are being “tricked.”  This can range from being asked (seemingly) the same question more than once on a personality test to impossible brain teasers.  While the former can have some statistical value, Google and others have found that the latter does not. 

Pre-employment and promotional testing is a zero-sum game.  Many people, due to the fundamental attribution error, are more than willing to fault the process rather than themselves.  That is fine, and as assessment and interview developers and users, we should listen to them.

Can We Accurately Evaluate Leadership Before Someone Has a Chance to Lead?

In general, our personalities are pretty stable over our adulthood. Yes, we mature and big life events can alter us, but the building blocks of who we are as people are closer in stability to our eye color than our hair color.

This stability has been important to the science of employee selection. The underlying idea of giving any type of pre-employment or promotional test is that the knowledge, skill, ability, or characteristic being measured is stable in that person for a given period of time so that it can be used to predict future performance. With skills, we assume that they will improve over time, so we look for those that a person has right now. For personality and cognitive abilities, we assume that a person will have those at a consistent level for many years and that these aptitudes can be used to develop specific skills, such as leadership.

When I conduct leadership workshops, I typically ask participants if leaders are born (e.g., do some people just have what it takes) or made (e.g., pretty much anyone can be an effective leader if given the right opportunities to develop). The conversation gets people thinking about what behaviors are necessary to lead (good communication, willingness to direct others, attention to details, etc.), which of those can be taught, and which cannot. Put another way, to become a professional basketball player, I can improve how well I shoot, but I cannot do too much about how tall I am.

But, what if we have the direction of trait to leadership wrong? What if the traits to become a leader don’t blossom until someone is given the chance to lead?

This study suggests that being promoted into a leadership position does change the conscientiousness factor of personality. Conscientiousness has been found to be a significant predictor of overall manager effectiveness. It’s an interesting idea in that it suggests that, for some people, we do not know if they have a sufficient amount of a trait that contributes to leadership success until after they become leaders.

As with all good research, it poses as many new questions as answers. For instance, were there increases in conscientiousness across the spectrum or only among certain groups (e.g., were there gains for those who already showed relatively high levels of conscientiousness, so the rich got richer)? Or, does it take a leadership experience to bring out conscientiousness in people who typically do not show it? Or, is leadership a tide that raises everyone’s conscientiousness?

Practically speaking, this is where the study has me thinking about assessing leadership:

1)  Putting a re-emphasis on using performance on temporary assignments that involve leadership as part of the selection process in promoting people into supervisory positions. 

2)  Validating responses on personality tests that are taken after a person goes through a leadership role-play exercise or situational judgment test.

3)  Re-thinking what aspects of personality indicate leadership potential (e.g., willingness to direct others and resilience) and broaden our list of things that are leadership skills to include some other aspects of personality (e.g., conscientiousness). We can then focus on selecting based on the former and training on the latter.

Some people have the right mix of attributes that allow leadership to come easily to them. As it turns out, some of those things become more apparent after a person has a chance to lead. This should encourage us to think about how we choose to evaluate leadership potential.

Training Hiring AI Not to be Biased

Artificial Intelligence (AI) and Machine Learning (ML) play integral roles in our lives.  In fact, many of you probably came across this blog post thanks to one of these systems.  AI is the idea that machines should be taught to do tasks (everything from search engines to driving cars).  ML is an application of AI where machines get to learn for themselves based on available data.

ML is gaining popularity in the evaluation of job candidates because, given large enough datasets, the process can find small, but predictive, bits of data and maximize their use.  This idea of letting the data guide decisions is not new.  I/O psychologists used this kind of process when developing work/life inventories (biodata) and examining response patterns of test items (item response theory—IRT).  The approaches have their advantages (being atheoretical, they are free from pre-conceptions) and problems (the number of people participating needs to be very large so that results are not subject to peculiarities of the sample).  ML accelerated the ideas behind both biodata and IRT, which I think has led to solutions that don’t generalize well.  But, that’s for another blog post.

What is important here is the data made available and whether that data is biased.  For instance, if your hiring algorithm includes zip codes or a classification of college/university attended, it has race baked in.  This article has several examples of how ML systems learn only from the data they are given, leading to all kinds of biases (and not just human ones).  So, if your company wants to avoid bias based on race, sex, and age, it needs to dig into each element the ML is looking at to see if it is a proxy for something else (for instance, many hobbies are sex specific).  You then have to ask yourself whether the predictive value of that bit is worth the bias it has.
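A minimal sketch of that kind of proxy audit; the groups, the feature, and the 80% threshold (borrowed from the four-fifths rule of thumb) are illustrative assumptions, not a complete fairness analysis:

```python
# Sketch: auditing whether a resume feature acts as a proxy for a protected
# class by checking how unevenly it is distributed across two groups.
# The applicant data and the "hobby_listed" feature are invented.

applicants = [
    {"group": "A", "hobby_listed": True},  {"group": "A", "hobby_listed": True},
    {"group": "A", "hobby_listed": True},  {"group": "A", "hobby_listed": False},
    {"group": "B", "hobby_listed": False}, {"group": "B", "hobby_listed": False},
    {"group": "B", "hobby_listed": True},  {"group": "B", "hobby_listed": False},
]

def feature_rate(group):
    # Share of the group whose resumes carry the feature.
    members = [a for a in applicants if a["group"] == group]
    return sum(a["hobby_listed"] for a in members) / len(members)

rate_a, rate_b = feature_rate("A"), feature_rate("B")
print(f"feature rate: group A {rate_a:.0%}, group B {rate_b:.0%}")

# Flag the feature if one group's rate is under 80% of the other's,
# echoing the four-fifths rule used in adverse impact analyses.
if min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8:
    print("potential proxy: weigh its predictive value against its bias")
```

A flagged feature is not automatically discarded; it triggers exactly the question the paragraph above ends on, whether its predictive value is worth its bias.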

Systemic bias in hiring is insidious and we need to hunt it down.  It is not enough to say, “We have a data driven system” and presume that it is not discriminatory.  If the ML driving it was based on inadvertent bias, it will perpetuate it.  We need to check the elements that go into these systems to ensure that they are valid and fair to candidates.

I’d like to thank Dennis Adsit for recommending the article from The Economist to me.

Blacks Welcome to Apply

The aftermath of George Floyd’s murder has many of us asking, “What can I do better?” when it comes to ending racism.  This is critical in that racial bias in hiring has changed little in 30 years.  HR and I/O psychology play a unique role in that we create the processes that allow for equal employment.

None of the suggestions below require lowering standards.  Rather, they provide a framework for applying standards in an equitable way.  Science and good sense point us in this direction with these actions:

  1. Widen your recruitment net.  If you recruit from the same places, your workforce will always look the same.  There is talent everywhere—go find it, whether at a high school in a different part of town or a historically black college/university.
  2. Make resumes anonymous.  The science is very clear that anonymous resumes reduce racial and gender bias.  It is not an expensive process to implement and works for all kinds of businesses.
  3. Examine minimum qualifications carefully.  Whether based on job experience or education, these can serve as barriers to black job candidates.  The groundbreaking employment discrimination lawsuit, Griggs v. Duke Power, was based on an invalid requirement that supervisors needed a high school diploma.  Don’t get me wrong—I want my surgeon to be an M.D.  But, do your entry level positions really need a college degree?  Do your managers really need to be MBAs?  If you analyze the relationships between education/experience and job performance, you are likely to find that they are not as strong as you think.
  4. Use validated pre-employment and promotional tests.  As a rule, validated pre-employment tests do not adversely affect blacks and are certainly less biased than interviews (see below).  This is particularly true for work sample tests (show me what you can do) and personality tests.  However, cognitive ability tests, especially speeded ones, may lead to discrimination.  If you use them, analyze your cutting score to ensure that it is not set so high that qualified candidates are being screened out.
  5. Reduce reliance on interviews.  Interviews can be biased by race and ethnicity.  And, more often than not, they are far less valid than tests.  We need to convince hiring managers that they are not good judges of talent—very few people are.  Remember, interviewing someone to see if s/he is a “good fit” is another way of saying, “this person is like me.” 

  6. Make your interviews more structured.  This can be achieved by asking candidates the same questions and using an objective scoring methodology.   Adding structure to the interview process can reduce bias (and improve validity).
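A sketch of what that structure might look like in code; the questions, weights, and anchored 1–5 ratings are hypothetical, not a validated instrument:

```python
# Sketch: a structured-interview scorecard. Every candidate gets the same
# questions, and each answer receives an anchored 1-5 rating that is
# combined by fixed (illustrative) weights into a comparable score.

QUESTIONS = [
    ("Tell me about a time you had to meet a tight deadline.", 1.0),
    ("Describe a situation where you asked others for help.", 1.0),
    ("Walk me through a conflict you resolved with a coworker.", 1.5),
]

def score_interview(ratings):
    # ratings: one anchored 1-5 rating per question, in question order.
    assert len(ratings) == len(QUESTIONS)
    total = sum(r * w for r, (_, w) in zip(ratings, QUESTIONS))
    max_total = sum(5 * w for _, w in QUESTIONS)
    return total / max_total  # normalized 0-1 so candidates are comparable

print(f"{score_interview([4, 3, 5]):.2f}")  # prints 0.83
```

Because every candidate is rated against the same questions and anchors, the scores can be compared directly, which is what makes the added structure reduce bias and improve validity.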

You may already be doing some of the above.  I would encourage you to do all of them.  The outcome is fairness AND better hires.  What could be better than that?

What Are Your Company’s Selection Myths?

For North American sports fans, there is not a more public selection process than the National Football League (NFL) preparing for its annual talent draft.  This is the process for allocating new players (rookies) who have finished their college careers to the teams.  Players cannot sign a contract with any team they choose until after they complete their rookie contract with the team that drafts them.  Players not chosen in the draft can sign with any team.

Besides evaluating players based on their college games, the NFL teams also invite the top players to be evaluated at what they call a combine.  At the combine, players get interviewed by teams and are put through a variety of physical and medical tests.  Teams use all of this information to compare players against each other (by position) so they can make the best choices during the draft.

Of course, in reality, the top draft choices are made mostly based on the players’ performance in college.  Players at the best schools compete with and against other players who are likely to be drafted, so watching them perform in a game tells teams pretty much what they need to know.  And, as I wrote about last week, there is a big bias towards players who went to the “best” schools.  But, the teams do use information at the combine to inform them about players who they don’t feel they have good data on.  For instance, those who are recovering from injuries or played at schools that don’t compete against the top schools.

There’s only one problem:  There is very little data supporting the idea that the “tests” given at the combine are predictive of success in the NFL.  This article about the problems in measuring hand size in quarterbacks provides just one example of that.

One can see how this all got started.  Quarterbacks need to be able to throw a ball well (with a lot of speed and accuracy) and to hold on to it under pressure, and having a large hand (as measured from the tip of the thumb to the tip of the pinkie) would seemingly be related to both.  But, it’s not.  All quarterbacks grip the ball a little bit differently, regardless of hand size, to get the best results.  The article suggests that hand strength is the better predictor of quarterback performance and that it is unrelated to size.  But, those who evaluate quarterbacks just cannot let the size measurement go.

I am guessing that most of your organizations have an unproven selection myth, such as, “Our best managers have gotten their MBAs from one of 10 schools” or “Good supervisors are those who have worked their way up in our industry” or “Our most successful programmers had previous experience at specific companies.”  I used to hear, “Our best call center agents have previous experience before coming here” all of the time.  But, when I conducted validation studies in contact centers, it was rare that previous experience was a good predictor of future performance. These myths are easy to evaluate, but changing HR practices is harder.  It often requires good data and a shift in culture to change thinking.  However, moving on from myths is often required to make better talent decisions.

Adjusting Your HR Strategy When Your Company Decides to Train For Basic Job Skills

There is a presumption that the US education system will provide employers with workers that possess requisite job skills.  Companies are then responsible for providing more advanced ones through apprenticeships, job training, and leadership development.  But, what if job seekers do not possess the skills for tech jobs?

This article describes the lengths some employers are going to in order to get people into their talent pipeline.  In many ways, there is nothing new here.  It comes down to searching for talent where they previously hadn’t and providing training rather than expecting people to come with skills.  It’s the latter that I find most interesting.

When designing selection programs, particularly for entry level positions, we tend to focus on what knowledge or skills candidates need on the first day.  Those expectations are higher if we expect someone to come with experience than if we are going to be providing a lot of training.  This has important impacts on how we select candidates, including:

  1. Use of aptitude tests rather than knowledge tests.  Aptitude tests are terrific measures of basic skills and are quite valid.  However, speeded ones can lead to adverse impact, so they require good validation studies, meaningful passing scores, and adverse impact analyses.
  2. Alter interview questions so that a wide variety of experiences can be used to answer them.  If you are hiring people who don’t have experiences in your industry, you should be asking valid questions that people with little or no job experience can answer.  For instance, instead of, “Tell me about a time when you led a team project at work and…” use “Tell me about a time when you had to influence a group of friends and…”
  3. Focus on reducing turnover.  Training is EXPENSIVE, so hiring mistakes in a boot camp environment are very costly.  Take special care in developing realistic job previews and other ways that allow candidates to decide if they are not a good fit.  Collect information (previous experiences, referral sources, school majors, etc.) that may be indicative of future turnover and validate them.  These can be part of very useful pre-employment processes.

What this approach really presents is a change in HR strategy from one that relies on people to be able to start on day one to taking time to get them up to speed.  By having recruitment, selection, and development leaders involved in the execution, organizations can adapt their tactics for identifying and selecting talent and have a smoother transition.
