Adjusting HR to Robotic Process Automation in a Post-COVID World

Regular readers have seen my posts on how automation has impacted the skills required for hourly jobs.  Since the beginning of the industrial age, technology has been used to reduce physical labor and repetitive tasks.  Whether it is in fast food or warehouses, technology has changed how humans fit into the labor equation.

The COVID pandemic has accelerated this process.  While futurists may disagree about how quickly technological change was coming to the workplace, the forced shift away from the office has accelerated the pace at which companies have implemented robotic process automation (RPA) and artificial intelligence (AI) to meet customer needs and improve efficiency.  What is different now is that this technology is being applied to salaried positions more than before.  Whether the new jobs created in building this technology will equal the number of people displaced by it is an important question for future college graduates.  But these changes should also get companies thinking about their recruitment, validated selection, and training processes.

One of my clients does back office processing of financial information.  This is exactly the kind of work where RPA can eventually take over some of the tasks currently done by their analysts.  Currently, we test job candidates for their willingness to follow procedures and their detail orientation (among other characteristics).  If RPA were applied to this job, we would need to analyze which skills, abilities, and personal characteristics would still be valid and which new ones would be required.  This would likely lead to eliminating certain parts of the current test and emphasizing others.  It would likely impact recruiting and training as well.  In a broader sense, when organizational change comes, updating the recruitment, selection, and training of employees usually seems to be done as an afterthought.  As companies apply RPA and AI, and nearly all of them will in one way or another, they should be prepared for the impact on how employees do their work, not just whether they will still have a job.

Hiring For and Developing Resilience

I frequently hear clients talk about how they need to hire people who are resilient.  When I press them on what that means to them, they come up with words and phrases like:

Bends, but doesn’t break

Learns from adversity

Performs well under stress

Doesn’t take work setbacks personally

These things are all true and are part of the personality trait Emotional Stability, which can be a good predictor for some jobs.  But the aspect of resilience that gets overlooked, yet can be equally important for employee selection and development, is sociability.  While the stereotype of the resilient person is someone who swallows his/her/their emotions and hunkers down, there is scientific evidence that we build resilience when we reach out to, and accept help from, others.

This is useful in selection in that it can alter the types of tests that we give and what we look for in responses to interview questions.  For instance, when asking, “Tell me about a time when you had to meet a tight deadline,” an answer like, “I reached out to my team and asked how they could help” shows more resilience than, “I put everything aside and worked by myself until I completed the assignment.”  This additional dimension is also useful in interpreting validated personality tests, as we can look for people who score high on both Emotional Stability and willingness to work with others.

For development, we can teach people the power of reaching out to others during difficult times.  For managers, this means offering assistance to those who are struggling rather than waiting for them to pull themselves up by their bootstraps.  For individual contributors, this requires a message that reaching out to others when you need help builds resilience and is not a sign of weakness.  Of course, these messages require reinforcement by senior management so that they become part of the culture.

To see if resilience is a key part of jobs at your company, you can do the following:

  1. Conduct a job analysis.  Whether you use surveys or interviews, you can gather data about how much stress or pressure people feel in their jobs.
  2. Find out what effective resilience looks like.  During the job analysis, have people describe critical incidents where resilience was (and wasn’t) shown.

This data can be used for selecting future hires by:

  1. Sourcing a validated and reliable instrument that measures Emotional Stability and willingness to communicate with others (and other tests which measure important aspects of the work as found in your job analysis). 
  2. Administering the test(s) to incumbents.
  3. Gathering measures of how participants are performing on the job, including resilience.
  4. Analyzing the data to see if the measures of Emotional Stability and communication are correlated with measures of resilience and/or performance (a minimal sketch of this analysis follows this list).
  5. Using the results to screen future candidates.
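
For those who like to see what step 4 looks like in practice, here is a minimal sketch in Python.  The file and column names are hypothetical placeholders for your own data, and a real validation study involves more than a simple correlation table.

```python
# A rough sketch of the correlation analysis in step 4.
# The file name and column names (emotional_stability, communication,
# resilience_rating, performance_rating) are hypothetical placeholders.
import pandas as pd
from scipy import stats

incumbents = pd.read_csv("incumbent_scores.csv")  # one row per incumbent

predictors = ["emotional_stability", "communication"]
criteria = ["resilience_rating", "performance_rating"]

for predictor in predictors:
    for criterion in criteria:
        r, p = stats.pearsonr(incumbents[predictor], incumbents[criterion])
        print(f"{predictor} vs. {criterion}: r = {r:.2f}, p = {p:.3f}")
```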

The job analysis data can be used for developing employees by:

  1. Sourcing or designing training materials that address the critical incidents described in the job analysis.  The more behavioral role-playing in the training, the better.
  2. Gathering measures of how participants are performing on the job, including resilience.
  3. Conducting the training and gathering feedback from participants.
  4. Measuring performance, including resilience, after enough time to see the impact of the training.
  5. Making adjustments to the training material based on the feedback and performance data.

Note that in each case you are seeking to demonstrate the impact of improving resilience in your organization. Just as importantly, you are establishing its importance in the company and taking steps to weave it into your culture.
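
As one way of demonstrating that impact, here is a minimal sketch of a before/after comparison of resilience ratings for trainees.  The file and column names are hypothetical, and a paired t-test is just one of several defensible analyses.

```python
# Sketch of a before/after comparison of resilience ratings for trainees.
# File and column names (pre_training, post_training) are hypothetical.
import pandas as pd
from scipy import stats

ratings = pd.read_csv("resilience_ratings.csv")  # one row per trainee

mean_gain = (ratings["post_training"] - ratings["pre_training"]).mean()
t_stat, p_value = stats.ttest_rel(ratings["post_training"], ratings["pre_training"])
print(f"Mean gain = {mean_gain:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```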

Let Your Exit Interviews Leave the Building

One of the most intuitively appealing HR concepts is the exit interview.  If we only knew what was going through the minds of those who chose to leave our company, we could fix our turnover problems.  The problem is that there is more than enough research data showing that exit interviews are not useful in predicting or fixing turnover.  Yet, just the other day, I got a newsletter e-mail from a reputable management publication with suggestions on how to make my exit interviews better.

Exit interviews are not effective for several reasons, including:

  1. Low response rates.  There really is no upside for the departing employee to participate, so why go through the stress and confrontation?  Whatever data you do get is unlikely to be representative of the people who leave.

  2. Lack of candor.  Most people who would be willing to participate are also not very willing to burn bridges.  So their responses are going to be more about them than your organization.

  3. What are departing employees going to tell you that you should not already know?  If a particular manager has higher turnover than the organization at large, it is probably because he/she/they is treating people poorly.  You do not need an exit interview to figure that out.

It is the last point that deserves a bit more attention.  The biggest problem with the concept of exit interviews is that they are reactive, trying to put the horses back in the barn, so to speak.  To keep turnover down, organizations should address the things that lead to turnover before they become significant issues.  Identifying the drivers of turnover requires a commitment to gathering data and acting upon it.  Two steps you can take include:

  1. Using turnover as a performance measure when validating pre-employment tests.  You can lower churn for many entry-level jobs by understanding which people are more likely to stay in the position and using that information when screening candidates.
  2. If you think you are going to get good information during the exit interview from people who are no longer engaged with your organization, why not get it from those who are still engaged and more likely to be candid?  When you gather employee engagement data through short surveys over time, you can determine what the leading indicators of turnover are.  It takes commitment to view surveys as a process rather than as events, but doing so can provide a high level of insight into employee turnover, as sketched below.
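
If you are curious what that analysis might look like, here is a minimal sketch that fits a simple logistic regression predicting whether an employee left within a year from earlier engagement survey scores.  The data file, column names, and survey items are hypothetical, and a real analysis would need more care (for example, accounting for time and repeated measures).

```python
# Sketch: which engagement survey items are leading indicators of turnover?
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("engagement_and_turnover.csv")
survey_items = ["manager_support", "growth_opportunity", "workload", "pay_fairness"]

X = sm.add_constant(df[survey_items])
y = df["left_within_12_months"]  # 1 = left, 0 = stayed

model = sm.Logit(y, X).fit()
# Significant coefficients flag items whose scores shift the odds of leaving;
# for most engagement items you would expect negative coefficients
# (higher scores, lower odds of leaving).
print(model.summary())
```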

There will also be macro-economic factors driving voluntary turnover that organizations may not be able to impact.  But, as the light at the end of the COVID tunnel becomes brighter and companies return to new-normal staffing levels, they have a fresh opportunity to be proactive in understanding turnover.  This is a better approach than relying on the failed techniques of the past.

A View From the Other Side

As long as there are pre-employment tests, there will be people who do not like taking them.  That is fair: most people would like to get the job of their choosing without jumping through hoops, or through some process that perfectly recognizes their unique (and superior) skills compared to other applicants.  But that is not the reality we live in (nor one that is fair to all applicants).  Still, we must be mindful of how those who take tests and go through interviews see them.  We want their experience to be one that encourages them to recommend that others apply at a particular company, and to use these tools themselves when they are in a position to hire or promote others.

This article is not atypical of what industrial psychologists hear about tests. 

“The test was stupid.” 

“It did not measure skills or abilities that I would actually use on the job.” 

“I did not have an opportunity to show my true skills during the process.”

But, the author does more than complain.  He offers suggestions that he (and the singular is important, because not all of the comments on the article support his positions) thinks would improve the hiring process.  Listening to test takers who want to improve the process, and not just get a free pass, can lead to valuable improvements in your systems.

In my experience, the top 3 things that candidates want from a testing experience are:

  1. Convenience.  The industry has gone a long way towards this by adapting to mobile technology and shortening personality and aptitude tests.
  2. Something that looks like the job or their expectations of it.  Sometimes this means interacting with others rather than just solving a problem individually.  Or, answering questions where the process is as important as the answer (since many real life work problems have more than one solution).  When a portion of the assessment does not feel like the job, candidates are more likely to exit the process.
  3. Not feeling as if they are being “tricked.”  This can range from being asked (seemingly) the same question more than once on a personality test to impossible brain teasers.  While the former can have some statistical value, Google and others have found that the latter does not. 

Pre-employment and promotional testing is a zero-sum game.  Many people, due to the fundamental attribution error, are more than willing to fault the process rather than themselves.  That is fine, and as assessment and interview developers and users, we should still listen to them.

Can We Accurately Evaluate Leadership Before Someone Has a Chance to Lead?

In general, our personalities are pretty stable over our adulthood. Yes, we mature and big life events can alter us, but the building blocks of who we are as people are closer in stability to our eye color than our hair color.

This stability has been important to the science of employee selection. The underlying idea of giving any type of pre-employment or promotional test is that the knowledge, skill, ability, or characteristic being measured is stable in that person for a given period of time so that it can be used to predict future performance. With skills, we assume that they will improve over time, so we look for those that a person has right now. For personality and cognitive abilities, we assume that a person will have those at a consistent level for many years and that these aptitudes can be used to develop specific skills, such as leadership.

When I conduct leadership workshops, I typically ask participants if leaders are born (e.g., do some people just have what it takes) or made (e.g., pretty much anyone can be an effective leader if given the right opportunities to develop). The conversation gets people thinking about what behaviors are necessary to lead (good communication, willingness to direct others, attention to details, etc.), which of those can be taught, and which cannot. Put another way, to become a professional basketball player, I can improve how well I shoot, but I cannot do too much about how tall I am.

But, what if we have the direction of trait to leadership wrong? What if the traits to become a leader don’t blossom until someone is given the chance to lead?

This study suggests that being promoted into a leadership position does change the conscientiousness factor of personality. Conscientiousness has been found to be a significant predictor of overall manager effectiveness. It’s an interesting idea in that it suggests that, for some people, we do not know whether they have a sufficient amount of a trait that contributes to leadership success until after they become leaders.

As with all good research, it poses as many new questions as answers. For instance, were there increases in conscientiousness across the spectrum or only among certain groups (e.g., were there gains for those who already showed relatively high levels of conscientiousness, so the rich got richer)? Or, does it take a leadership experience to bring out conscientiousness in people who typically do not show it? Or, is leadership a tide that raises everyone’s conscientiousness?

Practically speaking, this is where the study has me thinking about assessing leadership:

1)  Putting a renewed emphasis on performance in temporary assignments that involve leadership as part of the selection process when promoting people into supervisory positions. 

2)  Validating responses on personality tests that are taken after a person goes through a leadership role-play exercise or situational judgment test.

3)  Re-thinking which aspects of personality indicate leadership potential (e.g., willingness to direct others and resilience) and broadening our list of leadership skills to include other aspects of personality (e.g., conscientiousness). We can then focus on selecting for the former and training the latter.

Some people have the right mix of attributes that allow leadership to come easily to them. As it turns out, some of those things become more apparent after a person has a chance to lead. This should encourage us to think about how we choose to evaluate leadership potential.

Training Hiring AI Not to be Biased

Artificial Intelligence (AI) and Machine Learning (ML) play integral roles in our lives.  In fact, many of you probably came across this blog post thanks to one of these systems.  AI is the idea that machines can be taught to do tasks (everything from running search engines to driving cars).  ML is an application of AI in which machines learn for themselves from available data.

ML is gaining popularity in the evaluation of job candidates because, given large enough datasets, the process can find small, but predictive, bits of data and maximize their use.  This idea of letting the data guide decisions is not new.  I/O psychologists used this kind of process when developing work/life inventories (biodata) and examining response patterns of test items (item response theory, or IRT).  These approaches have their advantages (being atheoretical, they are free from pre-conceptions) and problems (the number of participants needs to be very large so that results are not subject to peculiarities of the sample).  ML accelerated the ideas behind both biodata and IRT, which I think has led to solutions that don’t generalize well.  But, that’s for another blog post.

What is important here is the data made available and whether that data is biased.  For instance, if your hiring algorithm includes zip codes or a classification of the college/university attended, it has race baked in.  This article has several examples of how ML systems learn only from the data that goes in, leading to all kinds of biases (and not just human ones).  So, if your company wants to avoid bias based on race, sex, and age, it needs to dig into each element the ML is looking at to see if it is a proxy for something else (for instance, many hobbies skew strongly by sex).  You then have to ask yourself whether the predictive value of that element is worth the bias it carries.
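
One crude but useful check is to see how well each input predicts the protected attribute in your historical data.  Here is a minimal sketch, with a hypothetical data file and column names; any feature that predicts race (or sex, or age) much better than chance deserves a hard look before it goes into the model.

```python
# Sketch: flag inputs that act as proxies for a protected attribute.
# The file, column names, and feature list are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applicant_history.csv")
protected = df["race"]  # the attribute the model must not proxy
candidate_features = ["zipcode", "college_tier", "hobby_code"]

# Accuracy from always guessing the largest group, for comparison.
base_rate = protected.value_counts(normalize=True).max()

for feature in candidate_features:
    X = pd.get_dummies(df[[feature]], columns=[feature])
    accuracy = cross_val_score(RandomForestClassifier(n_estimators=100),
                               X, protected, cv=5).mean()
    print(f"{feature}: predicts the protected attribute with accuracy "
          f"{accuracy:.2f} (baseline {base_rate:.2f})")
```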

Systemic bias in hiring is insidious, and we need to hunt it down.  It is not enough to say, “We have a data-driven system” and presume that it is not discriminatory.  If the ML driving it was trained on inadvertently biased data, it will perpetuate that bias.  We need to check the elements that go into these systems to ensure that they are valid and fair to candidates.

I’d like to thank Dennis Adsit for recommending the article from The Economist to me.

Blacks Welcome to Apply

The aftermath of George Floyd’s murder has many of us asking, “What can I do better?” when it comes to ending racism.  This is critical in that racial bias in hiring has changed little in 30 years.  HR and I/O psychology play a unique role in that we create the processes that allow for equal employment.

None of the suggestions below requires lowering standards.  Rather, they provide a framework for applying standards in an equitable way.  Science and good sense point us in this direction with these actions:

  1. Widen your recruitment net.  If you recruit from the same places, your workforce will always look the same.  There is talent everywhere, whether at a high school in a different part of town or at a historically black college/university.  Go find it.
  2. Make resumes anonymous.  The science is very clear that anonymous resumes reduce racial and gender bias.  It is not an expensive process to implement and works for all kinds of businesses.
  3. Examine minimum qualifications carefully.  Whether based on job experience or education, these can serve as barriers to Black job candidates.  The groundbreaking employment discrimination lawsuit, Griggs v. Duke Power, was based on an invalid requirement that supervisors needed a high school diploma.  Don’t get me wrong—I want my surgeon to be an M.D.  But do your entry-level positions really need a college degree?  Do your managers really need MBAs?  If you analyze the relationships between education/experience and job performance, you are likely to find that they are not as strong as you think.
  4. Use validated pre-employment and promotional tests.  As a rule, validated pre-employment tests do not adversely affect Black candidates and are certainly less biased than interviews (see below).  This is particularly true for work sample tests (show me what you can do) and personality tests.  However, cognitive ability tests, especially speeded ones, may lead to discrimination.  If you use them, analyze your cutting score to ensure that it is not set so high that qualified candidates are being screened out (see the adverse impact sketch after this list).
  5. Reduce reliance on interviews.  Interviews can be biased by race and ethnicity.  And, more often than not, they are far less valid than tests.  We need to convince hiring managers that they are not good judges of talent—very few people are.  Remember, interviewing someone to see if s/he is a “good fit” is another way of saying, “this person is like me.” 

  6. Make your interviews more structured.  This can be achieved by asking candidates the same questions and using an objective scoring methodology.   Adding structure to the interview process can reduce bias (and improve validity).
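
On point 4, a common first screen for adverse impact is the four-fifths rule from the EEOC’s Uniform Guidelines: compare each group’s selection rate to the highest group’s rate and flag any ratio below 0.80.  Here is a minimal sketch with hypothetical counts; it is a rule of thumb, not a substitute for a proper statistical and legal review.

```python
# Sketch: four-fifths (80%) rule check on selection rates by group.
# The applicant and hire counts below are hypothetical.
applicants = {"Group A": 200, "Group B": 150}
hired = {"Group A": 60, "Group B": 30}

selection_rates = {group: hired[group] / applicants[group] for group in applicants}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    flag = "potential adverse impact" if impact_ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {impact_ratio:.2f} ({flag})")
```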

You may already be doing some of the above.  I would encourage you to do all of them.  The outcome is fairness AND better hires.  What could be better than that?

Valid Virtual Employee Selection

I wrote a couple of weeks ago about how the National Football League (NFL) had to adapt its selection procedures to deal with the pandemic.  To recap, the NFL selects new players primarily through a draft of eligible college football players.  Leading up to the draft, teams review the players’ performance in previous games and have them go through physical examinations, athletic drills, personality and cognitive tests, structured interviews, and background investigations.  However, with COVID-19, the NFL ruled out many of these things for health reasons.

It is much too early to tell if the slimming of the selection tools impacted the effectiveness of any team’s draft.  However, there are two observations that can be made:

  1. The order of the most talented players chosen was pretty much what was expected by experts back in January.  26 of the first 32 players drafted were predicted (by one expert), with 7 of the first 8 going to the predicted team as well.  This is pretty typical.

  2. The lack of some of the selection tools appeared to hurt those who attended smaller and/or less well-known schools.  Typically, about 18 players from such schools are taken in the draft.  This year, only 6 were.  With a lack of information, teams may not have known about, or wanted to take a risk on, such players.

For the latter, this is not a case of re-arranging crumbs.  Some of the best players in the NFL have come from these schools, so the teams lose a competitive advantage when they don’t properly identify relatively unknown talent. 

What we saw is easily explained: Past performance is the best (but not perfect) predictor of future performance.  The teams could evaluate how well players from the bigger schools performed against similar talent in college.  The NFL did not have, and did not develop, tools to uncover the best players who did not have the opportunity to play against other very talented players.  So, they relied on what they knew best.  But, this resulted in opportunity costs for them and created a slew of players with chips on their shoulders.

Since this selection event takes place once a year, it is likely that the NFL draft will (largely) return to normal next year.  But, what if it doesn’t?  Or what if there is another interruption in the future?  The teams that find alternative (and equally valid) methods of evaluating talent will benefit.  Your company should be thinking in the same vein during COVID-19 and beyond.

Silver Linings to Losing Some of Your Selection Processes

COVID-19 is affecting, and will continue to affect, many parts of our work processes.  One of them is how we select new employees.  Yes, even with layoffs, some companies are hiring now and most will be hiring again before the end of the year.  With social distancing and the acceptance of video-conferencing, we are beginning to accept that how we select candidates will change.

This does provide a process-improvement opportunity in what we do.  Are all of the current steps we use necessary, or are some based on myth?  For instance, the National Football League is going forward with its big selection weekend at the end of the month, but those who evaluate the candidates are concerned that they do not have access to the tools they normally would use in making their final rankings.  I am guessing they will find that some of those tools are for making people feel important in the process and do not really add a lot of value in finding meaningful differences between players.  You may find that some aspects of your process are redundant or done for the sake of tradition rather than adding value.

Here are some selection traditions that we are going to have to let go of for a bit, and the silver linings associated with the changes:

  1. Face to face interviews.  Whether social distancing is officially with us for four more weeks or four more months, the hesitancy to be physically close to others will likely be with us for a while.  People are becoming more comfortable and adept with video calls and we should continue to utilize them.  Silver lining:  In areas with heavy traffic, the video calls are easier to schedule for both parties.

  2. In-person assessments.  Whether it is for skills and personality testing, or role-plays, assessments have been moving online for several years, and the current situation will likely convert some who have not yet made the switch.  Silver lining: giving these assessments online is very efficient.  The reduced cost improves their business impact and will make it easier to process candidates when hiring picks up again.

  3. Being ultra-professional.  An interview or assessment, online or in person, has been a chance to put one’s best professional foot forward.  Doing so from home, with kids and pets around, is going to chip away at that veneer.  Silver lining: While I feel for the candidate trying to respond to a question with a dog barking in the background, I do think that interviewees will bring forth more of their authentic selves.  Whether this leads to a more valid process is an open question.  But hiring managers and HR will have a better idea of the “real” person being hired.

In HR we often talk about implementing change, but this is a time where we also need to be the leaders of it in our own areas.  Let’s skip the denial of what is happening and ditch the resistance to new ways of evaluating candidates.  I think we will be pleasantly surprised with the results.

What Are Your Company’s Selection Myths?

For North American sports fans, there is not a more public selection process than the National Football League (NFL) preparing for its annual talent draft.  This is the process for allocating new players (rookies) who have finished their college careers to the teams.  Players cannot sign a contract with any team they choose until after they complete their rookie contract with the team that drafts them.  Players not chosen in the draft can sign with any team.

Besides evaluating players based on their college games, the NFL teams also invite the top players to be evaluated at what they call a combine.  At the combine, players get interviewed by teams and are put through a variety of physical and medical tests.  Teams use all of this information to compare players against each other (by position) so they can make the best choices during the draft.

Of course, in reality, the top draft choices are made mostly based on the players’ performance in college.  Players at the best schools compete with and against other players who are likely to be drafted, so watching them perform in a game tells teams pretty much what they need to know.  And, as I wrote about last week, there is a big bias towards players who went to the “best” schools.  But, the teams do use information from the combine to inform them about players on whom they feel they do not have good data, for instance, those recovering from injuries or those who played at schools that don’t compete against the top programs.

There’s only one problem:  There is very little data supporting the idea that the “tests” given at the combine are predictive of success in the NFL.  This article about the problems in measuring hand size in quarterbacks provides just one example.

One can see how this all got started.  Quarterbacks need to be able to throw a ball well (with a lot of speed and accuracy) and to hold on to it under pressure, and having a large hand (as measured from the tip of the thumb to the tip of the pinkie) would seemingly be related to both.  But it’s not.  All quarterbacks grip the ball a little differently, regardless of hand size, to get the best results.  The article suggests that hand strength is the better predictor of quarterback performance and that it is unrelated to size.  But those who evaluate quarterbacks just cannot let the size measurement go.

I am guessing that most of your organizations have an unproven selection myth, such as, “Our best managers have gotten their MBAs from one of 10 schools” or “Good supervisors are those who have worked their way up in our industry” or “Our most successful programmers had previous experience at specific companies.”  I used to hear, “Our best call center agents have previous experience before coming here” all of the time.  But, when I conducted validation studies in contact centers, it was rare that previous experience was a good predictor of future performance. These myths are easy to evaluate, but changing HR practices is harder.  It often requires good data and a shift in culture to change thinking.  However, moving on from myths is often required to make better talent decisions.

Thanks for coming by!
