Adapting to Changes in Job Duties

I wrote a couple of months ago about how McDonald’s is changing the cognitive requirements of some of its jobs by adding channels for customers to order food. I argued that such a development should get them thinking about who they hire and how they train new employees.

If you have recently wandered into one of their stores, you probably noticed that, if it is not too busy, a McDonald’s employee may bring you your order. OK, this is not particularly revolutionary. But, to quote a franchisee in an article, “We’re bringing the employees from behind the counter out front to engage, in a more personal way, with our customers.” Maybe I am making more out of this particular example than it warrants, but this strikes me as really upping the customer service requirements of a McDonald’s employee. And I am guessing that a fair number of employees are not going to meet them. It’s just not what they signed up for.

This is not about whether McDonald’s employees are capable of providing the additional service or whether their ability to do it well affects the customer experience and/or sales. Rather, it appears to be an example of a company changing job requirements and then assuming that people hired using a process that does not account for the new skills will be able to carry out the new duties.

Changing skills requirements is a good thing. It shows adaptation to technology and customer needs and makes the work experience more interesting for people in repetitive jobs. But, companies cannot assume that the incumbents can magically adapt without training and revised performance expectations.

This change also requires updating and revalidating selection processes. Whether that means increasing the weight given to certain attributes or validating a new test, we must account for new job requirements on the front end. As jobs change, hiring practices should as well.

Technology and customers are big drivers of change in the skills, abilities, and personality characteristics required of employees. Smart companies not only redesign work to account for this, but they also update how they train and hire to help their workforce adapt.

Selection When There Are More Jobs Than People

As the economy adds new jobs, some sectors, including construction, are having a problem finding enough workers. This is regardless of the pay and benefits associated with the jobs, and the same is true in other blue-collar sectors. This is not a shock to those of you who have been trying to hire people for these types of positions in companies that were not hit by the great recession. For instance, utility companies have been having a difficult time recruiting linemen for years, and these jobs pay into the six figures with full benefits.

While the reasons for the hiring shortage are numerous (“You can’t pay me enough to do that kind of work,” “I’d rather work in tech,” “I want to set my own hours,” etc.), these businesses do have a significant challenge. There are some things that you cannot use technology to replace (yet).

In this situation, HR should take the long view. With low unemployment, it’s unlikely that you can just hire your way out of this. The labor pool won’t support it. Rather, companies need to engage with high schools and trade colleges to develop candidates. But, they also need to promote and market these jobs in a way that makes them more appealing, because right now many more young people (and their parents) would rather code than swing a hammer.

To avoid the expense of high turnover when hiring for these positions, companies need to do a very good job of validating selection tools with tenure in mind (as well as performance). These include:

1) Modified versions of interest inventories (measures of a person’s likes and dislikes).

2) Biographical information surveys (also known as biodata), which ask about things such as whether candidates enjoy physically demanding hobbies. These are very useful for determining whether a person is likely to stay in a specific line of work.

I have had good success in validating these for hard-to-fill positions in manufacturing. This is especially true where physical ability tests are expensive, carry a risk of injury, or may lead to high levels of adverse impact against women.

These companies also need to embrace investing in training and accelerating wages as new hires gain more skills. I have seen this used effectively to reduce turnover.

There will not be a silver bullet for creating enough workers for physically demanding jobs in the near term. However, employers who think long term may find viable solutions that will serve them well.

Adapting Selection Systems After the Robots Take Over

I am not sure that any HR futurist can tell us how many jobs will be displaced by automation over the next 5, 10, or 20 years. The answer is clearly more than zero. The latest example of this can be read here. The theme of the article is, “Really, a formula can make predictions better than a person’s intuition?” In psychology (well, industrial psychology), we have only known this since the mid-1950s (see this book), so I can see why the idea is just catching on.

Any kind of judgment that is made based on accumulating data will ALWAYS be more accurate over time when done by a machine than a person. This is because the machine is not biased by what has happened most recently, how impacted it is by the decision, how attractive the others who are involved are, etc. While this type of analysis is somewhat difficult for people to do consistently well, it is simple math for a computer. There is really no reason, besides stroking someone’s ego, to have humans do it.
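As a concrete (and deliberately tiny) sketch of what mechanical prediction looks like, here is a Python snippet that scores made-up candidates with the same fixed weights every time. The candidates, predictors, and weights are all hypothetical; the point is only that the formula is applied identically to everyone, with no recency effects or likability creeping in.

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize a list of raw scores (mean 0, sd 1)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def mechanical_ranking(candidates, weights):
    """Score every candidate with the same fixed weights.

    candidates: dict of name -> dict of predictor -> raw score
    weights: dict of predictor -> weight, fixed in advance
    """
    names = list(candidates)
    # Standardize each predictor across candidates so the weights are comparable
    z = {}
    for p in weights:
        col = zscores([candidates[n][p] for n in names])
        for n, v in zip(names, col):
            z.setdefault(n, {})[p] = v
    scored = {n: sum(weights[p] * z[n][p] for p in weights) for n in names}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical candidate pool and weights, for illustration only
pool = {
    "A": {"cognitive": 82, "conscientiousness": 60},
    "B": {"cognitive": 70, "conscientiousness": 75},
    "C": {"cognitive": 55, "conscientiousness": 90},
}
ranking = mechanical_ranking(pool, {"cognitive": 0.6, "conscientiousness": 0.4})
print(ranking)
```

Whether the weights come from a regression or are simply unit weights, that consistency is what gives the machine its edge.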

As computers continue to remove the computational portions of jobs, such as analyzing trends and making buying decisions, they will impact HR in the following ways:

• Fewer customer-facing jobs to manage, but more IT-related ones.

• Many of the remaining jobs will require less cognitive ability and more interpersonal skills. This is because these employees could potentially spend more time meeting specific customer needs and being the interface between end users and the algorithms.

• The key predictors of job success would potentially become conscientiousness, agreeableness, and customer service orientation rather than problem solving ability.

• Developing and validating a different set of pre-employment tests.

• Recruiters will need to source people with very specific skills (cognitive ability for programmers and willingness to get along with others for many other jobs).

The challenge to industrial psychology continues to be developing more valid measures of personality. Tests of cognitive ability predict job performance about twice as well as those of “soft” skills, even in jobs that already have a high personality component (such as customer service). This also means developing better measures of performance (e.g., how interpersonal skills impact business outcomes).
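One way to see what “twice as well” means: validity coefficients are correlations, and a correlation’s square is the share of performance variance explained. Using illustrative round numbers for the sketch (say, .50 for cognitive ability and .25 for a personality measure, not published figures), doubling the correlation quadruples the variance explained:

```python
def variance_explained(r):
    """Share of criterion (performance) variance a predictor accounts for: r squared."""
    return r ** 2

# Illustrative validity coefficients, not published figures
r_cognitive, r_personality = 0.50, 0.25
print(variance_explained(r_cognitive))    # 0.25  -> 25% of performance variance
print(variance_explained(r_personality))  # 0.0625 -> about 6% of performance variance
```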

Or, maybe the robots will do it for us.

People–Can’t Profit With Them, Can’t Profit Without Them

So, in the same week that Tesla says that a lack of people is a problem in its business (too many robots!), Starbucks concludes that its people are biased and hurting its business, so everyone gets training. Which one is right?

Let’s start with Tesla. Their statement is not so much about how wonderful people are as it is that they haven’t quite (yet) gotten the engineering down for their new cars to be built completely by robots. So, it is not so much an “Up with people” moment as a “Well, we guess we have to put up with them for a bit longer” one.

The Starbucks situation is a bit stickier. On one hand, they clearly felt as if they had to do something after a horrible incident involving African-American customers to maintain their brand image. But, I think they are setting themselves up for failure. Implicit bias training is well meaning, but correcting a lifetime of assumptions about people in a half-day seminar is a pretty tall order. What will they do the next time a racially tinged incident occurs? Do a full day of training? Validate a test that predicts levels of implicit bias?

Where I think the training will have the most impact is on their new hires. It sets a cultural norm of what is and is not OK. Yes, this will require management support and some way of recognizing employees for being decent human beings. But, in reading the comments on their social media pages after the announcement, that may not matter, as a lot of people were pretty bent out of shape about having to go one whole afternoon without their Starbucks. Ah, the downsides of selling a legal, but addictive, product.

Service sector organizations will always face the challenge of directing the activities of people in a way that is consistent with their values. Manufacturers are always challenged with introducing technology (which improves efficiency), but also understanding its limits (for now). We are not quite at a point where people can be engineered out of business. So, we still need to lead them in productive ways.

Should Employers Embrace the Push for GEDs?

The U.S. has a lot of people who do not get a high school diploma. This can lead to significant barriers in employment and future opportunities in college. As a result, in 2013, over 500,000 people took and passed a high school equivalency exam (GED). This was a 20% increase over 2012. The Bureau of Labor Statistics accepts a diploma and GED as being the same. But, should employers?

The idea behind the GED is that some people are unable to complete high school for a variety of reasons, and by passing the test they show that they have acquired the same amount of knowledge. That may be true, but there is little high school knowledge, except perhaps some math, that employers find valuable. What is valuable is the skill of being able to navigate something for 4 years. But, you don’t have to take my word for it. This report outlines in detail that the career and economic trajectories of those with a GED more closely resemble those of high school dropouts without a GED than those of people who complete high school. From a public policy perspective, this leads me to believe that the proponents of the test are selling snake oil.

Employers should strongly consider this in their applications. Why? Because there may be economic consequences of treating a GED and a high school diploma the same way. In working with a client to validate ways to help them reduce turnover, we looked at the retention rates by education level for entry level positions. What we found was that after 12 months, the retention rate of those with a high school diploma compared to those with a GED was 80% vs. 65%. After 24 months, the retention rates were 68% vs. 50%. At a hiring rate of about 1,000 per year and a cost of hire of a bit more than $5k per person, these are significant differences. After checking with some colleagues, I can say these results are not unusual.
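A back-of-the-envelope calculation shows what those gaps cost. The sketch below rounds the cost per hire to $5,000 and assumes, for simplicity, that every extra separation triggers one replacement hire:

```python
def extra_turnover_cost(hires, cost_per_hire, retention_a, retention_b):
    """Extra replacement cost of hiring group B instead of group A,
    given their retention rates at the same point in time."""
    extra_leavers = hires * (retention_a - retention_b)
    return extra_leavers * cost_per_hire

# Figures from the client example: ~1,000 hires per year, ~$5k per hire
print(round(extra_turnover_cost(1000, 5000, 0.80, 0.65)))  # 12-month gap: $750,000
print(round(extra_turnover_cost(1000, 5000, 0.68, 0.50)))  # 24-month gap: $900,000
```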

The overall picture shows that employers should not be treating those with GEDs like those with high school diplomas. Rather, you should validate the impact of education level against turnover or performance and evaluate it accordingly in your application, biodata, or training and experience scoring process.

What Do Grades Tell Us When Hiring?

Welcome to 2018! This first link actually highlights a look at valid personality testing on a widely read website. This makes me think that the year is off to a good start in the field.

Along those same lines of predicting behavior, a line of thought has always been that school grades are indicative of future success. The logic behind this makes sense. If a student applies him/herself and does well in school, then it is likely that he or she will do the same at work. Critics will say that grades measure something very specific that does not really translate to work, and that there are biases in how grades are given (which is why universities use standardized tests).

As always, what makes a good predictor really depends on the outcomes you are looking for. If your goal is to hire people who are good at following rules and doing lots of things pretty well, then this article suggests that school grades should be part of your evaluation process. But, if you want to hire very creative and novel thinkers, then GPA probably is not your best answer.

What also grabbed me about the article was the definition of success. The research article cited indicated that nearly all of those who did very well in high school were doing well at work and leading good lives. But, for the authors, this apparently is not enough. Why? Because none of them have “impressed the world,” whatever that means. And because there are lots of millionaires with relatively low GPAs (here is a suggestion: how about controlling for parents’ wealth before making that calculation?).

From an employment perspective, we need to be clear about what valuable performance looks like when validating tests as part of the selection process. If your goal is to select people into positions that require developing unique solutions, then GPA may not be a useful predictor. However, if you expect people to follow processes and execute procedures, then GPA is likely to be a useful tool, which should be used with other valid predictors.

And, if you are looking to hire people who are going to “impress the world,” good luck to you.

The Challenge in Finding Good Performance Data

In validating tests, getting a hold of good individual performance data is key.  But, it is also one of the more difficult parts of the process to get right.

Intuitively, we all think we can judge performance well (sort of like we all think we are good interviewers).  But, we also know that supervisor ratings of performance can be, well, unreliable.  This is so much the case that there is a whole scientific literature about performance appraisals, even as there is currently a movement within the business community to get rid of them.

But, what about objectively measuring performance (for every new account opened you get $X)?  If the Wells Fargo imbroglio tells us anything, it’s that hard measures of performance that are incented can run amok.  Also, while they are objective, single objective measures (sales, piece work manufacturing, etc.) rarely reflect the entirety of performance.  Lastly, for jobs where people work interdependently it can be very difficult to determine exactly who did what well, even if you wanted to.

So, what’s one to do?

  • Establish multiple measures of performance. For instance, call centers can measure productivity (average call time) and quality (number of people who have to call back a second time).  Don’t rely on just one number.
  • Even when a final product is the result of a group effort, each individual is still responsible for some pieces of it. If you focus on key parts of the process, you can find those touch points which are indicative of individual performance.  Again, look for quality (was there any rework done?) and productivity (were deadlines met?) measures.
  • Objective performance measures do not have to have the same frequency as piece work or rely on one “ta-da” measure at the end. Think of meeting deadlines, whether additional resources were required to complete the work, etc.
  • Don’t get bogged down in whether or not a small percentage of people can game the system with objective measures. We seem OK with rampant errors in supervisory judgment, but then get all excited because 1 out of 100 people can make his productivity seem higher than it is.  If you dig into the data you are likely to be able to spot when this happens.

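As a sketch of the first bullet, here is one way to roll a productivity measure and a quality measure into a single score: standardize each metric across employees, flip the sign on “lower is better” measures, and sum. The call-center numbers and the equal weights are hypothetical, chosen only for illustration.

```python
from statistics import mean, stdev

def composite_performance(metrics, weights):
    """Combine several objective measures into one score per employee.

    metrics: dict of metric name -> {employee: raw value}
    weights: dict of metric name -> weight; use a negative weight when
             lower raw values are better (e.g., average call time).
    """
    employees = list(next(iter(metrics.values())))
    score = {e: 0.0 for e in employees}
    for name, w in weights.items():
        vals = [metrics[name][e] for e in employees]
        m, s = mean(vals), stdev(vals)
        for e in employees:
            # Standardize so metrics on different scales are comparable
            score[e] += w * (metrics[name][e] - m) / s
    return score

# Hypothetical call-center data: average call time in seconds (lower is
# better) and callbacks within a week (lower is better)
data = {
    "avg_call_time": {"ann": 240, "bob": 300, "cam": 270},
    "callbacks":     {"ann": 4,   "bob": 2,   "cam": 6},
}
scores = composite_performance(data, {"avg_call_time": -0.5, "callbacks": -0.5})
best = max(scores, key=scores.get)
print(best)
```

The same pattern extends to any number of measures; the weights are where your judgment about what matters most comes in.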
When I hear people say that you cannot measure individual performance well, I cringe.  Of course you can.  You just need to know where to look and focus on what is important.

What We Find at the Intersection of Management and Psychology

There’s a figurative store where the roads of Management and Psychology cross.  The shelves up front have the new and shiny theory or practice.  More likely than not, it will join the former new and shiny ideas in the dingy back of the store.  Some are just flat out wrong and others are just a repackaging of what’s already out there.  It’s kind of depressing in that the time would have been better spent working on something truly innovative.

A common theme of these books is denigrating the role of intelligence in employee selection.  Let’s be clear—there is a mountain of research that shows that for most jobs, the smarter people (using Western measures of intelligence for doing jobs in Western economies) will perform better. And these tests are consistently better predictors than non-cognitive (e.g., personality) assessments.  Ignoring these facts reduces the value that HR brings to an enterprise.

Cognitive ability tests are not perfect predictors, and even if they were, there is plenty of room left to find additional ones. This is the space that the shiny new theories try to fill.  In addition, the new characteristics cannot be traits, but rather skills that can be developed (y’know, so the author can sell seminars, workbooks, etc.).  This, combined with the current wave of anti-intellectualism in the U.S., leads to the search for something new, but not necessarily innovative.

The questions are:

  • What value do these “new” methods bring (e.g., do they work) and
  • Are they really different than what we already have?

One of the shiniest new objects in the store is Grit.  The name taps into a very American cultural value.  If you dig deep and try hard, you will succeed.  Right there with pulling yourself up by the bootstraps.  While its proponents don’t claim that it’s brand new, they won’t concede that it is just shining up something we already have in Conscientiousness (which is one of the Big 5 personality traits).  Conscientiousness is a good and consistent predictor of job performance, but not as good as cognitive ability.  Measures of Grit are very highly correlated with those of Conscientiousness (Duckworth et al. [2007, 2009]), so it’s likely that we are not dealing with anything new.

Does this spiffed up version of an existing construct really work?  For that, we can go to the data.  And it says no.  The research currently shows that only one of Grit’s factors (perseverance) is at all predictive and it doesn’t predict beyond measures that we already have.
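For the numerically inclined, the incremental-validity question is easy to check yourself. With one existing predictor, the R-squared gained by adding a new one is its squared semipartial correlation. The values plugged in below are assumptions for the sketch (a Grit–Conscientiousness correlation around .85 and validities in the mid-.20s), not figures taken from the cited studies:

```python
def incremental_r2(r_y_new, r_y_old, r_old_new):
    """R-squared gained by adding a new predictor to a single existing one
    (the squared semipartial correlation of the new predictor)."""
    return (r_y_new - r_y_old * r_old_new) ** 2 / (1 - r_old_new ** 2)

# Assumed correlations: new measure (Grit) with performance, old measure
# (Conscientiousness) with performance, and the two measures with each other
gain = incremental_r2(r_y_new=0.25, r_y_old=0.28, r_old_new=0.85)
print(round(gain, 4))  # a fraction of one percent of variance
```

When the new measure is this highly correlated with the old one, there is almost no variance left for it to explain.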

I am all for innovation and industrial psychology is really in need of some.  But, chasing the new and shiny is not going to get us there.  It’ll just clog up bookshelves.

A Crazy Way To Test Candidates

You think you have it bad when hiring. Imagine if:

  • All of your entry level job candidates were known to your entire industry and customers.
  • You and all of your competitors had access to exactly the same background, pre-employment, and past performance data, outside of your one chance to interview this person.
• Oh, and at least one of the pre-employment tests given doesn’t correlate with the performance of your most critical employees.
  • The cost of acquiring the labor is huge and the compensation levels are fixed.
  • If you make a mistake, it takes a year to correct.
  • It may be 3 years before you know if you made a good hire.
  • The order of when you and your competitors can make a job offer is pre-determined, though for a high price you can jump the line.
  • And this all takes place on national television in front of your customers.

Welcome to the drafting of professional sports players in the United States. And this time of the year, the focus is on the National Football League (NFL).

I bring this up because the NFL brings nearly all of the prospective players to a group workout called a combine, which leads to the drafting of players in April. In the combine, the players are prodded and poked by medical staffs, given psychological tests, and are put through a variety of physical skill exercises. Teams also have a chance to interview players individually. The combine is organized so that the teams can see what the roughly 300 players can do without flying them all over the country. For players’ perspectives on this and the drafting process, click here and here.

The oddest thing about the combine is that they take single measurements of core skills (speed, jumping ability, etc.) when they have access to recordings of every single play in which the player has participated (real performance). Granted, different players go against different levels of competition, but you would think that about 1,000 samples of a person’s performance would be a bit of a better indicator than how fast he covers 40 yards (usually a touch under 5 seconds, even for the big guys). The interviews can be all over the map, with clubs asking about drinking behavior (the players are college students) and the ability to break down plays. And then many players get picked by teams that don’t interview them at all.

From a validation point of view, the performance data on players are actually readily available now. Much like call centers, the NFL records some very detailed individual statistics and not just team wins and losses to evaluate players. Whether the number of times a defensive lineman can bench press 225 lbs correlates with tackles for loss is not known (or at least published), but you get the idea.

Much is made about the pressure that the players are under to perform well at the combine. This is probably more so for those from smaller schools or with whom the teams are less familiar. But, the pressure is also really on the talent scouts (sports’ version of recruiters). They only get to pick 7 players in the draft. Undrafted players can be signed by any team and history shows that they have a surprisingly high success rate (see below).

Because of the amount of data available on players, the draft process is reasonably efficient. Using the metrics of the percentage of players who are in the starting lineup by draft position, turnover (which is mostly involuntary), and high performance (measured by being voted onto the all-star team), higher drafted players do better than lower drafted ones. Of course, the higher a player is taken in the draft, the more he’s paid for the first part of his career, so there is some financial bias toward starting higher drafted players. Interestingly, undrafted players perform at the same level on these metrics as third round picks. Perhaps there’s something to having a chip on your shoulder.

What we can learn from the NFL is that when there’s a lot of data available, you can make better selection decisions, even when your competitors have the same data. Second, there’s still plenty of good (though not the best) talent available that’s overlooked by the masses. Finding that inefficiency in the selection process and addressing it can lead to a significant competitive advantage. A good validation process can help you do that.

For more thoughts and insights regarding pre-employment test validation, contact Warren Bobrow.

Curious About Openness

One of my favorite personality scales to administer is Openness to New Experiences. It is one of the “Big 5” personality constructs and is supported by a great deal of research. People who score high on this scale seek out new experiences and engage in self-examination. They draw connections between seemingly unconnected ideas. People who score low are more comfortable with things that they find familiar.

I bring this up this week because I have heard from a few clients who want to hire people who are “curious.” Also, I came across this interview where the CEO was talking about looking for curious people. Note that he’s dead wrong in thinking that Openness is not related to intelligence. Why is it that people go out of their way to denigrate cognitive ability testing when it is THE most accurate predictor for most jobs? OK, that’s for another post on another day.

Part of this trend may come from gaming. Being successful in gaming requires searching any place available for the clue, weapon, or whatever allows you to get to the next level. It is also an environment that welcomes failure. But, those who show curiosity, problem solving ability (at least learning the logic of the programmer), and the desire to keep learning will be successful.

Measuring curiosity as an outcome is an entirely different story. However, it should include spending time on research, a willingness to fail, and using unique sources of information when developing a solution.

I am intrigued (curious?) about this interest in Openness/Curiosity and I plan to follow-up on it. Is Openness/Curiosity important to your firm or practice? If so, what are you doing to measure it in your candidates?

Thanks for coming by!
