Adapting to Changes in Job Duties

I wrote a couple of months ago about how McDonald’s is changing the cognitive requirements of some of its jobs by adding channels for customers to order food. I argued that such a development should get them thinking about who they hire and how they train new employees.

If you have recently wandered into one of their stores, you probably noticed that, if it is not too busy, a McDonald’s employee may bring you your order. OK, this is not particularly revolutionary. But, to quote a franchisee in an article, “We’re bringing the employees from behind the counter out front to engage, in a more personal way, with our customers.” Maybe I am making more out of this particular example than it warrants, but this strikes me as really upping the customer service requirements of a McDonald’s employee. And I am guessing that a fair number of employees are not going to meet them. It’s just not what they signed up for.

This is not about whether McDonald’s employees are capable of providing the additional service or whether their ability to do it well affects the customer experience and/or sales. Rather, it appears to be an example of a company changing job requirements and then assuming that people hired using a process that does not account for the new skills will be able to carry out the new duties.

Changing skills requirements is a good thing. It shows adaptation to technology and customer needs and makes the work experience more interesting for people in repetitive jobs. But, companies cannot assume that the incumbents can magically adapt without training and revised performance expectations.

This change also requires updating and re-validating selection processes. Whether that means increasing the weight given to certain requirements or validating a new test, we must adapt our hiring to new job requirements on the front end. As jobs change, hiring practices should change as well.

Technology and customers are big drivers of change in the skills, abilities, and personality characteristics required of employees. Smart companies not only redesign work to account for this, but they also update how they train and hire to help their workforce adapt.

Adapting Selection Systems After the Robots Take Over

I am not sure that any HR futurist can tell us how many jobs will be displaced by automation over the next 5, 10, or 20 years. The answer is clearly more than zero. The latest example of this can be read here. The theme of the article is, “Really, a formula can make predictions better than a person’s intuition?” In psychology (well, industrial psychology), we have only known this since the mid-1950s (see this book), so I can see why the idea is just catching on.

Any kind of judgment that is made based on accumulating data will ALWAYS be more accurate over time when done by a machine than by a person. This is because the machine is not biased by what has happened most recently, by its own stake in the decision, by how attractive the people involved are, etc. While this type of analysis is difficult for people to do consistently well, it is simple math for a computer. There is really no reason, besides stroking someone’s ego, to have humans do it.
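To make that concrete, here is a minimal sketch of what formula-based (“mechanical”) prediction looks like. The predictors, weights, and candidates are all hypothetical; in a real validation study, the weights would come from regressing job performance on standardized test scores.

    # A minimal sketch of formula-based ("mechanical") prediction.
    # Predictors and weights are hypothetical, for illustration only.
    PREDICTOR_WEIGHTS = {
        "cognitive_ability": 0.40,
        "conscientiousness": 0.25,
        "structured_interview": 0.35,
    }

    def predicted_performance(scores):
        # The formula applies the same weights to every candidate:
        # no recency effects, no halo from attractiveness, no stake
        # in the outcome.
        return sum(w * scores[name] for name, w in PREDICTOR_WEIGHTS.items())

    # Two hypothetical candidates with standardized (z) scores.
    alice = {"cognitive_ability": 1.2, "conscientiousness": 0.3, "structured_interview": 0.5}
    bob = {"cognitive_ability": 0.4, "conscientiousness": 1.0, "structured_interview": 0.8}

    print(predicted_performance(alice))  # about 0.73
    print(predicted_performance(bob))    # about 0.69

The point is not the particular weights; it is that the same inputs always produce the same output, which no interviewer can promise.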

As computers continue to remove the computational portions of jobs, such as analyzing trends and making buying decisions, they will affect HR in the following ways:

• Fewer customer-facing jobs to manage, but more IT-related ones.

• Many of the remaining jobs will require less cognitive ability and more interpersonal skills. This is because these employees could potentially spend more time meeting specific customer needs and being the interface between end users and the algorithms.

• The key predictors of job success would potentially become conscientiousness, agreeableness, and customer service orientation rather than problem solving ability.

• Companies will need to develop and validate a different set of pre-employment tests.

• Recruiters will need to source people with very specific skills (cognitive ability for programmers and willingness to get along with others for many other jobs).

The challenge for industrial psychology continues to be developing more valid measures of personality. Tests of cognitive ability predict job performance about twice as well as measures of “soft” skills, even in jobs that already have a high personality component (such as customer service). Meeting that challenge also means developing better measures of performance (e.g., how interpersonal skills impact business outcomes).
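As a back-of-the-envelope illustration of why “twice as well” matters (the coefficients below are ballpark magnitudes, not figures from any single study):

    # Ballpark validity coefficients, for illustration only.
    r_cognitive = 0.50    # cognitive ability vs. job performance
    r_personality = 0.25  # a typical "soft skill" / personality scale

    for name, r in [("cognitive ability", r_cognitive), ("personality", r_personality)]:
        # Squaring r gives the share of performance variance explained.
        print(f"{name}: r = {r:.2f}, variance explained = {r * r:.1%}")

Doubling the correlation quadruples the variance explained, so the gap between the two kinds of measures is even bigger than it looks.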

Or, maybe the robots will do it for us.

Ways That We Punish, Rather Than Coach, Poor Performers

During the 4th of July holiday, I was binge-watching an Australian cooking competition show with my family. It was pretty mindless and entertaining stuff. The gist of each episode was that contestants competed in a theme-based challenge. One was selected as the best for the day. Two others were deemed the poorest performers and then competed to stay on the show. What I found most interesting was that the task they were given to avoid elimination (getting fired) was harder (by design) than the original one.

Of course, there is not necessarily a straight line to be drawn between entertainment shows and the workplace. But this did get me thinking about how we develop poor performers. While it seems intuitive that resources spent on improving their performance would have a significant return on investment, data show that high performers generally benefit more from training than low performers do.

HR needs to consider how to develop all levels of talent. With the current low unemployment rates, companies are losing some of their control over their talent levels, especially now that there is more job hopping. There are a few considerations in developing low performers:

• Are you rewarding progress until the person is capable of delivering results? The key here is that improving performance requires changes in behavior, and new behaviors are more likely to be learned if they are reinforced. Telling people to “try harder” or dangling a future carrot are not good strategies for improving performance.

• Are they sufficiently skilled in the tasks you are expecting them to do? Before concluding that the person is not going to be a good employee, be sure that they have the basic skills/experience to perform the job. You should not expect someone to be a pastry chef if s/he does not know how to make a cake. This is where valid pre-employment testing programs are valuable.

• Are there other areas of the business that appeal more to their interests? I have a client that staffs its own call center. They have higher than average turnover in the call center, but somewhat lower in the company overall, because after people spend 6 months there they can bid for any other open position in the company for which they are qualified. Allowing easy lateral transfers helps you keep good employees who may just be in jobs they do not find engaging.

Low unemployment rates mean that new talent is going to be more expensive. That may make the return on investment in developing under-performing talent better than usual. However, getting people into the right roles and having alternate reward strategies are essential to getting the most out of their development.

Can Robots Reduce Turnover By Making Work More Interesting for People?

Lower unemployment rates mean that many industries, including hospitality, need ways to attract and retain more talent. Higher minimum wage laws in many states and cities have likely encouraged people to stay in jobs they may have previously left. But, what about using automation to get them to stay?

The typical assumption is that automation leads to fewer workers, which makes sense in many cases. The cotton gin took people out of the fields, and it does not take as many people to put together a car now as it did 30 years ago. But automation also offloads boring tasks so that people can do more interesting work. We see that in offices (no longer lots of people mindlessly typing memos all day), and now we are seeing a bit of it in the hospitality sector.

Granted, most of the turnover in restaurants is still due to crappy pay and low benefits. But an employer quoted in the article thinks that it is partly due to the work itself (note: I was unable to find another dataset that confirms this, but it makes for an interesting argument). From this perspective, a restaurant can provide more value to the employee (and, presumably, the customer) by having that person deliver food instead of taking orders (which customers are doing themselves from kiosks or smartphones). Perhaps both are minimum wage tasks, but the former is more interesting for the worker than the latter.

The idea of reducing turnover by making the work more interesting goes back to the 1970s. It is pretty simple: most people do not want to do boring and repetitive tasks, and they will be more satisfied and engaged with their work (and thus more likely to stay) if it is not mundane. This is not rocket science. However, giving people more tasks and more autonomy may also require a different skill set. Where employers who choose this approach (either through job redesign or automation) miss the boat is when they implement these changes without considering whether employees have the skill sets necessary.

Most organizational change efforts I have observed save the planning for new selection systems or training until the end (if they are thought of at all). For instance, if I have always asked workers to follow one single process but now I am giving them the autonomy to override it, I need to understand that these are two different sets of performance expectations. If you are asking for new behaviors from people in a job, you need to be sure you are hiring people with those abilities using validated tests and/or providing them with proper training.

What Do Grades Tell Us When Hiring?

Welcome to 2018! This first link actually highlights a look at valid personality testing on a widely read website. This makes me think that the year is off to a good start for the field.

Along those same lines of predicting behavior, a line of thought has always been that school grades are indicative of future success. The logic behind this makes sense. If a student applies him/herself and does well in school, then it is likely that he or she will do the same at work. Critics will say that grades measure something very specific that does not really translate to work and there are biases in how grades are given (which is why universities use standardized tests).

As always, what makes a good predictor really depends on the outcomes you are looking for. If your goal is to hire people who are good at following rules and doing lots of things pretty well, then this article suggests that school grades should be part of your evaluation process. But, if you want to hire very creative and novel thinkers, then GPA probably is not your best answer.

What also grabbed me about the article was the definition of success. The research article cited indicated that nearly all of those who did very well in high school were doing well at work and leading good lives. But, for the authors, this apparently is not enough. Why? Because none of them have “impressed the world,” whatever that means. And because there are lots of millionaires with relatively low GPAs (here is a suggestion: how about controlling for parents’ wealth before making that calculation?).

From an employment perspective, we need to be clear about what valuable performance looks like when validating tests and designing the selection process. If your goal is to select people into positions that require developing unique solutions, then GPA may not be a useful predictor. However, if you expect people to follow processes and execute procedures, then GPA is likely to be a useful tool, which should be used with other valid predictors.

And, if you are looking to hire people who are going to “impress the world,” good luck to you.

A Crazy Way To Test Candidates

You think you have it bad when hiring. Imagine if:

• All of your entry-level job candidates were known to your entire industry and customers.
• You and all of your competitors had access to exactly the same background, pre-employment, and past performance data, outside of your one chance to interview this person.
• Oh, and at least one of the pre-employment tests that are given doesn’t correlate with the performance of your most critical employees.
• The cost of acquiring the labor is huge and the compensation levels are fixed.
• If you make a mistake, it takes a year to correct.
• It may be 3 years before you know if you made a good hire.
• The order of when you and your competitors can make a job offer is pre-determined, though for a high price you can jump the line.
• And this all takes place on national television in front of your customers.

Welcome to the drafting of professional sports players in the United States. And this time of the year, the focus is on the National Football League (NFL).

I bring this up because the NFL brings nearly all of the prospective players to a group workout called a combine, which leads to the drafting of players in April. In the combine, the players are prodded and poked by medical staffs, given psychological tests, and are put through a variety of physical skill exercises. Teams also have a chance to interview players individually. The combine is organized so that the teams can see what the roughly 300 players can do without flying them all over the country. For players’ perspectives on this and the drafting process, click here and here.

The oddest thing about the combine is that they take single measurements of core skills (speed, jumping ability, etc.) when they have access to recordings of every single play in which the player has participated (real performance). Granted, different players go against different levels of competition, but you would think that about 1,000 samples of a person’s performance would be a better indicator than how fast he covers 40 yards (usually a touch under 5 seconds, even for the big guys). The interviews can be all over the map, with clubs asking about everything from drinking behavior (the players are college students) to the ability to break down plays. And then many players get picked by teams that don’t interview them at all.

From a validation point of view, the performance data on players are readily available now. Much like call centers, the NFL records very detailed individual statistics, not just team wins and losses, to evaluate players. Whether the number of times a defensive lineman can bench press 225 lbs correlates with tackles for loss is not known (or at least not published), but you get the idea.
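If someone wanted to run that analysis, it is nearly a one-liner once the data exist. Here is a sketch with invented numbers (no such public dataset is cited here):

    # Hypothetical data: combine bench press reps (at 225 lbs) and
    # season tackles for loss for eight defensive linemen.
    # The numbers are invented purely to show the mechanics.
    from statistics import correlation  # Python 3.10+

    bench_reps       = [22, 28, 31, 19, 25, 34, 27, 30]
    tackles_for_loss = [ 4,  6,  5,  3,  7,  6,  8,  5]

    r = correlation(bench_reps, tackles_for_loss)
    print(f"validity coefficient: r = {r:.2f}")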

Much is made about the pressure that the players are under to perform well at the combine. This is probably more so for those from smaller schools or with whom the teams are less familiar. But, the pressure is also really on the talent scouts (sports’ version of recruiters). They only get to pick 7 players in the draft. Undrafted players can be signed by any team and history shows that they have a surprisingly high success rate (see below).

Because of the amount of data available on players, the draft process is reasonably efficient. If you use the metrics of the percentage of players who are in the starting lineup by draft position, turnover (which is mostly involuntary), and high performance (measured by being voted onto the all-star team), higher-drafted players do better than lower-drafted ones. Of course, the higher a player is taken in the draft, the more he is paid for the first part of his career, so there is some financial bias toward starting higher-drafted players. Interestingly, undrafted players perform at the same level on these metrics as third-round picks. Perhaps there is something to having a chip on your shoulder.

What we can learn from the NFL is that when there’s a lot of data available, you can make better selection decisions, even when your competitors have the same data. Second, there’s still plenty of good (though not the best) talent available that’s overlooked by the masses. Finding that inefficiency in the selection process and addressing it can lead to a significant competitive advantage. A good validation process can help you do that.

For more thoughts and insights regarding pre-employment test validation, contact Warren Bobrow.

Curious About Openness

One of my favorite personality scales to administer is Openness to New Experiences. It is one of the “Big 5” personality constructs and is supported by a great deal of research. People who score high on this scale seek out new experiences and engage in self-examination. They draw connections between seemingly unconnected ideas. People who score low are more comfortable with things that they find familiar.

I bring this up this week because I have heard from a few clients who want to hire people who are “curious.” Also, I came across this interview where the CEO was talking about looking for curious people. Note that he’s dead wrong in thinking that Openness is not related to intelligence. Why is it that people go out of their way to denigrate cognitive ability testing when it is THE most accurate predictor for most jobs? OK, that’s for another post on another day.

Part of this trend may come from gaming. Being successful in gaming requires searching any place available for the clue, weapon, or whatever else allows you to get to the next level. It is also an environment that welcomes failure. Those who show curiosity, problem-solving ability (at least in learning the logic of the programmer), and the desire to keep learning will be successful.

Measuring curiosity as an outcome is an entirely different story. However, it should include spending time on research, a willingness to fail, and using unique sources of information when developing a solution.

I am intrigued (curious?) about this interest in Openness/Curiosity and I plan to follow-up on it. Is Openness/Curiosity important to your firm or practice? If so, what are you doing to measure it in your candidates?

Yes, Only Computers Should Decide Who Gets Hired

There is always a sense of excitement and dread when I learn of validated pre-employment testing making its way into different media. On one hand, I appreciate the field getting wider recognition. However, the articles invariably have some cringe-worthy statements in them that really mislead people about the field.

This is an example. The title (“Should Computers Decide Who Gets Hired?”) is provocative, which is fine. I understand that media needs to attract eyeballs. But it implies a false choice of human vs. machine. It also ignores the expertise required to develop a valid test, including job analysis, performance evaluation, data analysis (as much art as science there), and setting cut scores. This makes it easy for the reader to think that tests can be pulled off of the internet and used effectively.
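To pick just one of those steps, here is a deliberately oversimplified sketch of setting a cut score against incumbent scores. The data and the rule are invented; real practice also weighs validity evidence, adverse impact, and business needs.

    from statistics import quantiles

    # Hypothetical test scores from current employees (incumbents).
    incumbent_scores = [52, 61, 63, 67, 70, 72, 74, 78, 81, 88]

    # One simplistic rule: set the cut near the 30th percentile of the
    # incumbent distribution, screening out roughly the bottom third.
    cut_score = quantiles(incumbent_scores, n=10)[2]
    print(f"cut score = {cut_score}")

Even this toy version involves a judgment call (why the 30th percentile and not the 20th?), which is exactly the expertise the article glosses over.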

The authors then show their bias of disbelieving that a valid test could actually do better than a human (ignoring 50+ years of research on the topic). Then they grasp at straws with, “But on the other hand relegating people—and the firms they work for—to data points focuses only on the success of firms in terms of productivity and tenure, and that might be a shallow way of interpreting what makes a company successful.”

Why on earth would hiring people based on their probability to succeed and stay be shallow? What other criteria would you want?

They continue with, “Firms that are populated only by high-achieving test takers could run the risk of becoming full of people who are all the same—from similar schools, or with the same types of education, similar personality traits, or the same views.”

How would a test choose people from similar schools? In fact, it is people who make these types of selections. The authors also make the (incorrect) assumption that all tests are based on achievement, ignoring many other types of valid tests, including the ones in the research paper they cite, which include “technical skills, personality, and cognitive skills.”

Lastly, “And that could potentially stall some of the hard work being done in the effort to introduce more diversity of every kind into companies all over the country. And that type of diversity, too, has been proven to be an increasingly important factor in overall firm performance.”

The logic here is circular. The test is validated on high performers, who must be diverse. But, by using a test that predicts high performers, you won’t have a diverse workplace. Huh?

You can download the source research here. I’ll cut to the chase with the main idea from the abstract:

Our results suggest that firms can improve worker quality by limiting managerial discretion. This is because, when faced with similar applicant pools, managers who exercise more discretion (as measured by their likelihood of overruling job test recommendations) systematically end up with worse hires.

The only change I would make is to the first two words, which should read, “All results ever gathered on the topic show that firms…”

So, the message in the popular press is, “Tests work, but that scares us, so we’ll make up unsubstantiated reasons why you should not fully use tests.” At least they get the “tests work” part right.

For more information on how you can effectively use valid pre-employment tests, contact Warren Bobrow.

Equal Pay for Similar Work—A New Era in Job Analysis and Salary Negotiations?

California has prohibited gender-based wage discrimination since 1949, but courts ruled that the law applied only to exactly the same work. The state took it one step further this week by passing a law saying that women have a discrimination claim if there is unequal pay for substantially similar work. Some feel that the new law is good news for everyone from cleaning crews to Hollywood’s biggest actresses.

Practically speaking, the decision could lead to renewed interest in job analysis (let’s not get too excited, OK?). The law is written so that the burden is on the employer to demonstrate that the difference in pay is due to job related factors and not gender. So, if someone is going to argue that two jobs are substantially similar, there is going to need to be some data to back that up.

The law states that similarities are based on “a composite of skill, effort, and responsibility, and performed under similar working conditions.” A good job analysis will quantify these so that jobs can be compared. Who knows what statistical test tells you when jobs are substantially similar, but the data will tell you whether they are the same or really different, and that is a start. Regardless, I am guessing that the meaning and demonstration of “substantially similar” will be litigated for a while.
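As a sketch of what “quantify and compare” could look like, here is one hypothetical way to turn job-analysis ratings into a similarity score. The jobs, ratings, and any threshold would all be judgment calls; nothing here is a legal standard.

    import math

    # Hypothetical mean job-analysis ratings on a 1-7 scale for the
    # dimensions named in the law.
    janitor     = {"skill": 2.0, "effort": 5.5, "responsibility": 2.5, "working_conditions": 3.0}
    housekeeper = {"skill": 2.0, "effort": 5.0, "responsibility": 3.0, "working_conditions": 3.5}

    def similarity(job_a, job_b):
        # Euclidean distance across dimensions, rescaled so that
        # 1.0 = identical profiles and 0.0 = maximally different.
        dims = sorted(job_a)
        dist = math.dist([job_a[d] for d in dims], [job_b[d] for d in dims])
        max_dist = math.sqrt(len(dims)) * 6  # ratings can differ by at most 6 points
        return 1 - dist / max_dist

    print(f"similarity = {similarity(janitor, housekeeper):.2f}")  # about 0.93

Whether 0.93 counts as “substantially similar” is precisely the question the courts, not the math, will have to answer.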

The other impact of this law is likely to be on salary/raise negotiations. There’s plenty of data which indicates that men are less averse to this process than women, and this has real economic impacts. Companies may want to consider whether to make non-negotiable offers to avoid bias claims.

California, as usual, is setting a new standard in equal pay legislation. There’s the usual concern that this will cost the state jobs, but it may also attract more professional women. Either way, companies will need to review their compensation structures and determine which jobs are substantially similar to each other.

For more information on analyzing and grouping job titles, please contact Warren Bobrow.
