People: Can’t Profit With Them, Can’t Profit Without Them

So, in the same week that Tesla says that a lack of people is a problem in its business (too many robots!), Starbucks concludes that people are biased and are hurting its business, and everyone gets training. So, which one is right?

Let’s start with Tesla. Their statement is not so much about how wonderful people are as it is that they haven’t quite (yet) gotten the engineering down for their new cars to be built completely by robots. So, it is not so much an “Up with people” moment as a “Well, we guess we have to put up with them for a bit longer” one.

The Starbucks situation is a bit stickier. On one hand, they clearly felt as if they had to do something after a horrible incident involving African-American customers to maintain their brand image. But, I think they are setting themselves up for failure. Implicit bias training is well-meaning, but correcting a lifetime of assumptions about people in a half-day seminar is a pretty tall order. What will they do the next time a racially tinged incident occurs? Do a full day of training? Validate a test that predicts levels of implicit bias?

Where I think the training will have the most impact is on their new hires. It sets a cultural norm of what is and is not OK. Yes, this will require management support and some way of recognizing employees for being decent human beings. But, reading the comments on their social media pages after the announcement, that may not matter, as a lot of people were pretty bent out of shape about having to go one whole afternoon without their Starbucks. Ah, the downsides of selling a legal, but addictive, product.

Service sector organizations will always face the challenge of directing the activities of people in a way that is consistent with their values. Manufacturers are always challenged with introducing technology (which improves efficiency), but also understanding its limits (for now). We are not quite at a point where people can be engineered out of business. So, we still need to lead them in productive ways.

Should Employers Embrace the Push for GEDs?

The U.S. has a lot of people who do not get a high school diploma. This can create significant barriers to employment and to future opportunities in college. As a result, in 2013, over 500,000 people took and passed a high school equivalency exam (GED). This was a 20% increase over 2012. The Bureau of Labor Statistics treats a diploma and a GED as the same. But, should employers?

The idea behind the GED is that some people are unable to complete high school for a variety of reasons, and that by passing the test they show that they have acquired the same amount of knowledge. That may be true, but there is little high school knowledge, except perhaps some math, that employers find valuable. What is valuable is the skill of being able to navigate an institution for 4 years. But, you don’t have to take my word for it. This report outlines in detail that the career and economic trajectories of those with a GED more closely resemble those of high school dropouts without a GED than those of people who complete high school. From a public policy perspective, this leads me to believe that the proponents of the test are selling snake oil.

Employers should strongly consider this in their applications. Why? Because there may be economic consequences of treating a GED and a high school diploma the same way. In working with a client to validate ways to help them reduce turnover, we looked at retention rates by education level for entry level positions. What we found was that after 12 months, the retention rate of those with a high school diploma was 80%, compared to 65% for those with a GED. After 24 months, the retention rates were 68% vs. 50%. At a hiring rate of about 1,000 per year and a cost of hire of a bit more than $5k per person, these are significant differences. After checking with some colleagues, I can say these results are not unusual.
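
To put those differences in dollar terms, here is a minimal back-of-envelope sketch in Python. It assumes, purely for illustration, an all-diploma cohort versus an all-GED cohort at the hiring volume and cost per hire quoted above, and that every early leaver is replaced:

```python
# Back-of-envelope cost of treating a GED like a diploma, using the
# 12-month retention figures above. All inputs are the rough numbers
# from the client example; substitute your own.

HIRES_PER_YEAR = 1000       # approximate annual hiring volume
COST_PER_HIRE = 5000        # "a bit more than $5k," rounded down here

retention_12mo = {"diploma": 0.80, "ged": 0.65}

# Each early leaver must be replaced, so the retention gap translates
# directly into extra hires and extra hiring cost.
gap = retention_12mo["diploma"] - retention_12mo["ged"]   # 0.15
extra_replacements = HIRES_PER_YEAR * gap                 # 150 per year
extra_cost = extra_replacements * COST_PER_HIRE           # $750,000 per year

print(f"Extra replacements per year: {extra_replacements:.0f}")
print(f"Added annual hiring cost:    ${extra_cost:,.0f}")
```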

The overall picture shows that employers should not treat those with GEDs like those with high school diplomas. Rather, you should validate the impact of education level against turnover or performance and evaluate it accordingly in your application, biodata, or training-and-experience scoring process.

What Do Grades Tell Us When Hiring?

Welcome to 2018! This first link actually highlights a look at valid personality testing on a widely read website. This makes me think that the year is off to a good start in the field.

Along those same lines of predicting behavior, a line of thought has always been that school grades are indicative of future success. The logic behind this makes sense. If a student applies him/herself and does well in school, then it is likely that he or she will do the same at work. Critics will say that grades measure something very specific that does not really translate to work, and that there are biases in how grades are given (which is why universities use standardized tests).

As always, what makes a good predictor really depends on the outcomes you are looking for. If your goal is to hire people who are good at following rules and doing lots of things pretty well, then this article suggests that school grades should be part of your evaluation process. But, if you want to hire very creative and novel thinkers, then GPA probably is not your best answer.

What also grabbed me about the article was the definition of success. The research article cited indicated that nearly all of those who did very well in high school were doing well at work and leading good lives. But, for the authors, this apparently is not enough. Why? Because none of them have “impressed the world,” whatever that means. And because there are lots of millionaires with relatively low GPAs (here is a suggestion: how about controlling for parents’ wealth before making that calculation?).

From an employment perspective, we need to be clear about what valuable performance looks like when validating any part of the selection process. If your goal is to select people into positions that require developing unique solutions, then GPA may not be a useful predictor. However, if you expect people to follow processes and execute procedures, then GPA is likely to be a useful tool, which should be used with other valid predictors.

And, if you are looking to hire people who are going to “impress the world,” good luck to you.

The Challenge in Finding Good Performance Data

In validating tests, getting a hold of good individual performance data is key.  But, it is also one of the more difficult parts of the process to get right.

Intuitively, we all think we can judge performance well (sort of like we all think we are good interviewers).  But, we also know that supervisor ratings of performance can be, well, unreliable.  This is so much the case that there is a whole scientific literature about performance appraisals, even as there is currently a movement within the business community to get rid of them.

But, what about objectively measuring performance (for every new account opened you get $X)?  If the Wells Fargo imbroglio tells us anything, it’s that hard measures of performance that are incented can run amok.  Also, while they are objective, single objective measures (sales, piece work manufacturing, etc.) rarely reflect the entirety of performance.  Lastly, for jobs where people work interdependently it can be very difficult to determine exactly who did what well, even if you wanted to.

So, what’s one to do?

  • Establish multiple measures of performance. For instance, call centers can measure productivity (average call time) and quality (number of people who have to call back a second time).  Don’t rely on just one number (see the sketch after this list).
  • Even when a final product is the result of a group effort, each individual is still responsible for some pieces of it. If you focus on key parts of the process, you can find those touch points which are indicative of individual performance.  Again, look for quality (was there any rework done?) and productivity (were deadlines met?) measures.
  • Objective performance measures do not have to have the same frequency as piece work or rely on one “ta-da” measure at the end. Think of meeting deadlines, whether additional resources were required to complete the work, etc.
  • Don’t get bogged down in whether or not a small percentage of people can game the system with objective measures. We seem OK with rampant errors in supervisory judgment, but then get all excited because 1 out of 100 people can make his productivity seem higher than it is.  If you dig into the data you are likely to be able to spot when this happens.
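
As promised in the first bullet, here is a minimal sketch of combining a productivity measure and a quality measure into one composite score, using the call center example. The column names, sample values, and equal weighting are illustrative assumptions, not a prescription:

```python
# A minimal sketch: combine two performance measures into one composite.
# Both measures here are "lower is better," so the signs are flipped
# after standardizing. Data values are made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "avg_call_time": [310, 290, 420, 350],        # productivity (seconds)
    "callback_rate": [0.08, 0.12, 0.05, 0.10],    # quality (repeat calls)
})

# Put both measures on a common scale (z-scores), then average them,
# flipping the sign so that a higher composite means better performance.
z = (df - df.mean()) / df.std()
df["composite"] = -(z["avg_call_time"] + z["callback_rate"]) / 2

print(df.round(2))
```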

When I hear people say that you cannot measure individual performance well, I cringe.  Of course you can.  You just need to know where to look and focus on what is important.

What We Find at the Intersection of Management and Psychology

There’s a figurative store where the roads of Management and Psychology cross.  The shelves up front have the new and shiny theory or practice.  More likely than not, it will join the former new and shiny ideas in the dingy back of the store.  Some are just flat out wrong and others are just a repackaging of what’s already out there.  It’s kind of depressing in that the time would have been better spent working on something truly innovative.

A common theme of these books is denigrating the role of intelligence in employee selection.  Let’s be clear—there is a mountain of research that shows that for most jobs, the smarter people (using Western measures of intelligence for doing jobs in Western economies) will perform better. And these tests are consistently better predictors than non-cognitive (e.g., personality) assessments.  Ignoring these facts reduces the value that HR brings to an enterprise.

Cognitive ability tests are not perfect predictors, and even if they were, there is plenty of room left to find additional ones. This is the space that the shiny new theories try to fill.  In addition, the new characteristics cannot be traits, but rather must be skills that can be developed (y’know, so the author can sell seminars, workbooks, etc.).  This, combined with the current wave of anti-intellectualism in the U.S., leads to the search for something new, but not necessarily innovative.

The questions are:

  • What value do these “new” methods bring (i.e., do they work?), and
  • Are they really different than what we already have?

One of the shiniest new objects in the store is Grit.  The name taps into a very American cultural value.  If you dig deep and try hard, you will succeed.  Right there with pulling yourself up by the bootstraps.  While its proponents don’t claim that it’s brand new, they won’t concede that it is just shining up something we already have in Conscientiousness (which is one of the Big 5 personality traits).  Conscientiousness is a good and consistent predictor of job performance, but not as good as cognitive ability.  Measures of Grit are very highly correlated with those of Conscientiousness (Duckworth et al. [2007, 2009]), so it’s likely that we are not dealing with anything new.

Does this spiffed up version of an existing construct really work?  For that, we can go to the data.  And it says no.  The research currently shows that only one of Grit’s factors (perseverance) is at all predictive and it doesn’t predict beyond measures that we already have.
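
For those who want to run this check on their own data, here is a minimal sketch of the incremental validity analysis described above: regress performance on Conscientiousness alone, then add Grit and see whether the fit improves. The data are simulated, with the Grit–Conscientiousness correlation built in to mirror the high overlap reported by Duckworth et al.:

```python
# A minimal sketch of an incremental validity check (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
conscientiousness = rng.normal(size=n)
# Simulate Grit as highly correlated with Conscientiousness (~0.8),
# consistent with the correlations cited above.
grit = 0.8 * conscientiousness + 0.6 * rng.normal(size=n)
performance = 0.3 * conscientiousness + rng.normal(size=n)

base = sm.OLS(performance, sm.add_constant(conscientiousness)).fit()
full = sm.OLS(performance,
              sm.add_constant(np.column_stack([conscientiousness, grit]))).fit()

# If Grit adds nothing beyond Conscientiousness, the R-squared gain
# will be near zero.
print(f"R2, Conscientiousness only: {base.rsquared:.3f}")
print(f"R2, adding Grit:            {full.rsquared:.3f}")
```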

I am all for innovation and industrial psychology is really in need of some.  But, chasing the new and shiny is not going to get us there.  It’ll just clog up bookshelves.

A Crazy Way To Test Candidates

You think you have it bad when hiring. Imagine if:

  • All of your entry level job candidates were known to your entire industry and customers.
  • You and all of your competitors had access to exactly the same background, pre-employment, and past performance data, outside of your one chance to interview this person.
  • Oh, and at least one of the pre-employment tests that are given doesn’t correlate with the performance of your most critical employees.
  • The cost of acquiring the labor is huge and the compensation levels are fixed.
  • If you make a mistake, it takes a year to correct.
  • It may be 3 years before you know if you made a good hire.
  • The order of when you and your competitors can make a job offer is pre-determined, though for a high price you can jump the line.
  • And this all takes place on national television in front of your customers.

Welcome to the drafting of professional sports players in the United States. And this time of the year, the focus is on the National Football League (NFL).

I bring this up because the NFL brings nearly all of the prospective players to a group workout called a combine, which leads to the drafting of players in April. In the combine, the players are prodded and poked by medical staffs, given psychological tests, and are put through a variety of physical skill exercises. Teams also have a chance to interview players individually. The combine is organized so that the teams can see what the roughly 300 players can do without flying them all over the country. For players’ perspectives on this and the drafting process, click here and here.

The oddest thing about the combine is that they take single measurements of core skills (speed, jumping ability, etc.) when they have access to recordings of every single play in which the player has participated (real performance). Granted, different players go against different levels of competition, but you would think that about 1,000 samples of a person’s performance would be a bit of a better indicator than how fast he covers 40 yards (usually a touch under 5 seconds, even for the big guys). The interviews can be all over the map, with clubs asking about drinking behavior (the players are college students) and the ability to break down plays. And then many players get picked by teams that don’t interview them at all.

From a validation point of view, the performance data on players are actually readily available now. Much like call centers, the NFL records some very detailed individual statistics, not just team wins and losses, to evaluate players. Whether the number of times a defensive lineman can bench press 225 lbs correlates with tackles for loss is not known (or at least not published), but you get the idea.
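
If those data were ever published, the check would be straightforward. Here is a minimal sketch of the correlation just described, with made-up numbers standing in for the unpublished ones:

```python
# A minimal sketch: does a combine measure (bench press reps at 225 lbs)
# correlate with on-field production (tackles for loss)? All values are
# fabricated for illustration; the real data are not public.
from scipy.stats import pearsonr

bench_reps       = [18, 25, 30, 22, 27, 35, 20, 29]
tackles_for_loss = [4, 6, 9, 3, 8, 7, 5, 10]

r, p = pearsonr(bench_reps, tackles_for_loss)
print(f"r = {r:.2f}, p = {p:.3f}")  # tiny samples like this are underpowered
```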

Much is made about the pressure that the players are under to perform well at the combine. This is probably more so for those from smaller schools or with whom the teams are less familiar. But, the pressure is also really on the talent scouts (sports’ version of recruiters). They only get to pick 7 players in the draft. Undrafted players can be signed by any team and history shows that they have a surprisingly high success rate (see below).

Because of the amount of data available on players, the draft process is reasonably efficient. Using the metrics of the percentage of players by draft position who are in the starting lineup, turnover (which is mostly involuntary), and high performance (measured by being voted onto the all-star team), higher drafted players do better than lower drafted ones. Of course, the higher a player is taken in the draft, the more he’s paid for the first part of his career, so there is some financial bias to start higher drafted players. Interestingly, undrafted players perform at the same level on these metrics as third round picks. Perhaps there’s something to having a chip on your shoulder.

What we can learn from the NFL is that when there’s a lot of data available, you can make better selection decisions, even when your competitors have the same data. Second, there’s still plenty of good (though not the best) talent available that’s overlooked by the masses. Finding that inefficiency in the selection process and addressing it can lead to a significant competitive advantage. A good validation process can help you do that.

For more thoughts and insights regarding pre-employment test validation, contact Warren Bobrow.

Curious About Openness

One of my favorite personality scales to administer is Openness to New Experiences. It is one of the “Big 5” personality constructs and is supported by a great deal of research. People who score high on this scale seek out new experiences and engage in self-examination. They draw connections between seemingly unconnected ideas. People who score low are more comfortable with things that they find familiar.

I bring this up this week because I have heard from a few clients who want to hire people who are “curious.” Also, I came across this interview where the CEO was talking about looking for curious people. Note that he’s dead wrong in thinking that Openness is not related to intelligence. Why is it that people go out of their way to denigrate cognitive ability testing when it is THE most accurate predictor for most jobs? OK, that’s for another post on another day.

Part of this trend may come from gaming. Being successful in gaming requires searching in any place available for that clue, weapon, or whatever allows you to get to the next level. Gaming is also an environment that welcomes failure. Those who show curiosity, problem solving ability (at least learning the logic of the programmer), and the desire to keep learning will be successful.

Measuring curiosity as an outcome is an entirely different story. However, it should include spending time on research, a willingness to fail, and using unique sources of information when developing a solution.

I am intrigued (curious?) about this interest in Openness/Curiosity and I plan to follow-up on it. Is Openness/Curiosity important to your firm or practice? If so, what are you doing to measure it in your candidates?

What Millennials Want. JK

Every generation gets over-researched, and Millennials are no exception. I say over-researched because it’s easy to stereotype younger workers based on this data. It’s almost like sewing on their Myers-Briggs type. Generations of workers are shaped by the culture of work, and vice versa, so it can be interesting to look at some of the data.

We cannot start and manage companies in the “gig” economy while simultaneously complaining that Millennials don’t want to stay at the same job for a long time. Just like we should not criticize people for job hopping during boom-and-bust cycles in the tech sector. A more stable employment environment leads to more stable workers (see post-war America).

So, among the many things employers can do to reduce turnover is create an engaging work culture and one that shows that you care. Think of it like products you purchase that don’t compete on price, but do so based on quality and value.

This article talks about what some small businesses are doing to reduce their turnover among younger workers. Many of you may be thinking that these ideas don’t apply to your bigger company, but they really do. If your managers have an “ownership” mentality, and the company has policies that support it, they can implement many of these programs.

One of the intriguing approaches was looking for workers from non-traditional career paths. When I’m asked to validate tests, I’ll often use biographical history items (asking about experiences). Clients always think that experience in similar fields is important for candidates to have in order for them to be successful in the new company, but it is rarely the case. As this example shows, skill, ability, and drive are much more important than traveling the “right” path.

Whatever your approach to lowering turnover, remember the best takeaway from the article is, “For any incentives to work over the long term, employees must be invested in a company and its mission.” And that means a company must be invested in the career plans of the millennial employee.

For more information about creating a more engaging work environment, please contact Warren Bobrow.

Yes, Only Computers Should Decide Who Gets Hired

There is always a sense of excitement and dread when I learn of validated pre-employment testing making its way into different media. On one hand, I appreciate the field getting wider recognition. However, the articles invariably have some cringe-worthy statements in them that really mislead people about the field.

Here is an example. The title (Should Computers Decide Who Gets Hired) is provocative, which is fine. I understand that media needs to attract eyeballs. But, it implies a false choice of human vs. machine. Also, it ignores the expertise of people required to develop a valid test, including job analysis, performance evaluation, data analysis (as much art as science there), and setting cut scores. This makes it easy for the reader to think that tests can be pulled off of the internet and used effectively.

The authors then show their bias of disbelieving that a valid test could actually do better than a human (ignoring 50+ years of research on the topic). Then they grasp at straws with, “But on the other hand relegating people—and the firms they work for—to data points focuses only on the success of firms in terms of productivity and tenure, and that might be a shallow way of interpreting what makes a company successful.”

Why on earth would hiring people based on their probability to succeed and stay be shallow? What other criteria would you want?

They continue with, “Firms that are populated only by high-achieving test takers could run the risk of becoming full of people who are all the same—from similar schools, or with the same types of education, similar personality traits, or the same views.”

How would a test choose people from similar schools? In fact, it’s people who make these types of selections. The authors also make the (incorrect) assumption that all tests are based on achievement, ignoring many other types of valid tests, including the ones in the research paper they cite, which include “technical skills, personality, and cognitive skills.”

Lastly, “And that could potentially stall some of the hard work being done in the effort to introduce more diversity of every kind into companies all over the country. And that type of diversity, too, has been proven to be an increasingly important factor in overall firm performance.”

The logic here is circular. The test is validated on high performers, who must be diverse. But, by using a test that predicts high performers, you won’t have a diverse workplace. Huh?

You can download the source research here. I’ll cut to the chase with the main idea from the abstract:

Our results suggest that firms can improve worker quality by limiting managerial discretion. This is because, when faced with similar applicant pools, managers who exercise more discretion (as measured by their likelihood of overruling job test recommendations) systematically end up with worse hires.

The only change I would make is to the first two words, which should read, “All results ever gathered on the topic show that firms…”

So, the message in the popular press is, “Tests work, but that scares us, so we’ll make up unsubstantiated reasons why you should not fully use tests.” At least they get the “tests work” part right.
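
A firm that wanted to audit the finding quoted above in its own hiring data could start with something as simple as this sketch. The field names and values are assumptions for illustration:

```python
# A minimal sketch: compare 12-month retention for hires where the
# manager followed the test recommendation vs. overruled it.
# The column names and records are hypothetical.
import pandas as pd

hires = pd.DataFrame({
    "followed_test":       [True, True, False, True, False, False, True, False],
    "still_employed_12mo": [1,    1,    0,     1,    0,     1,     1,    0],
})

retention = hires.groupby("followed_test")["still_employed_12mo"].mean()
print(retention)  # per the research, expect higher retention when followed
```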

For more information on how you can effectively use valid pre-employment tests, contact Warren Bobrow.

Is There a Customer Service Gene?

In working with clients on developing pre-employment testing systems, I’ve heard the expression “The Customer Service Gene” (CSG), or some variant of it, dozens of times. I like it because it transmits the idea that some people have it and some do not. It casually underlines the idea that there are some things you cannot train away. But, having good genes only provides potential. Genes only translate into high performance if nurtured through good training, coaching, and performance management.

I thought about what makes up the CSG while reading this interview with Jonathan Tisch. One thing that has always struck me when analyzing call center work is that there are only so many types of customer issues an agent encounters, but the customer’s circumstances are much more variable. The best agents are those who can be empathetic to unique circumstances while applying problem solving skills and creativity.

We also have to consider that there may be several sets of CSGs and that those which are the “best” really depend on the situation. For instance, there’s good data suggesting that those who call a contact center have different customer service expectations than those who text/e-mail. The former are looking for more of a personal interaction, whereas the latter’s criterion for a good experience is getting the problem solved. Both sets of agents need to be creative problem solvers, but only one also has to have superior interaction skills.

The good news is that there are valid tests that cost-effectively assess candidates on these attributes. Using them will help you identify who has the appropriate CSG (or at least a lot of it) for your contact center.

For more information on the Customer Service Gene and validated pre-employment testing programs, please contact Warren Bobrow.
