A Crazy Way To Test Candidates

You think you have it bad when hiring. Imagine if:

  • All of your entry level job candidates were known to your entire industry and customers.
  • You and all of your competitors had access to exactly the same background, pre-employment, and past performance data, outside of your one chance to interview this person.
  • Oh, and at least one of the pre-employment tests given doesn’t correlate with the performance of your most critical employees.
  • The cost of acquiring the labor is huge and the compensation levels are fixed.
  • If you make a mistake, it takes a year to correct.
  • It may be 3 years before you know if you made a good hire.
  • The order of when you and your competitors can make a job offer is pre-determined, though for a high price you can jump the line.
  • And this all takes place on national television in front of your customers.

Welcome to the drafting of professional sports players in the United States. And this time of the year, the focus is on the National Football League (NFL).

I bring this up because the NFL brings nearly all of the prospective players to a group workout called a combine, which leads to the drafting of players in April. In the combine, the players are prodded and poked by medical staffs, given psychological tests, and are put through a variety of physical skill exercises. Teams also have a chance to interview players individually. The combine is organized so that the teams can see what the roughly 300 players can do without flying them all over the country.
The oddest thing about the combine is that they take single measurements of core skills (speed, jumping ability, etc.) when they have access to recordings of every single play in which the player has participated (real performance). Granted, different players go against different levels of competition, but you would think that about 1,000 samples of a person’s performance would be a better indicator than how fast he covers 40 yards (usually a touch under 5 seconds, even for the big guys). The interviews can be all over the map, with clubs asking about drinking behavior (the players are college students) and the ability to break down plays. And then many players get picked by teams that don’t interview them at all.

From a validation point of view, the performance data on players are actually readily available now. Much like call centers, the NFL records some very detailed individual statistics and not just team wins and losses to evaluate players. Whether the number of times a defensive lineman can bench press 225 lbs correlates with tackles for loss is not known (or at least published), but you get the idea.
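The validation question above — does a combine measurement predict an on-field performance statistic? — comes down to computing a correlation. Here is a minimal sketch; the player numbers are invented for illustration, since (as noted) the actual bench press/tackles-for-loss relationship isn’t published.

```python
# Sketch: does a combine metric (reps of 225 lbs on the bench press)
# correlate with a performance statistic (tackles for loss)?
# All data below are hypothetical, purely for illustration.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical defensive linemen: combine reps vs. tackles for loss per season
bench_reps = [20, 25, 30, 22, 35, 28]
tackles_for_loss = [4, 6, 5, 7, 9, 6]

r = pearson_r(bench_reps, tackles_for_loss)
print(f"r = {r:.2f}")
```

A coefficient near zero would suggest the combine test adds little beyond the game film; a strong positive value would support keeping it in the battery. Real validation work would, of course, use actual player data and correct for range restriction.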

Much is made about the pressure that the players are under to perform well at the combine. This is probably more so for those from smaller schools or with whom the teams are less familiar. But, the pressure is also really on the talent scouts (sports’ version of recruiters). They only get to pick 7 players in the draft. Undrafted players can be signed by any team and history shows that they have a surprisingly high success rate (see below).

Because of the amount of data available on players, the draft process is reasonably efficient. If you use metrics such as the percentage of players in the starting lineup by draft position, turnover (which is mostly involuntary), and high performance (measured by being voted onto the all-star team), higher-drafted players do better than lower-drafted ones. Of course, the higher a player is taken in the draft, the more he’s paid for the first part of his career, so there is some financial bias to start higher-drafted players. Interestingly, undrafted players perform at the same level on these metrics as third-round picks. Perhaps there’s something to having a chip on your shoulder.

What we can learn from the NFL is that when there’s a lot of data available, you can make better selection decisions, even when your competitors have the same data. Second, there’s still plenty of good (though not the best) talent available that’s overlooked by the masses. Finding that inefficiency in the selection process and addressing it can lead to a significant competitive advantage. A good validation process can help you do that.

For more thoughts and insights regarding pre-employment test validation, contact Warren Bobrow.

Curious About Openness

One of my favorite personality scales to administer is Openness to New Experiences. It is one of the “Big 5” personality constructs and is supported by a great deal of research. People who score high on this scale seek out new experiences and engage in self-examination. They draw connections between seemingly unconnected ideas. People who score low are more comfortable with things that they find familiar.

I bring this up this week because I have heard from a few clients who want to hire people who are “curious.” Also, I came across an interview in which a CEO talked about looking for curious people. Note that he’s dead wrong in thinking that Openness is not related to intelligence. Why is it that people go out of their way to denigrate cognitive ability testing when it is THE most accurate predictor for most jobs? OK, that’s for another post on another day.

Part of this trend may come from gaming. Being successful in gaming requires searching anywhere available for the clue, weapon, or whatever else allows you to get to the next level. Gaming is also a welcoming environment for failure. Those who show curiosity, problem-solving ability (at least learning the logic of the programmer), and the desire to keep learning will be successful.

Measuring curiosity as an outcome is an entirely different story. However, it should include spending time on research, a willingness to fail, and using unique sources of information when developing a solution.

I am intrigued (curious?) about this interest in Openness/Curiosity and I plan to follow up on it. Is Openness/Curiosity important to your firm or practice? If so, what are you doing to measure it in your candidates?

Yes, Only Computers Should Decide Who Gets Hired

There is always a sense of excitement and dread when I learn of validated pre-employment testing making its way into different media. On one hand, I appreciate the field getting wider recognition. However, the articles invariably have some cringe-worthy statements in them that really mislead people about the field.

This is an example. The title (Should Computers Decide Who Gets Hired) is provocative, which is fine. I understand that media need to attract eyeballs. But it implies a false choice of human vs. machine. It also ignores the expertise required to develop a valid test, including job analysis, performance evaluation, data analysis (as much art as science there), and setting cut scores. This makes it easy for the reader to think that tests can be pulled off the internet and used effectively.

The authors then show their bias by expressing disbelief that a valid test could actually do better than a human (ignoring 50+ years of research on the topic). Then they grasp at straws with, “But on the other hand relegating people—and the firms they work for—to data points focuses only on the success of firms in terms of productivity and tenure, and that might be a shallow way of interpreting what makes a company successful.”

Why on earth would hiring people based on their probability to succeed and stay be shallow? What other criteria would you want?

They continue with, “Firms that are populated only by high-achieving test takers could run the risk of becoming full of people who are all the same—from similar schools, or with the same types of education, similar personality traits, or the same views.”

How would a test choose people from similar schools? In fact, it’s people who make these types of selections. The authors also make the (incorrect) assumption that all tests are based on achievement, ignoring many other types of valid tests, including the ones in the research paper they cite, which cover “technical skills, personality, and cognitive skills.”

Lastly, “And that could potentially stall some of the hard work being done in the effort to introduce more diversity of every kind into companies all over the country. And that type of diversity, too, has been proven to be an increasingly important factor in overall firm performance.”

The logic here is circular. The test is validated on high performers, who must be diverse. But, by using a test that predicts high performers, you won’t have a diverse workplace. Huh?

I’ll cut to the chase with the main idea from the abstract of the source research:

Our results suggest that firms can improve worker quality by limiting managerial discretion. This is because, when faced with similar applicant pools, managers who exercise more discretion (as measured by their likelihood of overruling job test recommendations) systematically end up with worse hires.

The only change I would make is to the first two words, which should read, “All results ever gathered on the topic show that firms…”

So, the message in the popular press is, “Tests work, but that scares us, so we’ll make up unsubstantiated reasons why you should not fully use tests.” At least they get the “tests work” part right.

For more information on how you can effectively use valid pre-employment tests, contact Warren Bobrow.

Equal Pay for Similar Work—A New Era in Job Analysis and Salary Negotiations?

California has prohibited gender-based wage discrimination since 1949, but courts ruled that the law applied only to exactly the same work. The state took it one step further this week by passing a law saying that women have a discrimination claim if there is unequal pay for substantially similar work. Some feel that the new law is good news for everyone from cleaning crews to Hollywood’s biggest actresses.

Practically speaking, the new law could lead to renewed interest in job analysis (let’s not get too excited, OK?). The law is written so that the burden is on the employer to demonstrate that the difference in pay is due to job-related factors and not gender. So, if someone is going to argue that two jobs are substantially similar, there is going to need to be some data to back that up.

The law states that similarities are based on “a composite of skill, effort, and responsibility, and performed under similar working conditions.” A good job analysis will quantify these so that jobs can be compared. Who knows what statistical test tells you when jobs are substantially similar, but the data will tell you if they are the same or really different, and that’s a start. Regardless, I’m guessing that the meaning and demonstration of substantially similar will be litigated for a while.
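One way a job analysis could quantify the comparison above is to rate each job on the statute’s dimensions and compute a distance between the rating profiles. This is only a sketch with made-up jobs and 1–7 ratings; as noted, what counts as “substantially similar” is a legal question, not a statistical one.

```python
# Sketch: comparing jobs on quantified job-analysis ratings.
# Jobs, dimensions scale (1-7), and ratings are all hypothetical.
from math import sqrt

DIMENSIONS = ["skill", "effort", "responsibility", "working_conditions"]

jobs = {
    "Janitor":   {"skill": 3, "effort": 5, "responsibility": 3, "working_conditions": 4},
    "Custodian": {"skill": 3, "effort": 5, "responsibility": 4, "working_conditions": 4},
    "Manager":   {"skill": 6, "effort": 4, "responsibility": 7, "working_conditions": 2},
}

def profile_distance(a, b):
    """Euclidean distance between two jobs' rating profiles (0 = identical)."""
    return sqrt(sum((a[d] - b[d]) ** 2 for d in DIMENSIONS))

print(profile_distance(jobs["Janitor"], jobs["Custodian"]))  # small distance -> similar jobs
print(profile_distance(jobs["Janitor"], jobs["Manager"]))    # large distance -> different jobs
```

The data won’t set the legal threshold, but a small distance between two titles (here, Janitor vs. Custodian) at least documents that they were rated alike on skill, effort, responsibility, and working conditions.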

The other impact of this law is likely to be on salary/raise negotiations. There’s plenty of data which indicates that men are less averse to this process than women, and this has real economic impacts. Companies may want to consider whether to make non-negotiable offers to avoid bias claims.

California, as usual, is setting a new standard in equal pay legislation. There’s the usual concern that this will cost the state jobs, but it may also attract more professional women. Either way, companies will need to review their compensation structures and determine which jobs are substantially similar to each other.

For more information on analyzing and grouping job titles, please contact Warren Bobrow.
