There is always a sense of excitement and dread when I learn of validated pre-employment testing making its way into different media. On one hand, I appreciate the field getting wider recognition. However, the articles invariably have some cringe-worthy statements in them that really mislead people about the field.
Here is an example. The title ("Should Computers Decide Who Gets Hired?") is provocative, which is fine. I understand that media needs to attract eyeballs. But it implies a false choice of human vs. machine. It also ignores the expertise required to develop a valid test, including job analysis, performance evaluation, data analysis (as much art as science there), and setting cut scores. This makes it easy for the reader to think that tests can be pulled off the internet and used effectively.
The authors then show their bias of disbelieving that a valid test could actually outperform a human (ignoring 50+ years of research on the topic). Then they grasp at straws with, “But on the other hand relegating people—and the firms they work for—to data points focuses only on the success of firms in terms of productivity and tenure, and that might be a shallow way of interpreting what makes a company successful.”
Why on earth would hiring people based on their probability of succeeding and staying be shallow? What other criteria would you want?
They continue with, “Firms that are populated only by high-achieving test takers could run the risk of becoming full of people who are all the same—from similar schools, or with the same types of education, similar personality traits, or the same views.”
How would a test choose people from similar schools? In fact, it’s people who make these types of selections. The authors also make the (incorrect) assumption that all tests are based on achievement, ignoring many other types of valid tests, including the ones in the research paper they cite, which include “technical skills, personality, and cognitive skills.”
Lastly, “And that could potentially stall some of the hard work being done in the effort to introduce more diversity of every kind into companies all over the country. And that type of diversity, too, has been proven to be an increasingly important factor in overall firm performance.”
The logic here is circular. The test is validated on high performers, who the authors assume must all be alike. But by using a test that predicts high performance, you supposedly won’t have a diverse workplace. Huh?
You can download the source research here. I’ll cut to the chase with the main idea from the abstract:
Our results suggest that firms can improve worker quality by limiting managerial discretion. This is because, when faced with similar applicant pools, managers who exercise more discretion (as measured by their likelihood of overruling job test recommendations) systematically end up with worse hires.
The only change I would make is to the opening words, which should read, “All results ever gathered on the topic show that firms…”
So, the message in the popular press is, “Tests work, but that scares us, so we’ll make up unsubstantiated reasons why you should not fully use tests.” At least they get the “tests work” part right.
For more information on how you can effectively use valid pre-employment tests, contact Warren Bobrow.