Economists are really taking a liking to human resources. Perhaps this is an offshoot of psychologists winning Nobel prizes in economics, so they are moving onto our turf. Or maybe it is because they love large data sets, and technology has made them aware that there is a lot of information to be had in studying people. Regardless, a NY Times economics reporter is writing about hiring practices.
There’s nothing in the article that I have not blogged about before. It is about the promise of more accurate selection decisions coming from big data and new and creative ways to gather that data. Nothing new there. Regarding using gaming assessments, I will say that I would look at mean differences by age before implementing one. Someone is going to get hit with a big age discrimination suit for using them and you do not want to be the test case.
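Before fielding a gaming assessment, that age check can be as simple as comparing mean scores across age groups and computing a standardized mean difference. A minimal sketch, using entirely fabricated scores and a hypothetical under-40 / 40-plus split (the groups relevant to age discrimination claims):

```python
# Hypothetical pre-deployment check for age-related score differences
# on a gaming assessment. All (age, score) pairs below are made up
# purely for illustration.
from statistics import mean, stdev

results = [(25, 82), (29, 78), (34, 75), (38, 80),
           (45, 65), (52, 60), (58, 58), (61, 62)]

under_40 = [score for age, score in results if age < 40]
over_40 = [score for age, score in results if age >= 40]

# Cohen's d: the mean difference between the two age groups,
# scaled by the pooled standard deviation.
pooled_sd = (((len(under_40) - 1) * stdev(under_40) ** 2 +
              (len(over_40) - 1) * stdev(over_40) ** 2) /
             (len(under_40) + len(over_40) - 2)) ** 0.5
d = (mean(under_40) - mean(over_40)) / pooled_sd

print(f"Under-40 mean: {mean(under_40):.1f}")
print(f"40-plus mean:  {mean(over_40):.1f}")
print(f"Cohen's d:     {d:.2f}")
```

A large standardized difference (by convention, d around 0.8 or above) favoring younger applicants would be exactly the kind of evidence a plaintiff's expert would point to, so it is worth knowing the number before anyone else computes it for you.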
What caught my eye in the article was that the reporting was lazy (companies just now coming up with new ways to validate tests?) and full of over-generalizations without substantiation (“Human beings still beat computers at detecting these sorts of soft skills, like empathy.” Really? Tell me more.). But that is what happens when a reporter files a story on a topic with which s/he is unfamiliar.
There are really two facets to the argument for using Big Data in selection:
1) Better Validation: Big Data gives us the opportunity to perform more detailed analyses of pre-employment data with more granular performance metrics (and more of them).
2) Learn More About Candidates: Big Data allows us to gather and analyze candidate behavior and information that was not previously available. These online footprints could range from work-related code posted online to books bought on Amazon.
The “better validation” argument is persuasive. The weakness in validating selection systems isn’t the tests as they are normally (caveat emptor) well designed and reliable. The problem is in measuring performance. There are a few jobs where there is a steady stream of reliable performance data (call centers, for one), but they are not the majority. Using multiple measures of job performance over time will only help our understanding of the accuracy of existing tests and the development of new ones.
The “learn more about candidates” argument has pros and cons. If it is focused on job-related behaviors (e.g., code posted online), there is a lot of value. However, fishing for buying or search behavior online and then correlating it with future job performance would need further justification to outweigh the potential invasion-of-privacy issues. The adverse impact of what is found online would also have to be evaluated.
I welcome those from other fields into employee selection. New ideas and perspectives will almost certainly contribute to the field. It would be nice if they learned a little bit about us first so they could tell us how their new approach would add to our existing knowledge and practice.
For more information on pre-employment testing, test validation, skills assessment and talent management, please contact Warren at 310 670-4175 or firstname.lastname@example.org.