Yes, Only Computers Should Decide Who Gets Hired

There is always a sense of excitement and dread when I see validated pre-employment testing making its way into the popular media. On one hand, I appreciate the field getting wider recognition. On the other, the articles invariably contain cringe-worthy statements that really mislead people about the field.

Here is an example. The title (Should Computers Decide Who Gets Hired?) is provocative, which is fine; I understand that media needs to attract eyeballs. But it implies a false choice of human vs. machine. It also ignores the expertise required to develop a valid test, including job analysis, performance evaluation, data analysis (as much art as science there), and setting cut scores. This makes it easy for the reader to think that tests can be pulled off the internet and used effectively.

The authors then show their bias of disbelieving that a valid test could actually do better than a human (ignoring 50+ years of research on the topic). Then they grasp at straws with, “But on the other hand relegating people—and the firms they work for—to data points focuses only on the success of firms in terms of productivity and tenure, and that might be a shallow way of interpreting what makes a company successful.”

Why on earth would hiring people based on their probability of succeeding and staying be shallow? What other criteria would you want?

They continue with, “Firms that are populated only by high-achieving test takers could run the risk of becoming full of people who are all the same—from similar schools, or with the same types of education, similar personality traits, or the same views.”

How would a test choose people from similar schools? In fact, it’s people who make those kinds of selections. The authors also make the (incorrect) assumption that all tests are based on achievement, ignoring many other types of valid tests, including the ones in the research paper they cite, which include “technical skills, personality, and cognitive skills.”

Lastly, “And that could potentially stall some of the hard work being done in the effort to introduce more diversity of every kind into companies all over the country. And that type of diversity, too, has been proven to be an increasingly important factor in overall firm performance.”

The logic here is circular. The test is validated on high performers, who must be diverse. But by using a test that predicts high performers, you somehow won’t have a diverse workplace. Huh?

You can download the source research here. I’ll cut to the chase with the main idea from the abstract:

Our results suggest that firms can improve worker quality by limiting managerial discretion. This is because, when faced with similar applicant pools, managers who exercise more discretion (as measured by their likelihood of overruling job test recommendations) systematically end up with worse hires.

The only change I would make is to the first two words, which should read, “All results ever gathered on the topic show that firms…”

So, the message in the popular press is, “Tests work, but that scares us, so we’ll make up unsubstantiated reasons why you should not fully use tests.” At least they get the “tests work” part right.

For more information on how you can effectively use valid pre-employment tests, contact Warren Bobrow.

Keep Your Statistics, Please.

Target has had a rough time with pre-employment tests. They previously lost a case over using a clinical psychology instrument in hiring security guards. Now they have settled again with the EEOC for using tests with adverse impact. I’m very curious as to which tests they were using, but I haven’t been able to find out online, and since they settled the case they don’t have to disclose the information.

For those of you who are using pre-employment tests (and shame on those of you who are not!), there are a few very important takeaways from the case:

  • Do your adverse impact analyses when you implement AND periodically as you are using the tests (a minimal sketch of such a check follows this list). Why? According to the EEOC, “The tests [Target was using] were not sufficiently job-related. It’s not something in particular about the contents of the tests. The tests on their face were neutral. Our statistical analysis showed an adverse impact. They disproportionately screened out people in particular groups, namely blacks, Asians and women.” Just because your tests do not look like they should have adverse impact doesn’t mean that they don’t.
  • Really, how good is your validity evidence? The key quote from above is “not sufficiently job-related,” which really means the job-relatedness of the tests was not strong enough to support the adverse impact they had. Having a valid test is your defense against an adverse impact claim. Oh, and it’s also how you show others in your organization the tests’ value.
  • Track your data. I was gobsmacked that Target “failed to maintain the records required to assess the impact of its hiring procedures.” After all, this is the company that knows when women are pregnant before their families do. If you’re the cynical type, you are probably thinking, “Well, they knew it would be bad, so they didn’t keep track of it.” If you get a visit from the EEOC (or your state equal opportunity agency), they won’t look kindly on you not having this kind of information. And it makes you look guilty. Part of the responsibility of having a pre-employment testing program is tracking its adverse impact and validity. If you are thinking of outsourcing it, find out how your contractor plans to track the data.
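For those who want to put the first bullet into practice, here is a minimal sketch of a periodic four-fifths (80%) rule check. The groups and counts are entirely hypothetical; a real analysis would also include statistical significance testing and, ideally, advice from counsel.

```python
# Minimal sketch of a four-fifths (80%) rule check on selection rates.
# Group names and counts are hypothetical, not from the Target case.

def selection_rates(counts):
    """counts: {group: (passed, applied)} -> {group: selection rate}."""
    return {g: passed / applied for g, (passed, applied) in counts.items()}

def adverse_impact_ratios(counts):
    """Compare each group's selection rate to the highest group's rate."""
    rates = selection_rates(counts)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical applicant-flow data for one test over one review period.
    flow = {
        "Group A": (90, 200),   # 45% selected
        "Group B": (60, 180),   # ~33% selected
        "Group C": (25, 100),   # 25% selected
    }
    for group, ratio in adverse_impact_ratios(flow).items():
        flag = "review further" if ratio < 0.80 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Groups whose ratio falls below 0.80 are the ones to look at more closely; passing the check once at implementation is not enough, which is why it belongs on a recurring schedule.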

In the end, Target figured it was worth $2.8 million to make this go away, especially since they claim they are not using the tests anymore. They can probably find that money between the cushions they sell. What’s left unanswered is whether they will continue to use different tests to select managers and others.

For the rest of us, Target gives us a cautionary tale. Big class action lawsuits about tests are riding off into the sunset because the standards for validation and implementation are codified into US law. The standards are clear, and they are ignored at your peril.

For more information on using validated pre-employment tests, contact Warren Bobrow.

Do CEO Personality Traits Matter?

One could argue that the person who has the most impact on a company’s profitability is the CEO. Given the crazy compensation that some of them get, boards of directors certainly think this is the case. A lot goes into selecting CEOs and you can’t exactly conduct a validation study for one at your company (as if the board would allow science into the conversation anyway).

As it turns out, accountants are just as interested in this as HR professionals. This article summarizes a study of one particular CEO trait—narcissism. As you would suspect, the findings show that narcissistic CEOs are good for themselves (they negotiate higher salaries), but not so good for their companies’ long-term outcomes.

Those of you who are research minded are probably thinking, “That all sounds great, but how did they get enough CEO narcissism data to do the research?” The answer is handwriting analysis. Yes, you read that correctly.

Inferring personality from physical attributes has a long and poor track record in employee selection. From phrenology to handwriting analysis, these types of inferences have not been shown to be valid or reliable. This study uses the size of a signature, as found in documents filed with the Securities and Exchange Commission, as an indicator of narcissism. And, while probably not as accurate as other measures, it turns out that signature size is a pretty reasonable proxy for the trait. You can also see how it would be less easily faked than a personality inventory.
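Setting aside the paper’s exact measurement details, here is a minimal sketch of the general idea, under an assumed and purely illustrative rule: the signature’s bounding-box area divided by the number of letters in the signer’s name.

```python
# Sketch of the general idea only: signature size, normalized for name length,
# as a rough proxy for narcissism. The measurement rule is illustrative and
# not the paper's exact method.

def signature_size_index(width_mm, height_mm, letters_in_name):
    """Bounding-box area of the signature per letter of the signer's name."""
    return (width_mm * height_mm) / letters_in_name

# Made-up measurements from two hypothetical SEC filings:
print(signature_size_index(70, 25, 12))  # large, flamboyant signature -> ~145.8
print(signature_size_index(40, 10, 12))  # compact signature -> ~33.3
```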

The bigger question is how HR can have more influence in the hiring of high stakes positions. While asking for executive applicants’ signatures may not be the best way to go, informing hiring committees about predictors of performance may at least get them thinking in the right direction rather than being swayed by red herrings.

What do you do to influence hiring decisions for executives?


Big Data Hat, But No Digital Cattle

Right after I wrote about how there are far more ideas about Big Data in HR than actual results, NPR published this story. In it, they talk with several people about new ways of gathering pre-employment test data. However, the most important phrases are:

“The science of these claims isn’t yet clear.”

“And very little independent research exists.”

It occurred to me while reading the article that if any of the purveyors of these new tests had data showing that their techniques are better than (or significantly add to) existing types of tests, they would have been shouting it from the rooftops by now. Until they do (and have some peer-reviewed research studies to show), you can keep me in the hopeful, but skeptical, camp.

In the meantime, if you have some good Big Data results (e.g., how using Big Data impacted your decision making), please let us all know in the comments section.

For more information on validated pre-employment testing, please contact Warren Bobrow at 310 670-4175.

Expediency vs. Effectiveness

I’ve blogged several times (here, here and here) about the City of LA’s firefighter selection process. More specifically, how factors besides the validity of the test, interviews, etc. are being used to cull applicants. Since my last post on the subject, RAND has completed a study of the city’s firefighter selection process. Full disclosure: my wife works at RAND, but she was not involved in this study.

The paper is a good read and provides a solid overview of conducting validation studies. As the title suggests, their task was to advise the city on how to improve the recruiting and hiring of firefighters. The study outlines how the city currently attracts applicants and screens them. The authors then provide recommendations for streamlining these processes and making them more valid.

What is clear throughout the study is that the city’s biggest issue in managing this process is the sheer number of applications it receives. Several selection decisions are made simply to reduce the number of people in the process. This was the driver behind the city stating that applications would be evaluated on a first come, first served basis, which led to the application cutoff period being one minute after submission.

The city is between a rock and a hard place when it comes to narrowing the applicant pool early in the process. The most pressing issue is that they do not have the budget to process as many applications as they receive. One would think that a solution would be to raise the passing scores on the tests. However, based on the data presented, raising the passing score on the written test would lead to adverse impact against African-Americans and Hispanics, and doing so on the physical abilities test would negatively affect women. Interestingly, the city’s “first come, first served” policy led to even more adverse impact against racial minorities and women. Some will say this is because the policy was not well publicized outside of the fire department, which gave an advantage to friends and family members of existing firefighters (note that the data show a very high percentage of new hires in the department are family members of current firefighters, who tend to be white males). To its credit, the interview process, which can often lead to adverse impact, has been shown to be fair to racial minorities and provides an advantage to women over men.
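To make the cut-score trade-off concrete, here is a minimal sketch using entirely invented score distributions. It shows how raising a passing score can shrink the impact ratio (the lower-scoring group’s pass rate divided by the higher-scoring group’s).

```python
# Sketch of how raising a cut score can change pass rates by group.
# The score lists are invented; a real analysis would use actual applicant data.

def pass_rate(scores, cut):
    return sum(s >= cut for s in scores) / len(scores)

group_scores = {
    "Group A": [55, 62, 68, 71, 74, 78, 81, 85, 88, 92],
    "Group B": [50, 57, 60, 63, 66, 70, 73, 77, 80, 84],
}

for cut in (60, 70, 80):
    rates = {g: pass_rate(s, cut) for g, s in group_scores.items()}
    impact_ratio = min(rates.values()) / max(rates.values())
    print(f"cut score {cut}: pass rates {rates}, impact ratio {impact_ratio:.2f}")
```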


I was pleased to see that RAND’s suggestions for reducing adverse impact were not to make the test(s) easier to pass, but to target recruiting toward minorities and women who are more likely to pass. Specifically, the report suggested reaching out to female athletes (more likely to pass the physical abilities test) and minority valedictorians (more likely to pass the written test). The former is a solid idea. However, I’m thinking that school valedictorians (and their parents) are normally looking for a career path that includes a 4-year college and a job in a knowledge industry, but you never know.

Most interesting in the report is the city’s focus on managing the numbers rather than the quality of the process. The city insisted that RAND analyze the impact of randomly choosing people to continue in the process when the number of applications gets too large. This is a solution that does nothing to improve the quality of firefighters hired and is as likely to make adverse impact worse as it is to make it better. The study suggests using random sampling by specific groups (stratified), but that does not change the fact that people with lower test scores are going to be chosen over those with higher ones. Not exactly a recipe for staying out of court.
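For the curious, here is a minimal sketch of what stratified random sampling of applicants amounts to, using made-up records. It also illustrates the objection above: scores play no role in who continues.

```python
# Minimal sketch of stratified random sampling of applicants, with made-up
# records. Note that nothing here looks at test scores, so lower scorers can
# be chosen to continue while higher scorers are not.
import random

def stratified_sample(applicants, stratum_of, per_stratum, seed=None):
    """Randomly keep up to `per_stratum` applicants from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for applicant in applicants:
        strata.setdefault(stratum_of(applicant), []).append(applicant)
    kept = []
    for members in strata.values():
        rng.shuffle(members)
        kept.extend(members[:per_stratum])
    return kept

applicants = [
    {"id": 1, "group": "A", "score": 88},
    {"id": 2, "group": "A", "score": 71},
    {"id": 3, "group": "B", "score": 93},
    {"id": 4, "group": "B", "score": 64},
]
print(stratified_sample(applicants, lambda a: a["group"], per_stratum=1, seed=7))
```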

I do not understand why the city sets its hiring schedule in such a boom-or-bust fashion. Test results are good for a year, so why not accept applications at several points during the year? RAND also makes other suggestions for managing the number of applications, such as moving more of the background screening to the front end (and online). Yes, all of this costs money, but so does scrapping a system, creating a new one, and hiring RAND to make recommendations. The city’s focus should be on investing in a firefighter selection system that delivers the best available firefighters while minimizing adverse impact, not on making short-term decisions based on cost.

For more information about validated pre-employment test practices and services, please contact Warren Bobrow at 310 670-4175.

Creating Smarter Teams

The science behind selecting people who are good at their jobs is well established. We know, based on large research studies, that certain tests and assessments measure different abilities, and that those who possess the abilities required for a specific job do that job better than those who do not.

However, a lot of work takes place in groups, where individual performance is difficult to measure and may not be relevant. Once we look at groups, we are less interested in how well a person does than in how well the team performs. It makes sense to ask: what personal attributes of the members make for high-performing teams?

This article delves into the question of what makes a smart team. The authors demonstrate that teams have “intelligence” (as defined by consistent scores on a variety of group problem-solving tasks). They cite research which shows the following to be predictive of intelligent teams:

  • The less variance in the contributions (conversational turn taking) by team members, the more productive the group. Put another way, if people are participating in roughly the same proportion, the team is more likely to be smarter (see the sketch after this list).
  • The better the individuals in the group are, on average, in attending to the social needs of others, the more intelligent the group. This is particularly interesting because this factor was demonstrated in groups that meet face-to-face and those that meet virtually.
  • The more women in the group, the smarter it was. However, the authors caution that a big part of this is that women are better at attending to the social needs of others. Based on the data, they could have just as easily included Openness to Experience instead of gender and gotten the same results.
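Here is a minimal sketch of the turn-taking point above, using invented speaking-turn counts. Lower variance in each member’s share of the conversation is the pattern the authors associate with smarter teams.

```python
# Minimal sketch of the turn-taking idea: lower variance in members' shares of
# the conversation is the pattern associated with smarter teams in the cited
# research. The speaking-turn counts below are invented.
from statistics import pvariance

def turn_share_variance(turn_counts):
    """Variance of each member's share of total conversational turns."""
    total = sum(turn_counts)
    shares = [count / total for count in turn_counts]
    return pvariance(shares)

even_team = [24, 26, 25, 25]      # everyone contributes roughly equally
dominated_team = [70, 10, 12, 8]  # one member dominates the discussion

print(f"even team:      {turn_share_variance(even_team):.4f}")
print(f"dominated team: {turn_share_variance(dominated_team):.4f}")
```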

I have some research methods concerns about the studies beyond the one above, but I think the main points are interesting. Of most interest to me is that we can quantify what makes for a smart team and the behaviors of the individuals who comprise one. This should allow us to design and use valid pre-employment tests to select people onto teams in ways that make them more intelligent.

Where the research comes up short is in testing whether the smarter teams are better performing teams in real life situations. This would be difficult, but not impossible, to demonstrate for a variety of reasons. However, doing so could unlock a lot of productivity in companies.

For more information on using valid pre-employment testing to create smart teams, contact Warren Bobrow.

Make Fish Want to Come to Your Pond

One of the tricky things about being in the pre-employment testing business is dealing with adverse impact. It is true that some demographic groups tend to score better on some tests than others, which can lead to problems. Some companies decide that the best solution is either lowering the passing score on the test or getting rid of the assessment altogether. In both instances, they are shooting themselves in the foot by allowing a solvable issue to reduce the economic effectiveness of their selection process. A better approach is to draw better talent to your organization.

This article talks about how one high tech company approached the diversity issue. Rather than using quotas or dumbing down their hiring process, it made itself more attractive to female applicants so it had a more talented pool of candidates from which to draw.

One thing that helped their approach was that they had a very specific focus (females, particularly software engineers). There wasn’t a reference to a “right” number of hires, which I think is good. This means that they are concentrating on getting diverse talent and not jumping through a numeric hoop.

To recruit a diverse and talented pool of candidates, you also need to know where your process is excluding people disproportionately. That will tell you whether it is a recruiting issue (we’re just not getting enough talented people in the pipeline) or potentially a bias issue (why do our interview outcomes look this way?).

For more information on pre-employment testing and recruiting practices, contact me at warren@allaboutperformance.biz.

Personality Testing and the ADA

I wanted to wait until the furor (well, within I/O psychology circles) over the Wall St. Journal article on personality testing died down a bit before commenting on it. Articles in the popular press on HR and industrial psychology topics always have a bit of a “the sky is falling!” quality, and this one is no different. The article outlines questions from the EEOC and some plaintiffs as to whether common personality tests discriminate against those with mental illnesses such as depression, bipolar disorder, etc.

Here’s the most important part of the article:

In general, though, “if a person’s results are affected by the fact that they have an impairment and the results are used to exclude the person from a job, the employer needs to defend their use of the test even if the test was lawful and administered correctly,” says Christopher Kuczynski, EEOC acting associate legal counsel.

Guess what? That is EXACTLY the current law having to do with discrimination by race, gender, and age when using pre-employment tests. If you use a test that leads to protected groups scoring differently, you need to show that the test is job related and that there isn’t a better way to select on that trait or ability. So, why the big deal?

The test publishers hate this kind of scrutiny because it brings individual items out into the public. It is hard to make the case that your test is “fair” when some of the items look strange out of context. Target learned this the hard way. Also, no one likes to explain science to the public. Doing so just makes the current hole you are standing in deeper.

I think the EEOC is fishing here. They’re a little bored in this arena since class action pre-employment testing cases have pretty much gone away since Wards Cove. They tried to bully Sears when no one was stepping forward to sue, but only got a $100,000 settlement (far less than the cost of going to class action trial). I also think they will lose (or get small settlements to make them go away) in these disability cases.

The difficulty the EEOC will face is that the medications for the illnesses involved are effective. As a result, it’s likely that the test scores of those who take them will be similar to the rest of the population’s. Drug treatments for depression, bipolar disorder, ADHD, etc. are so prevalent that I have to believe the passing rates for this group on personality tests are about the same as those for the general population.

Where employers will get in trouble is when they do not have documentation showing that the particular personality construct they are measuring is a critical part of the job and that the test predicts it. Many companies take validity generalization for granted without doing the required job analysis work. And, as always, employers really need to look at the test items they are using before implementing any instrument and determine whether the risk of using them is worth the potential value.

If the EEOC is serious about pursuing these kinds of cases, employers will need to rethink their job analysis and posting processes. For instance, most job postings list the physical demands of a job. I can see a section on the mental demands of the job (Must be able to work under mentally stressful conditions for long periods of time; Must be able to maintain focus throughout the day; etc).

The employment rights of those with mental illnesses should be protected. We have come a long way in recognizing, treating, and de-stigmatizing these conditions. However, suing over legitimate pre-employment tests is a solution looking for a problem.

For more information on pre-employment test validation, please contact Warren Bobrow at 310 670-4175.

Unintended, But Predictable Consequences

I’ve written before (here and here) on the LA Fire Department’s struggles in implementing a valid and fair way to hire firefighters. They started with a poorly conceived random system that favored those with inside knowledge to cull the applicant pool. Then the politicians got involved. Then they implemented a lottery weighted by gender and race. What do you think happened next?

That’s right, a brewing lawsuit. Many people react negatively to quotas, and they are illegal. The city’s implicit hypothesis, that the ability to be a good firefighter is randomly distributed across gender and race in the applicant pool, is untested. Note that none of the quotations from the city in the article say anything about hiring the best applicants. They just want the process to be numerically unbiased (if white males are 35% of the applicant pool, then 35% of the test takers will be white males).

The next question is whether these quotas will be extended to the step after the test. Will applicants be selected on the test in a top-down fashion until that group’s slots are filled? This will, of course, lead to a lawsuit from someone who scored higher on the test but was not allowed to move on while a lower-scoring candidate was. Or, will the city be satisfied that the pool of test takers is representative of the applicants and let the test do its job? At this point, the city holds its breath and hopes that the test doesn’t have adverse impact. Otherwise, they will have to explain why it was OK to take action to ensure that candidates represented the applicant pool at one stage of the process but not another.

Of course there will always be challenges to police and fire tests—they are high profile positions that attract a lot of applicants and have seen patterns of discrimination in the past. The high ground was to focus on the quality of hire from the beginning. The city chose not to do that, so they will be on the defensive the rest of the way.

For more information on pre-employment testing and talent management, please contact Warren Bobrow at 310 670-4175.
