What Implicit Bias Looks Like

The idea of implicit bias has been making its way into the business vernacular. It refers to the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner. As you probably gathered from the definition, implicit bias is something we all have. These biases are little mental shortcuts that can lead to discriminatory behavior.

Examples of implicit bias are found throughout the hiring process, including recruiting, interviews, and performance appraisals. I think you will find this interview very helpful in understanding how these biases creep into our decision making.

It really breaks the abstract concept down into actual behaviors and their impacts.

This is the point in the blog where I would normally come up with a prescription of what to do. The only problem is that there are no good empirical studies showing how to reduce implicit bias itself. There are some lab studies with college students that support some short-term effectiveness, but some police departments swear such efforts are a waste of time. So, the jury is still out. There are, however, some things you can do to reduce the opportunity for bias:

  • You can (mostly) decode gendered language out of job postings (see the sketch after this list).
  • Take names off of applications and resumes before they are sent for review. The law requires that race, gender, and age information be optional on applications to help avoid discrimination, and redacting names before the documents are evaluated (if they are not already being machine scored) serves the same purpose.
  • If you are using pre-employment tests that do not have adverse impact, weight them more than your interviews, which are likely loaded with bias. If you insist on putting final decisions in the hands of interviewers, use a very structured process (pre-written questions, detailed scoring rubrics, etc.).
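
If you want to experiment with the first two bullets, here is a minimal sketch of what that might look like in practice. The word lists and the redaction pattern are illustrative assumptions only, not a validated instrument; real gender-decoding tools rely on much longer, research-derived dictionaries.

```python
import re

# Illustrative (not validated) word lists. Research-based gender-decoding
# tools use far longer dictionaries than these few examples.
MASCULINE_CODED = {"competitive", "dominant", "rockstar", "ninja", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def flag_gendered_words(posting_text):
    """Return the masculine- and feminine-coded words found in a job posting."""
    words = set(re.findall(r"[a-z]+", posting_text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

def redact_names(document_text, candidate_names):
    """Blank out the candidate's name(s) before a resume or application goes to reviewers."""
    for name in candidate_names:
        document_text = re.sub(re.escape(name), "[REDACTED]", document_text, flags=re.IGNORECASE)
    return document_text

if __name__ == "__main__":
    posting = "We need an aggressive, competitive rockstar to join our collaborative team."
    print(flag_gendered_words(posting))
    print(redact_names("Jane Doe, CPA. Contact Jane for references.", ["Jane Doe", "Jane"]))
```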

All humans have implicit biases—we want to be surrounded by our in-group.  A reduction in these biases, or at least fewer opportunities to express them, will likely lead you to a more diverse, and better performing, team.

Just Pay People for Their Work

There’s been much talk about the new Department of Labor rule that will require overtime pay for salaried employees making less than $47,476 (the current threshold is $23,660), starting December 1, 2016. The threshold will now update every three years. This has led to the typical hand-wringing about whether the rule will help these employees (it’s a big raise, since the threshold hasn’t been raised in 12 years and no one thought of putting in a cost-of-living increase) or hinder them (employers will cut the positions).

Others are really concerned that this will hurt opportunities for younger professionals. The logic is that if new salaried employees aren’t working 12-14 hour days, they can’t show the boss how much drive they have. Or, they’ll miss out on those only-in-the-movies serendipitous meetings with el jefe that will put their careers on the fast track. One executive is quoted as saying, “You want to bump into the boss at 8 o’clock at night.”

I’ve got an idea: why doesn’t everyone just leave the office by, oh, 7 o’clock? OK, this idea is somewhat outdated, since even if everyone were at home, they would still be doing work on their phones. But at least they would be at home.

Another school of thought says that with fewer unpaid hours, “…they will not receive sufficient career development or see timely advancement and/or promotions.” This is hogwash. Career development benefits both the company and the employee, and if everyone is working under the same rules, employers will make the time.

Let’s be clear: employers that work professional people this hard and don’t pay overtime are no different from sweatshop operators, even if they think people are putting in the extra hours “of their own volition” (read: they had better, or they will get fired). They want free labor and are upset that they are going to lose it.

I do get the “this is how we build a hard-working culture” argument, to a point. Those who put in the extra hours (and, presumably, deliver the highest results) get rewarded. This ties into the “Well, this is how I got to where I am” logic. The problem is that it perpetuates promoting a homogeneous group of people (those with poor work-life balance), which limits your ability to grow the best talent. Not everyone who puts in a lot of hours is a high performer (don’t confuse activity with results).

If we are to value work in a capitalist economy we have to pay for it.  Convincing people to work overtime for nothing is coercion, plain and simple.  That breeds a culture of fear and taking advantage of others.  Are those your company’s values?

Blind Hiring

I wrote a few weeks ago about Intel’s drive to diversify its workforce. Regular readers know that I write about bias from time to time. It’s good when the topic makes it into the mainstream media without being tied to a lawsuit.

The article talks about techniques to reduce bias. Some are old (truly blind auditions for musicians) and others are new, such as software that provides only the relevant hiring information without showing a person’s name, school attended, or other details that could bias the hiring manager. This puts a premium on validated tests, which I like. I’m sure some readers would argue that some of these tests are biased as well, but that’s a topic for another post.

This is all well and good, but as any logistics or customer service person will tell you, it’s the last mile that really matters. I can have as diverse a candidate pool as I want, but if there is bias in the interviewing process, I will be rejecting qualified candidates for invalid reasons. So, what’s a hiring manager to do?

First, give less weight to the interview and/or make it more valid. Why this barely-better-than-a-coin-flip technique makes or breaks a hiring decision while proven, validated techniques are shoved to the side is beyond me. OK, I get it: people want to feel in control and have buy-in to the hiring process. But can we at least be more rational about it? Interview scores should be combined with other data (with appropriate weighting), and the overall score, not one unreliable data point, should be used to make hiring decisions.
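
As a sketch of what “combined with other data (with appropriate weighting)” could look like, here is one way to put each predictor on a common scale and down-weight the interview relative to a validated test. The 70/30 split and the scores below are made-up assumptions for illustration, not recommendations.

```python
from statistics import mean, stdev

def standardize(scores):
    """Convert raw scores to z-scores so predictors are on a common scale."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def composite_scores(test_scores, interview_scores, test_weight=0.7, interview_weight=0.3):
    """Weighted composite in which the validated test counts more than the interview.
    The 0.7/0.3 split is an illustrative assumption, not a validated weighting."""
    z_test = standardize(test_scores)
    z_interview = standardize(interview_scores)
    return [test_weight * t + interview_weight * i for t, i in zip(z_test, z_interview)]

if __name__ == "__main__":
    tests = [82, 75, 91, 68]            # hypothetical validated test scores
    interviews = [3.5, 4.8, 3.0, 4.2]   # hypothetical structured interview ratings
    ranked = sorted(enumerate(composite_scores(tests, interviews)), key=lambda x: x[1], reverse=True)
    for rank, (candidate, score) in enumerate(ranked, start=1):
        print(f"Rank {rank}: candidate {candidate} (composite {score:.2f})")
```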

Second, why not blind interviewing? Hear me out. How many jobs really require someone to think on their feet and provide oral answers to complex questions? Sure, there are some (sales, for instance), but not that many. Why not have candidates submit written answers to interview questions? The scoring would be more reliable (evaluating grammar/spelling could be optional for jobs where it’s not critical), and accents, gender, and skin color would be taken out of the equation. Think about it.

Lastly, a diverse workforce is a result of a valid and inclusive selection process. When companies approach it the other way (working backwards from hiring goals by demographic group), they miss the point. Diversity isn’t about filling buckets. It’s about providing equal opportunity every step of the way when hiring.

For more information on valid pre-employment testing hiring practices, contact Warren Bobrow.

Filling Diversity Buckets

With great fanfare, Intel announced recently that it is making progress in meeting its diversity goals. I’m not going to pick on their numbers as their current demographics are what they are. There are some good lessons we can learn from how they approached the issue.

  • You have to be holistic. They understand that culture, recruitment, and retention all play a part in attracting, hiring, and keeping diverse talent.
  • Drill down in the data. Intel looks at hiring and retention in different job categories. Saying that you are diverse overall, but not in high paying jobs, is not much of a victory.
  • It takes significant resources to make changes. Developing a pipeline of diverse talent requires money for scholarships, support for schools, and the like, and finding untapped recruitment pools takes time and money.
  • Just like any other business outcome, the goals are reached only if they are measured AND if there are rewards for doing so. Sorry, but you cannot assume that people will strive for noble goals out of the goodness of their hearts.
  • It’s more than hiring numbers. You need to get the compensation and culture right to retain people. Oh, and you need to select and develop good managers, as they have a huge influence on turnover.

This article goes into a bit more depth about the challenges Intel is facing. Not surprisingly, there are concerns about balancing multiculturalism (celebrating differences) and integration (making one big happy family). It also points out that if people are spending time on diversity programs, it takes them away from their “real” jobs (unless they are in the diversity department), which makes it tougher to get promoted and, in turn, to make the higher ranks more diverse.

Just as importantly, this is a case study about what doesn’t work. There is a lot of good science about unconscious bias. However, Intel found that training people about it doesn’t really affect their decisions, or at least not as much as tying their compensation to diversity goals does.

You can see how Intel treated this as a supply chain issue as much as a human resources one. It’s an interesting approach that probably led to some creative ideas. You’ll note that there is no discussion about lowering standards, which is divisive and bad for the business. Lowering standards is also something that probably is not discussed when they are sourcing equipment. Just something to keep in mind when making important business decisions.

Should HR Use Social Media Blinders?

Every couple of weeks I come across some sort of article or opinion piece about whether or not HR departments should use social media sites when recruiting or selecting candidates. The articles usually fall into one of two categories:

  • Of course you should, dummy! Any data is good data. How can you pass this up?
  • Using social media data is a one-way ticket to court and is immoral! Every bias companies have is out there and you’ll be discriminating against people, whether you want to or not.

The latest one that caught my eye was definitely in the second category. Not surprisingly, the author uncovered research data showing that certain information found on social media would bias employers. Sort of like everything we know about how information about race, age, gender, religion, and so on in resumes and interviews leads to bias. No surprises here.

People who think all social media information should be ignored seem to have this idea that HR departments spend a lot of time snooping through candidates’ social media. Maybe some do, even if only to check work history on LinkedIn, but the attitude strikes me as paranoid.

We do know that social media activity correlates with personality traits that are predictive of job performance, so there is likely some valid data out there. My biggest issue with using social media to recruit or make selections is the self-selection bias. Not everyone uses social media or uses it in the same way. So, while there might be predictive information within a sample of candidates (those active on social media), it is less reliable for the population of candidates (everyone you may be interested in, whether or not they are active on social media).

As with any selection tool, you’ll want to level the playing field. If you want to read about candidates’ business history, let them know that you’ll be taking a look at their profiles, connections, and so on, on LinkedIn (where they’ll have their “professional” face on). If I’m hiring a programmer, you can bet I would be interested in the open-source code contributions they have made.

We’re at the tip of the iceberg as to what valid information can be gleaned from social media. By the time we find out, the platforms we use now are likely to be obsolete (what, we can soon use more than 140 characters on Twitter?). But, the “rules” for using social media information should be the same as any other selection tool:

  • Is what you are looking for job related?
  • Is the information gathered reliable, or just one person’s opinion about what it means?
  • Would the information potentially have adverse impact against protected groups?
  • Is this really the best way to learn whether the person possesses that knowledge, skill, ability, or personal characteristic?

What, if anything, are you doing to evaluate candidates online?

For more information about valid selection methods, contact Warren Bobrow.

Is There a Hiring Bias Against Those With Disabilities?

Most HR professionals are aware of the biases that come along with hiring racial minorities and women. Can we now add disabilities to the list? This is an important question given the number of wounded veterans and others with disabilities in the workforce.

The paper describes how resumes were sent to accounting firms (of varying sizes) listing either no disability, Asperger’s Syndrome, or a spinal cord injury. Resumes disclosing a disability got about 25% fewer responses. Interestingly, the differences were largely explained by the size of the firm, with the smallest ones being less inclined to respond favorably to a resume from someone who is disabled.

While the study is interesting, I question the methodology. For instance, do people with disabilities normally put them on their resumes? Websites dedicated to helping those with disabilities find employment tell people NOT to. Also, it’s not required by the Americans with Disabilities Act (ADA). I get the authors’ point, and they are following a methodology used to measure discrimination against women and minorities based on their names, but I don’t think it generalizes to disabilities if listing one on a resume is not common practice.

However, if we assume that hesitancy to hire the disabled generalizes from a resume to seeing someone in person, it is telling that smaller firms (<15 employees) were more likely to reject the disabled applicants than larger ones. These businesses account for about 18% of the employment in the U.S., so we’re not talking about small potatoes. Also, the ADA doesn’t apply to them, though there is a patchwork of state ADA-like laws that might. As such, those companies may not be as aware of what constitutes discrimination against the disabled, or may not have given much thought to reasonable accommodations. One can easily imagine the thinking (conscious or subconscious) at one of these firms when it realizes an applicant is disabled: What about my health insurance costs? How will I cover any additional days missed?

At the risk of being cynical, one can only suppose that those with disabilities face some level of discrimination. Still, I am optimistic in thinking that every small business wants to hire the best person, regardless of disability. Maybe small businesses just need some education on the topic.

For more information on recruiting and hiring those with the skills and abilities to do your work, please contact Warren Bobrow.

Equal Pay for Similar Work—A New Era in Job Analysis and Salary Negotiations?

California has prohibited gender-based wage discrimination since 1949, but courts ruled that the law applied only to exactly the same work. The state took it one step further this week by passing a law saying that women have a discrimination claim if there is unequal pay for substantially similar work. Some feel that the new law is good news for everyone from cleaning crews to Hollywood’s biggest actresses.

Practically speaking, the law could lead to renewed interest in job analysis (let’s not get too excited, OK?). The law is written so that the burden is on the employer to demonstrate that a difference in pay is due to job-related factors and not gender. So, if someone is going to argue that two jobs are substantially similar, there is going to need to be some data to back that up.

The law states that similarities are based on “a composite of skill, effort, and responsibility, and performed under similar working conditions.” A good job analysis will quantify these factors so that jobs can be compared. Who knows what statistical test tells you when jobs are substantially similar, but the data will at least tell you whether they are nearly the same or really different, and that’s a start. Regardless, I’m guessing that the meaning and demonstration of “substantially similar” will be litigated for a while.
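
The law doesn’t name a statistical test, but here is a rough sketch of the kind of comparison a quantified job analysis makes possible. The jobs, the 1-7 ratings on each compensable factor, and the similarity threshold are all hypothetical; the point is simply that rated factors let you measure how far apart two jobs are.

```python
import math

# Hypothetical job-analysis ratings (1-7 scale) on the compensable factors in the law.
JOBS = {
    "Custodian":       {"skill": 2, "effort": 5, "responsibility": 2, "working_conditions": 5},
    "Housekeeper":     {"skill": 2, "effort": 5, "responsibility": 2, "working_conditions": 4},
    "Facilities Lead": {"skill": 4, "effort": 4, "responsibility": 5, "working_conditions": 4},
}

FACTORS = ["skill", "effort", "responsibility", "working_conditions"]

def distance(job_a, job_b):
    """Euclidean distance between two jobs across the rated factors."""
    return math.sqrt(sum((JOBS[job_a][f] - JOBS[job_b][f]) ** 2 for f in FACTORS))

def substantially_similar(job_a, job_b, threshold=1.5):
    """Illustrative rule of thumb only; the legal standard will be worked out in court."""
    return distance(job_a, job_b) <= threshold

if __name__ == "__main__":
    for pair in [("Custodian", "Housekeeper"), ("Custodian", "Facilities Lead")]:
        print(pair, round(distance(*pair), 2), substantially_similar(*pair))
```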

The other impact of this law is likely to be on salary/raise negotiations. There’s plenty of data which indicates that men are less averse to this process than women, and this has real economic impacts. Companies may want to consider whether to make non-negotiable offers to avoid bias claims.

California, as usual, is setting a new standard in equal pay legislation. There’s the usual concern that this will cost the state jobs, but it may also attract more professional women. Either way, companies will need to review their compensation structures and determine which jobs are substantially similar to each other.

For more information on analyzing and grouping job titles, please contact Warren Bobrow.

Yes, We Are All Biased, But We Don’t Have to Be

Nearly all judgments we make about people are subject to some bias. We carry around these mental shortcuts so that every social situation doesn’t have to consist of obtaining all new information. I will leave it to the evolutionary biologists to fill in the details as to why we do this.

From a practical point of view, these biases invade our work-related decisions, such as deciding who did better in an interview, which employee should get a special assignment or a higher performance evaluation, and so on. Of course, these biases go both ways. Employees are making the same types of judgments about their boss, interviewer, etc.

We have good ways to minimize these biases in hiring tools (evaluating test scores by group to ensure that different groups are scoring equivalently, adding structure to interviews, using objective performance metrics rather than ratings, etc.). However, these biases also extend to how we communicate more broadly.
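
As a minimal sketch of what “evaluate test scores by group” can mean in practice, here is a standardized mean difference (Cohen’s d style) comparison between two groups. The groups, scores, and the 0.5 flag are invented for illustration and are not a professional or legal standard; a real review would also look at sample sizes, selection rates, and validity evidence.

```python
from statistics import mean, stdev

def standardized_mean_difference(group_a, group_b):
    """Cohen's d style effect size between two groups' test scores."""
    pooled_sd = ((stdev(group_a) ** 2 + stdev(group_b) ** 2) / 2) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

if __name__ == "__main__":
    # Hypothetical scores; in practice these come from your applicant-tracking data.
    group_a = [78, 85, 82, 90, 74, 88]
    group_b = [70, 80, 76, 84, 69, 81]
    d = standardized_mean_difference(group_a, group_b)
    print(f"Standardized mean difference: {d:.2f}")
    if abs(d) >= 0.5:  # illustrative flag, not a legal threshold
        print("Gap is large enough to take a closer look at the test and the applicant pool.")
```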

Take a look at (or a listen to) this story. It describes steps that a company took to widen its applicant pool (BTW: this is my favorite way to combat adverse impact). Through an analysis of the language in its job postings, the company found that certain words and phrases would encourage or discourage certain applicant groups. Changes were made, and applications increased.

The article addresses two uncomfortable truths:

  • We all have biases.
  • They cannot be trained away.

The second one is a bit tougher for my friends in OD to deal with, because a core tenet of diversity training is that if we are aware of our biases we can somehow eliminate them. The research indicates that this is not the case.

However, in recruiting and selection, we can take steps to reduce bias from the process, including:

  • Careful wording of recruitment notices so that they don’t send unintended messages that would lead members of certain groups not to apply.
  • Using selection tools which minimize human bias, such as validated pre-employment tests. Perhaps this also means using audio, instead of video, for evaluating interviews, assessment center exercises, and work sample tests. Many symphonies now do this when evaluating musicians.
  • Adding as much structure as possible to interview protocols.

We know that good selection techniques have a higher ROI than training. Likewise, it is more cost efficient to implement good practices to mitigate bias than to train it out of people.

What are you doing to reduce bias in your selection/promotion procedures?

For more information on valid pre-employment testing, structured interviews and other fair selection techniques, please contact Warren Bobrow.

Keep Your Statistics, Please.

Target has had a rough time with pre-employment tests. They previously lost a case over using a clinical psychology instrument to hire security guards. Now they have settled again with the EEOC for using tests with adverse impact. I’m very curious as to which tests they were using, but I haven’t been able to find out online, and since they settled the case, they don’t have to disclose the information.

For those of you who are using pre-employment tests (and shame on those of you who are not!), there are a few very important takeaways from the case:

  • Do your adverse impact analyses when you implement the tests AND periodically as you are using them. Why? According to the EEOC, “The tests [Target was using] were not sufficiently job-related. It’s not something in particular about the contents of the tests. The tests on their face were neutral. Our statistical analysis showed an adverse impact. They disproportionately screened out people in particular groups, namely blacks, Asians and women.” Just because your tests do not look like they should have adverse impact doesn’t mean that they don’t. (A basic version of this check is sketched after this list.)
  • Really, how good is your validity evidence? The key quote above is “not sufficiently job-related,” which really means the job-relatedness of the tests was not strong enough to support the adverse impact they had. Having a valid test is your defense against an adverse impact claim. Oh, and it’s also the way to show others in your organization the tests’ value.
  • Track your data. I was gobsmacked that Target “failed to maintain the records required to assess the impact of its hiring procedures.” After all, this is the company that knows when women are pregnant before their families do. If you’re the cynical type, you are probably thinking, “Well, they knew it would be bad, so they didn’t keep track of it.” If you get a visit from the EEOC (or your state equal opportunity agency), they won’t look kindly on you not having this kind of information, and it makes you look guilty. Part of the responsibility of having a pre-employment testing program is tracking its adverse impact and validity. If you are thinking of outsourcing it, find out how your contractor plans on tracking the data.
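
Here is that basic adverse impact screen (the four-fifths rule) as a minimal sketch with hypothetical hiring counts. A real adverse impact analysis would also consider statistical significance and sample size, so treat this as a first-pass check, not a legal conclusion.

```python
def selection_rates(outcomes):
    """outcomes maps group -> (number_selected, number_of_applicants)."""
    return {group: selected / applicants for group, (selected, applicants) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Four-fifths rule: flag any group whose selection rate is below 80%
    of the highest group's rate. This is a screening heuristic, not a full
    statistical analysis."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: (rate / top_rate, rate / top_rate < 0.8) for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical hiring data: (hired, applied) by group.
    data = {"Group A": (48, 120), "Group B": (22, 90), "Group C": (30, 70)}
    for group, (ratio, flagged) in adverse_impact_ratios(data).items():
        print(f"{group}: impact ratio {ratio:.2f}{'  <-- review' if flagged else ''}")
```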

In the end, Target figured it was worth $2.8 million to make this go away, especially since they claim they are not using the tests anymore. They can probably find that money between the cushions they sell. What’s left unanswered is whether they will continue to use different tests to select managers and others.

For the rest of us, Target offers a cautionary tale. Big class-action lawsuits about tests are riding off into the sunset because the standards for validation and implementation are codified into US law. The standards are clear, and they are ignored at your peril.

For more information on using validated pre-employment tests, contact Warren Bobrow.

Expediency vs. Effectiveness

I’ve blogged several times (here, here and here) about the City of LA’s firefighter selection process, and more specifically about how factors besides the validity of the tests, interviews, etc. are being used to cull applicants. Since my last post on the subject, RAND has completed a study of the city’s firefighter selection process. Full disclosure: my wife works at RAND, but she was not involved in this study.

The paper is a good read and provides a solid overview of conducting validation studies. As the title suggests, the authors’ task was to suggest to the city how to improve the recruiting and hiring of firefighters. The study outlines how the city currently attracts and screens applicants. The authors then provide recommendations for streamlining these processes and making them more valid.

What is clear throughout the study is that the city’s biggest issue in managing this process is the sheer number of applications it gets. Several selection decisions are made based on reducing the number of people in the process. This was the driver behind the city stating that applications would be evaluated on a first come, first served basis, which led to the application window being cut off one minute after it opened.

The city is between a rock and a hard place when it comes to narrowing the applicant pool early in the process. The most pressing constraint is that it does not have the budget to process as many applications as it receives. One would think that a solution would be to raise the passing scores on the tests. However, based on the data presented, raising the passing score on the written test would lead to adverse impact against African-Americans and Hispanics, and doing so on the physical abilities test would negatively affect women. Interestingly, the city’s “first come, first served” policy led to even more adverse impact against racial minorities and women. Some will say this is because the policy was not well publicized outside of the fire department, which gave an advantage to friends and family members of existing firefighters (the data show that a very high percentage of new hires in the department are family members of current firefighters, who tend to be white males). To its credit, the interview process, which can often lead to adverse impact, has been shown to be fair to racial minorities and provides an advantage to women over men.
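
To make that trade-off concrete, here is a small sketch of how you might preview what raising a written-test cutoff would do to each group’s passing rate before committing to it. The scores, groups, and cutoffs are invented for illustration.

```python
def passing_rates(scores_by_group, cutoff):
    """Share of each group scoring at or above the cutoff."""
    return {
        group: sum(score >= cutoff for score in scores) / len(scores)
        for group, scores in scores_by_group.items()
    }

if __name__ == "__main__":
    # Hypothetical written-test scores by applicant group.
    scores = {
        "Group A": [72, 81, 90, 65, 88, 77, 93, 70],
        "Group B": [68, 74, 85, 61, 79, 70, 82, 66],
    }
    for cutoff in (70, 75, 80):
        rates = passing_rates(scores, cutoff)
        top_rate = max(rates.values())
        for group, rate in rates.items():
            print(f"Cutoff {cutoff} | {group}: pass rate {rate:.2f}, impact ratio {rate / top_rate:.2f}")
```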

I was pleased to see that RAND’s suggestions for reducing adverse impact were not to make the test(s) easier to pass, but to target recruiting efforts at minorities and women who are likely to pass, which would increase passing rates. Specifically, the report suggested reaching out to female athletes (more likely to pass the physical abilities test) and minority valedictorians (more likely to pass the written test). The former is a solid idea. However, I’m thinking that school valedictorians (and their parents) are normally looking for a career path that includes a 4-year college and a job in a knowledge industry, but you never know.

Most interesting in the report is the city’s focus on managing the numbers rather than the quality of the process. The city insisted that RAND analyze the impact of randomly choosing people to continue in the process when the number of applications gets too large. This is a solution that does nothing to improve the quality of the firefighters hired and is as likely to make the adverse impact worse as it is to make it better. The study suggests using random sampling by specific groups (stratified sampling), but that does not change the fact that people with lower test scores are going to be chosen over those with higher ones. Not exactly a recipe for staying out of court.
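
For what it’s worth, here is a minimal sketch of the stratified random sampling the report suggests, with hypothetical applicant lists. It keeps each group’s share of the pool constant, but notice that scores play no role in who advances, which is exactly my objection.

```python
import random

def stratified_sample(applicants_by_group, total_to_keep):
    """Randomly keep applicants from each group in proportion to that group's
    share of the overall pool. Test scores are ignored entirely."""
    pool_size = sum(len(applicants) for applicants in applicants_by_group.values())
    kept = []
    for group, applicants in applicants_by_group.items():
        n_keep = round(total_to_keep * len(applicants) / pool_size)
        kept.extend(random.sample(applicants, n_keep))
    return kept

if __name__ == "__main__":
    random.seed(0)  # reproducible example
    pool = {
        "Group A": [f"A{i}" for i in range(600)],
        "Group B": [f"B{i}" for i in range(300)],
        "Group C": [f"C{i}" for i in range(100)],
    }
    kept = stratified_sample(pool, total_to_keep=200)
    print(f"{len(kept)} applicants advance; their test scores played no role in the cut.")
```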

I do not understand why the city sets its hiring schedule in such a boom-or-bust fashion. Test results are good for a year, so why not accept applications several times during the year? RAND also makes other suggestions for managing the number of applications, such as putting more of the background screening at the front end (and online). Yes, all of this costs money, but so does scrapping a system, creating a new one, and hiring RAND to make recommendations. The city’s focus should be on investing in a firefighter selection system that delivers the best available firefighters while minimizing adverse impact, not on making short-term decisions based on cost.

For more information about validated pre-employment test practices and services, please contact Warren Bobrow at 310 670-4175.
