Eliminating Subtle Age Bias

Since age bias is something that nearly all HR professionals will have to deal with, I am surprised that it does not get more attention. But with the average age of U.S. employees going up (see here) and companies likely to recruit more older workers while unemployment is near recent lows, we are likely to see more attention paid to it, particularly in the technology field.

As with most bias, it can be introduced subtly. For example, the term “digital native” describes those born roughly after 1990 who have had current technology (the internet, smartphones, etc.) for pretty much their whole lives. A quick Indeed.com search shows many jobs where “digital native” is part of the description. Put another way, those older than 35ish should think twice before applying. Similarly, there is a whole literature (this article is an example) on how gender-loaded terms in job postings affect who will respond to them.

Now, I get that when you are advertising for tech jobs, you are looking for employees who are completely comfortable in a digital environment and communicating with others who are. But those are behaviors that can be assessed with valid pre-employment tests, without having to make assumptions about a person’s age.

And that is really the point about implicit bias—we make assumptions about groups without understanding people as individuals. In employee selection, we face the challenge of creating processes that treat everyone fairly while still learning about candidates as individuals. It is a difficult needle to thread, but one that our businesses depend on us to thread well. Using a combination of unbiased language and valid pre-employment tools can help us get there.

Or, if you would rather beat them than join them, you can open an art gallery that only focuses on artists ages 60 and older.

While I am Likely to be Wrong, Allow me to Continue

Interviews are worse predictors of job success than you think.  However low your opinion of them is as you read this, their actual validity is lower still.  Yet there is an insistence that they are better than they are, and no company I know of is willing to give them up.  Why is this?

Of course, it’s because conducting them is ingrained in our corporate cultures.  Thomas Edison (supposedly) conducted the first one.  Note, though, that in the example it was a test and not an interview, which ensured it could not be less valid than an interview, even with the ridiculous questions cited.  Obviously, if a guy as smart as Thomas Edison was doing it, it must be right.  Then again, he was not trained in understanding human behavior.

The problem with interviews (beyond just these) is that they are fraught with noise.  As I’ve written about before, interviewers (and that includes all of us) are loaded with biases that skew a good deal of the information we get from the person being interviewed.  Of course, the person being interviewed has likely prepared for the questions.  In fact, some millennial job seekers I know tell me that they are rarely asked a question they haven’t seen online, and when they do get one, they post it immediately.  Because of all this pre-work, the interviewer gets canned answers that may not reflect the person.  That may be a fine attribute when hiring someone who is not supposed to give his/her opinion on anything, but it hurts the validity of even the best structured interviews.  As an aside, if you MUST interview, please make it highly structured and use it as a lever to find out about the person’s job-related skills.

So, what is a company to do?  Go on a blind date with candidates?

I would suggest making your hiring decision BEFORE you interview.  Let the interview be the last bit of data that might break ties.  For instance, be sure that someone you are hiring for a customer-facing position can actually make eye contact and put a few sentences together.  Or use it to double-check the person’s availability for your work hours, give them a tour/realistic job preview, etc.  This gives HR or the hiring manager a last look without adding non-predictive noise to the process.  And think about how much time you will save!

It is not your fault that interviews do not predict performance.  The question is what are you going to do to prevent them from messing up your hiring process?

Is Seeing Really Believing?

Something I hear frequently from clients is, “I wish I had a day/week/month to see my candidates do the job.  Then I would make fewer hiring mistakes.”  It is, of course, an intriguing idea.  We test drive cars before we buy them.  Why not try out people before we hire them?

There is a long history of sampling work behavior in selection systems, whether it be using Assessment Centers to hire or promote managers and executives or having candidates make things for craft positions.  The accuracy of these types of assessments is good, falling somewhere between cognitive ability tests and interviews.  For candidates, the appeal is that they can really show what they can do rather than have their job-related skills or personality inferred from a multiple-choice test.

The issues in using a job tryout would include:

  • Paying the person for their time. There is an ethical, in some cases legal, issue in having a person work for free.  So, be prepared for your cost per hire to go up significantly.
  • Candidates would either need flexible schedules or plenty of PTO to participate in such a program.
  • Having meaningful work for the candidates to do. If you are going to narrow the gap between what the assessment and the job look like, you need projects that affect processes, customers, etc., that you would be willing to have a short-term contractor do, or work that you already have contractors doing.
  • Determining how to score the job tryout. Most organizations do a pretty poor job of measuring job performance over a full year, let alone a couple of days.  Developing scoring criteria is key for making good decisions and avoiding bias (a minimal scoring sketch follows this list).
  • Having someone who is not your employee perform work that could affect your customers or the safety of others will make your attorney break out in a cold sweat.  This by itself should not convince you to skip job tryouts, but you will have to sell that person on the idea.
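To make the scoring point concrete, here is a minimal sketch of what structured tryout scoring might look like: multiple trained raters independently score predefined, job-related dimensions, and their ratings are averaged and weighted. The dimensions and weights below are hypothetical, purely for illustration.

```python
# A minimal sketch of structured tryout scoring. Multiple trained raters score
# predefined, job-related dimensions on a 1-5 scale; dimension ratings are
# averaged across raters and then weighted. Dimensions and weights are
# hypothetical, not taken from any company's actual rubric.
from statistics import mean

RUBRIC_WEIGHTS = {"work_quality": 0.4, "collaboration": 0.3, "speed": 0.3}

def tryout_score(ratings_by_rater: list[dict[str, int]]) -> float:
    """Average each dimension across raters, then apply the rubric weights."""
    return sum(
        weight * mean(rater[dim] for rater in ratings_by_rater)
        for dim, weight in RUBRIC_WEIGHTS.items()
    )

# Two raters independently score the same candidate's tryout work.
print(tryout_score([
    {"work_quality": 4, "collaboration": 3, "speed": 5},
    {"work_quality": 5, "collaboration": 4, "speed": 4},
]))  # 4.2
```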

What got me thinking about job tryouts was this article.  I was impressed that the company had thought through the problems in their selection process and come up with a creative way to address them. They certainly handle the pay issue well, and they currently have the growth and profitability to make the program worthwhile. What is left unsaid, but communicated through some derisive comments about multiple-choice tests, is that they feel using tests would not fit their culture well.

My concern is that they are more worried about “fit” than skills.  This also translates into not having an objective way to evaluate how well a person did, which leads me to believe they will run into the problem of only hiring people who are just like them.

Lastly, they have a pretty high pass rate that “feels right.”  If I worked for them, I would be concerned that a lot of time and effort is being spent confirming what was seen in the less valid interview.  This is particularly striking in a company where metrics are important for everything else.  Having people work for you for a few days without an objective way to measure how well they did is not going to yield better candidates than a series of interviews.

Advances in selection tools will likely come from start-up companies that are not bound by tradition when it comes to hiring.  The tech sector presents a lot of opportunities to improve valid selection systems by its nature: these companies are set up to disrupt, and they gather a lot of data.  That is a great platform for seeing what people do before you hire them to do it.

What Implicit Bias Looks Like

The idea of implicit bias has been making its way into the business vernacular.  It involves the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner.  As you probably gathered from the definition, implicit bias is something we all have.  These biases are little mental shortcuts which can lead to discriminatory behavior.

Examples of implicit bias are found throughout the hiring process, including recruiting, interviews, and performance appraisals.  I think you will find this interview very helpful in understanding how these biases creep into our decision making.  It really breaks the abstract down into actual behaviors and their impacts.

This is the point in the blog where I would normally offer a prescription.  The only problem is that there are no good empirical studies showing how to reduce implicit bias itself.  There are some lab studies with college students that support short-term effectiveness, but some police departments swear the trainings are a waste of time.  So, the jury is still out.  But there are some things you can do to reduce the opportunity for bias:

  • You can (mostly) decode gender out of job postings.
  • Take names off of applications before they are sent for review. The law requires that race, gender, and age information be optional on applications to help avoid discrimination.  For the same reason, you should redact names on applications and resumes before they are evaluated (if they are not already being machine scored).
  • If you are using pre-employment tests that do not have adverse impact, weight them more than your interviews, which are likely loaded with bias (a sketch of one such weighted composite follows this list). If you insist on putting final decisions in the hands of interviewers, use a very structured process (pre-written questions, detailed scoring rubrics, etc.).
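As a quick illustration of that weighting idea (a sketch, not any particular vendor’s method), combining standardized scores with a heavier weight on the test could look like this. The 70/30 split is an illustrative assumption, not a researched recommendation.

```python
# A minimal sketch (not any particular vendor's method) of combining
# standardized scores so the test counts more than the interview.
# The 70/30 split is an illustrative assumption, not a recommendation.
def composite_score(test_z: float, interview_z: float,
                    test_weight: float = 0.7) -> float:
    """Combine standardized (z) scores, favoring the more valid test."""
    return test_weight * test_z + (1 - test_weight) * interview_z

# A strong test score outweighs a mediocre structured-interview score.
print(composite_score(test_z=1.2, interview_z=-0.3))  # ~0.75
```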

All humans have implicit biases—we want to be surrounded by our in-group.  A reduction in these biases, or at least fewer opportunities to express them, will likely lead you to a more diverse, and better performing, team.

The Challenge in Finding Good Performance Data

In validating tests, getting hold of good individual performance data is key.  But it is also one of the more difficult parts of the process to get right.

Intuitively, we all think we can judge performance well (sort of like we all think we are good interviewers).  But we also know that supervisor ratings of performance can be, well, unreliable.  This is so much the case that there is a whole scientific literature about performance appraisals, even as there is currently a movement within the business community to get rid of them.

But what about objectively measuring performance (for every new account opened, you get $X)?  If the Wells Fargo imbroglio tells us anything, it’s that hard measures of performance that are incentivized can run amok.  Also, while they are objective, single measures (sales, piecework manufacturing, etc.) rarely reflect the entirety of performance.  Lastly, for jobs where people work interdependently, it can be very difficult to determine exactly who did what well, even if you wanted to.

So, what’s one to do?

  • Establish multiple measures of performance. For instance, call centers can measure productivity (average call time) and quality (the number of people who have to call back a second time).  Don’t rely on just one number (a minimal sketch of combining measures follows this list).
  • Even when a final product is the result of a group effort, each individual is still responsible for some pieces of it. If you focus on key parts of the process, you can find those touch points which are indicative of individual performance.  Again, look for quality (was there any rework done?) and productivity (were deadlines met?) measures.
  • Objective performance measures do not have to have the same frequency as piece work or rely on one “ta-da” measure at the end. Think of meeting deadlines, whether additional resources were required to complete the work, etc.
  • Don’t get bogged down in whether a small percentage of people can game objective measures. We seem OK with rampant errors in supervisory judgment, but then get all excited because 1 out of 100 people can make his productivity seem higher than it is.  If you dig into the data, you are likely to be able to spot when this happens.
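Here is a minimal sketch of that call-center example, with made-up numbers: standardize each measure so they are on the same scale, flip the signs so higher means better, and average them. The equal weighting is just the simplest starting point; the measures and weights should come from a job analysis.

```python
# A minimal sketch, with made-up call-center numbers, of combining a
# productivity measure and a quality measure into one performance score.
import numpy as np

avg_call_time = np.array([310.0, 275.0, 420.0, 305.0, 250.0])  # seconds; lower is better
callback_rate = np.array([0.08, 0.12, 0.05, 0.15, 0.07])       # repeat-call rate; lower is better

def z(x: np.ndarray) -> np.ndarray:
    """Standardize a measure so it can be averaged with others."""
    return (x - x.mean()) / x.std()

# Flip signs so that higher = better, then average the standardized measures.
performance = (-z(avg_call_time) - z(callback_rate)) / 2
print(performance.round(2))
```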

When I hear people say that you cannot measure individual performance well, I cringe.  Of course you can.  You just need to know where to look and focus on what is important.

How Far Will You Go to Emphasize a Culture of Customer Service?

I am a regular listener to the podcast This American Life.  Recently, they had a segment on customer satisfaction and L.L. Bean’s extreme interpretation of it (“Our products are guaranteed to give 100% satisfaction in every way. Return anything purchased from us at any time if it proves otherwise. We do not want you to have anything from L.L.Bean that is not completely satisfactory.”).  You can read the transcript here (go down to “Act Two. Bean Counter”) or download the podcast (I recommend the latter).  Some of the stories are stunning.

The first issue is: who gets to define customer satisfaction?  The company?  The customer?  Of course, only the customer knows if s/he is truly satisfied with a product or service, and that satisfaction generally derives from whether or not they got exactly what they wanted out of the transaction. So, the question really is: what will a company do when the customer is not satisfied?

In non-competitive marketplaces (think government services or other monopolies), don’t hold your breath.  To them, customer satisfaction is a cost, not an investment.  There is no incentive for them to make you happy (though one could argue that working in a customer-service-oriented environment would lead to a better culture, which leads to better word of mouth when it comes to recruiting).  The same goes for pseudo-competitive marketplaces (like cable/satellite TV) where you have some limited choices but making a switch is really a pain in the ass.

However, in competitive marketplaces, like apparel, delivering superior customer service is a differentiator.  All of us can think of how different retailers interpret customer satisfaction and the investment they put into it.

What I found most interesting in the podcast was the training that went into ensuring that L.L. Bean’s return policy was carried out.  It’s one thing to say, “OK, we’ll take back anything that you bring back for store credit” and another to do it in a way that provides a satisfying experience for the customer.  Note that they trained the people who work in returns to execute both the letter and the spirit of the policy.  L.L. Bean also selects people for that department who have the personality and skills to carry it out.

What this tells us is that customer service does not just happen.  Rather, it is a combination of strategy, culture, and the right people to treat customers the way the company wants to treat them.  What is your plan to align all of these?

What We Find at the Intersection of Management and Psychology

There’s a figurative store where the roads of Management and Psychology cross.  The shelves up front have the new and shiny theory or practice.  More likely than not, it will join the former new and shiny ideas in the dingy back of the store.  Some are just flat out wrong and others are just a repackaging of what’s already out there.  It’s kind of depressing in that the time would have been better spent working on something truly innovative.

A common theme of these books is denigrating the role of intelligence in employee selection.  Let’s be clear—there is a mountain of research that shows that for most jobs, the smarter people (using Western measures of intelligence for doing jobs in Western economies) will perform better. And these tests are consistently better predictors than non-cognitive (e.g., personality) assessments.  Ignoring these facts reduces the value that HR brings to an enterprise.

Cognitive ability tests are not perfect predictors, and even if they were, there is plenty of room left to find additional ones. This is the space the shiny new theories try to fill.  In addition, the new characteristics cannot be traits; they have to be skills that can be developed (y’know, so the author can sell seminars, workbooks, etc.).  This, combined with the current wave of anti-intellectualism in the U.S., leads to the search for something new, but not necessarily innovative.

The questions are:

  • What value do these “new” methods bring (e.g., do they work) and
  • Are they really different than what we already have?

One of the shiniest new objects in the store is Grit.  The name taps into a very American cultural value: if you dig deep and try hard, you will succeed.  Right there with pulling yourself up by the bootstraps.  While its proponents don’t claim that it’s brand new, they won’t concede that it is just a shining up of something we already have in Conscientiousness (one of the Big 5 personality traits).  Conscientiousness is a good and consistent predictor of job performance, though not as good as cognitive ability.  Measures of Grit are very highly correlated with those of Conscientiousness (Duckworth et al., 2007, 2009), so it’s likely that we are not dealing with anything new.

Does this spiffed-up version of an existing construct really work?  For that, we can go to the data.  And the data say no.  The research currently shows that only one of Grit’s factors (perseverance) is at all predictive, and it doesn’t predict beyond measures that we already have.
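For those who want to see what “doesn’t predict beyond” means, here is a minimal sketch of an incremental-validity check on simulated data (the correlations are assumptions for illustration, not Duckworth’s actual numbers): fit performance on Conscientiousness alone, then add Grit and see how much R-squared improves.

```python
# A minimal sketch of an incremental-validity check on simulated data.
# The correlations are illustrative assumptions, not published findings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
conscientiousness = rng.normal(size=n)
# Grit simulated as highly correlated with Conscientiousness (r ~ .8)
# and adding nothing of its own to performance.
grit = 0.8 * conscientiousness + 0.6 * rng.normal(size=n)
performance = 0.3 * conscientiousness + rng.normal(size=n)

# Step 1: Conscientiousness alone.
r2_base = sm.OLS(performance, sm.add_constant(conscientiousness)).fit().rsquared

# Step 2: add Grit and check how much R-squared improves.
X = sm.add_constant(np.column_stack([conscientiousness, grit]))
r2_full = sm.OLS(performance, X).fit().rsquared

print(f"R2, Conscientiousness only: {r2_base:.3f}")
print(f"Incremental R2 from adding Grit: {r2_full - r2_base:.3f}")
```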

I am all for innovation, and industrial psychology is really in need of some.  But chasing the new and shiny is not going to get us there.  It’ll just clog up bookshelves.

Blind Hiring

I wrote a few weeks ago about Intel’s drive to diversify its workforce. Regular readers know that I write about bias from time to time. It’s good that the topic occasionally makes it into the mainstream media when it is not related to a lawsuit.

The article talks about techniques to reduce bias. Some are old (truly blind auditions for musicians) and others are new, such as software that provides only the relevant hiring info without showing a person’s name, school attended, or other information that could bias the hiring manager. This puts a premium on validated tests, which I like. Though I’m sure some readers would argue that some of these tests are biased as well, that’s a topic for another post.

This is all well and good, but as any logistics or customer service person will tell you, it’s the last mile that really matters. I can have as diverse a candidate pool as I want, but if there is bias in the interviewing process, I will be rejecting qualified candidates for non-valid reasons. So, what’s a hiring manager to do?

First, give less weight to the interview and/or make it more valid. Why this barely-better-than-a-coin-flip technique makes or breaks hiring decisions while proven and validated techniques are shoved to the side is beyond me. OK—I get it. People want to feel in control and have buy-in to the hiring process. But can we at least be more rational about it? Interview scores should be combined with other data (with appropriate weighting), and the overall score should be used to make hiring decisions, not one unreliable data point.
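To see why the weighting matters, here is a quick simulation. All of the numbers are assumptions for illustration (a test correlating .5 with true performance, an interview correlating .2), but the pattern holds: the more weight the interview gets, the less valid the overall score.

```python
# A quick simulation of composite validity as the interview gets more weight.
# Assumed (illustrative) validities: test r = .5, interview r = .2.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
perf = rng.normal(size=n)  # true job performance
test = 0.5 * perf + np.sqrt(1 - 0.5**2) * rng.normal(size=n)
interview = 0.2 * perf + np.sqrt(1 - 0.2**2) * rng.normal(size=n)

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    composite = w * interview + (1 - w) * test
    validity = np.corrcoef(composite, perf)[0, 1]
    print(f"interview weight {w:.2f} -> composite validity {validity:.2f}")
```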

Second, why not blind interviewing? Hear me out. How many jobs really require someone to think on their feet and give oral answers to complex questions? Sure, there are some (sales, for instance), but not that many. Why not have candidates submit written answers to interview questions? The scoring would be more reliable (evaluating grammar/spelling could be optional for jobs where it’s not critical), and accents, gender, and skin color would be taken out of the equation. Think about it.
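If you wanted to pilot this, a first pass at blinding the written answers could be as simple as the sketch below. The helper is hypothetical, and a production version would need to catch nicknames, email signatures, and the like.

```python
# A minimal sketch (helper and field names are hypothetical) of anonymizing
# written interview answers before raters see them: replace the candidate's
# name with a random ID so raters score only the content.
import uuid

def anonymize_answers(candidate_name: str, answers: list[str]) -> dict:
    """Strip the name from the answers and attach a random review ID."""
    review_id = uuid.uuid4().hex[:8]
    scrubbed = [a.replace(candidate_name, "[candidate]") for a in answers]
    return {"id": review_id, "answers": scrubbed}

packet = anonymize_answers("Jordan Smith", [
    "As Jordan Smith, I handled escalated billing calls for two years...",
])
print(packet["id"], packet["answers"][0])
```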

Lastly, a diverse workforce is a result of a valid and inclusive selection process. When companies approach it the other way (working backwards from hiring goals by demographic group), they miss the point. Diversity isn’t about filling buckets. It’s about providing equal opportunity every step of the way when hiring.

For more information on valid pre-employment testing hiring practices, contact Warren Bobrow.

A Crazy Way To Test Candidates

You think you have it bad when hiring. Imagine if:

  • All of your entry level job candidates were known to your entire industry and customers.
  • You and all of your competitors had access to exactly the same background, pre-employment, and past performance data, outside of your one chance to interview this person.
  • Oh, and at least one of the pre-employment tests given doesn’t correlate with the performance of your most critical employees.
  • The cost of acquiring the labor is huge and the compensation levels are fixed.
  • If you make a mistake, it takes a year to correct.
  • It may be 3 years before you know if you made a good hire.
  • The order of when you and your competitors can make a job offer is pre-determined, though for a high price you can jump the line.
  • And this all takes place on national television in front of your customers.

Welcome to the drafting of professional sports players in the United States. And this time of the year, the focus is on the National Football League (NFL).

I bring this up because the NFL brings nearly all of the prospective players to a group workout called the combine, which leads up to the drafting of players in April. At the combine, the players are prodded and poked by medical staffs, given psychological tests, and put through a variety of physical skill exercises. Teams also have a chance to interview players individually. The combine is organized so that the teams can see what the roughly 300 players can do without flying them all over the country. For players’ perspectives on the combine and the drafting process, click here and here.

The oddest thing about the combine is that teams take single measurements of core skills (speed, jumping ability, etc.) when they have access to recordings of every single play in which a player has participated (real performance). Granted, different players go against different levels of competition, but you would think that about 1,000 samples of a person’s performance would be a better indicator than how fast he covers 40 yards (usually a touch under 5 seconds, even for the big guys). The interviews can be all over the map, with clubs asking about drinking behavior (the players are college students) and the ability to break down plays. And then many players get picked by teams that didn’t interview them at all.

From a validation point of view, performance data on players are readily available now. Much like call centers, the NFL records some very detailed individual statistics, not just team wins and losses, to evaluate players. Whether the number of times a defensive lineman can bench press 225 lbs correlates with tackles for loss is not known (or at least not published), but you get the idea.
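The validation math itself is simple; here is a minimal sketch with made-up numbers:

```python
# A minimal sketch of the validation idea: correlate a combine measure with an
# on-field performance statistic. All numbers are made up for illustration.
import numpy as np

bench_press_reps = np.array([18, 25, 31, 22, 28, 35, 20])   # reps at 225 lbs
tackles_for_loss = np.array([4, 6, 5, 7, 9, 6, 3])          # first-season total

r = np.corrcoef(bench_press_reps, tackles_for_loss)[0, 1]
print(f"validity coefficient r = {r:.2f}")
```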

Much is made of the pressure the players are under to perform well at the combine. This is probably more so for those from smaller schools or those with whom the teams are less familiar. But the pressure is also really on the talent scouts (sports’ version of recruiters). Each team only gets to pick 7 players in the draft. Undrafted players can be signed by any team, and history shows that they have a surprisingly high success rate (see below).

Because of the amount of data available on players, the draft process is reasonably efficient: by the metrics of percentage of players in the starting lineup by draft position, turnover (which is mostly involuntary), and high performance (measured by being voted onto the all-star team), higher-drafted players do better than lower-drafted ones. Of course, the higher a player is taken in the draft, the more he’s paid for the first part of his career, so there is some financial bias toward starting higher-drafted players. Interestingly, undrafted players perform at the same level on these metrics as third-round picks. Perhaps there’s something to having a chip on your shoulder.

What we can learn from the NFL is that when there’s a lot of data available, you can make better selection decisions, even when your competitors have the same data. Second, there’s still plenty of good (though not the best) talent available that’s overlooked by the masses. Finding that inefficiency in the selection process and addressing it can lead to a significant competitive advantage. A good validation process can help you do that.

For more thoughts and insights regarding pre-employment test validation, contact Warren Bobrow.

Curious About Openness

One of my favorite personality scales to administer is Openness to New Experiences. It is one of the “Big 5” personality constructs and is supported by a great deal of research. People who score high on this scale seek new experiences and engage in self-examination. They draw connections between seemingly unconnected ideas. People who score low are more comfortable with things they find familiar.

I bring this up this week because I have heard from a few clients who want to hire people who are “curious.” Also, I came across this interview where the CEO was talking about looking for curious people. Note that he’s dead wrong in thinking that Openness is not related to intelligence. Why is it that people go out of their way to denigrate cognitive ability testing when it is THE most accurate predictor for most jobs? OK, that’s for another post on another day.

Part of this trend may come from gaming. Being successful in a game requires searching every place available for the clue, weapon, or whatever else allows you to get to the next level. Gaming is also an environment that welcomes failure: those who show curiosity, problem-solving ability (at least learning the logic of the programmer), and the desire to keep learning will be successful.

Measuring curiosity as an outcome is an entirely different story. However, a measure should include time spent on research, a willingness to fail, and the use of unique sources of information when developing a solution.

I am intrigued (curious?) about this interest in Openness/Curiosity and I plan to follow-up on it. Is Openness/Curiosity important to your firm or practice? If so, what are you doing to measure it in your candidates?
