Eliminating Subtle Age Bias

Since age bias is something that could affect nearly all HR professionals, I am surprised that it does not get more attention. But, with the average age of employees in the U.S. going up (see here) and companies likely to recruit more older workers due to the unemployment rate being near recent lows, we are likely to see more attention paid to it, particularly in the technology field.

As with most bias, it can be introduced in a subtle way. For example, the term “digital native” describes those born roughly after 1990 who have had current technology (the internet, smartphones, etc.) pretty much their whole lives. A quick Indeed.com search shows many jobs where “digital native” is part of the description. Put another way, those older than 35ish should think twice before applying. Similarly, there is a whole literature (this article is an example) on how gender-loaded terms in job postings affect who will respond to them.

Now, I get that when you are advertising for tech jobs you are looking for employees who are completely comfortable in a digital environment and communicating with others who are. But those are behaviors that can be assessed with valid pre-employment tests, without having to make assumptions about a person’s age.

And that is really the point about implicit bias—we make assumptions about groups without understanding people as individuals. We face a challenge in employee selection of creating processes that treat everyone fairly, but at the same time learn about them as individuals. It is a challenging needle to thread, but one that our businesses depend on us to do well. Using a combination of unbiased language and valid pre-employment tools can help us get there.

Or, if you would rather beat them than join them, you can open an art gallery that only focuses on artists ages 60 and older.

Putting Tech Diversity Puzzle Pieces Together

I am going to write about an issue with political ramifications while doing my best not to be political, so please accept these thoughts in that light.

One thread going through the proposed ban on legal immigration to the U.S. is the effect it will have on the tech industry.  Those companies are concerned that some of the talent they need from other countries will be unable either to enter the U.S. on an H-1B visa or to immigrate here.

Another issue that the tech industry has struggled with is hiring a diverse workforce in the U.S.  Much has been made of the lack of women and (domestic) minorities in the tech field.

However, there are more programs to teach tech skills to minorities and girls than you can shake a stick at.  A Google search of “minority tech training” garnered almost 18 million hits and “girls tech training” got 198 million. So, there is not a shortage of opportunities to obtain coding or other tech skills in the U.S. and these programs have created a pipeline of talent.  Likewise, there are specific tech incubator programs for women and minorities who want to start their own companies.

Why is all of this important?  Primarily because for a company to be innovative it needs to look at the world through a window and not a straw.  There are more tech users outside of the U.S. than inside, so to be successful internationally companies need foreign talent.  Shutting our borders while expecting our companies to compete overseas is a contradiction.  Women and minorities outnumber white males in the U.S., so the companies that harness those perspectives are likely to be the most successful ones.

So, what might be the barriers to connecting talent to opportunity?

  • Hiring like us. I’ve written before about the built-in bias we all have of wanting to be with others who have a similar background.  This is very prevalent when it comes to which schools the person attended, which sports s/he played, etc.  This can be alleviated by:
    1. Removing names from resumes.
    2. Removing schools and extra-curricular activities from resumes, unless you have data supporting their use (and the literature on the validity of training and experience measures is not encouraging).
    3. Recruiting where you have never recruited before. For instance, go to schools that are not currently represented in your workforce.
  • Candidates being unfamiliar with the hiring process. In our bubbles we think that every step of the hiring process is normal.  But, to use a tech example, if I come from an area without many tech companies, I might not be familiar with “whiteboarding” code problems as part of an interview.  Being transparent about the steps and letting people know what to expect removes a potential barrier between you and a qualified candidate.
  • Hiring processes that invite bias. Whether it is how you score your interviews or evaluate resumes, having an evaluation rubric will reduce bias.
  • Echo chambers on interview panels. Have diverse points of view (e.g., people from different departments) on your interview panel.  This is likely to encourage meaningful follow-up questions, even within a structured interview.
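
The rubric point above can be made concrete with a minimal Python sketch of a structured interview rubric. The competencies and behavioral anchors here are hypothetical placeholders, not a validated instrument; a real rubric should come from a job analysis.

```python
# Minimal sketch of a structured interview rubric (hypothetical
# competencies and anchors; derive your own from a job analysis).
RUBRIC = {
    "problem_solving": {
        1: "Could not outline an approach to the sample problem",
        3: "Outlined a workable approach with prompting",
        5: "Outlined a workable approach unprompted and noted trade-offs",
    },
    "communication": {
        1: "Answers were hard to follow",
        3: "Answers were clear with occasional clarification needed",
        5: "Answers were clear, organized, and concise",
    },
}

def score_candidate(ratings: dict) -> float:
    """Average the per-competency ratings.  Every panelist rating
    against the same anchors is what reduces idiosyncratic bias."""
    if set(ratings) != set(RUBRIC):
        raise ValueError("Rate every competency on the rubric")
    return sum(ratings.values()) / len(ratings)

print(score_candidate({"problem_solving": 5, "communication": 3}))  # 4.0
```

Because every interviewer scores the same competencies on the same anchors, differences in scores reflect the candidate rather than the rater.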

I do not think that lowering standards is an appropriate response to creating a more diverse and innovative workforce.  Building walls (metaphorical or otherwise) makes the problem worse.  Rather, companies need to build more bridges to find qualified candidates to bring different perspectives to their organizations.  Yeah, I get that it is more work, but a competitive marketplace demands it.

Is Seeing Really Believing?

Something I hear frequently from clients is, “I wish I had a day/week/month to see my candidates do the job.  Then I would make fewer hiring mistakes.”  It is, of course, an intriguing idea.  We test drive cars before we buy them.  Why not try out people before we hire them?

There is a long history of sampling work behavior in selection systems, whether it be using Assessment Centers to hire/promote managers and executives or having people make things for craft positions.  The accuracy of these types of assessments is good, falling somewhere between cognitive ability tests and interviews.  For candidates, the appeal is that they feel that they can really show what they can do rather than have their job related skills or personality inferred from a multiple choice test.

The issues in using a job tryout would include:

  • Paying the person for their time. There is an ethical, in some cases legal, issue in having a person work for free.  So, be prepared for your cost per hire to go up significantly.
  • Candidates would either need flexible schedules or plenty of PTO to participate in such a program.
  • Having meaningful work for the candidates to do. If you are going to narrow the gap between what the assessment and the job look like, then you need projects that impact processes, customers, etc. that you would be willing to have a short-term contractor do.  Or that you already have them doing.
  • Determining how to score the job tryout. Most organizations do a pretty poor job of measuring job performance over a full year, let alone a couple of days.  Developing scoring criteria would be key for making good decisions and avoiding bias.
  • Having someone who is not your employee perform work that could affect your customers or the safety of others will make your attorney break out in a cold sweat.  This alone should not convince you to skip job tryouts, but you will have to sell that person on the idea.

What got me thinking about job tryouts was this article.  I was impressed that the company had thought through the problems in their selection process and came up with a creative way to address them. They certainly handle the pay issue well and they currently have the growth and profitability to make the program worthwhile. What is left unsaid, but communicated through some derisive comments about multiple-choice tests, is that they feel that using tests would not fit their culture well.

My concern is that they are more worried about “fit” than skills.  This also translates into not having an objective way to evaluate how well a person did.  That leads me to believe that they will run into the problem of only hiring people who are just like them.

Lastly, they have a pretty high pass rate that “feels right.”  If I worked for them, I would be concerned that a lot of time and effort is being spent confirming what was seen in the less valid interview.  This is particularly true in a company where metrics are important for everything else.  Having people work for you for a few days and not having an objective way to measure how well they did is not going to lead to better candidates than a series of interviews.

Advances in selection tools will likely come from start-up companies who are not bound by tradition when it comes to hiring.  The tech sector presents a lot of opportunities to improve valid selection systems by their nature:  They are setup to disrupt and they gather a lot of data.  This presents a great platform for seeing what people do before you hire them to do it.

Learning to Manage

I cannot tell you how many times a client has told me some version of this story: they promote from within, but the new supervisors and/or managers cannot let go of wanting to do the technical work instead of managing the technical work.  It is not hard to understand.  People get into a field because of their interests or passion, rarely out of a desire to manage others.

An organization’s challenge is to either create technical career opportunities or help those who are technically proficient to successfully move into management.  But how?  Here are some tips:

  • Clearly identify the skill sets required of managers and note how different they are from those required of technical workers. One of the places I would start is with Delegation and Holding People Accountable.
  • Make the management skill sets part of your internal recruitment AND learning and development process.
  • Require internal candidates to demonstrate management skills before being promoted through an assessment center or other valid selection process.
  • Start people at an appropriate management level, regardless of how technically proficient they are.

While I’m not one to think that sports are necessarily a good analogy for the business world, I found this article to be an exception.  It describes how John Elway, a multiple Super Bowl winning quarterback with the Denver Broncos, learned management skills from the ground up.  He wasn’t made a Vice President of the team after he retired.  Rather, he honed his business skills in another field and then transferred them to a lower level of football.  It wasn’t until he demonstrated success there that he was given the big opportunity.  The time spent out of the spotlight clearly led to many learning experiences.

What makes the story powerful is the understanding that while there were some technical skills which would translate for him from the field to the front office, Elway (and his bosses) understood that others would have to be learned.  The organization was willing to let him take the time to learn how to manage and lead in a non-technical role.

The lessons for the rest of us are that:

  • Management skills are different from technical ones (e.g., the best sales person is not necessarily the best sales manager). We can use valid tools to identify which of our technical experts possess them.
  • Management development is a journey, as is the acquisition of any skill set.

The Challenge in Finding Good Performance Data

In validating tests, getting a hold of good individual performance data is key.  But, it is also one of the more difficult parts of the process to get right.

Intuitively, we all think we can judge performance well (sort of like we all think we are good interviewers).  But, we also know that supervisor ratings of performance can be, well, unreliable.  This is so much the case that there is a whole scientific literature about performance appraisals, even as there is currently a movement within the business community to get rid of them.

But, what about objectively measuring performance (for every new account opened you get $X)?  If the Wells Fargo imbroglio tells us anything, it’s that hard measures of performance that are incented can run amok.  Also, while they are objective, single objective measures (sales, piece work manufacturing, etc.) rarely reflect the entirety of performance.  Lastly, for jobs where people work interdependently it can be very difficult to determine exactly who did what well, even if you wanted to.

So, what’s one to do?

  • Establish multiple measures of performance. For instance, call centers can measure productivity (average call time) and quality (number of people who have to call back a second time).  Don’t rely on just one number.
  • Even when a final product is the result of a group effort, each individual is still responsible for some pieces of it. If you focus on key parts of the process, you can find those touch points which are indicative of individual performance.  Again, look for quality (was there any rework done?) and productivity (were deadlines met?) measures.
  • Objective performance measures do not have to have the same frequency as piece work or rely on one “ta-da” measure at the end. Think of meeting deadlines, whether additional resources were required to complete the work, etc.
  • Don’t get bogged down in whether or not a small percentage of people can game the system with objective measures. We seem OK with rampant errors in supervisory judgment, but then get all excited because 1 out of 100 people can make his productivity seem higher than it is.  If you dig into the data you are likely to be able to spot when this happens.
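
As an illustration of combining multiple measures, here is a rough Python sketch that standardizes two made-up call-center measures and averages them into a composite. The numbers are invented, and the equal weighting is an assumption; real weights should come from a job analysis.

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize so measures on different scales can be combined."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Illustrative (made-up) data for five employees: average call time
# in minutes and callback rate.  Lower is better on both.
call_time = [5.2, 4.1, 6.3, 4.8, 5.5]
callbacks = [0.10, 0.05, 0.20, 0.08, 0.12]

# Reverse-score so higher always means better performance, then
# average the two standardized measures for each employee.
prod = [-z for z in zscores(call_time)]
qual = [-z for z in zscores(callbacks)]
composite = [(p + q) / 2 for p, q in zip(prod, qual)]

best = composite.index(max(composite))
print(f"Best overall performer: employee {best}")
```

Standardizing first matters: without it, whichever measure happens to have the bigger raw numbers would dominate the composite.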

When I hear people say that you cannot measure individual performance well, I cringe.  Of course you can.  You just need to know where to look and focus on what is important.

What We Find at the Intersection of Management and Psychology

There’s a figurative store where the roads of Management and Psychology cross.  The shelves up front have the new and shiny theory or practice.  More likely than not, it will join the former new and shiny ideas in the dingy back of the store.  Some are just flat out wrong and others are just a repackaging of what’s already out there.  It’s kind of depressing in that the time would have been better spent working on something truly innovative.

A common theme of these books is denigrating the role of intelligence in employee selection.  Let’s be clear—there is a mountain of research that shows that for most jobs, the smarter people (using Western measures of intelligence for doing jobs in Western economies) will perform better. And these tests are consistently better predictors than non-cognitive (e.g., personality) assessments.  Ignoring these facts reduces the value that HR brings to an enterprise.

Cognitive ability tests are not perfect predictors, and even if they were, there is plenty of room left to find additional ones. This is the space that the shiny new theories try to fill.  In addition, the new characteristics cannot be traits, but rather a skill that can be developed (y’know, so the author can sell seminars, workbooks, etc.).  This, combined with the current wave of anti-intellectualism in the U.S., leads to the search for something new, but not necessarily innovative.

The questions are:

  • What value do these “new” methods bring (e.g., do they work) and
  • Are they really different from what we already have?

One of the shiniest new objects in the store is Grit.  The name taps into a very American cultural value.  If you dig deep and try hard, you will succeed.  Right there with pulling yourself up by the bootstraps.  While its proponents don’t claim that it’s brand new, they won’t concede that it is just shining up something we already have in Conscientiousness (which is one of the Big 5 personality traits).  Conscientiousness is a good and consistent predictor of job performance, but not as good as cognitive ability.  Measures of Grit are very highly correlated with those of Conscientiousness (Duckworth et al. [2007, 2009]), so it’s likely that we are not dealing with anything new.

Does this spiffed up version of an existing construct really work?  For that, we can go to the data.  And it says no.  The research currently shows that only one of Grit’s factors (perseverance) is at all predictive and it doesn’t predict beyond measures that we already have.
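
To see why a highly correlated “new” measure adds little, here is an illustrative Python simulation. The data are synthetic (not Duckworth’s), built on the assumption that the new scale is mostly the old trait plus noise; the point is that such a scale carries almost no new information.

```python
import random
from statistics import mean, stdev

def pearson(x, y):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    n = len(x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

random.seed(0)
# Simulate a "new" scale that is mostly the existing trait plus noise.
conscientiousness = [random.gauss(0, 1) for _ in range(1000)]
grit = [0.9 * c + 0.2 * random.gauss(0, 1) for c in conscientiousness]

r = pearson(conscientiousness, grit)
print(f"r = {r:.2f}")  # very high correlation -> little incremental information
```

If two scales correlate in the .90s, the second one cannot predict much that the first one does not already predict.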

I am all for innovation and industrial psychology is really in need of some.  But, chasing the new and shiny is not going to get us there.  It’ll just clog up bookshelves.

Blind Hiring

I wrote a few weeks ago about Intel’s drive to diversify its workforce. Regular readers know that I write about bias occasionally. It’s good that the topic makes it to the mainstream media when it’s not tied to a lawsuit.

The article talks about techniques to reduce bias. Some are old (truly blind auditions for musicians) and others are new, such as software that provides only the relevant hiring information without showing a person’s name, school attended, or other information that could bias the hiring manager. This puts a premium on validated tests, which I like. I’m sure some readers would argue that some of these tests are biased as well, but that’s a topic for another post.

This is all well and good, but as any logistics or customer service person will tell you, it’s the last mile that really matters. I can have as diverse of a candidate pool as I want, but if there is bias in the interviewing process, I will be rejecting qualified candidates for non-valid reasons. So, what’s a hiring manager to do?

First, give less weight to the interview and/or make it more valid. Why this barely-better-than-a-coin-flip technique makes or breaks a hiring decision while proven and validated techniques are shoved to the side is beyond me. OK—I get it. People want to feel in control and have buy-in to the hiring process. But can we at least be more rational about it? Interview scores should be combined with other data (with appropriate weighting), and the overall score should be used to make hiring decisions, not that one unreliable data point.
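
A minimal sketch of that weighting idea in Python. The weights and scores below are illustrative assumptions, not empirically derived; in practice the weights should come from a validation study.

```python
# Hypothetical weights: a validated cognitive test and a work sample
# carry more weight than the less reliable interview.
WEIGHTS = {"cognitive_test": 0.4, "work_sample": 0.4, "interview": 0.2}

def composite_score(scores: dict) -> float:
    """Combine assessment scores (all on a common 0-100 scale) into
    one overall score; the hiring decision rests on the composite,
    not on the interview alone."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

candidate = {"cognitive_test": 85, "work_sample": 78, "interview": 60}
print(round(composite_score(candidate), 1))  # 77.2
```

Note that a mediocre interview (60) only pulls this candidate’s overall score down a little, because the interview gets only 20% of the weight.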

Second, why not blind interviewing? Hear me out. How many jobs really require someone to think on their feet and provide oral answers to complex questions? Sure, there are some (sales, for instance), but not that many. Why not have candidates submit written answers to interview questions? The scoring would be more reliable (evaluating grammar/spelling could be optional for jobs where it’s not critical), and accents, gender, and skin color would be taken out of the equation. Think about it.

Lastly, a diverse workforce is a result of a valid and inclusive selection process. When companies approach it the other way (working backwards from hiring goals by demographic group), they miss the point. Diversity isn’t about filling buckets. It’s about providing equal opportunity every step of the way when hiring.

For more information on valid pre-employment testing hiring practices, contact Warren Bobrow.

A Crazy Way To Test Candidates

You think you have it bad when hiring. Imagine if:

  • All of your entry level job candidates were known to your entire industry and customers.
  • You and all of your competitors had access to exactly the same background, pre-employment, and past performance data, outside of your one chance to interview this person.
  • Oh, and at least one of the pre-employment tests that are given doesn’t correlate with the performance of your most critical employees.
  • The cost of acquiring the labor is huge and the compensation levels are fixed.
  • If you make a mistake, it takes a year to correct.
  • It may be 3 years before you know if you made a good hire.
  • The order of when you and your competitors can make a job offer is pre-determined, though for a high price you can jump the line.
  • And this all takes place on national television in front of your customers.

Welcome to the drafting of professional sports players in the United States. And this time of the year, the focus is on the National Football League (NFL).

I bring this up because the NFL brings nearly all of the prospective players to a group workout called a combine, which leads to the drafting of players in April. In the combine, the players are prodded and poked by medical staffs, given psychological tests, and are put through a variety of physical skill exercises. Teams also have a chance to interview players individually. The combine is organized so that the teams can see what the roughly 300 players can do without flying them all over the country. For players’ perspectives on this and the drafting process, click here and here.

 

The oddest thing about the combine is that teams take single measurements of core skills (speed, jumping ability, etc.) when they have access to recordings of every single play in which the player has participated (real performance). Granted, different players go against different levels of competition, but you would think that about 1,000 samples of a person’s performance would be a better indicator than how fast he covers 40 yards (usually a touch under 5 seconds, even for the big guys). The interviews can be all over the map, with clubs asking about drinking behavior (the players are college students) and the ability to break down plays. And then many players get picked by teams that don’t interview them at all.

From a validation point of view, the performance data on players are actually readily available now. Much like call centers, the NFL records some very detailed individual statistics and not just team wins and losses to evaluate players. Whether the number of times a defensive lineman can bench press 225 lbs correlates with tackles for loss is not known (or at least published), but you get the idea.

Much is made about the pressure that the players are under to perform well at the combine. This is probably more so for those from smaller schools or with whom the teams are less familiar. But, the pressure is also really on the talent scouts (sports’ version of recruiters). They only get to pick 7 players in the draft. Undrafted players can be signed by any team and history shows that they have a surprisingly high success rate (see below).

Because of the amount of data available on players, the draft process is reasonably efficient.  Using metrics such as the percentage of players who make the starting lineup by draft position, turnover (which is mostly involuntary), and high performance (measured by being voted onto the all-star team), higher drafted players do better than lower drafted ones.  Of course, the higher a player is taken in the draft, the more he’s paid for the first part of his career, so there is some financial bias toward starting higher drafted players.  Interestingly, undrafted players perform at the same level on these metrics as third round picks.  Perhaps there’s something to having a chip on your shoulder.

What we can learn from the NFL is that when there’s a lot of data available, you can make better selection decisions, even when your competitors have the same data. Second, there’s still plenty of good (though not the best) talent available that’s overlooked by the masses. Finding that inefficiency in the selection process and addressing it can lead to a significant competitive advantage. A good validation process can help you do that.


Curious About Openness

One of my favorite personality scales to administer is Openness to New Experiences. It is one of the “Big 5” personality constructs and is supported by a great deal of research. People who score high on this scale seek out new experiences and engage in self-examination. They draw connections between seemingly unconnected ideas. People who score low are more comfortable with things that they find familiar.

I bring this up this week because I have heard from a few clients who want to hire people who are “curious.” Also, I came across this interview where the CEO was talking about looking for curious people. Note that he’s dead wrong in thinking that Openness is not related to intelligence. Why is it that people go out of their way to denigrate cognitive ability testing when it is THE most accurate predictor for most jobs? OK, that’s for another post on another day.

Part of this trend may come from gaming. Being successful in gaming requires searching anywhere available for the clue, weapon, or whatever allows you to get to the next level. It is also a welcoming environment for failure. But those who show curiosity, problem solving ability (at least learning the logic of the programmer), and the desire to keep learning will be successful.

Measuring curiosity as an outcome is an entirely different story. However, it should include spending time on research, a willingness to fail, and using unique sources of information when developing a solution.

I am intrigued (curious?) by this interest in Openness/Curiosity and plan to follow up on it. Is Openness/Curiosity important to your firm or practice? If so, what are you doing to measure it in your candidates?

Should HR Use Social Media Blinders?

Every couple of weeks I come across some sort of article or opinion piece about whether or not HR departments should use social media sites when recruiting or selecting candidates. The articles usually fall into one of two categories:

  • Of course you should, dummy! Any data is good data. How can you pass this up?
  • Using social media data is a one-way ticket to court and is immoral! Every bias companies have is out there and you’ll be discriminating against people, whether you want to or not.

The latest one that caught my eye was definitely in the second category. Not surprisingly, the author uncovered research data that showed that certain information found on social media would bias employers. Sort of like everything we know about how information about race, age, gender, religion, etc in resumes and interviews leads to bias. No surprises here.

People who think all social media information should be ignored seem to have this idea that HR departments spend a lot of time snooping candidates’ social media. Maybe some do, even if to check work history on LinkedIn, but that attitude strikes me as paranoid.

We do know that social media activity does correlate with personality traits which are predictive of job performance, so there is likely some valid data out there. My biggest issue with using social media to recruit or make selections is the self-selection bias. Not everyone uses social media or uses it in the same way. So, while there might be predictive information within a sample of candidates (those active on social media), it is less reliable for the population of candidates (everyone you may be interested in, whether or not they are active on social media).

As with any selection tool, you’ll want to level the playing field. If you want to read about candidates’ business history, let them know that you’ll be taking a look at their profiles, connections, etc. on LinkedIn (where they’ll have their “professional” face on). If I’m hiring a programmer, you can bet that I would be interested in the open source code contributions they have made.

We’re at the tip of the iceberg as to what valid information can be gleaned from social media. By the time we find out, the platforms we use now are likely to be obsolete (what, we can soon use more than 140 characters on Twitter?). But, the “rules” for using social media information should be the same as any other selection tool:

  • Is what you are looking for job related?
  • Is the information gathered reliable, or just one person’s opinion about what it means?
  • Would the information potentially have adverse impact against protected groups?
  • Is this really the best way to learn whether the person possesses that knowledge, skill, ability, or personal characteristic?
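
For the adverse-impact question in the list above, a common first screen is the EEOC four-fifths rule of thumb. Here is a minimal Python sketch; the applicant and hire counts are made up for illustration.

```python
def adverse_impact_ratio(hired_a, applied_a, hired_b, applied_b):
    """Ratio of the lower selection rate to the higher one.
    Under the EEOC four-fifths rule of thumb, a ratio below 0.8
    suggests possible adverse impact worth investigating."""
    rate_a = hired_a / applied_a
    rate_b = hired_b / applied_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Illustrative numbers: group A, 30 of 100 hired; group B, 18 of 100.
ratio = adverse_impact_ratio(30, 100, 18, 100)
print(f"{ratio:.2f}")  # 0.60 -> below the 0.8 threshold
```

A ratio below 0.8 is not proof of discrimination, but it is the standard signal that a selection step (including one based on social media information) deserves a closer look.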

What, if anything, are you doing to evaluate candidates online?

