Should Employers Embrace the Push for GEDs?

Many people in the U.S. never earn a high school diploma, which creates significant barriers to employment and to future opportunities in college. As a result, in 2013, over 500,000 people took and passed a high school equivalency exam (GED), a 20% increase over 2012. The Bureau of Labor Statistics treats a diploma and a GED as being the same. But, should employers?

The idea behind the GED is that some people are unable to complete high school for a variety of reasons, and by passing the test they show that they have acquired the same amount of knowledge. That may be true, but there is little high school knowledge, except perhaps some math, that employers find valuable. What is valuable is the skill of being able to navigate something for four years. But, you don’t have to take my word for it. This report outlines in detail that the career and economic trajectories of those with a GED more closely resemble those of high school dropouts without a GED than those of high school graduates. From a public policy perspective, this leads me to believe that the proponents of the test are selling snake oil.

Employers should strongly consider this in their applications. Why? Because there may be economic consequences of treating a GED and a high school diploma the same way. In working with a client to validate ways to help them reduce turnover, we looked at retention rates by education level for entry-level positions. What we found was that after 12 months, the retention rate of those with a high school diploma compared to those with a GED was 80% vs. 65%. After 24 months, the retention rates were 68% vs. 50%. At a hiring rate of about 1,000 per year and a cost per hire a bit more than $5k per person, these are significant differences. After checking with some colleagues, I learned that these results are not unusual.
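
As a rough sketch of why those gaps matter, here is the arithmetic using the figures above. The flat $5k cost per hire and the assumption that every departure is replaced within the period are my simplifications, not the client's actual model:

```python
# Back-of-the-envelope turnover cost gap using the figures in the text.
# Assumption: every departure within the period is replaced at a flat cost.

HIRES_PER_YEAR = 1000
COST_PER_HIRE = 5000  # "a bit more than $5k" in the text; treated as $5k flat

def replacement_cost(retention_rate, hires=HIRES_PER_YEAR, cost=COST_PER_HIRE):
    """Cost of replacing everyone who leaves within the period."""
    departures = round(hires * (1 - retention_rate))
    return departures * cost

# 12-month retention: 80% (diploma) vs. 65% (GED)
gap = replacement_cost(0.65) - replacement_cost(0.80)
print(gap)  # 750000: an extra $750k per 1,000 GED hires in year one
```

Even under these simplified assumptions, the 12-month difference alone runs to three-quarters of a million dollars per thousand hires.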

The overall picture shows that employers should not be treating those with GEDs like those with high school diplomas. Rather, you should validate the impact of education level against turnover or performance and evaluate it accordingly in your application, biodata, or training and experience scoring process.

What Do Grades Tell Us When Hiring?

Welcome to 2018! This first link actually highlights a look at valid personality testing on a widely read website. This makes me think that the year is off to a good start in the field.

Along those same lines of predicting behavior, a line of thought has always been that school grades are indicative of future success. The logic behind this makes sense. If a student applies him/herself and does well in school, then it is likely that he or she will do the same at work. Critics will say that grades measure something very specific that does not really translate to work and there are biases in how grades are given (which is why universities use standardized tests).

As always, what makes a good predictor really depends on the outcomes you are looking for. If your goal is to hire people who are good at following rules and doing lots of things pretty well, then this article suggests that school grades should be part of your evaluation process. But, if you want to hire very creative and novel thinkers, then GPA probably is not your best answer.

What also grabbed me about the article was the definition of success. The research article cited indicated that nearly all of those who did very well in high school were doing well at work and leading good lives. But, for the authors, this apparently is not enough. Why? Because none of them have “impressed the world,” whatever that means. And because there are lots of millionaires with relatively low GPAs (here is a suggestion: how about controlling for parents’ wealth before making that calculation?).

From an employment perspective, we need to be clear about what valuable performance looks like when validating any part of the selection process. If your goal is to select people into positions that require developing unique solutions, then GPA may not be a useful predictor. However, if you expect people to follow processes and execute procedures, then GPA is likely to be a useful tool, one which should be used with other valid predictors.

And, if you are looking to hire people who are going to “impress the world,” good luck to you.

Inviting Introverts to Lead

Whenever I teach about leadership the participants and I talk about the value of charisma. Not surprisingly, most of those in the workshop feel that the most effective leaders are these larger-than-life figures. That is, until we start talking about ones that are not (and often one of them is the CEO of their company). So, what gives?

This article delves into the issue. Note that the author sometimes confuses behavior (which can be changed) with personality (which is VERY stable, despite her claim and a link that is not backed by any research). The real issue is: what can introverts do to be effective leaders?

For many, what it comes down to is the expectations of the situation. If I think any task is going to be painful, of course I am going to avoid it. This is how introverts feel about an assignment that involves a lot of group interaction.

This study looked at potential barriers to introverts being effective leaders. What they found was that negative thinking about assuming the role inhibited performance (as measured by emergent leadership). However, and this is important, positive thinking did not lead to more emergent leadership. So, in working with high potential introverts, this data (and it is only one study) suggests that removing undesirable thoughts about the role (e.g., your fears are not accurate, you will not be a failure, etc.) will lead to more leadership behaviors than selling the role (e.g., you will be fabulous, there is no doubt that you will be successful, etc.).

This is important because it shows that those who lack the extroversion trait associated with charisma may still be effective leaders. This increases your pool of leadership potential in your company. It also provides a road map for encouraging introverts, who are otherwise qualified, to take on leadership assignments in a way that allows them to be successful.

From a selection perspective, understanding this nuance would be valuable in determining who you choose to be leaders. Rather than assessing introversion/extroversion, you can look at a person’s attitudes toward leading groups as a potentially more valuable predictor.

Eliminating Subtle Age Bias

Since age bias is something that could affect nearly all HR professionals, I am surprised that it does not get more attention. But, with the average age of employees in the U.S. going up (see here) and companies likely to recruit more older workers due to the unemployment rate being near recent lows, we are likely to see more attention paid to it, particularly in the technology field.

As with most bias, it can be introduced in a subtle way. For example, the term “digital native” describes those born roughly after 1990 who have had current technology (internet, smartphones, etc.) pretty much their whole lives. A quick Indeed.com search shows many jobs where “digital native” is part of the description. Put another way, those older than 35ish should think twice before applying. Similarly, there is a whole literature (this article is an example) on how gender-loaded terms in job postings affect who will respond to them.

Now, I get that when you are advertising for tech jobs you are looking for employees who are completely comfortable in a digital environment and comfortable communicating with others who are. But, those are behaviors that can be assessed with valid pre-employment tests without having to make assumptions about a person’s age.

And that is really the point about implicit bias—we make assumptions about groups without understanding people as individuals. We face a challenge in employee selection of creating processes that treat everyone fairly, but at the same time learn about them as individuals. It is a challenging needle to thread, but one that our businesses depend on us to do well. Using a combination of unbiased language and valid pre-employment tools can help us get there.

Or, if you would rather beat them than join them, you can open an art gallery that only focuses on artists ages 60 and older.

While I am Likely to be Wrong, Allow me to Continue

Interviews are worse predictors of job success than you think.  Even if you do not think very highly of them as you read this, their validity is still lower than that.  Yet, there is an insistence that they are better than they are, and no company I know of is willing to give them up.  Why is this?

Of course, it’s because doing them is ingrained in our corporate cultures.  Thomas Edison (supposedly) conducted the first one.  However, note in the example that what he used was a test and not an interview, so even with the ridiculous questions cited, it could hardly have been less valid than an interview.  Obviously, if a guy as smart as Thomas Edison was doing it, it must be right.  Then again, he was not trained in understanding human behavior.

The problem with interviews (besides just these) is that they are fraught with noise.  As I’ve written about before, interviewers (which is all of us) are loaded with biases that skew a good deal of the information we get from the person being interviewed.  Of course, the person being interviewed is likely to have prepared for the questions.  In fact, some millennial job seekers I know tell me that they are rarely asked a question they haven’t seen online, and when they do get one, they post it immediately.  Because of all this pre-work, the interviewer is getting canned answers that may not reflect the person.  Of course, this may be a fine attribute if you are hiring someone who is not supposed to give his/her opinion on anything.  However, it does hurt the validity of even the best structured interviews.  As an aside, if you MUST interview, please make it highly structured and use it as a lever to find out about the person’s job-related skills.

So, what is a company to do?  Go on a blind date with candidates?

I would suggest making your hiring decision BEFORE you interview.  Let the interview be the last bit of data that might break ties.  For instance, be sure that someone you are hiring for a customer facing position can actually make eye contact and put a few sentences together.  Or, use it to double-check the person’s availability for your work hours, give them a tour/realistic job preview, etc.  This allows HR or the hiring manager the last look without adding unpredictive noise to the process.  And think about how much time you will save!

It is not your fault that interviews do not predict performance.  The question is what are you going to do to prevent them from messing up your hiring process?

Is Seeing Really Believing?

Something I hear frequently from clients is, “I wish I had a day/week/month to see my candidates do the job.  Then I would make fewer hiring mistakes.”  It is, of course, an intriguing idea.  We test drive cars before we buy them.  Why not try out people before we hire them?

There is a long history of sampling work behavior in selection systems, whether it be using Assessment Centers to hire/promote managers and executives or having people make things for craft positions.  The accuracy of these types of assessments is good, falling somewhere between cognitive ability tests and interviews.  For candidates, the appeal is that they feel that they can really show what they can do rather than have their job related skills or personality inferred from a multiple choice test.

The issues in using a job tryout would include:

  • Paying the person for their time. There is an ethical, in some cases legal, issue in having a person work for free.  So, be prepared for your cost per hire to go up significantly.
  • Candidates would either need flexible schedules or plenty of PTO to participate in such a program.
  • Having meaningful work for the candidates to do. If you are going to narrow the gap between what the assessment and the job look like, then you would have to have projects that impact processes, customers, etc., that you would be willing to have a short-term contractor do.  Or, that you already have them doing.
  • Determining how to score the job tryout. Most organizations do a pretty poor job of measuring job performance over a full year, let alone a couple of days.  Developing scoring criteria would be key for making good decisions and avoiding bias.
  • Having someone who is not your employee perform work that could affect your customers or the safety of others will make your attorney break out in a cold sweat.  This by itself should not convince you to skip job tryouts, but you will have to sell that person on the idea.

What got me thinking about job tryouts was this article.  I was impressed that the company had thought through the problems in their selection process and came up with a creative way to address them. They certainly handle the pay issue well and they currently have the growth and profitability to make the program worthwhile. What is left unsaid, but communicated through some derisive comments about multiple-choice tests, is that they feel that using tests would not fit their culture well.

My concern is that they seem more worried about “fit” than skills.  This also translates into not having an objective way to evaluate how well a person did.  This leads me to believe that they would run into the problem of only hiring people who are just like them.

Lastly, they have a pretty high pass rate that “feels right.”  If I worked for them, I would be concerned that a lot of time and effort is being spent confirming what was seen in the less valid interview.  This is particularly true in a company where metrics are important for everything else.  Having people work for you for a few days and not having an objective way to measure how well they did is not going to lead to better candidates than a series of interviews.

Advances in selection tools will likely come from start-up companies that are not bound by tradition when it comes to hiring.  The tech sector presents a lot of opportunities to improve valid selection systems by its nature:  Tech companies are set up to disrupt and they gather a lot of data.  This presents a great platform for seeing what people do before you hire them to do it.

What Implicit Bias Looks Like

The idea of implicit bias has been making its way into the business vernacular.  It involves the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner.  As you probably gathered from the definition, implicit bias is something we all have.  These biases are little mental shortcuts that can lead to discriminatory behavior.

Examples of implicit bias are found throughout the hiring process, including recruiting, interviews, and performance appraisals.  I think that you will find this interview very helpful in understanding how these biases creep into our decision making. 

It really breaks down the abstract into actual behaviors and their impacts.

At this point of the blog is where I normally come up with a prescription of what to do.  The only problem is that there are no good empirical studies showing how to reduce implicit bias.  There are some lab studies with college students which support some short-term effectiveness, but some police departments swear that they are a waste of time.  So, the jury is still out.  But, there are some things you can do to reduce the opportunity for bias:

  • You can (mostly) decode gender out of job postings.
  • Take names off of applications before they are sent for review. The law requires that race, gender, and age information be optional on applications to help avoid discrimination.  For the same reason, you should redact names on applications and resumes before they are evaluated (if they are not already being machine scored).
  • If you are using pre-employment tests that do not have adverse impact, weight them more than your interviews, which are likely loaded with bias. If you insist on putting final decisions in the hands of interviewers, use a very structured process (pre-written questions, detailed scoring rubrics, etc.).
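
The second bullet, redacting names before review, is straightforward to automate. Here is a minimal sketch; the field names are hypothetical, not from any particular applicant tracking system:

```python
# Hypothetical sketch of the name-redaction step: strip identifying fields
# from an application record before it reaches reviewers.

IDENTIFYING_FIELDS = {"name", "email", "photo_url"}

def redact_application(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items()
            if k not in IDENTIFYING_FIELDS}

app = {"name": "Jordan Smith", "email": "js@example.com",
       "years_experience": 6}
blind = redact_application(app)
# blind contains only {"years_experience": 6}; the original record is kept
# intact so HR can re-identify the candidate after the evaluation step.
```

The point of returning a copy rather than deleting fields in place is that someone still has to contact the candidate after the blind review is done.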

All humans have implicit biases—we want to be surrounded by our in-group.  A reduction in these biases, or at least fewer opportunities to express them, will likely lead you to a more diverse, and better performing, team.

The Challenge in Finding Good Performance Data

In validating tests, getting a hold of good individual performance data is key.  But, it is also one of the more difficult parts of the process to get right.

Intuitively, we all think we can judge performance well (sort of like we all think we are good interviewers).  But, we also know that supervisor ratings of performance can be, well, unreliable.  This is so much the case that there is a whole scientific literature about performance appraisals, even as there is currently a movement within the business community to get rid of them.

But, what about objectively measuring performance (for every new account opened you get $X)?  If the Wells Fargo imbroglio tells us anything, it’s that hard measures of performance that are incented can run amok.  Also, while they are objective, single objective measures (sales, piece work manufacturing, etc.) rarely reflect the entirety of performance.  Lastly, for jobs where people work interdependently it can be very difficult to determine exactly who did what well, even if you wanted to.

So, what’s one to do?

  • Establish multiple measures of performance. For instance, call centers can measure productivity (average call time) and quality (number of people who have to call back a second time).  Don’t rely on just one number.
  • Even when a final product is the result of a group effort, each individual is still responsible for some pieces of it. If you focus on key parts of the process, you can find those touch points which are indicative of individual performance.  Again, look for quality (was there any rework done?) and productivity (were deadlines met?) measures.
  • Objective performance measures do not have to have the same frequency as piece work or rely on one “ta-da” measure at the end. Think of meeting deadlines, whether additional resources were required to complete the work, etc.
  • Don’t get bogged down in whether or not a small percentage of people can game the system with objective measures. We seem OK with rampant errors in supervisory judgment, but then get all excited because 1 out of 100 people can make his productivity seem higher than it is.  If you dig into the data you are likely to be able to spot when this happens.
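
To make the first bullet concrete, here is an illustrative composite of the two call-center measures mentioned above: productivity (average call time) and quality (the rate of people who have to call back). The weights, the target call time, and the scaling are my assumptions for the sketch, not recommendations:

```python
# Illustrative composite of two call-center performance measures.
# Weights and target values below are assumptions for the sketch.

def composite_score(avg_call_minutes, callback_rate,
                    target_minutes=6.0, weight_quality=0.5):
    """Combine productivity and quality into a single 0-1 score."""
    # Productivity: 1.0 at or under the target time, declining as calls run long.
    productivity = min(1.0, target_minutes / avg_call_minutes)
    # Quality: 1.0 when no one has to call back a second time.
    quality = 1.0 - callback_rate
    return weight_quality * quality + (1 - weight_quality) * productivity

fast_but_sloppy = composite_score(avg_call_minutes=4.0, callback_rate=0.30)
slower_but_clean = composite_score(avg_call_minutes=7.0, callback_rate=0.05)
# The slower agent with fewer callbacks scores higher than the fast one,
# which is exactly what a single productivity number would have hidden.
```

This is the "don't rely on just one number" point in miniature: ranked on call time alone, the rework-generating agent wins.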

When I hear people say that you cannot measure individual performance well, I cringe.  Of course you can.  You just need to know where to look and focus on what is important.


How Far Will You Go to Emphasize a Culture of Customer Service?

I am a regular listener to the podcast of This American Life.  Recently, they had a segment on customer satisfaction and L.L. Bean’s extreme version of interpreting it (“Our products are guaranteed to give 100% satisfaction in every way. Return anything purchased from us at any time if it proves otherwise. We do not want you to have anything from L.L.Bean that is not completely satisfactory.”).  You can read the transcript here (go down to “Act Two. Bean Counter”) or download the podcast (I recommend the latter).  Some of the stories are stunning.

The first issue is who gets to define customer satisfaction?  The company?  The customer?  Of course, only the customer knows if s/he is truly satisfied with a product or service and the satisfaction is generally derived from whether or not they got exactly what they wanted out of the transaction. So, the question really is what will a company do when the customer is not satisfied?

In non-competitive marketplaces (think government services or other monopolies), don’t hold your breath.  To them, customer satisfaction is a cost, not an investment.  There is no incentive for them to make you happy (though, one could argue that working in a customer service oriented environment would lead to a better culture, which leads to better word of mouth when it comes to recruiting).  The same goes for pseudo-competitive marketplaces (like cable/satellite TV) where you have some limited choices but making a switch is really a pain in the ass.

However, in competitive marketplaces, like apparel, delivering superior customer service is a differentiator.  All of us can think of how different retailers interpret customer satisfaction and the investment they put into it.

What I found most interesting during the podcast was the training that went into ensuring that LL Bean’s return policy was carried out.  It’s one thing to say, “OK, we’ll take back anything that you bring back for store credit” and another to do it in a way that provides a satisfying experience for the customer.  Note that they trained the people who work in returns to execute both the letter and the spirit of the policy.  LL Bean also selects people for that department who have the personality and skills to carry it out.

What this tells us is that customer service does not just happen.  Rather, it is a combination of strategy, culture, and the right people to treat customers the way the company wants to treat them.  What is your plan to align all of these?

What We Find at the Intersection of Management and Psychology

There’s a figurative store where the roads of Management and Psychology cross.  The shelves up front have the new and shiny theory or practice.  More likely than not, it will join the former new and shiny ideas in the dingy back of the store.  Some are just flat out wrong and others are just a repackaging of what’s already out there.  It’s kind of depressing in that the time would have been better spent working on something truly innovative.

A common theme of these books is denigrating the role of intelligence in employee selection.  Let’s be clear—there is a mountain of research that shows that for most jobs, the smarter people (using Western measures of intelligence for doing jobs in Western economies) will perform better. And these tests are consistently better predictors than non-cognitive (e.g., personality) assessments.  Ignoring these facts reduces the value that HR brings to an enterprise.

Cognitive ability tests are not perfect predictors, and even if they were, there is plenty of room left to find additional ones.  This is the space that the shiny new theories try to fill.  In addition, the new characteristic cannot be a trait, but rather must be a skill that can be developed (y’know, so the author can sell seminars, workbooks, etc.).  This, combined with the current wave of anti-intellectualism in the U.S., leads to the search for something new, but not necessarily innovative.

The questions are:

  • What value do these “new” methods bring (e.g., do they work) and
  • Are they really different than what we already have?

One of the shiniest new objects in the store is Grit.  The name taps into a very American cultural value.  If you dig deep and try hard, you will succeed.  Right there with pulling yourself up by the bootstraps.  While its proponents don’t claim that it’s brand new, they won’t concede that it is just shining up something we already have in Conscientiousness (which is one of the Big 5 personality traits).  Conscientiousness is a good and consistent predictor of job performance, but not as good as cognitive ability.  Measures of Grit are very highly correlated with those of Conscientiousness (Duckworth et al. [2007, 2009]), so it’s likely that we are not dealing with anything new.

Does this spiffed up version of an existing construct really work?  For that, we can go to the data.  And it says no.  The research currently shows that only one of Grit’s factors (perseverance) is at all predictive and it doesn’t predict beyond measures that we already have.

I am all for innovation and industrial psychology is really in need of some.  But, chasing the new and shiny is not going to get us there.  It’ll just clog up bookshelves.

