Let Your Exit Interviews Leave the Building

One of the most intuitively appealing HR concepts is the exit interview.  If we only knew what was going through the minds of those who chose to leave our company, we could fix our turnover problems.  Yet there is more than enough research showing that exit interviews are not useful in predicting or reducing turnover.  Even so, just the other day I got a newsletter e-mail from a reputable management publication with suggestions on how to make my exit interviews better.

Exit interviews are not effective for several reasons, including:

  1. Low response rates.  There really is no upside for the departing employee to participate, so why go through the stress and confrontation?  As a result, whatever data you do get is unlikely to be representative of the people who leave.

  2. Lack of candor.  Most people who are willing to participate are also not willing to burn bridges.  So their responses are going to be more about them than about your organization.

  3. What do you think leavers are going to tell you that you should not already know?  If a particular manager has higher turnover than the organization at large, it is probably because that manager treats people poorly.  You do not need an exit interview to figure that out.

It is the last point that deserves a bit more attention.  The biggest problem with the concept of exit interviews is that they are reactive, closing the barn door after the horses have left, so to speak.  To keep turnover down, organizations should address the things that lead to turnover before they become significant issues.  Doing so requires a commitment to gathering data and acting upon it.  Two steps you can take include:

  1. Using turnover as a performance measure when validating pre-employment tests.  You can lower churn in many entry-level jobs by understanding which people are more likely to stay in the position and using that information when screening candidates.
  2. If you think you are going to get good information during an exit interview from people who are no longer engaged with your organization, why not get it from those who still are engaged and more likely to be candid?  When you gather employee engagement data through short surveys over time, you can determine the leading indicators of turnover.  It takes commitment to view surveys as a process rather than as events, but doing so can provide a high level of insight into employee turnover.
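The second step above can be sketched with a simple lagged correlation: do a team's engagement scores this quarter predict its turnover next quarter?  Here is a minimal illustration in Python; the team scores, turnover rates, and one-quarter lag are all invented for the example, not real survey data.

```python
# Sketch: test whether engagement is a leading indicator of turnover
# by correlating team engagement with the NEXT quarter's turnover.
# All numbers below are illustrative, not real survey results.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Quarterly team-average engagement (1-5 scale) for six teams...
engagement = [4.2, 3.1, 3.8, 2.6, 4.5, 3.4]
# ...and each team's voluntary turnover rate in the FOLLOWING quarter.
next_q_turnover = [0.02, 0.09, 0.04, 0.12, 0.01, 0.07]

r = pearson(engagement, next_q_turnover)
print(f"lagged correlation: {r:.2f}")
```

A strongly negative value here would suggest low engagement precedes departures, which is exactly the kind of leading indicator an exit interview can never give you.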

There will also be macro-economic factors that drive voluntary turnover that organizations may not be able to impact.  But, as the light at the end of the COVID tunnel becomes brighter and companies return to new-normal staffing levels, there is a fresh opportunity to be proactive in understanding turnover.  This is a better approach than relying on the failed techniques of the past.

Lost Communication and the Death of Process

The business world is both transient and stable.  People and priorities change, but as long as the organization exists, its processes continue on.  When that information gets disseminated, it becomes institutional knowledge.  We often connect this with an individual ("When Maria leaves we are in trouble because she has so much institutional knowledge."), but keeping this information available improves the understanding of processes throughout the enterprise.

But, how we speak about things changes over time.  For instance, organizational commitment became employee satisfaction, which became employee engagement.  There are subtle differences between them, but what we have always been talking about is, “How much do people want to be here and contribute?”  Yet, if we did not have records about how we understood these concepts we would have some difficulty understanding how and why we conduct (or don’t conduct) employee surveys.

The challenge is keeping track of the amount of information that organizations generate and keeping it in up-to-date language that makes sense as the business changes.  For instance, in my practice it normally takes quite a bit of communication and presentation to keep validated testing programs going when there is a change in HR leadership.

Perhaps a more interesting example is outlined in this article, which describes how a group of volunteers is transcribing handwritten letters to reconstruct the English language as it was used in Shakespeare's time.  You may say that 400 years is much longer than any business organization has been around, but think about the rapid changes in computer languages and how important understanding the "old" ones is in maintaining or updating systems.  The archaic can be useful.

What potentially gets lost over time and changes in language are the research and the reasons for doing things.  We lose the ability to answer the most basic of business questions: "Why are we doing this?  Why are we doing it this way?"  Being able to answer those questions allows for adapting processes when the environment changes and prevents reinventing the wheel.



Big Data, Evidence Based Decision Making, and the “Golden Motion”

The term “Big Data” really means nothing more than doing a deep analysis of the information you have. It’s “big” because we have more data than we used to (“bigger data” would actually be the better term). The analytics really have more to do with being able to answer questions more reliably due to the larger number of data points. Evidence-based decision making asks, “How can previously analyzed data help us make the best choice here?”

In this article, the author does a very good job of describing “big data” and how it can practically be used to solve business problems. Boiled down to its essence, he uses the information to find the one nugget (or more) of information that leads to improved business performance. When referring to something done by a user, he calls this the “golden motion”: what the customer does that leads to higher sales. Using that information to make smart business choices is evidence-based decision making.

We can apply the “golden motion” and evidence-based decision making to HR as well. Here are a few examples:

1)    Signing up for benefits. Presuming that you have a self-service model for employees signing up for health care, 401(k) contributions, etc., have you looked at your web portal stats to see which behaviors directly lead to more signups? Is it visiting a particular FAQ? Or a page which contains a specific graph?

2)    Are employees engaged? While engagement surveys are normally anonymous (and they should be), you can still look at the behaviors of groups which tend to be more engaged than others. Do they have more or fewer meetings than other groups? Do they do more organized “fun” things together? Do they ask for more or less input from executives? What behaviors do their managers exhibit?

3)    What are the best predictors of job performance and/or turnover? Have you analyzed your pre-employment and current employment data points to see which ones are truly indicative of superstars or those who left too soon? What are some of the things that top performers do (besides doing their jobs well)?
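The third question above is a straightforward data exercise: rank your candidate data points by how strongly they relate to later performance. Here is a minimal sketch; the predictor names and all of the numbers are invented for illustration.

```python
# Sketch: rank hypothetical pre-employment data points by their
# correlation with later job performance. All data is made up.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Performance ratings for eight hires, one year in.
performance = [3.2, 4.1, 2.8, 4.5, 3.6, 2.9, 4.0, 3.3]

# Candidate predictors collected at hiring time (hypothetical fields).
predictors = {
    "cognitive_test":   [24, 31, 20, 33, 27, 22, 30, 25],
    "interview_score":  [3.5, 3.9, 3.6, 4.2, 3.4, 3.8, 3.7, 3.5],
    "tenure_prior_job": [1.0, 4.0, 0.5, 6.0, 2.0, 1.5, 3.0, 2.5],
}

# Sort predictors from strongest to weakest relationship.
ranked = sorted(
    ((pearson(vals, performance), name) for name, vals in predictors.items()),
    reverse=True,
)
for r, name in ranked:
    print(f"{name}: r = {r:.2f}")
```

With real data you would of course want far more than eight cases, and you would cross-validate before acting on the ranking, but the basic move is this simple.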

What “big data” does is get us to think about the hidden side of things. And you probably have the data.

For instance, employee engagement underlies productivity and profitability. According to Gallup’s employee data from 2010-2011, “organizations with an average of 9.3 engaged employees for every disengaged employee experienced 147% higher earnings per share than their competition.” Understanding the drivers of engagement can help you increase it to the benefit of your bottom line. But, you need to dig into the data to find these nuggets.

But are you asking the right questions to find the “golden motion”?

For more information on using employee engagement and pre-employment testing data to improve profitability, please contact Warren at 310 670-4175 or  [email protected]

Do You Have the Culture You Want?

I know that it’s not in fashion to talk about organizational culture anymore as the concept of engagement has taken over.  But there is something still to be said about how people think they should act and the social norms that influence them.  These influences exist whether you talk about them at the country, state, or organizational level.

A good example in the US is a recent story with the Internal Revenue Service (IRS).  Employees in one office admitted to putting more scrutiny into tax-exemption applications from political opponents of the president than other groups.  Best that we can tell now, there wasn’t an effort coming from the White House to do this.  But, at some point someone thought that the new approach was a good idea and went ahead with it.  Maybe there’s data which shows that those types of applications are more fraudulent than others.  Or, the culture of that office was one that made assumptions about one political group versus others.

Every company has a culture.  Some, like Hewlett-Packard and Google, wear it on their sleeves.  Others only think about their culture when something that completely goes against it occurs.  For example, a utility company may have an unspoken cultural norm of “keeping the lights on” that’s never really talked about until they suffer a massive failure.

When working with companies on their culture, it’s amazing how consistent employees are in describing it, even across departments.  Comments about what it is like to work at a company, such as “Do what you are told,” “You can question authority,” “It’s all about profit,” or “It’s all about the customer” are heard consistently.

Where does culture come from?  Sometimes from stories passed down (apocryphal or not) from something the founders did or said.  More likely, culture comes from the behaviors that are rewarded.  If working 18 hours a day gets people promoted, then people will work 18 hours a day.  If doing outside charitable work in the company’s name gets recognized, then employees will think of the company as one that gives back to the community.  If you want to change your company’s culture, you must examine what truly gets rewarded and recognized.  Only after those changes are made, and the passage of time, will you see a difference.  There is no culture change because a leader says so.

Your engagement survey results should be able to tell you quite a bit about your culture.  You can also get a sense of your company’s culture by looking at your performance appraisal form.  If you want to be innovative, are you recognizing and rewarding (reasonable) risk or cost containment?  If you want engaged employees, are you recognizing managers who have low turnover?  While the saying “What gets measured gets done” is a cliché, it also happens to be true.  The corollary is “What gets measured and recognized is your culture.”

What steps are you taking with senior management to develop the culture you want?

For more information on employee engagement, please contact Warren at 310 670-4175 or [email protected]

What Does the Presidential Election Tell Us About Executive Decision Making?

While this post is about the election, it is NOT about politics.  Rather, it’s about decision making and how pre-determined conclusions about the facts affect the process.

In this case, the facts were the polling data regarding the presidential election (you can see which polls were the most accurate here).  For the last 40 years, the polls have been pretty accurate, and their errors have been evenly distributed between the two political parties (see this link).

So, I’m at a networking business lunch the day before the election discussing what might happen on Election Day.  Most of the people at my table were pretty strong Republicans.  When I talked about the tracking data and its historical accuracy, they dismissed those facts with anecdotes, for instance saying that Obama wasn’t drawing big crowds but Romney was, as if crowd size were indicative of turnout, when pollsters explicitly base their data on those who say they are likely voters.  They clearly had an idea in their heads and nothing would have convinced them to let it go.  I’m sure if the situation had been exactly the opposite (sitting with a group of Democratic partisans talking about the Republican candidate’s lead), it would have been the same conversation.

This reliance on going with the “gut” instead of with the facts is something I see from some (certainly not all) supervisors, managers, and leaders all the time.  The question is: how do we consult with managers to help them make better (read: unbiased) decisions in the HR arena?

One approach is to find out the person’s biases up front.  If the person already knows the “right” answer to the questions they are asking you, ask what kind of data would change their mind.  If the answer is “nothing,” then you at least know how much time you want to spend on the project.  You have to pick your battles.

When it comes to selection, managers have a built-in bias toward systems that have (or haven’t) allowed them to progress.  If they’ve passed tests that helped them move up, then all test results are accurate.  If someone else got the promotion after going through an assessment center, then assessment centers are a waste of time and money.  The challenge here is to get them to see the pattern of effectiveness, not just a single result.  There are plenty of good analytic tools for doing this.

It’s important to remember that when a person has this type of bias, there is a reason for it.  The bias itself comes from some other experience.  Learning more about how the person developed the strongly held opinion gives both of you a better understanding of it.  This will also allow you to present data that may be more acceptable to them.

Finally, remember that the person’s bias may lead to the best decision (just because someone has their mind made up does not make them wrong).  However, the process is important.  And presenting solid HR data can only help you in the long run.

For more information on leadership, please contact Warren at 310 670-4175 or [email protected]

Caveat Emptor

Let’s say you’re about to purchase an employment-related test.  It could be for hiring entry-level personnel, making a promotional decision, or providing feedback to people on their skills.  There are about 50,000 of these available, and many purport to measure the same things (leadership, EQ, etc.), so how do you choose the right one?

Professional standards require that test publishers provide important information about tests so that buyers can make informed choices.  Of course, not everyone selling tests, especially on the internet, bothers with such niceties.  At a minimum, here’s what you should find out about a test before you buy it (note that some of these may require that you consult someone with a good knowledge of statistics):

  1. What is the validity of the test?  Validity means the relationship between scores on the test and performance on the job.  For practical purposes, there are two ways to show the validity of a test:

    a) Is there a statistical relationship between test scores and job performance?  Remember correlations from college statistics?  What you are looking for here is a statistically significant correlation between how someone does on the test and how well they do on the job.  Look for correlations of about .30 and higher.  This provides evidence that the test does what it’s supposed to do:  measure how well someone can do a job.  This type of data is critical when you are looking at personality or aptitude tests.

    b) Do the items on the test represent the job?  If you were looking at a test for an electrician, you would want to see items about Ohm’s law, different phased motors, etc.  Or, if you were looking for a manager assessment, you’d want it to involve coaching, providing direction, etc.  This is more than the test just looking right.  The key here is that the content of the test should match the content of the job you want to use it for.  Test publishers should be able to show that the test items were written based on input from job subject matter experts.

  2. What is the distribution of scores on the test (or the average score and standard deviation) for people in the same job as those you are going to give the test to?  This is how you can tell if a person’s score is good, bad, or average.  Saying that someone scored a 37 on a test is meaningless without this information.
  3. Is there a difference in scores between different demographic groups on the test?  Test publishers should be able to let you know if group (gender, race, age) differences exist on the test.  This doesn’t make a test good or bad, but it lets you understand whether particular passing points might lead to adverse impact.
  4. You’ll want to get some administrative details as well.  What’s the time limit? Does it have online and paper-and-pencil versions?  If online, how is feedback given?
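The first three checks above are all simple statistics. Here is a minimal sketch in Python with invented scores and ratings: the validity correlation from item 1a, the norm-based interpretation of a raw score from item 2, and a group pass-rate comparison for item 3 (using the common four-fifths rule of thumb for flagging potential adverse impact).

```python
# Sketch of the statistical checks a test buyer should make.
# All scores, ratings, and pass counts below are invented.
from statistics import mean, stdev

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 1a) Validity: test scores vs. later job-performance ratings.
scores = [28, 35, 41, 22, 37, 30, 44, 25]
ratings = [3.0, 3.6, 4.2, 2.5, 3.9, 3.1, 4.5, 2.8]
validity = pearson(scores, ratings)

# 2) Norms: a raw score of 37 only means something relative to the
# score distribution for people in the same job (here, a z-score).
z = (37 - mean(scores)) / stdev(scores)
print(f"validity r = {validity:.2f}, z-score for a raw 37 = {z:+.2f}")

# 3) Group differences: compare pass rates across groups. A ratio
# below 0.80 (the four-fifths rule of thumb) is a red flag worth
# investigating before setting a passing point.
pass_rate_group_a = 18 / 30
pass_rate_group_b = 12 / 30
impact_ratio = pass_rate_group_b / pass_rate_group_a
print("potential adverse impact" if impact_ratio < 0.80 else "ok")
```

A reputable publisher should hand you these numbers from their own validation research; the point of the sketch is only to show what you should be asking for and how to read it.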

If the test publisher doesn’t have this information available, it means they have not done the proper R&D work on their test and you shouldn’t buy it (unless you want to be the R&D site).  Remember, once you administer a test to a candidate or employee, it’s your responsibility and you need to live with the outcomes.  Be sure you have full knowledge of the tests that you are using.

For more information on pre-employment testing, test validation, skills assessment, and talent management, please contact Warren at 310 670-4175 or [email protected]

Does Employee Development Work?

You might think that’s an odd question given how much time and money companies spend on it. However, companies rarely go back and examine whether performance improved or what they could do to ensure that it does next time.  Why is that?

Part of the reason is that organizational development specialists aren’t really trained to do so.  Also, since the need for training is intuitive, management is not hard-pressed to justify its value.  Note that this becomes a real problem when budgets are cut, as training is almost always one of the first things to go.  Why?  Because its value hasn’t been documented.  Acquiring these kinds of measurement skills or resources would actually help organizational development professionals keep their jobs, as they would then be spending time and money on initiatives that work and could show their value.

I had an employee development project with a client where we had the data to measure the results. First, we measured performance and assessed key skills. Then the group received training based on overall results, and individuals received training based on their individual needs. After eight months, we went back and measured performance again.

What did we find? There were some areas of performance where there was significant group improvement. In other areas there were not (there wasn’t a decrease in performance in any area). However, we now know which areas are still trouble spots for the team and which ones are not. Therefore, training dollars and time can be spent in areas where they are needed.

I’ll be clear about my bias here:  money spent on valid pre-employment or promotional assessments brings a much greater return than money spent on development.  However, there will always be a need to improve the skills of your current workforce.

Training and development initiatives should be approached with the goal of improving performance (not just doing better on a test).  Once the performance area is identified, you should be able to benchmark current levels and compare them to post-training levels to provide an indication of success.  Sure, a control group would also be nice, but it is not always practical.
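The benchmark-then-compare approach can be as simple as a paired t-test on the same employees before and after training. Here is a minimal sketch with invented performance scores (not the client data from the project above):

```python
# Sketch: compare performance before and after training for the
# same eight employees with a paired t-test. Numbers are invented.
from statistics import mean, stdev

before = [62, 70, 58, 75, 66, 71, 60, 68]
after  = [68, 74, 61, 77, 72, 75, 63, 70]

# Per-person improvement, pairing each employee with themselves.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)

# Paired t statistic: mean difference over its standard error.
t = mean(diffs) / (stdev(diffs) / n ** 0.5)
print(f"mean improvement = {mean(diffs):.1f}, paired t = {t:.2f}")

# Compare t against the critical value for n-1 degrees of freedom
# (about 2.36 for a two-tailed test at alpha = .05 with 7 df).
```

If the t statistic clears the critical value, you have evidence the training moved the needle in that performance area; if not, you know where next year's training dollars should (or should not) go.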

Evaluating the effectiveness of employee development isn’t easy, but it can and should be done. To learn more about how to do it, or this particular project, send me an e-mail using this link.

For more information on skills assessment and talent management, please contact Warren at 310 670-4175 or [email protected]

Surveys – Confidential or not?

Some surveys (like 360s or those that measure employee engagement) are conducted in confidence, meaning that you share only group-level data, not individual responses, with others.  Other surveys, such as those given to customers or those asking group members for suggestions, may not be.  Under the latter condition, what level of confidentiality do you owe the participants?

First and foremost, you want to make the level of confidentiality very clear before giving the survey.  In some cases, people are asked to give up confidentiality explicitly ("Please give us your name if you'd like to be contacted.").  In others, it is given up implicitly ("In which projects would you like to be involved?").  Additionally, let people know how the data is going to be used before asking them to give up their confidentiality.

However, even when confidence is given up, there are other factors to be considered before you would share results by name with others in the organization.  Are the person’s responses to the survey offensive or mean-spirited?  Does it appear that the person thought that the comments would be made in confidence?

Remember, confidence implies trust.  Use your judgment when protecting it.

For more information on 360 feedback, employee engagement, and talent management, please contact Warren at 310 670-4175 or [email protected]
