Bringing the Right Tools to the Table

At some point, all of us have been in a meeting where a discussion breaks out over whether a particular business initiative should be implemented. Someone will say, “I heard about it on a podcast/TED Talk,” or “A friend of mine at XYZ company did it and it worked for them,” or something similar. The question then is: how do we really know that it will work under a given set of circumstances? While we never have 100% of the information we would like before making such a decision, we do have tools to help guide us.

A while back, Dennis Adsit and I entered such a discussion about intentionally letting go of the bottom 10% of a company’s workforce annually. This was one of Jack Welch’s tactics, and it became known as “Rank and Yank” (R&Y). The idea behind it is that resources spent on better performers have a higher return on investment than resources put toward the lowest performers. After a bit of back and forth, we decided to test this the best way we could. The result was an article in Consulting Psychology Journal: Research and Practice.

There are two main takeaways from the article:

1) Under certain circumstances, R&Y may be a very viable option for improving organizational effectiveness. Dennis summarizes this well in this post.

2) When management comes to your team with “I’ve got a great idea…,” you must be prepared to develop an analysis to respond to the request. This is the point I want to address a bit further.

People sometimes confuse having all of the information with having an evidence-based recommendation. In our paper we simulated an outcome based upon a set of assumptions. We talked quite a bit about those assumptions before we accepted them. There were also cases where we thought different assumptions were important, so we ran the numbers under different conditions. This allowed us to draw better conclusions from the data.

In the article we chose to model call center agents for several reasons. Among them were that we knew from experience with clients that their job performance (after training) is consistent on a week-to-week basis and can be measured objectively. This helped in estimating the impact of turnover. We also found that others had measured the “softer” costs of turnover on agent performance. This served as an excellent reminder that, with enough diligence and care, there are many aspects of productivity that can be measured but typically are not. HR brings a lot of value to the table when it rolls up its sleeves and digs into these issues.
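To make the simulation idea concrete, here is a minimal sketch of the kind of model I am describing. It is not the model from our paper: the workforce size, the reliability of the weekly scores, the cut rate, and the ramp-up penalty are all placeholder assumptions that you would replace with your own data, and you would rerun it under several sets of assumptions to see how sensitive the conclusion is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions only; these are not the parameters from the paper.
N_AGENTS = 100        # size of the simulated call center
YEARS = 5             # simulation horizon
CUT_RATE = 0.10       # "rank and yank": release the bottom 10% each year
RAMP_PENALTY = 0.50   # assumed productivity lost by a new hire in year one (turnover cost)

def simulate(cut_rate):
    """Average productivity by year under an assumed annual cut rate."""
    true_skill = rng.normal(0, 1, N_AGENTS)   # latent performance of incumbents
    tenure = np.full(N_AGENTS, 2)             # years on the job (start with experienced staff)
    yearly_means = []
    for _ in range(YEARS):
        # Observed performance = true skill plus measurement noise (imperfect reliability).
        observed = true_skill + rng.normal(0, 0.5, N_AGENTS)
        output = true_skill - np.where(tenure < 2, RAMP_PENALTY, 0.0)
        yearly_means.append(round(output.mean(), 2))
        if cut_rate > 0:
            cut = observed.argsort()[: int(cut_rate * N_AGENTS)]   # lowest observed scores
            true_skill[cut] = rng.normal(0, 1, cut.size)           # replacements from applicant pool
            tenure[cut] = 0
        tenure += 1
    return yearly_means

print("No cuts:       ", simulate(0.0))
print("Rank and yank: ", simulate(CUT_RATE))
```

The value of a sketch like this is not the specific numbers it prints; it is that every assumption is written down where it can be challenged and varied.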

It did not really matter that we chose to simulate the effectiveness of R&Y. It could have been a selection system, a management development program, or a training class. What is important is taking the time and effort to listen to others and work through the data. That is what allows HR to have significant value and impact.

How Committed Are You to Developing a Skilled Workforce?

The economy is in a unique position right now. Unemployment is at its lowest rate this century, as is the net migration rate. This leaves employers who need skilled workers facing a smaller candidate pool in general, and likely one that contains fewer people with the talents they are looking for.

When more of the jobs in the country were in the industrial sector (and there was higher participation in private sector unions), management and labor worked out apprentice programs. These allowed lower-skilled workers to obtain the knowledge and skills for jobs over time. They also required the companies to really think about how they wanted the work done and train people accordingly.

The knowledge and service economy (along with companies’ willingness to expand/contract their head counts and greater employee mobility) has ground the apprentice approach to a near halt. People are more willing to skip from job to job to gain skills, and employers are less leery of candidates who have multiple firms on their resumes. This gives hiring companies less control over the skills of the people they are hiring. I was considering these ideas when I read this article about Kaiser Permanente breaking ground on its own medical school.

Kaiser’s jump into medical education can be taken in several ways, but the one that interests me the most is that a very large player in a big industry (health care) has gone to another big industry (medical schools) and said, “You all are behind the times in providing us workers, and we think we can do it better.” It would be like a software company offering degrees in computer science (I think I just gave Amazon an idea!). This is potentially a disruption of a 300-year-old model of providing workers.

The investment Kaiser is making is large, but they obviously see an even bigger benefit in the quality of their physician labor pool. I would think that if this foray is successful, they would open schools for other professions where they hire a lot of people (e.g., nursing).

The question for other business sectors is this: If your pool of available skilled talent is getting smaller, what are you going to do to ensure you have access to it in the future? Are you going to poach from competitors, or are you planning on creating your own talent pipeline?

I get that the investment in training is high and has its risks (“I don’t want to spend a lot of money developing people just to see them leave”). However, it gives you the opportunity to develop the right skills and create the culture you want. It seems like money well spent.

When Convenience Gets Under Your Skin

From Amazon planning stores without cash registers, to being able to buy drinks in a club without your wallet, to tracking the movement of just about any goods you can think of, RFID (radio-frequency identification) is part of our lives. But what if your CEO or CTO came to you and said, “What if our employees had an RFID chip implanted in them?”

As with a lot of tech, the argument in favor of it is about convenience. Employees could access buildings, rooms, computers, vending machines, etc., just by walking past an RFID reader. No more reaching for or losing key cards.

So, a company in Wisconsin is trying it out. The non-squeamish volunteers will get the chip (about the size of a grain of rice) put into their hand between the thumb and forefinger.

I will avoid making ominous comparisons to 1984. But I am curious about the real productivity or engagement benefits of doing this. How much time is being wasted fumbling for security cards? Does this help prevent any security breaches? I am just not seeing the ROI, so I doubt that many companies will adopt this.

I am not anti-technology, or else this blog would show up on a piece of paper. Nor do I expect every tech idea to be a good (or bad) one. However, business decisions that affect employees should be made on something beyond, “This would be cool!” Someone at the companies adopting this technology did just that (probably after an amazing sales pitch). Does it establish them as having a forward-thinking, tech-enabled culture? Sure. Does it also show them as favoring style over substance? I think the answer, again, is “Yes.”

We can help companies establish a culture of good decision making by facilitating data-driven discussions. Questions like, “What are our goals?” and “How do we determine if this innovation is successful?” are a good way to separate a fad from an effective organizational initiative.

Is Seeing Really Believing?

Something I hear frequently from clients is, “I wish I had a day/week/month to see my candidates do the job.  Then I would make fewer hiring mistakes.”  It is, of course, an intriguing idea.  We test drive cars before we buy them.  Why not try out people before we hire them?

There is a long history of sampling work behavior in selection systems, whether it be using Assessment Centers to hire/promote managers and executives or having people make things for craft positions.  The accuracy of these types of assessments is good, falling somewhere between cognitive ability tests and interviews.  For candidates, the appeal is that they feel they can really show what they can do rather than have their job-related skills or personality inferred from a multiple-choice test.

The issues in using a job tryout would include:

  • Paying the person for their time. There is an ethical, in some cases legal, issue in having a person work for free.  So, be prepared for your cost per hire to go up significantly.
  • Candidates would either need flexible schedules or plenty of PTO to participate in such a program.
  • Having meaningful work for the candidates to do. If you are going to narrow the gap between what the assessment and the job look like, then you have to have projects that impact processes, customers, etc., that you would be willing to have a short-term contractor do, or that you already have them doing.
  • Determining how to score the job tryout. Most organizations do a pretty poor job of measuring job performance over a full year, let alone a couple of days.  Developing scoring criteria would be key for making good decisions and avoiding bias.
  • Having someone who is not your employee perform work that could affect your customers or the safety of others will make your attorney break out in a cold sweat.  This alone should not convince you to skip job tryouts, but you will have to sell that person on the idea.

What got me thinking about job tryouts was this article.  I was impressed that the company had thought through the problems in their selection process and came up with a creative way to address them. They certainly handle the pay issue well and they currently have the growth and profitability to make the program worthwhile. What is left unsaid, but communicated through some derisive comments about multiple-choice tests, is that they feel that using tests would not fit their culture well.

My concern is that they are more worried about “fit” than skills.  This also translates into not having an objective way to evaluate how well a person did, which leads me to believe that they would run into the problem of only hiring people who are just like them.

Lastly, they have a pretty high pass rate that “feels right.”  If I worked for them, I would be concerned that a lot of time and effort is being spent confirming what was seen in the less valid interview.  This is particularly true in a company where metrics are important for everything else.  Having people work for you for a few days and not having an objective way to measure how well they did is not going to lead to better candidates than a series of interviews.

Advances in selection tools will likely come from start-up companies that are not bound by tradition when it comes to hiring.  The tech sector presents a lot of opportunities to improve valid selection systems by its nature: these companies are set up to disrupt, and they gather a lot of data.  This presents a great platform for seeing what people do before you hire them to do it.

The Challenge in Finding Good Performance Data

In validating tests, getting a hold of good individual performance data is key.  But, it is also one of the more difficult parts of the process to get right.

Intuitively, we all think we can judge performance well (sort of like we all think we are good interviewers).  But, we also know that supervisor ratings of performance can be, well, unreliable.  This is so much the case that there is a whole scientific literature about performance appraisals, even as there is currently a movement within the business community to get rid of them.

But, what about objectively measuring performance (for every new account opened you get $X)?  If the Wells Fargo imbroglio tells us anything, it’s that hard measures of performance that are incented can run amok.  Also, while they are objective, single objective measures (sales, piece work manufacturing, etc.) rarely reflect the entirety of performance.  Lastly, for jobs where people work interdependently it can be very difficult to determine exactly who did what well, even if you wanted to.

So, what’s one to do?

  • Establish multiple measures of performance. For instance, call centers can measure productivity (average call time) and quality (number of people who have to call back a second time).  Don’t rely on just one number (see the sketch after this list).
  • Even when a final product is the result of a group effort, each individual is still responsible for some pieces of it. If you focus on key parts of the process, you can find those touch points which are indicative of individual performance.  Again, look for quality (was there any rework done?) and productivity (were deadlines met?) measures.
  • Objective performance measures do not have to have the same frequency as piece work or rely on one “ta-da” measure at the end. Think of meeting deadlines, whether additional resources were required to complete the work, etc.
  • Don’t get bogged down in whether or not a small percentage of people can game the system with objective measures. We seem OK with rampant errors in supervisory judgment, but then get all excited because 1 out of 100 people can make their productivity seem higher than it is.  If you dig into the data, you are likely to be able to spot when this happens.
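As a simple illustration of the first two bullets, here is a sketch of standardizing several hypothetical metrics and combining them into one composite score. The metrics, values, and equal weights are made up for illustration; your job analysis should drive what gets included and how it is weighted.

```python
import numpy as np

# Hypothetical weekly metrics for five agents; the numbers are made up.
handle_time = np.array([310, 290, 400, 350, 275])          # avg. seconds per call (lower is better)
callback_rate = np.array([0.08, 0.12, 0.05, 0.15, 0.07])   # share of callers who call back (lower is better)
deadlines_met = np.array([0.95, 0.90, 1.00, 0.80, 0.97])   # share of deadlines met (higher is better)

def z(x, higher_is_better=True):
    """Standardize a metric so every measure is on the same scale."""
    scores = (x - x.mean()) / x.std()
    return scores if higher_is_better else -scores

# A simple equal-weight composite; the weights are a judgment call and should
# reflect what the job analysis says matters most.
composite = (z(handle_time, higher_is_better=False)
             + z(callback_rate, higher_is_better=False)
             + z(deadlines_met)) / 3

for agent, score in enumerate(composite, start=1):
    print(f"Agent {agent}: composite z-score = {score:+.2f}")
```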

When I hear people say that you cannot measure individual performance well, I cringe.  Of course you can.  You just need to know where to look and focus on what is important.

The Culture of Over the Top Customer Service

There are several approaches that companies take with their customer service:

  • Necessary evil. This is when they figure they have to have it (by expectation or regulation).
  • A cost of doing business. Companies take this approach when they take orders online and need to provide some support for returns, etc. This is a transactional approach.
  • An opportunity to build the brand and the business. Companies take this approach to cross-sell and build lifetime customers.

Intellectually, most contact center execs would like to see themselves in the third category. But doing so presents an enormous challenge in terms of culture and money. This article talks about what some companies have done to use their contact centers to gain market share.

Though we normally think of a company’s culture as an internal issue, it does influence how it treats its customers. Think about the last time you contacted a bank versus a tech company. Or, a legacy firm versus a start-up. Whether or not it is by design, these are different customer experiences. That’s the company’s culture at work.

The article describes how Dollar Shave Club consciously brings their culture to their contact center. They are linking the image they portray in their ads to their customer service. The goal is to extend their brand to every touch point with prospects and subscribers.

The article draws a dichotomy between scripted (implied: boring) and unscripted (implied: exciting). I don’t buy this. There are plenty of unscripted contact centers that are not very exciting.

Another issue is really how much judgment the agents have and how this affects how you select them. When agents are heavily scripted and following a decision tree, they don’t get to use much judgment, but they need to be very conscientious. When agents are unscripted, judgment is much more important. When you add the culture component, then you must ensure that the person’s personality fits your organization. Each of these represents a different set of attributes that you need to consider when recruiting and hiring.

Interestingly, it takes more training to get people to execute a culture in an unscripted call center than to follow rules in a scripted one. I would imagine that this training also has a lot to do with the limits (or lack thereof) of the customer interactions and indoctrinating the culture into the reps. This leads to a more consistent (or controlled, as Zappos says) customer experience, even without scripts.

The ROI calculation for the extreme customer contact centers described in the article is pretty straightforward, and it comes down to three questions (a rough sketch follows them below).

1) What are the additional costs of spending 7 weeks training reps versus 2-3 weeks? The additional thought here is whether the training cost per seat per year is the same once you factor in turnover rates. If unscripted agents stay longer, then maybe the training doesn’t cost that much more. You can reduce both the training time and turnover with valid selection procedures.

2) How many more reps do you need in an unscripted center, where calls will have longer handle times? If this approach leads to more first-call resolutions, then some of the costs will be mitigated. And if unscripted calls lead to more variance in your handle times, how do you accurately schedule agents?

3) Does this approach lead to more sales and/or the development of customers for life? The key word here is more. If scripted service leads to the same amount of sales, then the extra investment in unscripted training and FTEs is not worth it. It’s easy to say that the unscripted approach feels better, but that doesn’t pay the bills. You need to track the impact on sales (or at least conversions). Note that most of the companies cited in the article using unscripted calls are living off of investor money rather than revenue.
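Here is a rough back-of-the-envelope sketch of the first two questions. Every figure in it (call volume, handle times, training weeks, turnover rates, costs) is a placeholder assumption; the point is only to show that the comparison can be reduced to a number that the incremental sales from question 3 would have to beat.

```python
# Back-of-the-envelope comparison of a scripted vs. unscripted contact center.
# All figures below are placeholder assumptions; plug in your own data.

CALLS_PER_YEAR = 500_000
COST_PER_AGENT_HOUR = 25.0          # fully loaded hourly cost
TRAINING_COST_PER_WEEK = 1_500      # per new hire
PRODUCTIVE_HOURS_PER_AGENT = 1_700  # per agent per year

def annual_cost(avg_handle_minutes, training_weeks, annual_turnover, headcount_buffer=1.15):
    """Rough annual cost: staffing to cover call volume plus training for replacements."""
    agent_hours_needed = CALLS_PER_YEAR * avg_handle_minutes / 60 * headcount_buffer
    agents = agent_hours_needed / PRODUCTIVE_HOURS_PER_AGENT
    staffing = agent_hours_needed * COST_PER_AGENT_HOUR
    training = agents * annual_turnover * training_weeks * TRAINING_COST_PER_WEEK
    return staffing + training

scripted = annual_cost(avg_handle_minutes=5.0, training_weeks=2.5, annual_turnover=0.45)
unscripted = annual_cost(avg_handle_minutes=7.0, training_weeks=7.0, annual_turnover=0.30)

extra_cost = unscripted - scripted
print(f"Extra annual cost of the unscripted model: ${extra_cost:,.0f}")
print("It pays off only if the incremental sales/retention it drives exceed that figure.")
```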

Your contact center is an extension of your business model and culture. Be mindful of how they shape the strategy behind your center. Most importantly, keep focused on the steak, not just the sizzle.

For more information on the link between culture, selection, and your contact center, contact Warren Bobrow.

Is There a Customer Service Gene?

In working with clients on developing pre-employment testing systems, I’ve heard the expression “The Customer Service Gene” (CSG), or some variant of it, dozens of times. I like it because it conveys the idea that some people have it and some do not, and it casually underlines the idea that there are some things you cannot train away. But good genes only provide potential. They translate into high performance only if nurtured through good training, coaching, and performance management.

I thought about what makes up the CSG while reading this interview with Jonathan Tisch. One thing that has always struck me when analyzing call center work is that there are only so many types of customer issues an agent encounters, but the customer’s circumstances are much more variable. The best agents are those who can be empathetic to unique circumstances while applying problem solving skills and creativity.

We also have to consider that there may be several sets of CSGs and that which ones are “best” really depends on the situation. For instance, there is good data suggesting that those who call a contact center have different customer service expectations than those who text/e-mail. The former are looking for more of a personal interaction, whereas the latter’s criterion for a good experience is getting the problem solved. Both sets of agents need to be creative problem solvers, but only one also has to have superior interaction skills.

The good news is that there are valid, cost-effective tests that assess candidates on these attributes. Using them will help you identify who has the appropriate CSG (or at least a lot of it) for your contact center.

For more information on the Customer Service Gene and validated pre-employment testing programs, please contact Warren Bobrow.

Yes, We Are All Biased, But We Don’t Have to Be

Nearly all judgments we make about people are subject to some bias. We carry around these mental shortcuts so that every social situation doesn’t have to consist of obtaining all new information. I will leave it to the evolutionary biologists to fill in the details as to why we do this.

From a practical point of view, these biases invade our work-related decisions, such as deciding who did better in an interview, or which employee should get a special assignment or a higher performance evaluation. Of course, these biases go both ways. Employees are making the same types of judgments about their boss, interviewer, etc.

We have good ways to minimize these biases in hiring tools (evaluating test scores by group to ensure that different groups are scoring equivalently, adding structure to interviews, using objective performance metrics rather than ratings, etc.). However, these biases also extend to how we communicate broadly.

Take a look at (or listen to) this story. It describes steps that a company took to widen its applicant pool (BTW: this is my favorite way to combat adverse impact). A data analysis of the language in its job postings found that certain words/phrases would encourage or discourage certain applicant groups. Changes were made and applications increased.
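To give a flavor of what such an analysis can look like, here is a toy sketch that scans a posting for gender-coded terms. The word lists and the sample posting are abbreviated illustrations I made up; published research and commercial tools use much longer, validated lexicons and tie the results to actual application rates.

```python
import re
from collections import Counter

# Abbreviated, illustrative word lists only; real audits use validated lexicons.
MASCULINE_CODED = {"competitive", "dominant", "aggressive", "rockstar", "ninja", "driven"}
FEMININE_CODED = {"collaborative", "supportive", "committed", "interpersonal", "nurturing"}

def audit_posting(text):
    """Count coded terms in a posting to flag language that may narrow the applicant pool."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    masc = {w: words[w] for w in MASCULINE_CODED if words[w]}
    fem = {w: words[w] for w in FEMININE_CODED if words[w]}
    return masc, fem

posting = """We need an aggressive, competitive rockstar who is driven to be the
dominant player in the market while working on a collaborative team."""

masc, fem = audit_posting(posting)
print("Masculine-coded terms:", masc)
print("Feminine-coded terms:", fem)
```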

The article addresses two uncomfortable truths:

  • We all have biases.
  • They cannot be trained away.

The second one is a bit tougher for my friends in OD to deal with, because a core tenet of diversity training is that if we are aware of our biases we can somehow eliminate them. The research indicates that this is not the case.

However, in recruiting and selection, we can take steps to reduce bias from the process, including:

  • Careful wording of recruitment notices so that they don’t send unintended messages that would lead members of certain groups not to apply.
  • Using selection tools which minimize human bias, such as validated pre-employment tests. Perhaps this also means using audio, instead of video, for evaluating interviews, assessment center exercises, and work sample tests. Many symphonies now do this when evaluating musicians.
  • Adding as much structure as possible to interview protocols.

We know that good selection techniques have a higher ROI than training. Likewise, it is more cost efficient to implement good practices to mitigate bias than to train it out of people.

What are you doing to reduce bias in your selection/promotion procedures?

For more information on valid pre-employment testing, structured interviews and other fair selection techniques, please contact Warren Bobrow.

Keep Your Statistics, Please.

Target has had a rough time with pre-employment tests. They previously lost a case over using a clinical psychology instrument in hiring security guards. Now they have settled again with the EEOC for using tests with adverse impact. I’m very curious as to which tests they were using, but I haven’t been able to find out online, and since they settled the case they don’t have to disclose the information.

For those of you who are using pre-employment tests (and shame on those of you who are not!), there are a few very important takeaways from the case:

  • Do your adverse impact analyses when you implement the tests AND periodically as you are using them (see the sketch after this list). Why? According to the EEOC, “The tests [Target was using] were not sufficiently job-related. It’s not something in particular about the contents of the tests. The tests on their face were neutral. Our statistical analysis showed an adverse impact. They disproportionately screened out people in particular groups, namely blacks, Asians and women.” Just because your tests do not look like they should have adverse impact doesn’t mean that they don’t.
  • Really, how good is your validity evidence? The key quote from above is “not sufficiently job-related,” which really means the job-relatedness of the tests was not strong enough to support the adverse impact they had. Having a valid test is your defense against an adverse impact claim. Oh, and it’s also the way to show others in your organization the value of the tests.
  • Track your data. I was gobsmacked that Target “failed to maintain the records required to assess the impact of its hiring procedures.” After all, this is the company that knows when women are pregnant before their families do. If you’re the cynical type, you are probably thinking, “Well, they knew it would be bad, so they didn’t keep track of it.” If you get a visit from the EEOC (or your state equal opportunity agency), they won’t look kindly on your not having this kind of information, and it makes you look guilty. Part of the responsibility of having a pre-employment testing program is tracking its adverse impact and validity. If you are thinking of outsourcing it, find out how your contractor plans on tracking the data.
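For the first bullet, the classic screen is the four-fifths (80%) rule from the Uniform Guidelines: a group whose selection rate falls below 80% of the highest group’s rate is flagged for potential adverse impact. Here is a minimal sketch with made-up applicant and hire counts; in practice you would pair it with statistical significance tests and keep the underlying records.

```python
# Illustrative adverse impact check using the four-fifths (80%) rule.
# The applicant and hire counts below are made up.

applicants = {"white": 400, "black": 250, "asian": 150, "hispanic": 200}
selected   = {"white": 120, "black": 45,  "asian": 30,  "hispanic": 50}

rates = {g: selected[g] / applicants[g] for g in applicants}
highest_group = max(rates, key=rates.get)

print(f"Highest selection rate: {highest_group} at {rates[highest_group]:.1%}")
for group, rate in rates.items():
    impact_ratio = rate / rates[highest_group]
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.80 else "ok"
    print(f"{group:>8}: rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```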

In the end, Target figured it was worth $2.8 million to make this go away, especially since they claim they are not using the tests anymore. They can probably find that money between the cushions they sell. What’s left unanswered is whether they will continue to use different tests to select managers and others.

For the rest of us, Target provides a cautionary tale. Big class action lawsuits about tests are riding into the sunset because the standards for validation and implementation are codified into US law. The standards are clear, and they are ignored at your peril.

For more information on using validated pre-employment tests, contact Warren Bobrow.

Going From Bad to Just as Bad

In a post a few months ago, I wrote about the city of Los Angeles deciding which applicants for its firefighter positions would move on in the process based on whether they got their applications in immediately after the opening period. This was deemed fair since it was random. Later on, it was found that some people in the fire department knew about this and may have told their friends to be sure to get their materials in as soon as possible. Regardless, there was some outcry over the “unfairness” of choosing people based on how quickly they got their applications in.

The city, at the behest of the mayor, has now revised its approach. People have the opportunity to submit applications over the next couple of weeks. A random draw (well, not quite random as we’ll see) will be used to choose those who get to take the written exam. Those not chosen will stay in the pool for the next drawing. A spokesperson for the mayor said, “Mayor Garcetti is seeking a system that results in a fire department that better reflects the city of Los Angeles and has the best possible firefighters.”

How will this process better reflect the city? Well, it’s not a random draw after all. The drawing is weighted so that the demographics of those selected match those who apply. Importantly, that means the demographics of the applicant pool, not of the city, despite what the mayor’s office says (unless you think firefighter applicants represent the city in terms of race and gender). Within each sub-pool, the drawing is random, so if 20% of the applicants are Hispanic females, about 20% of those chosen to take the test will be as well. There is no guarantee (yet) that 20% of those who pass the written exam will also have to be.
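Mechanically, what the city describes is just stratified random sampling: draw at random within each demographic sub-pool in proportion to that group’s share of the applicants. Here is a minimal sketch with made-up group counts; it mirrors the applicant pool, not the city.

```python
import random

random.seed(42)

# Hypothetical applicant pool keyed by demographic group; counts are made up.
applicants = {
    "hispanic_female": [f"HF{i}" for i in range(200)],
    "hispanic_male":   [f"HM{i}" for i in range(350)],
    "white_male":      [f"WM{i}" for i in range(300)],
    "black_female":    [f"BF{i}" for i in range(150)],
}

def stratified_draw(pool, n_to_invite):
    """Randomly draw within each group so the invited sample mirrors the applicant pool."""
    total = sum(len(people) for people in pool.values())
    invited = []
    for group, people in pool.items():
        share = round(n_to_invite * len(people) / total)   # group's proportional share
        invited.extend(random.sample(people, share))
    return invited

invited = stratified_draw(applicants, n_to_invite=100)
hf_invited = sum(1 for person in invited if person.startswith("HF"))
print(f"{len(invited)} invited; {hf_invited} are Hispanic females, matching their ~20% share of applicants")
```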

Let’s be clear. When you are randomly picking people from a large group of applicants, you are not taking steps to select the best possible people for the job, unless nothing learned about a person from the application predicts performance or is a disqualifier (e.g., a criminal record). Then again, training and experience (T&E) evaluations are not good predictors of performance, so maybe the city is on to something here. Regardless, a random draw ensures that the quality of the people taking the test is, on average, the same as that of the entire pool of applicants. That is not making certain that the best people are taking the test.

I get the reality that the city is at more risk for being sued over discriminatory hiring practices than over the performance of its firefighters. It is doubtful that the ROI of hiring more effective firefighters in the city will ever be computed so it could be compared to any legal payouts. However, choosing people randomly at any point in a selection process sets a bad precedent. It ignores science and common sense and sends the message that the city is not particularly interested in investing the resources to hire those most likely to be successful in a job, whether it be firefighter or janitor. How is that fair to the applicants or city residents?

For more information on legal pre-employment testing and skill assessment systems, please contact Warren at 310 670-4175 or  warren@allaboutperformance.biz
