When Convenience Gets Under Your Skin

Whether it is Amazon planning stores without cash registers, buying drinks in a club without your wallet, or tracking the movement of just about any goods you can think of, RFID (radio-frequency identification) is part of our lives. But what if your CEO or CTO came to you and said, “What if our employees had an RFID chip implanted in them?”

As with a lot of tech, the argument in favor of it is convenience. Employees could access buildings, rooms, computers, vending machines, etc., just by walking past an RFID reader. No more reaching for (or losing) key cards.

So, a company in Wisconsin is trying it out. The non-squeamish volunteers will get the chip (about the size of a grain of rice) put into their hand between the thumb and forefinger.

I will avoid making ominous comparisons to 1984. But I am curious about the real productivity or engagement benefits of doing this. How much time is actually wasted fumbling for security cards? Does this help prevent any security breaches? I am just not seeing the ROI, so I doubt that many companies will adopt this.

I am not anti-technology, or else this blog would show up on a piece of paper. Nor do I expect every tech idea to be a good (or bad) one. However, business decisions that affect employees should be made on something beyond, “This would be cool!” Someone at the companies adopting this technology did just that (probably after an amazing sales pitch). Does it establish them as having a forward-thinking, tech-enabled culture? Sure. Does it also show them favoring style over substance? Again, I think the answer is, “Yes.”

We can help companies establish a culture of good decision making by facilitating data-driven discussions. Questions like, “What are our goals?” and “How do we determine whether this innovation is successful?” are a good way to separate a fad from an effective organizational initiative.

Is Seeing Really Believing?

Something I hear frequently from clients is, “I wish I had a day/week/month to see my candidates do the job.  Then I would make fewer hiring mistakes.”  It is, of course, an intriguing idea.  We test drive cars before we buy them.  Why not try out people before we hire them?

There is a long history of sampling work behavior in selection systems, whether it be using Assessment Centers to hire/promote managers and executives or having people make things for craft positions. The accuracy of these types of assessments is good, falling somewhere between cognitive ability tests and interviews. For candidates, the appeal is that they feel they can really show what they can do rather than have their job-related skills or personality inferred from a multiple-choice test.

The issues in using a job tryout would include:

  • Paying the person for their time. There is an ethical, in some cases legal, issue in having a person work for free.  So, be prepared for your cost per hire to go up significantly.
  • Candidates would either need flexible schedules or plenty of PTO to participate in such a program.
  • Having meaningful work for the candidates to do. If you are going to narrow the gap between what the assessment and the job look like, then you would have to have projects that impact process, customers, etc that you would be willing to have a short-term contractor do.  Or, that you already have them doing.
  • Determining how to score the job tryout. Most organizations do a pretty poor job of measuring job performance over a full year, let alone a couple of days. Developing scoring criteria would be key for making good decisions and avoiding bias.
  • Having someone who is not your employee perform work that could affect your customers or the safety of others will make your attorney break out in a cold sweat. This by itself should not convince you not to do job tryouts, but you will have to sell that person on the idea.

What got me thinking about job tryouts was this article.  I was impressed that the company had thought through the problems in their selection process and came up with a creative way to address them. They certainly handle the pay issue well and they currently have the growth and profitability to make the program worthwhile. What is left unsaid, but communicated through some derisive comments about multiple-choice tests, is that they feel that using tests would not fit their culture well.

My concern is that they are more worried about “fit” than skills. This also translates into not having an objective way to evaluate how well a person did, which leads me to believe that they will run into the problem of only hiring people who are just like them.

Lastly, they have a pretty high pass rate that “feels right.”  If I worked for them, I would be concerned that a lot of time and effort is being spent confirming what was seen in the less valid interview.  This is particularly true in a company where metrics are important for everything else.  Having people work for you for a few days and not having an objective way to measure how well they did is not going to lead to better candidates than a series of interviews.

Advances in selection tools will likely come from start-up companies that are not bound by tradition when it comes to hiring. The tech sector presents a lot of opportunities to improve valid selection systems by its very nature: it is set up to disrupt, and it gathers a lot of data. That is a great platform for seeing what people do before you hire them to do it.

The Challenge in Finding Good Performance Data

In validating tests, getting a hold of good individual performance data is key.  But, it is also one of the more difficult parts of the process to get right.

Intuitively, we all think we can judge performance well (sort of like we all think we are good interviewers). But, we also know that supervisor ratings of performance can be, well, unreliable. This is so much the case that there is a whole scientific literature about performance appraisals, even as there is currently a movement within the business community to get rid of them.

But, what about objectively measuring performance (for every new account opened you get $X)? If the Wells Fargo imbroglio tells us anything, it’s that hard measures of performance, when incentivized, can run amok. Also, while they are objective, single measures (sales, piece-work manufacturing, etc.) rarely reflect the entirety of performance. Lastly, for jobs where people work interdependently, it can be very difficult to determine exactly who did what well, even if you wanted to.

So, what’s one to do?

  • Establish multiple measures of performance. For instance, call centers can measure productivity (average call time) and quality (the number of people who have to call back a second time). Don’t rely on just one number (see the sketch after this list).
  • Even when a final product is the result of a group effort, each individual is still responsible for some pieces of it. If you focus on key parts of the process, you can find those touch points which are indicative of individual performance.  Again, look for quality (was there any rework done?) and productivity (were deadlines met?) measures.
  • Objective performance measures do not have to have the same frequency as piece work or rely on one “ta-da” measure at the end. Think of meeting deadlines, whether additional resources were required to complete the work, etc.
  • Don’t get bogged down in whether or not a small percentage of people can game the system with objective measures. We seem OK with rampant errors in supervisory judgment, but then get all excited because 1 out of 100 people can make his productivity seem higher than it is.  If you dig into the data you are likely to be able to spot when this happens.
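
To make the multiple-measures point concrete, here is a minimal sketch in Python. The agents, numbers, and 50/50 weighting are all hypothetical; the only idea it illustrates is standardizing two measures that sit on different scales and averaging them into one composite.

```python
# Minimal sketch: combine a productivity measure and a quality measure into
# one composite score per agent. All data and weights are invented.
from statistics import mean, stdev

agents = {
    "A": {"avg_handle_time": 320, "callback_rate": 0.08},
    "B": {"avg_handle_time": 280, "callback_rate": 0.15},
    "C": {"avg_handle_time": 400, "callback_rate": 0.05},
}

def z_scores(values):
    """Standardize values so measures on different scales can be averaged."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

names = list(agents)
# Lower handle time and lower callback rate are better, so flip the sign
# so that a higher composite always means better performance.
productivity = [-z for z in z_scores([agents[n]["avg_handle_time"] for n in names])]
quality = [-z for z in z_scores([agents[n]["callback_rate"] for n in names])]

for name, p, q in zip(names, productivity, quality):
    composite = 0.5 * p + 0.5 * q   # equal weights; adjust to fit the job
    print(f"Agent {name}: productivity={p:+.2f}, quality={q:+.2f}, composite={composite:+.2f}")
```

The particular weights matter less than the principle: no single number tells the whole story.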

When I hear people say that you cannot measure individual performance well, I cringe.  Of course you can.  You just need to know where to look and focus on what is important.

 

 

The Culture of Over-the-Top Customer Service

There are several approaches that companies take with their customer service:

  • Necessary evil. This is when they figure they have to have it (by expectation or regulation).
  • A cost of doing business. Companies take this approach when they still take orders online and need to provide some support for returns, etc. This is a transactional approach.
  • An opportunity to build the brand and the business. Companies take this approach to cross-sell and build lifetime customers.

Intellectually, most contact center execs would like to see themselves in the third category. But doing so presents an amazing challenge in terms of culture and money. This article talks about what some companies have done to use their contact centers to gain market share.

Though we normally think of a company’s culture as an internal issue, it does influence how it treats its customers. Think about the last time you contacted a bank versus a tech company. Or, a legacy firm versus a start-up. Whether or not it is by design, these are different customer experiences. That’s the company’s culture at work.

 

The article describes how Dollar Shave Club consciously brings their culture to their contact center. They are linking the image they portray in their ads to their customer service. The goal is to extend their brand to every touch point with prospects and subscribers.

The article draws a dichotomy between scripted (implied: boring) and unscripted (implied: exciting). I don’t buy this. There are plenty of unscripted contact centers that are not very exciting.

Another issue is really how much judgment the agents have and how this affects how you select them. When agents are heavily scripted and following a decision tree, they don’t get to use much judgment, but they need to be very conscientious. When agents are unscripted, judgment is much more important. When you add the culture component, then you must ensure that the person’s personality fits your organization. Each of these represents a different set of attributes that you need to consider when recruiting and hiring.

Interestingly, it takes more training to get people to execute a culture in an unscripted call center than to follow rules in a scripted one. I would imagine that this training also has a lot to do with the limits (or lack thereof) of the customer interactions and indoctrinating the culture into the reps. This leads to a more consistent (or controlled, as Zappos says) customer experience, even without scripts.

The ROI calculation for the extreme customer contact centers described in the article is pretty straightforward.

1) What are the additional costs of spending 7 weeks training reps versus 2-3 weeks? The additional thought here is whether the training cost per seat per year is the same once you factor in turnover rates. If unscripted agents stay longer, then maybe the training doesn’t cost that much more (see the sketch after these three questions). You can reduce the training time and turnover with valid selection procedures.

2) How many more reps do you need in an unscripted center, where handle times will be longer? If this approach leads to more first-call resolutions, then some of the costs will be mitigated. If unscripted calls lead to more variance in your handle times, how do you accurately schedule agents?

3) Does this approach lead to more sales and/or the development of customers for life? The key word here is more. If scripted service leads to the same amount of sales, then the extra investment in unscripted training and FTEs is not worth it. It’s easy to say that the unscripted approach feels better, but that doesn’t pay the bills. You need to track the impact on sales (or at least conversions). Note that most of the companies cited in the article using unscripted calls are living off of investor money rather than revenue.
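
Here is a rough, back-of-the-envelope sketch of the first question. The 7-week versus 2-3 week training lengths come from the article; the weekly trainee cost and the turnover rates are purely my assumptions for illustration.

```python
# Back-of-the-envelope: training cost per seat per year, scripted vs. unscripted.
# Training lengths are from the article; all other numbers are assumptions.
WEEKLY_COST_PER_TRAINEE = 900  # assumed fully loaded cost of a rep in training

def training_cost_per_seat_per_year(training_weeks, annual_turnover):
    # Rough steady-state view: each seat is refilled (and retrained) about
    # `annual_turnover` times per year.
    return training_weeks * WEEKLY_COST_PER_TRAINEE * annual_turnover

scripted = training_cost_per_seat_per_year(training_weeks=3, annual_turnover=0.60)
unscripted = training_cost_per_seat_per_year(training_weeks=7, annual_turnover=0.35)

print(f"Scripted:   ${scripted:,.0f} per seat per year")    # $1,620 with these assumptions
print(f"Unscripted: ${unscripted:,.0f} per seat per year")  # $2,205 with these assumptions
```

With these made-up numbers the training gap narrows considerably once turnover is factored in; with your own numbers it may close entirely, or it may not. The same logic extends to questions 2 and 3.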

Your contact center is an extension of your business model and culture. Be mindful of how they affect the strategy behind your center. Most importantly, keep focused on the steak, not the sizzle.

For more information on the link between culture, selection, and your contact center, contact Warren Bobrow.

Is There a Customer Service Gene?

In working with clients on developing pre-employment testing systems, I’ve heard the expression “The Customer Service Gene,” or some variant of it, dozens of times. I like it because it transmits the idea that some people have it and some do not. It casually underlines the idea that there are some things you cannot train away. But, having good genes only provides potential. They only translate into high performance if nurtured through good training, coaching, and performance management.

I thought about what makes up the CSG while reading this interview with Jonathan Tisch. One thing that has always struck me when analyzing call center work is that there are only so many types of customer issues an agent encounters, but the customer’s circumstances are much more variable. The best agents are those who can be empathetic to unique circumstances while applying problem solving skills and creativity.

We also have to consider that there may be several sets of CSGs and that those which are the “best” really depend on the situation. For instance, there’s good data suggesting that those who call a contact center have different customer service expectations than those who text or e-mail. The former are looking for more of a personal interaction, whereas the latter’s criterion for a good experience is getting the problem solved. Both sets of agents need to be creative problem solvers, but only one also has to have superior interaction skills.

The good news is that there are valid, cost-effective tests that assess candidates on these attributes. Using them will help you identify who has the appropriate CSG (or at least a lot of it) for your contact center.

For more information on the Customer Service Gene and validated pre-employment testing programs, please contact Warren Bobrow.

Yes, We Are All Biased, But We Don’t Have to Be

Nearly all judgments we make about people are subject to some bias. We carry around these mental shortcuts so that every social situation doesn’t have to consist of obtaining all new information. I will leave it to the evolutionary biologists to fill in the details as to why we do this.

From a practical point of view, these biases invade our work related decisions, such as deciding who did better in an interview, which employee should get a special assignment or a higher performance evaluation, etc. Of course, these biases go both ways. Employees are making the same types of judgments about their boss, interviewer, etc.

We have good ways to minimize these biases in hiring tools (evaluating test scores by group to ensure that different groups are scoring equivalently, adding structure to interviews, using objective performance metrics rather than ratings, etc.). However, these biases also extend to how we communicate broadly.
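
As an illustration of the first item in that list, here is a minimal sketch of evaluating test scores by group. The scores are invented; the standardized mean difference (Cohen’s d) is one common way to express how far apart two groups’ score distributions sit.

```python
# Minimal sketch: are two groups scoring equivalently on a test?
# Scores are invented for illustration only.
from statistics import mean, stdev

group_a = [31, 28, 35, 30, 27, 33, 29, 32]
group_b = [27, 25, 30, 26, 29, 24, 28, 26]

def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / pooled_var ** 0.5

print(f"Standardized mean difference: d = {cohens_d(group_a, group_b):.2f}")
# A d near zero suggests equivalent scoring; a large d is a flag to examine
# the test content, cut scores, and validity evidence before relying on it.
```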

Take a look at (or a listen to) this story. It describes steps that a company took to widen its applicant pool (BTW: this is my favorite way to combat adverse impact). Through an analysis of the language in its job postings, it was found that certain words and phrases would encourage or discourage certain applicant groups. Changes were made and applications increased.

The article addresses two uncomfortable truths:

  • We all have biases
  • They cannot be trained away.

The second one is a bit tougher for my friends in OD to deal with, because a core tenet of diversity training is that if we are aware of our biases we can somehow eliminate them. The research indicates that this is not the case.

However, in recruiting and selection, we can take steps to reduce bias from the process, including:

  • Careful wording of recruitment notices so that they don’t send unintended messages that would lead members of certain groups not to apply.
  • Using selection tools which minimize human bias, such as validated pre-employment tests. Perhaps this also means using audio, instead of video, for evaluating interviews, assessment center exercises, and work sample tests. Many symphonies now do this when evaluating musicians.
  • Adding as much structure as possible to interview protocols.

We know that good selection techniques have a higher ROI than training. Likewise, it is more cost efficient to implement good practices to mitigate bias than to train it out of people.

What are you doing to reduce bias on your selection/promotion procedures?

For more information on valid pre-employment testing, structured interviews and other fair selection techniques, please contact Warren Bobrow.

Keep Your Statistics, Please.

Target has had a rough time with pre-employment tests. They previously lost a case over using a clinical psychology instrument in hiring security guards. Now they have settled again with the EEOC for using tests with adverse impact. I’m very curious as to which tests they were using, but I haven’t been able to find out online, and since they settled the case they don’t have to disclose the information.

For those of you who are using pre-employment tests (and shame on those of you who are not!), there are a few very important takeaways from the case:

  • Do your adverse impact analyses when you implement the tests AND periodically as you are using them (a minimal example of such an analysis follows this list). Why? According to the EEOC, “The tests [Target was using] were not sufficiently job-related. It’s not something in particular about the contents of the tests. The tests on their face were neutral. Our statistical analysis showed an adverse impact. They disproportionately screened out people in particular groups, namely blacks, Asians and women.” Just because your tests do not look like they should have adverse impact doesn’t mean that they don’t.
  • Really, how good is your validity evidence? The key quote from above is “not sufficiently job-related,” which really means the job-relatedness of the tests was not strong enough to support the adverse impact they had. Having a valid test is your defense against an adverse impact claim. Oh, and it’s also the way to show others in your organization the tests’ value.
  • Track your data. I was gobsmacked that Target “failed to maintain the records required to assess the impact of its hiring procedures.” After all, this is the company that knows when women are pregnant before their families do. If you’re the cynical type, you are probably thinking, “Well, they knew it would be bad, so they didn’t keep track of it.” If you get a visit from the EEOC (or your state equal opportunity agency), they won’t look kindly on your not having this kind of information. And it makes you look guilty. Part of the responsibility of having a pre-employment testing program is tracking its adverse impact and validity. If you are thinking of outsourcing it, find out how your contractor plans on tracking the data.
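
To make the first bullet more concrete, here is a minimal sketch of one common screen, the EEOC’s four-fifths (80%) rule, which compares each group’s selection rate to the highest group’s rate. The applicant and hire counts are invented, and this is a rough screen, not a substitute for a full analysis or legal advice.

```python
# Minimal sketch of a four-fifths (80%) rule check on selection rates.
# Applicant and hire counts are invented for illustration.
applicants = {"Group 1": 200, "Group 2": 120}
hired = {"Group 1": 60, "Group 2": 22}

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "  <-- below 0.80, investigate" if impact_ratio < 0.80 else ""
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}{flag}")
```

Run it when the tests go in and periodically afterward. The point of tracking your data is that you can actually do this.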

In the end, Target figured it was worth $2.8 million to make this go away, especially since they claim they are not using the tests anymore. They can probably find that money between the cushions they sell. What’s left unanswered is whether they will continue to use different tests to select managers and others.

For the rest of us, Target provides a cautionary tale. Big class-action lawsuits about tests should be riding off into the sunset because the standards for validation and implementation are codified into US law. The standards are clear, and they are ignored at your peril.

For more information on using validated pre-employment tests, contact Warren Bobrow.

Going From Bad to Just as Bad

In a post a few months ago, I wrote about the city of Los Angeles selecting which applicants for its firefighter positions would move on in the process based on whether they got their applications in immediately after the application period opened. This was deemed fair since it was random. Later on, it was found that some people in the fire department knew about this and may have told their friends to be sure they got their materials in as soon as possible. Regardless, there was some outcry over the “unfairness” of choosing people based on how quickly they got their applications in.

The city, at the behest of the mayor, has now revised its approach. People have the opportunity to submit applications over the next couple of weeks. A random draw (well, not quite random as we’ll see) will be used to choose those who get to take the written exam. Those not chosen will stay in the pool for the next drawing. A spokesperson for the mayor said, “Mayor Garcetti is seeking a system that results in a fire department that better reflects the city of Los Angeles and has the best possible firefighters.”

How will this process better reflect the city? Well, it’s not a random draw after all. The drawing is weighted so that the demographics of those selected match those who apply. Importantly, that is not the demographics of the city, despite what the mayor’s office says, unless you think that firefighter applicants represent the city in terms of race and gender. Within each sub-pool (say, Hispanic females), the drawing is random, so if 20% of the applicants are Hispanic females, about 20% of those chosen to take the test will be as well. There is no guarantee (yet) that 20% of those who pass the written exam will also have to be.
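
For the mechanically curious, here is a minimal sketch of that kind of demographically weighted draw. The sub-pools and counts are invented; the point is that drawing each sub-pool at the same rate simply reproduces the applicant pool’s demographics, nothing more.

```python
# Minimal sketch: a draw that is random *within* each demographic sub-pool,
# so the selected group mirrors the applicant pool. Counts are invented.
import random

pool = {"Hispanic female": 200, "White male": 500, "Black female": 300}  # applicants
seats = 100
rate = seats / sum(pool.values())   # the same selection rate for every sub-pool

selected = {group: random.sample(range(n), round(n * rate)) for group, n in pool.items()}

for group, picks in selected.items():
    print(f"{group}: {len(picks)} of {pool[group]} applicants advance")
# Roughly 20, 50, and 30 -- the same proportions (and the same expected quality)
# as the applicant pool itself.
```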

Let’s be clear. When you are randomly picking people from a large group of applicants you are not taking steps to select the best possible people for the job, unless there is nothing learned about a person from the application that is a predictor of performance or a disqualifier (e.g., criminal record). Then again, training and experience (T&E) evaluations are not good predictors of performance, so maybe the city is on to something here. Regardless, a random draw ensures that the quality of the people taking the test is the same as the entire pool of applicants. That is not making certain that the best people are taking the test.

I get the reality that the city is at more risk for being sued over discriminatory hiring practices than over the performance of its firefighters. It is doubtful that the ROI of hiring more effective firefighters in the city will ever be computed so it could be compared to any legal payouts. However, choosing people randomly at any point in a selection process sets a bad precedent. It ignores science and common sense and sends the message that the city is not particularly interested in investing the resources to hire those most likely to be successful in a job, whether it be firefighter or janitor. How is that fair to the applicants or city residents?

For more information on legal pre-employment testing and skill assessment systems, please contact Warren at 310 670-4175 or  warren@allaboutperformance.biz

Job Tryouts: Good Idea, but Are They Worth the Cost?

You know what would be the best way to select candidates? Have them work for you for about a year, evaluate their performance, then turn back the clock and make the right hire. However, that only happens in HR science fiction.

In this post I read about how a business uses job tryouts. This, of course, is not uncommon. Many large firms use internships to help determine who they will hire. Tests, assessments, and interviews can give you a sign of whether a person can do well on a job, but seeing him or her do it would really be the best predictor. Of course, this isn’t always practical due to safety, training, and other factors.

One of the most interesting quotes in the article is, “[organizational abilities are] very hard to suss out in the interview process,” which is true. However, there are ways to measure it (and other complex skills and abilities) without putting the person on the job. And the kicker? These kinds of assessments are designed to have the look and feel of the job. They are called Assessment Centers.

From an ROI perspective, you need to balance accuracy with cost. Yes, a job tryout can be very accurate (assuming what you have the person do is typical of his/her duties) but expensive (probably around $10,000 for three months for the type of job described in the article). Using a valid assessment upfront to screen people out (and to reduce the 50% washout rate of those in the tryouts) would be closer to $500. Sure, it’s not as accurate as the job tryout, but it isn’t 95% worse.
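
A rough sketch of that tradeoff, using the figures above (a roughly $10,000, three-month tryout with a 50% washout rate, and a roughly $500 assessment). The improved pass rate after screening, and the three candidates assessed per tryout slot, are purely my assumptions.

```python
# Rough cost-per-successful-hire comparison. The tryout cost, washout rate, and
# assessment cost are the figures cited above; the 70% screened pass rate and
# the 3 assessments per tryout slot are assumptions for illustration.
TRYOUT_COST = 10_000
ASSESSMENT_COST = 500

def cost_per_successful_hire(tryout_pass_rate, screen_first):
    tryouts_needed = 1 / tryout_pass_rate        # expected tryouts per good hire
    cost = tryouts_needed * TRYOUT_COST
    if screen_first:
        cost += tryouts_needed * 3 * ASSESSMENT_COST  # ~3 candidates assessed per slot
    return cost

print(f"Tryout only:         ${cost_per_successful_hire(0.50, False):,.0f}")  # $20,000
print(f"Screen, then tryout: ${cost_per_successful_hire(0.70, True):,.0f}")   # roughly $16,400
```

Whether the screen pays for itself depends entirely on how much it actually moves the tryout pass rate, which is an empirical question, not a given.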

What the article really points out is that even very creative business owners can refuse to apply that kind of ROI thinking to hiring. The only choices for evaluating skills and abilities are NOT an interview or a 90-day probationary period.

What are your creative ideas for evaluating candidates?

For more information on legal pre-employment testing, skill assessments, and Assessment Centers, please contact Warren at 310 670-4175 or  warren@allaboutperformance.biz

Big Data, Evidence Based Decision Making, and the “Golden Motion”

The term “Big Data” really means nothing more than doing a deep analysis of the information you have. It’s “big” because we have more data than we used to (“bigger data” would actually be the better term). The analytics really have more to do with being able to answer questions more reliably due to the larger number of data points. Evidence-based decision making asks, “How can previously analyzed data help us make the best choice here?”

In this article, the author does a very good job of describing “big data” and how it can practically be used to solve business problems. Boiled down to its essence, he uses the information to find the one nugget (or more) of information that leads to improved business performance. When referring to something done by a user, he calls this the “golden motion,” or, what the customer does that leads to higher sales. When using the information to make smart business choices, it’s using the evidence to make decisions.

We can apply the “golden motion” and evidence-based decision making to HR as well. Here are a few examples:

1)    Signing up for benefits. Presuming that you have a self-service model of employees signing up for health care, 401(k) contributions, etc., have you looked at your web portal stats to see which behaviors directly lead to more signups? Is it going to a particular FAQ? Or a page that contains a specific graph?

2)    Are employees engaged? While engagement surveys are normally anonymous (and they should be), you can still look at the behaviors of groups which tend to be more engaged than others. Do they have more/less meetings than other groups? Do they do more organized “fun” things together? Do they ask for more/less input from executives? What behaviors do their managers exhibit?

3)    What are the best predictors of job performance and/or turnover? Have you analyzed your pre-employment and current employment data points to see which ones are truly indicative of superstars or those who left too soon? What are some of the things that top performers do (besides doing their jobs well)?
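
As a minimal illustration of that third question, here is a sketch that correlates two invented pre-hire data points with later performance ratings. Real validation work involves much more (adequate samples, multiple predictors, adverse impact checks), but the basic move is this simple:

```python
# Minimal sketch: which pre-hire data point tracks later performance?
# All numbers are invented for illustration.
from statistics import mean, stdev

test_score  = [55, 62, 48, 71, 66, 59, 74, 52]            # pre-employment test
years_exp   = [3, 10, 5, 2, 7, 12, 4, 8]                  # experience claimed on the resume
performance = [3.1, 3.6, 2.8, 4.2, 3.9, 3.3, 4.4, 3.0]    # first-year performance rating

def correlation(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"Test score vs. performance: r = {correlation(test_score, performance):.2f}")
print(f"Experience vs. performance: r = {correlation(years_exp, performance):.2f}")
```

The nugget is not the arithmetic; it is that most organizations already have both columns of data and never put them next to each other.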

What “big data” does is get us to think about the hidden side of things. And you probably have the data.

For instance, employee engagement underlies productivity and profitability. According to Gallup’s employee data from 2010-2011, “organizations with an average of 9.3 engaged employees for every disengaged employee experienced 147% higher earnings per share than their competition.” Understanding the drivers of engagement can help you increase it to the benefit of your bottom line. But, you need to dig into the data to find these nuggets.

But are you asking the right questions to find the “golden motion”?

For more information on using employee engagement and pre-employment testing data to improve profitability, please contact Warren at 310 670-4175 or  warren@allaboutperformance.biz
