Published: Mon 22nd Jun 2015

8 things to consider before incentivising employees based on customer feedback

Incentivising employees based on customer experience or satisfaction feedback is an effective way to drive customer-focused behaviour in companies. However, for these incentive programmes to work, they need to be not only understood but also trusted by the organisation. Before embarking on this journey, I would recommend getting a good understanding of the following 8 areas:

1. Maturity and robustness of your programme: has your programme run long enough to be trusted by the organisation? I would recommend that the programme runs for at least 3-4 cycles (depending on sample sizes) before you start using the data for targets and incentives, as you will need the previously collected data to understand your baseline performance, how the scores fluctuate etc.

2. How your score has developed in the past: if you have experienced a slow and steady increase in performance, don’t set aggressive targets unless you have dedicated plans for improving the customer experience that will support them. Do you have seasonal changes which affect the scores during certain periods? If you find that your scores fluctuate due to these seasonal changes, then either adjust the targets during those periods or extend the performance assessment period to cover more than the season.

3. What you are actually measuring: do you measure the right elements of the customer journey (the key drivers of satisfaction)? Can the people being incentivised influence the drivers that dictate their performance score? We once worked with a contact centre that had included a question about “ease of navigating the automated menu system” in the survey used for incentivising call centre employees. This question did impact overall satisfaction; however, the employees had no control over that element.

4. The amount of data you collect: many key metrics used in incentive programmes need a large number of data points; otherwise the score can swing widely without any significant change in underlying performance. Example: if you measure satisfaction on a 1-7 scale and your key metric is a net satisfaction score (score = % of satisfied customers - % of dissatisfied customers), then with 20 responses per store or B2B customer your score needs to increase or decrease by at least 38 percentage points before you can state that performance has genuinely improved or deteriorated.
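To see why small samples allow such large swings, a margin of error for the change in a net score can be sketched with a simple normal approximation. This is an illustrative calculation, not the exact method behind the 38-point figure above; the result depends on the satisfied/dissatisfied split you assume.

```python
import math

def net_score_margin(p_sat, p_dis, n, z=1.96):
    """Approximate 95% margin of error (in points) for the change in a
    net satisfaction score (% satisfied - % dissatisfied) between two
    periods of n responses each, using a normal approximation."""
    score = p_sat - p_dis
    # Variance of the per-response score: each response counts +1, 0 or -1
    var = p_sat + p_dis - score ** 2
    se = math.sqrt(var / n)
    # Comparing two independent periods doubles the variance
    return z * math.sqrt(2) * se * 100

# With 20 responses per period and an assumed 70% satisfied / 10%
# dissatisfied split, only a shift of roughly 40 points is significant.
print(round(net_score_margin(0.70, 0.10, 20)))
```

With larger samples the margin shrinks with the square root of n, which is why store-level incentives on thin data are so noisy.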

5. The line of sight to the customer: can your employees directly influence the performance measure used in their incentive programme, e.g. frontline personnel who directly shape the customer experience at a particular touch-point such as the store till or the branch counter? Or are the employees in support functions that do not have a direct line of sight to customers? In the latter case I would recommend a corporate or business-unit-wide customer experience target.

6. How you collect customer feedback and sampling: different ways of collecting customer feedback affect the score differently. We typically find higher scores from phone interviews than from emails and SMS; having a person at the end of the phone tends to positively bias the results. The time of day, or when during the week or month you collect responses, also influences your results. We previously found that changing the data collection method from phone interviews to emails gave a more accurate reflection of performance: by using emails we collected significantly more responses from the target group of customers aged 35-55, compared to phone interviews, which mainly took place during the day when this age group was at work. Having the right representation across demographics is important for the validity of your results – if that can’t be ensured through sampling, you can always weight the results to counteract any over- or under-representation.
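The weighting mentioned above can be sketched as simple post-stratification: each response is weighted so the sample's demographic mix matches the customer base. The age groups, population shares and scores below are made-up figures for illustration only.

```python
# Assumed share of each age group in the overall customer base
population_share = {"18-34": 0.30, "35-55": 0.45, "55+": 0.25}

# (age group, satisfaction score on a 1-7 scale) per collected response;
# note the 35-55 group is under-represented in this sample
responses = [
    ("18-34", 6), ("18-34", 5), ("18-34", 7), ("18-34", 6),
    ("35-55", 4), ("35-55", 5),
    ("55+", 6), ("55+", 5), ("55+", 7), ("55+", 6),
]

n = len(responses)
sample_share = {g: sum(1 for grp, _ in responses if grp == g) / n
                for g in population_share}
# Weight = how much each group's responses must be scaled up or down
weights = {g: population_share[g] / sample_share[g] for g in population_share}

weighted_mean = (sum(weights[g] * s for g, s in responses)
                 / sum(weights[g] for g, _ in responses))
unweighted_mean = sum(s for _, s in responses) / n
print(round(unweighted_mean, 2), round(weighted_mean, 2))
```

Here the under-represented (and less satisfied) 35-55 group gets a weight above 1, pulling the weighted average below the raw average – exactly the correction you would miss by reporting the unweighted score.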

7. The nature of the feedback: is there a lot of overlap between channels or business areas? In a multichannel environment we normally see 5-10% overlap – e.g. feedback provided in one channel that was actually related to another channel. Negative experiences in the past also have a tendency to creep into the assessment, even when the customer has had a good experience since. Furthermore, it’s important to understand the time lag between the customer experience and when you collect the feedback: the sooner you collect feedback after an event, the more accurate the assessment is.

8. Clear policies: ensure clear policies around what is included in and excluded from the incentive plan, as well as around the appeals process – if and when can a response be excluded from a score? Another element is what you do at the end of a period: if you have monthly targets and a customer experiences an event on the last day of the month but doesn’t provide feedback until the first day of the next month, what do you do with that response?

The final recommendation I will give you is “don’t over-engineer”. If your employees don’t understand the metric or how they can influence performance to reach the target, how can they improve their behaviour?
