Businesses nowadays measure everything. From marketing campaign ROI and recruitment statistics to customer satisfaction and website performance, there are few stones left unturned. One area that’s just as, if not more, important to measure is customer service team performance, and one look at the statistics will tell you why.
83% of executives say that poor customer experience (CX) puts their revenue at risk. Meanwhile, 73% of consumers say a good experience is key to influencing their brand loyalty. There is no shortage of surveys and research that have unearthed similar findings, and they all arrive at the same conclusion: customer service is critical to the health of a business.
Evaluating and measuring the performance of your customer service team, then, is something that you need to stay on top of if you want to attract new customers and retain existing ones. But evaluating customer service performance is easier said than done; it can mean different things to different organizations, and this is where leaders can trip up and make mistakes.
With this in mind, here are the top three mistakes that you should avoid when evaluating your customer service performance.
Mistake #1: Confusing individual performance with team performance
Team performance metrics (for example, number of conversations solved) and individual agent performance metrics (for example, conversation escalations) are two very different things, yet it can be easy to conflate the two and arrive at incorrect conclusions and interpretations. It is, therefore, important to look at and evaluate the two in isolation. Customer service platforms like Dixa include reporting functionality that enables CS team leaders to track this information separately.
Evaluating individual customer service agent performance
When analyzing the performance of individual agents, look at the actual, direct impact of each agent's contribution. Metrics that reflect a wider team effort, or that are attributed only to the last assigned agent, are poor guides because they obscure who was actually responsible for a satisfactory resolution or a failed first contact resolution.
The best way to measure an agent’s direct contribution is through event metrics (e.g., replies, internal notes, successful resolutions). This helps you to determine exactly who is having a positive or negative impact on the team’s overall performance.
It is also key to put those metrics into perspective by taking into account each agent's workload and specialization: an agent working on escalations will usually show lower productivity because the tickets they handle are more complex or require additional steps. These agent performance metrics can help you gain a deeper understanding of an individual's contribution.
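As a minimal sketch of the idea, the event metrics described above can be computed by counting each agent's actions in an event log. The log format and event names here are illustrative assumptions, not Dixa's actual data model:

```python
from collections import Counter

# Hypothetical event log: each entry records which agent performed which action.
events = [
    {"agent": "alice", "type": "reply"},
    {"agent": "alice", "type": "resolution"},
    {"agent": "bob", "type": "internal_note"},
    {"agent": "bob", "type": "reply"},
    {"agent": "bob", "type": "resolution"},
]

# Count each agent's direct contributions by event type.
contributions = Counter((e["agent"], e["type"]) for e in events)

print(contributions[("bob", "resolution")])  # 1
```

Counting events an agent actually performed, rather than attributing whole conversations to whoever closed them, keeps the credit (or blame) with the person who did the work.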
Customer service team performance metrics
When looking at team performance metrics, focus on customer experience rather than the individual outcome. For example, look at things like the number of conversations per channel, contact reason, customer priority, and any other metrics or aggregates that link back to your CX strategy.
We recently came across this exact situation while onboarding a new customer in the digital services industry. They were trying to figure out how many conversations were being closed per agent per day. At first, they were looking at how many conversations were being closed per day and which agents were assigned to those.
The problem with this approach, prior to Dixa, was that one customer issue could be worked on by several agents, yet only the person who closed the conversation was counted, even if they weren't the one who provided the resolution. With Dixa, they can now track which agents were involved in a resolution through side conversations and escalations.
Mistake #2: Being misled by interpretation biases
One of the most common metrics that organizations look at when evaluating customer service performance is customer satisfaction or CSAT.
This sounds obvious, right? After all, customer satisfaction is sure to be a pretty good indicator of whether your customer service team is doing a good job because, by definition, customer satisfaction is the measure of how products and services supplied by a company meet or surpass customer expectations.
The right answer is that it can be a good indicator, provided you look at the data properly. The issue with relying solely on customer satisfaction (CSAT) data is that it is collected from only a small fraction of customers: the average response rate of a CSAT survey is between 13% and 15%, so looking at this data in isolation means drawing conclusions from a small, unrepresentative sample.
Instead, look at your CSAT data in conjunction with the number of ratings returned to ensure that the data you're looking at, and the inferences you draw from it, are a true reflection of the actual situation. One way to achieve this in Dixa is to define a minimum number of values required to compute a metric, which ensures that you aren't basing your inferences on small, misleading datasets.
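The minimum-sample safeguard can be sketched in a few lines. The function name, the 1-to-5 scale, and the threshold of 30 responses are illustrative assumptions, not a specific product setting:

```python
def csat(ratings, min_responses=30):
    """Return the percentage of satisfied ratings (4 or 5 on a 1-5 scale),
    or None when the sample is too small to be meaningful."""
    if len(ratings) < min_responses:
        return None  # too few responses: refuse to compute a misleading score
    satisfied = sum(1 for r in ratings if r >= 4)
    return round(100 * satisfied / len(ratings), 1)

print(csat([5, 4, 2]))       # None: only 3 responses, below the threshold
print(csat([5, 4, 2] * 10))  # 66.7: 20 of 30 ratings are 4 or higher
```

Returning nothing at all for an undersized sample is deliberate: a blank cell in a report prompts a question, whereas a score computed from three surveys invites a wrong conclusion.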
Mistake #3: Not understanding the context of key metrics
Two more metrics that are often used to evaluate customer service performance are first contact resolution and reopening rate:
- First contact resolution is the share of queries resolved during the first contact with an agent.
- Reopening rate is the share of queries that are reopened after an initial resolution.
The trouble with metrics like reopening rate is that a conversation can be reopened for a variety of reasons, and this doesn’t necessarily mean that the answer provided didn’t solve the customer’s problem. For example, a customer might reopen a conversation by sending a thank you email, by coming back to the same conversation with an entirely new query, or because an agent accidentally closed a conversation before it was resolved.
As a result, it’s important for customer service leaders to understand the context of their key metrics instead of looking at them blindly. While a high reopening rate might look bad on the face of it, the true reopening rate—for example, the number of tickets reopened because the solution wasn’t adequate—is likely to be much lower.
You can conduct additional research with QA tools to investigate reopening causes and draw statistically sound conclusions about your true reopening rate.
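To make the idea concrete, here is a back-of-the-envelope estimate of the true reopening rate from a QA-reviewed sample. All of the numbers and reason labels are invented for illustration:

```python
# Assumed figures for illustration only.
conversations_total = 1000
reopened_total = 200

# Reopen reasons classified in a QA review of 50 sampled reopened conversations.
sample_reasons = (
    ["thank_you"] * 20          # customer replied just to say thanks
    + ["new_query"] * 15        # an unrelated new question in the same thread
    + ["inadequate_answer"] * 10  # the original answer didn't solve the problem
    + ["accidental_close"] * 5  # agent closed the conversation too early
)

# Share of reopens that actually signal an inadequate answer.
genuine_share = sample_reasons.count("inadequate_answer") / len(sample_reasons)

raw_rate = reopened_total / conversations_total
true_rate = raw_rate * genuine_share
print(f"raw: {raw_rate:.1%}, true: {true_rate:.1%}")  # raw: 20.0%, true: 4.0%
```

With these assumed numbers, a headline 20% reopening rate shrinks to a 4% true rate once thank-you replies, new queries, and accidental closes are filtered out, which is exactly the gap between a metric taken at face value and one read in context.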
KPIs need context when evaluating customer service performance
Your customer service key performance indicators (KPIs) are very important, but they are only useful when properly understood and interpreted, using the right scope of data.
To avoid making mistakes in assessing customer service team performance, customer service leaders need to ensure that they are looking in the right place and at the complete picture, including all available context, and that they properly understand what everything means. Quality assurance is an important part of tracking and improving customer service performance. Read The Fundamentals of Quality Assurance to get started.