Metrics – often referred to as “Key Performance Indicators” or simply “KPIs” – are a necessity, regardless of your field. If nothing else, it’s nigh impossible to say anything meaningful about performance without them, and improving performance turns from something quite achievable into a Sisyphean task.
“What’s measured improves” – Peter Drucker
It is also, however, important not to have too many metrics. By limiting the number of metrics you have, you are more likely to actually improve the results they measure. Most articles I’ve read say that three to five should be sufficient, and I prefer to err on the low side. This is not to say that a KPI may not be a compound metric; resolution rate, for example, may have two versions, one including (the other excluding) tickets not resolved, showing both where tickets are being solved, and how much of the backlog is being carried forward.
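The two versions of the compound resolution-rate metric can be sketched as follows; the ticket counts and tier names here are purely hypothetical, chosen only to illustrate how the two rates diverge:

```python
# Hypothetical counts for one reporting period (illustration only).
solved_per_tier = {"tier 1": 700, "tier 2": 210}
still_open = 90                                  # backlog carried forward
received = sum(solved_per_tier.values()) + still_open

for tier, solved in solved_per_tier.items():
    incl = solved / received                     # version including unresolved tickets
    excl = solved / (received - still_open)      # version over closed tickets only
    print(f"{tier}: {incl:.1%} of received, {excl:.1%} of closed")
```

The gap between the two versions is the interesting part: it shows how much of the incoming case load is being carried forward as backlog rather than solved.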
For reasons that should be obvious, metrics need to be tied to goals (SMART ones for preference), and while we must understand that most – if not all – metrics can be gamed, we should also be open, honest, and (at least internally) transparent about our metrics. Being internally transparent about metrics means that employees know which metrics are being applied, so that they can ensure that they deliver according to the metrics, or adapt their behavior when they fail to do so.
A fairly common target number for resolution rate is for 70% of all tickets to be solved at tier one, and 70% of the remaining tickets to be solved at tier two, resulting in a fairly ambitious goal of 91% resolution rate within the support department. While resolution rate – i.e. the percentage of tickets solved per tier – becomes an important metric, it does not measure quality.
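The 91% figure falls out of the cascade: tier one solves 70%, and tier two solves 70% of the 30% that escapes, i.e. 70% + 21% = 91%. As a quick sketch of the arithmetic:

```python
# Per-tier targets from the text: 70% at tier one, 70% of the rest at tier two.
tier_targets = [0.70, 0.70]

remaining = 1.0
for target in tier_targets:
    remaining *= (1 - target)   # share of tickets escaping this tier

overall = 1 - remaining
print(f"{overall:.0%}")         # prints "91%"
```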
For quality, I like a metric known as %CA, short for “percent correct and approved” (which is fairly common in DevOps circles). This can be applied as either an active or a passive metric and – though I would love for it to be an active metric – I also know that, as a customer, when I am sent a survey after each and every interaction, I get annoyed. The value of an active metric, then, must compete with our need not to annoy our customers, which is why I prefer a passive metric.
My approach – and I’m not saying it’s the only correct one – is that any ticket that is reopened twice fails this metric – and yes, I think it should be a straight pass/fail metric; we’re interested in the aggregate, rather than in each specific case (which should be resolved as best we can at any rate). If you wonder why I’m saying the ticket must be reopened twice, that’s simply because I know from experience that many of my users will respond to a closure notice with a “thank you”, immediately reopening the ticket.
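The pass/fail rule above reduces to a one-line test per ticket. A minimal sketch, assuming a hypothetical ticket record with a `reopened` counter:

```python
# Hypothetical ticket records; only the reopen counter matters here.
tickets = [
    {"id": 1, "reopened": 0},   # closed cleanly: pass
    {"id": 2, "reopened": 1},   # likely a "thank you" reopen: still a pass
    {"id": 3, "reopened": 2},   # reopened twice: fail
]

# A ticket passes unless it has been reopened twice (or more).
passed = sum(1 for t in tickets if t["reopened"] < 2)
pct_ca = passed / len(tickets)
print(f"%CA: {pct_ca:.0%}")     # prints "%CA: 67%" for this sample
```

Because the metric is a straight pass/fail, the aggregate is just the share of passing tickets; no per-ticket weighting or severity scoring is needed.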
Time to resolve
While resolution rate and quality are important, we also need to know something about how we are dealing with our case load. For this, I want a metric that looks at the resolution time. This should be separated into a measurement per tier, as well as an aggregate measurement; the former ensures that higher tiers are not penalised when lower tiers struggle to keep up with incoming case loads, while the aggregate measurement gives us an idea of the time from registration to closure – on average – per case for the entire company.
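The split between per-tier and aggregate measurement can be sketched as below; the hours and tier names are hypothetical, and each tier’s list is assumed to hold the time that tier spent on the tickets it handled:

```python
from statistics import mean

# Hypothetical resolution times in hours, per tier that handled the ticket.
resolution_hours = {
    "tier 1": [2, 4, 3, 5],
    "tier 2": [12, 18, 9],
}

# Per-tier measurement: each tier is judged only on its own tickets,
# so higher tiers are not penalised for delays at lower tiers.
per_tier = {tier: mean(hours) for tier, hours in resolution_hours.items()}

# Aggregate measurement: average time per case across the whole department,
# approximating the time from registration to closure.
all_hours = [h for hours in resolution_hours.values() for h in hours]
aggregate = mean(all_hours)
```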