We wanted to handle more calls with fewer people, so we introduced KPIs that motivated agents to finish each call as soon as possible; that way we could save some money by not hiring additional agents. The result was agents killing warm leads instead of trying to sell them our product.
We had a large call centre managed by the client support department. At first we only had client support and upselling teams, with no teams responsible for retention or warm lead conversion, so after running some experiments we concluded that we needed to create dedicated conversion and retention teams. As this was a marketing initiative, the marketing department became responsible for the retention and conversion teams, and in this case study I will explain what went wrong with the conversion team.

In geographies with more online-focused operations the conversion team was the marketing department's responsibility, but in more offline-centric markets it stayed with customer support, because there weren't enough warm leads to justify more than one full-time employee (FTE), so it was kept as a function within customer support. Since the call centre was not the main focus area for the marketing department, we initially adopted the KPIs that the support department had been using, because they were the experts in call centre operations, and I didn't even bother looking at the conversion team KPIs in the offline-centric markets where marketing was not responsible for the team.
We took the best practices from call centres and introduced the following KPIs, which should have ensured high efficiency. Agent bonuses were based on these KPIs:
Answer time - measured how quickly we called the user after the lead came in (this was a team KPI);
Time on call - measured how much time an agent spent on a call. The shorter the better, because we could employ fewer FTEs;
Call quality - based on random call reviews using a scorecard that we took from customer support. The scorecard included points like introducing yourself, using the correct tone of voice, doing a call summary at the end of the call, etc. Reviews were done by one person for all of the teams in the call centre;
First call resolution - meant that we solved the customer's problem and the customer didn't call us back within the next week;
Conversion rate - the rate at which agents converted the people they were calling. This was not a KPI used by the customer support department, but it was introduced in the marketing-managed teams.
These KPIs made complete sense, and we trusted that our strong client support team knew what they were doing. They were introduced only in the online-centric countries, and the majority of an agent's bonus was driven by the conversion rate.
We introduced the new teams in the call centre and didn't touch them for a while (about six months) because we had some major campaigns and events to prepare in the marketing department. We kept an eye on the KPIs we had set and were happy with them: they were on par with the customer support team's results, and the conversion rate was still looking very good. Then came the day when I finally had some time to breathe and decided to challenge myself with a little reality check on our new teams and the results they were bringing. I had the opportunity to compare results between online and offline markets, and I was pleasantly surprised that the marketing-managed teams had much better results than the offline markets.
I was happy about my teams, but the poor conversion metrics in the offline markets still had an impact on my customer acquisition cost, so I couldn't leave it as is and had to dig deeper to understand the reason for the difference.
As the calls were in a language I did not understand, and due to data protection reasons, I actually had to travel to the local market to do a deeper review.
I sat down with the local marketing manager, whom I trusted, and we started doing a reality check on some calls: we listened to call recordings and he translated what he had just heard. At first I wanted to hear the top-rated calls to set a quality benchmark.
To our mutual surprise, the first call went something like this:
"Agent: This is AGENT NAME calling you from COMPANY about your recent application that you haven't finalized. Do you still want to continue the application process?
Customer: Yes, sorry, I didn't have time to continue back then.
Agent: Sad to hear that, can I help you continue the application?
Customer: Sorry, I don't have time now but I would be able to finish tomorrow.
Agent: Ok, I understand. Would you like me to cancel the application for you, and then you could make a new one tomorrow?
Customer: Sure.
Agent: Ok, I will now cancel your application... Your application has been cancelled. You can apply again online or at any of our branches. Thank you, have a nice day."
What surprised us the most was that this call had one of the highest call ratings.
Then we listened to some more highly-rated calls and found a similar pattern: at the first hiccup in the process, agents offered to cancel the application instead of offering to call back at a better time or trying to overcome the customer's objections. This was really surprising to us, because we had been working tirelessly to give users incentives to choose us, like discounts, better terms and services, yet we didn't hear any of them used in the calls.
We did a similar quality audit in our branches through the surveillance cameras and checked the internal system statuses after I noticed that people were spending more than 10 minutes at the counter, but I'll tell that story another day.
The next step was to understand why those calls had such good ratings, because what we had expected was that the agent would try to finalise the application and get a conversion during the call, and that if the agent wasn't doing that, the call quality score would be poor. We got the quality controller into the room, and he explained that he had a scorecard and had filled it in accordingly.
We reviewed the scorecard and realised that sales-related criteria were a minority in it, grouped under "dealing with objections", and that this criterion carried the same weight as "introducing yourself", "summarising the call" and "wishing a nice day". In essence, if you made a nice call without even trying to sell anything, you could still get a 95% quality score, as the sketch below illustrates.
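To make that concrete, here is a minimal sketch of how an equally-weighted scorecard like ours plays out. The specific criteria and point values are my illustrative assumptions, not the actual card we used:

```python
# Illustrative scorecard: ten items with equal weight (10 points each, 100 total);
# only one item ("dealing with objections") has anything to do with selling.
criteria_weights = {
    "introduced yourself": 10,
    "verified the customer": 10,
    "correct tone of voice": 10,
    "active listening": 10,
    "clear language": 10,
    "dealing with objections": 10,  # the only sales-related item
    "no long silences": 10,
    "call summary at the end": 10,
    "correct closing phrase": 10,
    "wished a nice day": 10,
}

# A friendly call that ends with a cancelled application: full marks on
# everything, half marks on the single sales-related item.
call_marks = {name: 1.0 for name in criteria_weights}
call_marks["dealing with objections"] = 0.5

quality = sum(weight * call_marks[name] for name, weight in criteria_weights.items())
print(f"Quality score: {quality:.0f} / 100")  # -> 95 / 100
```

With weights like these, an agent is rewarded for being polite, not for selling.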
And the quality score was just one part of the bonus scheme. The other criteria were time on call, answer time and first call resolution, which agents translated the following way:
Time on call - I must finish the call as soon as possible to get a higher bonus;
Answer time - the faster we finish our call the faster we will be able to process all incoming leads;
First call resolution - if we cancel the application, the customer won't call again.
We reworked the quality score criteria and set up a more sales-centric quality evaluation, making the sales-related points account for more than 60% of the total score. We did the same thing with the bonus system. This small change led to a higher conversion rate within a few weeks. Of course, we also had to run additional training on sales best practices to show a better way of selling. Some agents couldn't adapt to the new requirements and had to be replaced, but that's a different story, which I wrote about here.
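For comparison, here is the same kind of polite-but-non-selling call scored against a reweighted card. The exact items and point values are again illustrative assumptions; only the idea that sales-related items carry more than 60% of the score reflects what we actually changed:

```python
# Reweighted scorecard sketch: the three sales-related items now carry
# 65 of the 100 points; the politeness items share the rest.
weights = {
    "dealing with objections": 25,
    "offered an incentive (discount, better terms)": 20,
    "attempted to finalise the application": 20,
    "introduced yourself": 10,
    "correct tone of voice": 10,
    "call summary and closing": 15,
}

# The same kind of call as before: polite, but no real attempt to sell.
call_marks = {
    "dealing with objections": 0.5,
    "offered an incentive (discount, better terms)": 0.0,
    "attempted to finalise the application": 0.0,
    "introduced yourself": 1.0,
    "correct tone of voice": 1.0,
    "call summary and closing": 1.0,
}

quality = sum(weight * call_marks[name] for name, weight in weights.items())
print(f"Quality score: {quality:.1f} / 100")  # -> 47.5 / 100
```

The same behaviour that used to earn a 95% score now lands below 50%, which is what finally pushed agents to actually sell.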
Additionally, we ended up building a transparent dashboard showing our sales funnel results in all markets, so we have a full understanding of where our marketing money is being spent and how our investment is converted into revenue.
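To give a sense of what that dashboard tracked, here is a minimal sketch of the per-market funnel view; the markets, figures and field names are made up purely for illustration:

```python
# Minimal per-market funnel sketch (all figures invented for illustration).
markets = [
    {"market": "online market A", "leads": 1200, "called": 1150, "converted": 300, "spend": 18_000},
    {"market": "offline market B", "leads": 400, "called": 370, "converted": 55, "spend": 9_000},
]

for m in markets:
    contact_rate = m["called"] / m["leads"]         # share of leads we reached
    conversion_rate = m["converted"] / m["called"]  # share of reached leads who converted
    cac = m["spend"] / m["converted"]               # cost per acquired customer
    print(f"{m['market']}: contacted {contact_rate:.0%}, "
          f"converted {conversion_rate:.0%}, CAC {cac:.0f}")
```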
To sum up, here is what went wrong:
We wanted to call more people and do it faster with fewer agents instead of hiring additional people;
Bonus metrics were set up wrongly, without giving the main goal the highest weight;
No one did a follow-up after introducing the new process.
And the lessons I took away:
If we had had a proper growth model with metrics throughout the business, or at least a full-funnel metrics dashboard, we would have noticed and addressed the difference sooner;
You must look beyond your direct responsibilities to succeed;
You can't just copy-paste best practices without adjustments;
You must always reserve time to follow up on and review anything you have newly implemented;
Reality checks are a must!
Do a reality check on your business as well and you might find similar inefficiencies. If you need help with that, feel free to contact me.