Real-time versus Post-call Analytics
Posted by Remco Witkamp
Real-time monitoring is one of today’s hottest topics in the speech analytics world. I can understand why. Who wouldn’t want the ability to automatically detect when an event is happening and help an agent handle the call the right way the first time? Who doesn’t want to give agents the information they need to answer a question before they even think about how to find it? No one wants to force a customer to wait on hold while the agent scours a knowledge base, chats with peers, or simply invents a new response.

The applications for real-time monitoring are varied and valuable, but many organizations are only just beginning to understand how to analyze recorded calls, much less how to integrate real-time monitoring and alerting into their operations. Nevertheless, most organizations are asking the questions, trying to understand whether real-time monitoring should be a priority and even whether it eliminates the need for post-call analytics: “When it comes to speech analytics, what’s more important: real-time monitoring or post-call analytics? And if I want to do both, which should I tackle first?”

As it turns out, this is a trick question. Neither approach is “better”; they are two sides of the same coin. You need both in order to maximize the business value of analyzing and improving the customer interactions occurring in your contact center. As for which to tackle first: start with post-call analytics to determine which changes will bring the greatest returns, then use real-time monitoring and alerting to make those changes happen. You can, and indeed should, have both applications working together so that interaction analytics drives as much improvement as possible.
Not All Applications Are Created Equal
Real-time monitoring is the ability of the speech engine to bridge into a live call and identify the words and phrases indicating a specific topic or event, immediately as the words are spoken. The two most important capabilities of a good RTM engine are (1) minimal latency and (2) structured logic.

Latency refers to how much time the engine needs to identify the event and then notify the agent and/or supervisor that the event occurred. If the application takes more than a second or two from when the event occurs to when the agent (or supervisor) receives the alert, then more often than not the opportunity has passed and the alert is no longer relevant. You realize the value of real-time monitoring only when the system can trigger an action by the agent or supervisor in time to actually alter the outcome of the call.

The term “structured logic” refers to the ability to weave words and phrases together with time operators and Boolean logic to express complex events. Note that real-time monitoring analyzes calls as they happen. Events that occur in short intervals, such as one phrase said within 30 seconds of another, are straightforward for real-time monitoring to identify. Events that, by definition, evolve over long periods of time or must be defined in reference to the end of the call are not really appropriate topics for real-time monitoring. (After all, how can the real-time monitoring system find an event that occurs within a minute of the end of the call when the call has yet to complete?) In these cases, post-call analysis can quickly and easily identify these long-duration events.

Best Uses of Real-time Monitoring
The strongest use cases for real-time monitoring are those where the application scans for words or phrases that indicate an event is happening, and identifying that event triggers an action that will make a difference in the outcome of the interaction.
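As a minimal sketch of the structured logic described above, consider a rule that fires when one phrase is heard within 30 seconds of another. The phrases, timestamps, and streaming interface here are illustrative assumptions, not any vendor’s API:

```python
# Hypothetical rule: alert when "cancel my account" is heard within
# 30 seconds of "too expensive" -- a phrase-plus-time-window event of the
# kind a real-time engine can evaluate as words are decoded.

WINDOW_SECONDS = 30.0

def make_rule(first_phrase, second_phrase, window=WINDOW_SECONDS):
    """Return a stateful checker fed (timestamp, text) events from a live call."""
    state = {"first_seen": None}

    def check(timestamp, text):
        if first_phrase in text.lower():
            state["first_seen"] = timestamp
        if second_phrase in text.lower():
            first = state["first_seen"]
            if first is not None and timestamp - first <= window:
                return True  # event detected: trigger an alert or screen pop
        return False

    return check

rule = make_rule("too expensive", "cancel my account")
transcript = [
    (12.4, "this plan is just too expensive for us"),
    (31.0, "I think I want to cancel my account"),
]
alerts = [ts for ts, text in transcript if rule(ts, text)]
# The second phrase arrives 18.6 seconds after the first, inside the window,
# so the rule fires on the second utterance.
```

The point is that the engine keeps only a small amount of state per rule, which is what allows it to evaluate these patterns with minimal latency while the call is still in progress.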
Most commonly, the system will “screen pop” relevant information to the agent desktop. This information provides guidance, usually derived from the company’s knowledge base, that helps the agent resolve the issue then and there. For instance, the agent may be reminded of disclosures they must make, advised on how best to handle a complex technical question, or given a sales or promotional offer tailored to the specific situation. Supervisors can also be notified about issues as they happen; they might monitor the highest-risk call or chat with the agent to help out in a difficult situation.

Agents do best on calls when they are actively engaged and listening to the customer, with minimal distractions. Any real-time monitoring program must therefore ensure that the alerts sent to the agent during the call address only the most important issues or actions. Post-call analytics is your key to successfully prioritizing real-time alerts and requests for agent action: it will guide you as you determine which events to monitor, which actions those events should trigger, and whether those actions had the desired effect.
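One simple way to keep in-call alerts from overwhelming the agent is to rank candidate events by a priority derived from post-call analysis and surface only the single most important one. The event names and priority values below are invented for illustration:

```python
# Hypothetical priorities, assumed to come from post-call analysis of which
# events most affect outcomes (higher number = more important).
EVENT_PRIORITY = {
    "compliance_disclosure": 3,
    "churn_risk": 2,
    "upsell_opportunity": 1,
}

def pick_alert(candidate_events):
    """Return the single most important event to surface to the agent, or None."""
    known = [e for e in candidate_events if e in EVENT_PRIORITY]
    if not known:
        return None
    return max(known, key=EVENT_PRIORITY.get)

# A mandatory disclosure outranks a sales prompt, so only it pops to the agent.
chosen = pick_alert(["upsell_opportunity", "compliance_disclosure"])
```

This is a sketch, not a vendor feature; the design choice it illustrates is that the ranking itself is the output of post-call analytics, while real-time monitoring merely enforces it during the call.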
You Have to Go Back to Go Forward
Post-call analytics, simply stated, refers to processing call recordings after they happen to discover and communicate patterns in data, allowing you to quantify business processes and outcomes, test hypotheses, determine root causes, and so on. You then use the results of these analyses to drive enterprise-wide change. Post-call analytics lets you understand what’s driving interactions, how agents handle those interactions, and where improvements can be made.

As previously mentioned, you need your real-time monitoring program to be tuned to the events that, when acted upon, will have the greatest impact on your business results. The only way to determine that is with the empirical, quantitative evidence that post-call interaction analytics provides. It’s like the old saying, “which came first, the chicken or the egg?” In the case of achieving the greatest return on investment from your interaction analytics solution, you’ll fare best if post-call analytics comes first.

The primary way post-call analytics arrives at this empirical evidence is through complex, structured searches. The ability to use Boolean and time-based operators against measurements such as the exact length and end time of a call is what sets post-call analytics apart from real-time monitoring and allows it to deliver results in their full context. Because of this, you use post-call analytics to understand the business processes and agent behaviors causing points of failure in the customer experience. Armed with this information, you then redesign workflows to include real-time monitoring and alerting, using it to deliver the right information to the agent at the right time. Post-call analytics then measures how effective the new workflow is in production. Are agents actually executing against the suggestions put in front of them? Are customers responding in the intended manner? Are calls becoming more efficient? Are customers more satisfied?
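To make the contrast concrete, here is a hedged sketch of an end-anchored post-call search. Because the full recording and its total duration are known, the query can reference the end of the call, which a real-time engine cannot do while the call is live. The data shapes and field names are assumptions, not any vendor’s schema:

```python
# Illustrative post-call query: find calls where a phrase occurs within the
# last 60 seconds of the call. This requires knowing the call's total
# duration, which only exists after the call completes.

def said_near_end(call, phrase, within_seconds=60.0):
    """True if `phrase` occurs within the last `within_seconds` of the call."""
    cutoff = call["duration"] - within_seconds
    return any(
        phrase in text.lower() and ts >= cutoff
        for ts, text in call["transcript"]
    )

calls = [
    {"id": "c1", "duration": 300.0,
     "transcript": [(20.0, "is there anything else"),
                    (280.0, "thanks for your patience")]},
    {"id": "c2", "duration": 300.0,
     "transcript": [(50.0, "thanks for your patience")]},
]

# Only c1 says the phrase in the final minute of the call.
matches = [c["id"] for c in calls if said_near_end(c, "thanks for your patience")]
```

The same duration field supports the other measurements mentioned above, such as filtering by exact call length before applying Boolean phrase conditions.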
Back to the initial trick question. Can you really have a functional real-time monitoring system without a good post-call analytics foundation? I don’t believe so. Using real-time monitoring and alerting as a stand-alone application does it a disservice. Real-time monitoring is a wonderful tool for automating business rules and workflows and for driving changes in agent behavior, and it is at its best when post-call analysis drives the events it scans for and determines the actions it triggers. After all, automating a bad process does not magically turn it into a good one. Use post-call analysis to redesign the bad process into a good one, and then use real-time monitoring and alerting to make it happen – on every call.