The Current Limitations of Generative AI for Call Centers

With all the unique value that Generative AI brings, there are still a number of key limitations to unlocking its full potential.

Alexander Kvamme · July 25, 2023 (last updated August 10, 2023) · 3 minute read

In the previous posts of this series, we explored the transformative potential of Generative AI (GenAI) for call centers. GenAI is a class of artificial intelligence capable of generating text, images, or other media in response to prompts. As we delved into the capabilities of GenAI, we acknowledged its benefits in improving customer interactions, streamlining operations, and providing deeper insights into customer behavior. However, as with any technology, it's essential to understand its limitations. In this post, we'll discuss the current constraints of GenAI in the call center context.

Not All Models Are Created Equal

There are numerous AI models available, but not all offer the same level of performance or output quality. The upper echelon of AI models, including GPT-4, Claude 2, and PaLM 2, delivers outputs that meet high expectations. However, it's essential to be cautious of claims about in-house models that purportedly rival these top performers. Unless a company has invested substantially in developing such a model, the promised level of output may not materialize.

Lack of Memory: A Double-Edged Sword

One of the key characteristics of current GenAI models is their lack of memory. While this ensures data privacy, as no previous prompts or outputs are stored, it also means that all information necessary to answer a question or solve a problem must be supplied in a single block of text that fits within the model's context window. This limitation can prove challenging when dealing with complex or multi-step customer inquiries in a call center.
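To make this concrete, here is a minimal sketch of what statelessness implies in practice: because the model remembers nothing between calls, each new request must re-send the full conversation history inside one prompt, trimming the oldest turns if the context window would overflow. The function names and the character-based limit are illustrative assumptions, not any specific vendor's API.

```python
def build_prompt(history, new_question, max_chars=12000):
    """Pack prior turns plus the new question into a single prompt,
    dropping the oldest turns if the context limit would be exceeded."""
    turns = list(history) + [f"Customer: {new_question}"]
    prompt = "\n".join(turns)
    while len(prompt) > max_chars and len(turns) > 1:
        turns.pop(0)          # drop the oldest turn first
        prompt = "\n".join(turns)
    return prompt

history = [
    "Customer: My order #1042 arrived damaged.",
    "Agent: I'm sorry to hear that. Would you like a refund or a replacement?",
]
prompt = build_prompt(history, "A replacement, please.")
print(prompt.splitlines()[-1])  # the newest turn is always kept
```

In a real deployment, the limit would be measured in tokens rather than characters, and older turns are often summarized rather than dropped outright.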

Training on Custom Data: A Work in Progress

Large Language Models (LLMs) like GPT-3 can generate highly coherent and contextually relevant responses, but the ability to train these models on custom data is currently limited. While it's possible to load an existing knowledge base into the AI's prompt, the process is not straightforward. Fine-tuning, a method for making the AI more accurate at specific tasks, is currently only available for older models, and its cost-effectiveness is still under debate.
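For readers curious what "training on custom data" looks like in practice, one common approach (as of 2023) is to prepare a knowledge base as a JSONL file of prompt/completion pairs for fine-tuning. This is a hedged sketch: the exact schema varies by provider, and the FAQ data and separator convention here are illustrative assumptions.

```python
import json

# Example Q&A pairs drawn from a hypothetical call-center knowledge base.
faq = [
    ("What is your return window?", "Returns are accepted within 30 days."),
    ("Do you ship internationally?", "Yes, we ship to over 40 countries."),
]

def to_jsonl(pairs):
    """Serialize Q&A pairs as JSONL training records in the
    prompt/completion style used by older fine-tunable models."""
    lines = [
        json.dumps({"prompt": q + "\n\n###\n\n", "completion": " " + a})
        for q, a in pairs
    ]
    return "\n".join(lines)

print(to_jsonl(faq).splitlines()[0])
```

Even with data in this shape, teams still face the cost and maintenance questions noted above: every knowledge-base change means regenerating the file and re-running the fine-tune.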

Context Window and Rate Limitations: A Question of Scale

Top-tier AI models come with inherent limitations in the context window size and the rate at which they can process data. These limitations can pose challenges when trying to analyze large volumes of data or long conversations typical in a call center setting.
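In practice, teams work around these limits by splitting long transcripts into chunks that fit the context window and pacing their API calls to stay under rate limits. The sketch below illustrates both ideas; the specific limits and the `summarize()` callback are assumptions, not any provider's actual quotas.

```python
import time

MAX_CHUNK_CHARS = 8000        # stand-in for a model's context window
REQUESTS_PER_MINUTE = 20      # stand-in for a provider's rate limit

def chunk_transcript(lines, max_chars=MAX_CHUNK_CHARS):
    """Greedily group transcript lines into chunks under max_chars."""
    chunks, current, size = [], [], 0
    for line in lines:
        if current and size + len(line) + 1 > max_chars:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line) + 1
    if current:
        chunks.append("\n".join(current))
    return chunks

def process_all(chunks, summarize):
    """Call summarize() on each chunk, sleeping to respect the rate limit."""
    delay = 60.0 / REQUESTS_PER_MINUTE
    results = []
    for chunk in chunks:
        results.append(summarize(chunk))
        time.sleep(delay)
    return results
```

Chunking has a real cost: a summary built chunk-by-chunk can miss context that spans chunk boundaries, which is exactly the scale limitation this section describes.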

Hallucinations and Jailbreaking: Potential Risks

Two risks associated with GenAI are "hallucinations" and "jailbreaking". Hallucinations refer to instances when the AI, with great confidence, generates information that is entirely fictional. Jailbreaking, on the other hand, involves a malicious user manipulating the AI into revealing information it shouldn't. While these risks are real, they can be mitigated by carefully designing the AI system and implementing robust security measures. Currently, however, there is no guaranteed way to prevent either hallucinations or jailbreaking. Therefore, the wider you roll out Generative AI, the more your risk increases. It's recommended to limit your first Generative AI rollout to a smaller group of individuals, such as executives or staff within operations or analytics.

Stay tuned for the next post in this series, where we will explore how to build a business case for adopting Generative AI for your call center. 

Want to see a custom demo of Pathlight or get help finding the right plan? We'd love to chat.
