It can be easy when you eat, sleep and breathe conversational artificial intelligence (or indeed any subject matter) to forget that not everyone else does too! But whilst we practitioners use technical terms and some jargon to discuss our craft, in any conversation we aim to be straightforward and understandable. With this in mind, our team has produced a glossary of the conversational AI terminology, jargon and acronyms that we use every day and that have become second nature to us.
Here’s our CAI glossary, which we’ll continually update as (inevitably) new conversational AI terminology surfaces. If you’re a word-nerd and think we could improve our definitions, we welcome your feedback to make it better – or if there’s another term you’d like us to add please let us know.
Abandonment
Abandonment is when a user leaves a conversation with a chat or voice bot before completing their intended goal or transaction.
Agentic AI
AI systems that can autonomously analyse data, set goals, and take actions without constant human direction. These systems demonstrate a higher level of independence and decision-making capability.
Why is Agentic AI so important?
Agentic AI represents the next step for Conversational AI systems – moving beyond responding to user prompts into proactively reasoning, planning, and acting. Instead of waiting for instructions, agentic systems can analyse data, set their own sub-goals, and decide on the best course of action. For virtual assistants, this means shifting from being reactive helpers to becoming collaborative partners who can anticipate needs, orchestrate tasks, and adapt as situations evolve.
This independence is particularly powerful in enterprise Conversational AI. A system infused with agentic capabilities can manage complex workflows, integrate with multiple business systems, and take action on behalf of the user – all while staying aligned with guardrails for safety, compliance, and brand voice. Understanding how to design and govern these new behaviours is essential for practitioners.
Application Programming Interface (API)
A set of rules and protocols that allow different software applications to communicate. Communicating with another application (e.g. getting delivery details from a customer database or sending an SMS) is how conversational systems get things done, not just talk about it!
An Application Programming Interface (API) example
APIs are the gateway to services which elevate bots from being able to chat to being able to solve and support. Almost all enterprise bots will use an API to get details about authenticated users, enabling properly personalised conversations and support. There is no point offering a handover to a human agent if the queue wait time is too long, so we use an API to check that – and the integration of telephony and contact centre tech is all done with APIs too. We have worked on bots using SMS APIs to send messages to users, and APIs to agentic services to harness the power of agentic workflows. Essentially, APIs give your bot the power to make things happen at exactly the point your users want it!
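As an illustrative sketch of the queue-check pattern described above (the JSON payload shape, field name and threshold are hypothetical, not any real contact-centre API), a bot might parse a wait-time response and decide whether a handover is worth offering – with the actual HTTP call stubbed out:

```python
import json

def parse_wait_time(api_response: str) -> int:
    """Extract the estimated queue wait (in seconds) from a JSON API
    response. The field name is illustrative, not a real service's."""
    payload = json.loads(api_response)
    return payload["estimated_wait_seconds"]

def should_offer_handover(wait_seconds: int, max_acceptable: int = 300) -> bool:
    """Only offer a human handover when the queue wait is tolerable."""
    return wait_seconds <= max_acceptable

# In production the JSON would come from an HTTP call to the
# contact-centre platform; here it is stubbed for illustration.
stub_response = '{"estimated_wait_seconds": 120}'
offer = should_offer_handover(parse_wait_time(stub_response))
```

The point is the shape of the integration, not the specifics: the bot calls out, interprets the structured response, and branches the conversation accordingly.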
Artificial Intelligence (AI)
The simulation of human intelligence in machines, enabling them to learn, reason, communicate and perform tasks.
Unlocking the true potential of AI
For companies, AI represents a powerful toolkit to drive efficiency, personalization, and smarter decision-making. It can automate repetitive tasks, uncover hidden patterns in data, improve customer experiences through conversational AI, and even spark innovation by enabling rapid prototyping and content generation. Businesses that harness AI effectively can gain a significant competitive advantage by working faster, smarter, and more creatively.
However, implementing AI systems isn’t without its challenges. Success requires clean, well-structured data, integration with existing workflows, and alignment with business goals. Organizations must also address issues like trust, bias, and governance to ensure AI-driven decisions are reliable and ethical. The companies that navigate these challenges thoughtfully will be the ones who unlock the true potential of AI.
AI Models
AI models are the specific mathematical and computational frameworks that enable AI systems to process information and generate responses. These can include various types of neural networks and other machine learning architectures. Want a simpler definition? An AI model is a bit like a software program, but unlike a program which follows strict rules to handle tasks, an AI model learns patterns from data and can handle situations it wasn’t explicitly programmed for.
How would AI Models work in practice?
AI models sit at the heart of Conversational AI. They’re the engines that transform raw user input into something a virtual assistant can understand, reason over, and respond to. Unlike traditional software, which follows hard-coded rules, AI models learn from data – spotting patterns in language, tone, and context. This allows them to deal with the unexpected: questions phrased in new ways, shifts in sentiment, or requests that cross multiple domains.
For practitioners in Conversational AI, understanding models isn’t just a technical detail – it’s a practical necessity. The choice of model (from classical NLU pipelines to advanced LLMs) shapes how natural the conversation feels, how safely it operates, and how scalable the solution can be. As AI assistants become more capable, knowing how different models behave, and how to select and combine them, is central to designing experiences that are not only intelligent, but also trustworthy and effective.
Automatic Speech Recognition (ASR)
The process of converting spoken language into text. This is usually the first step in systems people can speak to, such as voicebots, smart speakers and voice assistants.
Bidirectional Encoder Representations from Transformers (BERT)
A deep learning model designed for natural language processing (NLP) tasks that considers the context of words in both directions.
Chatbot
A software application designed to hold a conversation with human users, usually over the Internet. Chatbots can be rule-based or powered by AI. Rule-based chatbots tend to give the user limited options to choose from, whereas AI powered chatbots are more likely to try to emulate natural language, with varying degrees of success!
Chatbots in the real world
Lots of folk working in the Conversational AI world will have found that when they tell people they work in chatbots, listeners generally recount an awful experience they’ve had with a bot. The usual perception of chatbots is that they stand between the customer’s issue and the human who could resolve it. With products like ChatGPT now widespread, users expect bots to be fluent. However, there’s still important Conversational AI design and engineering work to be done in enterprises that need their bots to be more capable of doing rather than just responding. With the right skills and knowledge it’s possible to build truly effective bots that please their users – so we’re expecting to hear about far fewer of those awful experiences in future!
Classifier
A Machine Learning (ML) algorithm or component that categorises user inputs into predefined categories or intents. They’re commonly used in conversational systems to enable the software to understand what the user is trying to accomplish.
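A minimal sketch of the idea, using keyword overlap rather than a trained ML model (the intent names and keywords are invented for illustration – production classifiers learn these distinctions from labelled training data):

```python
# Toy rule-based intent classifier; intents and keywords are illustrative.
INTENT_KEYWORDS = {
    "track_order": {"order", "delivery", "parcel", "tracking"},
    "reset_password": {"password", "login", "reset"},
}

def classify(utterance: str) -> str:
    """Pick the intent whose keywords best overlap the user's words,
    falling back when nothing matches at all."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

For example, “where is my parcel” would map to the hypothetical `track_order` intent, while an unrecognised input drops through to `fallback`.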
Containment
Deflection and Containment can sometimes be confused as the same thing. Containment can be considered as the bot assisting the user directly. For example, answering the customer’s query. In more complex use cases, the bot can perform certain actions itself, often in lieu of a human who would otherwise have to be involved. For example, a process that allows the customer to report a fault and log that fault together with their account and contact details might mean a dialogue is considered to be contained.
Context
Context can have multiple meanings. It may refer to the time, place and needs of the customer when they use the bot to communicate with the company. More often, it refers to the bot ‘knowing’ information about the customer without needing to re-ask for it. If a customer is logged in to the website or app, an intelligent bot should have access to that person’s account details, recent orders, open complaints etc. Good conversation design can harness that information to make for a more coherent customer experience, without the customer needing to repeat information that is already known by the company.
A practical example of Context
Handling the conversation context can make all the difference between a successful intelligent bot and one that infuriates customers by asking dumb questions! It can sometimes be a tricky linguistic challenge too, e.g. the “it” in “it’s not working” means different things based on what the user has talked about or been told before – “it” might be a device, a particular webpage, a different webpage, a pop-up form, an app or perhaps a certain troubleshooting step. Context allows the chatbot to interpret the correct “it” and give appropriate responses – which might be a troubleshooting step for an application, or an acknowledgement that something might have broken and an agent would be better equipped to assist.
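The “it” problem above can be sketched with a deliberately naive substitution – real systems track much richer dialogue state, and the entity name here is purely illustrative:

```python
def resolve_it(utterance: str, last_mentioned: str) -> str:
    """Naive anaphora resolution: substitute the most recently
    discussed entity for the pronoun 'it'. Real dialogue systems
    track far richer state; this only shows the principle."""
    words = [last_mentioned if w.lower() == "it" else w
             for w in utterance.split()]
    return " ".join(words)
```

So if the conversation had last been about (say) a router, `resolve_it("it is not working", "the router")` yields “the router is not working”, giving downstream components something concrete to act on.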
Conversation
A series of exchanges between a user and an AI system that involves understanding context, maintaining coherence, and working toward resolving the user’s needs.
Customer experience (CX)
Customer Experience encompasses the entire business relationship from the viewpoint of the customer. Bots are frequently used in customer service settings, and should therefore help alleviate frustration and cognitive load on behalf of the customer, rather than add to it. Constant consideration of the entire Customer Experience is essential in creating effective bot solutions, and the reason why conversation design remains so important.
Deep Learning (DL)
A type of Machine Learning (ML) that uses neural networks with multiple layers to model complex patterns in data.
Deflection
Deflection and Containment can sometimes be confused as the same thing. However, deflection can be considered as directing a customer to use other resources available to them, rather than have a human involved in helping them. For example, directing the customer to an online form or an FAQ. Essentially, deflection prevents a human having to get involved, often in trivial or less important queries or tasks.
Dialogue Management (DM)
The component of a conversational AI system responsible for maintaining the context of the conversation and managing the flow of dialogue based on user inputs. Or in other words, the processes the chatbot goes through behind the scenes, to remember what the user has said, process new information from the user, and work out what to do or say next.
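A minimal sketch of that decide-what-to-do-next step, assuming invented intent, slot and action names – production dialogue managers are far richer, but the shape is the same:

```python
def next_action(state: dict, user_intent: str) -> str:
    """Minimal dialogue policy: choose the next bot action from the
    tracked conversation state and the latest recognised intent.
    The intent/slot/action names here are illustrative only."""
    if user_intent == "track_order":
        if "order_id" not in state:
            # We can't look anything up yet, so ask for the missing slot.
            return "ask_order_id"
        return "give_order_status"
    return "fallback_reply"
```

The dialogue manager’s ‘memory’ lives in `state`: the same intent produces a different action depending on what the bot already knows.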
Disambiguation
The process of clarifying user intent when multiple interpretations of their input are possible, usually by asking follow-up questions.
Entity
Specific pieces of information in user input that can help clarify or add detail to the intent. For example, entities can include dates, locations, or product names.
F1 Score (F1)
A measure of a model’s accuracy that balances precision and recall (two other common ML evaluation metrics).
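The formula itself is simple – the harmonic mean of precision and recall:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall:
    2 * P * R / (P + R), defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, F1 punishes imbalance: a classifier with precision 0.8 and recall 0.6 scores about 0.69, noticeably below the simple average of 0.7.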
Fallback
A reply given by a chat or voice bot when it cannot understand what the user has just said. Usually the bot will ask the user to clarify what they mean or try to reword their query to try to match it to an intent.
First Contact Resolution (FCR)
In Customer Service, FCR is an important indicator of how effective and efficient a process is at dealing with customer queries or issues. Ideally, a customer should not have to contact a company repeatedly about the same thing. When that happens, it’s a waste of time on both sides and degrades the customer’s patience and loyalty. FCR seeks to resolve the customer’s query or issue the first time they inform the company.
Frequently Asked Questions (FAQ)
A structured knowledge base used to provide predefined answers to common customer enquiries.
FAQs in the world of conversational AI
What makes FAQs especially interesting in Conversational AI is how they evolve. Modern assistants don’t just serve static answers; they can expand FAQs with richer context, personalise responses, and even link simple questions to complex workflows. Well-designed FAQ content is also a critical training source for AI models, shaping their ability to understand intent and deliver value. Far from being just a list of questions and answers, FAQs are a powerful bridge between knowledge management and intelligent, conversational experiences.
GenAI / Generative Artificial Intelligence
In conversational AI, GenAI bots are bots that do not have pre-written responses. Instead, they typically rely on an LLM to interpret the customer’s query and create a response on-the-fly. Whilst this provides a more natural and conversational experience, there are risks that the bot will provide inaccurate or poorly translated answers. Using a RAG approach can help to mitigate such ‘hallucinations’ but they cannot be entirely eliminated.
Generative Artificial Intelligence (GenAI) advice
This, of course, is a hot topic right now with lots of rapid progress. It’s easy to become bewildered and confused by Generative AI capabilities. Whilst some see GenAI as some kind of database of ‘knowledge’ (it isn’t), others are mistrusting and dismissive of what it produces. I saw the best explanation of GenAI on social media, from Mark Federman; ‘ChatGPT does not give you an appropriate or correct answer. Rather, it gives you what an appropriate or correct answer might look like.’
Generative Pre-trained Transformer (GPT)
A specific type of LLM developed by OpenAI that generates coherent and contextually relevant text.
Generative Pre-trained Transformer (GPT) in practice
Almost everyone has heard of ChatGPT, OpenAI’s famous application, available to chat to directly or via API for integration into business apps. Myriad products using the GPT models are available, from marketing copywriting and programming assistance to language learning. Organisations will often want to create their own GPT – essentially a version of a general-purpose LLM (like GPT-4) that has been adapted to a specific business need, domain, or use case. Companies can create a Custom GPT by providing configuration instructions and setting up a persona, essentially giving the GPT rules for behaviour – a bit like hiring a highly capable employee and giving them a playbook, training materials, and tools, so they answer questions and solve problems in a way that reflects your business.
Hallucination
Hallucination refers to instances where an LLM generates false, misleading or nonsensical information that may or may not seem plausible. Hallucinations occur because LLMs predict text based on patterns and not by understanding facts.
Handover (handoff, escalation)
A handover is when the bot transfers, or escalates, the conversation to an agent. As agentic AI progresses, we may see an increase in handovers to other agentic bots. However, a handover is usually to a human who will receive the transcript of the conversation thus far and can help with the more complex questions and issues. As a handover incurs a cost to the business, the handover rate is a key metric when assessing the effectiveness of a bot.
Human-Computer Interaction (HCI)
The study of designing user-friendly interactions between humans and machines.
Intent
The goal or purpose behind a user’s input in a conversation. Identifying the intent helps the system understand what the user wants to achieve. Intent might be identified using an ML classifier or an LLM, but in any case it’s the important step of understanding what the user wants or needs in order for the chatbot to form a response.
Interactive Voice Response (IVR)
A telephony system that interacts with callers using voice prompts and keypad inputs.
Large language model (LLM)
A Large Language Model is a model which has been trained on an enormous amount of content. An LLM is intended to produce the ‘completion’ text to a ‘prompt’. The prompt may be a question or an entire conversation, and the LLM determines the next best word to use, based on what it has seen in its training. LLMs can appear to know everything and converse just like a human, but they can also be unpredictable and factually incorrect. An element of randomness is always present, although some controls can be dialled up or down. They are not information databases, and you cannot ‘search’ an LLM for answers per se, although interfaces to LLMs with deep research models are now available. LLMs are an incredible utility and can be applied in a variety of ways in conversational AI.
How can Large Language Models (LLMs) be used successfully?
Applications like ChatGPT are some of the most famous products built using LLMs. Many enterprises are now using LLMs and GenAI as pieces of their tech stack. LLMs do some things brilliantly (and other things less well!). Our LLM work with clients and organisations includes successfully using LLMs to identify entities (such as in the reporting of damaged items in a delivery), to provide translation services (enabling speakers of over 100 languages to access support phone lines), and to analyse thousands of chats (identifying the key topics of conversation).
Machine Learning (ML)
A subset of AI that uses algorithms to analyse data, learn from it, and make predictions or decisions without being explicitly programmed for specific tasks.
Message
A single unit of communication between the user and the chat or voice bot, which can be in the form of text, speech, or other formats (such as background systems transferring context data).
Named Entity Recognition (NER)
An NLP technique used to identify proper names, locations, dates, and other specific entities in text.
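A toy sketch of entity spotting using a regular expression for one date format – production NER relies on trained models rather than hand-written patterns, and the pattern here is illustrative only:

```python
import re

# Toy pattern-based entity spotter for dates like 12/05/2025.
# Real NER systems are trained models that handle many formats
# and entity types (names, locations, products, ...).
DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")

def extract_dates(text: str) -> list[str]:
    """Return every DD/MM/YYYY-style date found in the text."""
    return DATE_PATTERN.findall(text)
```

Even this crude version shows the payoff: once the entity is extracted, the bot can act on a concrete value rather than a raw sentence.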
Natural Language Generation (NLG)
NLG is a subset of Natural Language Processing (NLP) that focuses on transforming structured or unstructured data into human-readable text or spoken language.
Natural Language Processing (NLP)
The branch of AI that focuses on enabling machines to understand, interpret, and generate human language.
Natural Language Understanding (NLU)
A subfield of NLP that enables machines to comprehend meaning, intent, and context from text or speech.
Regression Testing
Regression testing involves re-running previously developed tests to ensure that existing functionalities still work as expected after new code changes or additions. This practice helps verify that recent updates haven’t negatively impacted the existing software. High quality regression testing is important whether bots are LLM-powered, using more traditional NLP, or taking a hybrid approach. For more info read our blog Bulletproof chatbots: why regression testing is non-negotiable
Retrieval Augmented Generation (RAG)
In a Gen AI solution, retrieval augmented generation (RAG) is an approach that retrieves information relevant to the customer’s query, typically from an external data source. That information is then referenced in order to produce a relevant answer to the customer’s query. The additional information may be a large single document, or smaller chunks of various documents. The user’s query is passed to the LLM along with the additional information. This is also known as ‘grounding’ the LLM so that it has relevant reference material to call upon when creating a response, rather than relying on its original training data. It is a common approach to use vectorisation to retrieve information that is semantically similar to the user’s original query.
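A minimal sketch of the retrieval step, using bag-of-words overlap in place of a real embedding model (real pipelines vectorise the query and chunks with a trained embedding model and a vector store, but the most-similar-chunk idea is the same):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'. Real RAG uses dense vectors
    from a trained embedding model; this only shows the shape."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str]) -> str:
    """Return the chunk most similar to the user's query; this text
    would then be passed to the LLM alongside the query itself."""
    return max(documents, key=lambda d: cosine(embed(query), embed(d)))
```

The retrieved chunk, not the LLM’s training data, becomes the reference material the model is asked to answer from – which is exactly what ‘grounding’ means.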
Safety Net
Built-in mechanisms and responses that help handle unexpected inputs or situations where the AI system might not understand what the user asked.
Sentiment Analysis
The process of determining the emotional tone behind a series of words, used to understand the user’s feelings and tailor responses accordingly.
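A toy lexicon-based sketch – real sentiment analysis uses trained models, but the counting principle is visible here (the word lists are invented for illustration):

```python
# Illustrative polarity lexicons; real systems learn these from data.
POSITIVE = {"great", "thanks", "helpful", "love"}
NEGATIVE = {"broken", "angry", "useless", "terrible"}

def sentiment(utterance: str) -> str:
    """Score an utterance by counting positive vs negative words."""
    words = utterance.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A bot might use such a signal to soften its tone, or to prioritise a handover when frustration is detected.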
Semantic
Semantic relates to the meaning of language; in conversational AI, this refers to the system’s ability to understand the actual meaning and context of user inputs rather than just matching keywords.
Small Language Model (SLM)
A Small Language Model is a Machine Learning model that, like an LLM, can respond to and generate natural language, but is much smaller and can therefore be cheaper and faster. SLMs need fewer resources to train and are then used to perform more specific tasks.
Speech-to-Text (STT)
A subset of ASR, but the terms are often used interchangeably. STT is the key function of transcribing spoken words into written form, often used in voice assistants.
Text-to-Speech (TTS)
A technology that synthesizes human-like speech from written text. TTS is commonly used in voice assistants to produce the spoken answers from the voicebot.
Training
Training is the process of teaching AI models to understand and respond appropriately to user inputs using large datasets and machine learning (ML) techniques.
Training data
The datasets used to train AI models. In conversational AI, this includes examples of user inputs and the corresponding outputs or responses.
Turn Count
The number of back-and-forth exchanges between the user and the AI system in a single conversation, often used as a metric for conversation efficiency. Commonly, one user utterance paired with the associated bot response is counted as one turn, though definitions vary.
How does Chatpulse identify turn count?
We found that definitions of a ‘turn’ vary slightly – it can mean either a single message (from the user or the bot), or a pair of messages (one from each). This variation also showed up when we looked around online. In Chatpulse, we consider that a turn is one (or more) messages from a single ‘author’. So, if the user asks a question, and the bot responds, that would be two turns. We refer to a pair of turns as a transaction. Chatpulse can identify high turn count conversations – which could signal unnecessary friction, poor intent recognition, or confusing dialogue design. It can find unusually long conversations or spikes in turn count, which can indicate the bot is struggling to understand, leading to user frustration and poor outcomes.
Digging into the details of what caused unexpectedly high turn count chats gives bot developers the tools they need to improve flow design, fix bugs, and ultimately resolve queries more effectively, leading to happier users and improved KPIs all round.
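The Chatpulse convention described above – consecutive messages from the same author collapse into a single turn – can be sketched as:

```python
def count_turns(messages):
    """Count turns in a transcript, where one or more consecutive
    messages from the same author count as a single turn.
    `messages` is a list of (author, text) pairs."""
    turns = 0
    previous_author = None
    for author, _text in messages:
        if author != previous_author:
            turns += 1
            previous_author = author
    return turns
```

So a user question followed by a bot answer is two turns (one transaction), and a user who sends three messages in a row still contributes only one turn.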
User Experience (UX)
The overall experience a user has while interacting with a product or service. In conversational AI, this includes how intuitive and satisfying the conversation feels. In addition to the conversational aspects, UX also includes the chat interface or any other mechanism the user interacts with to get to or as part of a chat.
User Experience (UX) in practice
A chatbot transcript is actually a readable log of the user’s experience, yet it is often not as valued as it should be. A customer will quite often explain in detail their experience that led them to need to contact the company, and from there we are able to follow their experience in trying to get that matter resolved. In User Research terms, transcripts primarily hold qualitative data but also reveal quantitative data such as time on task, average turn count etc.
Utterance
An utterance refers to a single unit of speech or text in a chat or voicebot conversation, usually a single message or a statement from either the user or the bot. It can be as short as one word or as long as several sentences.
Virtual Assistant (VA)
An AI-powered software agent that performs tasks or provides information via text or voice interaction. Sometimes used synonymously with chatbot, conversational agent, voice assistant and similar terms.
Word Error Rate (WER)
A metric used to evaluate the accuracy of speech recognition by comparing transcriptions to the original speech. A speech recognition tool or system operating with a low WER indicates that the user’s speech is being well recognised — the first step to ensuring the bot provides good responses.
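WER is typically computed as the word-level edit distance (substitutions + deletions + insertions) divided by the number of words in the reference transcript; a minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, transcribing “turn the lights on” as “turn the light on” is one substitution across four reference words, a WER of 0.25.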