Quality AI basics

Quality AI analyzes customer service conversations: the interactions between contact center agents and users. It works with chat or call transcripts created using the Contact Center AI Insights API.

Conversation metrics

Conversations are analyzed using the following metrics:

  • Agent ID: A unique number assigned to each agent, used to identify the conversations that agent has handled.
  • Agent total score: The average score of an agent's performance across that agent's conversations.
  • AHT: Average Handling Time, the average duration of an agent's conversations in a specified timeframe.
  • Average agent score: Average of each agent's total score. (See Agent total score.)
  • Average conversation score: Average score across all conversations.
  • Channel: The medium of conversation between a customer and an agent. Channel has one of two values: voice or chat.
  • Conversation ID: A unique number assigned to identify each customer service conversation.
  • Conversation total score: Sum of question scores in a single conversation.
  • CSAT: Customer satisfaction rating ranging from 1 to 5.
  • Duration: The total time the conversation spans, from beginning to end.
  • Primary topic: The concern discussed during a conversation, determined by topic modeling. Quality AI only displays a primary topic if you've used topic modeling on that conversation.
  • Question: Used to evaluate an agent's performance in a conversation. You enter your questions into Quality AI, and the agent is then rated on whether they satisfied the criteria for each question.
  • Sentiment: The main emotional state conveyed by the conversation. Sentiment has one of three values: positive, neutral, or negative. Quality AI displays a sentiment only if you've used sentiment analysis on the conversation.
  • Silence: Time during which neither the customer nor the agent spoke or typed.
  • Start date: The date on which the conversation began.
  • Start time: The time at which the conversation began.
  • Total volume: The total number of conversations that a single agent handled in a specified timeframe.
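
Several of these metrics are simple aggregates over an agent's conversations. The sketch below illustrates how Total volume, AHT, and Agent total score could be derived from conversation data; the record fields and helper function are hypothetical, for illustration only, and not part of the Quality AI or Insights API.

```python
from dataclasses import dataclass

@dataclass
class ConversationRecord:
    """Hypothetical conversation record for illustration only."""
    conversation_id: str
    agent_id: str
    duration_seconds: float
    score: float  # conversation total score as a percentage

def agent_metrics(records, agent_id):
    """Aggregate Total volume, AHT, and Agent total score for one agent."""
    own = [r for r in records if r.agent_id == agent_id]
    total_volume = len(own)
    aht = sum(r.duration_seconds for r in own) / total_volume
    agent_total_score = sum(r.score for r in own) / total_volume
    return total_volume, aht, agent_total_score

records = [
    ConversationRecord("c-1", "agent-7", 320.0, 90.0),
    ConversationRecord("c-2", "agent-7", 410.0, 70.0),
    ConversationRecord("c-3", "agent-9", 150.0, 100.0),
]
print(agent_metrics(records, "agent-7"))  # (2, 365.0, 80.0)
```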

Scorecards

The scorecard is a structured framework used to assess conversation quality and the performance of contact center agents during conversations. Each contact center has its own scorecard. To upload labeled data, navigate to the Scorecard page in the console and add the following five data types:

  • Question (Example: Did the agent provide an appropriate product compliment?)
  • Abbreviation for a question (Example: Product Compliment)
  • Answer choices (strings, numbers, or booleans; for example, yes, no, N/A; 1-5; or product category 1, 2, 3)
  • Instructions for each answer choice
  • Weight (Example: 1, 100, or any other integer)
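
As an illustration only, the five data types for a single scorecard question might be organized as follows; the field names are hypothetical and don't reflect the upload format used on the Scorecard page.

```python
# Hypothetical representation of one scorecard question. The keys are
# illustrative only; they are not the Quality AI upload schema.
scorecard_question = {
    "question": "Did the agent provide an appropriate product compliment?",
    "abbreviation": "Product Compliment",
    "answer_choices": ["Yes", "No", "N/A"],
    "instructions": {
        "Yes": "The agent complimented a product relevant to the customer.",
        "No": "The agent did not compliment any product.",
        "N/A": "No product was discussed in the conversation.",
    },
    "weight": 1,
}
```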

Conversation scores

Quality AI automatically evaluates conversations against the scorecards you supply. For each question, do the following:

  • Set the weight
  • Define the possible answers and their answer categories
  • Provide a maximum score per answer category

A conversation score is the sum of all question scores, displayed as a percentage of the maximum possible score. For example, if the following is true:

  1. A scorecard has 10 questions
  2. Each question is a yes or no question
  3. Yes is weighted 1 and No is 0
  4. A conversation has received all Yes answers

Then the conversation score is 100%.
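
The sketch below reproduces that arithmetic under the same assumptions; the scoring function is a simplified illustration, not the Quality AI implementation.

```python
def conversation_score(questions, answers):
    """Return a conversation score as a percentage of the maximum score.

    questions: for each question, a dict mapping answer choices to weights.
    answers: the answer chosen for each question, in the same order.
    Simplified illustration only; not the Quality AI implementation.
    """
    earned = sum(weights[answer] for weights, answer in zip(questions, answers))
    maximum = sum(max(weights.values()) for weights in questions)
    return 100.0 * earned / maximum

# Ten yes/no questions, with Yes weighted 1 and No weighted 0.
questions = [{"Yes": 1, "No": 0} for _ in range(10)]
answers = ["Yes"] * 10
print(conversation_score(questions, answers))  # 100.0
```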