Salesforce Dictionary - Free Salesforce Glossary
How-to guide

How to design and maintain an utterance set

Utterance design is the ongoing work of any bot. The steps below cover the initial build and the maintenance cycle that keeps a bot from degrading over time.

By Dipojjal Chakrabarti · Founder & Editor, Salesforce Dictionary · Last updated May 16, 2026

  1. Define the intent set first

    Before writing utterances, agree on the intents the bot handles. Each intent should be a single, sharp customer goal. Cancel Subscription is one intent. Reschedule Appointment is another. Both can live in the same bot but they should not share utterances.

  2. Mine real transcripts for seed utterances

    Pull two to four weeks of human-handled chat or email transcripts. Read the customer's opening message. Tag each with the closest intent from the set. The seed list almost always reveals intents the team forgot to plan for.

  3. Add deliberate diversity

    For each intent, generate variants along the diversity dimensions: short, long, polite, frustrated, formal, casual, typo-laden. Target 30 to 50 utterances per intent. Avoid filling the slots with paraphrases that all sound the same.

  4. Train, test, measure

    Build the model. Run the test set provided by Bot Builder. Inspect any utterance the model misroutes and decide whether the utterance is mislabeled (fix it) or the intent boundary is unclear (rewrite the intent).

  5. Schedule weekly misroute review

    Review failed conversations weekly. Add corrected utterances to the right intent. Retrain. The bot improves continuously, or it does not, depending on this loop.
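
A quick audit for step 3 can catch paraphrases before they eat utterance slots. The sketch below flags near-duplicate utterances within one intent using simple token overlap; all names are illustrative, and this is not a Bot Builder API.

```python
# Hypothetical diversity audit: flag near-duplicate utterances inside one
# intent via Jaccard token-set overlap. Illustrative only, not a Bot
# Builder feature.

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two utterances (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def near_duplicates(utterances: list[str], threshold: float = 0.7) -> list[tuple[str, str]]:
    """Return pairs that are probably paraphrases of each other."""
    pairs = []
    for i, u in enumerate(utterances):
        for v in utterances[i + 1:]:
            if jaccard(u, v) >= threshold:
                pairs.append((u, v))
    return pairs

cancel = [
    "cancel my subscription",
    "please cancel my subscription",
    "how do I close my account",
]
print(near_duplicates(cancel))
```

The first two utterances share three of four tokens and get flagged; the third is a genuinely different phrasing and survives, which is the kind of variant step 3 asks for.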

Key options
Confidence threshold

The minimum confidence score required for the bot to act on a predicted intent. Higher threshold means more fallbacks, fewer misroutes.

Fallback intent

Captures utterances that match no real intent. Routes to a human or a clarifying question rather than forcing a wrong match.
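
The threshold and the fallback intent work as one mechanism. A minimal sketch of how they interact, assuming a dictionary of intent scores stands in for whatever the NLU returns (none of these names are Bot Builder APIs):

```python
# Hypothetical routing sketch: a confidence threshold gates the top
# predicted intent; anything below it goes to the fallback intent.

FALLBACK = "Fallback"

def route(predictions: dict[str, float], threshold: float = 0.7) -> str:
    """Pick the top-scoring intent, or fall back when confidence is too low."""
    intent, score = max(predictions.items(), key=lambda kv: kv[1])
    return intent if score >= threshold else FALLBACK

print(route({"Cancel Subscription": 0.91, "Reschedule Appointment": 0.06}))  # Cancel Subscription
print(route({"Cancel Subscription": 0.41, "Reschedule Appointment": 0.38}))  # Fallback
```

Raising the threshold pushes more of the second case into fallback; lowering it forces more borderline utterances into the closest real intent.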

Variations field

Per-utterance synonyms and slot variations Bot Builder uses to generate additional training samples without manual entry.
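
Conceptually, the variations field multiplies one seed utterance by its listed synonyms. A sketch of that expansion, with an illustrative synonym map (Bot Builder performs this internally; the template syntax here is assumed, not its actual format):

```python
# Hypothetical sketch of synonym/slot expansion: one template utterance
# times every combination of listed variations. Names are illustrative.

from itertools import product

def expand(template: str, variations: dict[str, list[str]]) -> list[str]:
    """Fill each {slot} in the template with every listed variation."""
    slots = list(variations)
    return [
        template.format(**dict(zip(slots, combo)))
        for combo in product(*(variations[s] for s in slots))
    ]

samples = expand(
    "{verb} my {thing}",
    {"verb": ["cancel", "stop", "end"], "thing": ["subscription", "plan"]},
)
print(len(samples))  # 6
```

Three verbs times two nouns yields six training samples from a single manually entered utterance.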

Language model

Pick the NLU model behind the bot. The default Einstein NLU works for most use cases. Multi-language bots need a model per language.

Topic triggers (Agentforce)

The Agentforce equivalent of utterances. Smaller per-topic counts because the LLM generalizes more than a fixed NLU.

Gotchas
  • Overlapping utterances across two intents confuse the model and lower both intents' accuracy. Inspect the test report for intent confusion pairs and disambiguate at the utterance level.
  • Forty paraphrases of the same sentence are not forty utterances. Diversity matters more than count. Add short, long, frustrated, and casual variants deliberately.
  • Bots trained only on staff-written utterances misroute on real customer phrasings. Always mine real transcripts before launch.
  • Skipping a fallback intent forces every out-of-scope utterance into the closest real intent. The customer gets a wrong answer instead of a clean handoff.
  • Misroute rate is not in any standard report. Build the weekly review process or the bot will degrade silently as customer language evolves.
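
Spotting the confusion pairs from the first gotcha is a small counting exercise over test results. A sketch, assuming results are available as (expected, predicted) pairs; this shape is an assumption, not a Bot Builder export format:

```python
# Hypothetical sketch: surface intent confusion pairs from test results
# so overlapping intents can be disambiguated. Input shape is assumed.

from collections import Counter

def confusion_pairs(results: list[tuple[str, str]]) -> list[tuple[tuple[str, str], int]]:
    """Most-frequent (expected, predicted) mismatches, worst first."""
    misses = Counter((exp, pred) for exp, pred in results if exp != pred)
    return misses.most_common()

results = [
    ("Cancel Subscription", "Cancel Subscription"),
    ("Cancel Subscription", "Reschedule Appointment"),
    ("Reschedule Appointment", "Cancel Subscription"),
    ("Cancel Subscription", "Reschedule Appointment"),
]
print(confusion_pairs(results))
```

A pair that appears in both directions, as here, is the signature of overlapping utterances rather than a one-off mislabel.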

See the full Utterance entry

Utterance includes the definition, worked example, deep dive, related terms, and a quiz.