Agent setup runs in five phases: design the use case, build Topics and Actions, define Instructions, ground on data, test, deploy. Plan four to eight weeks for a non-trivial agent rollout.
- Define the use case and scope
Pick a specific business problem: customer-facing order status agent, internal SDR-assist agent, employee benefits help-desk agent. Define what the agent handles and what it escalates. Scope creep is the biggest agent design failure mode; ship narrow first.
- Create the Agent in Agent Builder
Setup, Agent Builder, New Agent. Pick the agent type (Customer Service Agent, Sales Coach, custom). Configure name, description, and the underlying LLM. Save to enter the build workspace.
- Build Topics and add Actions
For each business area within scope, create a Topic. Add 5-15 sample utterances per Topic. Attach Actions: standard actions where pre-built ones fit, custom flow actions for org-specific logic, prompt templates for content generation. Set Topic-level Instructions.
- Configure grounding
Attach Data Cloud retrievers for unstructured knowledge. Wire flow Actions for direct CRM lookups. Upload reference documents to the agent''s document library. Test grounding by asking the agent factual questions before going live.
- Test and deploy to a channel
Use Agentforce Testing Center to run synthetic conversations. Build a test suite per Topic. Once tests pass, deploy to a single channel first (typically internal Slack or Service Console for early feedback). Expand to customer-facing channels after a release cycle of monitoring.
Customer Service Agent, Sales Coach, custom. Each ships with starter Topics and Actions for common use cases.
Atlas Reasoning Engine (Salesforce default), GPT (via Salesforce-managed OpenAI), or other supported models. Pick based on use case and Trust Layer compatibility.
Service Console, Experience Cloud, Slack, WhatsApp, SMS, voice. One agent serves multiple channels with channel-specific configuration.
Direct CRM lookup via flow Actions, Data Cloud retrievers, document libraries. Combine multiple sources for richer grounding.
- Scope creep kills agent quality. Narrow scope and well-designed Actions produce reliable agents; broad scope with many overlapping Actions produces confused ones.
- The LLM picks Actions based on names and descriptions. Poorly-named or overlapping Actions get called wrong; spend time on Action naming.
- Grounding is what prevents hallucination on factual content. Skip grounding and the agent invents customer-specific data that is wrong but sounds plausible.
- Testing requires synthetic conversations covering happy path, edge cases, and adversarial attempts. Ad-hoc testing misses too many failure modes.
- Migration from Einstein Bots requires rebuilding the conversation as Topics and Actions. There is no auto-convert from Dialog trees to Topic graphs.