Salesforce's AI features (Einstein, Agentforce, Prompt Builder, Atlas Reasoning) expose programmable interfaces from Apex.
Agentforce / Atlas Reasoning Engine — invocable from Apex via the Models API (`aiplatform` namespace) or the Connect API for generative AI (`ConnectApi.EinsteinLLM`).
```apex
// Generate text with the Models API (aiplatform namespace)
aiplatform.ModelsAPI.createGenerations_Request request =
    new aiplatform.ModelsAPI.createGenerations_Request();
request.modelName = 'sfdc_ai__DefaultGPT4OmniMini';

aiplatform.ModelsAPI.ModelsAPIGenerationRequest body =
    new aiplatform.ModelsAPI.ModelsAPIGenerationRequest();
body.prompt = 'Summarize this account: ' + accountDetails;
request.body = body;

aiplatform.ModelsAPI modelsApi = new aiplatform.ModelsAPI();
aiplatform.ModelsAPI.createGenerations_Response response =
    modelsApi.createGenerations(request);
String summary = response.Code200.generation.generatedText;
```
Einstein Trust Layer — sits between your Apex and the LLM. It masks PII, audits prompts, and applies guardrails. You don't call it explicitly; the platform applies it to every supported AI request.
Prompt Builder — lets admins/devs create reusable prompt templates with merge fields. Apex can invoke a template:
```apex
// Resolve a Prompt Builder template and generate a response
Map<String, ConnectApi.WrappedValue> inputParams =
    new Map<String, ConnectApi.WrappedValue>();
ConnectApi.WrappedValue accountRef = new ConnectApi.WrappedValue();
accountRef.value = new Map<String, String>{ 'id' => accountId };
inputParams.put('Input:Account', accountRef);

ConnectApi.EinsteinPromptTemplateGenerationsInput input =
    new ConnectApi.EinsteinPromptTemplateGenerationsInput();
input.inputParams = inputParams;
input.isPreview = false;

ConnectApi.EinsteinPromptTemplateGenerationsRepresentation result =
    ConnectApi.EinsteinLLM.generateMessagesForPromptTemplate(
        'Account_Summary_Template', input);
String summary = result.generations[0].text;
```
Custom AI agents (Agentforce) — define agents declaratively in Setup. Apex can:
- Trigger agent runs.
- Provide custom tools/actions the agent can invoke (Apex methods registered as agent tools).
- Read agent run results.
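Registering an Apex method as an agent tool uses the standard invocable-action plumbing. A minimal sketch — the class name, labels, and query are illustrative, not a Salesforce sample:

```apex
// Invocable action an Agentforce agent can call as a custom tool.
// Class name, labels, and query are illustrative.
public with sharing class GetOpenCaseCount {

    public class Request {
        @InvocableVariable(required=true description='Account Id to check')
        public Id accountId;
    }

    public class Response {
        @InvocableVariable(description='Number of open cases')
        public Integer openCases;
    }

    @InvocableMethod(
        label='Get Open Case Count'
        description='Returns the number of open cases for an account')
    public static List<Response> getCount(List<Request> requests) {
        List<Response> responses = new List<Response>();
        for (Request req : requests) {
            Response res = new Response();
            res.openCases = [
                SELECT COUNT() FROM Case
                WHERE AccountId = :req.accountId AND IsClosed = false
            ];
            responses.add(res);
        }
        return responses;
    }
}
```

The `label` and `description` matter more here than usual: the agent's planner reads them to decide when to invoke the tool.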
Patterns for production AI use:
- Async invocation — LLM calls are slow (seconds). Wrap them in a Queueable that implements Database.AllowsCallouts; never make the call synchronously from a trigger.
- Non-determinism — the same input may produce slightly different outputs. Don't rely on exact-match comparisons; design downstream logic to tolerate variation.
- Caching — for repeat queries, cache responses in Platform Cache or a custom object to avoid recomputation cost. (Custom Metadata can't be written synchronously at runtime, so it suits static config, not a response cache.)
- Cost awareness — LLM calls are billed (per-token). Monitor usage; set per-user / per-feature quotas.
- Fallback paths — if the AI service is unavailable or slow, have a non-AI fallback (canned response, pre-computed value).
- Prompt versioning — store prompts in Custom Metadata; deploy via DX. Iterate prompts in source control.
- Audit trail — log every AI call (prompt, response, user, latency) to a custom object. Required for compliance and improvement analysis.
- PII handling — let Trust Layer redact; don't manually concatenate raw PII into prompts.
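Several of these patterns compose naturally. A minimal sketch of async + caching + fallback + audit trail in one Queueable, assuming a hypothetical `AI_Call_Log__c` logging object, a `local.ai` Platform Cache partition, and a `callLLM()` stub where the Models API call would go:

```apex
// Queueable wrapper combining caching, fallback, and audit logging.
// AI_Call_Log__c, the 'local.ai' partition, and callLLM() are hypothetical.
public with sharing class SummarizeAccountJob
        implements Queueable, Database.AllowsCallouts {

    private final Id accountId;
    private final String prompt;

    public SummarizeAccountJob(Id accountId, String prompt) {
        this.accountId = accountId;
        this.prompt = prompt;
    }

    public void execute(QueueableContext ctx) {
        String cacheKey = 'summary' + accountId;            // alphanumeric key
        Cache.OrgPartition cache = Cache.Org.getPartition('local.ai');

        String summary = (String) cache.get(cacheKey);      // caching
        Long startMs = System.currentTimeMillis();
        if (summary == null) {
            try {
                summary = callLLM(prompt);                  // slow: seconds
                cache.put(cacheKey, summary, 3600);         // 1-hour TTL
            } catch (Exception e) {
                summary = 'Summary unavailable';            // fallback path
            }
        }
        insert new AI_Call_Log__c(                          // audit trail
            Record_Id__c   = accountId,
            Prompt__c      = prompt,
            Response__c    = summary,
            Latency_Ms__c  = System.currentTimeMillis() - startMs
        );
        update new Account(Id = accountId, Description = summary);
    }

    private String callLLM(String prompt) {
        // Wire the Models API or a prompt template call in here.
        return 'stub';
    }
}
```

Enqueue it from the trigger handler with `System.enqueueJob(new SummarizeAccountJob(acc.Id, prompt));` so the trigger itself never waits on the model.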
Common use cases:
- Auto-summarisation of long records (Cases, Opportunities) into bite-sized fields.
- Email draft generation from a record context.
- Smart routing (use AI to classify and route).
- Q&A over Knowledge (RAG against your internal articles).
- Field auto-completion based on similar records.
The platform is evolving fast; build with abstractions so swapping models or moving from Einstein to Agentforce is a config change, not a rewrite.
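That abstraction can be as thin as an interface over the generate call. A sketch assuming a Models API backend, with each type in its own class file (names are illustrative):

```apex
// Thin abstraction over the model call so swapping providers or models
// touches one class. Names are illustrative.
public interface LlmService {
    String generate(String prompt);
}

public with sharing class ModelsApiLlmService implements LlmService {
    private final String modelName;

    public ModelsApiLlmService(String modelName) {
        this.modelName = modelName;   // e.g. 'sfdc_ai__DefaultGPT4OmniMini'
    }

    public String generate(String prompt) {
        aiplatform.ModelsAPI.createGenerations_Request request =
            new aiplatform.ModelsAPI.createGenerations_Request();
        request.modelName = modelName;

        aiplatform.ModelsAPI.ModelsAPIGenerationRequest body =
            new aiplatform.ModelsAPI.ModelsAPIGenerationRequest();
        body.prompt = prompt;
        request.body = body;

        return new aiplatform.ModelsAPI()
            .createGenerations(request).Code200.generation.generatedText;
    }
}
```

Callers depend only on `LlmService`, so moving to a different model, or to a prompt-template-backed implementation, means writing one new class and changing which one you construct.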
