Development·May 3, 2026·16 min read

The Apex Trigger Framework: Best Practices for Bulk-Safe, Scalable Triggers (2026)

One trigger per object · the logic-less pattern · bulkification · recursion control · framework comparison · security enforcement · test strategy.


TL;DR

  • One trigger per object. Always. The order multiple triggers fire is undefined; debugging two triggers on Account is misery.
  • Logic-less triggers. The trigger itself dispatches to a Trigger Handler class. The handler holds all logic.
  • Bulk-safe by design. Process collections, never one record at a time. Test with 200 records.
  • Recursion control. Static Set<Id> of already-processed records prevents infinite loops.
  • Enforce CRUD/FLS explicitly in writes. Apex runs in system mode by default, so object and field permissions are not applied automatically; enforce them yourself.

If you write Apex triggers in 2026 the way they were taught in 2018, you're inheriting more debt than you realize. This guide is the modern handbook: the canonical patterns, why they exist, the framework choices on offer, and what to actually put in production.

Rule 1: One trigger per object

Salesforce evaluates triggers in an undefined order when multiple exist on the same object. So if AccountTrigger1 and AccountTrigger2 both fire on before update, you don't know which runs first, or whether a managed-package trigger fires in between.

The fix: one trigger per object. Inside, dispatch to a single handler class. Add new logic by adding methods to the handler, not new triggers.

trigger AccountTrigger on Account (
  before insert, before update, before delete,
  after insert, after update, after delete, after undelete
) {
  new AccountTriggerHandler().run();
}

That's the entire trigger. Nothing else.

One trigger per object — the logic-less pattern

Rule 2: Logic-less triggers

Why is the trigger above just a one-liner? Because triggers are hard to test, hard to mock, and hard to compose. Classes are easy to test, easy to mock, and easy to compose.

The trigger's only job: forward to a handler. The handler's job: contain the logic.

A minimal handler:

public class AccountTriggerHandler {
  public void run() {
    if (Trigger.isBefore && Trigger.isInsert) onBeforeInsert(Trigger.new);
    if (Trigger.isBefore && Trigger.isUpdate) onBeforeUpdate(Trigger.new, Trigger.oldMap);
    if (Trigger.isAfter && Trigger.isInsert)  onAfterInsert(Trigger.new);
    if (Trigger.isAfter && Trigger.isUpdate)  onAfterUpdate(Trigger.new, Trigger.oldMap);
    // ...
  }

  private void onBeforeInsert(List<Account> news) { ... }
  private void onBeforeUpdate(List<Account> news, Map<Id, Account> olds) { ... }
  private void onAfterInsert(List<Account> news) { ... }
  private void onAfterUpdate(List<Account> news, Map<Id, Account> olds) { ... }
}

You can now write a test class for the handler directly:

@isTest
static void onBeforeInsert_setsDefaultIndustry() {
  List<Account> accts = new List<Account>{ new Account(Name = 'Test') };
  new AccountTriggerHandler().onBeforeInsert(accts);
  System.assertEquals('Software', accts[0].Industry);
}

(Expose the private methods to tests with the @TestVisible annotation, as the complete handler at the end of this guide does, or design the handler with public methods that take inputs.)
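On the handler side, the @TestVisible route is a one-annotation change. A minimal sketch, assuming the default-Industry logic the test above asserts:

```apex
public class AccountTriggerHandler {
  // @TestVisible lets test code call this private method directly.
  @TestVisible
  private void onBeforeInsert(List<Account> news) {
    for (Account a : news) {
      if (String.isBlank(a.Industry)) a.Industry = 'Software';
    }
  }
}
```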

Rule 3: Bulkify everything

Triggers run on a list (Trigger.new is a List<SObject>). They get called with 1 record on UI saves and up to 200 records per chunk on Bulk API loads. Your code must handle both.

The fundamental anti-pattern: SOQL or DML inside a loop.

// ❌ Throws "Too many SOQL queries: 101" once the loop issues its 101st query.
for (Account a : Trigger.new) {
  List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :a.Id];
  for (Contact c : contacts) { ... }
}

// ✅ One query outside the loop, map lookup inside.
Set<Id> accountIds = new Map<Id, Account>(Trigger.new).keySet();
Map<Id, List<Contact>> byAccount = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accountIds]) {
  if (!byAccount.containsKey(c.AccountId)) {
    byAccount.put(c.AccountId, new List<Contact>());
  }
  byAccount.get(c.AccountId).add(c);
}
for (Account a : Trigger.new) {
  List<Contact> contacts = byAccount.get(a.Id);
  // ...
}
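One wrinkle in the ✅ version: byAccount.get(a.Id) returns null for any account with no contacts, so guard the lookup before iterating. A minimal sketch:

```apex
for (Account a : Trigger.new) {
  List<Contact> contacts = byAccount.get(a.Id);
  if (contacts == null) continue;  // account has no child contacts
  // ... process contacts ...
}
```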

Same applies to DML. (For the records in Trigger.new themselves, a before trigger lets you modify fields in place with no DML at all; the collect-then-update pattern below is for related records you've queried.)

// ❌
for (Account a : childAccounts) {
  a.NumberOfChildren__c = ...;
  update a;  // one DML statement per record; the limit is 150 per transaction
}

// ✅
List<Account> toUpdate = new List<Account>();
for (Account a : childAccounts) {
  a.NumberOfChildren__c = ...;
  toUpdate.add(a);
}
update toUpdate;  // 1 DML statement

The single most useful test is one that inserts 200 records:

@isTest
static void bulkInsert_works() {
  List<Account> accts = new List<Account>();
  for (Integer i = 0; i < 200; i++) accts.add(new Account(Name = 'A' + i));
  Test.startTest();
  insert accts;
  Test.stopTest();
  System.assertEquals(200, [SELECT count() FROM Account WHERE Name LIKE 'A%']);
}

If the bulk insert fails or hits limits, you have a non-bulk-safe trigger. Fix it.

Rule 4: Control recursion

A trigger updates a record. The update fires the same trigger again. The trigger updates the record. Repeat until governor limits.

public class AccountTriggerHandler {
  private static Set<Id> processedIds = new Set<Id>();

  public void run() {
    List<Account> toProcess = new List<Account>();
    for (Account a : Trigger.new) {
      if (!processedIds.contains(a.Id)) {
        toProcess.add(a);
        processedIds.add(a.Id);
      }
    }
    if (toProcess.isEmpty()) return;
    // ... actual logic on toProcess ...
  }
}

The static Set<Id> survives within a single transaction. If the same trigger fires on the same Id twice in one transaction, the second pass is a no-op.

Three notes:

  • In tests, an insert followed by an update runs inside one transaction, so the guard can suppress the second pass; expose a @TestVisible reset method if a test needs both passes to fire.
  • For "process once per record per operation type" (e.g., once per before update, once per after update), use separate sets per operation.
  • For complex cases, Trigger Handler frameworks like the popular fflib_SObjectDomain or Kevin O'Hara's sfdc-trigger-framework build this in.
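For the first note, one option is a test-only reset hook on the handler. A sketch, assuming the static processedIds set from the example above (resetRecursionGuard is a name of my choosing, not a platform API):

```apex
public class AccountTriggerHandler {
  private static Set<Id> processedIds = new Set<Id>();

  // Test-only escape hatch: clears the guard between DML operations in a test.
  @TestVisible
  private static void resetRecursionGuard() {
    processedIds.clear();
  }
}
```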

Recursion control with a static Set<Id>

Rule 5: Choose a framework, then stay

Three popular trigger frameworks. They differ in style; all do the basics.

Kevin O'Hara's sfdc-trigger-framework

Battle-tested. The most common in production orgs. Pattern:

public class AccountTriggerHandler extends TriggerHandler {
  protected override void beforeInsert() { ... }
  protected override void afterUpdate() { ... }
}

Strengths: tested, simple, widespread. Weaknesses: opinionated and slightly older patterns.
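The framework also builds in a recursion limiter. A sketch of the typical wiring, using the setMaxLoopCount method from the framework's base class (check the project README for the exact API in your version):

```apex
public class AccountTriggerHandler extends TriggerHandler {
  public AccountTriggerHandler() {
    // Throw if this handler runs more than once in the same transaction.
    this.setMaxLoopCount(1);
  }
  protected override void beforeInsert() { /* defaulting logic */ }
}
```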

fflib_SObjectDomain (Apex Enterprise Patterns)

Heavier-weight, batteries-included. Maps cleanly to Domain-Driven Design.

public class Accounts extends fflib_SObjectDomain {
  public Accounts(List<Account> records) { super(records); }
  public override void onBeforeInsert() { ... }
}

Strengths: deep integration with Service / Selector / UnitOfWork patterns. Weaknesses: bigger learning curve.
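The trigger wiring on the fflib side is also a one-liner. A sketch following the library's convention, which expects the domain class to declare an inner Constructor implementing fflib_SObjectDomain.IConstructable:

```apex
trigger AccountTrigger on Account (
  before insert, before update, before delete,
  after insert, after update, after delete, after undelete
) {
  fflib_SObjectDomain.triggerHandler(Accounts.class);
}
```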

Custom interface-based framework

For teams that want to roll their own.

public interface ITriggerHandler {
  void beforeInsert(List<SObject> news);
  void beforeUpdate(List<SObject> news, Map<Id, SObject> olds);
  // ...
}

public abstract class TriggerHandlerBase implements ITriggerHandler { ... }

public class AccountTriggerHandler extends TriggerHandlerBase { ... }

Strengths: zero dependencies, total control. Weaknesses: you maintain it.

Pick one, document it, stick with it. Mixing patterns in one org is the second-most common cause of Apex regret.

Rule 6: Enforce CRUD and FLS

By default, Apex triggers run in system mode: object permissions and field-level security are not enforced, and sharing rules apply only if the class is declared with sharing. If the running user's permissions matter (and for writes they usually should), you must enforce them yourself.

Three options, in increasing strictness:

Option A: Use with sharing

public with sharing class AccountTriggerHandler { ... }

with sharing enforces sharing rules on SOQL queries the class issues. It does NOT enforce FLS — that's separate.

Option B: Use WITH SECURITY_ENFORCED

List<Account> accts = [SELECT Id, Name FROM Account WITH SECURITY_ENFORCED];

The query throws if the running user lacks read access to any field selected. Strict.
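The failure surfaces as a System.QueryException, so callers can catch it and degrade deliberately rather than crash. A sketch:

```apex
List<Account> accts;
try {
  accts = [SELECT Id, Name FROM Account WITH SECURITY_ENFORCED];
} catch (System.QueryException e) {
  // The running user lacks read access to Account or to a selected field.
  accts = new List<Account>();
}
```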

Option C: Use Schema.DescribeFieldResult checks

// Assumes a custom exception: public class TriggerSecurityException extends Exception {}
if (!Schema.sObjectType.Account.fields.Industry.isUpdateable()) {
  throw new TriggerSecurityException('No FLS to update Account.Industry');
}

The most explicit. Best for production code where security failures must be very clear.
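A related platform tool for writes is Security.stripInaccessible, which silently removes fields the running user can't write instead of throwing. A sketch (contactsToUpdate is a stand-in for whatever list your handler has collected):

```apex
// Strip any fields the running user lacks update access to, then save the rest.
SObjectAccessDecision decision = Security.stripInaccessible(
  AccessType.UPDATABLE, contactsToUpdate);
update decision.getRecords();
// decision.getRemovedFields() reports what was stripped, if you want to log it.
```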

For Agentforce Actions and exposed Apex methods, always use one of the three. The Trust Layer enforces FLS on retrieval but not on your Action's writes — that's your responsibility.

Rule 7: Test classes that actually catch bugs

A 200-record bulk test is the minimum. Beyond that, test:

  • Null and edge values. Empty Trigger.new, fields set to null, null parent IDs.
  • Recursion. Insert + update in the same transaction.
  • Cross-object failures. Mock or stub if the related object's data isn't available in your test setup.
  • Order of operations. If trigger A depends on data created by trigger B, ensure both fire correctly in your test.

The most important rule: tests that pass against the wrong logic are worse than no tests. If your test only inserts one record, you'll never catch a bulk bug, and your CI will give you false confidence.

@isTest
static void bulkUpdate_doesNotExceedSoqlLimit() {
  List<Account> accts = new List<Account>();
  for (Integer i = 0; i < 200; i++) accts.add(new Account(Name = 'A' + i));
  insert accts;

  for (Account a : accts) a.Industry = 'Software';

  Test.startTest();
  Integer queriesBefore = Limits.getQueries();
  update accts;
  Integer queriesAfter = Limits.getQueries();
  Test.stopTest();

  // Assert the trigger didn't multiply queries by record count
  System.assert((queriesAfter - queriesBefore) < 10,
    'Query count exploded: ' + (queriesAfter - queriesBefore));
}

That assertion shows up in PR review and tells the next developer your trigger is bulk-safe.

Putting it together — a complete handler

public with sharing class AccountTriggerHandler {
  // Recursion guard
  private static Set<Id> processedIds = new Set<Id>();

  public void run() {
    if (Trigger.isBefore && Trigger.isInsert) {
      onBeforeInsert((List<Account>) Trigger.new);
    } else if (Trigger.isBefore && Trigger.isUpdate) {
      onBeforeUpdate((List<Account>) Trigger.new, (Map<Id, Account>) Trigger.oldMap);
    } else if (Trigger.isAfter && Trigger.isUpdate) {
      onAfterUpdate((List<Account>) Trigger.new, (Map<Id, Account>) Trigger.oldMap);
    }
  }

  @TestVisible
  private void onBeforeInsert(List<Account> news) {
    for (Account a : news) {
      if (String.isBlank(a.Industry)) a.Industry = 'Software';
    }
  }

  @TestVisible
  private void onBeforeUpdate(List<Account> news, Map<Id, Account> olds) {
    List<Account> toProcess = new List<Account>();
    for (Account a : news) {
      if (processedIds.contains(a.Id)) continue;
      Account old = olds.get(a.Id);
      if (a.Industry != old.Industry) toProcess.add(a);
      processedIds.add(a.Id);
    }
    // ... cross-record updates collected into a list, DML once at end ...
  }

  @TestVisible
  private void onAfterUpdate(List<Account> news, Map<Id, Account> olds) {
    Set<Id> accountIds = new Map<Id, Account>(news).keySet();
    Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
    for (Contact c : [
      SELECT Id, AccountId, Email FROM Contact
      WHERE AccountId IN :accountIds
    ]) {
      if (!contactsByAccount.containsKey(c.AccountId)) {
        contactsByAccount.put(c.AccountId, new List<Contact>());
      }
      contactsByAccount.get(c.AccountId).add(c);
    }

    List<Contact> toUpdate = new List<Contact>();
    for (Account a : news) {
      Account old = olds.get(a.Id);
      // Guard the lookup: accounts with no contacts have no map entry.
      if (a.Industry != old.Industry && contactsByAccount.containsKey(a.Id)) {
        for (Contact c : contactsByAccount.get(a.Id)) {
          c.Department = a.Industry;
          toUpdate.add(c);
        }
      }
    }
    if (!toUpdate.isEmpty()) update toUpdate;
  }
}

The patterns: one trigger, one handler, recursion guard, one SOQL outside the loop with map lookup, one DML on a list, @TestVisible for testability.

Complete handler flow — bulk-safe pattern with recursion guard and security check

When NOT to use a trigger

Triggers aren't always the right answer.

  • Field defaults that depend only on the record itself → before-save Flow is faster.
  • Cross-object updates that admins might want to change → after-save Flow gives admins ownership.
  • Volume-heavy work that doesn't need real-time → schedule Batch Apex instead.
  • Async work that depends on the record commit → Platform Event subscribers fire after commit and run in their own context.
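The Platform Event route in the last bullet can be sketched like this (Order_Shipped__e and its Order_Id__c field are hypothetical; EventBus.publish is the platform API):

```apex
// After-save handler: hand heavy work to an async subscriber that runs after commit.
List<Order_Shipped__e> events = new List<Order_Shipped__e>();
for (Order o : shippedOrders) {
  events.add(new Order_Shipped__e(Order_Id__c = o.Id));
}
EventBus.publish(events);
```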

Use a trigger when you genuinely need: real-time enforcement, complex iteration logic, or callouts that must happen synchronously with the save (rare).

How Agentforce changes the picture

When an agent calls an Apex Action that updates a record, your trigger fires. The trigger doesn't know it was the agent — it sees a normal save context.

Two implications:

  1. Your trigger logic affects agent reliability. If a trigger throws on a corner-case data shape the agent didn't anticipate, the agent's Action fails.
  2. Bulk-safe matters more. Agents may issue requests at higher concurrency than your UI ever did. A trigger that's marginal at 1 record fails at 50.

Test agent flows with bulk Apex Action invocations early. Don't wait for production traffic.

Common mistakes (the running list)

  • Multiple triggers per object. Always wrong. Consolidate.
  • Logic in the trigger body. Move it to a handler.
  • SOQL or DML in loops. The classic governor-limit failure.
  • No recursion guard. Or one that doesn't reset between transactions.
  • without sharing everywhere. Sometimes necessary, but a smell when applied broadly.
  • No FLS check on writes. Especially in Agentforce Actions.
  • Single-record-only tests. They never catch bulk bugs.
  • Mixing frameworks. Pick one. Use it everywhere.

Frequently asked questions

Should I move trigger logic to Flow? Sometimes yes. If the logic is purely declarative (set a field, send email, call a sub-flow), Flow is more maintainable. Apex triggers earn their keep when the logic is iteration-heavy, recursive, or needs precise CPU control.

What's the difference between before and after triggers? Before triggers run before the record is written to the database: you can modify fields on Trigger.new directly, with no separate DML statement. After triggers run after the record is saved but still inside the same transaction; record IDs are populated (useful on insert), Trigger.new is read-only, and follow-up work that depends on the saved record is safe.

Do triggers fire on the Bulk API? Yes. Bulk API loads chunk records into 200-record batches; the trigger fires on each batch. Bulk safety is the entire reason for the patterns in this guide.

What about Batch Apex? Triggers fire when Batch Apex updates records, just like normal DML. Be aware that Batch jobs often have high record volume — your trigger had better be bulk-safe.

Are managed packages affected? Managed packages have their own namespaces and trigger contexts. Your triggers can fire alongside theirs but you can't see or modify their code. Test with the package installed.

Pick a framework today. Refactor your worst trigger to it. The codebase improvement is permanent; the regret stops accumulating.
