System.LimitException: Maximum trigger depth exceeded
Triggers can fire other triggers, which can fire more triggers, but only 16 levels deep. At level 17 the platform aborts the whole transaction. This is distinct from the 1,000-frame stack overflow, which is method recursion within Apex; trigger depth counts cascading DML across triggers.
Also seen as: Maximum trigger depth exceeded · Maximum trigger depth · trigger depth limit · Cascading trigger depth
Salesforce caps the depth of cascading triggers at 16. A trigger on Account that updates Contacts that have a trigger that updates Cases that have a trigger that updates Tasks... at depth 17, the platform refuses.
Concept: trigger depth vs stack depth
| Concept | What counts | Cap |
|---|---|---|
| Apex stack depth | Method-call frames | 1,000 |
| Trigger depth | Nested DML that fires triggers | 16 |
| Workflow rule cycles | Workflow re-evaluating the same record | 5 |
These are independent caps. You can hit any of them with the wrong design.
How to get to 17
User saves Account
→ Account.before-update fires
→ updates Contact records
→ Contact.before-update fires
→ updates Cases
→ Case.before-update fires
→ updates Tasks
→ Task.before-update fires
→ ... (each level adds 1 to the cascade depth)
Most orgs accumulate this organically — over years, more triggers add more cascading effects. By the time someone hits depth 17 in production, no single trigger looks suspicious; it's the chain.
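A minimal version of that chain, sketched as two triggers (object relationships are standard; the cascade itself is illustrative):

```apex
// Sketch: each DML statement inside a trigger adds one level of depth.
trigger AccountTrigger on Account (after update) {
    List<Contact> contacts =
        [SELECT Id FROM Contact WHERE AccountId IN :Trigger.newMap.keySet()];
    update contacts; // depth +1: fires ContactTrigger
}

trigger ContactTrigger on Contact (after update) {
    List<Case> cases =
        [SELECT Id FROM Case WHERE ContactId IN :Trigger.newMap.keySet()];
    update cases; // depth +1: fires CaseTrigger, and so on down the chain
}
```

No single link looks dangerous; the depth comes from how many of these hand-offs exist across the org.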
Diagnose with the debug log
Enable a debug log for the failing user, reproduce the issue, then look for BEFORE_UPDATE / AFTER_UPDATE events. Count the consecutive nested entries; the 17th is your culprit.
You can also add tracing to your triggers:
trigger AccountTrigger on Account (before update) {
    System.debug('AccountTrigger fired at depth ' + TriggerDepth.getTriggerDepth());
    // ... handler logic
}

// Helper class:
public class TriggerDepth {
    public static Integer getTriggerDepth() {
        // The stack trace of a freshly constructed exception reflects the
        // current call stack; its line count grows as triggers nest.
        return new System.DmlException().getStackTraceString().split('\n').size();
    }
}

A bit hacky, but it lets you watch depth grow in real time.
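A more deterministic alternative, not from the stack-trace trick above, is a plain static counter incremented once per trigger invocation (class and variable names here are illustrative). Static state lives for the whole transaction, so the counter survives across cascading triggers:

```apex
public class TriggerDepth {
    // Counts trigger invocations within this transaction. In a pure
    // cascade (each trigger firing the next) this equals the depth;
    // in mixed scenarios it is an upper bound on it.
    public static Integer invocations = 0;
}

trigger AccountTrigger on Account (before update) {
    TriggerDepth.invocations++;
    System.debug('AccountTrigger invocation #' + TriggerDepth.invocations);
    // ... handler logic
}
```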
How to fix
The fixes fall into three buckets:
1. Break the cascade with a recursion guard
If A's trigger updates B and B's trigger updates A back, that's a 2-level loop on every round trip. Add a static set, populate it on the way down, and check it on the way back:
public class TriggerContext {
    public static Set<Id> recentlyTouchedAccounts = new Set<Id>();
}

trigger AccountTrigger on Account (before update) {
    // Record which Accounts this transaction has already touched.
    TriggerContext.recentlyTouchedAccounts.addAll(Trigger.newMap.keySet());
    // ... handler logic
}

trigger ContactTrigger on Contact (before update) {
    for (Contact c : Trigger.new) {
        if (TriggerContext.recentlyTouchedAccounts.contains(c.AccountId)) {
            continue; // parent already handled this transaction; break the loop
        }
        // ... only update the parent if we haven't already
    }
}
2. Move the cascading work to async
Each @future, queueable, or batch invocation starts at depth 1 again. So if you really need 30 levels of cascading work, do the first 8 synchronously and enqueue the rest for async processing.
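Sketched with a hypothetical Queueable that carries the remaining work into a fresh transaction (class and field names are illustrative):

```apex
public class CascadeContinuation implements Queueable {
    private Set<Id> contactIds;

    public CascadeContinuation(Set<Id> contactIds) {
        this.contactIds = contactIds;
    }

    public void execute(QueueableContext ctx) {
        // Runs in its own transaction: trigger depth starts over,
        // so any triggers this update fires get a fresh 16-level budget.
        List<Case> cases =
            [SELECT Id FROM Case WHERE ContactId IN :contactIds];
        update cases;
    }
}

// From deep inside the synchronous cascade:
// System.enqueueJob(new CascadeContinuation(contactIds));
```

Queueables can also chain themselves from `execute`, so arbitrarily long cascades become a sequence of shallow transactions.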
3. Re-think the design
If you have triggers calling triggers calling triggers, you may have one big logical operation expressed as many micro-steps. Sometimes the right fix is to consolidate: one trigger that does the full propagation explicitly, instead of relying on each individual update to fire downstream triggers.
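A consolidated version might look like this sketch: one trigger computes the whole propagation explicitly instead of relying on each object's trigger to fire the next (field changes elided; names illustrative):

```apex
trigger AccountTrigger on Account (after update) {
    Set<Id> accountIds = Trigger.newMap.keySet();

    // Walk the hierarchy in one place. The updates below may still fire
    // downstream triggers, but the cascade logic no longer depends on them.
    List<Contact> contacts =
        [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accountIds];
    List<Case> cases =
        [SELECT Id, ContactId FROM Case
         WHERE Contact.AccountId IN :accountIds];

    // ... apply the field changes each object needs, then save in bulk
    update contacts;
    update cases;
}
```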
A common surprise: Flow + Apex mix
Record-triggered Flows can also kick off Apex triggers, and they all share the same depth budget. If your flow updates a record that has an Apex trigger that updates another record with a flow that... yes, depth 17 from a mix. Use the Flow Trigger Explorer to see what fires when on each object.
