Development·May 3, 2026·14 min read

Salesforce Governor Limits Explained: The 2026 Cheat Sheet (with Examples)

Why limits exist, the canonical sync + async limits table, the top 10 patterns to avoid hitting them, and how to monitor what your org is really burning.


TL;DR

  • Limits exist because Salesforce is multitenant. Every org shares CPU, memory, and database resources. Without limits, one team's runaway code crashes everyone's pod.
  • The four limits you'll hit in production: SOQL queries (100 sync), DML statements (150), CPU time (10,000 ms sync), heap size (6 MB sync).
  • Async doubles or triples most limits — that's the right place for batch work.
  • The single best fix is bulkification. Move SOQL and DML out of loops, every time, no exceptions.

If you've ever seen Too many SOQL queries: 101 or Apex CPU time limit exceeded, you've met governor limits the hard way. They're not bugs to work around. They're the platform telling you your code isn't shaped for multitenancy.

This guide is the cheat sheet I wish I'd had on day one: the canonical limits, why they exist, the limits you'll actually hit in production, the 10 patterns that keep you safe, and how to measure what you're using before users complain.


Why governor limits exist

Salesforce is a multitenant platform. Thousands of orgs share the same physical pod — same database, same app servers, same network. If your code could allocate unlimited memory or run unlimited queries, your code could starve every other tenant on the pod.

Limits enforce a fair-share model. Every transaction gets a budget. When you hit the budget, the platform rolls back and throws — protecting everyone else, including you.

The trade-off: you can't write the code you'd write in a single-tenant Java backend. You have to think in bulk. Limits aren't a bug; they're the price of running on Salesforce.

The canonical limits — sync vs async

This is the table to bookmark.

Limit | Sync (per transaction) | Async (per transaction) | Notes
SOQL queries | 100 | 200 | Includes relationship queries
SOQL rows | 50,000 | 50,000 | Total rows across all queries
SOSL queries | 20 | 20 | Total per transaction
DML statements | 150 | 150 | One DML on a list = 1 statement
DML rows | 10,000 | 10,000 | Total records affected
CPU time | 10,000 ms | 60,000 ms | Excludes DB wait time
Heap size | 6 MB | 12 MB | Live heap, garbage-collected
Callouts | 100 | 100 | HTTP callouts per transaction
Callout timeout | 120 sec total | 120 sec total | Aggregate, not per call
Future calls | 50 | 0 | Can't @future from @future or Batch Apex
Queueable jobs enqueued | 50 | 1 | Chain depth capped at 5 in Developer Edition and trial orgs
Email invocations (sendEmail) | 10 | 10 | Recipient lists count differently
Push notification calls | 10 | 10 | Per transaction

Batch Apex, Queueable, and @future use the async column. Triggers, controllers, anonymous Apex, and synchronous web service calls use the sync column.
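You don't have to memorize which column applies at runtime — the Limits class reports the budget for whatever context you're currently in. A minimal sketch (the class and method names here are our own, not platform APIs):

```apex
public class LimitContext {
    // Summarizes the current transaction's context and budgets.
    // Limits.getLimitCpuTime() returns 10000 in sync contexts and
    // 60000 in async ones; getLimitQueries() returns 100 vs 200.
    public static String describe() {
        Boolean isAsync = System.isBatch() || System.isFuture()
            || System.isQueueable() || System.isScheduled();
        return (isAsync ? 'async' : 'sync')
            + ' — CPU budget: ' + Limits.getLimitCpuTime() + ' ms'
            + ', SOQL budget: ' + Limits.getLimitQueries();
    }
}
```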


The four limits you'll actually hit

In production, four limits cause ~95% of LimitException errors. The rest are vanishingly rare.

1. Too many SOQL queries: 101

The classic. You wrote a SOQL query inside a for loop. Every iteration fires a query. With 101 records, you hit the limit.

// ❌ The bug
for (Account a : Trigger.new) {
  List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :a.Id];
  // ...
}

// ✅ The fix — one query, then map lookup
Set<Id> accountIds = new Map<Id, Account>(Trigger.new).keySet();
Map<Id, List<Contact>> byAccount = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accountIds]) {
  if (!byAccount.containsKey(c.AccountId)) {
    byAccount.put(c.AccountId, new List<Contact>());
  }
  byAccount.get(c.AccountId).add(c);
}
for (Account a : Trigger.new) {
  List<Contact> contacts = byAccount.get(a.Id);
  // ...
}

The general rule: never query inside a loop. Hoist the query out, build a Map<Id, ...>, look up by key inside the loop.

2. Apex CPU time limit exceeded

The hardest to debug because it isn't tied to a single line. CPU time accumulates across the whole transaction — every Trigger, every Flow, every formula recalculation.

The CPU wall comes from: deeply nested loops, complex formulas re-evaluated on every record, large collections that get filtered/sorted in code, regex on long strings, JSON parsing of huge payloads, recursive Triggers.

Fixes are surgical and depend on the cause. The single highest-leverage move: profile first. Use Limits.getCpuTime() to add measurement at suspected hot spots before optimizing.
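A minimal way to do that profiling — checkpoint labels and the class name are illustrative, not a platform API:

```apex
public class CpuProbe {
    static Integer lastCpu = 0;
    // Logs CPU consumed since the previous checkpoint, so you can see
    // which stretch of the transaction is actually burning time.
    public static void mark(String label) {
        Integer current = Limits.getCpuTime();
        System.debug(LoggingLevel.WARN, label + ': +' + (current - lastCpu)
            + ' ms (total ' + current + '/' + Limits.getLimitCpuTime() + ' ms)');
        lastCpu = current;
    }
}

// Usage inside a suspected hot path:
// CpuProbe.mark('before formula recalcs');
// ... work ...
// CpuProbe.mark('after formula recalcs');
```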

3. DML statement limit: 151

You wrote update record; inside a loop. Each iteration counts as one DML statement. 151 records → boom.

// ❌ The bug
for (Account a : accounts) {
  a.NumberOfChildren__c = childCounts.get(a.Id);
  update a;  // one DML per iteration
}

// ✅ The fix
List<Account> toUpdate = new List<Account>();
for (Account a : accounts) {
  a.NumberOfChildren__c = childCounts.get(a.Id);
  toUpdate.add(a);
}
update toUpdate;  // one DML for the whole list

A DML on a List<SObject> of 10,000 records is one DML statement. The limit counts statements, not records — one update on a list beats ten thousand updates in a loop.

4. Apex heap size too large

You loaded too much into memory. Common causes: a query that returns 500K rows, a giant JSON parse, big custom-setting maps, or a Batch Apex start() method that returns too much.

Fixes:

  • Stream rather than materialize: use SOQL for loops (the runtime fetches records in 200-record chunks), or chunk in Batch Apex with smaller scope sizes.
  • Drop fields you don't need from your SELECT. The smaller the query, the smaller the heap.
  • Free references: myList = null or .clear() after you're done with intermediate state.
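The first bullet in practice: a SOQL for loop streams records instead of loading the whole result set into the heap at once (field names here are illustrative):

```apex
// ❌ Materializes every matching row in the heap at once
List<Contact> all = [SELECT Id, Email FROM Contact];

// ✅ Streams in chunks of 200; only one chunk is live at a time
for (List<Contact> chunk : [SELECT Id, Email FROM Contact]) {
    for (Contact c : chunk) {
        // process c
    }
    // each chunk becomes eligible for garbage collection after its iteration
}
```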

The 10 patterns that keep you safe

The patterns. Memorize. Apply on every new class.


  1. One SOQL outside, lookup inside. If you need related data inside a loop, fetch it all once, build a Map<Id, ...>, look up by key.
  2. One DML on a list. Collect changes into a List<SObject>, then DML the whole list once.
  3. Selective queries, always. Filter on indexed fields (Id, Name, foreign keys, custom external IDs) whenever possible. Non-selective queries against large objects skip the index and fall back to a full scan.
  4. Bulk-safe Trigger Handlers. Triggers must handle 1 record and 200 records the same way. Test both.
  5. Async for heavy work. Anything > 5 seconds of CPU goes async — Queueable for chains, Batch Apex for volume, Schedulable for cron.
  6. Limit chained Queueables. You can enqueue up to 50 jobs from a synchronous transaction, but only one from inside a running Queueable — and Developer Edition and trial orgs cap chain depth at 5. Don't chain forever; design for finite chains.
  7. Stream large data. Iterators in Batch Apex with scope size 200, or Database.QueryLocator for streaming 50M-row scans.
  8. Mind the formula recompute. Cross-object formulas re-evaluate on every Trigger. Cache aggressively; consider an Apex-computed field if the formula's expensive.
  9. Recursion guards. Static Set<Id> of already-processed records prevents Trigger recursion. Reset on before insert/update if needed.
  10. Profile then optimize. Don't guess at the hot spot. Use Limits class to measure, the Apex Replay Debugger to step, and EM (Event Monitoring) to see production patterns.
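Pattern 9 as a minimal sketch — the handler class and method names are our own, not a framework convention:

```apex
public class AccountTriggerHandler {
    // Ids already processed in this transaction. Static, so it survives
    // re-entry when a workflow field update re-fires the trigger.
    private static Set<Id> processedIds = new Set<Id>();

    public static void onAfterUpdate(List<Account> records) {
        List<Account> firstPass = new List<Account>();
        for (Account a : records) {
            if (!processedIds.contains(a.Id)) {
                processedIds.add(a.Id);
                firstPass.add(a);
            }
        }
        if (firstPass.isEmpty()) return;
        // ... real work runs on firstPass only, once per record ...
    }
}
```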

Monitoring: what's your org actually using?

Limits aren't binary — you can be at 80% of CPU and not know it. Three places to look.

The Limits class (in code)

System.debug('SOQL: ' + Limits.getQueries() + '/' + Limits.getLimitQueries());
System.debug('DML: ' + Limits.getDmlStatements() + '/' + Limits.getLimitDmlStatements());
System.debug('CPU: ' + Limits.getCpuTime() + '/' + Limits.getLimitCpuTime());
System.debug('Heap: ' + Limits.getHeapSize() + '/' + Limits.getLimitHeapSize());

Drop these at start, midpoint, and end of suspected hot transactions. Before optimizing anything: measure.

OrgLimits (org-level, not transactional)

List<OrgLimit> ol = OrgLimits.getAll();
for (OrgLimit l : ol) {
  System.debug(l.getName() + ': ' + l.getValue() + '/' + l.getLimit());
}

This is the org-wide picture — daily API call counts, daily Bulk API calls, daily email limits, etc. Different scope from per-transaction limits.

Event Monitoring (production)

If you have Salesforce Shield or Event Monitoring add-on, the ApexExecution event log captures CPU time, SOQL count, and DML count for every Apex execution in production. Aggregate it in Tableau (or your warehouse of choice) and you'll see your org's real distribution — including the long tail nobody knew about.
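Without a warehouse pipeline, you can still pull the raw files with SOQL — EventLogFile stores each day's events as a base64-encoded CSV in its LogFile field:

```apex
// Most recent ApexExecution log files (typically one per day)
for (EventLogFile f : [SELECT LogDate, LogFileLength
                       FROM EventLogFile
                       WHERE EventType = 'ApexExecution'
                       ORDER BY LogDate DESC
                       LIMIT 7]) {
    System.debug(f.LogDate + ': ' + f.LogFileLength + ' bytes of execution data');
}
```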

Common mistakes (and the fix)

  • Optimizing without measuring. Every team has a "this code is slow" hunch that turns out to be wrong. Use Limits first.
  • Treating async as a free lunch. Async limits are larger, not unlimited. A bad Batch Apex job can still hit 60,000 ms CPU.
  • Catching LimitException. Don't bother — governor limit exceptions can't be caught in Apex at all, by design: the transaction needs to roll back.
  • Skipping bulk tests. Trigger test classes that only insert one record never catch bulk bugs. Always test with 200 records minimum.
  • Tight CPU even in Flow. Flow can hit CPU limits too — it's not a free pass. The same patterns apply.
  • Forgetting Agentforce Actions inherit limits. When an agent calls an Apex Action, the same governor limits apply. Bulk-safe @InvocableMethod design matters even more.
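The bulk-test bullet in practice — a minimal test method that inserts 200 records in a single DML, so any query-in-loop or DML-in-loop bug in the trigger path fails immediately (the object and assertion are illustrative):

```apex
@isTest
private class AccountTriggerBulkTest {
    @isTest
    static void handles200Records() {
        List<Account> accts = new List<Account>();
        for (Integer i = 0; i < 200; i++) {
            accts.add(new Account(Name = 'Bulk ' + i));
        }
        Test.startTest();
        insert accts;  // one DML, 200 records — trips any per-record SOQL/DML
        Test.stopTest();
        System.assertEquals(200, [SELECT COUNT() FROM Account WHERE Name LIKE 'Bulk %']);
    }
}
```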

Frequently asked questions

Are limits going up over time? Some. CPU and heap have crept up gradually. Most haven't moved in years. Don't plan around limits doubling.

Why is my Flow hitting governor limits? Flow shares the same per-transaction limits as Apex — they're enforced at the platform layer. A Flow that fires inside an Apex Trigger transaction shares the budget with the Trigger.

Can I monitor limits without Shield? Partial. The Limits class works in dev/sandbox without Shield. For production, your options are debug logs (limited retention) or the Apex Log Analyzer apps on AppExchange.

What's the highest-leverage limit to optimize? SOQL count first (most common offender), CPU second (most painful), heap third. DML is usually fine if you're using lists.

Do limits apply differently to managed packages? Certified managed packages get their own per-namespace SOQL and DML budgets — slight relief if you're a customer running multiple packaged products. But CPU time and heap are shared across the whole transaction, namespaces included.

If you only act on one section: add Limits.getCpuTime() debug calls to your three slowest transactions today. The numbers will surprise you, and "I'll fix it later" turns into "I'll fix it now."
