Salesforce Governor Limits Explained: The 2026 Cheat Sheet (with Examples)
Why limits exist, the canonical sync + async limits table, the top 10 patterns to avoid hitting them, and how to monitor what your org is really burning.

TL;DR
- Limits exist because Salesforce is multitenant. Every org shares CPU, memory, and database resources. Without limits, one team's runaway code crashes everyone's pod.
- The four limits you'll hit in production: SOQL queries (100 sync), DML statements (150), CPU time (10,000 ms sync), heap size (6 MB sync).
- Async raises most limits: double the SOQL queries and heap, six times the CPU. That's the right place for batch work.
- The single best fix is bulkification. Move SOQL and DML out of loops, every time, no exceptions.
If you've ever seen `Too many SOQL queries: 101` or `Apex CPU time limit exceeded`, you've met governor limits the hard way. They're not bugs to work around. They're the platform telling you your code isn't shaped for multitenancy.
This guide is the cheat sheet I wish I'd had on day one: the canonical limits, why they exist, the limits you'll actually hit in production, the 10 patterns that keep you safe, and how to measure what you're using before users complain.
Why governor limits exist
Salesforce is a multitenant platform. Thousands of orgs share the same physical pod — same database, same app servers, same network. If your code could allocate unlimited memory or run unlimited queries, your code could starve every other tenant on the pod.
Limits enforce a fair-share model. Every transaction gets a budget. When you hit the budget, the platform rolls back and throws — protecting everyone else, including you.
The trade-off: you can't write the code you'd write in a single-tenant Java backend. You have to think in bulk. Limits aren't a bug; they're the price of running on Salesforce.
The canonical limits — sync vs async
This is the table to bookmark.
| Limit | Sync (per transaction) | Async (per transaction) | Notes |
|---|---|---|---|
| SOQL queries | 100 | 200 | Includes relationship queries |
| SOQL rows | 50,000 | 50,000 | Total rows across all queries |
| SOSL queries | 20 | 20 | Total per transaction |
| DML statements | 150 | 150 | One DML on a list = 1 statement |
| DML rows | 10,000 | 10,000 | Total records affected |
| CPU time | 10,000 ms | 60,000 ms | Excludes DB wait time |
| Heap size | 6 MB | 12 MB | Live heap, garbage-collected |
| Callouts | 100 | 100 | HTTP callouts per transaction |
| Callout timeout | 120 sec total | 120 sec total | Aggregate, not per call |
| Future calls | 50 | – | Can't @future from @future |
| Queueable jobs enqueued | 50 | 1 | Chain depth capped at 5 in Developer/Trial orgs |
| Email invocations | 10 | 10 | `sendEmail` calls; each call can address many recipients |
| Push notifications | 10 | 10 | Per transaction |
Batch Apex, Queueable, and @future use the async column. Triggers, controllers, anonymous Apex, and synchronous web service calls use the sync column.
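You can confirm which budget a given context received at runtime. A minimal sketch using the `System` context methods and the `Limits` class (the values in the comments are the documented budgets, so they vary by context):

```apex
// Which execution context are we in?
System.debug('Batch: ' + System.isBatch()
    + ', Future: ' + System.isFuture()
    + ', Queueable: ' + System.isQueueable());
// Budgets the platform granted this transaction:
System.debug('CPU budget (ms): ' + Limits.getLimitCpuTime());      // 10000 sync, 60000 async
System.debug('Heap budget (bytes): ' + Limits.getLimitHeapSize()); // ~6 MB sync, ~12 MB async
System.debug('SOQL budget: ' + Limits.getLimitQueries());          // 100 sync, 200 async
```

Run it from Execute Anonymous and then from a Queueable to see the two columns side by side.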
The four limits you'll actually hit
In production, four limits cause ~95% of LimitException errors. The rest are vanishingly rare.
1. Too many SOQL queries: 101
The classic. You wrote a SOQL query inside a for loop. Every iteration fires a query. With 101 records, you hit the limit.
// ❌ The bug
for (Account a : Trigger.new) {
    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :a.Id];
    // ...
}

// ✅ The fix — one query, then map lookup
Map<Id, List<Contact>> byAccount = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId FROM Contact
                  WHERE AccountId IN :Trigger.newMap.keySet()]) {
    if (!byAccount.containsKey(c.AccountId)) {
        byAccount.put(c.AccountId, new List<Contact>());
    }
    byAccount.get(c.AccountId).add(c);
}
for (Account a : Trigger.new) {
    List<Contact> contacts = byAccount.get(a.Id); // null if the account has no contacts
    // ...
}
The general rule: never query inside a loop. Hoist the query out, build a Map<Id, ...>, look up by key inside the loop.
2. Apex CPU time limit exceeded
The hardest to debug because it isn't tied to a single line. CPU time accumulates across the whole transaction — every Trigger, every Flow, every formula recalculation.
The CPU wall comes from: deeply nested loops, complex formulas re-evaluated on every record, large collections that get filtered/sorted in code, regex on long strings, JSON parsing of huge payloads, recursive Triggers.
Fixes are surgical and depend on the cause. The single highest-leverage move: profile first. Use Limits.getCpuTime() to add measurement at suspected hot spots before optimizing.
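The measurement itself is just a delta around the suspected block. A sketch, where `recalculateScores` is a hypothetical hot method standing in for whatever you suspect:

```apex
Integer startCpu = Limits.getCpuTime();
recalculateScores(accounts); // hypothetical hot section
Integer usedMs = Limits.getCpuTime() - startCpu;
System.debug('recalculateScores used ' + usedMs + ' ms of '
    + Limits.getLimitCpuTime() + ' ms CPU budget');
```

Wrap each candidate this way and the real hot spot identifies itself in one debug log.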
3. DML statement limit: 151
You wrote update record; inside a loop. Each iteration counts as one DML statement. 151 records → boom.
// ❌ The bug
for (Account a : accounts) {
    a.NumberOfChildren__c = childCounts.get(a.Id);
    update a; // one DML statement per iteration
}

// ✅ The fix
List<Account> toUpdate = new List<Account>();
for (Account a : accounts) {
    a.NumberOfChildren__c = childCounts.get(a.Id);
    toUpdate.add(a);
}
update toUpdate; // one DML statement for the whole list
A DML operation on a `List<SObject>` of 10,000 records counts as a single DML statement. The limit counts statements, not records. (Note that 10,000 rows is also the per-transaction DML row limit, so anything bigger belongs in Batch Apex.)
4. Apex heap size too large
You loaded too much into memory. Common causes: a query that returns 500K rows, a giant JSON parse, big custom-setting maps, or a Batch Apex start() method that returns too much.
Fixes:
- Stream rather than materialize: use SOQL for loops (which retrieve records in 200-record chunks), or chunk in Batch Apex with smaller scope sizes.
- Drop fields you don't need from your `SELECT`. The smaller the query, the smaller the heap.
- Free references: set `myList = null` or call `.clear()` after you're done with intermediate state.
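The streaming fix in practice: a SOQL for loop over a list variable pulls records in 200-record chunks instead of materializing the whole result set. A sketch, assuming a simple pass over Contacts:

```apex
// ✅ Streams in chunks of 200; heap stays roughly flat however many rows match
for (List<Contact> chunk : [SELECT Id, Email FROM Contact WHERE Email != null]) {
    for (Contact c : chunk) {
        // process one chunk, then let it go out of scope
    }
}
```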
The 10 patterns that keep you safe
The patterns. Memorize them. Apply them to every new class.

1. One SOQL outside, lookup inside. If you need related data inside a loop, fetch it all once, build a `Map<Id, ...>`, and look up by key.
2. One DML on a list. Collect changes into a `List<SObject>`, then DML the whole list once.
3. Selective queries, always. Filter on indexed fields (`Id`, `Name`, foreign keys, custom external IDs) whenever possible. Non-selective queries against large objects bypass the index and scan.
4. Bulk-safe Trigger handlers. Triggers must handle 1 record and 200 records the same way. Test both.
5. Async for heavy work. Anything over ~5 seconds of CPU goes async — Queueable for chains, Batch Apex for volume, Schedulable for cron.
6. Limit chained Queueables. You can enqueue up to 50 Queueable jobs from a synchronous transaction, but a running Queueable can add only one follow-up job, and Developer and Trial orgs cap chain depth at 5. Don't chain forever; design for finite chains.
7. Stream large data. Iterators in Batch Apex with scope size 200, or `Database.QueryLocator` for streaming scans of up to 50 million rows.
8. Mind the formula recompute. Cross-object formulas re-evaluate on every Trigger. Cache aggressively; consider an Apex-computed field if the formula is expensive.
9. Recursion guards. A static `Set<Id>` of already-processed records prevents Trigger recursion. Reset it between logical phases if needed.
10. Profile, then optimize. Don't guess at the hot spot. Use the `Limits` class to measure, the Apex Replay Debugger to step through, and Event Monitoring to see production patterns.
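The recursion-guard pattern above can be sketched as a small static class; the class and method names here are illustrative:

```apex
public class AccountTriggerGuard {
    // Static state lives for the duration of the transaction
    private static Set<Id> processedIds = new Set<Id>();

    // Returns only the records this transaction hasn't handled yet
    public static List<Account> unprocessed(List<Account> records) {
        List<Account> fresh = new List<Account>();
        for (Account a : records) {
            if (a.Id != null && !processedIds.contains(a.Id)) {
                processedIds.add(a.Id);
                fresh.add(a);
            }
        }
        return fresh;
    }
}
```

In the trigger: `for (Account a : AccountTriggerGuard.unprocessed(Trigger.new)) { ... }`.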
Monitoring: what's your org actually using?
Limits aren't binary — you can be at 80% of CPU and not know it. Three places to look.
The Limits class (in code)
System.debug('SOQL: ' + Limits.getQueries() + '/' + Limits.getLimitQueries());
System.debug('DML: ' + Limits.getDmlStatements() + '/' + Limits.getLimitDmlStatements());
System.debug('CPU: ' + Limits.getCpuTime() + '/' + Limits.getLimitCpuTime());
System.debug('Heap: ' + Limits.getHeapSize() + '/' + Limits.getLimitHeapSize());
Drop these at start, midpoint, and end of suspected hot transactions. Before optimizing anything: measure.
OrgLimits (org-level, not transactional)
List<OrgLimit> ol = OrgLimits.getAll();
for (OrgLimit l : ol) {
    System.debug(l.getName() + ': ' + l.getValue() + '/' + l.getLimit());
}
This is the org-wide picture — daily API call counts, daily Bulk API calls, daily email limits, etc. Different scope from per-transaction limits.
Event Monitoring (production)
If you have Salesforce Shield or Event Monitoring add-on, the ApexExecution event log captures CPU time, SOQL count, and DML count for every Apex execution in production. Aggregate it in Tableau (or your warehouse of choice) and you'll see your org's real distribution — including the long tail nobody knew about.
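Without a warehouse, the same data is reachable from Apex or the API as `EventLogFile` records. A sketch (the `LogFile` field is a base64-encoded CSV you would still need to parse):

```apex
for (EventLogFile f : [SELECT Id, EventType, LogDate, LogFile
                       FROM EventLogFile
                       WHERE EventType = 'ApexExecution'
                       AND LogDate = LAST_N_DAYS:7]) {
    System.debug(f.LogDate + ': ' + f.LogFile.size() + ' bytes of log CSV');
}
```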
Common mistakes (and the fix)
- Optimizing without measuring. Every team has a "this code is slow" hunch that turns out to be wrong. Use the `Limits` class first.
- Treating async as a free lunch. Async limits are larger, not unlimited. A bad Batch Apex job can still hit 60,000 ms of CPU.
- Catching `LimitException`. Don't. It can't be caught and shouldn't be — the transaction needs to roll back.
- Skipping bulk tests. Trigger test classes that only insert one record never catch bulk bugs. Always test with at least 200 records.
- Assuming Flow is exempt. Flow can hit CPU limits too — it's not a free pass. The same patterns apply.
- Forgetting that Agentforce Actions inherit limits. When an agent calls an Apex Action, the same governor limits apply. Bulk-safe `@InvocableMethod` design matters even more.
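Bulk-safe `@InvocableMethod` design means list in, list out, with one query for the whole invocation batch. A minimal sketch with illustrative class, field, and label names:

```apex
public with sharing class ScoreAccountsAction {
    @InvocableMethod(label='Score Accounts')
    public static List<Decimal> score(List<Id> accountIds) {
        // One SOQL for the entire batch of invocations, never one per element
        Map<Id, Account> accts = new Map<Id, Account>(
            [SELECT Id, AnnualRevenue FROM Account WHERE Id IN :accountIds]);
        List<Decimal> scores = new List<Decimal>();
        for (Id accId : accountIds) {
            Account a = accts.get(accId);
            scores.add((a == null || a.AnnualRevenue == null)
                ? 0 : a.AnnualRevenue / 1000);
        }
        return scores;
    }
}
```

Flow and Agentforce both batch invocations under the hood, so a per-element query here multiplies fast.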
Frequently asked questions
Are limits going up over time? Some. CPU and heap have crept up gradually. Most haven't moved in years. Don't plan around limits doubling.
Why is my Flow hitting governor limits? Flow shares the same per-transaction limits as Apex — they're enforced at the platform layer. A Flow that fires inside an Apex Trigger transaction shares the budget with the Trigger.
Can I monitor limits without Shield?
Partially. The `Limits` class works in any org without Shield, but it only reports on the current transaction. For production history, your options are debug logs (limited retention) or Apex log analyzer apps on the AppExchange.
What's the highest-leverage limit to optimize? SOQL count first (most common offender), CPU second (most painful), heap third. DML is usually fine if you're using lists.
Do limits apply differently to managed packages? Certified managed packages get their own per-namespace budgets for most limits, including SOQL and DML — slight relief if you're a customer running multiple packaged products. But CPU time and heap are shared across the whole transaction, regardless of namespace.
What to read next
- Governor Limits — the dictionary entry, kept in sync with each release.
- Apex, Batch Apex — the foundations.
- Flow vs Apex 2026 — how the same limits manifest differently in each tool.
If you only act on one section: add Limits.getCpuTime() debug calls to your three slowest transactions today. The numbers will surprise you, and "I'll fix it later" turns into "I'll fix it now."
