System.LimitException: Apex heap size too large
Your transaction is holding more data in memory at once than Apex allows: 6 MB synchronous, 12 MB asynchronous. Usually it's a `List<SObject>` that grew unbounded, or a `Map` that accumulated across an iteration that was supposed to be incremental.
Also seen as: `Apex heap size too large` · `heap size too large` · `System.LimitException: Apex heap`
Heap is the live memory of the running transaction — every variable, every collection, every queried record. The cap is small by web-app standards (6 MB sync), so a single fat query can sink you.
The arithmetic
A queried Account row carries every selected field, including long text areas, plus its parent links and any related child relationships you traversed. A modest SELECT Id, Name, Description, Notes__c FROM Account on 50,000 rows where Description averages 200 characters and Notes__c averages 2 KB easily clears 100 MB of objects — not the on-disk size, the in-memory Apex object size, which is bigger.
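Rather than estimating, you can measure the per-row heap cost directly. A minimal sketch, assuming the same field list as above and a 1,000-row sample (both illustrative):

```apex
// Sketch: sample a bounded slice, measure its heap cost, extrapolate.
Integer before = Limits.getHeapSize();
List<Account> sample = [SELECT Id, Name, Description, Notes__c
                        FROM Account LIMIT 1000];
// Assumes at least one row exists; guard the division in real code.
Integer perRow = (Limits.getHeapSize() - before) / sample.size();
// Projected cost for the full query is perRow * totalRows; compare that
// against Limits.getLimitHeapSize() before writing the unbounded version.
System.debug('Approx bytes per row in heap: ' + perRow);
```

The extrapolation is rough (per-row size varies with field contents), but it turns "easily clears 100 MB" from a guess into a number you can check against the limit.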
What to look for
// Suspect 1: queries with no LIMIT and a wide SELECT.
List<Case> all = [SELECT Id, Description, Comments, Subject, ...
FROM Case WHERE IsClosed = false];
// Suspect 2: a Map that grows forever in stateful batch Apex.
public class AccountRollup implements Database.Batchable<SObject>, Database.Stateful {
    Map<Id, Account> accumulator = new Map<Id, Account>();
    public void execute(Database.BatchableContext bc, List<Account> scope) {
        accumulator.putAll(scope); // never released; Stateful carries it across every chunk
    }
}
The second one is sneaky because of `Database.Stateful`: normally instance variables on a batch class reset between chunks, but a stateful batch carries them through every chunk, so the Map grows for the entire run even though Apex programmers expect "each batch chunk is fresh."
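The fix for the second suspect is to keep per-chunk data local so it becomes garbage when `execute()` returns, and carry only small aggregates in stateful fields. A sketch (class and variable names are illustrative):

```apex
// Sketch: keep only a small running total across chunks, not whole records.
public class AccountTotals implements Database.Batchable<SObject>, Database.Stateful {
    Integer processed = 0; // a scalar survives across chunks cheaply

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Name FROM Account');
    }
    public void execute(Database.BatchableContext bc, List<Account> scope) {
        // Local collection: eligible for GC as soon as this chunk finishes.
        Map<Id, Account> working = new Map<Id, Account>(scope);
        processed += working.size();
    }
    public void finish(Database.BatchableContext bc) {
        System.debug('Processed ' + processed + ' accounts');
    }
}
```

If you genuinely need cross-chunk record data, persist it to a custom object or Platform Cache per chunk instead of holding it in heap.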
Five fixes, in order of bluntness
- Trim the SELECT. Drop fields you don't actually use. This is the cheapest win.
- Stream with SOQL `for` loops. The runtime fetches in chunks of 200; you don't hold all rows at once: `for (Case c : [SELECT Id, Description FROM Case WHERE IsClosed = false]) { process(c); }`
- Process in Batch Apex, where you get 12 MB and chunks are isolated.
- Null out variables when you're done with them. Apex GC is real; setting `bigList = null;` lets it free.
- Move heavy work to platform events with subscribers that consume in their own transactions.
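The SOQL `for`-loop fix is worth seeing in its list form, which hands you the 200-record chunks explicitly and lets you do DML once per chunk. A sketch; `process` is a placeholder for your per-record logic, not a standard method:

```apex
// Sketch: stream open cases in 200-record chunks.
for (List<Case> chunk : [SELECT Id, Description
                         FROM Case WHERE IsClosed = false]) {
    for (Case c : chunk) {
        process(c); // hypothetical per-record handler
    }
    // chunk becomes garbage after each iteration; heap stays ~200 records deep
}
```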
A diagnostic that actually helps
Apex tracks heap as you go:
System.debug('Heap: '
+ Limits.getHeapSize() + ' / ' + Limits.getLimitHeapSize());
Print it before and after the suspicious block; the delta is what that block cost you. Watch out: test classes run against tiny data volumes, so a passing test doesn't prove production-sized data fits.
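A small helper makes the before/after pattern less noisy in the log. A sketch; `HeapProbe` is a made-up name:

```apex
// Sketch: tag heap readings so deltas are easy to spot in the debug log.
public class HeapProbe {
    public static void log(String label) {
        System.debug(label + ' heap: ' + Limits.getHeapSize()
                     + ' / ' + Limits.getLimitHeapSize());
    }
}
// Usage:
// HeapProbe.log('before query');
// ... the suspicious block ...
// HeapProbe.log('after query');
```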
A common false positive
If you're seeing this error on the first chunk of a batch Apex run, the issue probably isn't your batch logic but how start() produces its result. The result set behind a Database.QueryLocator can be huge, yet the locator streams it; only what you cache yourself in instance variables sticks around between chunks. If start() returns 5 million rows through a locator, you're fine — you only blow up if you materialize them yourself with Database.query(...) into a List.
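The difference in code, as a sketch of the two start() shapes (queries are illustrative):

```apex
// Safe: the QueryLocator streams; rows are fetched per chunk.
public Database.QueryLocator start(Database.BatchableContext bc) {
    return Database.getQueryLocator('SELECT Id FROM Account');
}

// Risky alternative: Database.query() materializes the whole result as a
// List before the first chunk runs. With millions of rows this blows the
// heap before execute() is ever called.
public Iterable<SObject> start(Database.BatchableContext bc) {
    return (List<SObject>) Database.query('SELECT Id FROM Account');
}
```

A batch class defines only one start(); the two are shown together here for contrast.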
