Salesforce Dictionary - Free Salesforce Glossary
Governor limits

System.LimitException: Too many query rows: 50001

A single transaction returned more than 50,000 rows from SOQL queries, combined across all queries in the transaction. This is different from the 100-query cap: that limit counts statements, this one counts rows. One badly bounded query can blow it.

Also seen as: Too many query rows: 50001 · Too many query rows · System.LimitException: Too many query rows

The 50,000-row cap is the platform's defence against a single transaction loading "everything." It's per-transaction across all queries combined. One query returning 30,000 rows + another returning 21,000 = boom.

The shape of the bug

// Probably more than 50,000 closed Cases org-wide.
List<Case> archived = [SELECT Id, Subject FROM Case WHERE IsClosed = true];

No LIMIT, no narrow WHERE. The runtime starts streaming rows; at 50,001 it throws.

The fixes by problem type

Fix 1: Add a LIMIT you actually meant

Most "give me everything" queries are really "give me the most recent N." Add the bound:

List<Case> recent = [
    SELECT Id, Subject FROM Case
    WHERE IsClosed = true
    ORDER BY ClosedDate DESC
    LIMIT 200
];

Fix 2: Tighten the WHERE

If you genuinely need all matching rows, rethink the filter. A scheduled job processing "every closed case" really means "every closed case I haven't processed yet" — add a flag field, query only the unprocessed ones, mark them as processed when done.
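A minimal sketch of that pattern, assuming a hypothetical Processed__c checkbox field on Case:

```apex
// Processed__c is a hypothetical custom checkbox marking rows already handled.
List<Case> todo = [
    SELECT Id, Subject FROM Case
    WHERE IsClosed = true AND Processed__c = false
    LIMIT 200
];
for (Case c : todo) {
    // ... do the real per-record work, then flag it done.
    c.Processed__c = true;
}
update todo; // one DML statement for the whole batch
```

Each run drains up to 200 unprocessed rows; repeated runs eventually cover everything without any single transaction going near the row limit.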

Fix 3: SOQL for loop

The SOQL for loop retrieves records in batches of 200 behind the scenes; only the current batch lives on the heap at a time:

for (Case c : [SELECT Id, Subject FROM Case WHERE IsClosed = true]) {
    process(c);
}

This still counts every row against the 50,000 governor, but it lets you work through large sets without blowing the heap. To get past 50,000 rows total, see Fix 4.
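The same loop can bind whole 200-record chunks instead of single records, which makes the batching explicit and keeps DML down to one statement per chunk (a sketch; the per-record work is a placeholder):

```apex
for (List<Case> chunk : [SELECT Id, Subject FROM Case WHERE IsClosed = true]) {
    for (Case c : chunk) {
        // per-record work goes here
    }
    update chunk; // one DML statement per 200-record chunk
}
```

Note the DML governors (150 statements, 10,000 rows per transaction) still apply, which is another reason large jobs end up in Batch Apex.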

Fix 4: Move to Batch Apex

Each execute() invocation in Batch Apex gets a fresh set of governor limits, including its own 50,000-row budget. The Database.QueryLocator returned from start() can cover up to 50 million records; the platform streams them to execute() in chunks of 200 by default.

public class ArchiveCaseBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([
            SELECT Id, Subject FROM Case WHERE IsClosed = true
        ]);
    }
    public void execute(Database.BatchableContext bc, List<Case> scope) {
        // process this chunk
    }
    public void finish(Database.BatchableContext bc) { }
}

This is the right answer for any "process all of X" job.
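Launching it is one line; the optional second argument is the scope size (200 by default, up to 2,000 for a QueryLocator):

```apex
Id jobId = Database.executeBatch(new ArchiveCaseBatch(), 2000);

// Optionally track progress via the AsyncApexJob record:
AsyncApexJob job = [
    SELECT Status, JobItemsProcessed, TotalJobItems
    FROM AsyncApexJob WHERE Id = :jobId
];
```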

A diagnostic worth running

System.debug('Rows so far: '
    + Limits.getQueryRows() + ' / ' + Limits.getLimitQueryRows());

Print it after each big query to find the culprit before you hit 50,001.

A subtle source: aggregate queries

// Counts as 1 query but the row count is the underlying matched rows, not 1.
Integer total = [SELECT COUNT() FROM Case WHERE IsClosed = false];

COUNT() returns a single result row, but as the comment above notes, every record included in the aggregation counts toward the limit, so counting more than 50,000 open Cases still throws. The same applies with GROUP BY:

List<AggregateResult> grouped = [
    SELECT Status, COUNT(Id) c FROM Case GROUP BY Status
];

This returns one row per distinct Status, which is small, but the records feeding the aggregation are what count: more than 50,000 matching Cases trips the limit regardless of how few groups come back. A LIMIT clause bounds the result rows, not the records scanned, so the real fix is a tighter WHERE clause (or Batch Apex) that keeps the aggregated set under 50,000.
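A sketch of bounding the aggregation with a date filter (the field and the 30-day window are illustrative; pick whatever genuinely limits the matched set):

```apex
// Bound the records feeding the aggregation, not just the result rows.
List<AggregateResult> recent = [
    SELECT Status, COUNT(Id) c
    FROM Case
    WHERE CreatedDate = LAST_N_DAYS:30
    GROUP BY Status
];
for (AggregateResult ar : recent) {
    System.debug(String.valueOf(ar.get('Status')) + ': ' + String.valueOf(ar.get('c')));
}
```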
