Salesforce Dictionary

OPERATION_TOO_LARGE: Aggregate query has too many rows for direct assignment, use FOR loop

An aggregate or relationship query returned more rows than Apex will assign to a `List<AggregateResult>` variable in one go. The platform tells you the answer in the message itself: use a SOQL `for` loop, which streams the result.

Also seen as: OPERATION_TOO_LARGE · Aggregate query has too many rows · use FOR loop · Aggregate query has too many rows for direct assignment

The exact threshold differs across query shapes (around 2,000 rows for aggregate queries, 50,000 for plain queries) but the cure is the same: stream instead of materialise.

The error in context

List<AggregateResult> results = [
    SELECT Owner.Name, COUNT(Id) c
    FROM Case
    GROUP BY Owner.Name
];                                   // 💥 if there are >2,000 distinct owners

The platform refuses to allocate a list of all 2,001+ rows at once. Convert to a SOQL for loop:

for (AggregateResult r : [
    SELECT Owner.Name, COUNT(Id) c
    FROM Case
    GROUP BY Owner.Name
]) {
    // Relationship fields come back under the leaf field name ('Name');
    // aliased aggregates come back under their alias ('c')
    String owner = (String) r.get('Name');
    Integer caseCount = (Integer) r.get('c');
    process(owner, caseCount);
}

Now the runtime fetches in chunks and yields each row to the loop body. You never hold all rows at once.

Why aggregate queries hit this limit sooner

Plain queries like `SELECT Id FROM Case` have a 50,000-row direct-assignment limit (the governor limit on total query rows per transaction). Aggregate queries — anything with `GROUP BY`, `COUNT()`, `SUM()`, and so on — are limited to roughly 2,000 rows for direct assignment, because each aggregate row carries more computed state than a plain row.
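To make the two ceilings concrete (whether either line actually throws depends, of course, on your data volumes):

```apex
// Direct assignment of a plain query: fine up to 50,000 rows
List<Case> plain = [SELECT Id FROM Case LIMIT 50000];

// Direct assignment of an aggregate query: fine only up to ~2,000
// aggregate rows — one row per distinct OwnerId here
List<AggregateResult> agg = [
    SELECT OwnerId, COUNT(Id) FROM Case GROUP BY OwnerId
];
```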

When the for loop isn't enough

If your processing logic itself accumulates state across rows (running totals, cross-row analysis), a streaming for loop works as long as the accumulated state fits in the heap. If 2 million distinct owners with running totals would blow the heap, you need Batch Apex. Note that `Database.getQueryLocator()` does not accept aggregate queries, so the usual pattern is to have `start()` return a query locator over the raw rows and do the aggregation yourself in `execute()`, carrying state between chunks with `Database.Stateful`.
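A minimal sketch of that stateful-batch pattern, assuming the goal is a per-owner Case count (the class name and the debug hand-off in `finish()` are illustrative):

```apex
// Hypothetical stateful batch that counts Cases per owner without ever
// holding more than one chunk of rows in memory at a time.
public class OwnerCaseCountBatch implements
        Database.Batchable<SObject>, Database.Stateful {

    // Survives between execute() invocations because of Database.Stateful
    private Map<Id, Integer> countsByOwner = new Map<Id, Integer>();

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Raw rows only: query locators do not accept GROUP BY
        return Database.getQueryLocator('SELECT OwnerId FROM Case');
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        for (SObject s : scope) {
            Id ownerId = (Id) s.get('OwnerId');
            Integer current = countsByOwner.get(ownerId);
            countsByOwner.put(ownerId, current == null ? 1 : current + 1);
        }
    }

    public void finish(Database.BatchableContext bc) {
        // countsByOwner now holds the full aggregate; hand it off here
        System.debug(countsByOwner.size() + ' distinct owners counted');
    }
}
```

Kick it off with `Database.executeBatch(new OwnerCaseCountBatch(), 2000);`. The accumulated map still has to fit in the heap at `finish()` time, but the row processing itself is chunked by the platform.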

A common surprise: GROUP BY on a high-cardinality field

Grouping by `OwnerId` in an org with 50,000 active users blows this limit as soon as more than 2,000 distinct owners appear in the results:

// Will hit OPERATION_TOO_LARGE
List<AggregateResult> r = [SELECT OwnerId, COUNT(Id) FROM Account GROUP BY OwnerId];

Add a `WHERE` clause to bound the result set, or stream with the for loop pattern.
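One way to bound it, assuming you only care about a known subset of owners (the role name in the filter is illustrative):

```apex
// Bound the grouping to a known subset so the result stays under 2,000 rows
Set<Id> teamIds = new Map<Id, User>(
    [SELECT Id FROM User WHERE UserRole.Name = 'Inside Sales' LIMIT 1000]
).keySet();

List<AggregateResult> counts = [
    SELECT OwnerId, COUNT(Id) total
    FROM Account
    WHERE OwnerId IN :teamIds
    GROUP BY OwnerId
];
```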

Plain queries hit a different ceiling

For non-aggregate queries, you'll hit `Too many query rows: 50001` first (see that page). That one is a governor limit on total rows per transaction, so a for loop alone won't bypass it — every streamed row still counts toward the 50,000. Batch or bound instead.
