# REQUEST_RUNNING_TOO_LONG: Your request was running for too long
An API request hit the platform's hard cap on request duration (120 seconds for most APIs). The platform interrupts the request and the client receives this error. It almost always means the work needs to move to an async mechanism (Bulk API, Batch Apex, queueables) or be split into smaller requests.
Also seen as: `REQUEST_RUNNING_TOO_LONG` · `request was running for too long` · `REQUEST_RUNNING_TOO_LONG: Your request`
Salesforce's REST and SOAP APIs have a per-request duration cap that varies by endpoint:
| API | Per-request limit |
|---|---|
| REST single-record CRUD | 120s |
| Bulk API 2.0 batch | 600s |
| SOAP Apex callout | 120s |
| Composite REST | 120s for all sub-requests combined |
| Streaming API event delivery | depends on subscription type |
When a request hits the cap, the platform aborts it and returns `REQUEST_RUNNING_TOO_LONG`.
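On the wire, the REST API reports the error as an entry in its JSON error array. A minimal client-side detection sketch in Python (the response body below is illustrative, but the `errorCode`/`message` fields follow the standard REST error format):

```python
import json

def is_request_too_long(response_body: str) -> bool:
    """Return True if a Salesforce REST error body contains REQUEST_RUNNING_TOO_LONG."""
    try:
        errors = json.loads(response_body)
    except ValueError:
        return False
    if not isinstance(errors, list):
        return False
    return any(e.get("errorCode") == "REQUEST_RUNNING_TOO_LONG" for e in errors)

# Illustrative body, shaped like the standard REST error array
body = '[{"message": "Your request was running for too long", "errorCode": "REQUEST_RUNNING_TOO_LONG"}]'
print(is_request_too_long(body))  # True
```

A client that recognizes this code can react specifically: retry with a smaller payload or reroute to an async API, instead of blind retries that will time out again.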
## When a request takes too long
Three usual suspects, in order of frequency:
### 1. A trigger fires off a slow workflow
The "request" you sent took 1 second of network and 119 seconds of synchronous workflow / flow / Apex trigger cascade on the receiving side. The slow part is server-side, after the platform received your call.
Diagnose: enable a debug log on the integration user before the failing request, reproduce it, and look at the CUMULATIVE_LIMIT_USAGE section at the bottom of the log. Even if CPU time is close to its 10s synchronous limit, a 120s request means most of the wall-clock time is being spent in the database (DML cascades).
Fix: identify the slow trigger / flow / Apex method, optimise (bulkify queries, async-ify side effects).
### 2. A large insert/update with too many records
```apex
List<Account> accounts = new List<Account>();
for (Integer i = 0; i < 50000; i++) {
    accounts.add(new Account(Name = 'A' + i));
}
insert accounts; // exceeds the time cap on a single request
```
50,000 inserts in a single synchronous DML statement often exceed 120s of combined CPU and database time (inside Apex, the 10,000-row DML governor would trip first anyway). Use the Bulk API or chunk into batches:
```apex
List<Account> chunk = new List<Account>();
for (Integer i = 0; i < accounts.size(); i++) {
    chunk.add(accounts[i]);
    if (chunk.size() == 200 || i == accounts.size() - 1) {
        insert chunk; // 200 at a time (Apex List has no subList())
        chunk.clear();
    }
}
```
Or for true scale, use Bulk API 2.0 — it ships the data, runs the inserts async, and you poll for completion.
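Client-side, the Bulk API 2.0 ingest flow is: create a job, upload CSV data, mark it `UploadComplete`, then poll. A sketch of the CSV-building step in Python (the endpoint path in the comment uses the standard ingest resource; the API version and the surrounding HTTP calls are illustrative and omitted):

```python
import csv
import io

def records_to_csv(records: list[dict]) -> str:
    """Serialize records to the CSV body a Bulk API 2.0 ingest job expects."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

rows = [{"Name": "A0"}, {"Name": "A1"}]
payload = records_to_csv(rows)
# PUT payload to /services/data/v58.0/jobs/ingest/<jobId>/batches, then
# PATCH the job state to UploadComplete and poll until it finishes.
print(payload.splitlines()[0])  # Name
```

Because the insert work happens server-side and asynchronously, your own request never holds a connection open long enough to hit the 120-second cap.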
### 3. A SOSL or aggregate query against a huge dataset
```sosl
FIND {anything} IN ALL FIELDS RETURNING Account, Contact, Lead, Opportunity, Case
```
Without bounds, this can scan tens of millions of rows. The platform tries; eventually times out.
Fix: add a WHERE and a LIMIT. SOSL's RETURNING clause supports those:
```sosl
FIND {anything} IN ALL FIELDS RETURNING Account(Id LIMIT 100)
```
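From a client, SOSL goes through the REST search resource as a URL-encoded `q` parameter. A minimal sketch of building that request (the `/services/data/vXX.X/search/` path is the standard search resource; the version number and instance host are illustrative):

```python
from urllib.parse import urlencode

def search_url(instance: str, sosl: str) -> str:
    """Build a REST search URL carrying a SOSL query in the q parameter."""
    return f"https://{instance}/services/data/v58.0/search/?" + urlencode({"q": sosl})

url = search_url("example.my.salesforce.com",
                 "FIND {anything} IN ALL FIELDS RETURNING Account(Id LIMIT 100)")
print("LIMIT+100" in url)  # True
```

Keeping the `LIMIT` inside the SOSL itself (rather than truncating results client-side) is what actually bounds the server-side scan.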
## When the slow path is async already
Async API jobs (Bulk API, Batch Apex) have their own per-batch caps. If you got REQUEST_RUNNING_TOO_LONG from a Bulk API poll, the underlying job batch hit its own 600-second cap. Reduce the batch size:
```apex
Database.executeBatch(new MyBatch(), 50); // small chunks of 50, not 2000
```
Each chunk gets a fresh 600-second budget. Smaller chunks are less likely to time out individually; more chunks mean the overall job takes longer, but each one succeeds.
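Whichever async mechanism you pick, poll for completion with backoff rather than hammering the status endpoint. A sketch with an injected status check (`get_state` is a hypothetical stand-in for a GET on the job-status endpoint; the terminal state names match Bulk API 2.0 job states):

```python
def wait_for_job(get_state, max_polls: int = 10) -> str:
    """Poll get_state() with exponentially growing delays until a terminal state."""
    delay = 1.0
    for _ in range(max_polls):
        state = get_state()
        if state in ("JobComplete", "Failed", "Aborted"):
            return state
        # time.sleep(delay) in real code; omitted so the sketch runs instantly
        delay = min(delay * 2, 60.0)
    return "TimedOut"

states = iter(["UploadComplete", "InProgress", "InProgress", "JobComplete"])
print(wait_for_job(lambda: next(states)))  # JobComplete
```

The poll requests themselves are cheap and finish well inside the cap; only the job's own batches consume the 600-second budgets.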
## The Bulk API 2.0 specific case
Bulk API 2.0 jobs have a 24-hour processing cap. If a job stays `InProgress` for 24 hours without completing, the platform aborts it. This sounds extreme, but it happens with `JobType=Query` over multi-billion-row tables. Either:
- Filter the query
- Run multiple smaller jobs that each cover a partition
- Use Composite Query API for things truly bigger than a single Bulk API job
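Partitioning can be as simple as slicing the query's date range and submitting one smaller query job per slice. A sketch that generates per-month WHERE clauses (the field name `CreatedDate` and the monthly granularity are just examples):

```python
from datetime import date

def month_partitions(start: date, end: date, field: str = "CreatedDate") -> list[str]:
    """Return one WHERE clause per calendar month in [start, end)."""
    clauses = []
    y, m = start.year, start.month
    while date(y, m, 1) < end:
        ny, nm = (y + 1, 1) if m == 12 else (y, m + 1)
        clauses.append(f"{field} >= {date(y, m, 1):%Y-%m-%d}T00:00:00Z "
                       f"AND {field} < {date(ny, nm, 1):%Y-%m-%d}T00:00:00Z")
        y, m = ny, nm
    return clauses

parts = month_partitions(date(2024, 1, 1), date(2024, 4, 1))
print(len(parts))  # 3
```

Each partition becomes its own job with its own 24-hour budget, and a failed slice can be retried without rerunning the whole extract.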
