Salesforce Dictionary - Free Salesforce Glossary

STORAGE_LIMIT_EXCEEDED: storage limit exceeded

Your org has hit its data storage or file storage cap. New record creation fails until you free space or buy more storage. Audit which objects are eating storage; archive or delete what you don't need.

Also seen as: STORAGE_LIMIT_EXCEEDED · storage limit exceeded · Data storage limit reached · File storage limit

Salesforce orgs have hard caps on data and file storage, allocated by edition + per-user license:

| Storage type | Per-user allocation | Counts against |
|---|---|---|
| Data Storage | 20 MB per user (Enterprise+) | Most records (Account, Contact, custom objects, etc.) |
| File Storage | ~1 GB per user (Enterprise+) | Files, Attachments, ContentVersion bodies |

Once you hit the cap, every insert fails with this error. Existing data is safe — you can still query and update records, just not insert new ones.
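Before opening Setup, you can read the same numbers programmatically. A sketch from anonymous Apex using the System.OrgLimits class; the DataStorageMB and FileStorageMB key names are the standard limit names exposed by that API:

```apex
// Read current vs. maximum storage from the OrgLimits API
Map<String, System.OrgLimit> limitsMap = OrgLimits.getMap();
System.OrgLimit dataStorage = limitsMap.get('DataStorageMB');
System.OrgLimit fileStorage = limitsMap.get('FileStorageMB');
System.debug('Data storage: ' + dataStorage.getValue() + ' / ' + dataStorage.getLimit() + ' MB');
System.debug('File storage: ' + fileStorage.getValue() + ' / ' + fileStorage.getLimit() + ' MB');
```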

Diagnose: where is the storage going?

Setup → Storage Usage lists per-object storage. The biggest consumers are usually:

  1. EmailMessage (Email-to-Case captures every reply forever)
  2. Task and Event (activity history accumulates)
  3. CaseHistory, AccountHistory (field history tracking)
  4. ContentVersion (file uploads — under File Storage)
  5. Custom objects with high write volume (logs, audit trails)
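To put rough numbers on these suspects from anonymous Apex, a sketch (most record types are billed at ~2 KB of data storage each; note that COUNT() rows count against the 50,000-row query limit, so fall back to the Storage Usage page for very large objects):

```apex
// Approximate data-storage footprint of common storage hogs.
// Most records cost ~2 KB of data storage regardless of field contents.
for (String objName : new List<String>{'EmailMessage', 'Task', 'Event'}) {
    Integer n = Database.countQuery('SELECT COUNT() FROM ' + objName);
    System.debug(objName + ': ' + n + ' records (~' + (n * 2 / 1024) + ' MB)');
}
```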

For a custom-object cleanup:

SELECT COUNT(Id) FROM My_Log__c WHERE CreatedDate < LAST_N_DAYS:90

If that count is in the millions, archiving old rows frees enormous storage.

Fix 1: archive old data

Move old records to Big Objects (which don't count against the standard storage cap, but have limited query capabilities) or to an external warehouse (Snowflake, BigQuery via Sync Out).

A simple Apex job:

public class ArchiveOldLogs implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Select logs older than 180 days; adjust the window to your retention policy
        return Database.getQueryLocator([
            SELECT Id FROM My_Log__c WHERE CreatedDate < LAST_N_DAYS:180
        ]);
    }
    public void execute(Database.BatchableContext bc, List<My_Log__c> scope) {
        // Copy scope to your archive target (Big Object, external warehouse) first,
        // then delete — and hard-delete, so the storage is reclaimed immediately
        // instead of sitting in the recycle bin for 15 days
        delete scope;
        Database.emptyRecycleBin(scope);
    }
    public void finish(Database.BatchableContext bc) { }
}

Run quarterly. Storage grows back over the next quarter; rinse and repeat.
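Kicking it off and putting it on a quarterly cadence looks like this — a sketch, where the scheduler class name and cron expression are illustrative:

```apex
// One-off run from anonymous Apex: 2,000 records per execute() chunk
Database.executeBatch(new ArchiveOldLogs(), 2000);

// Saved as its own class: a Schedulable wrapper for the quarterly run
public class ArchiveOldLogsScheduler implements Schedulable {
    public void execute(SchedulableContext sc) {
        Database.executeBatch(new ArchiveOldLogs(), 2000);
    }
}
// 2:00 AM on the 1st of January, April, July, and October
// System.schedule('Archive old logs', '0 0 2 1 1,4,7,10 ?', new ArchiveOldLogsScheduler());
```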

Fix 2: empty the recycle bin

Deleted records sit in the recycle bin for 15 days, counting against storage. Empty them sooner:

Database.emptyRecycleBin([SELECT Id FROM My_Log__c WHERE IsDeleted = true ALL ROWS]);

Or via the UI: open the Recycle Bin (via the App Launcher in Lightning) and use Empty Org Recycle Bin.

Fix 3: buy more storage

Salesforce Account Executives sell additional storage in 50 GB / 100 GB blocks. Cheaper for some orgs than building an archive pipeline, especially if storage growth is bursty (annual events generate a lot of activity records).

Fix 4: shrink the bloat objects

If the bloat is EmailMessage, retention rules can auto-purge old email after N days. Setup → Email → Email-to-Case → Email Retention.

If it's field history, the Field Audit Trail add-on (extra cost) moves history records older than 18 months out of the live storage pool into an archive.

A common surprise

Attachment (legacy file model) and ContentVersion (modern Files model) both count against File Storage but accumulate independently. Migrating from Attachment to Files doesn't free up the Attachment storage automatically — you have to delete the old Attachments in a separate pass.
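A quick way to see whether that cleanup pass is still owed — a sketch; on orgs with a very large Attachment count, the COUNT() rows can exceed query limits:

```apex
// How many legacy Attachments are still consuming File Storage?
Integer remaining = [SELECT COUNT() FROM Attachment];
System.debug(remaining + ' legacy Attachments still stored');
```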
