Data Cloud Is Now Data 360: What Actually Changed (and Why It Matters)
From Customer 360 Audiences to Genie to Data Cloud to Data 360 — the full naming history, the real new features, and the migration plan for 2026.

TL;DR
- Data Cloud was renamed Data 360 at Dreamforce '25 (October 2025) — same product, expanded scope.
- The headline new features: Zero-Copy Federation (query Snowflake/Databricks/BigQuery in place), Tableau Semantics (a shared semantic layer between Tableau and Data 360), Intelligent Context (RAG-style grounding for Agentforce), Activation-Triggered Flows (events fire on segment changes), and the Data 360 Lakehouse (a managed Iceberg lake under your org).
- Pricing is credit-based, not per-record. Most teams that scale fast underestimate ingest credits and overestimate query credits.
- This is the data foundation that grounds every Agentforce 360 agent. If you're shipping AI on Salesforce in 2026, you're using Data 360 — even if you don't realize it.
If you've been on the Salesforce platform for more than a year or two, you've watched this product get renamed roughly every 18 months. Customer 360 Audiences became CDP became Marketing Cloud CDP became Genie became Data Cloud — and as of October 2025, Data 360. The renames are funny in isolation and frustrating when you're trying to keep documentation current. But each one matched a real architectural shift, and the Data 360 rename is no exception.
This guide walks through the full naming history, what actually changed in October 2025 versus what's just rebranding, the credit pricing model that catches teams off guard, and what existing Data Cloud customers must do to take advantage of the new features.
The naming history: from CDP to Data 360
Quick reference, because this comes up in every architecture review.
| Era | Brand name | What it actually was |
|---|---|---|
| 2018–2020 | Customer 360 Audiences | Marketing-only CDP. Limited identity resolution. Salesforce's first attempt at a unified profile. |
| 2020–2021 | Salesforce CDP | Same engine, broader pitch. Identity resolution improved. Still marketing-led. |
| 2021–2022 | Marketing Cloud CDP | Re-bundled into the Marketing Cloud SKU. |
| 2022–2023 | Genie | Renamed at Dreamforce '22. Real-time data layer rebuilt on a new lakehouse. The first version that wasn't marketing-only. |
| 2023–2025 | Data Cloud | Genie + standardized data model (DMO) + activation across all clouds. The version most current customers know. |
| Oct 2025 → | Data 360 | Data Cloud + Zero-Copy + Tableau Semantics + the Lakehouse + Intelligent Context. The data foundation for Agentforce 360. |
The pattern: every rename added scope. Data 360 is the most ambitious to date because it's pitched not just as "your customer data platform," but as the unified data layer underneath every product Salesforce sells — including the AI agents.
What actually changed in October 2025
Five real product changes shipped under the Data 360 banner. None are pure marketing.
1. Zero-Copy Federation
The single biggest change. Historically, Data Cloud worked by ingesting data — you stood up a Data Stream from Snowflake or S3 or Salesforce, the platform copied the data into Data Cloud's storage, and you queried it there. Two storage systems. Two copies. Two reconciliation problems.
Zero-Copy Federation lets Data 360 query the source system directly, in place, with no copy. You point Data 360 at a Snowflake table, a Databricks Delta table, a BigQuery dataset, or an Iceberg lake; you map it to a Data Model Object (DMO); and queries run as federated SQL against the source. Identity resolution and calculated insights still work — they're computed lazily over the federated view.
When to use it: large analytical tables you don't want to duplicate, datasets governed by another platform, sources where freshness matters more than query latency.
When not to use it: real-time activation paths (federated queries can be slow), small dimensional tables, anything where the source system's uptime is shakier than Data 360's.
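The when/when-not criteria above can be folded into a rough decision helper. This is a sketch of the reasoning, not a Salesforce API — the `SourceTable` shape, thresholds, and field names are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SourceTable:
    """Illustrative description of a candidate table (not a Salesforce API)."""
    name: str
    size_gb: float
    used_in_realtime_activation: bool
    governed_externally: bool
    source_uptime_pct: float  # e.g. 99.9

def recommend(table: SourceTable, data360_uptime_pct: float = 99.9) -> str:
    """Rough encoding of the when/when-not criteria for Zero-Copy Federation."""
    # Real-time activation paths: federated queries can be slow -> ingest.
    if table.used_in_realtime_activation:
        return "ingest"
    # Shaky source uptime: a federated view is only as available as its source.
    if table.source_uptime_pct < data360_uptime_pct:
        return "ingest"
    # Large or externally governed tables are the best federation candidates.
    if table.size_gb >= 100 or table.governed_externally:
        return "federate"
    # Small dimensional tables: cheap to copy, fast to query locally.
    return "ingest"

orders = SourceTable("snowflake.orders", 2048, False, True, 99.95)
print(recommend(orders))  # large, externally governed -> federate
```

The exact size threshold is a judgment call per org; the point is that the decision is mechanical once you've inventoried each source's size, governance, and uptime.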
2. Tableau Semantics
Tableau and Data 360 now share a single semantic layer. A measure defined once — "Lifetime Customer Value" — surfaces in Tableau dashboards, in Agentforce prompts, in segment definitions, and in Calculated Insights. Before this, the same metric had three or four definitions across the stack and the numbers diverged.
This is foundational. The reason your CFO's "revenue" doesn't match the dashboard's "revenue" is usually that the same word means three things in three places. Tableau Semantics centralizes the definition.
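The "define once, consume everywhere" pattern is easy to sketch in miniature. Nothing below is the Tableau Semantics API — it just shows the shape of the fix: one canonical definition that every surface reads, instead of three surfaces each re-deriving the formula.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Measure:
    """One canonical metric definition, consumed everywhere (illustrative)."""
    name: str
    expression: str  # the single source-of-truth formula
    unit: str

# Defined once...
LTV = Measure("Lifetime Customer Value",
              "SUM(order_total) - SUM(refund_total)", "USD")

# ...and every consumer reads the same definition rather than redefining it.
def tableau_field(m: Measure) -> str:
    return f"{m.name} := {m.expression}"

def agent_grounding(m: Measure) -> dict:
    return {"measure": m.name, "formula": m.expression, "unit": m.unit}

# Both surfaces agree by construction, not by convention.
assert tableau_field(LTV).split(" := ")[1] == agent_grounding(LTV)["formula"]
```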
3. Intelligent Context
The Data 360 piece that wires it to Agentforce. Intelligent Context exposes Data 360 records and Calculated Insights to the Atlas Reasoning Engine as grounding — the RAG-equivalent for structured data. When an agent asks "what's this customer's lifetime value?", Atlas queries Intelligent Context, pulls the value through the Einstein Trust Layer, and grounds its response.
The Agentforce Data Library — the unstructured-document side — sits next to Intelligent Context and provides the same plumbing for PDFs, knowledge articles, and uploaded files.
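A toy sketch of the structured-grounding flow: the agent asks for a metric, the context layer resolves it against governed records, and an access check gates what comes back. The data, role names, and `ground` function are hypothetical — this illustrates the Trust Layer's FLS-style enforcement, not the Intelligent Context API.

```python
# Governed records and a field-level allow-list (both illustrative).
PROFILES = {"cust-001": {"lifetime_value": 4820.50, "churn_risk": 0.12}}
FIELD_ACCESS = {"support_rep": {"lifetime_value"}}  # FLS-style allow-list

def ground(customer_id: str, field: str, role: str) -> float:
    """Return a grounding value only if the calling role may see the field."""
    if field not in FIELD_ACCESS.get(role, set()):
        raise PermissionError(f"{role} may not ground on {field}")
    return PROFILES[customer_id][field]

print(ground("cust-001", "lifetime_value", "support_rep"))  # 4820.5
# ground("cust-001", "churn_risk", "support_rep")  -> PermissionError
```

The key property, mirrored in the real platform: an agent acting for a low-privilege user can only ground on what that user could see.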
4. Activation-Triggered Flows
A small but meaningful change for admins. Previously, Data Cloud activations pushed segment data outbound (to Marketing Cloud, S3, an ad platform). To run logic when a new customer entered a segment, you had to poll. Activation-Triggered Flows fires a Flow the instant a record enters or exits an activation, with the segment payload as input.
Common uses: notify a CSM when a new churn-risk segment fires; auto-create a high-priority Case when a customer enters the "fraud signal" segment; kick off a personalized email cadence the moment someone hits a high-intent threshold.
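The common uses above share one event shape: a record enters or exits a segment and a handler fires with the payload. A minimal sketch of that dispatch — segment names, the payload fields, and the handler itself are hypothetical:

```python
def on_activation_event(event: dict) -> str:
    """Dispatch on a segment enter/exit event (payload shape is assumed)."""
    segment, action = event["segment"], event["action"]  # action: "enter" | "exit"
    if segment == "churn_risk" and action == "enter":
        return f"notify CSM about {event['customer_id']}"
    if segment == "fraud_signal" and action == "enter":
        return f"create high-priority Case for {event['customer_id']}"
    return "no-op"

print(on_activation_event(
    {"segment": "churn_risk", "action": "enter", "customer_id": "cust-042"}
))  # notify CSM about cust-042
```

In the real feature this logic lives in a Flow, not code; the sketch just shows why event-driven beats polling — the handler runs once, the instant membership changes.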
5. Data 360 Lakehouse
Salesforce now ships a managed Apache Iceberg lakehouse as part of Data 360. Iceberg is the table format that lets multiple compute engines — Snowflake, Databricks, Spark, Trino, Tableau, Atlas — read and write the same underlying files without conflict. The Data 360 Lakehouse stores all your ingested data as Iceberg by default, so you can run external compute against it without exporting.
This is the change that quietly matters most over a 3–5 year horizon. It means Salesforce data is no longer locked inside Salesforce — it's in an open table format your data team can use from any tool.
How Data 360 powers Agentforce 360
If you've read our Agentforce 360 guide, you know the Atlas Reasoning Engine needs grounding. That grounding comes from Data 360 along two paths:
- Structured grounding — DMOs, Calculated Insights, and federated tables surfaced through Intelligent Context. Atlas queries them when an agent needs a number, a record, or a list.
- Unstructured grounding — knowledge articles, PDFs, and uploaded docs in the Agentforce Data Library, indexed and retrieved via vector search.
Both paths run through the Einstein Trust Layer, which enforces the running user's Identity Resolution scope and Field-Level Security. An agent calling on behalf of a low-privilege rep can only ground on data that rep could see.
The practical implication: if your Agentforce agents feel "dumb" or "off topic," the fix is usually in your Data 360 setup — incomplete identity resolution, stale Calculated Insights, missing data streams — not in the agent prompt.
Credit-based pricing — the real mental model
Data 360 charges in credits, the same unit Salesforce uses across its AI products. The mental model that's worked best for our team:
| Credit category | What it pays for | Where teams blow the budget |
|---|---|---|
| Data Service Credits | Ingestion, identity resolution, calculated insights, segmentation runs | Real-time data streams. A streaming source can cost 50–100× a daily batch. |
| Data Platform Credits | Storage (per TB-month), federated query throughput | Federated queries against slow sources. Each query meters. |
| Activation Credits | Outbound activation events | Marketing teams firing big segments to many destinations. |
There's also a flat platform fee for Data 360 itself, separate from the credits. The platform fee is your "license"; credits are your variable usage.
Two gotchas that sting almost every customer:
- Real-time costs disproportionately more than batch. A real-time stream (sub-second freshness) is roughly 5–10× the credit cost of an hourly batch and 50–100× a daily batch. Use real-time only where it pays back in business value (fraud detection, abandoned-cart recovery). Batch the rest.
- Identity resolution credits scale with match candidates, not record count. A poorly-tuned matching ruleset can balloon your bill 10× without changing record count. Tune match rules early. Audit them quarterly.
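The real-time-versus-batch multipliers above turn into budget numbers quickly. A back-of-envelope sketch — the baseline rate is a placeholder, and real rates come from your own SKU table, not from this code:

```python
# Placeholder baseline: credits per source per month at daily-batch frequency.
DAILY_BATCH_CREDITS = 100
FREQ_MULTIPLIER = {
    "daily": 1,
    "hourly": 10,      # roughly an order of magnitude over daily
    "real_time": 100,  # upper end of the 50-100x range quoted above
}

def monthly_stream_credits(freq: str, sources: int = 1) -> int:
    """Estimated monthly stream credits under the assumed multipliers."""
    return DAILY_BATCH_CREDITS * FREQ_MULTIPLIER[freq] * sources

# Moving five sources from real-time to hourly batch:
before = monthly_stream_credits("real_time", sources=5)  # 50000
after = monthly_stream_credits("hourly", sources=5)      # 5000
print(f"saved {before - after} credits/month")           # saved 45000 credits/month
```

The shape of the math, not the placeholder rate, is the takeaway: frequency is a multiplier, so downgrading a handful of streams dominates most other optimizations.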
What existing Data Cloud customers must do
If you're already on Data Cloud, the rename to Data 360 is automatic. You don't need to migrate. But to take advantage of the new features:
- Inventory your data streams. List every stream, its source, its frequency, its DMO target. You'll repeatedly reference this list during the next steps.
- Identify federation candidates. For every stream pointing at Snowflake, Databricks, or BigQuery, ask: "Could this be Zero-Copy?" Move the largest tables first — that's where the storage savings are real.
- Reset your semantic layer. Tableau Semantics is opt-in. If you have measures defined in 5 places, pick the canonical one (usually Tableau or your dbt project), expose it through Tableau Semantics, and start retiring duplicates. This is a 1–2 quarter project; don't try to boil the ocean.
- Wire Intelligent Context. If you have any Agentforce agents — or plan to — pre-load the DMOs and Calculated Insights they need into Intelligent Context. The default mapping is usable but not optimal.
- Audit credit consumption. Pull last 90 days of credit usage from your Data Cloud Cost Explorer. Look for anomalies. Tighten match rules and stream frequencies before turning on new features.
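For the credit audit in the last step, even a crude anomaly pass over the 90-day export pays off. The sketch below assumes a simple format (one daily total per row) and flags days well above the overall median — the kind of spike a runaway segmentation job or untuned match ruleset produces:

```python
from statistics import median

def flag_anomalies(daily_credits: list[float], factor: float = 2.0) -> list[int]:
    """Return indices of days whose spend exceeds factor x the median day."""
    m = median(daily_credits)
    return [i for i, c in enumerate(daily_credits) if c > factor * m]

usage = [110, 95, 105, 100, 480, 98, 102]  # day 4: a runaway job?
print(flag_anomalies(usage))               # [4]
```

Flagged days are leads, not answers — cross-reference each against stream schedules, CI refreshes, and match-rule changes before tuning anything.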
Common gotchas (and how to avoid them)
A short list of mistakes I see almost every team make on a new Data 360 project.
- Treating DMOs as "just tables." A Data Model Object is more than a table — it has identity rules, field types tied to the harmonized model, and downstream effects on activation and grounding. Map carefully.
- Skipping identity resolution. Every team knows they need it; many teams get to "load data" and skip the matching ruleset. Without identity resolution, the Customer 360 profile is fragmented and your Calculated Insights lie.
- Calculated Insights as full-table scans. A poorly-defined CI can scan billions of rows on every refresh. Use partition keys. Cap row counts during dev.
- Activations without rate limits. A segment of 10M people, fired hourly, will hit downstream rate limits and look like an outage. Set sensible activation throttles.
- Forgetting that the Lakehouse is a security surface. Once data lands in Iceberg, your data team can read it from external tools. Make sure that matches your security posture and audit log requirements with Salesforce Shield.
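The activation-throttle gotcha comes down to windowing: cap outbound events per interval instead of firing the whole segment at once. A minimal sketch — the window size is illustrative, and the real limit belongs to whatever destination you're activating into:

```python
def batched_activation(record_ids: list[str], per_window: int = 1000):
    """Yield the segment in rate-limit-sized windows instead of all at once."""
    for i in range(0, len(record_ids), per_window):
        yield record_ids[i:i + per_window]

segment = [f"cust-{n}" for n in range(2500)]
windows = list(batched_activation(segment))
print(len(windows), len(windows[0]))  # 3 1000
```

Pair each window with the destination's documented rate limit and a backoff on errors, and a 10M-record activation stops looking like an outage.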
The architecture cheat sheet
If you're explaining Data 360 to a CTO or new architect on the team, this is the picture:
- Sources — operational systems, data warehouses, file stores, MuleSoft endpoints, Salesforce orgs.
- Ingestion or Federation — copy data in or federate in place. Choice is per-source.
- DMOs (Data Model Objects) — the harmonized, governed shape of every entity (Customer, Order, Lead, etc.).
- Identity Resolution — collapses fragmented records into a single profile per real-world entity.
- Calculated Insights — derived metrics computed across DMOs, refreshed on a schedule.
- Activation — outbound: push segments and events to other systems; or inbound: provide grounding for Agentforce.
- Atlas Reasoning Engine — uses Intelligent Context (and the Trust Layer) to ground every agent response.
The whole thing runs on Hyperforce by default for new orgs.
Frequently asked questions
Is Data 360 a different product from Data Cloud? No. Same engine, expanded scope and rebrand. Existing Data Cloud orgs auto-upgrade.
Do I need Data 360 for Agentforce? For trivial agents, no — you can ground on standard Salesforce data. For anything involving unstructured content, cross-system identity, or computed metrics, yes.
Does Zero-Copy Federation work with anything besides Snowflake? Yes — Snowflake, Databricks, BigQuery, and any Iceberg lake out of the box. More sources are added each release.
What happens to data already in Data Cloud when I move to federation? You can keep it or you can drop the stream and replace it with a federation. Salesforce won't auto-migrate; you choose per-source.
Can Tableau Semantics work without Tableau? Confusingly, yes. The "Tableau" branding refers to where the semantic layer originated — but it's now a Data 360 capability that any client can consume, including Apex, Flow, and Agentforce. You don't need a Tableau license to define measures.
Where do I learn the credit pricing? Pull your Cost Explorer and read it side-by-side with the Data 360 pricing PDF. Don't trust the marketing slide — read the actual SKU table.
What to read next
You should now have a clear picture of the rename, the new features, and the migration steps. To go deeper:
- Data Cloud — the dedicated glossary entry, kept in sync with the Data 360 rename.
- Data Stream, Calculated Insight, Identity Resolution — the three core building blocks.
- What Is Agentforce 360? — how Data 360 grounds the Atlas Reasoning Engine and the Einstein Trust Layer.
- Tableau — Salesforce's analytics layer, now sharing a semantic model with Data 360.
If you only act on one section of this article, make it the credit-audit step. Most teams discover their bill is 30–50% optimizable in the first hour of looking.