Starting in 2026, Atlassian will use your Jira and Confluence data to train its AI. Here’s what it means for Atlassian data privacy — and what to do now.
What’s changing
Starting August 17, 2026, Atlassian will begin using data from its cloud products — Jira, Confluence, Jira Service Management, and others — to train its AI offerings, including Rovo and Rovo Dev. This affects approximately 300,000 customers worldwide.
This is not a data breach, and it’s not necessarily illegal. But it is a meaningful policy change that requires organizations to understand what it means for their Atlassian data privacy posture — and to take action before it takes effect.
The settings to control this are already live today: Atlassian Administration → Security → Data contribution.
What data is collected — and what you can opt out of
Atlassian collects two distinct categories of data, each with a different risk profile and different opt-out availability.
Metadata is de-identified and aggregated — things like readability scores, story point values, task classifications, SLA metrics, and common search patterns. It does not include names, email addresses, health data, financial data, or location data. It’s statistical in nature.
In-app data is the actual content your organization creates: Confluence page titles and bodies; Jira issue titles, descriptions, and comments; and custom emoji and workflow names.
Here’s the critical distinction:
- In-app data opt-out: Available on all plans — Free, Standard, Premium, and Enterprise
- Metadata opt-out: Enterprise only
The key takeaway: every customer on every plan can turn off in-app data collection. That’s the sensitive category — the real content your teams produce — and it’s under your control regardless of which Atlassian plan you’re on. Metadata opt-out is limited to Enterprise, but given that metadata is de-identified and aggregated, it’s a relatively low-risk category for most organizations.
Where does the data go?
This is the layer Atlassian has been least transparent about, and it matters most for organizations with data residency requirements.
Atlassian’s sub-processor list (effective May 15, 2026) confirms that AI processing is not confined to the EU: contributed data can flow to the United States, including to OpenAI. Atlassian’s data residency offering covers data at rest — but it does not necessarily cover AI training pipelines. Atlassian has not publicly clarified whether EU-resident data is excluded from training, or whether enabling data residency overrides contribution settings.
Is a US transfer automatically illegal under GDPR? No. GDPR permits transfers to the US under the EU-US Data Privacy Framework (DPF) and Standard Contractual Clauses (SCCs), both of which Atlassian uses. However, the DPF is politically fragile and could be challenged again — as it was under Schrems II. Organizations that have built their compliance posture on DPF alone are in a less stable position than they may realize.
For EU customers in regulated industries where any transfer to a non-EU country is a hard line, this remains an open and unresolved question.
The legal shift that matters most
This is the most important layer for understanding why this change is more than a settings update — and why it has real Atlassian data privacy implications.
In the normal relationship, the customer is the data controller: the entity that decides what data is collected, why, and what happens to it. Atlassian is the data processor: it handles customer data strictly on the customer’s behalf, following the customer’s instructions.
Under GDPR, a processor has no right to use customer data for its own purposes. It can only do what the controller instructs.
When Atlassian uses customer data to train its own AI models, it’s pursuing its own commercial objectives — not executing a customer instruction. At that point, Atlassian is acting as a data controller in its own right. This shift has real legal consequences:
- Atlassian needs an independent lawful basis for this new processing purpose
- Your original consent to Atlassian providing the service doesn’t automatically extend to Atlassian training its own AI
- Atlassian’s Data Processing Agreement (DPA) needs to explicitly legitimize this new controller role
The current DPA language relies on vague “improve the Cloud Products” wording that predates generative AI — there’s no explicit consent mechanism for AI model training. The DPA’s effective date (August 17, 2026, the same day collection begins) suggests Atlassian is updating its legal documents in parallel with the rollout. DPOs and legal teams should review the updated DPA carefully before that date.
What to do before August 17
- All customers: Go to Atlassian Administration → Security → Data contribution and turn off in-app data collection. Do this before August 17. This option is available on every plan.
- EU customers with residency requirements: Ask Atlassian explicitly whether data residency settings exclude data from the AI training pipeline. Get the answer in writing.
- Regulated industries (financial services, public sector, healthcare): Involve your DPO. Review the updated DPA. Don’t rely on general reassurances.
- Enterprise customers: Both the metadata and in-app data contribution settings are off by default on Enterprise plans, meaning nothing is contributed unless an admin opts in. Verify and document your settings regardless.
Our read on this
The alarm being raised by some customers — “this is a clear GDPR breach” — is overstated for most practical situations. Customers who turn off in-app data are in a defensible position, and metadata collection, being de-identified and aggregated, is genuinely low-risk.
The more legitimate concerns are the controller/processor shift and the data residency gap. These are real legal questions that Atlassian hasn’t fully answered yet.
For most organizations: take the settings action now. One action in Atlassian Administration protects your most sensitive data across all plans. The window before August 17 is your opportunity.
For compliance-sensitive organizations: press Atlassian for written clarity before August 17. Don’t rely on general reassurances. Involve your DPO and legal counsel.
Seibert is not a law firm and this article does not constitute legal advice. Customers with specific compliance obligations — particularly those in regulated industries or subject to strict data residency requirements — should involve their own legal counsel or Data Protection Officer.
We’ve put together a full security and compliance briefing on Atlassian data privacy and the 2026 AI training policy change — covering all four layers of the issue in detail. Enter your email below to download it.