What the EU AI Act means for creators (practical, not panicky)

Team TBM
Apr 21, 2026 · 7 min read

August 2, 2026 is the deadline most creators haven’t heard of yet. It’s the date when Article 50 of the EU AI Act becomes fully enforceable — the provision that puts transparency obligations on anyone using AI tools professionally.

Just over three months from now. Closer than it sounds.

But here’s what the headlines miss: for most individual creators, this regulation doesn’t demand a legal team or a compliance overhaul. It comes down to two practical habits — and knowing which of your tools already handle the heavy lifting for you.

Does this apply to you?

The EU AI Act groups people into roles. If you use AI tools in a professional or commercial context — paid client work, branded content, commercial campaigns — you’re classified as a deployer. That’s the term the law uses, and it’s the role that carries disclosure obligations.

If you use AI purely for personal, non-commercial purposes — making art for yourself, experimenting privately — you’re not a deployer under the Act. That use case is out of scope.

The other thing worth knowing: your location doesn’t change your obligations. The Act has extraterritorial reach modeled on GDPR. It follows the audience, not the creator’s address.

A US-based designer delivering to an EU client? In scope. A UK copywriter running ads that reach EU audiences? In scope. A hobbyist making AI art for personal use? Not in scope.

What the regulation actually asks

Article 50 splits responsibility cleanly between tool makers and creators — and that split matters.

Tool makers (providers) — Adobe, OpenAI, Midjourney, and their peers — are required to embed machine-readable provenance signals in their AI-generated outputs. This is the technical layer. It’s not your job to build it.

You, as a deployer, carry two targeted obligations:

1. Deepfake and synthetic media disclosure. If you use AI to generate realistic images, video, or audio depicting real-looking people, places, or scenarios, and you distribute that content, you need to add a visible disclosure that it’s AI-generated. This applies when distributing the work, not just when creating it.

2. AI-generated text for public-interest content. If you publish AI-generated text intended to inform the public on matters of public interest, you need to disclose it — unless a human genuinely reviewed it, rewrote it, and takes editorial responsibility for the output. Under current guidance (subject to the final Code of Practice expected June 2026), substantive editorial review and responsibility are understood to qualify — but the precise threshold has not been formally defined. Light proofreading does not count.

There’s also a creative exception: work that’s clearly artistic, satirical, or fictional needs only a light-touch note — not full labeling.

Most standard creative work — design, marketing content, branded copy, social graphics — sits in the low-risk tier and doesn’t trigger high-risk obligations. You don’t need to file conformity assessments or register as an AI provider.

Former MEP Marietje Schaake explains what the EU’s draft Code of Practice means in practice — useful context for the tool table and scenarios below.

Your tool, your obligation

Whether you need to add a visible disclosure label often depends on which tool you’re using. Some tools embed machine-readable provenance automatically — others don’t.

The technical standard behind this is called C2PA (Coalition for Content Provenance and Authenticity). When a tool embeds C2PA metadata in a generated file, it does the invisible, machine-readable part of compliance for you. Your job is to not strip it out.

Here’s where the main tools currently stand, according to C2PA Viewer’s updated February 2026 data:

Tool | Auto-labels for you? | What you need to do
Adobe Firefly | Yes | Don’t strip metadata on export
DALL-E 3 / OpenAI | Yes | Don’t strip metadata on export
Google Imagen | Yes | Don’t strip metadata on export
Midjourney | Unclear — see note | Add a visible “AI-generated” label
Canva AI | Unclear | Add a visible label to be safe
Stable Diffusion | No | Add a visible “AI-generated” label
ChatGPT / text tools | N/A | Review and own the output; document your process

Note: Midjourney’s C2PA support status is evolving — sources conflict on whether full credential embedding is active as of early 2026. Treat Midjourney outputs as requiring a manual visible disclosure label until Midjourney officially confirms full C2PA compliance. Check for updates closer to August.
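If you want a quick sanity check that your export pipeline didn’t strip the credentials, a crude byte-level scan can flag the problem. The sketch below is an illustrative stdlib-only heuristic, not a real validator: C2PA manifests are carried in JUMBF boxes (box type “jumb”, manifest label “c2pa”, and a “caBX” chunk in PNG), so their disappearance between the original and the exported file is a red flag. For authoritative verification, use the official c2patool or the Content Credentials Verify site.

```python
from pathlib import Path

# Byte signatures associated with C2PA storage: JUMBF box type "jumb",
# the "c2pa" manifest label, and PNG's "caBX" chunk. Their presence is
# only a heuristic; their absence after export suggests metadata was stripped.
C2PA_SIGNATURES = (b"jumb", b"c2pa", b"caBX")

def looks_like_c2pa(path: str) -> bool:
    """Return True if the file still contains C2PA-style byte signatures."""
    data = Path(path).read_bytes()
    return any(sig in data for sig in C2PA_SIGNATURES)
```

Run it on the file your tool produced and again on the file your export step produced; if the result flips from True to False, check your export settings before you deliver.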

Four scenarios, four quick answers

Scenario A — Designer using Midjourney for EU client work

Midjourney doesn’t reliably embed C2PA metadata (see table note above). Add a visible “AI-generated” disclosure to your deliverables. Keep a note of which tool you used. And brief your client — they’ll also need to stay compliant when they publish the work.

Scenario B — Copywriter using ChatGPT then editing heavily

If you genuinely reviewed the output, rewrote sections, exercised editorial judgment, and take responsibility for what’s published — you’re likely in the clear for the text-disclosure rule under current guidance. The key word is *genuinely*. Light proofreading doesn’t count. Substantive rewriting and editorial ownership does. Keep a brief note of your process to be safe. Note that the final Code of Practice (expected June 2026) will clarify what qualifies.

Scenario C — Marketer using Adobe Firefly or DALL-E 3 for brand social content

C2PA metadata is auto-embedded. Your main job is to not strip it during export — most tools make this a toggle in export settings, so check before you hit download. Adding a visible “Created with AI” label in the post itself is good practice even where the Act doesn’t strictly require it. Platforms and audiences increasingly expect it.

Scenario D — Creator outside the EU with EU-reaching work

You’re in scope. The regulation follows the audience, not your address. US studio, UK agency, Canadian solo creator — if EU users see your AI-generated work, the rules apply. Enforcement is likely to prioritize larger entities first, but the legal obligation is real and shouldn’t be ignored.

Two habits that cover most of it

Most of the complexity dissolves into two repeatable steps:

1. Preserve metadata on export. If your tool auto-embeds C2PA, your main job is to not accidentally strip it. Check your export settings — it’s usually a toggle. This is a one-time setup check per tool you use.

2. When your tool doesn’t auto-label, add a visible one. “AI-generated” or “Created with AI assistance” covers most contexts. The final Code of Practice — expected June 2026, just weeks before the August 2 deadline — will nail down the standard format. For now, clear and honest language gets you there.

Bonus habit: For AI-assisted text, keep a brief note of your editorial process. Not a legal document — just a line in your project notes: “Prompted, reviewed, substantially rewrote.” That’s the kind of record that supports an editorial-control claim if it’s ever questioned.
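That note-keeping habit can be as lightweight as a one-line-per-project log. Here is a minimal sketch of one way to do it; the file name and fields are my own invention, not a prescribed or legally required format.

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical log file; keep it wherever your project notes already live.
LOG = Path("ai-process-log.jsonl")

def log_ai_use(project: str, tool: str, process: str) -> dict:
    """Append a one-line JSON record of how AI was used and what editing followed."""
    entry = {
        "date": date.today().isoformat(),
        "project": project,
        "tool": tool,
        "process": process,  # e.g. "Prompted, reviewed, substantially rewrote"
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_ai_use("Acme spring campaign", "ChatGPT",
           "Prompted, reviewed, substantially rewrote")
```

One append-only line per deliverable is enough to reconstruct your editorial process months later, which is exactly what an editorial-control claim needs.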

What to watch next

This regulation is designed to create transparency, not trap creators. The law already acknowledges that most creative work is low-risk, that artistic expression gets a lighter touch, and that genuine editorial work earns an exemption from text-disclosure rules. Most of the heavy compliance infrastructure sits with the tool makers — not you.

Two habits, and you’re most of the way there: preserve what’s already in your files, and add a label when it isn’t.

*This article is editorial guidance, not legal advice. If your work involves significant AI-generated content reaching EU audiences, consult a qualified legal practitioner.*

Stay current as the deadline approaches. The final Code of Practice is expected June 2026 and will set the standard format for AI disclosures. We’ll share a plain-English update when it publishes.