The Quiet Revolution in Digital Trust That's Reshaping Publishing and Creator Economics

The rise of sophisticated generative AI media tools and the exponential volume of content they produce are driving a paradigm shift in our societal relationship with digital information. When AI deepfakes first appeared, researchers entered an arms race to develop detection methods. But the models have improved so quickly that it is now nearly impossible to visually distinguish some deepfakes from authentic content.
That arms race of rapid detection and reaction has proven increasingly difficult to sustain at scale. The sheer volume of AI-generated content and the democratization of tools for creating convincing deepfakes make detection-based approaches impractical, if not impossible. But something must be done, or we will move from an internet where, in most cases, we believed the authenticity of what we saw to one where we trust nothing and assume anything could be fake and generated by AI.
This crisis has shifted the focus from reactive detection to proactive authentication of content. In the past, digital content was more or less assumed to be real unless proven fake. Now that burden is shifting toward establishing the provenance of genuine content.
In this newsletter, we will look at the major interoperable provenance standard adopted by most of the tech sector (the C2PA), its swift rise in recent months, some of the challenges it faces, and why it could set the stage for a better future for digital creatives.
What is the C2PA?
C2PA stands for the Coalition for Content Provenance and Authenticity. It emerged from combining two efforts: Project Origin, led by the BBC and Microsoft, and Adobe's Content Authenticity Initiative. In addition to those companies, the coalition's steering committee includes major tech companies like Amazon, Google, Intel, Meta, Sony, and OpenAI. (As of this writing, Apple and X have not joined the coalition.)
It makes sense that this would be done as a coalition approach. If your aim is to create a standard for proving what is real on the Internet, you don’t want multiple proprietary approaches across various walled gardens that can’t communicate with each other.
What the coalition currently offers is called "content credentials." It's an open technical standard that records when a piece of content was created or edited, and what tools and software were used to create it. In many cases, creators choose to add these credentials to their content to establish provenance and transparently convey a history of changes. The C2PA has published videos walking through some of the current use cases.
C2PA on device
In early 2025, the Samsung Galaxy S25 became the first mobile phone to implement C2PA content credentials on-device. In August, Google announced it would do the same on the Pixel 10 series. This means any image or video captured on these phones will carry information about its creation wherever it goes, unless that information is actively stripped from the file.
This goes beyond mobile phones. Leica and Sony are already incorporating C2PA into their cameras, and Nikon and Canon are working to do the same soon.
C2PA online
You have likely started to see content credentials on social media.
- Meta is using it on Facebook, Instagram, and Threads for content generated by its own tools, or when it detects content credentials.
- LinkedIn is doing the same on its feeds.
- TikTok recently started implementing it.
- In addition to its Pixel phones, Google is also integrating content credentials into search and advertising products and has a “captured on a camera” feature on YouTube.
This is a major shift. In the past, most platforms stripped most metadata from images and video by default, either to cut down on storage costs or out of user privacy concerns. Now the standard is shifting toward capturing and preserving this metadata so viewers can assess the nature of the content.
C2PA and AI
If an AI tool that supports C2PA is then used to change an element in a piece of content, those changes would also be noted in the content credential manifest.
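To make the manifest idea concrete, here is a minimal sketch of walking an edit history. The dict below is a hand-written, simplified illustration loosely modeled on C2PA manifest concepts (a `claim_generator` and a `c2pa.actions` assertion); real manifests are signed binary data embedded in the asset, not plain JSON, and `ExampleEditor`/`ExampleAI` are hypothetical tool names.

```python
# Illustrative sketch only: a simplified structure loosely modeled on a C2PA
# manifest. Real manifests are cryptographically signed and embedded in the
# asset itself; tool names here are hypothetical.
manifest = {
    "claim_generator": "ExampleEditor/2.1",  # hypothetical capture/edit tool
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {"action": "c2pa.created", "when": "2025-03-01T09:14:00Z"},
                    {"action": "c2pa.edited", "when": "2025-03-02T16:40:00Z",
                     "softwareAgent": "ExampleAI/0.9"},  # hypothetical AI tool
                ]
            },
        }
    ],
}

def edit_history(manifest: dict) -> list:
    """Collect a human-readable history from any c2pa.actions assertions."""
    history = []
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for act in assertion["data"]["actions"]:
                agent = act.get("softwareAgent", manifest["claim_generator"])
                history.append(f"{act['when']}: {act['action']} by {agent}")
    return history

for line in edit_history(manifest):
    print(line)
```

The point of the sketch is the second action: an AI edit made with a C2PA-aware tool shows up as one more entry in the same history, attributed to the software that made it.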
With this standard swiftly rolling out across devices and platforms, what will it actually look like to interact with it? The coalition has developed a preferred analogy to help users think about it…
A nutrition label for digital content
The C2PA often likens the content credential to a nutrition label for digital content. Using their Verify tool, the Content Credentials pin, or other tools, you can view when a piece of content was created and see its history, and creators can share what went into it.
One thing this metaphor captures well is the viewer's relationship with the content and the tool's neutrality toward quality, truthfulness, or other subjective measures. Like a nutrition label, it tells you what's in your bread, not whether the bread is "good" or "bad."
An authenticated photo could still be a work of generative AI art and an unauthenticated image may still show a genuine moment. The credentials simply provide transparency about origins and editing history.
As the adoption of this nutrition label accelerates, we’re starting to witness an evolution in how digital content organizes itself.
The opportunity and peril of a two-tiered digital ecosystem
Rather than the current free-for-all where every piece of content is equally mysterious in its origins, we're moving toward a more transparent ecosystem with two distinct categories: authenticated content with clear provenance, and content of unknown or unverified origin.
This shift is being driven by several converging forces:
- Growing demand for transparency: The C2PA is the largest coalition-based approach to tackling this issue, but they aren't alone. Google's SynthID is a proprietary approach to digital watermarking, and groups like Starling Lab are using blockchain technology and other solutions to develop tools in this space. This wide range of approaches reflects a recognition that digital transparency is essential for maintaining trust online.
- AI systems require quality training data: AI crawlers proliferate across the Internet to gather training data for future models. Going forward, there's real concern about recursive training contamination. (Basically, if you train an LLM on AI slop, you get worse LLMs.) There will be value in models being able to clearly distinguish AI-generated content from authentic content, and model builders may decide to ignore or carefully classify anything with ambiguous origins. Cloudflare recently blocked AI crawlers by default and is running an early beta of a marketplace for publishers to license their content to AI crawlers. If crawlers put a higher value on authenticated content, it could become economically advantageous for creators interested in licensing their content for training to apply content credentials.
- People (and their AI agents) need reliable information: The latest AI race is in agentic web browsing and involves AI agents conducting research, booking travel and handling tasks on our behalf. These systems will naturally gravitate toward authenticated content they can trust. Agentic browsing disrupts the entire economic foundation of click-based revenue on the Internet. In recognition of this shift, Perplexity recently started a revenue sharing program with publishers.
As much as I'm hopeful that this transition could benefit creatives, it is not without challenges and drawbacks, and we must ensure it doesn't inadvertently exclude voices or create new barriers to participation. Some populations may not have the resources or experience to execute the kind of "workflow hygiene" that would prominently place their content in an emerging "authenticated web."
Perhaps most critically, there are situations where anonymity is essential for safety. Journalists working in authoritarian regimes, whistleblowers, and activists often depend on the ability to share information without revealing their identity or location. Any system must thoughtfully address these use cases to avoid silencing the very voices that often need amplification most.
And unauthenticated content may fall victim to the "liar's dividend," where people in power dismiss real content as fake simply because it hasn't been authenticated. This could create real challenges of trust in this paradigm shift.
What you can do today
The Coalition for Content Provenance and Authenticity is actively working to address these edge cases, but they need input from creators, journalists, and advocates who understand the real-world implications of these systems.
Start using the technology
If you're curious about content credentials, check the tools you already own and make sure they are enabled. The feature may already be there.
- In Adobe Creative Cloud: In the export settings of most Adobe products, simply check the box for "Attach Content Credentials" and manage your profile.
- On Your Camera: If you have a recent Leica (M11-P, SL3) or professional Sony camera (α1, α9 III), find the "Content Credentials" option in the menu and switch it on.
- On Your Smartphone: On a Google Pixel 10 or Samsung S25, the feature is built into the native camera app settings.
- With AI & Design Tools: In Microsoft Designer or Canva, your creator info is often attached automatically to AI-generated images, linked to your user account.
If your content is edited or changed by software that doesn't support C2PA, the edit history in the content credential manifest can be broken. This means creators will have to put at least some thought into their workflows to ensure an unbroken, trustworthy provenance chain from capture to publication. This should become more seamless as the technology evolves.
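The failure mode is easy to picture: credentials are cryptographically bound to the asset's bytes, so an edit made outside a C2PA-aware tool changes the bytes without updating the record, and the binding no longer matches. Below is a toy sketch of that idea; real C2PA hard bindings hash specific byte ranges and carry digital signatures, so this is only an illustration of the stale-hash failure, not the actual format.

```python
import hashlib

def sign_sketch(asset: bytes) -> dict:
    # Toy stand-in for embedding a manifest: record a hash of the asset bytes.
    # (Real C2PA bindings are signed and far more structured than this.)
    return {"asset_hash": hashlib.sha256(asset).hexdigest()}

def validates(asset: bytes, manifest: dict) -> bool:
    # The credential only holds up if the bytes still match the recorded hash.
    return hashlib.sha256(asset).hexdigest() == manifest["asset_hash"]

photo = b"...original image bytes..."
manifest = sign_sketch(photo)
print(validates(photo, manifest))   # untouched file: chain intact

# A non-C2PA tool edits the file but never updates the manifest:
edited = photo + b" tweak from a tool that ignores C2PA"
print(validates(edited, manifest))  # hash mismatch: provenance chain broken
```

This is why workflow hygiene matters: one non-aware tool in the middle of the pipeline is enough to leave the final file with credentials that no longer validate.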
Help shape the future of the C2PA
If you're technically inclined, are a creative in a unique situation, or just want a say in how the C2PA evolves, I strongly recommend getting involved with the Creators Assertions Working Group. Whether you are an organization or an independent individual, you can contribute for free by joining the Decentralized Identity Foundation.
Origen Story recently joined the working group and we look forward to seeing more people join as this technology evolves.