Founder Essay
The Authenticity Decade: How Content Provenance Becomes the Internet's Next Layer of Public Infrastructure
By 2036, every meaningful piece of digital content will carry a verifiable signature attesting to its origin. The shift will be as profound as the move from HTTP to HTTPS, and it has already begun.
Authenticity is becoming the next layer of public internet infrastructure, alongside HTTPS and DNS.
In August 2024, the world's first comprehensive artificial intelligence law, the European Union's AI Act, entered into force. Buried in its 113 articles is the one most ordinary people will eventually encounter: Article 50, the transparency obligation. From August 2026, anyone deploying AI to generate content for European users must disclose that the content is AI-generated, in a form readable by machines and visible to humans. The era of unmarked AI content, in Europe, is over by law.
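To make "readable by machines and visible to humans" concrete, here is a minimal sketch of what such a disclosure could look like. The schema is entirely hypothetical: Article 50 mandates the outcome, not a wire format, and every field name below is illustrative rather than drawn from the Act or any standard.

```python
import json

# Hypothetical machine-readable disclosure shipped alongside a piece of
# content. Article 50 requires that AI generation be detectable by software
# and visible to readers; this particular schema is illustrative only.
disclosure = {
    "ai_generated": True,
    "generator": "example-model-v1",           # hypothetical model identifier
    "generated_at": "2026-08-02T09:00:00Z",    # ISO 8601 timestamp
    "human_visible_label": "This content was generated by AI.",
}

# Embedded in a metadata field or sidecar file next to the content itself.
print(json.dumps(disclosure, indent=2))
```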
That deadline is not a regulatory inconvenience. It is the visible edge of a much larger shift, the shift the rest of this essay is about. The internet has lived for thirty years without a serious answer to the question "who made this?" We are about to spend the next ten years building one. By 2036, every meaningful piece of digital content will carry a verifiable signature attesting to its origin. The work has already begun, and the period has a name: I think we will look back at 2026 to 2036 as the Authenticity Decade.
The Internet's Trust Crisis Reaches a Tipping Point
The case for a new trust layer is no longer abstract. By the most credible 2025 estimates, more than half of all newly published text on the open web is at least partially AI-generated. The volume of AI-generated images, audio, and video shipped per day exceeds, by orders of magnitude, anything humans alone produced before 2022. Detectors based on stylometry or perceptual hashing have already lost the cat-and-mouse game against frontier generative models, and they will keep losing. The question is not whether software can tell the difference. The question is whether the original creator can declare the difference, in a way that survives copying and reposting.
This is a different framing of the same crisis. Trying to detect AI content after the fact is a losing game. Asking creators to declare origin at publish time, with cryptographic enforcement so the declaration cannot be silently stripped, is a winning one. Detection is a defensive crouch. Declaration is a public commitment. The internet has been here before. In the late 1990s, the question was not whether the network could detect malicious traffic in flight. The question was whether the sender could commit to authenticity and confidentiality at the moment of sending. The answer was HTTPS.
From HTTPS to Verified Origin
HTTPS was a slow revolution. The protocol was published in 2000. Major sites adopted it for sensitive pages around 2008. Universal adoption did not happen until Google made HTTPS a search ranking signal in 2014, Let's Encrypt made certificates free in 2015, and Chrome began marking plain-HTTP pages as "not secure" in 2018. By 2020, browsers showed an angry warning for any page served without it. A protocol that was once a luxury became table stakes in twenty years, and the internet's trust posture changed permanently.
We are about to repeat the cycle, one layer up. The technical primitive is no longer an encrypted channel between browser and server. It is a cryptographic signature bound to a verified human identity, attached to the content itself. Where HTTPS asked "can I trust the connection?", the next layer asks "can I trust the origin?" Both questions have the same shape, and the same answer: only when the sender chooses to commit, and only when the receiver can verify the commitment without trusting any single intermediary.
The internet has had a trust crisis before. We solved it. We can solve it again.
What Verified Origin Looks Like in Practice
The technical answer The AI Lab has been building is the Trust Identity Protocol (TIP), which we publish as an open standard under Creative Commons Attribution 4.0. TIP gives every internet user a free, portable identity called a TIP-ID, issued by an accredited Verification Provider after a one-time biometric check. The verification produces no central database of biometric data; the biometric never leaves the user's device in a recoverable form. What it produces is a public key, registered on a federated network on which anyone can run a node.
The cryptography is post-quantum, built on FIPS 203, 204, and 205, the standards published by the United States National Institute of Standards and Technology in August 2024. Each signature carries one of three Origin Codes: OH for Original Human, AA for AI-Assisted, or AG for AI-Generated. A reporter writing an article from scratch signs with OH. A creator who outlined the post and used AI to polish the grammar signs with AA. A video produced from a text prompt signs with AG. The same person can hold one TIP-ID and produce content under all three codes across their career, depending on how each piece was made. The classification is the creator's call, and the cryptography enforces that whatever was claimed is what gets verified.
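As a minimal sketch of that mechanic, the following binds an Origin Code to a content hash under the creator's key. It uses a classical Ed25519 key from Python's cryptography package as a stand-in for the post-quantum schemes a production implementation would use, and the payload layout is an assumption for illustration, not the published TIP wire format.

```python
import hashlib
import json
from enum import Enum

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class OriginCode(str, Enum):
    OH = "OH"  # Original Human
    AA = "AA"  # AI-Assisted
    AG = "AG"  # AI-Generated


def sign_content(key: Ed25519PrivateKey, content: bytes, code: OriginCode) -> dict:
    # The signature covers both the content hash and the declared code,
    # so neither can be changed without invalidating the credential.
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "origin_code": code.value,
    }
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(message).hex()}


# A reporter signing an article written from scratch:
key = Ed25519PrivateKey.generate()
credential = sign_content(key, b"Article text ...", OriginCode.OH)
```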
That is the mechanic. The deeper point is the social contract: a verified origin is meaningful only because the creator chose to attach it. It is a statement, not a surveillance signal. The protocol does not let anyone strip the signature without breaking it. The user does not have to sign anything they do not want to. The choice belongs to the speaker.
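Continuing the sketch above, verification is what makes the no-silent-stripping property concrete: relabel the code, or detach it from the content, and the signature no longer checks out. The structure is again an assumption for illustration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify(public_key: Ed25519PublicKey, credential: dict) -> bool:
    # Recompute the signed message and check it against the claimed signature.
    message = json.dumps(credential["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), message)
        return True
    except InvalidSignature:
        return False


public_key = key.public_key()
assert verify(public_key, credential)  # the intact credential verifies

# Relabeling an AI-generated credential as Original Human breaks it:
ag = sign_content(key, b"prompt-generated video bytes", OriginCode.AG)
forged = {**ag, "payload": {**ag["payload"], "origin_code": "OH"}}
assert not verify(public_key, forged)
```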
The Three-Body Problem of Trust
A protocol alone is not infrastructure. HTTPS works because three bodies hold it together: the IETF, which ratified the standard; the certificate authorities, which hold the trust roots; and ICANN, which anchors the namespace. Take away any one of those legs and the system collapses into proprietary forks. Verified origin needs the same three legs.
We have built them. The AI Trust Council is the multi-stakeholder governance body for TIP, modeled on how ICANN governs the global Domain Name System. It convenes publishers, governments, accredited Verification Providers, civil society groups, technical contributors, and journalism organizations, and protocol policy is set by consensus rather than corporate decree. The AI Trust Registry is the public, searchable directory of accredited Verification Providers and certified AI systems, the canonical record any journalist or regulator can consult to confirm a credential. And the Verification Provider network is the decentralized accreditation layer, with jurisdictional tiering (GREEN, AMBER, RED) that lets a Belgian publisher demand that the credential they trust comes from a country whose due-process protections they recognize.
None of these bodies is owned by The AI Lab. We started them. We will not always run them. The job of an institution is to outlive its founders.
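As an illustration of how a publisher's policy might consume the jurisdictional tiering described above, here is a hypothetical lookup against a Registry snapshot. The data shapes, tier ordering, and provider entries are all assumptions for illustration; the published Registry defines its own interface.

```python
from dataclasses import dataclass
from enum import IntEnum


class Tier(IntEnum):
    # Assumed ordering: higher value means stronger due-process guarantees.
    RED = 0
    AMBER = 1
    GREEN = 2


@dataclass(frozen=True)
class VerificationProvider:
    provider_id: str
    jurisdiction: str
    tier: Tier
    accredited: bool


# Hypothetical slice of the AI Trust Registry.
REGISTRY = {
    "vp-be-001": VerificationProvider("vp-be-001", "BE", Tier.GREEN, True),
    "vp-zz-042": VerificationProvider("vp-zz-042", "ZZ", Tier.RED, True),
}


def acceptable(provider_id: str, minimum: Tier) -> bool:
    # A publisher's trust policy: the issuing provider must be accredited
    # and sit at or above the publisher's minimum jurisdictional tier.
    vp = REGISTRY.get(provider_id)
    return vp is not None and vp.accredited and vp.tier >= minimum


# A Belgian publisher that only accepts GREEN-tier credentials:
assert acceptable("vp-be-001", Tier.GREEN)
assert not acceptable("vp-zz-042", Tier.GREEN)
```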
Five Predictions for the Authenticity Decade
With the technical and institutional groundwork in place, here is how I expect the next ten years to unfold. None of these is a bet on a particular company. They are bets on the structural pull of the underlying problem.
- By 2028, every major browser will display a verified-origin indicator natively, the way they display a padlock for HTTPS today. Chrome, Safari, Firefox, and Edge will not converge on identical UI, but the underlying signal will be consistent. The first browser to ship it will get a marketing moment. The last to ship it will be embarrassed.
- By 2030, every major social platform will require Origin Codes on AI-generated content, regulatory or not. Meta and TikTok have already announced labels. The labels are claims. The next move is signed claims, and the platforms will follow rather than lead.
- By 2031, professional indemnity insurance for journalists will require verified provenance for high-risk reporting. The actuarial argument is straightforward: a TIP-signed article is materially harder to fabricate around than an unsigned one, and reduced fabrication risk reduces underwriting loss.
- By 2032, search engines will rank by provenance. Google will not advertise it as a ranking factor, but engineers inside the company already know that the prior signals (link graph, freshness, on-page) are decaying as AI-generated text floods the index. Verified human authorship, paired with subject-matter signals, will quietly become a top-quartile ranking input.
- By 2034, the question "is this human-made?" will sound dated, the way "is this site secure?" sounds dated today. The answer will be visible at a glance, the way the padlock is. The question will move on to subtler ones: who, when, with what tools, under what license.
By 2036, every meaningful piece of digital content will carry a verifiable signature attesting to its origin.
A Hashtag for the Era
Technical revolutions need cultural surfaces. HTTPS had the padlock. The Web 2.0 era had the read-write web and the term itself. The Authenticity Decade has, at its public-facing edge, a hashtag we originated and are stewarding through the AI Trust Council. #HumanOrAI is the conversation people are already having. Every time someone replies "is this AI?" under a post, they are asking the question the protocol answers. The hashtag is the cultural surface of a technical revolution. We did not invent the conversation. We did invent the answer.
Use it. Sign your work with it. Tag the contradiction when you see it. The internet is filling with content that nobody can confidently attribute, and the simplest way to push back is to attribute your own.
What Comes Next
The institutional, cryptographic, and cultural pieces of the Authenticity Decade are now in place. What is left is adoption. That part is slow, contested, sometimes thankless, and ultimately the only thing that matters. HTTPS did not win on technical merit alone. It won because enough sites committed to it, until the internet rewarded them and the laggards lost.
We will spend the next ten years building toward the moment when "is this real?" stops being an interesting question, because the answer is at the bottom of every post, signed by a real person, attached to a verifiable record. The internet was built on rough consensus and running code. The Authenticity Decade will be built on the same.
The work is to make verified content origin part of the public infrastructure of the internet, alongside HTTPS and DNS. We are far enough along to know it is doable. We are early enough that anyone reading this can help.
About the Author
Dinesh Mendhe is the Founder and Chairman of The AI Lab Intelligence Unobscured, Inc., the American AI trust certification and content provenance company that created Trust Identity Protocol (TIP) and originated the #HumanOrAI campaign. He can be found on LinkedIn, via ORCID, and on Wikidata.