Can you tell a deepfake from a real video? Most people believe they can. Most people are wrong.
A 2025 iProov study found that only 0.1% of participants correctly identified all of the fake and real media shown to them. Human detection accuracy for high-quality deepfake videos sits at 24.5%, well below the 50% a coin flip would manage. Meanwhile, the volume of deepfake content online has grown from 500,000 incidents in 2023 to a projected 8 million in 2025, a 1,500% increase in two years. Europol has projected that 90% of online content could be synthetically generated by 2026.
The standard response to this problem has been to build better detectors: train more AI models to recognize what AI-generated content looks like. The trouble is that this approach has never worked and never will. Every time a detection model improves, generation models improve in response. The two are locked in a permanent arms race, and the detectors are always one step behind the generators. A Fortune analysis from December 2025 noted that voice cloning has crossed what researchers call the "indistinguishable threshold," and that video realism has reached the point where the telltale flicker and structural distortions once used to identify fakes have largely disappeared.
Siwei Lyu, Professor of Computer Science at the University at Buffalo and Director of the UB Media Forensics Lab, put it plainly: the meaningful line of defense will shift away from human judgment and toward infrastructure-level protections, including media signed cryptographically using C2PA specifications.
Brevis Vera is exactly that infrastructure.
The Problem with Detection Is Structural, Not Technical
Before getting into what Brevis Vera does, it is worth understanding why the detection approach fails so consistently.
AI detection tools are trained on known patterns of synthetic media. They learn what fake content looks like based on examples that already exist. The moment generation technology updates, the detection model is working off outdated patterns. This is not a solvable engineering problem within the detection paradigm. It is a fundamental asymmetry: the attacker always knows exactly what the defender is looking for, and can simply avoid producing those signals. The deepfake detection market is projected to reach $15.7 billion by 2026, up from $5.5 billion in 2023. That is a lot of capital chasing a problem that cannot be solved with more of the same approach.
The deeper issue is philosophical. Detection asks: "does this look real?" That is a probabilistic question with no definitive answer. Provenance asks: "can this prove where it came from?" That is a verifiable question with a mathematical answer. The shift from detection to provenance is not an incremental improvement. It is a different problem statement entirely.
What Brevis Vera Actually Does
Brevis Vera, launched on March 9, 2026, is an end-to-end media authenticity attestation system. It does not analyze media after publication to determine if it looks real. It builds and preserves a cryptographic chain of proof from the moment of capture through every edit, all the way to the final published version.
The system has two components working together.
The first is C2PA (Coalition for Content Provenance and Authenticity), an open standard founded by Adobe, Arm, BBC, Intel, Microsoft, and Truepic. Think of C2PA as a tamper-evident receipt that a camera generates the instant it captures an image or video. The device signs the media cryptographically at the moment of capture, binding the content to the hardware and producing provenance metadata. Cameras from Sony, Leica, and Nikon already support this. The C2PA v2.2 specification was published in May 2025. This answers the first question: did a real device capture this?
But there is a gap. Raw media is almost never what gets published. Journalists crop images, editors adjust exposure and color, faces are blurred for privacy, captions and annotations are added, and everything gets compressed for faster loading. The moment any of these edits happen, the original C2PA hardware signature no longer applies to the modified file. A simple crop breaks the cryptographic binding between what the camera signed and what eventually appears online. Authenticity and editing are in direct tension, and until now, there was no way to reconcile them.
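The break described above is easy to see in miniature. The sketch below is plain Python, with an HMAC standing in for the camera's hardware-backed signing key (real C2PA manifests use X.509 certificates and COSE signatures, not shared secrets); it shows a signature that verifies for the captured bytes and fails the moment any edit touches them:

```python
import hashlib
import hmac

# Toy stand-in for a C2PA-capable camera's signing key. Real devices
# keep an X.509-backed private key in secure hardware; an HMAC secret
# is used here only to keep the sketch self-contained.
DEVICE_SECRET = b"camera-private-key"

def sign_at_capture(media: bytes) -> dict:
    """Bind the media bytes to the device at the moment of capture."""
    content_hash = hashlib.sha256(media).hexdigest()
    signature = hmac.new(DEVICE_SECRET, content_hash.encode(), "sha256").hexdigest()
    return {"content_hash": content_hash, "signature": signature}

def verify(media: bytes, manifest: dict) -> bool:
    """Check that the bytes we hold are the bytes the camera signed."""
    content_hash = hashlib.sha256(media).hexdigest()
    expected = hmac.new(DEVICE_SECRET, content_hash.encode(), "sha256").hexdigest()
    return (content_hash == manifest["content_hash"]
            and hmac.compare_digest(expected, manifest["signature"]))

raw = b"\x89PNG...raw sensor data..."
manifest = sign_at_capture(raw)
print(verify(raw, manifest))      # True: the untouched file verifies
cropped = raw[10:]                # even a trivial crop changes the bytes
print(verify(cropped, manifest))  # False: the signature no longer applies
```

The second check fails not because the crop was malicious but because any change to the bytes changes the hash, which is exactly the tension the next section addresses.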
The Zero-Knowledge Proof of Editing: Why This Is New
This is where Brevis Vera introduces something that has not existed before.
To understand zero-knowledge proofs (ZK proofs), a simple analogy helps. Imagine you want to prove you know the answer to a puzzle without actually showing anyone the answer. A ZK proof lets you do exactly that: mathematically demonstrate that a statement is true without revealing the underlying information. In Vera's case, the "statement" is: this published image derives from a real C2PA-signed original, and only legitimate edits were applied to it. Nobody can see the raw original or the editorial workflow. Anyone can verify the proof.
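For readers who want to see the mechanics rather than the analogy, the textbook example is a Schnorr proof: demonstrating knowledge of a secret exponent x with y = g^x mod p without ever revealing x. The parameters below are toy-sized and insecure, and this is not Vera's actual proof system (Pico is a general-purpose zkVM, not a Schnorr prover); the sketch only shows the shape of "prove without revealing":

```python
import hashlib
import secrets

# Insecure toy parameters: p is prime and g is a primitive root mod p,
# so exponents live in a group of order p - 1. Real deployments use
# groups of roughly 256-bit order.
p, g = 23, 5
q = p - 1  # order of g

def prove(x: int) -> tuple:
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)                  # fresh randomness per proof
    t = pow(g, r, p)                          # commitment
    c = int(hashlib.sha256(f"{t}:{y}".encode()).hexdigest(), 16) % q  # Fiat-Shamir
    s = (r + c * x) % q                       # response blinds x with r
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = int(hashlib.sha256(f"{t}:{y}".encode()).hexdigest(), 16) % q
    # g^s == t * y^c holds exactly when the prover knew x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=7)    # the verifier never sees 7
print(verify(y, t, s))  # True
```

The verifier learns that the prover knows the secret, and nothing else: the response s is blinded by the random r, so it carries no usable information about x.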
Vera integrates with open-source image editing libraries and uses the Brevis Pico zkVM to generate a zero-knowledge proof of the entire editing process. When an editor modifies C2PA-signed media using supported software, Vera takes the original signed metadata and raw media as input, executes the transformations, and generates a proof that mathematically attests to three things: the output derives from the signed original, only permitted transformations were applied, and no hidden or malicious edits were introduced. The proof is generated locally and can be verified independently by anyone, with no centralized intermediary required. The raw content and editorial workflow remain private throughout.
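Conceptually, the statement the zkVM attests to can be pictured as a small program executed inside the proof system. The sketch below illustrates that claim structure only; it is not Brevis's circuit or API, and the transform names and whitelist are invented for the example:

```python
import hashlib

def sha256(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

# Hypothetical whitelist of permitted transformations (byte-level
# stand-ins for real operations like crop, resize, or redaction).
ALLOWED = {
    "crop_head": lambda b: b[16:],
    "redact":    lambda b: b.replace(b"face", b"????"),
}

def statement(original: bytes, transforms: list,
              original_hash: str, output_hash: str) -> bool:
    """The claim the proof attests to. Inside a zkVM this runs privately:
    the verifier sees only the two hashes and the proof, never the
    original bytes or the editing steps."""
    if sha256(original) != original_hash:   # 1. derives from the signed original
        return False
    out = original
    for name in transforms:
        if name not in ALLOWED:             # 2. only permitted transformations
            return False
        out = ALLOWED[name](out)
    return sha256(out) == output_hash       # 3. no hidden edits were introduced

raw = b"raw photo bytes with a face in frame"
edited = ALLOWED["redact"](ALLOWED["crop_head"](raw))
print(statement(raw, ["crop_head", "redact"],
                sha256(raw), sha256(edited)))  # True
```

In the real system, a proof that this statement evaluated to true is what gets published alongside the edited media; the function's inputs stay on the editor's machine.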
This is the first system that maintains cryptographic proof of authenticity through the full lifecycle of media. Not just from the camera, or the editing suite, or the publisher, but from the camera all the way through to publication, with every step accounted for.
Who Needs This and Why Now
The use cases are not abstract. In February 2024, a finance worker at Arup was tricked into wiring $25 million after a deepfake video call impersonated colleagues. The average deepfake fraud incident now costs businesses around $500,000. North America alone saw deepfake fraud losses exceed $200 million in Q1 2025.
For photojournalism, the stakes are different but equally serious. A single fabricated or altered image circulating as genuine news can shape public opinion, influence elections, and destroy careers. The existing fact-checking infrastructure cannot keep pace with the volume and speed of synthetic media distribution. Sixty percent of consumers report having encountered a deepfake video in the past year.
Brevis Vera targets exactly the workflows where authenticity matters most: news organizations, documentary filmmakers, courtroom evidence, corporate communications, and any context where a published image or video needs to carry verifiable proof of its origin. The system is open-source, with the reference implementation on GitHub, and Brevis is currently in discussions with consumer-facing image and video editing applications to bring Vera directly into widely used creative tools.
The Honest Limitations
No system is without tradeoffs, and Vera is not a universal solution.
C2PA adoption at the hardware level is still limited. While Sony, Leica, and Nikon support the standard, the majority of cameras and smartphones do not yet produce C2PA-signed media. Vera can only verify media that was captured with a C2PA-enabled device. Content shot on an unsigned camera cannot use the system, which means widespread utility depends on hardware adoption that is still early.
The World Privacy Forum's technical review of C2PA also notes that C2PA records what creators declare, and that provenance metadata can theoretically be altered or stripped before the ZK proof is generated. Vera's guarantee is strong within its operating conditions, but those conditions require a chain of trust starting at the hardware level. If the device itself is compromised or does not support C2PA, there is no chain for Vera to prove anything about.
These are real constraints. They do not make Vera irrelevant. They make the argument for broader C2PA hardware adoption more urgent.
Final Thoughts
The deepfake problem is not going to be solved by building smarter detectors. The detection arms race has no winner because the two sides are not on equal footing: generators set the pace, detectors chase it. That race has been running for years, billions have been spent on it, and the gap between what is real and what looks real keeps narrowing.
Vera's provenance-first approach is the right architectural response to this problem. Moving the question from "does this look real?" to "can this prove it is real?" is a structural shift, not an incremental improvement. The system is live, open-source, and building toward integration with the editing tools professionals already use daily. That trajectory matters. Broad adoption requires that proof generation happens inside familiar workflows without friction. If Brevis can achieve that integration at scale, Vera has the potential to become infrastructure that the internet did not know it needed but cannot operate honestly without.
Don't forget to like and share the story!