According to the 2025 Imperva Bad Bot Report, automated bot traffic has surpassed human activity for the first time in a decade, with bots now accounting for 51% of all web traffic. The primary driver of this watershed moment is the proliferation of AI-generated content, which has pushed automated threats to unprecedented levels: bad bots alone now account for 37% of all internet traffic.
The sophistication of these threats is equally concerning: 75% of businesses have faced deepfake scams, and organizations lose an average of $450,000 per AI fraud incident. Attackers now target APIs, exploit businesses, and fuel fraud using advanced, evasive bots built with AI. The net effect is a lower barrier to entry for attackers, and with it a rising volume of simple bot attacks.
Growing digitalization has made it increasingly difficult to prove human authenticity online. Speaking at a live event hosted by Dock Labs, Jordan Burris, Head of Public Sector at Socure and a former White House advisor on cybersecurity, highlighted that upwards of $500 billion is lost to identity fraud, deepfake scams, and inefficient KYC processes, a figure that is likely underreported due to the lack of measurement tools.
These are real problems, and to address them, a combination of AI advancement and blockchain technology is creating a new category of identity solutions. In this article, we'll break down digital identity and its evolution.
Digital Identity Breakdown
A digital identity is an online representation of an individual or organization using data and credentials to verify their presence and actions in the digital space. Some of the components of digital identity include Personally Identifiable Information, Personalization Data, and Credentials.
The rise of AI-generated content and increasingly sophisticated deepfakes over the last decade has raised significant concerns about truth, trust, and integrity. People and organizations with bad intentions are using these tools to disguise their identities, making the online world an increasingly untrustworthy place.
Tools like Google's new Veo 3 — a state-of-the-art AI video generator that allows you to create high-quality short videos that depict real-life situations — are impressive and industry-revolutionizing. You must, however, consider for a moment what bad actors could do with access to such technology. Some of the negatives that could arise from the continuous advancement in AI tools like these could be:
- Defamation and reputation damage
- Blackmail and extortion
- Political interference and espionage
In Web3, weak identity management has created a loophole that has enabled the rise of Sybil attacks. A Sybil attack is a cybersecurity threat in which a malicious actor creates many fake identities to gain disproportionate influence within a network. The attacker uses these sham identities to gain an unfair advantage over legitimate participants, most visibly in airdrop farming. According to Dropstab, in 2024, crypto projects airdropped $15 billion worth of tokens to 'users', many of whom had no proof of personhood. This lack of verification leaves airdrop processes open to widespread manipulation via bots and fake accounts (Sybils).
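To see why Sybil resistance matters for airdrops, here is a minimal sketch (with hypothetical numbers; the function and figures are illustrative, not from any real project) of how one attacker running many scripted wallets dilutes an equal-per-wallet token distribution:

```python
# Illustrative sketch: how Sybil identities dilute a naive airdrop.
# All numbers and the function itself are hypothetical examples.

def airdrop_shares(honest_users: int, sybil_wallets: int, pool: float) -> dict:
    """Split a fixed token pool equally per wallet, as naive airdrops do."""
    total_wallets = honest_users + sybil_wallets
    per_wallet = pool / total_wallets
    return {
        "per_wallet": per_wallet,
        "attacker_total": per_wallet * sybil_wallets,  # one actor, many wallets
        "attacker_share": sybil_wallets / total_wallets,
    }

# 1,000 honest users vs. one attacker controlling 9,000 scripted wallets
result = airdrop_shares(honest_users=1_000, sybil_wallets=9_000, pool=1_000_000)
print(result["attacker_share"])  # 0.9 -> one attacker captures 90% of the pool
```

Proof-of-personhood schemes attack exactly this assumption: if each wallet must map to a verified human, the attacker's 9,000 wallets collapse back to one.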
Evolution of Digital Identity: From Passwords to Proof of Personhood
Traditionally (i.e., in Web 2), identity methods such as username/password combinations, two-factor authentication (2FA), and document-based verification have been the main defenses for online identity verification. While widely used, they are not foolproof and carry significant limitations that leave them open to attack: weak and reused passwords, phishing, device dependence, interceptable SMS-based 2FA, and lost, stolen, or forged documents all make these tools less secure in today's digital world.
Biometric systems were proposed as the solution to these problems. Yet even though they promise more secure identity verification, they present significant challenges of their own: privacy concerns, security vulnerabilities, and technical limitations. Because of their centralized nature, biometric databases can also become "honeypots" that attract hackers.
Advances in cryptography have produced better identity verification solutions. Zero-knowledge proofs (ZKPs) are one example: a cryptographic technique for proving knowledge of a fact without revealing anything about the underlying data. The mathematics ensures that the verifier learns only whether the statement is true, never the data itself. Today, ZKPs are gaining traction, improving security and privacy in decentralized environments.
Blockchain-based Identity
Backed by blockchain properties such as security, decentralization, and verifiability, individuals can now control their own digital identities and gain access to services that were previously inaccessible to users without formal identity proof.
The blockchain is providing a path to proof of humanity for billions of people who lack official documentation, giving them access to decentralized banking, education, and other services. The introduction of blockchain technology to digital identity has brought a reduction in fraud, an improvement in security, an enhancement in privacy, and an increase in access to essential services.
A prime example of a blockchain-based identity protocol is Humanity Protocol. With over 8 million verified identities, 140,000 daily transactions, and a $1.1 billion fully diluted valuation (FDV) before mainnet, Humanity Protocol is at the forefront of blockchain identity verification. Decentralized identity (DID) systems like this are built on the principle of self-sovereignty: users control and manage their own digital identities without depending on centralized authorities.
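For a sense of what a DID system actually resolves to, here is an illustrative W3C-style DID document, built as a Python dict. Every identifier and key value here is hypothetical (this is the generic W3C DID Core shape, not Humanity Protocol's specific format): a DID resolves to a document like this, listing public keys the holder controls, so anyone can verify the holder's signatures without a centralized authority.

```python
import json

# Hypothetical DID document following the generic W3C DID Core structure.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",          # the DID itself
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6MkexampleKeyOnly",  # holder's public key (placeholder)
    }],
    # Which keys may be used to authenticate as this DID:
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}

print(json.dumps(did_document, indent=2))
```

The self-sovereignty claim lives in the `controller` field: the same entity that owns the DID controls its keys, rather than a platform or registrar.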
For many of these DID systems, biometric scanning is the tool of choice. Projects like Humanity Protocol opt specifically for palm-based verification, citing cultural acceptance and less invasive data collection as key advantages.
Real-World Applications and Use Cases
The costs of identity fraud have risen sharply in recent years. According to a report by Javelin Strategy & Research and AARP, American adults lost $47 billion to identity fraud and scams in 2024, a substantial increase of $4 billion over 2023. An ecosystem of solutions is emerging in response: the global digital identity solutions market was valued at $41.63 billion in 2024 and is projected to reach $159.93 billion by 2031, a compound annual growth rate (CAGR) of roughly 21%.
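The quoted growth rate is easy to sanity-check from the two market figures above:

```python
# Check the market projection: $41.63B (2024) growing to $159.93B (2031).
start, end, years = 41.63, 159.93, 2031 - 2024   # 7-year horizon
cagr = (end / start) ** (1 / years) - 1          # compound annual growth rate
print(f"CAGR = {cagr:.1%}")                      # ~21.2%, matching the ~21% figure
```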
At the forefront of this market boom are blockchain-powered identity protocols. These protocols leverage advanced cryptographic and blockchain technologies such as ZKPs and DIDs to achieve Know Your Customer/Anti-Money Laundering (KYC/AML) compliance without exposing sensitive customer data: identities and other claims can be verified without storing or sharing personal details, preserving data privacy while meeting regulatory obligations.
This is where a project like Humanity Protocol shines: it partners with genomics companies and traditional validators to create compliance-ready verification systems that satisfy KYC requirements while preserving user privacy via ZKPs.
Another use case for blockchain-powered identity protocols is reshaping digital governance. These protocols are designed to resist Sybil attacks in their voting mechanisms, ensuring integrity in Decentralized Autonomous Organization (DAO) participation: everyone is counted, everyone has a voice, and everyone is rewarded. The same design enables fairer token distribution in airdrops and reward sharing.
Future Implications
According to a report by Netguru, global AI spending will reach $1.3 trillion by 2030, up from approximately $150.2 billion in 2023. This explosive growth points to the need for stronger mechanisms to differentiate real human users from AI-generated bots and synthetic identities, and to protect users.
Government initiatives such as the European Digital Identity (EUDI) Regulation and the eIDAS Regulation are ensuring the evolution of the regulatory landscape for digital identity and data protection. These initiatives are designed to create a secure digital identity framework that adheres to General Data Protection Regulation (GDPR) compliance while being interoperable.
In addition, technologies such as AI, IoT, and the Metaverse each demand robust verification mechanisms. Human verification is needed to ensure trustworthiness as AI advances; IoT devices require authentication for interoperability and security; and the Metaverse, as a shared virtual space, requires identity management robust enough to facilitate social interactions, prevent fraud, and ensure user safety.
Conclusion
Like CAPTCHAs and other forms of online authentication before it, human verification has become infrastructure, driven by the rise of AI and the growing need to differentiate humans from bots. As AI-generated content and AI-powered bots grow more sophisticated, traditional methods for preventing spam and fraud are no longer sufficient. Human verification is the crucial security layer that maintains the integrity of online systems, protecting against the surge in AI-generated scams and misinformation.