Rethinking Who (or What) We Trust Online

The internet was built on the assumption that humans are the only genuine users. It’s baked into our authentication flows, our CAPTCHAs, our security heuristics, and even our language. We talk about "users" as people, and "bots" as threats.

But that assumption is breaking.

Today, some of the most essential actors in software systems aren’t human at all. They’re agents: headless, automated, credentialed pieces of software that do everything from retrieving payroll data to reconciling insurance claims to processing royalties at scale. They’re deeply integrated into the services we rely on every day, and yet, many platforms treat them as intrusions.

"It’s time to stop confusing automation with adversaries," says Laurent Léveillé, Community Manager at Deck. "Many of these bots aren’t attackers. They’re your customers’ workflows, breaking silently because your system doesn’t know how to trust them."


The Legacy of Human-Centric Trust Models

Security teams have long relied on a binary heuristic: humans are good; machines are bad. This led to a proliferation of CAPTCHAs, bot filters, rate limiters, and user-agent sniffers that make no distinction between adversarial automation and productive agents.
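
To make the pattern concrete, here is a minimal sketch of that binary heuristic in code. Everything in it is hypothetical: the marker list, the threshold, and the function name illustrate the user-agent sniffing and rate limiting described above, not any vendor’s actual logic.

```python
# A hypothetical, human-centric gate of the kind described above.
# Names and thresholds are illustrative, not drawn from any real product.

BROWSER_MARKERS = ("Mozilla", "Chrome", "Safari", "Firefox")
MAX_REQUESTS_PER_MINUTE = 30  # calibrated to human clicking speed

def allow_request(user_agent: str, requests_last_minute: int) -> bool:
    # Heuristic 1: user-agent sniffing. A scripted client that
    # identifies itself honestly ("python-requests", "curl") fails here.
    looks_human = any(marker in user_agent for marker in BROWSER_MARKERS)

    # Heuristic 2: a rate limit sized for humans. A credentialed agent
    # doing a legitimate bulk sync fails here too.
    within_human_pace = requests_last_minute <= MAX_REQUESTS_PER_MINUTE

    return looks_human and within_human_pace

# A well-behaved integration is rejected on both counts:
print(allow_request("python-requests/2.31", requests_last_minute=120))  # False
```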

These models worked for a time. But the modern internet runs on APIs, scheduled jobs, and serverless triggers. Internal agents and external integrations behave just like bots, because they are. They log in, request data, act predictably, and don’t click around like a human would. And that’s the point.

"What we’re seeing now is that the same heuristics designed to keep bad actors out are breaking legitimate use cases inside," says YG Leboeuf, Co-founder of Deck. "That includes everything from from airline rewards to health insurance providers"


A Better Definition of "Genuine"

So how do you distinguish between harmful bots and helpful ones?


Deck proposes a shift: from human-first models to intent-first frameworks. Genuine users are not defined by their biology but by their behavior.

A genuine user is:

- Credentialed: it authenticates with an identity it was legitimately issued.
- Scoped: it operates only within the permissions it was granted.
- Auditable: its actions leave a traceable record that can be reviewed.

Consider a scheduled agent that pulls expense data from 150 employee accounts at the end of each month. It’s credentialed, scoped, and auditable. But most systems flag it as suspicious simply because it logs in too fast or accesses too much.
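
Seen through a hypothetical velocity-and-volume rule, that is exactly what happens: judged only on speed and breadth, the month-end sync is indistinguishable from an account-takeover attempt. The thresholds below are invented for illustration.

```python
# An illustrative anomaly rule: flag anything fast and broad.
# Thresholds are invented; the shape of the check is the point.

SUSPICIOUS_LOGINS_PER_HOUR = 20
SUSPICIOUS_ACCOUNTS_PER_HOUR = 50

def is_suspicious(logins_last_hour: int, accounts_accessed: int) -> bool:
    return (logins_last_hour > SUSPICIOUS_LOGINS_PER_HOUR
            or accounts_accessed > SUSPICIOUS_ACCOUNTS_PER_HOUR)

# The scheduled expense agent: 150 accounts, once a month, fully scoped.
print(is_suspicious(logins_last_hour=150, accounts_accessed=150))  # True: flagged
```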

Meanwhile, a real human could engage in erratic or malicious activity that flies under the radar simply because they're using a browser.

This is a flawed paradigm. We need to flip it.
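
Flipped, the question becomes whether the activity matches what this identity was explicitly granted, not whether it resembles a person. A minimal sketch, assuming a hypothetical registry of agent identities and their scopes:

```python
from dataclasses import dataclass, field

# Hypothetical intent-first check: trust derives from who the actor is
# and what it was scoped to do, not from how human its traffic looks.

@dataclass
class AgentIdentity:
    client_id: str
    scopes: set[str] = field(default_factory=set)  # e.g. {"expenses:read"}
    max_accounts: int = 0                          # granted breadth of access

def is_trusted(identity: AgentIdentity, action: str, accounts_touched: int) -> bool:
    # In scope, and within the breadth this identity was explicitly granted.
    return action in identity.scopes and accounts_touched <= identity.max_accounts

# The month-end agent, registered with exactly the access it needs:
agent = AgentIdentity("expense-sync", {"expenses:read"}, max_accounts=150)
print(is_trusted(agent, "expenses:read", accounts_touched=150))  # True
print(is_trusted(agent, "claims:write", accounts_touched=1))     # False
```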


The Hidden Costs of Getting It Wrong

Misclassifying agents as threats doesn’t just lead to bad UX. It introduces risk:

- Customer workflows break silently, often without surfacing an error any human sees.
- Failures cascade downstream, where they are slow and expensive to diagnose.
- Meanwhile, a genuinely malicious human can fly under the radar because they look like a "user."

One Deck client had built a multi-step claim appeals workflow that relied on an internal agent syncing explanation-of-benefits (EOB) data nightly. When the client’s legacy security provider began rate-limiting the agent, it set off a cascade of downstream failures that took weeks to diagnose.


Designing for Hybrid Identity

Modern systems need to accommodate both humans and non-humans in their trust models. Here’s what that looks like:

- First-class identities for agents, not shared human logins or borrowed API keys.
- Scoped credentials that grant exactly the data and actions an agent needs.
- Heuristics tuned to declared intent and observed behavior, not to whether the client looks like a browser.
- Audit trails that make agent activity as observable as human activity.
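
One way to picture the first item: a sketch of issuing a first-class agent credential that carries its owner, purpose, and scopes with it. The field names and token format are assumptions for illustration, not any particular identity provider’s API.

```python
import secrets
import time

# Illustrative only: a first-class agent credential that carries its
# scope and audit metadata, instead of masquerading as a human login.

def issue_agent_credential(owner: str, purpose: str, scopes: list[str],
                           ttl_seconds: int = 24 * 3600) -> dict:
    now = time.time()
    return {
        "token": secrets.token_urlsafe(32),  # opaque bearer secret
        "subject_type": "agent",             # explicitly not a human
        "owner": owner,                      # the accountable team or person
        "purpose": purpose,                  # declared intent, logged on use
        "scopes": scopes,                    # least-privilege grants
        "issued_at": now,
        "expires_at": now + ttl_seconds,
    }

cred = issue_agent_credential(
    owner="finance-platform-team",
    purpose="month-end expense sync",
    scopes=["expenses:read"],
)
```

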
A Cultural Shift in Security

Security isn’t just about saying "no." It’s about enabling systems to work as intended, safely.

"The teams that win aren’t the ones with the most rigid defenses," says Léveillé. "They’re the ones who design infrastructure that understands the difference between risk and friction."


This means:

- Treating automation as a legitimate class of user, with its own identity and lifecycle.
- Applying friction where the risk actually lives, not wherever a client fails to look human.
- Measuring security controls by what they enable safely, not only by what they block.
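
In code, that difference between risk and friction might look like per-identity policy rather than one global throttle. A hypothetical sketch:

```python
# Hypothetical per-identity policy: friction scales with what is known
# about the actor, not with whether its traffic clicks like a human's.

POLICIES = {
    # Registered, scoped agents get limits sized to their declared job.
    "agent:expense-sync": {"rate_per_min": 600, "challenge": None},
    # Unknown automation is throttled and asked to register, not silently broken.
    "agent:unknown": {"rate_per_min": 10, "challenge": "require_registration"},
    # Humans keep human-shaped defenses where those still make sense.
    "human:default": {"rate_per_min": 60, "challenge": "captcha_on_anomaly"},
}

def policy_for(subject: str) -> dict:
    return POLICIES.get(subject, POLICIES["agent:unknown"])

print(policy_for("agent:expense-sync"))  # generous limit, no challenge
```

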
Don’t Fear the Agents. Learn From Them.

Not every user is human. That’s not a threat. It’s a reality. And increasingly, it’s an opportunity.

By recognizing and respecting automation as part of the user base, we unlock better reliability, faster scale, and stronger systems. The companies that embrace this shift will outbuild the ones that resist it.

It’s time we stop asking: “Is this a bot?” and start asking: “Is this trusted?”