I watched someone test a dashboard I'd been brought in to fix. Zero bugs. Sub-second load times. Every feature worked exactly as designed. They clicked around for two minutes, then closed the tab and said: "It's broken."
I asked which feature. "Not broken broken. Just... unfinished? I don't know. It doesn't feel real."
Like a PDF that prints fine but won't open on Mac – technically working, experientially broken.
Took me three more clients and too many user interviews to figure out what they meant.
Working isn't the same as feeling like it works.
⸻
When 400ms was too fast to be trusted
A B2B analytics platform I was hired to audit. Fast queries. Accurate data. Clean charts. Churn climbing every quarter. Exit surveys kept saying: "Found something more reliable."
More reliable than what? Their queries returned in 400 milliseconds. Unreasonably fast. Suspiciously fast. Uptime was 99.4%. The competitor I tested? Queries took 3-5 seconds. But the competitor showed progress bars. Loading states. "Processing your request..." with an animated spinner that felt important.
The client's product? Almost instant results that appeared so fast users assumed they were cached or incomplete. No indication anything had happened. Just: blank screen, then data.
They lost three enterprise deals to that slower product, all closed in the same quarter. $240K ARR gone. Pipeline dried up. Founder started panicking about runway.
All three prospects said the same thing in feedback calls: "Yours felt too fast to be real."
I added 2-second artificial delays with progress indicators. "Processing your data..." with a spinner. Same 400ms queries underneath, just honest about the work happening. Shipped it.
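Roughly what that looked like in code – a sketch, not the client's actual implementation; fetchReport, setStatus, and delay are names I'm making up for illustration:

```typescript
// Sketch of the fix: the query still returns in ~400ms, but the UI shows a
// "processing" phase for ~2 seconds before revealing results.
// fetchReport(), setStatus(), and delay() are stand-ins, not the client's code.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchReport(): Promise<string[]> {
  await delay(400); // stand-in for the real ~400ms query
  return ["row 1", "row 2"];
}

function setStatus(message: string): void {
  console.log(message); // stand-in for updating the spinner label in the UI
}

async function loadReport(): Promise<string[]> {
  setStatus("Processing your data...");
  const rows = await fetchReport(); // the actual work, still fast
  await delay(2000);                // the artificial pause that makes the work visible
  setStatus("Done");
  return rows;
}

loadReport().then((rows) => console.log(rows));
```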
Next quarter: closed two deals against the same competitor. Both prospects mentioned "feeling more confident in the data quality" in post-sale interviews. Same data, different trust signals. $160K recovered. The founder stopped opening competitor demos during sales calls. Said he finally trusted his own product.
Turns out users trust things that admit they're thinking. (Slower, but honest about it.)
⸻
Auto-save is gaslighting with good intentions
Different client: a document editor. Auto-save every keystroke. Real-time sync across devices. Never lost a single character in production. Engineering team was proud of it.
Support tickets kept asking: "Did my changes save?"
The product manager showed me the tickets. Hundreds of them. "How do I know it saved?" "Where's the save button?" "I made changes but I don't see confirmation." I asked when they'd removed the save button. He said: "Eighteen months ago. Auto-save made it obsolete. Modern UX patterns. This is how Notion does it."
I pulled up session recordings. Watched someone type three paragraphs, pause, look around the interface for 12 seconds, close the tab without publishing. Next session: same user, retyped the same content, still couldn't find confirmation it was real.
Users were typing into a void, with no evidence that anything persisted.
I added a "Saved" indicator. Tiny. Faded in after each keystroke. "Saving..." while syncing. "All changes saved" when done. Smallest possible UI addition. Shipped it Tuesday.
Support tickets about save status dropped 89% by Friday. The product hadn't become more reliable. I'd just stopped pretending reliability was obvious. Turns out users trust things that admit they're working.
Six months later, different client. Similar auto-save product, still in development. I showed the founder these session recordings from the first project. He added the "Saved" indicator before launch. Zero support tickets about saves in the first three months. That's the difference between fixing problems and preventing them.
Speed isn't trust. Visible process is trust. Users don't want instant – they want proof something happened.
⸻
Making products slower on purpose
After the fourth client with the same trust problem, I started adding intentional delays. Not bugs – artificial pauses that show work happening. Click submit? Wait 600ms, show spinner, then confirm. Load data? Show skeleton screens even when the query finished in 300ms. (Made products technically slower. Made them feel more trustworthy.)
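The generic version is one small helper: whatever the async work is, keep the loading state visible for at least a minimum duration. A sketch, with illustrative names only:

```typescript
// Generic version of the intentional-delay pattern: whatever the async work is,
// the loading state stays visible for at least `minMs`, so a 300ms response
// never flashes by unacknowledged. Names are illustrative, not from a real codebase.
async function withMinimumDuration<T>(work: Promise<T>, minMs: number): Promise<T> {
  const floor = new Promise<void>((resolve) => setTimeout(resolve, minMs));
  const [result] = await Promise.all([work, floor]);
  return result;
}

// Usage sketch: a submit handler that keeps the spinner up for at least 600ms.
async function handleSubmit(
  save: () => Promise<void>,
  showUi: (state: "spinner" | "confirmed") => void,
): Promise<void> {
  showUi("spinner");
  await withMinimumDuration(save(), 600);
  showUi("confirmed");
}
```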
Support tickets about "is this working?" dropped 40-60% across three implementations.
The other thing: I make errors specific enough to act on. "Email address invalid" beats "Something went wrong." "Server timeout – try again in 30 seconds" beats "Error." Users can handle problems. They can't handle mystery.
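In practice that's a small translation layer between raw failures and messages users can act on. A sketch – the error codes here are hypothetical, not any real API:

```typescript
// Sketch of turning raw failures into messages a user can act on.
// The codes ("TIMEOUT", "INVALID_EMAIL", ...) are hypothetical, not a real contract.
type KnownFailure = { code: string; detail?: string };

function describeError(failure: KnownFailure): string {
  switch (failure.code) {
    case "TIMEOUT":
      return "Server timeout - try again in 30 seconds.";
    case "INVALID_EMAIL":
      return "Email address invalid - check for typos and resubmit.";
    case "OFFLINE":
      return "Can't reach the server - check your connection and try again.";
    default:
      // Last resort - still more useful than a bare "Something went wrong."
      return "Unexpected error. Retrying usually works; if it doesn't, contact support.";
  }
}
```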
⸻
Three tests that reveal what founders can't see
After four projects where "feels broken" meant "interface doesn't signal what's happening," I developed three diagnostics I run before designing anything. Costs clients 2 hours upfront. Saves them 6 months of "why does this feel wrong" support tickets after launch.
- The five-second impression test. Show someone the product for five seconds. Not interactive – just looking. Ask: "Does this feel finished?"
Hesitation means something in it reads as incomplete. "It looks fine but..." means they're sensing interaction gaps. Immediate yes? Watch their face, not their words. (People lie to be polite. Faces don't.)
- The broken internet test. Throttle the connection. Use the product. Does it explain what's happening? Or just sit there looking frozen?
Most products optimize for perfect conditions. Users experience broken networks, slow servers, congested CDNs. This is where "too fast" products die hardest – zero explanation when infrastructure fails.
- The state audit. List every possible state for every interactive element – default, hover, focus, active, disabled, loading, error, success – and design all of them. (One way to make that enforceable is sketched just below.)
Some products design three states and ship with seven. The missing four just... happen. Usually badly.
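A sketch of what that enforcement can look like, with hypothetical class names – the compiler refuses to build if a state in the list goes unhandled:

```typescript
// One way to make the state audit stick: enumerate the states as a union type,
// so the compiler flags any state you listed but forgot to design for.
// The CSS class names below are hypothetical.
type ElementState =
  | "default" | "hover" | "focus" | "active"
  | "disabled" | "loading" | "error" | "success";

function classFor(state: ElementState): string {
  switch (state) {
    case "default":  return "btn";
    case "hover":    return "btn btn--hover";
    case "focus":    return "btn btn--focus";
    case "active":   return "btn btn--active";
    case "disabled": return "btn btn--disabled";
    case "loading":  return "btn btn--loading";
    case "error":    return "btn btn--error";
    case "success":  return "btn btn--success";
    default: {
      // If a state is added to the union and not handled above, this stops compiling.
      const unhandled: never = state;
      return unhandled;
    }
  }
}
```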
Run these on your product this week. If you find missing states or silent actions, fix those before building new features. Trust signals cost less than new capabilities and convert better.
⸻
Trust problems don't throw errors
When users say a product is broken, they're describing interface problems in functionality language. "Broken" means "doesn't behave how I expect reliable things to behave."
Reliable things confirm actions. Show progress. Explain errors. Stay consistent.
A product might work perfectly behind the scenes. But if the interface doesn't signal that work, users experience silence as failure.
Trust problems don't show up in error logs. They show up in churn surveys saying "found something more reliable" – while the logs stay perfectly, uselessly clean.