Most design systems I audit have pristine adoption metrics.

Sarah's was no different. She pulled up the Q3 metrics deck. Slide 4: "84% library adoption across 12 teams." Component usage trending up. Documentation hits climbing. Every line green.

"We're finally seeing real adoption," she said.

"Can I see your team's Slack?" I asked.

She hesitated. That's when I knew.

I ran an anonymous survey of her 23 developers. Satisfaction: 4.2 out of 10. Would they choose to use it: 23% yes. Does it save time: 67% said no, 19% said "technically yes, but actually no."

Dashboard: 84% adoption. Reality: nobody would choose this if they had options.

The satisfaction cliff nobody measures

Design system metrics track installation, not experience. It's like measuring software success by download counts instead of daily active users—everyone installed it, but are they opening it?

Here's what happens at every company: satisfaction slides month after month, and the dashboard never flinches. At month 12 it still shows 84%. Nobody uninstalled anything.

At Sarah's company, Month 1 satisfaction was 68%. Month 6: 41%. Month 14: 31%. Adoption over the same period: 71%, 78%, 84%. Perfect inverse correlation. One of these trends predicted the system's future.

What "successful adoption" actually costs

Here's the math from Sarah's company. Mid-size B2B, 23 developers, 6 designers. "Strong adoption" by every metric they tracked.

There were costs they tracked and costs they didn't. Together: $468K annually.

The original business case promised 30% efficiency gains—1,104 hours saved monthly, $1.05M in annual value.

That assumed developers would prefer the system over building custom. When I asked them directly, 67% said it cost them time, not saved it.

Real math: $468K annual cost for a system that makes two-thirds of developers less productive. The dashboard showed 84% adoption. It just measured compliance, not value.
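
If you want to sanity-check those figures yourself, the arithmetic is short. In the sketch below, the 160 working hours per developer per month and the roughly $79 blended hourly rate are my assumptions, chosen so the published numbers reconcile; the 23 developers, the 30% claim, and the $468K come from Sarah's company.

```ts
// Back-of-the-envelope: promised value vs. measured cost.
// Assumptions (not from Sarah's deck): 160 working hours per dev per month,
// ~$79 blended hourly rate picked so the published figures reconcile.
const developers = 23;
const hoursPerDevPerMonth = 160;      // assumption
const promisedEfficiencyGain = 0.3;   // "30% efficiency gains"
const blendedHourlyRate = 79;         // assumption

const hoursSavedMonthly =
  developers * hoursPerDevPerMonth * promisedEfficiencyGain; // 1,104 hours

const promisedAnnualValue =
  hoursSavedMonthly * 12 * blendedHourlyRate; // ~$1.05M

const actualAnnualCost = 468_000; // tracked + untracked costs

console.log({ hoursSavedMonthly, promisedAnnualValue, actualAnnualCost });
// { hoursSavedMonthly: 1104, promisedAnnualValue: 1046592, actualAnnualCost: 468000 }
```

Swap in your own rate; the shape of the gap doesn't change.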

The workaround economy

I was helping one of Sarah's developers debug a production issue. While reviewing the code, I noticed he'd imported their Button component but overridden every style property. The component was an empty wrapper.

"Why not use it as designed?" I asked.

"Oh, I do use it. Technically. Shows up in the tracker."

He opened Slack. Private channel: "#system-workarounds." Created six weeks after launch. All 23 engineers. 280+ messages.

The channel had structure—pinned posts, categories, a running list of "components that don't work" with fixes. Someone had built better documentation than the official docs.

Sample messages:

"Import ButtonPrimary + override everything = tracker happy"

"Modal breaks when nested, fix: [8 lines of CSS that shouldn't exist]"

"Table dies at 8 columns, build custom, import the corpse for compliance"

"Friday's the design system check, remember to import stuff you don't use"

When I asked why they didn't report these to the design system team: "We do. Every week in office hours. Nothing changes because every fix breaks something else or violates some design principle we don't understand."
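
That first pinned message describes the same pattern I'd seen in his production code: import the component so the tracker counts it, then override everything it does. A hypothetical reconstruction (the package and component names are made up, not their actual code):

```tsx
// Hypothetical reconstruction of the "import it, then override everything"
// pattern. Package and component names are illustrative.
import { ButtonPrimary } from "@acme/design-system";

export function SaveButton({ onClick }: { onClick: () => void }) {
  return (
    <ButtonPrimary
      onClick={onClick}
      // Every visual decision the component makes gets thrown away here,
      // but the import alone is enough for the usage tracker to count it.
      style={{
        all: "unset",             // wipe the design system's styling
        padding: "8px 16px",
        borderRadius: 6,
        background: "#2563eb",    // the team's own color, not the token
        color: "#fff",
        cursor: "pointer",
      }}
    >
      Save
    </ButtonPrimary>
  );
}
```

From the tracker's point of view, that's a ButtonPrimary in use. From the developer's point of view, it's a custom button wearing a compliance badge.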

The adoption metric: 84% component reuse. The reality: maybe 40% used as designed. The rest imported for tracking, then rebuilt, modified, or abandoned.

What to measure instead

Stop checking the adoption dashboard. Start tracking developer preference over time.

Questions that predict success:

Does the system save time or cost time? Don't ask once at launch. Ask monthly. Track the trend. Sarah's developers started at 60% positive in month one. Month six: 31%. The adoption rate over the same period climbed from 71% to 78%, on its way to 84%. One of these numbers predicted failure.

Would they choose it without a mandate? The answer is in behavior, not surveys. At Sarah's company, I checked projects where the design system wasn't required. Usage dropped to 31%. That's the real adoption rate—when people have a choice.

Are there workaround channels? If developers built parallel support systems, you've already failed. The #system-workarounds channel with 280 messages isn't an early warning—you're discovering the problem late.

The one metric that matters: developer NPS over time

Track it monthly. Make it anonymous. If NPS declines while adoption increases, you're measuring compliance, not value.
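
If NPS is new to you, the arithmetic is simple: the share of promoters minus the share of detractors, on a 0-10 "would you recommend this?" scale. A minimal version:

```ts
// Standard NPS on 0-10 scores: promoters are 9-10, detractors are 0-6.
function nps(scores: number[]): number {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// Run it on each month's anonymous scores and keep the history.
// The direction of the line matters more than any single reading.
```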

Sarah's team now tracks three questions:

  1. Satisfaction with the system (1-10)
  2. Does it save or cost you time (multiple choice)
  3. Would you use it if not required (yes/no)

First honest month: 4.2/10, 67% say it costs time, 23% would choose it voluntarily.
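
Tracking those three questions doesn't need tooling beyond a spreadsheet, but if the responses land in code, the monthly rollup is a few lines. A minimal sketch; the field names and types are mine, not Sarah's:

```ts
// Monthly rollup of the three questions. The Response shape is illustrative.
type Response = {
  satisfaction: number;                       // 1-10
  timeImpact: "saves" | "costs" | "neutral";  // multiple choice
  wouldUseVoluntarily: boolean;               // yes/no
};

function monthlyRollup(responses: Response[]) {
  const n = responses.length;
  return {
    avgSatisfaction:
      responses.reduce((sum, r) => sum + r.satisfaction, 0) / n,
    pctCostsTime:
      (100 * responses.filter((r) => r.timeImpact === "costs").length) / n,
    pctVoluntary:
      (100 * responses.filter((r) => r.wouldUseVoluntarily).length) / n,
  };
}

// Put each month's rollup next to the adoption dashboard. Rising adoption
// with falling satisfaction is the compliance-not-value signature.
```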

Red flags your dashboard won't show: satisfaction trending down while adoption trends up, a workaround channel busier than the official docs, components imported and then overridden into empty wrappers, usage that collapses on projects where the system isn't mandated.

Design systems fail slowly, then suddenly. Metrics look pristine while satisfaction collapses. By the time you notice, you've spent a year building adoption strategies instead of fixing usability problems.

The simplest test: if you deleted this system tomorrow, how many developers would be relieved versus devastated? At Sarah's company: 18 out of 23 would be relieved. The dashboard showed 84%. One of these numbers tells you if the system works.

The uncomfortable truth

Most design systems I audit: pristine adoption metrics, miserable developers. Everyone uses them. Nobody would choose to.

If your system needs adoption strategies, governance committees, mandatory training, and weekly office hours just to maintain usage, it's not solving problems—it's creating them. Good tools get adopted because they're obviously better. Bad tools get "adopted" because someone mandated it and questioning mandates is politically expensive.

Your dashboard shows success. Your developers are building workaround docs at 2am. One of these tells you if your system works.

The developer experience tax is real. You're paying it every month in productivity loss and hidden friction. You're just not measuring it yet.