We built one platform. They experience six.

I thought I understood how our data platform worked. I built half of it. I wrote the dbt models, configured the orchestration, set up the semantic layer, and documented the governance policies. If anyone knew how this system operated, it should have been me.

Then I started talking to the people who actually use it.

This wasn't a planned research project. It started because a new hire asked me a question during her onboarding: "Can you walk me through how the data platform works?" I started explaining and realized, about three minutes in, that I was describing the system as I designed it. Not the system as people experience it.

So I did something I should have done a long time ago. I scheduled 30-minute conversations with six people across different teams who regularly interact with our data platform. Same opening question for each: "Describe our data platform to me as if I've never seen it."

I didn't correct anyone. I didn't fill in gaps. I just listened.

What I heard was six completely different systems.

The Product Manager

She described the platform as "three dashboards and a Slack channel."

When I asked her to elaborate, she explained that her entire experience with our data infrastructure consists of three Looker dashboards (one for activation metrics, one for retention, one for feature adoption) and a Slack channel where she posts questions when the dashboards don't answer them.

She doesn't know what dbt is. She doesn't know we have a warehouse. She has never seen a data model or a DAG. She's used the self-serve Explore interface exactly once, got confused by a join that produced duplicated rows, and never went back.

Her mental model of data quality is: "If the dashboard loads and the numbers look reasonable, the data is fine." When I asked how she would know if the data was wrong, she paused for a long time and said, "I guess I wouldn't. Unless the number was obviously crazy."

She trusts the platform completely. She has no mechanism to verify that trust. And she's making product decisions weekly based on numbers she cannot independently validate.

The Data Analyst

He described the platform as "a warehouse with some models on top, surrounded by a mess."

He knows the warehouse schema intimately. He knows which tables are reliable and which ones are "probably fine, but I always double-check." He has a personal list of tables he trusts and tables he avoids, built entirely from experience, not from any documentation or certification system we've provided.

When I asked him about the semantic layer, he said, "I know it exists. I don't use it. It's faster to write SQL." He writes raw SQL against the warehouse for 80% of his work. He uses dbt models for the other 20%, but only the ones he's personally reviewed.

His mental model of governance is entirely tribal. He knows that the revenue table was rebuilt six months ago and is reliable now. He knows the attribution table has a known issue with weekend data that hasn't been fixed. He knows these things because he was in the Slack threads when the problems were discovered. None of this knowledge is documented anywhere.

When I asked what would happen if he left the company, he laughed. "Someone would have a really bad first month."

The Finance Lead

She described the platform as "a source of numbers that I reconcile against our accounting system."

She doesn't trust the platform. Not because she's had bad experiences, but because she's a finance person and finance people don't trust any single source. Her workflow: pull numbers from the data platform, pull the same numbers from the ERP, compare them, investigate discrepancies, and use the ERP numbers for anything that goes to the board.

She's been doing this reconciliation for two years. The numbers agree about 85% of the time. The 15% discrepancy is usually due to timing (the data platform updates daily, while the ERP updates on close) or definitional differences (the platform counts pending orders, while the ERP doesn't).

When I asked if she'd ever told the data team about the discrepancies, she said: "I assumed you knew." We didn't. For two years, finance has been silently working around a 15% reconciliation gap that we could have fixed with a filter and a timestamp alignment.
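To make the fix concrete, here is a minimal sketch of what "a filter and a timestamp alignment" means in practice. The table name, columns, and figures are all hypothetical; the point is that both sources of discrepancy the post describes (pending orders and the daily-refresh vs. monthly-close timing) are closable with two lines of SQL.

```python
import sqlite3

# Hypothetical schema: the platform's orders table includes pending orders
# and reports revenue as of the daily refresh; the ERP books only completed
# orders as of the monthly close.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE platform_orders (
        order_id INTEGER, amount REAL, status TEXT, booked_at TEXT
    );
    INSERT INTO platform_orders VALUES
        (1, 100.0, 'completed', '2024-01-15'),
        (2,  50.0, 'pending',   '2024-01-20'),  -- the ERP excludes pending
        (3,  75.0, 'completed', '2024-02-02');  -- lands after the Jan close
""")

close_date = '2024-01-31'  # assumed monthly close boundary

# Unreconciled platform number: everything, pending and post-close included.
raw = conn.execute("SELECT SUM(amount) FROM platform_orders").fetchone()[0]

# Reconciled number: the filter (status) plus the timestamp alignment (close date).
aligned = conn.execute("""
    SELECT SUM(amount) FROM platform_orders
    WHERE status = 'completed' AND booked_at <= ?
""", (close_date,)).fetchone()[0]

print(raw)      # 225.0
print(aligned)  # 100.0 -- what the ERP would book for January
```

Two years of silent reconciliation, and the gap reduces to one WHERE clause, once someone says it out loud.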

Her mental model of the platform is: "It's useful for directional trends. I would never use it for anything a CFO has to sign."

The Sales Ops Lead

He described the platform as "something the data team built that I pull CSVs from."

His entire interaction with the platform is a weekly export. Every Monday morning, he opens a saved Looker query, downloads the results as a CSV, pastes them into a Google Sheet, adds his own formulas and conditional formatting, and distributes the results to the sales team.

The Looker query is one I built two years ago. He's never modified it. He doesn't know it can be modified. When I asked why he exports to a spreadsheet instead of sharing the Looker link, he said, "The sales reps don't have Looker access. And even if they did, they wouldn't know what to do with it."

His mental model of the platform is a vending machine. He submits a request (clicks the saved query), receives the product (the CSV), and does the real work elsewhere entirely.

He also told me something I hadn't considered: three of his formulas in the Google Sheet apply business logic that duplicates, and in one case contradicts, logic in our dbt models. He's calculating the win rate differently from how we are. The sales team has been using his number. The dashboards show ours. Nobody noticed until I asked.
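Neither of the actual win-rate definitions appears in this post, so the two below are invented for illustration, but they show the class of divergence: two formulas that agree on the raw data and disagree on the denominator.

```python
# Hypothetical opportunity data; 'disqualified' is the contested category.
opportunities = [
    {"id": 1, "stage": "won"},
    {"id": 2, "stage": "lost"},
    {"id": 3, "stage": "won"},
    {"id": 4, "stage": "disqualified"},
    {"id": 5, "stage": "lost"},
]

def win_rate_platform(opps):
    """Assumed dbt-model definition: won / all closed, disqualified included."""
    closed = [o for o in opps if o["stage"] in ("won", "lost", "disqualified")]
    return sum(o["stage"] == "won" for o in closed) / len(closed)

def win_rate_sheet(opps):
    """Assumed spreadsheet definition: won / (won + lost), disqualified excluded."""
    decided = [o for o in opps if o["stage"] in ("won", "lost")]
    return sum(o["stage"] == "won" for o in decided) / len(decided)

print(win_rate_platform(opportunities))  # 0.4 -- the dashboard's number
print(win_rate_sheet(opportunities))     # 0.5 -- the sales team's number
```

Same rows, same stages, a ten-point gap. Neither number is wrong; they answer different questions, and nobody had compared them.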

The Data Engineer

She described the platform as "an orchestration layer, a warehouse, a transformation layer, and seventeen things that could break at any time."

Her mental model is the most technically accurate, but it's also the most anxious. She sees the platform as infrastructure, not as a product. She thinks about uptime, data freshness, pipeline failures, and cost optimization. She does not think about whether the numbers are correct in a business sense. That's the analysts' job.

When I asked her about data governance, she pointed to the dbt tests. "If the tests pass, the data is good." I asked what the tests cover. She listed schema tests, not-null checks, uniqueness constraints, and a handful of custom tests for known edge cases. I asked if there were tests for business-logic correctness, for example, whether the revenue number matches the finance team's expectation. She said: "That's not really a data engineering concern."
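The gap between those two kinds of test is easy to demonstrate. Here is a minimal sketch: checks in the style of dbt's not_null and unique tests, run against a hypothetical revenue table whose total is wrong in exactly the way the finance lead keeps finding. The figures are invented.

```python
# Hypothetical revenue rows; order 2 is a pending order that got booked anyway.
rows = [
    {"order_id": 1, "revenue": 100.0},
    {"order_id": 2, "revenue": 50.0},
    {"order_id": 3, "revenue": 75.0},
]

# Schema-level tests: every value present, every key unique. These all pass.
assert all(r["order_id"] is not None and r["revenue"] is not None for r in rows)
assert len({r["order_id"] for r in rows}) == len(rows)

# Business-logic test: does the total match finance's expectation?
# (Hypothetical figure; in practice this would come from the ERP at close.)
finance_expected = 175.0
platform_total = sum(r["revenue"] for r in rows)
print(platform_total == finance_expected)  # False -- and no schema test will ever say so
```

The data is complete, unique, and wrong. That is the category of failure the existing test suite cannot see.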

She's right in a strict role-definition sense. She's wrong in every practical sense. The finance lead reconciles against the ERP every month because no one on the engineering side tests for business-level accuracy. But the engineer doesn't know this because she's never talked to the finance lead about it.

The New Hire

She described the platform as "overwhelming."

She's been at the company for three weeks. She has access to Looker, read access to the warehouse, and a Confluence page titled "Data Platform Overview" that was last updated fourteen months ago.

Her mental model is a fog. She knows the platform exists. She knows there are dashboards. She's been told to "look at the data" for a project she's working on. She doesn't know which dashboards are current, which tables are trustworthy, who to ask, or where the documentation lives (the Confluence page she found describes a schema that has since been restructured).

She has done what any reasonable person in her position would do: she found a colleague's saved Looker query, copied it, modified it slightly for her use case, and assumed the output was correct. She doesn't know that the query she copied uses a table that was deprecated three months ago and redirected to a view that excludes a subset of data the original query was designed to include.

She'll find out eventually. Probably when someone questions her numbers in a meeting. Until then, she's building analysis on a foundation she has no way to evaluate.

The Map

| Person | Sees the platform as | Trusts it? | Validates data? | Knows the governance? |
|---|---|---|---|---|
| Product Manager | 3 dashboards + Slack | Completely | Never | No |
| Data Analyst | Warehouse + tribal knowledge | Selectively | Constantly (manually) | Yes (undocumented) |
| Finance Lead | A source to reconcile against | Partially | Always (against ERP) | No |
| Sales Ops Lead | A CSV vending machine | Doesn't think about it | Never | No |
| Data Engineer | Infrastructure to keep alive | Trusts the tests | Schema-level only | Thinks she does |
| New Hire | A fog | Assumes so | Can't | No |

What I Learned That I Didn't Expect

We don't have one data platform. We have six. Each person has constructed a mental model of the system based on their access, their role, their history, and the problems they've personally encountered. These mental models diverge dramatically. The product manager's "three dashboards" and the data engineer's "seventeen things that could break" are not simplified and detailed versions of the same system. They are different systems that happen to share infrastructure.

The governance exists in one person's head. The data analyst is our de facto governance layer. His tribal knowledge of which tables to trust, which known issues exist, and which workarounds to apply is the closest thing we have to operational governance. It's not in the documentation. It's not in the semantic layer. It's not in the tests. It's in Slack threads he remembers and the institutional context he's accumulated over two years. When he leaves, that governance leaves with him.

Nobody talks to each other about the data. The finance lead has been reconciling numbers for two years without telling us about the discrepancies. The sales ops lead has been calculating the win rate differently, without realizing we calculate it too. The product manager has no way to validate the numbers on which she bases her decisions. These aren't communication failures in the usual sense. Nobody is refusing to communicate. The platform doesn't provide any reason or mechanism for cross-functional data conversations.

If an AI agent tried to navigate this platform, it would fail. Not because the data is bad or the tools are wrong. Because the context required to use the platform correctly lives in six different human heads and zero machine-readable locations. The analyst's trust list, the finance lead's reconciliation workflow, and the engineer's knowledge of deprecated tables. An AI agent would have none of this. It would behave like the new hire: find the nearest plausible data source, assume it's correct, and build confidently on a foundation it cannot evaluate.
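What would it mean for that context to live in a machine-readable location? A hypothetical sketch, using details drawn from the conversations above (the names, trust levels, and registry shape are all invented): the analyst's trust list as data rather than memory.

```python
# Hypothetical table registry: the analyst's tribal knowledge, written down.
table_registry = {
    "analytics.revenue": {
        "trust": "high",
        "note": "rebuilt six months ago; reliable since",
    },
    "analytics.attribution": {
        "trust": "caveated",
        "note": "known issue with weekend data; unfixed",
    },
    "analytics.orders_legacy": {
        "trust": "deprecated",
        "note": "redirected to a view that excludes a subset of rows",
    },
}

def check_table(name):
    """What an agent (or a new hire) could ask before querying a table."""
    entry = table_registry.get(name)
    if entry is None:
        return "unknown table: treat output as unverified"
    return f"{entry['trust']}: {entry['note']}"

print(check_table("analytics.attribution"))  # caveated: known issue with weekend data; unfixed
```

None of this is sophisticated. The hard part isn't the data structure; it's that nobody has ever been asked to write the knowledge down.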

This is the real governance gap. Not policies. Not tools. Not semantic layers. The gap between what the platform is and what each person believes it is. Until that gap closes, every layer we build on top of it (including AI) inherits the confusion beneath it.

What I'd Tell My Past Self

Stop documenting how the platform works. Start documenting how people think it works. The gap between those two things is where every governance failure lives.

And sit with your users. Not in a feedback session. Not in a requirements meeting. In their actual workflow, watching them actually use the thing you built. You will learn more about your platform in six conversations than in six months of monitoring dashboards.

The system isn't what you built. The system is what they experience.