“If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” - Attributed to Albert Einstein
The most expensive code I ever wrote solved the wrong problem perfectly.
It was elegant. Well-tested. Performant. Completely useless. Because I’d started coding before I understood what I was actually trying to solve. I’d taken stakeholder requests at face value, translated them into technical requirements, and built exactly what was asked for.
Six weeks of work. Thousands of lines of code. And in the first week after launch, I watched users struggle, work around the system, and eventually go back to their old manual processes. The software worked perfectly. It just didn’t solve their actual problem.
This is the trap that catches engineers at every level: we love solving technical problems so much that we forget to question whether we’re solving the right problem at all.
The Principle: Understand the Problem Deeply Before You Touch a Keyboard
Here’s what separates senior engineers from everyone else: we’ve been burned enough times to know that the hardest part of engineering isn’t writing code, it’s figuring out what to build.
A stakeholder says “we need real-time notifications.” We hear a technical requirement. We think about WebSockets, push services, polling strategies. We get excited about the implementation.
But what if the real problem isn’t real-time updates? What if it’s that users feel out of the loop and anxious about missing important information? What if the solution is better email summaries, not another push notification channel?
What if “we need faster performance” actually means “users are getting frustrated during peak hours” and the solution is better progress indicators, not server optimization?
What if “we need more features” actually means “users can’t figure out how to use the features we already have” and the solution is better UX, not more development?
The technical solution to each of these problems looks completely different. And if you build the wrong thing brilliantly, you’ve accomplished nothing except wasting time and budget.
The Cost of Starting Too Soon
I’ve watched teams fall into predictable patterns:
Pattern 1: Solving yesterday’s problem. Stakeholders describe issues they’ve experienced. Engineers build solutions to those specific scenarios. By the time the software ships, the context has changed. The solution is technically perfect but strategically obsolete.
Pattern 2: Building what’s asked instead of what’s needed. A department head requests a reporting dashboard. Engineers build it exactly to spec. It goes unused because what they actually needed was automated alerts when metrics hit thresholds, not another dashboard to check manually.
Pattern 3: Over-engineering for imagined scale. “This needs to handle millions of users.” Engineers build a distributed system with complex caching and load balancing. The product launches to 100 users and never grows. The complexity becomes a maintenance burden that slows every future change.
Pattern 4: Ignoring constraints until it’s too late. Build a beautiful system that requires constant internet connectivity. Deploy it in an environment with unreliable networks. Watch it fail spectacularly and expensively.
All of these failures have the same root cause: starting with solutions instead of understanding problems.
What Discovery Actually Looks Like
The best projects I’ve been part of shared a common pattern: they started with investigation, not implementation.
Observe the work being done. Don’t schedule a requirements meeting in a conference room. Go to where users actually work. Watch what they do. See the interruptions, the workarounds, the pain points they’ve gotten so used to they don’t even mention them anymore.
I’ve seen teams waste months building features that addressed what stakeholders said in meetings, while missing the obvious problems visible in five minutes of watching actual work.
Ask “why” until you hit bedrock. Stakeholder: “We need automated reporting.” Why? “So managers can see performance metrics.” Why do they need to see them? “To make decisions about resource allocation.” What decisions specifically? “Whether to hire more staff or redistribute work.” What information would help them make that decision?
By the fifth “why,” you’re usually at the actual problem, which often looks nothing like the initial request.
Talk to the people closest to the problem. Executives tell you what they think the problem is. The people doing the work every day tell you what the problem actually is. Both perspectives matter, but if you only hear from leadership, you’ll miss critical context.
The person who uses the system eight hours a day knows things about its problems that no executive will ever see. They’ve developed workarounds. They know which features are broken but officially “work.” They understand the real workflow, not the idealized version in process documents.
Understand constraints before proposing solutions. Budget, timeline, infrastructure, team capabilities, regulatory requirements: these aren’t obstacles to work around. They’re part of the problem definition.
A solution that requires three months when you have six weeks isn’t a solution. A solution that requires infrastructure investment beyond available budget isn’t a solution. A solution that requires skills your team doesn’t have and can’t acquire isn’t a solution.
Constraints clarify. They force creative thinking. They prevent over-engineering.
The Translation Skills Nobody Teaches You
Most of engineering education focuses on writing code. Almost none of it focuses on the skill that actually determines your career trajectory: translating between different languages that all sound like English but aren’t.
From vague stakeholder requests to concrete requirements:
- “Make it more user-friendly” → Which specific tasks are difficult? For which users? What does success look like?
- “Improve security” → Which assets need protecting? From what threats? At what cost? What’s acceptable risk?
- “We need better data” → What decisions are you trying to make? What information would help you make them?
From user complaints to technical opportunities:
- “This is confusing” → Is it information architecture? Visual design? Mental model mismatch? Missing documentation?
- “This is slow” → Which operations? For which users? Under what conditions? How slow is too slow?
- “This doesn’t work” → What were you trying to do? What happened instead? What did you expect?
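Turning “this is slow” into those answers usually means measuring before optimizing. Here’s a minimal sketch of that idea (the operation names and reporting format are hypothetical, not from any particular library): record per-operation latencies so “which operations, and how slow” becomes data rather than debate.

```python
import time
import statistics
from collections import defaultdict
from contextlib import contextmanager

# Hypothetical instrumentation: latency samples per named operation.
_samples = defaultdict(list)

@contextmanager
def timed(operation):
    """Record the wall-clock duration of an operation, in milliseconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        _samples[operation].append((time.perf_counter() - start) * 1000)

def report():
    """Summarize median and worst-case latency per operation."""
    return {
        op: {
            "count": len(ms),
            "p50_ms": statistics.median(ms),
            "max_ms": max(ms),
        }
        for op, ms in _samples.items()
    }

# Usage: wrap the operations users actually complain about.
with timed("report_export"):
    time.sleep(0.01)  # stand-in for the real work

print(report())
```

With numbers like these in hand, “how slow is too slow?” becomes a conversation about a specific percentile for a specific operation, under specific conditions.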
From technical constraints to business implications:
- “This will take three months” → What can we deliver in phases? What’s the MVP? What’s the cost of waiting versus the risk of rushing?
- “This requires infrastructure investment” → What’s the ROI? What happens if we don’t invest? What’s the cost of the status quo?
- “This creates technical debt” → What’s the interest rate? When will it come due? Is it strategic debt or sloppy shortcuts?
These translations happen in every project meeting, every stakeholder conversation, every code review discussion. Get them wrong and you build the wrong thing. Get them right and you solve real problems.
A Pattern I’ve Seen Work Across Domains
Whether I’ve been building examination systems for universities, transaction platforms for fintech, or compliance tools for healthcare, the same discovery pattern keeps emerging:
Week 1: Context Immersion
- Spend time where the work happens. No meetings in conference rooms.
- Watch people use existing tools. Note every friction point.
- Interview multiple stakeholder levels: executives, managers, individual contributors.
- Understand the domain: regulations, business models, competitive landscape.
- Review past attempts: What’s been tried? Why did it fail or succeed?
Week 2: Problem Definition
- Synthesize observations into problem statements.
- Validate problems with stakeholders: Is this actually the issue?
- Prioritize: Which problems matter most? Which are solvable?
- Define success metrics: How will we know if we’ve solved this?
- Map constraints: What can’t change? What must change?
Week 3: Solution Exploration
- Brainstorm multiple approaches. Resist picking the first idea.
- Prototype conversations, not code. Sketch ideas. Use words and drawings.
- Evaluate against constraints: What actually works here?
- Estimate costs: Time, money, complexity, maintenance burden.
- Pick the simplest solution that solves the core problem.
Week 4: Validation Before Building
- Mock up the solution (paper, wireframes, clickable prototypes).
- Test with actual users before writing production code.
- Adjust based on feedback while changes are still cheap.
- Finalize technical architecture only after validating the approach.
Four weeks of discovery before a single line of production code. This feels slow. It’s actually the fastest way to build the right thing.
The Red Flags I Watch For Now
After enough projects that went wrong, you develop pattern recognition for warning signs:
Nobody can articulate what success looks like. If stakeholders can’t define “done,” you’re building the wrong thing. Push until you get concrete, measurable outcomes.
The proposed solution is disproportionately complex for the stated problem. Either you don’t understand the actual problem, or someone’s over-engineering. Complex solutions should only emerge from genuinely complex problems.
Everyone assumes someone else understands the details. This is how requirement gaps hide until implementation. Force the conversation that makes gaps visible.
Users and stakeholders want fundamentally different things. This tension won’t resolve itself. Address it during discovery, not after launch.
The problem description keeps changing. This isn’t scope creep, it’s lack of clarity about what you’re actually solving. Stop and get aligned before proceeding.
Constraints are treated as negotiable. “We’ll figure out budget later.” “Maybe we can extend the timeline.” “Perhaps we can hire contractors.” No. Constraints are real. Design within them.
What High-Stakes Environments Teach You
In domains where errors have serious consequences, whether financial losses or patient safety risks, the problem-solving discipline becomes even more critical.
You learn to question requirements rigorously. When someone proposes eventual consistency in a financial system, you ask: “What happens if two people see different account balances? Who loses money when reconciliation fails? How do we prove what happened?”
When someone wants to skip a validation step to save time, you ask: “What’s the worst-case scenario? What’s the cost of that scenario? Is the time savings worth the risk?”
You learn to think in failure modes. Not just “will this work?” but “how will this fail? What happens when it does? Can we recover? How do we know it failed?”
These questions shape architecture. They force defensive design. They create systems that fail safely instead of catastrophically.
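As a sketch of what thinking in failure modes can look like in code (the payment call and retry policy here are hypothetical, not a real API): every outcome is explicit, transient failures are distinguished from fatal ones, and failure is logged rather than silent.

```python
import time
import logging
from enum import Enum, auto

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")

class Outcome(Enum):
    SUCCESS = auto()
    RETRYABLE = auto()   # transient: timeout, dependency unavailable
    FATAL = auto()       # permanent: invalid request, failed validation

class TransientError(Exception): pass
class ValidationError(Exception): pass

def charge_once(amount_cents):
    # Stand-in for a real payment call; an assumption for illustration.
    if amount_cents <= 0:
        raise ValidationError("amount must be positive")
    return {"status": "charged", "amount_cents": amount_cents}

def charge(amount_cents, max_attempts=3):
    """Answer the failure-mode questions explicitly: how does this fail,
    can we recover, and how do we know it failed?"""
    for attempt in range(1, max_attempts + 1):
        try:
            return Outcome.SUCCESS, charge_once(amount_cents)
        except TransientError as exc:
            log.warning("attempt %d failed (retryable): %s", attempt, exc)
            time.sleep(0)  # real backoff would go here
        except ValidationError as exc:
            log.error("fatal, not retrying: %s", exc)
            return Outcome.FATAL, None
    return Outcome.RETRYABLE, None  # retries exhausted; caller decides

outcome, result = charge(500)
print(outcome, result)
```

The point isn’t the retry loop; it’s that the caller can never confuse “it worked,” “it might work if we try again,” and “it will never work,” and every failure leaves a trace.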
You learn that perfect is the enemy of good enough. In critical systems, you need to ship. But shipping broken code has unacceptable consequences. The balance is: ship the simplest thing that solves the core problem safely, then iterate.
The Questions That Change Everything
When someone brings you a problem or feature request, here are the questions that cut through to what actually matters:
“What’s the job this needs to do?” Not features, not specifications. What does the user need to accomplish? Why?
“What happens if we do nothing?” Sometimes the best solution is no software at all. Understanding the cost of inaction clarifies whether the problem is worth solving.
“Who is this for, specifically?” “Users” is too vague. Executives? Individual contributors? Power users? Occasional users? Different audiences need different solutions.
“What have you tried already?” Learn from past attempts. Understand what didn’t work and why. Don’t repeat old mistakes.
“How will we know if this succeeded?” If you can’t measure success, you can’t know if you’ve solved the problem. Define metrics before building.
“What can’t change?” Understand the non-negotiable constraints. Build within them, don’t pretend they don’t exist.
“What’s the simplest version that solves the core problem?” Start here. Add complexity only when proven necessary.
When Understanding the Problem Is the Solution
Sometimes the act of deeply understanding a problem reveals that the solution isn’t software at all.
I’ve seen teams spend weeks investigating performance issues only to discover the real problem was unclear progress indicators. Users thought the system was slow. It was actually fast but gave no feedback during processing. The solution was better UX, not optimization.
I’ve seen requests for complex automation that dissolved when stakeholders realized their manual process was only painful because they’d never optimized it. The solution was process improvement, not software.
I’ve seen feature requests that disappeared when users realized an existing feature already solved their problem, they just hadn’t known it existed. The solution was better documentation and training.
The best code is sometimes no code. The best solution is sometimes not the one you build.
What This Changes About Your Career
Understanding that you’re a problem solver, not just a coder, changes everything about how you approach work:
You become more valuable. Engineers who only implement specifications are replaceable. Engineers who can identify and solve the right problems are essential.
You have better conversations with stakeholders. Instead of just taking orders, you can push back, ask questions, propose alternatives. You become a trusted advisor, not just a vendor.
You build less and accomplish more. When you solve the right problems, your code has impact. When you solve the wrong problems, your code gets abandoned.
You avoid career-limiting mistakes. Building the wrong thing perfectly doesn’t advance your career. It brands you as someone who can’t see the bigger picture.
You earn the right to make architectural decisions. When you’ve proven you understand problems deeply, people trust you to design solutions appropriately.
This isn’t about being difficult or slowing things down. It’s about being responsible with the considerable resources that go into software development: time, money, opportunity cost, and maintenance burden.
What problem are you solving today? Not the technical problem, the human problem behind it. How would you know if you’ve solved it? And have you spent more time understanding the problem or jumping to solutions?