Over the last two years, something subtle has shifted in software engineering, and most of us have felt it even if we can't name it. Teams report being dramatically more productive. Code is written faster, prototypes appear sooner, and the distance from idea to working system has collapsed. And yet, the software we rely on every day doesn't feel proportionally better. In many cases, it feels harder to understand, harder to change, and more fragile than before.
The feeling is familiar: everything works until you try to change it.
This isn't because AI tools are bad. They work remarkably well at what they optimize for: accelerating implementation. But software engineering was never primarily limited by how fast code could be written. The hard parts (understanding requirements, shaping systems, and managing complexity over time) haven't become easier. If anything, they've become more exposed now that nothing else is slowing you down.
What's changed is not the nature of software, but where the constraints live.
This piece continues my series on practical mental models for startup engineering, written for founders and engineers who want to ship fast without drowning in self-inflicted complexity.
Mental Model 1: Comprehension Debt
Software engineering was never constrained by how much code could be produced. It was constrained by how much change humans can cognitively absorb: how much complexity a team can understand, hold in their heads, reason about safely, and modify without breaking things they forgot existed.
Implementation speed used to be slow enough that this limit was invisible. The rate of code creation naturally stayed below the rate of human comprehension. You couldn't outpace your own understanding because building took too long.
AI broke that balance.
Code now generates faster than teams can absorb it. The production rate exceeds the comprehension rate, and the gap widens every sprint. As a result, systems grow beyond what any individual can hold in their head, beyond what documentation can capture, beyond what onboarding can transfer.
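To make the rate mismatch concrete, here is a toy back-of-the-envelope sketch. The numbers are invented purely for illustration, not measurements: the point is that when production consistently outruns comprehension, the unabsorbed remainder compounds sprint over sprint.

```python
# Toy model of comprehension debt. All numbers are hypothetical,
# chosen only to show how a steady rate gap compounds over time.
LINES_MERGED_PER_SPRINT = 12_000    # assumed production rate
LINES_ABSORBED_PER_SPRINT = 8_000   # assumed team comprehension rate

debt = 0
for sprint in range(1, 7):
    # Whatever the team cannot internalize this sprint carries over.
    debt += max(0, LINES_MERGED_PER_SPRINT - LINES_ABSORBED_PER_SPRINT)
    print(f"sprint {sprint}: ~{debt:,} unabsorbed lines")

# By sprint 6 the team owns ~24,000 lines nobody fully understands,
# and the gap keeps widening as long as the rates stay unbalanced.
```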
A newer way to frame this is 'comprehension debt': code added faster than the team can internalize it. The 2025 Ox Security report found that AI-generated code often lacks architectural judgment, producing bloated, hard-to-reason-about systems. The 2025 DORA insights likewise highlight validation overhead: reviewing AI output adds cognitive load that offsets much of the raw speed gain.
To remember:
- The true limit in software engineering is human comprehension — how much complexity a team can understand and manage safely.
- AI exposes this limit by making code production fast and cheap, allowing systems to grow faster than anyone can internalize.
- The constraint has moved from our tools to our minds.
Mental Model 2: AI Seduction
One effect I've noticed working with AI tools is what I'd call the seduction. When implementation feels almost free, teams naturally start saying yes to more ideas. A feature that once required days of careful planning now appears in hours after a simple prompt, so the temptation grows to add "just one more thing" because the cost seems tiny.
People begin to explore alternatives, build extra flexibility for future needs they only imagine, and polish details that nobody has asked for yet. This pattern feels highly productive in the moment, and that feeling is deeply seductive.
Yet each addition quietly increases what the team must understand and maintain. Much of it addresses problems that never actually arrive, so the system carries permanent weight for temporary wins. The effect shows up in several familiar practices that used to have natural constraints:
| Practice | 🔒 Previous constraint | 😏 AI Seduction | ⚠️ Result |
|---|---|---|---|
| Small Batches | Development speed naturally limited batch size | 🫦 "It's easy, let's add more..." | Change arrives faster than comprehension |
| YAGNI (You Aren't Gonna Need It) | Implementation cost suppressed speculation | 😈 "AI suggests improvements, why not?" | Speculative mass survives and entangles |
| Eliminate Waste | Manual effort made unused code painful | 🫦 "We'll clean it up later" | Dead code accumulates, mental surface grows |
| Team Learning | Slow progress forced shared understanding | 😈 "AI handles the details" | Surface knowledge, context decay |
Recent research confirms this pattern: the 2025 Ox Security report "Army of Juniors" describes how AI-generated code often works perfectly in isolation but systematically lacks architectural judgment, which leads to bloated systems that feel functional at first yet grow increasingly fragile.
The paradox is clear:
- AI removes the natural constraints that kept us lean, so to stay healthy we must recreate those constraints through radical discipline.
- Without that discipline, the seduction wins by default, and short-term velocity eventually collapses under hidden comprehension debt.
Mental Model 3: Judgment Is the Bottleneck
As we've already seen, AI accelerates implementation, not thinking. It moves you faster in whatever direction you already chose. But it doesn't choose the direction.
The mental shift happens here. When building was slow, you had time to notice mistakes during implementation, to correct course, and to learn from the results. The slowness created a buffer between a bad decision and its consequences. That buffer is gone.
Decisions become real almost immediately. If your judgment was wrong, you find out in production, not during development. So the real bottleneck now isn't how fast you can build; it's how well you can decide what should exist before AI builds it.
This explains why senior engineers gain the most from AI. They have accumulated hard-won instincts for refusing ideas that will later become burdens, for sensing when a system is approaching the limits of team comprehension, and for recognizing that something "easy to add now" often means "painful to live with forever." Junior engineers struggle more because those instincts require years of real-world repetition, and AI cannot shorten that timeline.
This mental shift shows up in several practical changes that successful teams adopt:
1. Prompt reviews replace traditional pull requests. The plan prompt that generated the code reveals the true intent far better than the code itself. Align on the prompt first, making sure the goal is clear and minimal before any code appears.
2. Architecture conversations replace line-by-line code reviews. Focus on bounded contexts, boundaries, dependencies, and irreversible decisions. Ask whether the change respects the system's overall shape and whether the team will still understand it months from now.
3. Close the verification loop. Design workflows so AI agents can confirm their own success through automated checks: compiling, linting, running tests, and ideally validating outcomes. Invest time in crafting reusable prompts and AI skills. If an agent cannot reliably verify its work, we end up babysitting it; a minimal sketch of such a gate follows this list.
4. Care about outcomes, not clever implementation. Most modern code is boring data transformations. Surprisingly, the engineers who thrive in this environment are the ones who always prioritized shipping valuable products over solving LeetCode puzzles.
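As referenced in point 3, here is a minimal sketch of a self-verification gate an agent could run after each change. The specific tools (ruff, pytest) and the `src` path are assumptions for illustration; substitute whatever build, lint, and test commands your project actually uses.

```python
import subprocess
import sys

# Hypothetical verification gate for an AI agent's changes.
# Tool choices (ruff, pytest) and the "src" path are assumptions;
# swap in your project's real build/lint/test commands.
CHECKS = [
    ("build", [sys.executable, "-m", "compileall", "-q", "src"]),
    ("lint", ["ruff", "check", "src"]),
    ("tests", ["pytest", "-q"]),
]

def verify() -> bool:
    """Run every check in order; fail fast so the agent gets a clear signal."""
    for name, cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"[{name}] FAILED")
            print(result.stdout + result.stderr)
            return False
        print(f"[{name}] ok")
    return True

if __name__ == "__main__":
    # Exit code 0 means the agent may report success; anything else
    # means it keeps iterating instead of handing work back to a human.
    raise SystemExit(0 if verify() else 1)
```

The specific commands matter less than the contract: the agent only reports success when the gate passes, so humans review intent and architecture instead of babysitting mechanics.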
Bottom Line
The real bottleneck is now the human mind: how much change a person can understand, reason about, and safely evolve. As implementation becomes effortless, success depends on recognizing this shift and refocusing engineering effort on judgment, restraint, and understanding. AI didn't remove the limits of software engineering; it relocated them.