It's 11 PM on a Tuesday and you're four hours into building a web app that nobody asked for. You started with a vague idea after seeing someone's tweet about a gap in the market, opened Claude Code, described what you wanted in plain English, and watched a full-stack application materialize in front of you. The landing page looks clean. The auth works. There's a Stripe integration. You feel like you're building something real. You’re not.

What you're doing is chasing the most refined dopamine loop that software has ever produced.

The loop

The new generation of AI coding tools (Claude Code, Cursor, Replit, take your pick) have made it so trivially easy to go from idea to working prototype that the act of building has become its own reward. You speak into Wispr Flow, describe a feature in conversational English, and the code writes itself. You see a button appear on screen. You click it and it does the thing. Something in your brain lights up, the same something that lights up when you pull a slot machine and hear the coins drop.

The problem isn't that these tools are bad. They're genuinely remarkable, and learning to use them well is one of the highest-leverage skills you can develop right now. The problem is that the feedback loop is so tight and so satisfying that it becomes a substitute for the harder, slower, less dopaminergic work of figuring out whether what you're building actually matters.

I've done this to myself. I've spent entire weekends building personal agents, automation pipelines, side projects with beautiful dashboards and clever integrations. Some of them solved real problems I genuinely have (I built a whole relocation decision support platform because my wife and I were contemplating where we move after she graduates). Some of them solved problems that exist but that I never validated with anyone else. And some of them solved nothing at all, they were just excuses to keep the loop going. The tricky part is that all three feel exactly the same at 11 PM on a Saturday. The building felt productive. The output was real code, deployed to real infrastructure, doing real things. But I never stopped long enough to ask which category I was in, and by Monday morning, the answer was usually the third one.

The illusion of progress

Vibe coding looks and feels exactly like productive work. You're writing code (sort of). You're shipping features. You're learning new tools. You can show people a working demo. All the surface-level indicators of progress are there, and none of the substance.

If you're building a company, the substance is whether anyone will pay for what you're making. If you're building a personal tool, the substance is whether it saves you meaningful time on something you actually do repeatedly. If you're building for your career, the substance is whether the skill you're developing has durable value.

The market moves so fast now that the half-life of a side project is approaching zero. You might just be getting good at prompting a model that will be obsolete in 18 months. You can spend three weeks building an AI wrapper around some API, and by the time you're ready to show it to anyone, four YC companies have launched the same thing with better distribution and actual funding. The speed that makes vibe coding feel powerful is the same speed that makes your output disposable. Everyone has access to the same tools. The bottleneck was never the code.

When to let go

The hardest skill in this environment isn't building. Building is the easy part now (that's the whole point). The hard skill is knowing when to stop. Knowing when the project you started three weeks ago has taught you what it's going to teach you and the remaining work is just momentum, not intention. Knowing when you're spending Saturday on a side project because you have genuine conviction about it versus because opening your laptop and talking to an AI feels better than the ambiguity of not having a clear next move in your career.

This requires a kind of self-honesty that the tools actively work against. Every AI coding assistant is optimized to keep you building. That's the product. You describe something, it builds it, you feel good, you describe the next thing. There's no moment in that loop where Claude Code stops and asks you whether this is a good use of your finite time on earth. That reflection has to come from you, and it has to come regularly, not just when you burn out.

I've started asking myself a simple question before I open my terminal on a weekend: "If I couldn't use AI tools for this, would I still think it was worth building?" If the answer is no, that tells me the tool is the draw, not the outcome. And that's a signal to close the laptop and go outside.

The security problem nobody's talking about

There's a second dimension to this that's less personal and more structural. When everyone can build software, everyone builds software. And most of it is insecure.

The vibe coding wave has produced an explosion of personal agents, automation scripts, self-hosted tools, and weekend SaaS products that handle real data, make real API calls, store real credentials, and have been reviewed by exactly zero security-conscious humans. The person who built it asked an AI to "add authentication" and got something that looks like auth, passes the visual check, and might have three injection vectors that nobody will find until someone finds them.

Platforms like OpenClaw are making it even easier to build and deploy personal agents that interact with your email, your calendar, your files, your banking APIs. The capability is genuinely exciting. The security posture of the average project built on these platforms is genuinely terrifying. Most people building personal agents have never thought about token storage, input sanitization, or what happens when your agent's context window gets poisoned by a malicious email it was asked to summarize.

This isn't a reason to stop building. But it's a reason to slow down enough to understand what your code is actually doing, even if (especially if) an AI wrote it for you. The dopamine loop doesn't reward security review. It rewards shipping the next feature.

Touch grass

I realize this entire post might sound hypocritical coming from someone who literally runs autonomous AI agents on a cron job. I build with these tools every day. I think they're transformative. I think everyone should learn to use them.

But I've also caught myself at 1 AM on a weeknight, bleary-eyed, debugging an agent pipeline that automates something I do once a month manually, and I've had to ask myself: what am I actually doing here? Am I solving a problem or am I feeding a compulsion? Is this making my life better or is it just making my GitHub contribution graph greener?

The tools are incredible. The feeling of building with them is genuinely addictive. And like any addiction, the fix is not abstinence, it's awareness. Know why you're building. Know when to stop. Know the difference between a project that serves your goals and a project that just serves the loop.

And occasionally, close the laptop, leave your phone inside, and go stand in your backyard for ten minutes. The code will still be there when you get back. Probably with fewer bugs than if you'd kept going at 1 AM anyway.


I'm a Senior Manager at EY, getting back into writing. If you want to follow me, connect, or have a question to ask, feel free to reach out on LinkedIn.