This isn't theory. This is the battle-tested, plan-first playbook that took my project, Apparate, from zero to a production-ready app in about 30 days, a project I'd have budgeted eight months for traditionally.
Forget the hype. This "101" is the practical starting point, built solely on my real-world experience, emphasizing scoped workflows, brutal failure modes, and why your discipline is the only thing that makes AI coding reliable. You are the Tech Lead; Claude Code is your genius junior engineer: fast, brilliant, but requiring constant coaching and explicit rules.
⚡️ The Core Truth: Speed Is a Trap Without Checks
The biggest "aha" moment is the sheer velocity of code generation when Claude is directed precisely. The biggest disappointment? How quickly it can destroy a working codebase when granted unchecked autonomy.
- The Contrast: Claude is a race car. It will get you there fast, but it will drive off a cliff if you aren't the one steering.
- The Takeaway: Use the speed. Never grant it control over critical code. Reliability comes from verification, not autonomy.
This contrast shaped my core approach: exploit the speed, but never grant unchecked autonomy over important sections of the codebase.
⏳ The 30-Day App vs. the 8-Month Slog
My timeline: I launched a production app in 30 days with Claude Code and one "vibe coder" (me). The traditional path would have been an eight-month marathon of full-time development.
This acceleration isn't "AI magic." It's a direct result of process:
- Relentlessly scoping tasks to the smallest possible unit.
- Approving a plan (the "Plan" in the Plan-Act loop) before any code is written.
- Shipping in small slices that are quick to review (the "Act" in the Plan-Act loop).
Your mantra must be: plan first, ship in slices. Let speed amplify good process, never replace it.
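To make "smallest possible unit" concrete, here is the shape of the one-slice plan I approve before any code is written. The feature, function names, and numbers are hypothetical; the template is mine, not anything Claude Code requires:

```markdown
## Plan: Add pull-to-refresh to the activity feed (one slice)

**Scope:** the refresh gesture and one refresh call only; no caching changes.

**Steps:**
1. Wrap the feed list in a refresh container.
2. On pull, call the existing `refreshFeed()` use case.
3. On failure, reuse the existing error snackbar.

**Out of scope:** pagination, offline mode, analytics.

**Done when:** manual test passes and the diff stays under ~150 lines.
```

A slice this small is quick to review, quick to revert, and leaves no room for accidental scope creep.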
🚨 The Costliest Failure: The Dependency Death Loop
My worst spiral came while integrating an activity-detection API, where the system kept selecting non-compatible library versions and got stuck in an infinite loop of upgrading and downgrading without converging. Token burn and time wasted were astronomical.
- How I Broke It: I stopped the session immediately, reset the context completely, and reasserted the specific, pinned versions and compatibility requirements before letting it touch another file.
- The Lesson: If versions oscillate and suggestions repeat, stop immediately and re-verify constraints before you write another line.
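One way to make "pinned versions" enforceable rather than conversational is to keep them in a single file the agent may read but never edit. For an Android project like mine, a Gradle version catalog works well; the library names and version numbers below are illustrative, not my actual stack:

```toml
# gradle/libs.versions.toml — the single source of truth for versions.
# House rule: Claude may reference this file but must never modify it.
[versions]
kotlin = "1.9.24"                 # pinned; never "upgrade to fix" a compile error
playServicesLocation = "21.0.1"   # activity-detection APIs live here

[libraries]
play-services-location = { module = "com.google.android.gms:play-services-location", version.ref = "playServicesLocation" }
```

When the agent proposes a version change, the request has to surface as an explicit edit to this one file, which makes oscillation visible immediately instead of buried across build scripts.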
The most expensive mistake: skipping a written plan. It led straight to loops, token burn, and code churn. My standing rule now is simple: if there is no plan on the table, there is no code.
My Secret Weapon: The CLAUDE.md Contract
I don't just start coding. I follow a strict, end-to-end spec-writing process that creates a living rules-of-engagement document for the agent:
- Concept Refinement: Discuss the raw idea in Claude Desktop until a clear software architecture emerges.
- PRD Generation: Create a formal PRD from that refined discussion.
- Project Initialization: Give the PRD to Claude Code, initialize the project, and demand it generate a CLAUDE.md file based on the PRD.
- Enforcing House Rules: I then add my non-negotiable "house rules" to CLAUDE.md. This file is now the contract for how work gets done. It is the guardrail against drift, rework, and accidental scope creep.
The 7 Non-Negotiable Rules in My CLAUDE.md
These aren't suggestions; they are guardrails that define the coaching model for your AI junior engineer:
- Plan First: Wait for explicit user confirmation before writing code.
- Architecture: For major decisions, always present the top three options with a clear list of pros and cons.
- Security Gate: Route all changes through a security review agent before they can be accepted.
- Task Tracking: Write all TODO items to a dedicated file and let specialized agents update their status as work progresses.
- Efficiency: Use sub-agents for specialized tasks, coordinated by a Project Manager agent.
- Platform Authority: Read the official API documentation (e.g., Android API) before implementing platform-specific behaviors.
- Compatibility Check: Before checking in, verify all libraries are compatible and match the latest stable versions I have approved.
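Put together, these rules live as plain instructions in the repo. Here is a condensed sketch of what that section of my CLAUDE.md looks like; the wording is paraphrased, not the literal file:

```markdown
# CLAUDE.md — House Rules (non-negotiable)

1. Plan first. Present a written plan and WAIT for my explicit approval
   before writing any code.
2. For major architecture decisions, present the top three options with
   a clear list of pros and cons.
3. Route every change through the security-review agent before acceptance.
4. Write all TODO items to the dedicated task file; sub-agents update
   their status there as work progresses.
5. Use sub-agents for specialized tasks, coordinated by the Project
   Manager agent.
6. Read the official platform docs (e.g., the Android API) before
   implementing platform-specific behavior.
7. Before check-in, verify all libraries are mutually compatible and
   match the approved pinned versions.
```

Because Claude Code reads CLAUDE.md at the start of every session, the contract survives context resets and keeps each new session on the same rails.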
🧠 When to Stop Wrestling and Take a Break
I never accept a suggestion from Claude Code without challenging it, and I treat the agent like a junior engineer whose every step must be questioned.
- The Break Signal: You know it's time to take a break when Claude starts suggesting the same irrelevant changes or minor edits, and your core problem still isn't solved.
- My Rule: "STOP, TAKE A BREAK." Your human debugging skills are still the most critical part of the loop; Claude is not autonomous enough to debug safely without active oversight.
Reviewing AI-Generated Code: I use a review agent to generate a report, but I still prefer building in small chunks and reviewing manually. Chunking the work keeps the risk surface small and prevents long sessions from accumulating subtle errors. Even with a review agent, I assume responsibility for final judgment.
Guidance for Managers (A Laddered Rollout)
I didn't pitch this to leadership; I just shipped with it. For organizations, I recommend a laddered rollout to build trust and avoid culture shock:
- Phase 1 (Familiarity): Start by using Claude only for minor defects and bug fixes.
- Phase 2 (Expansion): Once the team is comfortable, expand to new feature development under the same plan-first/review rules.
- Phase 3 (Complex): Only then should you push into larger refactors and more complex features.
This laddered approach builds trust in the workflow and avoids expensive reversals.
Closing: This Is the Starting Point
This is your 101: my real-world experience, distilled into a format you can apply immediately. This is the Plan-Act Loop.
Next up, I'll be publishing the deeper dives that build on these foundations: Advanced Multi-Agent Verification, Security Guardrails, Cost Control, and the full Team Rollout strategy, all coming soon.
