**TL;DR:** I taught an AI agent to scaffold an entire Clean Architecture .NET solution in seconds. Here's what Agent Skills are, why they matter, and how this changes the way we think about AI-assisted development.
There's a moment every .NET developer knows too well. You've got a green-field project, a clear architectural vision, and you're ready to ship. Then you open Visual Studio and spend the next 15 minutes clicking through dialogs, creating solution folders, wiring up project references, and installing NuGet packages — before you've written a single line of business logic.
Multiply that by every new project, every new team member onboarding, every spike or prototype, and you're looking at a surprisingly significant chunk of wasted engineering time.
This is the problem I set out to solve with Agent Skills — and the results were worth writing about.
See This in Action (Video)
https://youtu.be/rnRmdNrDkmo?embedable=true
What Are Agent Skills?
Agent Skills are repeatable, teachable procedures for AI coding agents.
Here's the core idea: instead of prompting an AI for the same multi-step workflow over and over, you teach it the procedure once — iterating and correcting until it gets it right — and then save that workflow as a reusable Skill.
Think of it less like a prompt template and more like a recorded macro, but one that the AI understands contextually and can execute reliably.
Skills can range from trivially simple to genuinely complex:
- Simple: Rename a file according to a naming convention
- Medium: Scaffold a project folder structure
- Complex: Create a full multi-project .NET solution with references, packages, and architecture enforced
The key property that makes Skills powerful is reusability. Once an agent successfully learns a workflow, you never have to explain it again.
Another important property — ideally, skills should be model-agnostic. A skill trained with OpenAI GPT Codex should be executable by Anthropic Claude or any other capable model. The skill describes what to do, not which brain to use.
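Skill formats vary by tool, but as one concrete illustration: Anthropic's Agent Skills convention stores each skill as a folder containing a `SKILL.md` file, with YAML frontmatter describing when to use it and a body describing the procedure. The skill name and steps below are hypothetical, sketched for a scaffolding skill like the one in this article:

```markdown
---
name: scaffold-clean-arch
description: Scaffold a Clean Architecture .NET solution with Domain,
  Application, Infrastructure, Web, and Tests projects, wired references,
  and baseline packages.
---

# Scaffold a Clean Architecture .NET Solution

1. Run `dotnet new sln` to create the solution file.
2. Create one project per layer (`classlib` for Domain/Application/
   Infrastructure, `webapi` for Web, `xunit` for Tests).
3. Add every project to the solution with `dotnet sln add`.
4. Add project references pointing inward only (Web -> Application,
   Application -> Domain); never the reverse.
5. Install the baseline NuGet packages for each layer.
6. Verify the structure before reporting success.
```

Because the file describes the procedure in plain language and CLI commands, any capable model can follow it, which is exactly the model-agnostic property described above.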
The Problem: Manual .NET Solution Setup Is Boring and Error-Prone
When you follow Clean Architecture for a .NET application, a typical solution contains at minimum:
- Domain — entities, value objects, domain events
- Application — use cases, interfaces, DTOs
- Infrastructure — data access, external services, implementations
- Web / API — controllers, middleware, presentation layer
- Tests — unit, integration, sometimes separated by project
Setting this up manually in Visual Studio or VS Code requires:
- Creating the solution file
- Adding each project individually
- Organizing projects into solution folders
- Configuring project-to-project references (in the right direction — no cheating on the dependency rule)
- Installing baseline NuGet packages per project
- Double-checking the architecture is actually enforced
For an experienced developer who's done this dozens of times, this takes 10–15 minutes. For someone newer to the team or the pattern, it can take much longer — and they're more likely to make a mistake that only surfaces later.
The other problem is consistency. In team environments, slight variations in structure — a differently named folder here, a missing abstraction there — compound over time into architectural drift. Every project should start from the same baseline, but enforcing that manually is friction nobody wants.
The Solution: Teaching the Agent Once, Running It Forever
Instead of repeatedly doing this setup by hand, I taught GPT Codex to do the entire thing automatically.
The process looked like this:
- Walk the agent through the setup step-by-step, explaining what each command does and why
- Correct mistakes as they arise (and there were a few — NuGet references, wrong project types)
- Repeat until the agent could execute the full workflow reliably from a single instruction
- Save it as an Agent Skill
The resulting skill handles the complete scaffold:
- `dotnet new sln` to create the solution
- `dotnet new classlib` / `dotnet new webapi` for the appropriate project types
- `dotnet sln add` to register each project
- `dotnet add reference` to wire up the dependency graph
- `dotnet add package` for baseline packages per project layer
- Verification that the architecture matches the intended structure
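Put together, the command sequence the skill executes looks roughly like the following. This is a sketch, not the author's exact script: `MyApp`, the layer names, and the package choice are placeholders, and `DRY_RUN=echo` makes the script print the commands instead of running them (set `DRY_RUN=` to execute for real with the .NET SDK installed).

```shell
#!/usr/bin/env sh
# Dry-run sketch of a Clean Architecture scaffold.
# Names are placeholders; set DRY_RUN= to actually invoke the dotnet CLI.
DRY_RUN=echo
NAME=MyApp

# 1. Create the solution file.
$DRY_RUN dotnet new sln -n "$NAME"

# 2. One project per layer.
$DRY_RUN dotnet new classlib -n "$NAME.Domain"
$DRY_RUN dotnet new classlib -n "$NAME.Application"
$DRY_RUN dotnet new classlib -n "$NAME.Infrastructure"
$DRY_RUN dotnet new webapi   -n "$NAME.Web"
$DRY_RUN dotnet new xunit    -n "$NAME.Tests"

# 3. Register every project in the solution.
for p in Domain Application Infrastructure Web Tests; do
  $DRY_RUN dotnet sln "$NAME.sln" add "$NAME.$p/$NAME.$p.csproj"
done

# 4. References point inward only -- the dependency rule.
$DRY_RUN dotnet add "$NAME.Application"    reference "$NAME.Domain"
$DRY_RUN dotnet add "$NAME.Infrastructure" reference "$NAME.Application"
$DRY_RUN dotnet add "$NAME.Web"            reference "$NAME.Application" "$NAME.Infrastructure"
$DRY_RUN dotnet add "$NAME.Tests"          reference "$NAME.Domain" "$NAME.Application"

# 5. Baseline packages per layer (example package, not a prescribed list).
$DRY_RUN dotnet add "$NAME.Infrastructure" package Microsoft.EntityFrameworkCore
```

Nothing here is exotic; the value is that the agent runs the whole sequence, in order, without skipping the reference-direction step.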
From that point on, creating a new Clean Architecture .NET solution is a single command. The agent doesn't ask clarifying questions — it executes the known procedure and hands back a ready-to-develop solution structure.
Why This Actually Matters
1. Time Savings Are Real, but Not the Main Point
Yes, 15 minutes down to ~30 seconds is a meaningful improvement. But the bigger win is cognitive load. When project setup is automated, developers enter a new codebase in a state of focus rather than fatigue. You start thinking about domain logic immediately, not worrying about whether you forgot to add a project reference.
2. Architectural Standards Become Self-Enforcing
This is the benefit that's hardest to see until you've experienced it. When a Skill encodes your team's architectural conventions, those conventions stop being documentation that people may or may not read — they become the default starting point.
Every project gets:
- The same folder structure
- The same naming conventions
- The same layer boundaries
- The same baseline dependencies
Architectural consistency is no longer a code review concern for the setup phase. It's guaranteed before the first commit.
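The verification step can itself be mechanical. As a minimal sketch of "check the dependency rule", the helper below greps a project file for `ProjectReference` entries and fails if any reference falls outside an allow-list; `assert_layer_refs` is a hypothetical helper, not part of the dotnet CLI.

```shell
#!/usr/bin/env sh
# Sketch: a layer's .csproj must reference only the layers it is allowed to see.
#   $1 = path to the .csproj, $2.. = allowed reference name fragments
assert_layer_refs() {
  csproj=$1; shift
  # Extract every ProjectReference Include="..." value from the project file.
  refs=$(grep -o 'ProjectReference Include="[^"]*"' "$csproj" | cut -d'"' -f2 || true)
  for ref in $refs; do
    ok=1
    for allowed in "$@"; do
      case $ref in *"$allowed"*) ok=0 ;; esac
    done
    [ "$ok" -eq 0 ] || { echo "forbidden reference in $csproj: $ref"; return 1; }
  done
  return 0
}

# Example usage (paths assume the scaffold described in this article):
#   assert_layer_refs MyApp.Domain/MyApp.Domain.csproj            # Domain: none
#   assert_layer_refs MyApp.Application/MyApp.Application.csproj Domain
```

A check like this, run as the skill's final step, turns "the architecture is enforced" from a claim into a tested property.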
3. Determinism Over Generation
Most discussions about AI in development focus on code generation — the AI suggests something, you evaluate it, maybe accept it, maybe tweak it. That's a probabilistic, creative process.
Agent Skills operate differently. They're deterministic procedures — the AI isn't generating something novel, it's executing a known workflow reliably. This makes them a different class of tool: more like scripting or automation than co-authoring.
For tasks that are well-defined and repetitive, deterministic execution is exactly what you want. You don't need the AI to be creative about how to scaffold a project — you need it to do it the same correct way every time.
4. Skills Compose Into Larger Workflows
This is where things get interesting from an architecture-of-AI-development perspective.
A scaffolding Skill isn't just useful on its own — it's a building block. In a Specification-Driven Development workflow, for example, an AI agent could:
- Read a system specification document
- Invoke the scaffolding Skill to create the solution structure
- Generate domain models from the specification
- Implement application services
- Wire up API endpoints
- Generate test stubs
The scaffolding Skill becomes Step 2 in a larger automated pipeline. Skills compose. And as the library of Skills grows, more complex workflows become achievable without manual intervention at each step.
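The composition idea above can be sketched as a plain pipeline, where each skill is one step and any failure halts the run. The skill names and the `run_skill` function are illustrative stand-ins for "agent, execute this skill", not a real CLI:

```shell
#!/usr/bin/env sh
# Hypothetical Specification-Driven Development pipeline built from skills.
set -e  # abort the whole pipeline if any step fails

run_skill() {
  # Stand-in for invoking an agent skill with the given inputs.
  echo "running skill: $*"
}

run_skill read-spec           spec.md
run_skill scaffold-clean-arch MyApp          # the skill from this article
run_skill generate-domain     spec.md MyApp.Domain
run_skill implement-services  spec.md MyApp.Application
run_skill wire-endpoints      MyApp.Web
run_skill generate-tests      MyApp.Tests
```

The interesting design property is that each step has a well-defined input and output, so steps can be re-taught or swapped independently as the skill library matures.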
What This Signals About Where AI Development Is Heading
The current state of AI coding assistance is largely about autocomplete at various levels of granularity — a line, a function, sometimes a file. Useful, but fundamentally reactive. The developer is still the one orchestrating everything.
Agent Skills push toward something more interesting: AI agents that own complete procedures, not just individual steps. The developer's role shifts from "person who executes the process" to "person who defines and refines the process, then delegates it."
This is a meaningful shift. It's not about AI replacing developers — it's about elevating the kinds of problems developers spend their time on. Less mechanical setup. More architecture, domain modeling, system design, and product thinking.
The developers who will thrive in this environment aren't the ones who resist automation — they're the ones who get good at teaching agents the right procedures, curating a library of reliable Skills, and composing them into increasingly sophisticated workflows.
See It in Action
If you want to see the full walkthrough — including the iterative process of teaching the agent and the final Skill execution — I've recorded a demo:
Agent Skills in VS Code – Using GPT Codex | Automatically Creating Project Structure
Final Thoughts
The era of spending 15 minutes on boilerplate project setup is over — if you're willing to invest a bit of time upfront teaching your AI agent the right way to do it.
Agent Skills aren't magic. They require iteration and correction to get right. But once they're working, they pay dividends on every project that follows.
The question worth asking isn't "can AI help me write code?" — that's already answered. The better question is: "what repetitive engineering procedures can I teach an AI to own entirely?"
Start there. The answers might surprise you.
Have you experimented with Agent Skills or similar automation in your development workflow? Drop a comment — I'd love to hear what procedures you've automated or are thinking about automating.