We are entering the golden age of AI coding. Every day, I see colleagues, both technical and non-technical, marveling at how agents are rewriting the rules of software construction. The promise is intoxicating: describe what you want, and let the machine handle the rest.

However, when I see my colleagues try to hold these agents to strict engineering standards, they hit a wall. On one side, you have rigorous specification tools like spec-kit or conductor.


They are deterministic and thorough, but setting them up feels like writing a legal contract. On the other side, you have generic tools like the Model Context Protocol (MCP). They act as incredible “hands” for the AI — reading databases, calling APIs — but they lack the brain for your specific context.


They don’t know that your team enforces uv over poetry. They don’t know you prefer justfiles for automation. They don’t know your specific flavor of “clean code.”


Then I discovered Agent Skills, and everything clicked.


I was immediately hooked. They offer the specific trade-off I had been looking for: lightweight enough to be flexible, yet opinionated enough to be useful.


The Challenge: Making References Actionable

For the past few months, I’ve been deep in the trenches writing the MLOps Coding Course. It is a comprehensive curriculum teaching production-grade MLOps, from robust project initialization to advanced observability.


I didn’t just want another wiki page. I wanted to make these best practices actionable for AI agents. I wanted to move from “reading the docs” to “installing the capability.”

The Logic: How to “Skillify” Knowledge

The beauty of an Agent Skill lies in its simplicity. It is essentially a markdown file (SKILL.md) that functions as a context injection module. It gives the agent “muscle memory” for a specific topic.


My methodology for building the MLOps Coding Skills repo was straightforward:

  1. Isolate a Chapter: Take a specific section of the course (e.g., Automation or Observability).
  2. Extract Patterns: Use an LLM to distill the generic engineering standards from the educational content.
  3. Standardize: Format it into a SKILL.md that an agent can ingest.
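The standardized output of step 3 follows the Agent Skills layout: a short YAML frontmatter with a name and description (which the agent reads to decide when to load the skill), followed by free-form markdown instructions. A minimal skeleton, with illustrative field values:

```markdown
---
name: mlops-automation
description: Enforce the team's automation standards (just, docker, CI/CD).
---

# MLOps Automation

## Goal

One sentence stating what the agent should achieve.

## Instructions

Numbered, opinionated steps: which tools to use, how to lay out files,
and what "done" looks like.
```

The frontmatter is what makes the file discoverable; the body is where your opinions live.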

A Concrete Example: Automating Ops

Let’s look at the mlops-automation skill.


In our course, we have strong opinions: we use just for command running and Docker for containerization, with very specific layer caching strategies.


Here is what the skill looks like “on the wire”:

# MLOps Automation

## Goal

To elevate the codebase to production standards by adding Task Automation (just), Containerization ([docker](https://www.docker.com/)), CI/CD ([github-actions](https://github.com/features/actions)), and Experiment Tracking ([mlflow](https://mlflow.org/)).

## Instructions

### 1. Task Automation

Replace manual commands with a `justfile`.
1. **Tool**: `just` (modern alternative to Make).
2. **Organization**: Split tasks into `tasks/*.just` modules.
3. **Core Tasks**:
- `check`: Run all linters and tests.
- `package`: Build wheels.

### 2. Containerization

1. **Tool**: `docker`.
2. **Base Image**: Use `ghcr.io/astral-sh/uv:python3.1X-bookworm-slim` for minimal size.
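To make the task-automation section concrete, the justfile layout it prescribes could look like this sketch (the module contents and recipe names beyond `check` and `package` are assumptions):

```just
# justfile at the repo root aggregates task modules, per the skill.
# 'tasks/check.just' is assumed to define `check` (lint + test);
# 'tasks/package.just' is assumed to define `package` (build wheels).
import 'tasks/check.just'
import 'tasks/package.just'

# listing recipes is a sensible default
default:
    just --list
```

With this in place, `just check` becomes the single quality gate, locally and in CI.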


When I load this skill, my agent stops guessing. It doesn’t offer me a Makefile. It doesn’t suggest a bloated Ubuntu image. It acts like a senior engineer who has been on the team for years.
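The “very specific layer caching strategies” mentioned earlier boil down to ordering Dockerfile layers from least to most frequently changed. A sketch, assuming a uv-managed project with a `pyproject.toml` and `uv.lock` (the Python tag and entry point are illustrative):

```dockerfile
# Pin the uv base image (exact Python tag is an assumption)
FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim

WORKDIR /app

# Dependencies first: this layer only rebuilds when the lockfile changes
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-install-project

# Source last: day-to-day code edits reuse the cached dependency layer
COPY src/ ./src
RUN uv sync --frozen

# Entry point is illustrative
CMD ["uv", "run", "python", "-m", "mlops_project"]
```

This is exactly the kind of ordering an agent gets wrong without context, and right every time with the skill loaded.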

The “Senior Engineer” Injection

This is the killer value proposition.


Most frustrations with AI coding come from a lack of context. We blame the model for being “dumb,” but usually, we just haven’t told it the rules of the house.


By using Agent Skills, you are effectively injecting a Senior Engineer into your chat context. You are giving the agent a “cheat sheet” that forces it to align with your organization’s reality.


I now use these skills for every new project I touch. I don’t spend an hour setting up boilerplate. I load or create a skill, and within minutes, I have a structure that matches my most rigorous standards.
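“Loading” a skill is just putting the SKILL.md where your agent looks for it. For Claude Code, the project-level location is `.claude/skills/<name>/SKILL.md` (other agents use different paths, so check your tool’s docs); here is a minimal sketch that installs a stub by hand:

```shell
# Create the project-level skills directory Claude Code scans
mkdir -p .claude/skills/mlops-automation

# Drop in a SKILL.md (content abbreviated; a real skill carries
# full instructions like the automation example earlier)
cat > .claude/skills/mlops-automation/SKILL.md <<'EOF'
---
name: mlops-automation
description: Enforce team automation standards (just, docker, CI/CD).
---
# MLOps Automation
Use just for task running and uv-based Docker images.
EOF
```

In practice you would copy the file from a shared skills repository instead of writing it inline.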

The Friction Points

Of course, no solution is perfect, and there are still rough edges in this workflow.

Conclusion

Despite the minor friction, Agent Skills are excellent “low-hanging fruit” for any engineering team.


The productivity gain is massive. For a few minutes of setup — writing a markdown file — you save hours of correcting boilerplate code and enforcing standards down the line. It bridges the gap between the rigidity of a spec and the chaos of a raw LLM.


If you are tired of fighting your AI to follow your style, stop arguing with it. Give it a Skill.


Check out the full MLOps Coding Skills repository to see the library in action.


Feature image made with Gemini