AI assistants don’t have “bad memory.” They have bad governance.

You’ve seen it.

This isn’t about intelligence. It’s about incentives.

In Claude Code, Skills are available context, not a hard constraint. If Claude “feels” it can answer without calling a Skill, it will. And your project handbook becomes decoration.

On our team, building a RuoYi-Plus codebase with Claude Code, we tracked it:

Without intervention, Claude proactively activated project Skills only ~25% of the time.

So 3 out of 4 times, your “rules” aren’t rules. They’re a suggestion.

We wanted something stricter: a mechanism that makes Claude behave less like a clever intern and more like a staff engineer who reads the playbook before writing code.

The fix wasn’t more prompting.

The fix was Hooks.

After ~1 month of iteration, we shipped a .claude configuration stack: Hooks, Skills, Commands, and Agents.

Result:

- Skill activation for dev tasks: ~25% → 90%+
- Less rule-violating code and fewer “please redo it” cycles
- Far fewer risky tool actions

This article breaks down the architecture so you can reproduce it in your own repo.


1) Why Claude Code “forgets”: Skills are opt-in by default

Claude Code’s default flow looks like this:

User prompt → Claude answers (maybe calls a Skill, maybe not)

That “maybe” is the problem.

Claude’s internal decision heuristic is, roughly: if the answer seems reachable without a Skill, skip the Skill.

So the system drifts toward convenience.

What you want is institutional friction: a lightweight “control plane” that runs before Claude starts reasoning, and shapes the work every time.


2) The breakthrough: a forced Skill evaluation hook

We implemented a hook that fires at the earliest moment: UserPromptSubmit.

It prints a short policy block that Claude sees before doing anything else.

The “forced eval” hook (core logic)

We keep it deliberately dumb and deterministic:

// .claude/hooks/skill-forced-eval.js (core idea, simplified)
// Claude Code delivers hook input as JSON on stdin; for
// UserPromptSubmit it includes the user's `prompt`.
const fs = require("node:fs");
const input = JSON.parse(fs.readFileSync(0, "utf8") || "{}");
const prompt = input.prompt ?? "";

// Escape hatch: if the user invoked a slash command, skip forced eval
const isSlash = /^\/[^\s/]+/.test(prompt.trim());
if (isSlash) process.exit(0);

const skills = [
  "crud-development",
  "api-development",
  "database-ops",
  "ui-pc",
  "ui-mobile",
  // ... keep going (we have 26)
];

const instructions = [
  "## Mandatory Skill Activation Protocol (MUST FOLLOW)",
  "",
  "### Step 1 — Evaluate",
  "For EACH skill, output: [skill] — Yes/No — Reason",
  "",
  "### Step 2 — Activate",
  "If ANY skill is Yes → call Skill(<name>) immediately.",
  "If ALL are No → state 'No skills needed' and continue.",
  "",
  "### Step 3 — Implement",
  "Only after Step 2 is done, start the actual solution.",
  "",
  "Available skills:",
  ...skills.map(s => `- ${s}`)
].join("\n");

// Whatever a UserPromptSubmit hook prints to stdout is added to Claude's context.
console.log(instructions);

What changes in practice

Before (no hook):

“Build coupon management.” Claude starts coding… and ignores your 4-layer architecture or banned components.

After (forced eval hook):

Claude must first produce an explicit decision table, then activate Skills, then implement.
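Concretely, the first thing Claude emits now looks something like this (illustrative, following the Step 1/Step 2 format above):

```
[crud-development] — Yes — coupon management is a standard CRUD module
[api-development] — Yes — new endpoints required
[ui-mobile] — No — no mobile UI requested
...
Activating: Skill(crud-development), Skill(api-development)
```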

The behavioral shift is dramatic because you’re eliminating “optional compliance.”


3) Why 90% and not 100%?

Because we intentionally added a fast path.

When a user knows what they want, typing a command like /crud b_coupon should be instant. So the hook skips evaluation for slash commands and lets the command workflow take over.
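The fast path is just a prompt-shape check. Here is the slash-command classifier on its own (same regex idea as the hook):

```javascript
// A prompt is a "slash command" if it starts with / followed by a command name.
const isSlash = (p) => /^\/[^\s/]+/.test(p.trim());

console.log(isSlash("/crud b_coupon"));           // true  → fast path, skip eval
console.log(isSlash("build coupon management"));  // false → forced evaluation
```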

That’s the tradeoff: forced evaluation for open-ended prompts, a fast path for explicit commands.


4) Hooks as a lifecycle control plane (the 4-hook stack)

Think of hooks as a CI pipeline for an agent session—except it runs live, in your terminal.

We use four key points:

4.1 SessionStart: “look at the repo before you talk”

When a session starts, we surface the repo’s current state before any conversation happens.

Example output:

🚀 Session started: RuoYi-Plus-Uniapp
Time: 2026-02-16 21:14
Branch: master

⚠️ Uncommitted changes: 5 files
📋 TODO: 3 open / 12 done

Shortcuts:
  /dev    build feature
  /crud   generate module
  /check  verify conventions

Why it matters: Claude stops acting like it’s entering a blank room.


4.2 UserPromptSubmit: forced Skill evaluation (discipline)

This is the “must read the handbook” gate.


4.3 PreToolUse: the safety layer (tool governance)

Claude Code is powerful because it can use tools: run Bash, write files, edit code.

That’s also how accidents happen.

PreToolUse is your last line of defense before something irreversible.

We block a small blacklist (and warn on a broader greylist):

// .claude/hooks/pre-tool-use.js (conceptual)
// PreToolUse hooks receive the pending tool call as JSON on stdin;
// for the Bash tool, tool_input.command holds the command string.
const fs = require("node:fs");
const input = JSON.parse(fs.readFileSync(0, "utf8") || "{}");
const cmd = input.tool_input?.command ?? "";

const hardBlock = [
  /rm\s+(-rf|--recursive).*\s+\//i,
  /drop\s+(database|table)\b/i,
  />\s*\/dev\/sd[a-z]\b/i,
];

if (hardBlock.some(p => p.test(cmd))) {
  console.log(JSON.stringify({
    decision: "block",
    reason: "Dangerous command pattern detected"
  }));
  process.exit(0);
}

// Optionally: warn/confirm on sensitive actions (mass deletes, chmod -R, etc.)

This isn’t paranoia. We’ve seen models “clean temp files” with rm -rf in the wrong directory. You want a guardrail that doesn’t rely on the model being careful.
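It helps to sanity-check the blacklist patterns offline before wiring them into the hook. A quick check against the same regexes:

```javascript
// The same hard-block patterns, exercised against sample commands.
const hardBlock = [
  /rm\s+(-rf|--recursive).*\s+\//i,
  /drop\s+(database|table)\b/i,
  />\s*\/dev\/sd[a-z]\b/i,
];
const isBlocked = (cmd) => hardBlock.some((p) => p.test(cmd));

console.log(isBlocked("rm -rf /opt/app"));   // true
console.log(isBlocked("DROP TABLE users"));  // true
console.log(isBlocked("git status"));        // false
```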


4.4 Stop: “close the loop” with next-step guidance

When Claude finishes, the Stop hook prints a completion summary plus next-step suggestions.

Example:

✅ Done — 8 files changed

Next steps:
- @code-reviewer for backend conventions
- SQL changed: sync migration scripts
- Consider: git commit -m "feat: coupon module"

The goal: eliminate the “it worked in the chat” gap.
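The suggestion logic can stay trivial. A sketch (suggestNextSteps is a hypothetical helper, not our exact hook):

```javascript
// Derive next-step suggestions from the list of changed files.
function suggestNextSteps(changedFiles) {
  const steps = ["@code-reviewer for backend conventions"];
  if (changedFiles.some((f) => f.endsWith(".sql"))) {
    steps.push("SQL changed: sync migration scripts");
  }
  return steps;
}

const changed = ["CouponController.java", "coupon_menu.sql"];
console.log(`✅ Done — ${changed.length} files changed`);
for (const s of suggestNextSteps(changed)) console.log(`- ${s}`);
```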


5) Skills: the knowledge layer you can actually enforce

Once activation is deterministic, Skills become what they were supposed to be: a domain-specific knowledge base.

We built 26 Skills spanning CRUD, API, database, and UI (PC and mobile) development.

A Skill file structure that scales

Every SKILL.md follows the same skeleton:

# Skill Name

## When to trigger
- Keywords:
- Scenarios:

## Core rules
### Rule 1
Explanation + example

### Rule 2
Explanation + example

## Forbidden
- ❌ ...

## Reference code
- path/to/file

## Checklist
- [ ] ...

This consistency matters because the model learns how to consume Skills.


6) Commands: workflows, not suggestions

Skills solve “what is correct.”

Commands solve “what is the process.”

6.1 /dev: a 7-step development pipeline

We designed /dev as an opinionated workflow:

  1. clarify requirements
  2. design (API + modules + permissions)
  3. DB design (SQL + dictionaries + menus)
  4. backend (4-layer output)
  5. frontend (component policy + routing)
  6. test/verify
  7. docs/progress update

It’s basically: “how seniors want juniors to work” encoded as a runnable script.
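In Claude Code, a custom slash command is just a markdown file under .claude/commands/ (the filename becomes the command name, and $ARGUMENTS is the placeholder for the user's input). A minimal sketch of /dev with the pipeline encoded as instructions — the wording is illustrative, not our exact file:

```markdown
---
description: Opinionated 7-step feature development pipeline
---
Build the feature described in: $ARGUMENTS

Follow the pipeline strictly, in order:
1. Clarify requirements before writing any code.
2. Design the API, modules, and permissions.
3. Design the DB: SQL, dictionaries, menus.
4. Backend: 4-layer output only.
5. Frontend: respect the component policy and routing rules.
6. Test and verify.
7. Update docs and progress tracking.
```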

6.2 /crud: generate a full module from a table

Input:

/crud b_coupon

Output: the full module, generated end to end.

Manual effort: 2–4 hours
Command-driven: 5–10 minutes (plus review)

6.3 /check: full-stack convention linting (human-readable)

This is where we turn Skills into a verifier: /check audits the codebase against the same conventions and reports violations in plain language.


7) Agents: parallel specialists (don’t overload the main thread)

Some tasks, such as code review, are better handled by a dedicated subagent.

The advantage isn’t “more intelligence.” It’s separation of concerns and reduced context pollution in the main session.

A practical pattern:

  1. main agent generates code
  2. Stop hook suggests review
  3. you invoke @code-reviewer
  4. reviewer returns a structured diff + fixes list
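Subagents are defined as markdown files under .claude/agents/ with YAML frontmatter (name, description, tools are Claude Code's subagent fields). A sketch of a code-reviewer definition — the prompt body is illustrative:

```markdown
---
name: code-reviewer
description: Reviews generated code against project conventions. Use after code generation.
tools: Read, Grep, Glob
---
You are a strict reviewer for this repo. Compare the latest diff against the
project Skills' rules and return a structured list of violations with fixes.
```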

8) The full system: governance + knowledge + process + division of labor

This is the architecture in one sentence:

Hooks enforce behavior, Skills provide standards, Commands encode workflows, Agents handle parallel expertise.

And yes—this is how you turn a “general AI assistant” into a “repo-native teammate.”


9) Minimal reproducible setup (copy-paste friendly)

If you want the smallest version that still works, build this:

.claude/
  settings.json
  hooks/
    skill-forced-eval.js
    pre-tool-use.js
  skills/
    crud-development/
      SKILL.md

Minimal settings.json (UserPromptSubmit hook)

{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "node .claude/hooks/skill-forced-eval.js"
          }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "node .claude/hooks/pre-tool-use.js"
          }
        ]
      }
    ]
  }
}

Then iterate from there: add Skills as conventions accumulate, and tighten hooks as you hit new failure modes.


Closing: “ability” isn’t the bottleneck — activation is

Your model is already capable.

What’s missing is a system that makes the right behavior automatic.

A smart new hire without a handbook will freestyle. Give that same hire a handbook, a defined process, and guardrails, and they become consistent fast.

Claude is the same.