# How to write a CLAUDE.md that actually controls your AI coding agent
A practical guide to writing CLAUDE.md and AGENTS.md files that give Claude Code, Cursor, and Codex the context they need to write correct code the first time.
Every time you start a new session with Claude Code, the agent reads your CLAUDE.md file before it touches a single line of code. That file is its operating manual — its understanding of your codebase, your conventions, and what "done" looks like.
Most developers write one in ten minutes and never revisit it. This post is about writing one that actually works: a CLAUDE.md that cuts hallucinated code, reduces revision cycles, and keeps the agent on the right path even in long sessions.
The same principles apply to AGENTS.md, .cursorrules, and any other persistent instruction file your AI coding environment reads at startup.
## Why your current CLAUDE.md probably isn't working
The average CLAUDE.md looks like this:
```markdown
This is a Next.js project using TypeScript and Tailwind.
Use pnpm. Don't use `any` and fix lint errors.
```
That's not an operating manual — that's a Post-it note. It tells the agent almost nothing about:
- What the project does (and therefore what code should accomplish)
- How your data flows from the database to the UI
- What patterns are established and should be followed
- What mistakes it keeps making that you keep correcting
- What's in scope for the current work
When an agent is missing this context, it fills in the gaps with its training data — which means generic code that may not match your architecture, your naming conventions, or your actual requirements.
The result: you spend more time correcting the agent than you would have spent writing the code yourself.
## The anatomy of an effective CLAUDE.md
A well-structured CLAUDE.md has five sections. None of them need to be long — but each one prevents a specific class of mistake.
### 1. Project identity (2-3 sentences)
Tell the agent what this project is, who uses it, and what problem it solves. This isn't for documentation — it's so the agent can make reasonable inferences when your spec doesn't cover an edge case.
```markdown
# My Project

ClearSpec is a B2B SaaS tool that transforms product descriptions into
structured specifications for AI coding agents. Users are PMs and
founders without engineering backgrounds. The core value is reducing
the gap between intent and buildable output.
```
With this context, the agent knows that "save a spec" means persisting structured data to a database, not writing to a local file. It knows that error messages should be written for non-technical users. It knows that the output format should be parseable by another AI agent.
Without it, the agent guesses — and it often guesses wrong.
### 2. Tech stack and key constraints
List your stack, your versions (if they matter), and any footguns. If your version of a framework has breaking changes from what the agent was trained on, say so explicitly.
```markdown
## Stack
- Next.js 16 (breaking changes from 14/15 — read node_modules/next/dist/docs/ before writing any code)
- React 19, TypeScript strict mode, Tailwind 4
- Supabase (Postgres + Auth + Storage)
- pnpm (not npm, not yarn)

## Hard constraints
- RLS is enabled on all tables — every query must include a user_id filter
- No `any` in TypeScript — use `unknown` and narrow
- `useSearchParams()` requires a Suspense boundary in this Next.js version
```
The "hard constraints" section is the most valuable part of a CLAUDE.md. It's the distilled lessons from every time you've had to correct the agent — turned into standing instructions so you never have to correct it again.
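To make a constraint like "no `any`, use `unknown` and narrow" concrete for the agent, it helps to pair it with a small code example. A minimal sketch of that narrowing pattern, using a hypothetical `SpecRow` shape (the type and helper names here are illustrative, not from any real codebase):

```typescript
// Hypothetical shape of a payload we expect from an external source.
interface SpecRow {
  id: string;
  title: string;
}

// Type guard: checks the runtime shape instead of casting to `any`.
function isSpecRow(value: unknown): value is SpecRow {
  if (typeof value !== "object" || value === null) return false;
  const record = value as Record<string, unknown>;
  return typeof record.id === "string" && typeof record.title === "string";
}

// Accepts `unknown` and narrows it; callers never see `any`.
function parseSpecRow(payload: unknown): SpecRow {
  if (!isSpecRow(payload)) {
    throw new Error("Payload does not match the expected spec shape");
  }
  return payload; // narrowed to SpecRow here
}
```

Dropping a snippet like this into the constraints section gives the agent a pattern to copy rather than a rule to interpret.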
### 3. Code patterns (show, don't tell)
The most common CLAUDE.md mistake is describing conventions in prose. The agent is a code model — it learns patterns from code examples far more reliably than from natural language descriptions.
Instead of: "Use the repository pattern for data access"
Write this:
````markdown
## Data access pattern

Always use this pattern for server-side data access:

```typescript
// ✓ CORRECT
import { createClient } from "@/lib/supabase/server";

export async function getSpec(id: string, userId: string) {
  const supabase = await createClient();
  const { data, error } = await supabase
    .from("specs")
    .select("*")
    .eq("id", id)
    .eq("user_id", userId)
    .single();
  if (error) throw error;
  return rowToSpec(data);
}

// ✗ WRONG — never import from supabase/client in server components
import { createClient } from "@/lib/supabase/client";
```
````
Showing both the correct and incorrect pattern prevents the agent from confidently producing the wrong one.
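The example references a `rowToSpec` helper without defining it. If your CLAUDE.md mentions a mapper like that, it is worth showing it too, so the agent follows the same row-to-domain translation everywhere. A minimal sketch, with entirely hypothetical field names:

```typescript
// Hypothetical database row shape (snake_case, as Postgres returns it).
interface SpecDbRow {
  id: string;
  user_id: string;
  created_at: string;
  title: string;
}

// Hypothetical domain shape used by the rest of the app.
interface Spec {
  id: string;
  userId: string;
  createdAt: Date;
  title: string;
}

// The single place where rows become domain objects: a column rename
// only ever touches this function.
function rowToSpec(row: SpecDbRow): Spec {
  return {
    id: row.id,
    userId: row.user_id,
    createdAt: new Date(row.created_at),
    title: row.title,
  };
}
```

Keeping the mapper in the pattern example means the agent will route new queries through it instead of inventing ad hoc conversions.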
### 4. What's out of scope
This is the most underused section. When a spec doesn't cover something, the agent will implement it anyway — and it will implement the version that seems reasonable to it, which is often not what you want.
An explicit out-of-scope list tells the agent to stop and ask rather than guess:
```markdown
## Out of scope (don't implement without asking)
- Email notifications — no email service configured
- OAuth integrations with Linear, Jira, or Slack — env vars not set
- Any payment or subscription changes — Stripe config is locked
- Internationalization for new strings — flag for manual translation
```
### 5. Operational notes
Things the agent needs to know to run your project, not just write code for it:
```markdown
## Commands
- Build: `export PATH="/opt/homebrew/bin:$PATH" && npx next build`
- Node: /opt/homebrew/bin/node (not system node)
- No test suite yet — build pass is the primary validation

## Known issues
- The `@/lib/analytics.ts` import fails in jest — skip analytics calls in tests
```
## The CLAUDE.md maintenance loop
A CLAUDE.md is only valuable if it's current. The maintenance rule is simple: every time you correct the agent, update the file.
The correction loop:

1. The agent produces wrong output.
2. You correct it.
3. You ask yourself: "Why did it produce this? What did it not know?"
4. You add that missing knowledge to CLAUDE.md as a hard constraint or pattern example.
5. The agent never makes that mistake again (in this project).
This is the difference between a CLAUDE.md that's a month old and useless, and one that gets progressively better at representing your project.
## AGENTS.md: the same thing, differently located
AGENTS.md is a convention used by some agent frameworks (and Claude Code supports it) to scope instructions to a specific directory. Where CLAUDE.md applies to the entire project, you can put an AGENTS.md inside a subdirectory to give the agent specialized context for that area:
```
/src/app/api/AGENTS.md     ← API-specific patterns (auth, validation, response format)
/src/components/AGENTS.md  ← Component patterns (naming, props, styling conventions)
/src/lib/AGENTS.md         ← Utility patterns (pure functions, no side effects)
```
This keeps your root CLAUDE.md focused on project-wide context, while subdirectory files handle the details that are only relevant to specific areas.
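The layering can be pictured as a root-to-leaf lookup: the root CLAUDE.md always applies, and each ancestor directory of the file being edited may contribute an AGENTS.md. This is a conceptual sketch of that resolution, not Claude Code's actual implementation; a real lookup would also filter to files that exist on disk:

```typescript
// Given a project-relative file path, list the instruction files that
// would apply to it, from most general to most specific.
function applicableInstructionFiles(filePath: string): string[] {
  const files = ["CLAUDE.md"]; // the root file always applies
  const parts = filePath.split("/").slice(0, -1); // drop the file name
  let dir = "";
  for (const part of parts) {
    dir = dir ? `${dir}/${part}` : part;
    files.push(`${dir}/AGENTS.md`);
  }
  return files;
}

// applicableInstructionFiles("src/app/api/route.ts")
// → ["CLAUDE.md", "src/AGENTS.md", "src/app/AGENTS.md", "src/app/api/AGENTS.md"]
```

The practical takeaway: the deeper the directory, the more specific the instructions an edit there can inherit, so scope each file to exactly the concerns of its directory.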
## From CLAUDE.md to per-feature specs
A well-written CLAUDE.md gives the agent its operating environment. But it doesn't replace a feature spec — it complements one.
The difference:

- `CLAUDE.md` answers: *How do we build things here?*
- A feature spec answers: *What specifically are we building right now?*
When the agent receives both — a solid CLAUDE.md for project context plus a precise spec for the current feature — the quality of the first draft jumps significantly. The agent knows the patterns, knows the constraints, and knows exactly what done looks like.
This is the workflow ClearSpec is built for: you describe what you want to build, and ClearSpec generates a structured spec in the format that works best as agent input. Pair it with a strong CLAUDE.md and the first pass is usually the last pass.
## A minimal CLAUDE.md to start from
Here's the shortest version that covers everything an agent actually needs:
````markdown
# [Project Name]

[One sentence: what the project does and who uses it.]

## Stack
- [Framework + version]
- [Database + ORM]
- [Auth]
- [Package manager]

## Hard constraints
- [Constraint 1 — the thing the agent keeps getting wrong]
- [Constraint 2]
- [What's out of scope — don't implement without asking]

## Data access pattern
```[language]
// Show the correct pattern with a real example from your codebase
```

## Commands
- Build: [command]
- Test: [command]
````
Paste this into your project root, fill in the sections, and update it every time you have to correct the agent.
Building something with Claude Code, Cursor, or Codex and want a feature spec that works out of the box? ClearSpec generates structured specs in under two minutes — compatible with any AI coding agent.