
Using Documentation as Effective AI Context

You just onboarded a new developer. They spend their first week asking the same questions: “How do I run the tests?” “What’s the deployment process?” “Why do we use this pattern instead of that one?” Now imagine that developer asks those same questions every single morning because they forget overnight.

That is what working with an AI coding assistant feels like without documentation-as-context. Every new session, the AI starts from zero. It does not know your build commands, your team’s conventions, or your architectural decisions. It rediscovers them by reading files — burning context tokens on information you could have told it in 10 lines.

This guide gives you:

  • A template for each tool’s configuration file (CLAUDE.md, .cursor/rules, AGENTS.md)
  • Guidelines for what to include and what to leave out
  • Prompts for bootstrapping documentation from an existing codebase
  • A strategy for keeping documentation current as the project evolves

Each tool has its own mechanism for persistent, session-level documentation. Despite different names, they serve the same purpose: giving the AI project-specific knowledge it cannot infer from code alone.

Project Rules live in .cursor/rules/ as .mdc files (markdown with optional frontmatter). They can be scoped by file pattern, applied always, or invoked manually.

```
.cursor/rules/
  code-style.mdc       # Always applied
  testing.mdc          # Applied to test files (via globs)
  api-conventions.mdc  # Agent-decided based on relevance
  deployment.mdc       # Manual invocation only
```

Cursor also supports User Rules (global preferences in Cursor Settings) and AGENTS.md as a simpler alternative to .cursor/rules.
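If you want a single file that works across tools, a minimal AGENTS.md might look like this (the contents are illustrative, following the include/exclude guidelines later in this guide):

```markdown
# Project Guide

## Commands
- Build: npm run build
- Test: npm test
- Lint: npm run lint

## Conventions
- TypeScript strict mode; avoid `any`
- All API routes live in src/routes/
```

Because it is plain markdown with no tool-specific syntax, the same file can be read by any assistant that supports the AGENTS.md convention.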

Key capabilities:

  • Glob-scoped rules: Apply only when working with matching files
  • Agent-decided rules: Applied when Cursor determines they are relevant
  • Team Rules: Organization-wide rules managed from the dashboard (Team/Enterprise plans)
  • Remote rules: Import rules from GitHub repositories that stay synced
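These activation modes are controlled through each rule’s frontmatter. A sketch of the mapping (field names follow Cursor’s .mdc rule format; the rule text itself is illustrative):

```markdown
---
description: "Testing conventions"  # read when the agent decides relevance
globs:
  - "**/*.test.ts"                  # glob-scoped: attached for matching files
alwaysApply: false                  # set true to include in every request
---
- Never mock the database in integration tests
```

Omitting both `globs` and `alwaysApply` leaves the rule available for manual invocation.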

The golden rule: if removing this line would cause the AI to make a mistake, keep it. If the AI already does this correctly without the line, delete it.

What to include:

| Category | Example |
| --- | --- |
| Build commands | `npm run build`, `make test`, `docker compose up` |
| Test commands | `npm test -- --testPathPattern=auth`, `pytest -x` |
| Code style rules that differ from defaults | “Use single quotes”, “2-space indentation” |
| Architectural patterns | “Repository pattern for data access”, “All API routes in `src/pages/api/`” |
| Non-obvious constraints | “Redis must be running for integration tests”, “Use pnpm, not npm” |
| Environment setup | “Run `cp .env.example .env` before first build” |
What to leave out:

| Category | Why |
| --- | --- |
| Standard language conventions | The AI already knows them |
| File-by-file descriptions | The AI can read the files |
| Long tutorials or explanations | Too much text causes the AI to ignore important rules |
| Information that changes frequently | It will become stale and mislead the AI |
| Self-evident practices | “Write clean code” adds nothing |

The difference between documentation that works and documentation the AI ignores comes down to specificity and brevity.

Vague (the AI will skim past this):

```markdown
# Code Quality
We care deeply about code quality. Always write clean, maintainable,
well-documented code that follows best practices. Make sure to handle
errors properly and write tests for your code.
```

Specific (the AI can act on this):

```markdown
# Code Style
- Use ES modules (import/export), not CommonJS (require)
- Prefer async/await over .then() chains
- Error responses: { error: string, code: number } shape

# Testing
- Run single tests with: npm test -- --testPathPattern=<name>
- Never mock the database in integration tests
- Test file location: src/**/__tests__/<name>.test.ts

# Workflow
- Run npm run type-check after making code changes
- NEVER commit to main directly. Always create a branch.
```

Split rules into focused files. Use frontmatter to control when they apply:

```markdown
---
description: "API endpoint conventions"
globs:
  - "src/api/**/*.ts"
  - "src/routes/**/*.ts"
---
# API Conventions
- All endpoints return { data: T } on success, { error: string } on failure
- Use Zod for request validation
- Include rate limiting middleware on all public endpoints
- Reference @src/api/users.ts as the canonical example
```

Documentation that falls out of date is worse than no documentation — it actively misleads the AI.

  1. Review monthly. Schedule a 10-minute review of your instruction files. Remove rules that no longer apply. Add rules for mistakes the AI keeps making.
  2. Treat it like code. Check instruction files into git. Review changes in PRs. Let the team contribute.
  3. Use the AI to maintain it. After a difficult debugging session, ask the AI to add a rule preventing the same issue next time.
  4. Watch for ignored rules. If the AI keeps violating a rule, the file is probably too long and the rule is getting lost. Prune aggressively.
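For point 3, the prompt can be as simple as this (wording illustrative):

```
We just lost an hour because the integration tests silently require Redis.
Add a one-line rule to CLAUDE.md so this doesn't happen again.
```

The AI that just lived through the debugging session has all the context needed to phrase the rule precisely.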

The file is too long and rules get ignored. This is the most common failure. Keep your main instruction file under 50 lines. Move detailed guidelines to topic-specific files (.claude/rules/, scoped .cursor/rules/). Use emphasis (“IMPORTANT:”, “YOU MUST”) for critical rules, but sparingly — if everything is important, nothing is.
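The line budget can also be enforced mechanically in CI. A sketch of such a check (the 50-line budget and file name are examples from this guide, not a standard):

```shell
#!/bin/sh
# check_rules.sh — fail the build when an instruction file exceeds a line budget.
check() {
  file=$1; budget=$2
  [ -f "$file" ] || return 0          # nothing to check if the file is absent
  lines=$(wc -l < "$file")
  if [ "$lines" -gt "$budget" ]; then
    echo "$file: $lines lines (budget $budget) — prune it" >&2
    return 1
  fi
}
check CLAUDE.md 50
```

Run it as a pre-commit hook or CI step so the file cannot quietly grow past the point where rules start getting ignored.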

Different team members add contradictory rules. Treat instruction files like code: review changes, resolve conflicts, keep one source of truth. In Claude Code, use the root CLAUDE.md for team-shared rules and CLAUDE.local.md for personal preferences.

The AI follows outdated rules. If your testing framework changed but your instruction file still references the old one, the AI will use the wrong commands and get confused. Audit regularly.

Too many rules files in a monorepo. With nested instruction files across packages, the combined context can exceed limits. In Codex, the project_doc_max_bytes setting (32 KiB default) caps the total. In Claude Code, only the first 200 lines of auto-memory MEMORY.md are loaded. Keep each file focused.
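If your monorepo legitimately needs more than the Codex default, the cap can be raised in the CLI’s config file (the value below is illustrative):

```toml
# ~/.codex/config.toml
# Raise the combined instruction-file budget from the 32 KiB default
project_doc_max_bytes = 65536
```

Prefer trimming the files first; raising the cap trades context-window headroom for documentation the AI may not need on every request.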