
From Plan to Working Code

You have a plan. Maybe it is a formal spec you wrote in the last session. Maybe it is bullet points in a Notion doc. Maybe it is a conversation with your tech lead that ended with “sounds good, go build it.” The plan is the easy part. The hard part is turning it into working code that touches twelve files, respects existing patterns, passes tests, and does not break the features your teammates shipped last week.

This is where most developers either lose hours to context switching or hand Claude a vague prompt and end up with code that ignores the project’s conventions. This lesson covers the implementation workflow that avoids both traps: explore first, plan the specific changes, implement in reviewable chunks, and commit at each checkpoint.

  • The explore-plan-implement-commit cycle that keeps multi-file changes on track
  • Prompts that produce code matching your project’s existing patterns
  • Techniques for breaking large implementations into reviewable commits
  • The checkpoint pattern that catches mistakes before they compound

The developers who ship the fastest with Claude Code are not the ones who type “implement the feature” and accept everything. They follow a tight loop: explore the area they are about to change, plan the specific edits, implement one logical chunk, verify it works, and commit before moving on.

  1. Explore the relevant code

    Even if you planned the feature in a previous session, start by grounding Claude in the specific files that will change. Context from a planning session does not carry over perfectly — Claude needs to see the actual code.

    Read these files and summarize the patterns I need to follow:
    - src/services/user.service.ts
    - src/routes/user.routes.ts
    - src/schemas/user.schema.ts
    - tests/services/user.service.test.ts
    I'm about to add an organization service that follows the same
    patterns. Tell me: naming conventions, error handling approach,
    how validation is structured, and how tests are organized.
  2. Plan the specific changes

    Before Claude touches any files, have it describe exactly what it will do. This is your review checkpoint.

    Review the plan. If Claude wants to create a file in the wrong directory, introduce a new pattern, or skip tests, catch it now. Corrections at this stage cost nothing.

  3. Implement one logical chunk

    Do not ask Claude to implement the entire feature at once. Break it into chunks that make sense as individual commits.

    Start with the data layer: create the organization schema,
    the database migration, and the service with CRUD operations.
    Follow the exact patterns from user.service.ts. Include tests.
    Do not touch routes or middleware yet -- we'll do that next.
  4. Verify before committing

    After each chunk, have Claude run the tests and check for issues.

    Run the tests for the organization service. Also run the existing
    user service tests to make sure nothing is broken. Show me the
    test output.
  5. Commit the checkpoint

    All tests pass. Commit with a message describing what was added.
    Keep the message under 72 characters for the subject line.
  6. Repeat for the next chunk

    Move to routes, then middleware, then frontend integration. Each chunk gets its own verify-and-commit cycle.
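
One pass through steps 3-5 might produce a chunk like the following. This is a hedged sketch: OrganizationService, the in-memory Map store, and NotFoundError are hypothetical stand-ins for whatever conventions user.service.ts actually establishes in your project.

```typescript
// Hypothetical data-layer chunk from step 3. An in-memory Map stands in
// for the real database so the sketch runs on its own.
interface Organization {
  id: string;
  name: string;
}

class NotFoundError extends Error {
  constructor(entity: string, id: string) {
    super(`${entity} ${id} not found`);
    this.name = "NotFoundError";
  }
}

class OrganizationService {
  private store = new Map<string, Organization>();

  create(name: string): Organization {
    const org: Organization = { id: `org_${this.store.size + 1}`, name };
    this.store.set(org.id, org);
    return org;
  }

  getById(id: string): Organization {
    const org = this.store.get(id);
    if (!org) throw new NotFoundError("Organization", id);
    return org;
  }

  delete(id: string): void {
    this.getById(id); // reuse the lookup so missing ids fail loudly
    this.store.delete(id);
  }
}
```

Because the chunk is small and self-verifying, the commit that follows it is easy to review now and easy to bisect later.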

The most common failure in AI-assisted implementation is generated code that works but does not look like the rest of the codebase. The fix is explicit pattern matching.

When Claude follows existing patterns, your code reviews become faster because reviewers are looking at familiar structures. The diff tells the story of what changed, not how Claude’s preferred style differs from yours.

Large features inevitably touch many files. The key is ordering the changes so each step builds on solid ground.

Implement in this order to minimize breakage:

  1. Types and interfaces — Define the data shapes first
  2. Database layer — Migrations, schema, queries
  3. Service/business logic — Core operations that depend on the database
  4. API routes — Thin layer that calls services
  5. Frontend components — Consume the API
  6. Tests at each layer — Written alongside the code, not after

We're implementing multi-tenant support. Let's work through it
in dependency order. Start with the types:
1. Create src/types/tenant.ts with the Tenant interface
2. Add tenant_id to the User interface in src/types/user.ts
3. Create the Zod schemas for tenant validation
Show me each file. Don't proceed to the database layer until
I confirm these are correct.
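
The output of step 1 might look roughly like this. It is a sketch under assumptions: the field names are invented, and a dependency-free type guard stands in for the Zod schemas the prompt asks for, so the example runs without installing anything.

```typescript
// Hypothetical src/types/tenant.ts output. Field names are illustrative.
interface Tenant {
  id: string;
  name: string;
  createdAt: string; // ISO timestamp
}

// Step 2 of the prompt: User gains a tenant_id (shape invented for the sketch).
interface User {
  id: string;
  email: string;
  tenant_id: string;
}

// Stand-in for the Zod schema: a plain type guard with the same intent.
function isValidTenant(input: unknown): input is Tenant {
  if (typeof input !== "object" || input === null) return false;
  const t = input as Record<string, unknown>;
  return (
    typeof t.id === "string" &&
    typeof t.name === "string" &&
    t.name.length > 0 &&
    typeof t.createdAt === "string" &&
    !Number.isNaN(Date.parse(t.createdAt))
  );
}
```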

Using sub-agents for parallel implementation


When parts of a feature are independent, delegate them to sub-agents to keep the work moving.

Use sub-agents to implement these in parallel:
1. Create the tenant database migration and Drizzle schema
2. Create the tenant service with CRUD operations
3. Create the tenant API route handler
Each sub-agent should read the existing user.* files first
to match our patterns. Report the results so I can review
before we integrate them.

Sub-agents are especially useful for generating tests alongside implementation code. While one sub-agent writes the service, another can write the test file based on the same patterns.

Every implementation session should produce commits that tell a coherent story. If something breaks later, you can bisect the commits to find exactly when the bug was introduced.

After implementing each layer, create a commit:
1. "Add tenant types and validation schemas"
2. "Add tenant database migration and Drizzle schema"
3. "Add tenant service with CRUD operations and tests"
4. "Add tenant API routes with auth middleware"
5. "Add tenant switcher component to dashboard"
Each commit should pass all tests independently. If a commit
would leave tests failing, the chunk is too big -- break it
down further.

Claude Code can run your test suite after every change. This is not optional — it is the safety net that lets you move fast without breaking things.

After every file you create or modify, run:
1. The specific test file for the module you changed
2. The full test suite to check for regressions
If a test fails, fix it before moving to the next file.
Do not accumulate broken tests and fix them later.

For projects with slow test suites, scope the test runs:

Run only the tests related to the tenant module:
npm test -- --grep "tenant"
We'll run the full suite before committing.

Set up hooks so Claude’s output is automatically validated:

.claude/settings.json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "command": "npx eslint --fix $FILE_PATH && npx tsc --noEmit"
      }
    ]
  }
}

With this configuration, every file Claude creates or edits is automatically linted and type-checked. Issues are caught immediately, not at the end of a long implementation session.

Complex features span multiple sessions. Name your sessions and use artifacts to maintain continuity.

# Name your implementation session
/rename tenant-implementation
# At the end of a session, save progress

Before ending a session, have Claude write a checkpoint:

Write a brief status update to docs/plans/tenant-progress.md:
- What's been implemented and committed
- What's next
- Any decisions that were made during implementation
- Any blockers or open questions

When you start a new session:

Read docs/plans/tenant-progress.md and docs/specs/tenant-spec.md.
Also check git log --oneline -10 to see recent commits.
Pick up where we left off. What's the next chunk to implement?

Claude generates code that does not match your patterns. You did not show it enough examples. Before implementing, always have Claude read at least two existing files that follow the pattern you want. If it still deviates, be explicit: “Use the exact same error handling as user.service.ts lines 45-60.”

Tests pass but the feature does not work end-to-end. Your test chunks are too isolated. Add an integration test after completing each layer that exercises the full stack for at least one happy path.
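
A minimal version of such an integration test can wire the layers together directly. Everything here is a stand-in: a plain function for the route handler, object literals for the service and repository, a Map for the database.

```typescript
interface Tenant {
  id: string;
  name: string;
}

// Repository layer: an in-memory Map standing in for the database.
const rows = new Map<string, Tenant>();
const repo = {
  insert(t: Tenant): Tenant { rows.set(t.id, t); return t; },
  find(id: string): Tenant | null { return rows.get(id) ?? null; },
};

// Service layer.
const tenantService = {
  create(name: string): Tenant {
    return repo.insert({ id: `t_${rows.size + 1}`, name });
  },
  get(id: string): Tenant | null {
    return repo.find(id);
  },
};

// Route layer: a plain function standing in for the HTTP handler.
function postTenantHandler(body: { name?: string }) {
  if (!body.name) return { status: 400, tenant: null };
  return { status: 201, tenant: tenantService.create(body.name) };
}

// One request flows through every layer: route -> service -> repository.
const res = postTenantHandler({ name: "Acme" });
```

If each layer's unit tests pass but this check fails, the seams between layers are where the bug lives.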

Implementation diverges from the plan. This happens when Claude discovers something during implementation that the plan did not account for. When it happens, pause: “Stop implementing. You just found something the plan didn’t cover. Explain the issue and propose an update to the plan before continuing.”

Context fills up during a long implementation. Run /compact with instructions appended, for example: "/compact Focus on the tenant implementation. Keep the current file list, patterns, and progress. Drop exploration context." Or, if you have been committing at checkpoints, start a fresh session — your commits preserve all the progress.

A commit breaks tests for an unrelated module. Your change touched a shared utility or type. Before continuing, fix the regression. Then add a note to your implementation plan: “Shared module X is fragile. Changes here require running the full test suite.”

Your feature is implemented and committed. Now make sure it is tested beyond the happy path and reviewed before it ships.