Managing Large Codebases

Working in a large codebase—hundreds of thousands or even millions of lines of code—presents a unique set of challenges for any developer, human or AI. The sheer volume of information can be overwhelming. However, by adopting specific strategies for managing context, you can successfully leverage your AI assistant as a powerful partner in even the most complex projects.

This guide covers the essential techniques for scaling your AI-assisted development to large codebases.

Codebase Indexing

For large projects, codebase indexing is not just a feature; it’s a necessity. This process creates a semantic map (vector embeddings) of your entire project, allowing the AI to perform intelligent searches and understand the relationships between different parts of the code, even those not explicitly in the context window.

  • How it Works: When you ask a question like, “Where is our payment processing logic handled?”, the AI uses the index to find the most relevant code snippets from across the entire project, even if the files are not open.
  • Why it’s Critical: Without indexing, the AI is effectively blind to any code outside of the immediate files you provide. With indexing, it has a “photographic memory” of your whole codebase, making it possible to reason about system-wide changes and complex dependencies. The sketch below shows the core idea.
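
To make this concrete, here is a minimal sketch of the kind of lookup an index enables, assuming a hypothetical embed() function that turns text into a vector and a pre-built list of indexed chunks; real indexers layer chunking, caching, and approximate nearest-neighbor search on top of this idea.

```ts
// A simplified sketch of semantic code search over an embedding index.
// `embed` is a hypothetical call to whatever embedding model the tool uses.
declare function embed(text: string): Promise<number[]>;

type IndexedChunk = { path: string; snippet: string; vector: number[] };

// Cosine similarity: how closely two embedding vectors point in the same direction.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank every indexed chunk against a natural-language query and return the
// best matches, whether or not their files are open in the editor.
async function searchIndex(query: string, index: IndexedChunk[], topK = 5): Promise<IndexedChunk[]> {
  const queryVector = await embed(query);
  return index
    .map((chunk) => ({ chunk, score: cosine(queryVector, chunk.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((ranked) => ranked.chunk);
}
```

A question like “Where is our payment processing logic handled?” is embedded the same way and matched against every chunk, which is why the AI can surface relevant files you never opened.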

Keep Your Context Focused

Even with a comprehensive index, the quality of your prompts matters. The context window is a finite and valuable resource. Use it wisely.

Prefer Symbols over Files

When possible, reference a specific function or class (@MyClassName) instead of an entire file (@my-class.ts). This provides the AI with a more focused and less noisy context.

Start Small and Expand

Begin a task by providing only the most critical one or two files as context. You can always add more files to the conversation later if the AI needs more information. This prevents the AI from getting bogged down in irrelevant details upfront.


Decompose Large Tasks

Never ask the AI to perform huge, open-ended tasks like “refactor the entire authentication system.” This will overwhelm the model and lead to poor results. Instead, break down large problems into a series of smaller, well-defined sub-tasks.

  1. Generate a Plan. Use “Ask” mode to create a high-level plan. For a large migration, you might ask: “Generate a markdown checklist of all the files that need to be updated for the React 19 upgrade.”

  2. Execute Step-by-Step. Switch to “Agent” mode and tackle the checklist one item at a time: “Using the checklist, please update the first file, Button.tsx, to use the new `use` hook.” (A sketch of this change appears after the list.)

  3. Verify and Iterate. After each step, run your tests and verify the changes. This incremental approach makes the process manageable, reviewable, and far more reliable than a single “big bang” change.
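
As an illustration of step 2, here is a minimal sketch of what the Button.tsx edit might look like, assuming a hypothetical ThemeContext; the point is the shape of one small, reviewable change, not the specific API.

```tsx
// Button.tsx: a sketch of one checklist item. ThemeContext is hypothetical.
import { use, createContext } from "react";

const ThemeContext = createContext({ buttonClass: "btn-default" });

export function Button({ label }: { label: string }) {
  // Before (React 18): const theme = useContext(ThemeContext);
  // After (React 19): `use` reads the context directly.
  const theme = use(ThemeContext);
  return <button className={theme.buttonClass}>{label}</button>;
}
```

After a change this size, running the tests (step 3) takes seconds, and the diff is easy to review before moving to the next checklist item.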


Manage Your Conversation History

In a long and complex work session, the conversation history itself can become a source of outdated or irrelevant context.

  • Use /clear: When you’re switching from one distinct task to another (e.g., from fixing a bug in the backend to working on a new UI component), use the /clear command to reset the conversation and start with a clean slate.
  • Start New Chats for New Tasks: A simple but powerful habit is to start a brand new chat for each new feature or significant bug fix. This ensures the AI’s context is always tightly focused on the job at hand.

By combining the power of codebase-wide indexing with the discipline of focused context and task decomposition, you can effectively overcome the challenges of scale and make your AI assistant an indispensable partner in any project, no matter its size.