Skip to content

Skills Optimization and Team Sharing

You installed twelve skills. The AI now has so much context about React, Next.js, Tailwind, testing, Git conventions, documentation, accessibility, security, performance, Drizzle, Docker, and code review that it cannot prioritize any of them. Every component it generates includes exhaustive JSDoc comments, accessibility attributes, performance optimizations, security checks, and test stubs — even when you just wanted a quick utility function. The AI is drowning in instructions and overengineering everything.

More skills is not better. The right skills, well-organized, with clear priorities, is the setup that actually makes you faster.

  • A skill selection strategy based on impact and context budget
  • Techniques for avoiding and resolving conflicts between skills
  • Team-wide standardization patterns that keep AI behavior consistent across developers
  • Version pinning strategies for stability in production codebases
  • Security considerations when installing third-party skills
  • Monitoring and measuring skill impact on AI performance

Every skill consumes tokens from your context window. The context window is finite. If skills take up 40% of the available context, you have 60% left for your actual conversation, file contents, and tool outputs.

Target: 3-5 skills for daily development. This leaves ample context for the work itself. You can add specialized skills temporarily for specific tasks (security audit, accessibility review) and remove them when done.

Rank potential skills by how often the AI gets that particular thing wrong without the skill:

PriorityCategoryInstall WhenExample
Always onProject conventionsEvery sessionYour team’s coding patterns
Always onPrimary frameworkEvery sessionReact/Next.js/Vue best practices
Session-specificSecondary frameworkWorking on that subsystemDatabase patterns, DevOps conventions
Task-specificSpecialized concernDoing that specific taskSecurity audit, accessibility review, performance optimization

Skill conflicts happen when two skills give contradictory instructions. The AI has no built-in priority system — it tries to follow all instructions and produces inconsistent output.

Two framework skills disagree on exports. A React skill says “use default exports for components” while your project skill says “never use default exports.” The AI alternates between them unpredictably.

A general skill contradicts a specific skill. A TypeScript skill says “avoid classes, prefer functions” while your project skill says “use classes for services with constructor injection.”

An outdated skill conflicts with a current one. A skill referencing React 17 patterns (like ReactDOM.render) conflicts with a React 19 skill using createRoot.

Remove the less authoritative skill. If two skills cover the same topic, keep the one that matches your current stack. Remove the other.

Add explicit overrides. In your project conventions skill, explicitly override conflicting instructions:

## Overrides
The following instructions take precedence over any framework skill:
- **Exports:** Always use named exports. Never use default exports.
(This overrides React conventions that suggest default exports for components.)
- **State management:** Use Zustand for global state. Do not use Redux.
(This overrides any skill that teaches Redux patterns.)
- **Error handling:** Use our AppError class, not thrown Error objects.
(This overrides general TypeScript error handling patterns.)

Layer your skills explicitly. Define a priority order in your project instructions:

# Priority Order for Convention Conflicts
1. This project conventions skill (highest priority)
2. Framework-specific skills (React, Next.js)
3. Language-level skills (TypeScript best practices)
4. General development skills (lowest priority)
When instructions conflict between layers, follow the higher-priority source.

When every developer on the team has the same skills installed, the AI produces consistent output regardless of who is prompting. This eliminates the “it works on my machine” problem for AI-generated code.

Commit skill files directly to your project repository. When developers clone the repo, they get the skills automatically.

Commit .cursor/rules/ to version control:

.cursor/
rules/
project-conventions.md # Team coding patterns
framework-patterns.md # React/Next.js patterns
workflow-conventions.md # Git, PR, deployment patterns

Add to your project README:

## AI Configuration
This project includes Cursor rules in `.cursor/rules/`.
They are loaded automatically when you open the project in Cursor.
No additional setup needed.

Separating Team Skills from Personal Skills

Section titled “Separating Team Skills from Personal Skills”

Some skills are team-wide standards. Others are personal preferences. Keep them separate.

Team skills (committed to repo):

  • Project coding conventions
  • Architecture decisions
  • Testing requirements
  • Deployment workflow

Personal skills (in home directory, not committed):

  • Editor workflow preferences
  • Personal productivity patterns
  • Experimental skills being evaluated

Team: .cursor/rules/ (in the project directory)

Personal: ~/.cursor/rules/ (in your home directory)

Cursor reads both. Project rules take precedence when they conflict.

When a new developer joins the team:

  1. Clone the repository. All committed skills are included automatically.

  2. Install marketplace skills. If the project uses marketplace skills in addition to committed ones, list them in the README with install commands:

    Terminal window
    npx skills add vercel-labs/agent-skills
    npx skills add anthropics/claude-code
  3. Verify the setup. Run a standard verification prompt.

Marketplace skills update when the author pushes changes. An update might introduce conventions that break your existing codebase patterns. Version pinning prevents unexpected changes.

Terminal window
# Pin to a specific commit
npx skills add vercel-labs/agent-skills@abc1234
  • Production codebases: Always pin. Unexpected convention changes can cause inconsistent code generation.
  • Personal projects: Optional. Latest versions give you the newest patterns.
  • During audits or reviews: Pin to ensure consistent AI behavior during the review period.

Schedule skill updates like dependency updates:

Terminal window
# Check for updates
npx skills list --outdated
# Update a specific skill
npx skills update vercel-labs/agent-skills
# Test after updating
# Generate sample code and verify it matches expectations

Skills are markdown files that become part of your AI’s instructions. A malicious skill could instruct the AI to:

  • Introduce subtle security vulnerabilities in generated code
  • Exfiltrate data through seemingly innocent code patterns
  • Bypass your project’s security conventions
  • Install unwanted dependencies

Before installing any third-party skill:

  1. Read the source. Skills are plain markdown. Read every file before installing.
  2. Check the author. Prefer skills from known organizations (Vercel, Anthropic, Stripe) or developers with established reputations.
  3. Check the repository activity. A skill that was last updated two years ago may teach outdated patterns.
  4. Test in isolation. Install the skill in a test project first and review the AI’s output.

Never put credentials in skill files. Skills are committed to version control and visible to everyone with repository access.

Bad:

## API Configuration
Use this API key for the staging environment: sk_test_abc123...

Good:

## API Configuration
Use environment variables for all API keys. Never hardcode credentials.
Read the API key from process.env.STRIPE_SECRET_KEY.

Each skill file contributes to context usage. Monitor the total size to ensure skills are not crowding out space for actual development work.

Terminal window
# Check total size of all skill files
wc -w .cursor/rules/*.md
wc -w .claude/skills/*/instructions.md
wc -w CLAUDE.md AGENTS.md

Guidelines:

  • Under 3,000 words total: Comfortable. Plenty of room for conversation.
  • 3,000-5,000 words: Acceptable for focused work. May feel cramped for long sessions.
  • Over 5,000 words: Too much. The AI will not process all instructions effectively. Cut or split.

Track these signals to know if your skills are working:

Code review comments decrease. If reviewers stop flagging convention violations in AI-generated code, the skills are effective.

Prompt length decreases. Developers spend less time explaining conventions in prompts because the skills handle it.

Onboarding time decreases. New developers produce convention-compliant code faster with AI guidance.

AI output consistency increases. Different developers asking similar questions get similarly structured results.

Set a recurring reminder to audit your skill stack:

  1. Are all skills still relevant to your current tech stack?
  2. Are any skills teaching deprecated patterns?
  3. Are any skills redundant (overlapping with other skills)?
  4. Is the total context budget reasonable?
  5. Are there new marketplace skills that would be valuable?

Problem: “Write clean code and follow best practices” wastes context on something the AI cannot act on.

Fix: Replace with specific, actionable instructions. “When creating service classes, use constructor injection with a typed options parameter.”

Mistake: Skills That Duplicate Built-in Knowledge

Section titled “Mistake: Skills That Duplicate Built-in Knowledge”

Problem: A skill that says “use async/await instead of callbacks” teaches the AI something it already knows.

Fix: Focus on conventions specific to your team that the AI would not discover from its training data.

Mistake: Skills That Contradict Each Other

Section titled “Mistake: Skills That Contradict Each Other”

Problem: Two skills give opposite instructions for the same concern. The AI oscillates between them.

Fix: Remove the less authoritative skill, or add explicit override language in the more specific skill.

Problem: The skill was written for React 17 and the Pages Router. The project now uses React 19 with the App Router.

Fix: Treat skill updates like documentation updates. When you change a pattern in the codebase, update the skill in the same PR.

Problem: A community skill teaches patterns incompatible with your project.

Fix: Test every new skill by generating representative code and reviewing it before committing to the team.

Problem: Twelve skills overload the AI’s context and it tries to satisfy all of them, overengineering everything.

Fix: Audit and reduce to 3-5 essential skills. Move specialized skills to a “use when needed” list.

AI ignores skills when the context window is full. Long conversations with many file reads push skills out of active context. Start a new session for tasks that require strict convention adherence.

Skills work in one tool but not another. Each tool reads skills from a different directory. Verify the files exist in the correct location for the tool you are using.

New team member’s AI behaves differently. They may not have run npx skills update after cloning, or they may have personal skills that override team conventions. Check their skill directory and global configuration.

Skills make the AI too conservative. If the AI refuses to generate quick prototype code because skills demand production-quality patterns, tell it explicitly: “This is a prototype. Ignore convention skills and optimize for speed.”

Context window errors during long sessions. If you see truncation warnings or degraded output quality, your skills plus conversation history have exceeded the context limit. Reduce skill count or start a fresh session.