# Team Collaboration: Tips 106-112
Your team has five developers using Cursor. Each one has different global rules, different model preferences, different prompting styles. One developer’s agent generates code with semicolons; another’s generates code without. One uses `any` types everywhere; another writes strict TypeScript. The AI-generated code looks like it was written by five different people — because it was configured by five different people. These 7 tips turn individual Cursor setups into a coordinated team workflow.
## What You Will Walk Away With

- A committed `.cursor/rules/` directory that makes AI-generated code consistent across the entire team
- An onboarding checklist that gets new team members productive with Cursor in a single day
- An AI-assisted code review process that catches issues before human reviewers spend time on them
- Shared prompt templates that encode your team’s best practices into reusable files
- A security policy that balances productivity with compliance requirements
## Shared Configuration

### Tip 106: Commit Your Cursor Rules to Version Control

This is the single most important team tip. Project rules belong in version control, not in individual developer settings. When rules are committed, every developer and every agent conversation uses the same constraints.
Create the rules directory and commit it:
```shell
mkdir -p .cursor/rules
```

At minimum, create these files:
```markdown
## Code Style

- TypeScript strict mode. No 'any' types except where explicitly documented.
- Named exports only. No default exports.
- Functional components with hooks for React.
- camelCase for variables and functions. PascalCase for types, interfaces, and components.
- Use async/await. Never use .then() chains.
- All async functions must have try-catch error handling.
- Prefer fetch over axios for HTTP requests.
- Use pnpm for all package manager commands.
```

```markdown
## Testing Conventions

- Use vitest for all unit and integration tests.
- Test files live next to the code they test: src/services/__tests__/user.test.ts
- Use descriptive test names that explain the scenario: "should return 404 when user does not exist"
- Mock external dependencies (database, APIs) but not internal modules.
- Aim for 80% code coverage on new code.
- Always test error cases, not just happy paths.
```

```markdown
## Architecture Rules

- API routes in src/api/ handle HTTP concerns only (parsing, validation, response formatting).
- Business logic goes in src/services/. Services never import from src/api/.
- Database operations go in src/repositories/. Services use repositories, never raw queries.
- Shared types go in src/types/. Types are the only thing that every layer can import.
- Do not create circular dependencies. If service A needs service B, inject it as a parameter.
```

### Tip 107: Standardize MCP Server Configuration
If your team uses MCP servers (database, GitHub, Jira), commit the configuration so every developer has the same setup:
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": { "DATABASE_URL": "${DATABASE_URL}" }
    }
  }
}
```

Document the required environment variables in your project’s README or onboarding guide. Each developer sets their own tokens, but the MCP server configuration is shared.
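A missing token usually surfaces later as a cryptic MCP connection error, so it helps to check the environment up front. A minimal sketch, assuming the variable names from the configuration above (substitute your own list):

```shell
# Hedged sketch: verify that the environment variables the shared MCP
# configuration expects are actually set before starting a session.
check_mcp_env() {
  local var missing=0
  for var in GITHUB_TOKEN DATABASE_URL; do
    # printenv prints nothing and exits nonzero when the variable is unset
    if [ -z "$(printenv "$var")" ]; then
      echo "missing: $var"
      missing=1
    fi
  done
  return $missing
}

# Typical usage (e.g. in a setup script):
# check_mcp_env || echo "set the variables above before using MCP servers"
```

Running this from a setup or onboarding script makes a misconfigured environment fail loudly instead of silently breaking the MCP servers.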
### Tip 108: Create Shared Prompt Templates

Save commonly used prompts as files in the repo so any team member can reference them:
```shell
mkdir -p .cursor/prompts
```

A template for new endpoints (`.cursor/prompts/new-endpoint.md`):

```markdown
Create a new API endpoint following these project conventions:

1. Route handler in src/api/[resource]/route.ts using the pattern in @src/api/users/route.ts
2. Service layer in src/services/[resource].ts using the pattern in @src/services/user.ts
3. Type definitions in src/types/[resource].ts
4. Vitest tests in src/services/__tests__/[resource].test.ts
5. Zod validation schema for request body
6. Proper error handling using our AppError class from @src/lib/errors.ts

Run pnpm test after implementation to verify everything passes.
```

A pre-PR cleanup template (`.cursor/prompts/pre-pr.md`):

```markdown
Review and fix all issues before this PR is ready:

1. pnpm run typecheck -- fix TypeScript errors
2. pnpm run lint -- fix linting issues
3. pnpm run test -- fix test failures (do not modify assertions)
4. Remove any console.log/debug statements
5. Verify all new exports are added to index files
6. Check that no sensitive data (API keys, passwords) is committed

Summarize all changes made.
```

Team members reference these with `@.cursor/prompts/new-endpoint.md` in any agent conversation. The prompts encode your team’s best practices so that even the newest team member produces consistent output.
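Templates only help if people know they exist. A small convenience sketch that lists the shared templates so developers can check before writing a new prompt from scratch (the directory name follows the convention above):

```shell
# List the team's shared prompt templates, if the directory exists.
list_prompts() {
  local dir="${1:-.cursor/prompts}"
  if [ -d "$dir" ]; then
    ls -1 "$dir"
  else
    echo "no shared prompts found at $dir" >&2
    return 1
  fi
}

# Typical usage from the repo root:
# list_prompts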
## Onboarding and Training

### Tip 109: Create a Cursor Onboarding Checklist for New Team Members

When a new developer joins the team, they should be productive with Cursor within a day. Create an onboarding document:
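What belongs on the checklist depends on your stack; the sketch below is a starting point built only from the tips in this section (the `.cursor/mcp.json` path and variable names are assumptions carried over from Tip 107):

```markdown
## Cursor Onboarding Checklist

- [ ] Install Cursor and sign in with your work account
- [ ] Enable Privacy Mode (see the AI security policy)
- [ ] Clone the repo and confirm the rules in .cursor/rules/ are applied in a test conversation
- [ ] Set the environment variables required by .cursor/mcp.json (GITHUB_TOKEN, DATABASE_URL)
- [ ] Read the shared prompt templates in .cursor/prompts/
- [ ] Run @.cursor/prompts/pre-pr.md on a small practice change
- [ ] Review the AI security policy, including the YOLO mode allow/deny lists
```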
### Tip 110: Run Weekly “Cursor Tips” Standups

Dedicate 5 minutes of one weekly standup to Cursor tips. Each week, one team member shares:
- A prompt that worked particularly well
- A workflow they discovered
- A problem they could not solve with Cursor (the team might have a solution)
- A suggestion for a new project rule
This creates a feedback loop where the team continuously improves its Cursor usage. The best prompts and workflows get added to `.cursor/prompts/` for everyone to use.
## Code Review and Quality

### Tip 111: Implement AI-Assisted Pre-Review

Before requesting a human code review, have every team member run an AI review:
1. Developer finishes their feature
2. Developer runs `@.cursor/prompts/pre-pr.md` to fix linting, types, and tests
3. Developer runs Bug Finder (`Cmd+Shift+P` > “Bug Finder”)
4. Developer runs the AI code review prompt below
5. Developer fixes issues found in steps 2-4
6. Developer opens PR for human review
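The mechanical checks in this flow can also run as a plain script, which is useful both locally and in CI. A minimal sketch of a check runner; the pnpm script names in the comment are assumptions carried over from the pre-PR template:

```shell
# Hedged sketch: run each check in order and stop at the first failure,
# mirroring the fix-before-review steps above.
run_checks() {
  local cmd
  for cmd in "$@"; do
    echo "running: $cmd"
    # $cmd is intentionally unquoted so "pnpm run lint" splits into words
    if ! $cmd; then
      echo "failed: $cmd"
      return 1
    fi
  done
  echo "all checks passed"
}

# Typical invocation before opening a PR (script names assumed):
# run_checks "pnpm run typecheck" "pnpm run lint" "pnpm run test"
```

Stopping at the first failure keeps the output focused: fix one category of problem, rerun, repeat.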
This pre-review catches 60-70% of the issues that a human reviewer would flag. Human reviewers can then focus on architecture decisions, business logic correctness, and design trade-offs — the things AI is worst at.
## Security and Compliance

### Tip 112: Establish a Team Security Policy for AI Tools

Create a security policy that balances productivity with compliance:
```markdown
## AI Security Policy

### What the AI Can Access

- All source code in the repository
- Development and staging databases via MCP (never production)
- Public documentation and APIs
- CI/CD logs and build artifacts

### What the AI Must Not Do

- Access production databases or servers
- Commit or push code without human review
- Store API keys, passwords, or secrets in code
- Disable security middleware or authentication checks
- Modify .env files or deployment configurations

### Privacy Requirements

- Enable Privacy Mode for all proprietary code
- Do not paste customer data into agent conversations
- Do not reference internal company documents by URL
- Review all AI-generated code for accidentally exposed credentials

### YOLO Mode Restrictions

- Allow: test commands, build commands, linting, file creation
- Deny: rm -rf, sudo, ssh, curl to external URLs, git push, npm publish, deploy commands
```

Commit this to your repo and review it during onboarding. Update it as your team’s security requirements evolve.
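The last privacy requirement, checking for exposed credentials, can be partly automated. A rough grep-based sketch; the patterns are illustrative and will produce false positives, so a dedicated scanner such as gitleaks is the better long-term tool:

```shell
# Rough sketch: flag lines in the given files that look like hardcoded
# credentials. The pattern list is illustrative, not exhaustive, and will
# also flag harmless lines (e.g. reading a token from the environment).
scan_for_secrets() {
  grep -rniE '(api[_-]?key|secret|password|token)[[:space:]]*[:=]' "$@"
}

# Typical CI usage: fail the step if anything suspicious is found.
# ! scan_for_secrets src/
```

Note that `scan_for_secrets` exits 0 when it *finds* a suspicious line (grep's convention), so a CI step inverts the result as shown in the comment.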
## When This Breaks

**Team members override project rules with personal settings:** Global user rules take precedence over project rules for some settings. Make sure the project rules document does not conflict with common personal preferences. If conflicts arise, have a team discussion to standardize the convention and update the project rules.
**Onboarding checklist becomes outdated:** Assign one team member to update the onboarding checklist whenever Cursor releases a significant update or the team changes a workflow. Review it quarterly at minimum.

**AI code review is too noisy:** If the AI review prompt generates too many false positives, refine it. Add exceptions for known patterns: “Our project intentionally uses `any` types in the GraphQL resolver layer — do not flag these.” The more specific the prompt, the fewer false positives.

**MCP server tokens expire:** Set calendar reminders for token renewal. Document the token rotation process in your onboarding guide. Consider using short-lived tokens from your organization’s secret management system.
## What is Next

You have completed the full 112-tip collection. Your individual workflow is optimized, your team is configured for consistency, and you have the advanced techniques to handle any development challenge.
To continue growing:
- Browse the Cursor Advanced Techniques section for deep dives into agent modes, checkpoints, and automation workflows
- Explore Productivity Patterns for keyboard shortcuts, debugging workflows, and testing strategies
- Check Version Management to stay current with Cursor’s latest features
- Return to this Tips Collection index whenever you need a refresher on specific techniques