JavaScript/TypeScript
- ESLint with enterprise config
- Prettier for formatting
- Type checking via the TypeScript compiler (`tsc --noEmit`), not the deprecated TSLint
- Custom rules via ESLint plugins (see the config sketch below)
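As a minimal sketch of how these pieces fit together (the shared config name `@company/eslint-config` is a placeholder, not a real package):

```js
// .eslintrc.js — illustrative enterprise setup
module.exports = {
  root: true,
  parser: '@typescript-eslint/parser',
  extends: [
    '@company/eslint-config',                 // hypothetical shared enterprise config
    'plugin:@typescript-eslint/recommended',
    'prettier',                               // disables rules that conflict with Prettier
  ],
  rules: {
    '@typescript-eslint/no-explicit-any': 'error', // "no any without justification"
    'max-len': ['warn', { code: 100 }],            // matches the 100-character standard
  },
};
```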
Maintaining consistent code quality across large enterprise codebases is challenging. AI-powered tools transform quality assurance from a bottleneck into an accelerator, catching issues early while enforcing standards automatically.
- Tier 1: Development-Time Quality
- Tier 2: Integration Quality Gates
- Tier 3: Continuous Quality Monitoring
```
---
description: Enterprise Code Quality Standards
alwaysApply: true
---

## Code Style
- Use 4 spaces for indentation
- Maximum line length: 100 characters
- All functions must have JSDoc comments
- Use descriptive variable names (min 3 chars)

## Architecture Patterns
- Follow repository pattern for data access
- Use dependency injection for services
- Implement proper error boundaries
- All API calls through centralized modules

## Performance Standards
- Database queries must use indexed columns
- Implement pagination for list endpoints
- Use caching for expensive operations
- Avoid N+1 query patterns

## Security Requirements
- Never log sensitive data
- Validate all user inputs
- Use parameterized queries
- Implement rate limiting on APIs
```
```
## Coding Standards

### Style Guidelines
- ESLint configuration: .eslintrc.json
- Prettier formatting: .prettierrc
- TypeScript strict mode enabled
- No `any` types without justification

### Quality Gates
- Code coverage minimum: 80%
- Cyclomatic complexity limit: 10
- No console.log in production code
- All TODOs must include ticket numbers

### Review Checklist
- [ ] Tests written for new functionality
- [ ] Documentation updated
- [ ] No security vulnerabilities
- [ ] Performance impact assessed
- [ ] Error handling implemented
```
Hook Configuration for Claude Code:
{ "hooks": [ { "matcher": "Edit|Write", "hooks": [ { "type": "command", "command": "prettier --write \"$CLAUDE_FILE_PATHS\"" }, { "type": "command", "command": "eslint --fix \"$CLAUDE_FILE_PATHS\"" } ] }, { "matcher": "Edit", "hooks": [ { "type": "command", "command": "if [[ \"$CLAUDE_FILE_PATHS\" =~ \\.(ts|tsx)$ ]]; then npx tsc --noEmit \"$CLAUDE_FILE_PATHS\"; fi" } ] } ]}
These examples use JavaScript/TypeScript; equivalent hooks and auto-fix setups apply to Python, Java, and Go with each language's formatters and linters.
Cursor’s Auto-Fix Loop:
```ts
// When ESLint errors appear in the Problems panel:
// 1. Cursor detects the errors
// 2. If "Loops on Errors" is enabled, AI auto-fixes
// 3. Fixes are applied without manual intervention
// 4. Process repeats until clean

// Example: AI converts this...
const data = response.data as any;

// ...to this with proper typing:
interface ResponseData {
  id: string;
  status: 'active' | 'inactive';
  metadata: Record<string, unknown>;
}
const data = response.data as ResponseData;
```
Claude Code’s Validation Chain:
```
# Custom slash command: /project:validate
Please validate the current changes:
1. Run all linters and formatters
2. Check for security vulnerabilities
3. Verify test coverage
4. Analyze performance implications
5. Generate a quality report

Include specific fixes for any issues found.
```
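In Claude Code, project-scoped slash commands are Markdown files checked into the repository, so the prompt above lives at a conventional path (verify the exact convention against your Claude Code version's docs):

```bash
# Project commands are Markdown files; the file body becomes the prompt
mkdir -p .claude/commands
# Save the prompt above as .claude/commands/validate.md
# and invoke it in Claude Code as /project:validate
```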
Create `.cursor/BUGBOT.md` at the project root:
```
# Enterprise Review Guidelines

## Critical Security Checks
- No hardcoded credentials or API keys
- Input validation on all user data
- SQL injection prevention via parameterized queries
- XSS protection in rendered content
- Authentication on all protected endpoints

## Performance Considerations
- Batch database operations where possible
- Implement caching for expensive computations
- Use pagination for large datasets
- Monitor for N+1 query patterns
- Profile critical paths regularly

## Code Quality Standards
- Functions under 50 lines
- Classes follow single responsibility
- Proper error handling with specific exceptions
- Comprehensive logging (no sensitive data)
- Test coverage for new code > 80%

## Common Anti-Patterns
- Global state mutations
- Synchronous operations in async contexts
- Missing error boundaries in React
- Unhandled promise rejections
- Memory leaks from event listeners
```
Configure GitHub/GitLab integration
```bash
# Claude Code
claude mcp add github

# Or for the GitHub App
/install-github-app
```
Set up review automation
```yaml
name: AI Code Review
on: [pull_request]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: AI Review
        run: |
          claude --no-interactive \
            "Review this PR for security, performance, \
            and code quality issues. Focus on: \
            $(git diff origin/main...HEAD)"
```
Enable team notifications
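The notification channel is team-specific; as one hedged sketch (the env var name `REVIEW_WEBHOOK_URL` and the message shape are assumptions invented here), a small script can forward AI review summaries to a chat webhook:

```ts
// notify-review.ts — illustrative; requires Node 18+ for global fetch
async function postReviewSummary(summary: string): Promise<void> {
  // e.g. a Slack-style incoming webhook URL, stored as a CI secret
  const webhookUrl = process.env.REVIEW_WEBHOOK_URL;
  if (!webhookUrl) throw new Error('REVIEW_WEBHOOK_URL is not set');

  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: `AI code review summary:\n${summary}` }),
  });
}
```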
Track these metrics to measure AI review effectiveness:
```ts
// Example quality monitoring setup
interface QualityMetrics {
  codeComplexity: number;
  testCoverage: number;
  duplicateCodePercentage: number;
  technicalDebtHours: number;
  securityVulnerabilities: number;
}

// AI-generated quality report
async function generateQualityReport(): Promise<QualityMetrics> {
  // Claude/Cursor can analyze the codebase and generate these values
  return {
    codeComplexity: 8.2,           // Average cyclomatic complexity
    testCoverage: 84.5,            // Percentage covered
    duplicateCodePercentage: 3.2,
    technicalDebtHours: 120,
    securityVulnerabilities: 0,
  };
}
```
```js
module.exports = {
  rules: {
    coverage: { min: 80, severity: 'error' },
    complexity: { max: 10, severity: 'warning' },
    duplicates: { max: 5, severity: 'warning' },
    vulnerabilities: { max: 0, severity: 'error' },

    // AI-specific gates
    aiReviewPassed: { required: true },
    documentationUpdated: { required: true },
    testsAdded: { required: true },
  },

  enforcement: {
    blockMerge: ['error'],
    requireApproval: ['warning'],
  },
};
```
```ts
// Monitor code quality in production
import { QualityMonitor } from '@company/quality-tools';

const monitor = new QualityMonitor({
  metrics: ['performance', 'errors', 'security'],
  aiAnalysis: true,
  alertThresholds: {
    errorRate: 0.01,    // 1% error rate
    responseTime: 1000, // 1 second
    memoryUsage: 0.8,   // 80% of allocated memory
  },
});

// AI analyzes anomalies
// (aiAnalyzeAnomaly and notifyOncall are project-specific helpers)
monitor.on('anomaly', async (event) => {
  const analysis = await aiAnalyzeAnomaly(event);
  if (analysis.severity === 'critical') {
    await notifyOncall(analysis);
  }
});
```
Identifying Performance Issues:
```
# Claude Code command
"Analyze this codebase for performance bottlenecks.
Focus on:
- Database query efficiency
- Algorithm complexity
- Memory usage patterns
- Network request optimization
- Caching opportunities

Provide specific code examples and fixes."
```
Performance Standards Enforcement:
```
---
description: Performance Requirements
globs: ["**/*.ts", "**/*.js"]
---

- All database queries must be indexed
- Implement pagination for lists > 100 items
- Use memoization for expensive computations
- Batch API requests where possible
- Implement request debouncing for user inputs
- Profile and optimize functions > 100ms
```
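To make two of these rules concrete, here is a small sketch of memoization and debouncing (the helpers are written inline for illustration; `fetchResults` is an assumed application function):

```ts
declare function fetchResults(query: string): Promise<unknown>; // assumed app function

// Memoize an expensive pure computation: cache results by argument
function memoize<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg)!;
  };
}

// Debounce user input so rapid keystrokes trigger a single request
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

const search = debounce((query: string) => fetchResults(query), 300);
```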
```js
// AI-generated load test scenarios (k6-style configuration)
export const loadTestScenarios = {
  userRegistration: {
    vus: 100,        // Virtual users
    duration: '5m',
    thresholds: {
      http_req_duration: ['p(95) < 500'], // 95% under 500ms
      http_req_failed: ['rate < 0.1'],    // Error rate less than 10%
    },
  },

  apiEndpoints: {
    vus: 200,
    duration: '10m',
    scenarios: {
      constant_load: {
        executor: 'constant-vus',
        vus: 50,
        duration: '5m',
      },
      spike_test: {
        executor: 'ramping-vus',
        stages: [
          { duration: '2m', target: 100 },
          { duration: '1m', target: 200 },
          { duration: '2m', target: 100 },
        ],
      },
    },
  },
};
```
Configure security rules
```js
module.exports = {
  rules: {
    // no-eval and no-implied-eval are core ESLint rules; the rest
    // come from security plugins or custom in-house rules
    'no-eval': 'error',
    'no-implied-eval': 'error',
    'no-hardcoded-secrets': 'error',
    'validate-inputs': 'error',
    'parameterized-queries': 'error',
    'secure-random': 'warn', // ESLint severities are 'off' | 'warn' | 'error'
    'crypto-strong': 'error',
  },
};
```
Implement AI security reviews
```
# Custom security scan command
/project:security-audit

# Comprehensive check including:
# - OWASP Top 10 vulnerabilities
# - Dependency vulnerabilities
# - Code injection risks
# - Authentication weaknesses
# - Data exposure risks
```
Set up continuous monitoring
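This step isn't specified further here; one hedged option is a scheduled job that re-runs a dependency audit and fails when new findings appear (JSON shape per recent npm versions):

```ts
// audit-monitor.ts — illustrative continuous security check
import { execSync } from 'node:child_process';

function countVulnerabilities(): number {
  // npm audit --json exits non-zero when issues exist, so tolerate failure
  let raw = '{}';
  try {
    raw = execSync('npm audit --json', { encoding: 'utf8' });
  } catch (err: any) {
    raw = err.stdout ?? '{}';
  }
  const report = JSON.parse(raw);
  return report?.metadata?.vulnerabilities?.total ?? 0;
}

if (countVulnerabilities() > 0) {
  console.error('New dependency vulnerabilities detected — escalate to security review.');
  process.exitCode = 1;
}
```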
New Developer Checklist:
Quality Patterns Library:
```ts
// Share successful patterns across the team
export const QualityPatterns = {
  errorHandling: {
    description: "Consistent error handling pattern",
    example: `
      try {
        const result = await riskyOperation();
        return { success: true, data: result };
      } catch (error) {
        logger.error('Operation failed', { error, context });
        return { success: false, error: error.message };
      }
    `,
    aiPrompt: "Apply this error handling pattern consistently",
  },

  performantQueries: {
    description: "Optimized database query patterns",
    example: "Use batch loading, indexes, and projections",
    aiPrompt: "Optimize database queries using our patterns",
  },
};
```
Track and improve quality metrics over time:
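One hedged way to do that, reusing the `QualityMetrics` interface from the monitoring example above (the storage and trend logic are illustrative):

```ts
// Append a timestamped snapshot and report movement against the last one
interface MetricsSnapshot {
  capturedAt: string;
  metrics: QualityMetrics;
}

const history: MetricsSnapshot[] = [];

function recordSnapshot(metrics: QualityMetrics): void {
  history.push({ capturedAt: new Date().toISOString(), metrics });
}

function coverageTrend(): number | undefined {
  if (history.length < 2) return undefined;
  const [prev, curr] = history.slice(-2);
  return curr.metrics.testCoverage - prev.metrics.testCoverage; // positive = improving
}
```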
🎯 Start Small
Begin with basic linting and formatting, then gradually add more sophisticated quality gates.
📊 Measure Impact
Track metrics before and after implementing AI quality tools to demonstrate value.
🤝 Team Buy-in
Involve the team in defining standards and configuring AI behavior for better adoption.
🔄 Iterate Often
Regularly review and update quality rules based on team feedback and project evolution.