Transform your code review process with Claude Code’s intelligent analysis capabilities. This guide covers automated review workflows, custom review rules, and integration with popular platforms.

Automated review complements human review with three strengths: consistency, speed, and learning.

Consistency
- Same standards applied every time
- No reviewer fatigue
- Catches subtle issues
- Enforces team conventions
Simple file review
```bash
# Review a single file
claudecode review src/components/UserAuth.js \
  --output review-comments.md

# Review with specific focus
claudecode review src/api/endpoints.js \
  --focus "security,performance,error-handling" \
  --severity "error,warning"
```
Diff-based review
```bash
# Review changes between branches
claudecode review \
  --base main \
  --head feature/new-auth \
  --format github-pr

# Review staged changes
git diff --staged | claudecode review \
  --input - \
  --context "React application with TypeScript"
```
Batch review
```bash
# Review all changed files
git diff --name-only main...HEAD | \
  xargs -I {} claudecode review {} \
  --output reviews/{}.md

# Combine reviews
cat reviews/*.md > combined-review.md
```
Custom Review Rules
Define project-specific rules in a `.claude-review.yml` at the repository root:

```yaml
version: 1
rules:
  - name: security-checks
    description: "Security vulnerability detection"
    patterns:
      - "hardcoded passwords or API keys"
      - "SQL injection vulnerabilities"
      - "XSS attack vectors"
      - "unsafe deserialization"
    severity: error

  - name: performance-checks
    description: "Performance anti-patterns"
    patterns:
      - "N+1 query problems"
      - "unnecessary re-renders"
      - "blocking operations in async code"
      - "memory leaks"
    severity: warning

  - name: code-quality
    description: "General code quality"
    patterns:
      - "functions longer than 50 lines"
      - "cyclomatic complexity > 10"
      - "duplicate code blocks"
      - "missing error handling"
    severity: info

focus_areas:
  frontend:
    - accessibility
    - responsive design
    - browser compatibility
    - performance metrics
  backend:
    - API design
    - database queries
    - error handling
    - logging practices
  security:
    - authentication
    - authorization
    - data validation
    - encryption

ignore_patterns:
  - "**/*.test.js"
  - "**/node_modules/**"
  - "**/*.generated.ts"
```
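A config like this is worth sanity-checking before a run. The sketch below is a minimal validator; the schema it assumes is just the fields shown above, not an official one:

```python
# Minimal sanity check for a parsed .claude-review.yml config.
# The schema here mirrors the example above and is illustrative only.
VALID_SEVERITIES = {"error", "warning", "info", "style"}

def validate_config(config):
    """Return a list of problems found in a parsed review config."""
    problems = []
    if config.get("version") != 1:
        problems.append("version must be 1")
    for rule in config.get("rules", []):
        name = rule.get("name", "<unnamed>")
        if not rule.get("patterns"):
            problems.append(f"rule {name}: no patterns defined")
        if rule.get("severity") not in VALID_SEVERITIES:
            problems.append(f"rule {name}: invalid severity {rule.get('severity')!r}")
    return problems

config = {
    "version": 1,
    "rules": [
        {"name": "security-checks", "patterns": ["hardcoded passwords"], "severity": "error"},
        {"name": "bad-rule", "patterns": [], "severity": "blocker"},
    ],
}
print(validate_config(config))
```

Running the check against a broken rule reports both the empty pattern list and the unknown severity, so a bad config fails fast instead of silently reviewing nothing.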
GitHub Actions Integration

```yaml
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  automated-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Claude Code
        run: |
          npm install -g @anthropic-ai/claude-code-cli
          claudecode --version

      - name: Load Review Configuration
        run: |
          # Persist the config path for later steps (a plain `export`
          # does not survive across steps)
          if [ -f .claude-review.yml ]; then
            echo "CLAUDE_REVIEW_CONFIG=.claude-review.yml" >> "$GITHUB_ENV"
          fi

      - name: Analyze Changed Files
        id: analyze
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          # Get changed files
          CHANGED_FILES=$(git diff --name-only \
            origin/${{ github.base_ref }}...${{ github.sha }} \
            -- '*.js' '*.ts' '*.jsx' '*.tsx' '*.py' '*.go')

          # Create a review for each file
          echo "## Claude Code Review Results" > review.md
          echo "" >> review.md

          for file in $CHANGED_FILES; do
            if [ -f "$file" ]; then
              echo "### $file" >> review.md
              claudecode review "$file" \
                --base origin/${{ github.base_ref }} \
                ${CLAUDE_REVIEW_CONFIG:+--config "$CLAUDE_REVIEW_CONFIG"} \
                --format markdown >> review.md
              echo "" >> review.md
            fi
          done

          # Generate summary
          claudecode "Summarize these review findings and prioritize issues" \
            --input review.md \
            --output summary.md

      - name: Post Review Comments
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const review = fs.readFileSync('review.md', 'utf8');
            const summary = fs.readFileSync('summary.md', 'utf8');

            function parseReviewComments(review) {
              // Parse review.md and create inline comments
              // Implementation depends on review format
              return [];
            }

            await github.rest.pulls.createReview({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number,
              body: summary,
              event: 'COMMENT',
              comments: parseReviewComments(review)
            });

      - name: Update PR Status
        run: |
          # Fail the job when blocking issues are present
          if grep -q "severity: error" review.md; then
            echo "::error::Critical issues found in code review"
            exit 1
          fi
```
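The `parseReviewComments` helper in the workflow is deliberately left as a stub because it depends on the markdown your review step emits. Assuming the `### <file>` headings the workflow writes, plus a hypothetical `- [severity] line N: message` bullet per issue, a parser might look like this (adapt the regex to your real output format):

```python
import re

def parse_review_comments(review_md):
    """Turn review markdown into inline-comment dicts.

    Assumes '### <path>' section headings and issue lines shaped like
    '- [error] line 12: message' -- an illustrative format, not a
    guaranteed one.
    """
    comments = []
    current_file = None
    issue_re = re.compile(r"-\s*\[(\w+)\]\s*line\s+(\d+):\s*(.+)")
    for line in review_md.splitlines():
        if line.startswith("### "):
            current_file = line[4:].strip()
        elif current_file and (m := issue_re.match(line.strip())):
            comments.append({
                "path": current_file,
                "line": int(m.group(2)),
                "body": f"**{m.group(1)}**: {m.group(3)}",
            })
    return comments

sample = """## Claude Code Review Results

### src/api/auth.js
- [error] line 42: Token is logged in plain text
"""
print(parse_review_comments(sample))
```

Each dict maps directly onto the `comments` array that `pulls.createReview` accepts (`path`, `line`, `body`).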
GitHub App Integration

```javascript
const { App } = require('@octokit/app');
const { claudeCode } = require('@anthropic-ai/claude-code-sdk');

const app = new App({
  appId: process.env.GITHUB_APP_ID,
  privateKey: process.env.GITHUB_APP_PRIVATE_KEY,
});

// Listen for pull request events
app.webhooks.on('pull_request.opened', async ({ payload }) => {
  await reviewPullRequest(payload);
});

app.webhooks.on('pull_request.synchronize', async ({ payload }) => {
  await reviewPullRequest(payload);
});

async function reviewPullRequest(payload) {
  const { pull_request, repository } = payload;

  // Get an installation-scoped client
  const octokit = await app.getInstallationOctokit(payload.installation.id);

  // Get changed files
  const { data: files } = await octokit.pulls.listFiles({
    owner: repository.owner.login,
    repo: repository.name,
    pull_number: pull_request.number,
  });

  // Review each file
  const reviews = [];
  for (const file of files) {
    if (shouldReviewFile(file.filename)) {
      const review = await claudeCode.review({
        content: await getFileContent(octokit, repository, file),
        context: {
          filename: file.filename,
          changes: file.patch,
          repository: repository.name,
        },
        rules: await getReviewRules(repository),
      });

      reviews.push({
        path: file.filename,
        comments: review.comments,
      });
    }
  }

  // Post review
  await postReview(octokit, repository, pull_request, reviews);
}

function shouldReviewFile(filename) {
  const extensions = ['.js', '.ts', '.jsx', '.tsx', '.py', '.go', '.java'];
  return extensions.some(ext => filename.endsWith(ext));
}

async function getReviewRules(repository) {
  // Load custom rules from .claude-review.yml, falling back to defaults
  try {
    const config = await loadRepoConfig(repository);
    return config.rules || getDefaultRules();
  } catch {
    return getDefaultRules();
  }
}
```
GitLab CI Integration
```yaml
stages:
  - review

claude-code-review:
  stage: review
  image: node:18
  only:
    - merge_requests

  before_script:
    - npm install -g @anthropic-ai/claude-code-cli
    - export ANTHROPIC_API_KEY=$CLAUDE_API_KEY

  # One script block so CHANGED_FILES stays in scope for every phase
  script:
    - |
      # Get MR changes
      git fetch origin "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME"
      CHANGED_FILES=$(git diff --name-only "origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME...HEAD")

      # Run reviews
      echo "## Claude Code Review" > review.md
      for file in $CHANGED_FILES; do
        if [[ -f "$file" ]]; then
          echo "### $file" >> review.md
          claudecode review "$file" \
            --context "GitLab MR: $CI_MERGE_REQUEST_TITLE" \
            --format markdown >> review.md || true
        fi
      done

      # Post review as an MR comment
      REVIEW_CONTENT=$(jq -Rs . < review.md)
      curl --request POST \
        --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
        --header "Content-Type: application/json" \
        --data "{\"body\": $REVIEW_CONTENT}" \
        "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes"

  artifacts:
    reports:
      codequality: claude-review-report.json
    paths:
      - review.md
```
Architecture-aware reviews
```bash
# Build architecture context
claudecode analyze --architecture src/ \
  --output architecture.json

# Use architecture in reviews
claudecode review src/services/NewService.js \
  --context-file architecture.json \
  --check "consistency with existing patterns"
```
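The `analyze` step emits its own schema, but any structured summary works as review context. A stdlib-only sketch of the idea, using `ast` to record which modules each file depends on (the summary format here is an illustrative assumption, not the `claudecode analyze` output):

```python
import ast

def summarize_module(source):
    """Return the module names a Python source file imports -- a tiny
    architecture summary. The format is illustrative only."""
    tree = ast.parse(source)
    imports = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imports.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.append(node.module)
    return sorted(set(imports))

source = """
import json
from services.auth import verify_token

def handler(event):
    return json.dumps(verify_token(event))
"""
print(summarize_module(source))  # ['json', 'services.auth']
```

Collecting these per-file summaries into one JSON document gives the reviewer a dependency map to check new code against.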
Historical context
```bash
# Include commit history
git log -n 20 --pretty=format:"%h %s" > recent-commits.txt

claudecode review src/api/auth.js \
  --context-file recent-commits.txt \
  --check "alignment with recent changes"
```
Performance baselines
```bash
# Include performance data
npm run benchmark > performance-baseline.txt

claudecode review src/components/DataGrid.jsx \
  --context-file performance-baseline.txt \
  --focus "performance impact"
```
Progressive Reviews
```python
import json
import subprocess

class ProgressiveReviewer:
    def __init__(self):
        self.severity_levels = ['error', 'warning', 'info', 'style']

    def review_file(self, file_path):
        """Progressively review a file with increasing scrutiny."""
        issues = []

        for level in self.severity_levels:
            # Run review at the current severity level
            result = subprocess.run([
                'claudecode', 'review', file_path,
                '--severity', level,
                '--format', 'json'
            ], capture_output=True, text=True)

            level_issues = json.loads(result.stdout)
            issues.extend(level_issues)

            # Stop early if critical issues are found
            if level == 'error' and level_issues:
                break

        return self.prioritize_issues(issues)

    def prioritize_issues(self, issues):
        """Sort and deduplicate issues."""
        # Remove duplicates
        unique_issues = []
        seen = set()

        for issue in issues:
            key = (issue['file'], issue['line'], issue['message'])
            if key not in seen:
                seen.add(key)
                unique_issues.append(issue)

        # Sort by severity rank (not alphabetically), then line number
        return sorted(
            unique_issues,
            key=lambda x: (self.severity_levels.index(x['severity']), x['line'])
        )
```
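One subtlety in the prioritization step: sorting on the raw severity string ranks alphabetically (`error < info < style < warning`), which puts warnings last, so ranking by position in the severity list keeps the intended order. A standalone sketch of the dedup-and-sort step with sample data:

```python
SEVERITY_ORDER = ['error', 'warning', 'info', 'style']

def prioritize(issues):
    """Deduplicate by (file, line, message), then sort by severity rank
    and line number -- the same scheme ProgressiveReviewer uses."""
    seen = set()
    unique = []
    for issue in issues:
        key = (issue['file'], issue['line'], issue['message'])
        if key not in seen:
            seen.add(key)
            unique.append(issue)
    return sorted(
        unique,
        key=lambda x: (SEVERITY_ORDER.index(x['severity']), x['line'])
    )

issues = [
    {'file': 'a.py', 'line': 10, 'severity': 'warning', 'message': 'slow loop'},
    {'file': 'a.py', 'line': 3, 'severity': 'error', 'message': 'bad auth'},
    {'file': 'a.py', 'line': 3, 'severity': 'error', 'message': 'bad auth'},  # duplicate
]
print(prioritize(issues))
```

The duplicate is dropped and the error surfaces before the warning, regardless of the order issues arrived in.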
Team-Specific Rules
```javascript
const teamRules = {
  frontend: {
    focus: ['accessibility', 'performance', 'responsive'],
    ignore: ['*.generated.ts', '*.d.ts'],
    customChecks: [
      'React hooks dependencies',
      'Proper error boundaries',
      'Loading states',
      'SEO meta tags'
    ]
  },

  backend: {
    focus: ['security', 'scalability', 'error-handling'],
    ignore: ['*/migrations/*', '*/seeds/*'],
    customChecks: [
      'SQL injection prevention',
      'Rate limiting',
      'Proper logging',
      'Transaction handling'
    ]
  },

  mobile: {
    focus: ['performance', 'offline-support', 'battery'],
    ignore: ['*.android.js', '*.ios.js'],
    customChecks: [
      'Memory leaks',
      'Network optimization',
      'Platform-specific code',
      'App size impact'
    ]
  }
};

function getTeamRules(filePath) {
  // Determine team based on file path
  if (filePath.includes('/frontend/')) return teamRules.frontend;
  if (filePath.includes('/backend/')) return teamRules.backend;
  if (filePath.includes('/mobile/')) return teamRules.mobile;
  return teamRules.frontend; // default
}
```
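The same path-based routing decides which ignore globs apply: a file is skipped when it matches any of its team's patterns. A Python sketch using stdlib `fnmatch`, with team names and patterns mirroring the hypothetical config above:

```python
from fnmatch import fnmatch

# Illustrative subset of the team rules above
TEAM_RULES = {
    "frontend": {"ignore": ["*.generated.ts", "*.d.ts"]},
    "backend": {"ignore": ["*/migrations/*", "*/seeds/*"]},
}

def get_team(file_path):
    """Route a file to a team by path segment, defaulting to frontend."""
    for team in TEAM_RULES:
        if f"/{team}/" in file_path:
            return team
    return "frontend"

def should_skip(file_path):
    """True when the file matches any of its team's ignore globs."""
    team = get_team(file_path)
    return any(fnmatch(file_path, pat) for pat in TEAM_RULES[team]["ignore"])

print(should_skip("src/backend/migrations/001_init.sql"))  # True
print(should_skip("src/frontend/App.tsx"))                 # False
```

Note that `fnmatch`'s `*` matches across path separators, which is what makes `*/migrations/*` catch nested paths here.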
Create metrics collection
```python
from collections import Counter, defaultdict
from datetime import datetime

class ReviewMetrics:
    def __init__(self):
        self.metrics = defaultdict(lambda: {
            'total_issues': 0,
            'issues_by_severity': defaultdict(int),
            'issues_by_category': defaultdict(int),
            'false_positives': 0,
            'review_time': 0,
            'files_reviewed': 0
        })

    def get_team(self, file_path):
        """Map a file path to a team bucket."""
        for team in ('frontend', 'backend', 'mobile'):
            if f'/{team}/' in file_path:
                return team
        return 'other'

    def record_review(self, file_path, review_result, review_time):
        """Record metrics from a review."""
        team = self.get_team(file_path)
        metrics = self.metrics[team]

        metrics['files_reviewed'] += 1
        metrics['review_time'] += review_time

        for issue in review_result.get('issues', []):
            metrics['total_issues'] += 1
            metrics['issues_by_severity'][issue['severity']] += 1
            metrics['issues_by_category'][issue['category']] += 1

    def get_top_issues(self, n=5):
        """Most common issue categories across all teams."""
        combined = Counter()
        for m in self.metrics.values():
            combined.update(m['issues_by_category'])
        return combined.most_common(n)

    def generate_report(self):
        """Generate a metrics report."""
        return {
            'generated_at': datetime.now().isoformat(),
            'summary': {
                'total_files': sum(m['files_reviewed'] for m in self.metrics.values()),
                'total_issues': sum(m['total_issues'] for m in self.metrics.values()),
                'avg_review_time': self.calculate_avg_time(),
                'most_common_issues': self.get_top_issues()
            },
            'by_team': dict(self.metrics)
        }

    def calculate_avg_time(self):
        total_time = sum(m['review_time'] for m in self.metrics.values())
        total_files = sum(m['files_reviewed'] for m in self.metrics.values())
        return total_time / total_files if total_files > 0 else 0
```
Visualize metrics
```javascript
import Chart from 'chart.js/auto';

async function createReviewDashboard() {
  const metrics = await fetch('/api/review-metrics').then(r => r.json());

  // Issues by severity chart
  new Chart(document.getElementById('severity-chart'), {
    type: 'doughnut',
    data: {
      labels: ['Error', 'Warning', 'Info', 'Style'],
      datasets: [{
        data: [
          metrics.summary.issues_by_severity.error,
          metrics.summary.issues_by_severity.warning,
          metrics.summary.issues_by_severity.info,
          metrics.summary.issues_by_severity.style
        ],
        backgroundColor: ['#dc3545', '#ffc107', '#17a2b8', '#6c757d']
      }]
    }
  });

  // Trends over time
  new Chart(document.getElementById('trends-chart'), {
    type: 'line',
    data: {
      labels: metrics.daily_stats.map(d => d.date),
      datasets: [{
        label: 'Issues Found',
        data: metrics.daily_stats.map(d => d.issues),
        borderColor: '#dc3545'
      }, {
        label: 'Files Reviewed',
        data: metrics.daily_stats.map(d => d.files),
        borderColor: '#28a745'
      }]
    }
  });
}
```
Plugin Architecture
```python
import json
import re
import subprocess
from abc import ABC, abstractmethod

class ReviewPlugin(ABC):
    """Base class for custom review plugins."""

    @abstractmethod
    def name(self):
        """Plugin name."""

    @abstractmethod
    def check(self, file_path, content):
        """Perform the custom check."""

    def run_claude_check(self, prompt, context):
        """Helper to run a Claude Code check."""
        result = subprocess.run([
            'claudecode', prompt,
            '--context', context,
            '--format', 'json'
        ], capture_output=True, text=True)

        return json.loads(result.stdout)

class SecurityPlugin(ReviewPlugin):
    def name(self):
        return "security-scanner"

    def check(self, file_path, content):
        # Custom security checks
        issues = []

        # Check for hardcoded secrets
        secret_patterns = [
            r'api[_-]?key\s*=\s*["\'][\w\-]+["\']',
            r'password\s*=\s*["\'][^"\']+["\']',
            r'token\s*=\s*["\'][\w\-]+["\']'
        ]

        for pattern in secret_patterns:
            if re.search(pattern, content, re.IGNORECASE):
                issues.append({
                    'severity': 'error',
                    'message': 'Potential hardcoded secret detected',
                    'pattern': pattern
                })

        # Use Claude for advanced checks
        claude_result = self.run_claude_check(
            "Check for security vulnerabilities like SQL injection, XSS, etc.",
            content
        )

        issues.extend(claude_result.get('issues', []))
        return issues

class PerformancePlugin(ReviewPlugin):
    def name(self):
        return "performance-analyzer"

    def check(self, file_path, content):
        # Performance-specific checks
        return self.run_claude_check(
            "Analyze for performance issues: N+1 queries, unnecessary renders, memory leaks",
            content
        )

# Plugin manager
class PluginManager:
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def run_all(self, file_path, content):
        all_issues = []

        for plugin in self.plugins:
            try:
                issues = plugin.check(file_path, content)
                for issue in issues:
                    issue['plugin'] = plugin.name()
                all_issues.extend(issues)
            except Exception as e:
                # A failing plugin must not abort the whole review
                print(f"Plugin {plugin.name()} failed: {e}")

        return all_issues
```
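The secret-scanning regexes and the manager's error isolation can be exercised together in a compact, self-contained demo. The plugin names below are illustrative, and the Claude-backed checks are stubbed out so the sketch runs offline:

```python
import re

# Subset of the secret patterns used by the security plugin above
SECRET_PATTERNS = [
    r'api[_-]?key\s*=\s*["\'][\w\-]+["\']',
    r'password\s*=\s*["\'][^"\']+["\']',
]

class SecretScanPlugin:
    def name(self):
        return "secret-scan"

    def check(self, file_path, content):
        return [
            {"severity": "error", "message": "Potential hardcoded secret detected"}
            for pattern in SECRET_PATTERNS
            if re.search(pattern, content, re.IGNORECASE)
        ]

class BrokenPlugin:
    """Deliberately failing plugin to show error isolation."""
    def name(self):
        return "broken"

    def check(self, file_path, content):
        raise RuntimeError("plugin crashed")

class PluginManager:
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def run_all(self, file_path, content):
        all_issues = []
        for plugin in self.plugins:
            try:
                issues = plugin.check(file_path, content)
                for issue in issues:
                    issue["plugin"] = plugin.name()
                all_issues.extend(issues)
            except Exception as e:
                # One crashing plugin does not abort the review
                print(f"Plugin {plugin.name()} failed: {e}")
        return all_issues

manager = PluginManager()
manager.register(SecretScanPlugin())
manager.register(BrokenPlugin())

issues = manager.run_all("config.py", 'API_KEY = "sk-abc123"')
print(issues)
```

The crashing plugin is logged and skipped, while the secret scan still reports the hardcoded key, tagged with the plugin that found it.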
Pre-commit Hook

Save this as `.git/hooks/pre-commit` and make it executable:

```bash
#!/bin/bash
echo "Running Claude Code review..."

# Get staged files
STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM)

# Review each file
ISSUES_FOUND=false
for file in $STAGED_FILES; do
  if [[ $file =~ \.(js|ts|py|go)$ ]]; then
    echo "Reviewing $file..."

    # Run review
    REVIEW=$(claudecode review "$file" \
      --severity "error,warning" \
      --format json)

    # Check for issues
    if echo "$REVIEW" | jq -e '.issues | length > 0' > /dev/null; then
      ISSUES_FOUND=true
      echo "$REVIEW" | jq -r '.issues[] | "[\(.severity)] \(.file):\(.line) - \(.message)"'
    fi
  fi
done

if $ISSUES_FOUND; then
  echo ""
  echo "Issues found! Please fix before committing."
  echo "To bypass, use: git commit --no-verify"
  exit 1
fi

echo "Review passed!"
```
Remember: Automated reviews should enhance, not replace, human code review. Use Claude Code to catch common issues and enforce standards, allowing human reviewers to focus on architecture, business logic, and complex decisions.