Cursor’s AI capabilities extend far beyond interactive coding. This guide shows how to build sophisticated automation workflows that leverage AI for continuous development, testing, and maintenance.
{ "mcpServers": { "github": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"], "env": { "GITHUB_TOKEN": "your-github-token" } }, "gitlab": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-gitlab"], "env": { "GITLAB_TOKEN": "your-gitlab-token", "GITLAB_URL": "https://gitlab.com" } }, "jenkins": { "command": "npx", "args": ["-y", "jenkins-mcp-server"], "env": { "JENKINS_URL": "https://jenkins.company.com", "JENKINS_USER": "your-username", "JENKINS_TOKEN": "your-api-token" } }, "terraform": { "command": "npx", "args": ["-y", "terraform-mcp"], "env": { "TF_WORKSPACE": "production" } }, "kubernetes": { "command": "npx", "args": ["-y", "@kubernetes/mcp-server"], "env": { "KUBECONFIG": "~/.kube/config" } } }}
"Using GitHub MCP, automate our CI/CD:1. Get workflow run status for main branch2. Trigger deployment workflow if tests pass3. Create release with changelog4. Update deployment tracking issue"
```
// Advanced workflow orchestration
"Using GitHub MCP:
- Check if PR #123 checks have passed
- If yes, auto-merge with squash
- Trigger production deployment
- Monitor deployment status
- Rollback if health checks fail"
```
"Using Jenkins MCP:- Trigger 'build-and-test' job- Wait for completion- If successful, trigger 'deploy-staging'- Run integration tests- Promote to production if all pass"
```
// Pipeline monitoring
"Monitor Jenkins pipeline:
- Get current build status
- Show failed test details
- List blocking issues
- Suggest fixes based on logs"
```
"Using GitLab MCP:- Create merge request from feature branch- Run CI pipeline- Auto-assign reviewers based on CODEOWNERS- Schedule deployment to staging- Create deployment notes"
```
// Terraform + Kubernetes orchestration
"Coordinate infrastructure deployment:
1. Using Terraform MCP:
   - Plan infrastructure changes
   - Show me what will be created/modified
   - Apply changes if approved
2. Using Kubernetes MCP:
   - Deploy new application version
   - Monitor pod health
   - Scale based on load
   - Update ingress rules"
```
```
// Complete automation example
"Automate full deployment:
1. GitHub MCP: Get latest release tag
2. Jenkins MCP: Build Docker image
3. Terraform MCP: Update infrastructure
4. Kubernetes MCP: Deploy new version
5. Slack MCP: Notify team of deployment"
```
```
// Automated issue management
"Using Linear MCP and GitHub MCP together:
- When PR is merged to main
- Find related Linear issues
- Move issues to 'Done' status
- Add deployment date
- Link to production URL"
```
```
// Security automation
"Security scan automation:
1. GitHub MCP: Get changed files
2. Run security scan on changes
3. If vulnerabilities found:
   - Create GitHub issue
   - Assign to security team
   - Block deployment
   - Suggest fixes"
```
Configure Agent Environment
{ "backgroundAgent": { "enabled": true, "maxConcurrentTasks": 3, "taskTimeout": 300000, "retryAttempts": 2, "logLevel": "info", "webhookUrl": "${AGENT_WEBHOOK_URL}" }, "automations": { "codeGeneration": true, "testCreation": true, "bugFixing": true, "codeReview": true, "documentation": true }}
Create Test Generation Workflow
Cursor Agent Workflow for Test Generation:
```bash
#!/bin/bash
# Find files without tests.
# The \( ... \) grouping is required so both extensions are matched under src/,
# and the test-file name is derived for .ts and .js alike.
FILES_WITHOUT_TESTS=$(find src \( -name "*.ts" -o -name "*.js" \) | while read -r file; do
  test_file="${file%.*}.test.${file##*.}"
  [ ! -f "$test_file" ] && echo "$file"
done)

# Open each file in Cursor for test generation
for file in $FILES_WITHOUT_TESTS; do
  echo "Generating tests for: $file"
  cursor "$file"

  # Use this prompt in Agent mode (Ctrl+I):
  # "Generate comprehensive Jest tests for @Files using BDD style.
  #  Focus on testing all public functions and edge cases.
  #  Use @Code to understand the implementation details."
done
```
Schedule Automation
```yaml
name: Cursor Background Automation

on:
  schedule:
    - cron: '0 */4 * * *' # Every 4 hours
  push:
    branches: [main, develop]

jobs:
  automation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Cursor Agent
        env:
          CURSOR_API_KEY: ${{ secrets.CURSOR_API_KEY }}
        run: |
          cursor agent run \
            --config .cursor/agent-config.json \
            --tasks "test-generation,doc-update,code-review"
```
```typescript
// Intelligent task orchestrator
class TaskOrchestrator {
  private queue: TaskQueue;
  private agents: Map<string, BackgroundAgent>;

  async orchestrate(trigger: Trigger) {
    // Analyze trigger and determine tasks
    const tasks = await this.analyzeTrigger(trigger);

    // Prioritize tasks
    const prioritized = this.prioritizeTasks(tasks);

    // Distribute to agents
    for (const task of prioritized) {
      const agent = this.selectOptimalAgent(task);
      await this.queueTask(agent, task);
    }

    // Monitor execution
    return this.monitorExecution();
  }

  private selectOptimalAgent(task: Task): BackgroundAgent {
    // Consider agent capabilities, load, and specialization
    const agents = Array.from(this.agents.values());

    return agents.reduce((best, agent) => {
      const score = this.calculateAgentScore(agent, task);
      return score > this.calculateAgentScore(best, task) ? agent : best;
    });
  }
}
```
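The orchestrator leans on a `calculateAgentScore` helper that is not shown. A minimal sketch of what such a scoring function might look like; the `Task` and `BackgroundAgent` shapes and the weighting are illustrative assumptions, not Cursor's actual API:

```typescript
// Hypothetical shapes; the real BackgroundAgent/Task types may differ.
interface Task { type: string; priority: number }
interface BackgroundAgent { capabilities: string[]; load: number } // load in [0, 1]

// Score an agent for a task: capability match dominates, lighter load breaks ties.
function calculateAgentScore(agent: BackgroundAgent, task: Task): number {
  const capability = agent.capabilities.includes(task.type) ? 1 : 0;
  return capability * 10 + (1 - agent.load);
}
```

Any monotonic combination works; the point is that a capable-but-busy agent should still beat an idle agent that cannot handle the task type.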
#!/bin/bashecho "Running Cursor AI pre-commit checks..."
# 1. Code quality checkcursor lint --ai-enhanced --fix
# 2. Security scancursor security scan --staged
# 3. Generate missing testscursor agent run --task generate-tests --target staged
# 4. Update documentationcursor docs update --auto
# 5. Check for code smellscursor analyze --smells --complexity
if [ $? -ne 0 ]; then echo "Pre-commit checks failed. Please review and fix issues." exit 1fi
echo "Pre-commit checks passed!"
```bash
#!/bin/bash
# post-merge-automation.sh

# Get list of merged files
MERGED_FILES=$(git diff --name-only HEAD~1 HEAD)

# 1. Update documentation
echo "Updating documentation..."
cursor README.md docs/
# In Agent mode (Ctrl+I), use:
# "Update all documentation based on @Git changes in the recent merge"
# "Include architecture diagrams if structural changes were made"

# 2. Check for API changes and generate tests
if echo "$MERGED_FILES" | grep -q "api/"; then
  echo "API changes detected, generating integration tests..."
  cursor tests/integration/
  # Agent prompt: "Generate integration tests for the API endpoints modified in @Recent Changes"
fi

# 3. Update dependency documentation
cursor package.json docs/dependencies.md
# Agent prompt: "Update dependency graph documentation based on @Files package.json changes"

# 4. Generate summary and notify
echo "Post-merge automation complete"
# Use Agent to create summary: "Summarize the impact of @Recent Changes for the team"
```
```yaml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Cursor AI Review
        id: review
        run: |
          cursor review \
            --base ${{ github.base_ref }} \
            --head ${{ github.head_ref }} \
            --output review.md

      - name: Post Review Comment
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const review = fs.readFileSync('review.md', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: review
            });
```
```typescript
class ContinuousTestGenerator {
  async generateTestsForChanges(changes: FileChange[]) {
    const testPlan = await this.createTestPlan(changes);

    for (const item of testPlan) {
      switch (item.type) {
        case 'unit':
          await this.generateUnitTests(item);
          break;

        case 'integration':
          await this.generateIntegrationTests(item);
          break;

        case 'e2e':
          await this.generateE2ETests(item);
          break;
      }
    }

    // Run generated tests
    const results = await this.runGeneratedTests();

    // Refine based on results
    if (results.failures.length > 0) {
      await this.refineFailingTests(results.failures);
    }
  }
}
```
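`createTestPlan` is left abstract above. One way it might map file changes to test types; the path conventions here are assumptions, so adjust them to your repository layout:

```typescript
interface FileChange { path: string }
interface TestPlanItem { type: 'unit' | 'integration' | 'e2e'; target: string }

// Route each changed file to the kind of test it most likely needs:
// API handlers get integration tests, pages get e2e coverage,
// and everything else defaults to unit tests.
function createTestPlan(changes: FileChange[]): TestPlanItem[] {
  return changes.map((change) => {
    if (change.path.includes('/api/')) return { type: 'integration', target: change.path };
    if (change.path.includes('/pages/')) return { type: 'e2e', target: change.path };
    return { type: 'unit', target: change.path };
  });
}
```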
```bash
# Generate unit tests using Cursor Agent
generateUnitTests() {
  local TARGET_FILE=$1

  # Open the file in Cursor
  cursor "$TARGET_FILE"

  # Use Agent mode (Ctrl+I) with this comprehensive prompt:
  cat << 'EOF'
Generate comprehensive Jest unit tests for @Files following AAA pattern:
- Arrange-Act-Assert structure
- Target 90% statement coverage
- Include edge cases and error scenarios
- Mock external dependencies appropriately
- Use descriptive test names
- Group related tests in describe blocks

Reference @Code to understand all functions and their parameters.
Check @Definitions for type information.
EOF
}
```
```typescript
export class ScheduledRefactoring {
  async performWeeklyMaintenance() {
    const tasks = [
      this.removeDeadCode(),
      this.updateDeprecatedPatterns(),
      this.optimizeImports(),
      this.consolidateDuplicates(),
      this.improveNaming()
    ];

    const results = await Promise.allSettled(tasks);

    // Create summary PR
    if (this.hasChanges(results)) {
      await this.createMaintenancePR(results);
    }
  }
}
```
```bash
# Remove dead code using Cursor
removeDeadCode() {
  echo "Analyzing codebase for dead code..."

  # Use Cursor Agent to find dead code
  cursor .

  # Agent prompts for dead code detection:
  # "Find unused functions in @Codebase"
  # "Identify unreachable code in @Files"
  # "Look for unused imports across the project"

  # For each identified dead code:
  # 1. Review with: "Verify this code is truly unused by checking @Code references"
  # 2. Remove with: "Remove this unused code and update any related tests"
}

# Update deprecated patterns
updateDeprecatedPatterns() {
  # Create patterns file
  cat > .cursor/deprecated-patterns.md << 'EOF'
# Deprecated Patterns to Update

1. Old: `findOne()` → New: `findFirst()`
2. Old: `callback pattern` → New: `async/await`
3. Old: `require()` → New: `import`
EOF

  # Open in Cursor for pattern updates
  cursor .cursor/deprecated-patterns.md

  # Use Agent with prompts like:
  # "Find all occurrences of findOne() in @Codebase and suggest updates to findFirst()"
  # "Update callback patterns to async/await in @Files"
  # "Modernize require statements to ES6 imports across the project"
}
```
```typescript
class AutoDocumentation {
  async generateComprehensiveDocs() {
    // API Documentation
    await this.generateApiDocs();

    // Architecture diagrams
    await this.generateArchitectureDiagrams();

    // Setup guides
    await this.updateSetupGuides();

    // Code examples
    await this.generateCodeExamples();
  }
}
```
```bash
# Generate API documentation using Cursor
generateApiDocs() {
  # Find all API endpoints.
  # The \( ... \) grouping keeps the -path filter applied to both extensions.
  API_FILES=$(find . -path "*/api/*" \( -name "*.ts" -o -name "*.js" \))

  for endpoint_file in $API_FILES; do
    cursor "$endpoint_file"

    # Use Agent mode (Ctrl+I) with these prompts:
    # "Generate OpenAPI documentation for the endpoints in @Files"
    # "Include request/response examples and error codes"
    # "Add authentication requirements and rate limiting info"
    # "Create a Postman collection from these endpoints"
  done
}

# Generate architecture diagrams
generateArchitectureDiagrams() {
  # Create template for architecture analysis
  cat > architecture-prompt.md << 'EOF'
Analyze the codebase architecture and create:
1. System overview diagram showing main components
2. Data flow diagram showing how data moves through the system
3. Deployment diagram showing infrastructure setup
4. Sequence diagrams for key user flows

Use @Codebase to understand the project structure
Use @Files in src/ to analyze component relationships
Reference @Docs for existing architecture notes
EOF

  cursor architecture-prompt.md docs/architecture/

  # Save the generated diagrams under docs/architecture/
}
```
```typescript
export class WebhookAutomation {
  async handleWebhook(event: WebhookEvent) {
    switch (event.type) {
      case 'issue.created':
        await this.handleNewIssue(event);
        break;

      case 'deployment.failed':
        await this.handleFailedDeployment(event);
        break;

      case 'security.vulnerability':
        await this.handleSecurityAlert(event);
        break;

      case 'performance.degradation':
        await this.handlePerformanceIssue(event);
        break;
    }
  }
}
```
```bash
# Handle new GitHub issues with Cursor
handleNewIssue() {
  local ISSUE_NUMBER=$1
  local ISSUE_TITLE=$2
  local ISSUE_BODY=$3

  # Create issue analysis file
  cat > "issue-${ISSUE_NUMBER}.md" << EOF
# Issue #${ISSUE_NUMBER}: ${ISSUE_TITLE}

## Description
${ISSUE_BODY}

## Analysis Tasks
1. Identify the root cause
2. Determine if it can be auto-fixed
3. Suggest implementation approach
EOF

  # Open in Cursor for analysis
  cursor "issue-${ISSUE_NUMBER}.md"

  # Use Agent mode (Ctrl+I) with prompts:
  # "Analyze this issue and determine if it can be fixed automatically"
  # "Search @Codebase for related code that might be causing this issue"
  # "Check @Recent Changes that might have introduced this problem"

  # If fix is possible, create branch and implement
  echo "Creating fix branch..."
  git checkout -b "fix/issue-${ISSUE_NUMBER}"

  # Use Agent to implement fix:
  # "Implement a fix for the issue described based on your analysis"
  # "Add tests to prevent this issue from recurring"
  # "Update documentation if needed"
}
```
```typescript
class MonitoringAutomation {
  private monitors = new Map<string, Monitor>();

  async initialize() {
    // Code quality monitor
    this.monitors.set('quality', new QualityMonitor({
      threshold: { complexity: 10, duplication: 5 },
      action: this.handleQualityIssue.bind(this)
    }));

    // Performance monitor
    this.monitors.set('performance', new PerformanceMonitor({
      threshold: { responseTime: 200, cpu: 80 },
      action: this.handlePerformanceIssue.bind(this)
    }));

    // Security monitor
    this.monitors.set('security', new SecurityMonitor({
      scanInterval: '1h',
      action: this.handleSecurityIssue.bind(this)
    }));
  }
}
```
```bash
# Handle code quality issues with Cursor
handleQualityIssue() {
  local ISSUE_FILE=$1
  local ISSUE_TYPE=$2

  # Open the problematic file in Cursor
  cursor "$ISSUE_FILE"

  # Use Agent mode (Ctrl+I) based on issue type:
  case $ISSUE_TYPE in
    "complexity")
      # "Refactor @Files to reduce complexity. Break down large functions."
      ;;
    "duplication")
      # "Find and eliminate duplicate code in @Files. Extract common logic."
      ;;
    "performance")
      # "Optimize performance bottlenecks in @Files. Focus on loops and queries."
      ;;
  esac
}
```
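The `QualityMonitor` referenced in the setup above is an assumed helper, not a published class. A minimal sketch of how such a monitor could evaluate measured metrics against the configured thresholds and fire its `action` callback per violation:

```typescript
// Hypothetical sketch; field names mirror the config shown above.
interface QualityThresholds { complexity: number; duplication: number }
interface QualityMetrics { complexity: number; duplication: number }

class QualityMonitor {
  constructor(
    private options: {
      threshold: QualityThresholds;
      action: (issue: string) => void;
    }
  ) {}

  // Compare measured metrics to thresholds and report each violation.
  check(file: string, metrics: QualityMetrics): string[] {
    const violations: string[] = [];
    if (metrics.complexity > this.options.threshold.complexity) {
      violations.push('complexity');
    }
    if (metrics.duplication > this.options.threshold.duplication) {
      violations.push('duplication');
    }
    for (const v of violations) this.options.action(`${file}: ${v}`);
    return violations;
  }
}
```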
```bash
#!/bin/bash
# scaffold-microservice.sh

# Scaffold microservice with Cursor
scaffoldMicroservice() {
  SERVICE_NAME=$1
  LANGUAGE=$2
  FRAMEWORK=$3

  # Create service structure
  mkdir -p "services/$SERVICE_NAME"/{src,tests,docs,config}
  cd "services/$SERVICE_NAME"

  # Create scaffolding instructions (ensure the .cursor directory exists first)
  mkdir -p .cursor
  cat > .cursor/scaffold-instructions.md << EOF
# Scaffold $SERVICE_NAME Microservice

## Configuration
- Language: $LANGUAGE
- Framework: $FRAMEWORK
- Service: $SERVICE_NAME

## Generate these files:

### 1. Project Structure
Create standard microservice structure with:
- src/ - Source code
- tests/ - Test files
- docs/ - Documentation
- config/ - Configuration files

### 2. Core Files
- Package/dependency file (package.json, requirements.txt, etc.)
- Main application entry point
- Configuration management
- Health check endpoint
- Example CRUD endpoints

### 3. Testing Setup
- Test framework configuration
- Example unit tests
- Example integration tests

### 4. CI/CD
- Dockerfile
- docker-compose.yml
- GitHub Actions workflow

### 5. Documentation
- README.md with setup instructions
- API documentation template
- Architecture decision records
EOF

  # Open in Cursor for scaffolding
  cursor .cursor/scaffold-instructions.md

  # Use Agent mode (Ctrl+I) with prompts:
  # "Create the complete microservice structure as described"
  # "Use @Docs to follow our organization's microservice patterns"
  # "Reference @Cursor Rules for coding standards and best practices"
  # "Include proper error handling and logging throughout"
}
```
```bash
#!/bin/bash
# Perform batch updates across files
performBatchUpdate() {
  local PATTERN=$1
  local TRANSFORMATION=$2

  # Find all matching files
  FILES=$(find . -name "$PATTERN" -type f)

  # Create transformation instructions
  cat > batch-transform.md << EOF
# Batch Transformation Instructions

## Pattern: $PATTERN
## Transformation: $TRANSFORMATION

### For each file:
1. Apply the transformation consistently
2. Preserve existing logic and behavior
3. Update related tests if they exist
4. Maintain code style and formatting

### Validation:
- Ensure no syntax errors
- Verify logic is preserved
- Check that tests still pass
EOF

  # Process files in batches using Cursor
  echo "$FILES" | xargs -n 10 | while read -r batch; do
    echo "Processing batch: $batch"

    # Create checkpoint by committing current state
    git add -A && git commit -m "Checkpoint before batch transformation"

    # Open files in Cursor
    cursor $batch batch-transform.md

    # Use Agent mode (Ctrl+I) with prompts:
    # "Apply the transformation described to all @Files"
    # "Ensure consistency across all files"
    # "Update any related test files"

    # Validate after transformation
    echo "Validating transformations..."
    npm test || yarn test || make test

    if [ $? -ne 0 ]; then
      echo "Validation failed, rolling back..."
      git reset --hard HEAD~1
    fi
  done
}

# Example transformations
# performBatchUpdate "*.js" "Convert CommonJS to ES6 modules"
# performBatchUpdate "*.test.ts" "Update to new testing framework syntax"
# performBatchUpdate "*Controller.ts" "Add proper error handling to all endpoints"
```
```
// Using GitHub MCP for advanced automation
"Implement smart PR automation:
1. Monitor all open PRs
2. For each PR:
   - Check if reviews are approved
   - Verify CI/CD status
   - Check for merge conflicts
   - Auto-merge if all conditions met
   - Update related issues"
```
```
// Automated release management
"Create automated release process:
1. Using GitHub MCP:
   - Generate changelog from commits
   - Create release draft
   - Upload build artifacts
   - Tag release version
   - Trigger deployment workflow"
```
"Using Kubernetes MCP for deployment automation:1. Check cluster health2. Perform rolling update: - Update deployment image - Monitor rollout status - Check pod health - Verify service endpoints3. If issues detected: - Pause rollout - Collect logs - Rollback if necessary - Alert team via Slack MCP"
```typescript
// Complex automation workflow
class MCPOrchestrator {
  async deployWithFullAutomation(version: string) {
    // Phase 1: Build and Test
    await this.runWithMCP('GitHub MCP: Create release branch');
    await this.runWithMCP('Jenkins MCP: Run full test suite');

    // Phase 2: Infrastructure
    await this.runWithMCP('Terraform MCP: Update infrastructure');
    await this.runWithMCP('Kubernetes MCP: Prepare cluster');

    // Phase 3: Deploy
    await this.runWithMCP('Deploy application version');
    await this.runWithMCP('Run health checks');

    // Phase 4: Monitor
    await this.runWithMCP('Sentry MCP: Check for errors');
    await this.runWithMCP('Grafana MCP: Monitor metrics');

    // Phase 5: Notify
    await this.runWithMCP('Slack MCP: Send deployment summary');
    await this.runWithMCP('Linear MCP: Update task status');
  }
}
```
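`runWithMCP` is left undefined in the orchestrator above. A sketch of how it might wrap each MCP step with retries and a clear failure signal; `invokeMCP` is a placeholder for whatever MCP client your setup actually exposes:

```typescript
// Hypothetical executor: retry an MCP instruction a few times before failing.
// Nothing here is a real Cursor or MCP API; the client is injected.
async function runWithMCP(
  instruction: string,
  invokeMCP: (instruction: string) => Promise<string>,
  retries = 2
): Promise<string> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await invokeMCP(instruction);
    } catch (error) {
      lastError = error;
      console.warn(`MCP step failed (attempt ${attempt + 1}): ${instruction}`);
    }
  }
  throw new Error(`MCP step exhausted retries: ${instruction} (${lastError})`);
}
```

Centralizing retries here keeps the phase-by-phase deployment code free of error-handling noise.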
```bash
#!/bin/bash
# Sync Jira tickets with development workflow
syncWithJira() {
  # Get current sprint tickets using Jira CLI
  TICKETS=$(jira list --query "sprint = activeSprint()" --json)

  # Process each ticket
  echo "$TICKETS" | jq -r '.[] | @base64' | while read -r ticket_data; do
    # Decode ticket data
    TICKET=$(echo "$ticket_data" | base64 -d)
    TICKET_KEY=$(echo "$TICKET" | jq -r '.key')
    TICKET_TYPE=$(echo "$TICKET" | jq -r '.fields.issuetype.name')
    TICKET_SUMMARY=$(echo "$TICKET" | jq -r '.fields.summary')

    # Create feature branch if needed
    if ! git branch -r | grep -q "feature/$TICKET_KEY"; then
      git checkout -b "feature/$TICKET_KEY"
    fi

    # Generate boilerplate for stories
    if [ "$TICKET_TYPE" = "Story" ]; then
      generateStoryBoilerplate "$TICKET_KEY" "$TICKET"
    fi
  done
}

# Generate story implementation with Cursor
generateStoryBoilerplate() {
  local TICKET_KEY=$1
  local TICKET_JSON=$2

  # Extract ticket details
  DESCRIPTION=$(echo "$TICKET_JSON" | jq -r '.fields.description')
  ACCEPTANCE_CRITERIA=$(echo "$TICKET_JSON" | jq -r '.fields.customfield_10100')

  # Create requirements file (ensure the directory exists first)
  mkdir -p .cursor/tickets
  cat > ".cursor/tickets/${TICKET_KEY}-requirements.md" << EOF
# $TICKET_KEY Implementation

## Description
$DESCRIPTION

## Acceptance Criteria
$ACCEPTANCE_CRITERIA

## Implementation Tasks
1. Analyze requirements and create technical design
2. Implement core functionality with tests
3. Add integration tests
4. Update documentation
5. Create PR with detailed description
EOF

  # Open in Cursor for implementation
  cursor ".cursor/tickets/${TICKET_KEY}-requirements.md" src/

  # Use Agent mode (Ctrl+I) with prompts:
  # "Analyze the requirements in this ticket and suggest implementation approach"
  # "Create the necessary files and folder structure for this feature"
  # "Implement the feature following TDD - write tests first"
  # "Reference @Docs for our coding patterns and @Cursor Rules for standards"

  # Update documentation
  cursor docs/features/
  # "Update the feature documentation to include $TICKET_KEY implementation"
}

# Example usage:
# ./jira-automation.sh
# This will sync all active sprint tickets and help implement them
```
```typescript
import * as os from 'os';
import { Worker } from 'worker_threads';

class ParallelAutomation {
  private workers: Worker[] = [];

  async executeParallel(tasks: AutomationTask[]) {
    // Initialize worker pool, one worker per CPU core
    const numWorkers = os.cpus().length;
    for (let i = 0; i < numWorkers; i++) {
      this.workers.push(new Worker('./automation-worker.js'));
    }

    // Distribute tasks
    const taskQueue = [...tasks];
    const results = [];

    // Each worker pulls from the shared queue until it is drained
    await Promise.all(
      this.workers.map(async (worker) => {
        while (taskQueue.length > 0) {
          const task = taskQueue.shift();
          const result = await this.executeTask(worker, task);
          results.push(result);
        }
      })
    );

    return results;
  }
}
```
Start Small
Begin with simple automations and gradually increase complexity
Monitor Everything
Track automation success rates and performance impact
Fail Gracefully
Always include rollback mechanisms and error handling
Human Oversight
Maintain human review for critical automations
Chain MCPs Intelligently
Combine multiple MCP servers for complex workflows
Handle MCP Failures
Always have fallback plans when MCP servers are unavailable
Secure MCP Credentials
Use environment variables and secret management for MCP tokens
Log MCP Operations
Track all MCP operations for audit and debugging
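The "Handle MCP Failures" practice can be made concrete with a small wrapper: try the MCP path first and fall back to a direct CLI or API call when the server is unavailable. Both callbacks below are placeholders for your real primary and fallback implementations:

```typescript
// Generic fallback sketch: run the primary (MCP-backed) operation,
// and on any failure log it and run the non-MCP fallback instead.
async function withFallback<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>
): Promise<T> {
  try {
    return await primary();
  } catch (error) {
    console.warn(`MCP unavailable, using fallback: ${error}`);
    return fallback();
  }
}
```

A deployment step might then read `withFallback(() => deployViaMCP(), () => deployViaCLI())`, where both functions are hypothetical names for your own integrations.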
```
// Single command orchestrates everything
"Deploy version 2.1.0 to production:
- GitHub MCP: Create release
- Jenkins MCP: Build and test
- Terraform MCP: Update infra
- K8s MCP: Deploy application
- Slack MCP: Notify team"

// Time: 2 minutes setup
// Maintenance: Minimal
// Flexibility: High
```
```typescript
// Custom scripts for each tool
async function deploy(version) {
  // GitHub API integration
  const github = new GitHubAPI(token);
  await github.createRelease(...);

  // Jenkins API integration
  const jenkins = new JenkinsAPI(...);
  await jenkins.triggerBuild(...);

  // Terraform CLI wrapper
  await exec('terraform apply...');

  // Kubernetes client
  const k8s = new K8sClient(...);
  await k8s.updateDeployment(...);

  // Slack webhook
  await sendSlackMessage(...);
}

// Time: Days of development
// Maintenance: High
// Flexibility: Limited
```
Define Clear Objectives
Implement Gradually
Monitor and Iterate
Document Everything
Remember: The goal of automation is not to replace developers but to amplify their capabilities. Focus on automating the mundane so humans can focus on the creative.