Master deployment automation with Claude Code. From generating deployment configurations to orchestrating complex release processes, this guide covers practical patterns for modern deployment workflows.

This guide is organized into three areas:

- Configuration Generation: Infrastructure as Code, container orchestration, CI/CD pipelines, and environment configs
- Release Management
- Monitoring Setup
Generate base infrastructure
claude "Create Terraform configuration for:- AWS VPC with public/private subnets- EKS cluster with 3 node groups- RDS PostgreSQL with read replicas- ElastiCache Redis cluster- Application Load BalancerFollow AWS Well-Architected Framework" \--output infrastructure/terraform/
Create environment variations
```bash
# Development environment
claude "Create Terraform tfvars for development:
- Smaller instance sizes
- Single AZ deployment
- Minimal redundancy
- Cost optimization focus" \
  --output infrastructure/terraform/environments/dev.tfvars

# Production environment
claude "Create Terraform tfvars for production:
- High availability across 3 AZs
- Auto-scaling configurations
- Enhanced monitoring
- Backup strategies" \
  --output infrastructure/terraform/environments/prod.tfvars
```
Generate modules
claude "Create reusable Terraform modules for:- Standard web application setup- Database provisioning with backups- Container cluster with monitoring- Networking with security groupsInclude examples and documentation" \--output infrastructure/terraform/modules/
claude "Generate CloudFormation template for:- Elastic Beanstalk application- Auto-scaling configuration- CloudFront distribution- S3 buckets for static assets- Route53 DNS configurationInclude parameters for environment customization" \--output cloudformation/app-stack.yaml
claude "Create CloudFormation for serverless app:- Lambda functions with layers- API Gateway with custom domain- DynamoDB tables with streams- Step Functions for workflows- EventBridge for event routing" \--output cloudformation/serverless-stack.yaml
claude "Generate CloudFormation for containers:- ECS Fargate cluster- Task definitions with secrets- Application Load Balancer- Service auto-scaling- CloudWatch Container Insights" \--output cloudformation/container-stack.yaml
Complete K8s Application
```bash
# Generate comprehensive Kubernetes manifests
claude "Create Kubernetes manifests for microservices app:

Services:
- API Gateway (nginx ingress)
- Auth Service (Node.js)
- User Service (Python)
- Payment Service (Go)
- Frontend (React)

Include:
- Deployments with resource limits
- Services for internal communication
- ConfigMaps for configuration
- Secrets for sensitive data
- Ingress rules with TLS
- HorizontalPodAutoscaler
- PodDisruptionBudgets
- NetworkPolicies for security
- ServiceMonitor for Prometheus

Use best practices for production" \
  --output k8s/manifests/
```
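One of the generated Deployments might look roughly like the sketch below. The service name, image, port, health endpoint, and resource figures are placeholders chosen for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service              # hypothetical service from the list above
  labels:
    app: auth-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
        - name: auth-service
          image: registry.example.com/auth-service:1.0.0   # placeholder image
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          readinessProbe:
            httpGet:
              path: /health       # assumed health endpoint
              port: 3000
```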
Initialize Helm chart
claude "Create Helm chart for the microservices app:- Chart.yaml with proper metadata- Flexible values.yaml with environments- Templates for all resources- Helpers for common patterns- NOTES.txt with usage instructions- README.md with examples" \--output helm/microservices-app/
Environment-specific values
```bash
# Development values
claude "Create Helm values for development:
- Single replicas
- Minimal resources
- Local storage
- Debug logging
- No TLS" \
  --output helm/microservices-app/values-dev.yaml

# Staging values
claude "Create Helm values for staging:
- 2 replicas
- Moderate resources
- Persistent storage
- Info logging
- Let's Encrypt TLS" \
  --output helm/microservices-app/values-staging.yaml

# Production values
claude "Create Helm values for production:
- 3+ replicas with pod anti-affinity
- Production-grade resources
- Multi-AZ storage
- Structured logging
- Commercial TLS certificates" \
  --output helm/microservices-app/values-prod.yaml
```
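As a rough sketch, the production values might shape up like the excerpt below; the replica count, resource figures, and the `app: myapp` label are assumptions, not output from the command.

```yaml
# Hypothetical excerpt of values-prod.yaml
replicaCount: 3

resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: topology.kubernetes.io/zone
          labelSelector:
            matchLabels:
              app: myapp          # assumed app label
```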
Base Configuration
claude "Create Kustomize base configuration:- Base manifests for all resources- Common labels and annotations- Shared ConfigMaps- Cross-cutting policies" \--output k8s/kustomize/base/
Environment Overlays
claude "Create Kustomize overlays for:- Development (local)- Staging (AWS)- Production (AWS)- DR site (Azure)Each with specific patches and transforms" \--output k8s/kustomize/overlays/
Complete CI/CD Pipeline
A GitHub Actions workflow for a tag-driven production deploy:

```yaml
# Generated by Claude Code
name: Deploy to Production

on:
  push:
    tags:
      - 'v*'

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: myapp
  EKS_CLUSTER: production-cluster

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Tests
        run: |
          npm ci
          npm test
          npm run test:integration

      - name: Security Scan
        run: |
          npm audit
          docker run --rm -v "$PWD":/src \
            aquasec/trivy fs --severity HIGH,CRITICAL /src

  build:
    needs: test
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.version }}
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Extract image metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS
        # Credentials are needed again in this job for update-kubeconfig
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Configure kubectl
        run: |
          aws eks update-kubeconfig \
            --region ${{ env.AWS_REGION }} \
            --name ${{ env.EKS_CLUSTER }}

      - name: Deploy with Helm
        run: |
          helm upgrade --install myapp ./helm/myapp \
            --namespace production \
            --create-namespace \
            --values helm/myapp/values-prod.yaml \
            --set image.tag=${{ needs.build.outputs.image-tag }} \
            --wait \
            --timeout 10m

      - name: Verify Deployment
        run: |
          kubectl rollout status deployment/myapp -n production
          kubectl get pods -n production

      - name: Run Smoke Tests
        run: |
          npm run test:smoke -- --env=production
```
The equivalent GitLab CI pipeline:

```yaml
stages:
  - test
  - build
  - plan
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test
    - npm run lint
  coverage: '/Coverage: \d+\.\d+%/'

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

deploy-staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl config use-context $K8S_CONTEXT_STAGING
    - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -n staging
    - kubectl rollout status deployment/myapp -n staging
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop

deploy-production:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl config use-context $K8S_CONTEXT_PRODUCTION
    - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -n production
    - kubectl rollout status deployment/myapp -n production
  environment:
    name: production
    url: https://example.com
  only:
    - tags
  when: manual

# Infrastructure deployment
terraform-plan:
  stage: plan
  image: hashicorp/terraform:latest
  script:
    - cd infrastructure/terraform
    - terraform init
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - infrastructure/terraform/tfplan

terraform-apply:
  stage: deploy
  image: hashicorp/terraform:latest
  script:
    - cd infrastructure/terraform
    - terraform init
    - terraform apply tfplan
  dependencies:
    - terraform-plan
  when: manual
  only:
    - main
```
Generate blue-green scripts
claude "Create blue-green deployment scripts:- Health check verification- Traffic switching logic- Rollback procedures- Database migration handlingFor both Kubernetes and AWS ECS" \--output deployment/blue-green/
Traffic management
claude "Generate traffic switching configuration:- AWS ALB target group switching- Kubernetes service selector updates- Istio VirtualService for canary- CloudFlare load balancer rules" \--output deployment/traffic-management/
Progressive Rollout
```bash
# Generate Flagger configuration for canary deployments
claude "Create Flagger canary deployment config:
- Progressive traffic shifting (10%, 25%, 50%, 100%)
- Automated rollback on failures
- Custom metrics for business KPIs
- Slack notifications
- Load testing during canary
Include for both Istio and AWS App Mesh" \
  --output deployment/canary/flagger-config.yaml
```
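The analysis section of a Flagger Canary resource for that 10/25/50 percent progression might look roughly like this sketch; the workload name, namespace, ports, and thresholds are assumptions for illustration.

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp                     # hypothetical workload
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 80
    targetPort: 8080
  analysis:
    interval: 1m
    threshold: 5                  # roll back after 5 failed checks
    stepWeights: [10, 25, 50]     # progressive traffic shifting before full promotion
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500
        interval: 1m
```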
```bash
# Generate custom metrics
claude "Create Prometheus queries for canary analysis:
- Request success rate
- P95 latency
- Error rate by status code
- Business metrics (orders, signups)
Format for Flagger MetricTemplate" \
  --output deployment/canary/metrics.yaml
```
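A success-rate MetricTemplate produced by a prompt like this could resemble the following sketch, assuming an Istio mesh and a Prometheus instance at the address shown (both are assumptions):

```yaml
# Hypothetical Flagger MetricTemplate for request success rate
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: request-success-rate
  namespace: istio-system
spec:
  provider:
    type: prometheus
    address: http://prometheus.monitoring:9090   # assumed Prometheus address
  query: |
    sum(rate(istio_requests_total{
      destination_workload_namespace="{{ namespace }}",
      destination_workload="{{ target }}",
      response_code!~"5.*"
    }[{{ interval }}]))
    /
    sum(rate(istio_requests_total{
      destination_workload_namespace="{{ namespace }}",
      destination_workload="{{ target }}"
    }[{{ interval }}]))
```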
Zero-Downtime Migrations
claude "Create zero-downtime migration plan:- Backward compatible schema changes- Data migration scripts- Rollback procedures- Verification queriesFor PostgreSQL with millions of records" \--output migrations/zero-downtime/
Multi-Version Support
claude "Generate migration strategy for:- Supporting old and new schema- Feature flags for gradual rollout- Data sync between versions- Cleanup proceduresInclude example code" \--output migrations/multi-version/
Create monitoring stack
claude "Generate monitoring configuration:- Prometheus scrape configs- Grafana dashboards for deployments- Alert rules for failures- Custom metrics collection- Log aggregation queries" \--output monitoring/deployment/
Automated rollback
claude "Create automated rollback system:- Health check definitions- Failure detection logic- Rollback triggers- State preservation- Notification systemSupport Kubernetes, ECS, and Lambda" \--output deployment/rollback/
Comprehensive Health Checks
```python
# Generated health check system
import asyncio
import json
from datetime import datetime
from typing import Dict, List

import aiohttp


class HealthChecker:
    def __init__(self, config_file: str):
        with open(config_file) as f:
            self.config = json.load(f)
        self.results = {}

    async def check_endpoint(self, endpoint: Dict) -> Dict:
        """Check individual endpoint health"""
        try:
            async with aiohttp.ClientSession() as session:
                async with session.get(
                    endpoint['url'],
                    timeout=aiohttp.ClientTimeout(total=endpoint.get('timeout', 30))
                ) as response:
                    # Basic health
                    health = {
                        'url': endpoint['url'],
                        'status_code': response.status,
                        'response_time': response.headers.get('X-Response-Time'),
                        'healthy': response.status == endpoint.get('expected_status', 200)
                    }

                    # Custom checks
                    if 'expected_response' in endpoint:
                        body = await response.json()
                        health['matches_expected'] = (
                            body == endpoint['expected_response']
                        )

                    return health

        except Exception as e:
            return {
                'url': endpoint['url'],
                'healthy': False,
                'error': str(e)
            }

    async def check_all(self) -> Dict:
        """Run all health checks in parallel"""
        tasks = [
            self.check_endpoint(endpoint)
            for endpoint in self.config['endpoints']
        ]

        results = await asyncio.gather(*tasks)

        # Aggregate results
        return {
            'timestamp': datetime.now().isoformat(),
            'overall_health': all(r['healthy'] for r in results),
            'endpoints': results,
            'summary': {
                'total': len(results),
                'healthy': sum(1 for r in results if r['healthy']),
                'unhealthy': sum(1 for r in results if not r['healthy'])
            }
        }

    async def continuous_monitoring(self, interval: int = 60):
        """Continuously monitor health"""
        while True:
            results = await self.check_all()

            # Store results
            self.results[results['timestamp']] = results

            # Alert if unhealthy
            if not results['overall_health']:
                await self.send_alert(results)

            await asyncio.sleep(interval)

    async def send_alert(self, results: Dict):
        """Send alerts for failures"""
        # Implement Slack, PagerDuty, email alerts
        pass


# Configuration file
health_config = {
    "endpoints": [
        {
            "name": "API Gateway",
            "url": "https://api.example.com/health",
            "expected_status": 200,
            "timeout": 30
        },
        {
            "name": "Auth Service",
            "url": "https://auth.example.com/health",
            "expected_status": 200,
            "expected_response": {"status": "healthy"}
        },
        {
            "name": "Database",
            "url": "https://api.example.com/health/db",
            "expected_status": 200,
            "critical": True
        }
    ],
    "alerts": {
        "slack_webhook": "https://hooks.slack.com/...",
        "pagerduty_key": "..."
    }
}
```
```bash
# Generate environment configs
for env in dev staging prod; do
  claude "Create ConfigMap for $env environment:
  - API endpoints
  - Feature flags
  - Cache settings
  - Log levels
  - Third-party service URLs
  Use template variables for secrets" \
    --output k8s/configs/configmap-$env.yaml
done
```
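One of these generated files might resemble the sketch below; the ConfigMap name, keys, and values are illustrative placeholders for the staging environment.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config              # hypothetical name
  namespace: staging
data:
  API_BASE_URL: "https://api.staging.example.com"
  LOG_LEVEL: "info"
  CACHE_TTL_SECONDS: "300"
  FEATURE_NEW_CHECKOUT: "false"
```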
claude "Generate AWS Systems Manager parameters:- Hierarchical structure (/app/env/component/key)- SecureString for sensitive values- Standard tier for configs- CloudFormation template- Terraform configurationInclude access policies" \--output infrastructure/parameter-store/
claude "Create environment file templates:- .env.example with all variables- .env.development with local values- .env.test for testing- Documentation for each variable- Validation scriptNever include real secrets" \--output environments/
Master Deployment Script
```bash
#!/bin/bash
# deploy.sh - Generated by Claude Code

set -euo pipefail

# Configuration
ENVIRONMENT="${1:-staging}"
VERSION="${2:-latest}"
DRY_RUN="${DRY_RUN:-false}"
IMAGE="${IMAGE:?Set IMAGE to the container image repository}"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log() {
    echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}

error() {
    echo -e "${RED}[ERROR]${NC} $1" >&2
    exit 1
}

warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

# Pre-deployment checks
pre_deploy_checks() {
    log "Running pre-deployment checks..."

    # Check cluster connectivity
    if ! kubectl cluster-info &>/dev/null; then
        error "Cannot connect to Kubernetes cluster"
    fi

    # Verify namespace exists
    if ! kubectl get namespace "$ENVIRONMENT" &>/dev/null; then
        warn "Namespace $ENVIRONMENT does not exist, creating..."
        kubectl create namespace "$ENVIRONMENT"
    fi

    # Check image exists
    if ! docker manifest inspect "$IMAGE:$VERSION" &>/dev/null; then
        error "Docker image $IMAGE:$VERSION not found"
    fi

    # Run security scan
    log "Running security scan..."
    trivy image "$IMAGE:$VERSION" --severity HIGH,CRITICAL

    log "Pre-deployment checks passed ✓"
}

# Deploy application
deploy() {
    log "Deploying version $VERSION to $ENVIRONMENT..."

    if [[ "$DRY_RUN" == "true" ]]; then
        log "DRY RUN - would execute:"
        echo "helm upgrade --install myapp ./helm/myapp \\"
        echo "  --namespace $ENVIRONMENT \\"
        echo "  --values helm/myapp/values-$ENVIRONMENT.yaml \\"
        echo "  --set image.tag=$VERSION"
        return 0
    fi

    # Backup current state
    kubectl get all -n "$ENVIRONMENT" -o yaml > "backup-$ENVIRONMENT-$(date +%s).yaml"

    # Deploy with Helm
    helm upgrade --install myapp ./helm/myapp \
        --namespace "$ENVIRONMENT" \
        --values "helm/myapp/values-$ENVIRONMENT.yaml" \
        --set image.tag="$VERSION" \
        --wait \
        --timeout 10m \
        --atomic

    log "Deployment completed ✓"
}

# Post-deployment verification
verify_deployment() {
    log "Verifying deployment..."

    # Wait for rollout
    kubectl rollout status deployment/myapp -n "$ENVIRONMENT"

    # Check pod status
    READY_PODS=$(kubectl get pods -n "$ENVIRONMENT" -l app=myapp \
        -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' | wc -w)

    if [[ "$READY_PODS" -lt 1 ]]; then
        error "No running pods found"
    fi

    # Run health checks
    log "Running health checks..."
    ./scripts/health-check.sh "$ENVIRONMENT"

    # Run smoke tests
    log "Running smoke tests..."
    npm run test:smoke -- --env="$ENVIRONMENT"

    log "Verification completed ✓"
}

# Main execution
main() {
    log "Starting deployment process"
    log "Environment: $ENVIRONMENT"
    log "Version: $VERSION"

    pre_deploy_checks
    deploy
    verify_deployment

    log "Deployment successful! 🚀"
}

# Run main function
main "$@"
```
Failed Health Checks
claude "Create troubleshooting guide for:- Analyzing health check failures- Common causes and fixes- Debug commands- Log locationsFormat as runbook" \--output docs/troubleshooting/health-checks.md
Resource Constraints
claude "Generate resource debugging scripts:- Check cluster capacity- Identify resource bottlenecks- Recommend scaling solutions- Cost optimization tips" \--output scripts/debug-resources.sh
Continue improving your deployment workflows over time.
Remember: Good deployment practices are about reliability, repeatability, and rapid recovery. Use Claude Code to generate robust deployment configurations that handle edge cases and failures gracefully.