
Deployment Patterns


Master deployment automation with Claude Code. From generating deployment configurations to orchestrating complex release processes, this guide covers practical patterns for modern deployment workflows.

Configuration Generation

  • Infrastructure as Code
  • Container orchestration
  • CI/CD pipelines
  • Environment configs

Release Management

  • Deployment strategies
  • Rollback procedures
  • Health checks
  • Migration scripts

Monitoring Setup

  • Observability configs
  • Alert definitions
  • Dashboard creation
  • Log aggregation

  1. Generate base infrastructure

    ```bash
    claude "Create Terraform configuration for:
    - AWS VPC with public/private subnets
    - EKS cluster with 3 node groups
    - RDS PostgreSQL with read replicas
    - ElastiCache Redis cluster
    - Application Load Balancer
    Follow AWS Well-Architected Framework" \
      --output infrastructure/terraform/
    ```
  2. Create environment variations

    ```bash
    # Development environment
    claude "Create Terraform tfvars for development:
    - Smaller instance sizes
    - Single AZ deployment
    - Minimal redundancy
    - Cost optimization focus" \
      --output infrastructure/terraform/environments/dev.tfvars

    # Production environment
    claude "Create Terraform tfvars for production:
    - High availability across 3 AZs
    - Auto-scaling configurations
    - Enhanced monitoring
    - Backup strategies" \
      --output infrastructure/terraform/environments/prod.tfvars
    ```
  3. Generate modules

    ```bash
    claude "Create reusable Terraform modules for:
    - Standard web application setup
    - Database provisioning with backups
    - Container cluster with monitoring
    - Networking with security groups
    Include examples and documentation" \
      --output infrastructure/terraform/modules/
    ```
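One way to keep the per-environment tfvars files honest is to diff the variable names they define; a minimal sketch, assuming flat `key = value` tfvars and the file layout above:

```python
# Sketch: sanity-check that environment tfvars files define the same variables,
# so dev.tfvars and prod.tfvars can't silently drift apart.
# The parser is a simplification that handles only top-level `key = value` lines.
import re

ASSIGNMENT = re.compile(r'^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=')

def tfvars_keys(text: str) -> set[str]:
    """Collect top-level variable names from tfvars content."""
    return {m.group(1) for line in text.splitlines()
            if (m := ASSIGNMENT.match(line))}

def diff_environments(envs: dict[str, str]) -> dict[str, set[str]]:
    """Return, per environment, the variables missing relative to the union."""
    keys = {env: tfvars_keys(text) for env, text in envs.items()}
    union = set().union(*keys.values())
    return {env: union - k for env, k in keys.items() if union - k}

dev = 'instance_type = "t3.small"\naz_count = 1\n'
prod = 'instance_type = "m5.xlarge"\naz_count = 3\nbackup_retention = 30\n'
print(diff_environments({"dev": dev, "prod": prod}))  # dev is missing backup_retention
```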
```bash
claude "Generate CloudFormation template for:
- Elastic Beanstalk application
- Auto-scaling configuration
- CloudFront distribution
- S3 buckets for static assets
- Route53 DNS configuration
Include parameters for environment customization" \
  --output cloudformation/app-stack.yaml
```

Complete K8s Application

```bash
# Generate comprehensive Kubernetes manifests
claude "Create Kubernetes manifests for microservices app:
Services:
- API Gateway (nginx ingress)
- Auth Service (Node.js)
- User Service (Python)
- Payment Service (Go)
- Frontend (React)
Include:
- Deployments with resource limits
- Services for internal communication
- ConfigMaps for configuration
- Secrets for sensitive data
- Ingress rules with TLS
- HorizontalPodAutoscaler
- PodDisruptionBudgets
- NetworkPolicies for security
- ServiceMonitor for Prometheus
Use best practices for production" \
  --output k8s/manifests/
```
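Generated manifests are worth gating before they ship; for example, the resource limits requested above can be checked mechanically. A sketch over manifests already parsed into dicts (e.g. via `yaml.safe_load_all`); names are illustrative:

```python
# Sketch: flag every Deployment container that lacks resources.limits,
# since the prompt above asks for limits on all Deployments.
def missing_limits(manifests: list[dict]) -> list[str]:
    """Names of Deployment containers without resources.limits."""
    offenders = []
    for m in manifests:
        if m.get("kind") != "Deployment":
            continue
        pod_spec = m["spec"]["template"]["spec"]
        for c in pod_spec.get("containers", []):
            if not c.get("resources", {}).get("limits"):
                offenders.append(f'{m["metadata"]["name"]}/{c["name"]}')
    return offenders

manifests = [
    {"kind": "Deployment", "metadata": {"name": "auth"},
     "spec": {"template": {"spec": {"containers": [
         {"name": "auth", "resources": {"limits": {"cpu": "500m"}}}]}}}},
    {"kind": "Deployment", "metadata": {"name": "frontend"},
     "spec": {"template": {"spec": {"containers": [{"name": "web"}]}}}},
]
print(missing_limits(manifests))  # ['frontend/web']
```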
  1. Initialize Helm chart

    ```bash
    claude "Create Helm chart for the microservices app:
    - Chart.yaml with proper metadata
    - Flexible values.yaml with environments
    - Templates for all resources
    - Helpers for common patterns
    - NOTES.txt with usage instructions
    - README.md with examples" \
      --output helm/microservices-app/
    ```
  2. Environment-specific values

    ```bash
    # Development values
    claude "Create Helm values for development:
    - Single replicas
    - Minimal resources
    - Local storage
    - Debug logging
    - No TLS" \
      --output helm/microservices-app/values-dev.yaml

    # Staging values
    claude "Create Helm values for staging:
    - 2 replicas
    - Moderate resources
    - Persistent storage
    - Info logging
    - Let's Encrypt TLS" \
      --output helm/microservices-app/values-staging.yaml

    # Production values
    claude "Create Helm values for production:
    - 3+ replicas with pod anti-affinity
    - Production-grade resources
    - Multi-AZ storage
    - Structured logging
    - Commercial TLS certificates" \
      --output helm/microservices-app/values-prod.yaml
    ```
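When several values files are passed, Helm merges them with later files overriding earlier ones, nested map by nested map. A rough sketch of that precedence (keys are illustrative; Helm's real merge also treats null as a deletion):

```python
# Sketch: Helm-style layering of values files, so you can predict what
# `-f values.yaml -f values-prod.yaml` resolves to.
def deep_merge(base: dict, override: dict) -> dict:
    """Merge override into base, recursing into nested dicts (Helm-style)."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"replicaCount": 1,
        "image": {"repository": "myapp", "tag": "latest"},
        "resources": {"requests": {"cpu": "100m"}}}
prod = {"replicaCount": 3, "image": {"tag": "v1.4.2"}}

print(deep_merge(base, prod))
```

Note that the override replaces only the keys it names: `image.repository` and the resource requests survive untouched.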

Base Configuration

```bash
claude "Create Kustomize base configuration:
- Base manifests for all resources
- Common labels and annotations
- Shared ConfigMaps
- Cross-cutting policies" \
  --output k8s/kustomize/base/
```

Environment Overlays

```bash
claude "Create Kustomize overlays for:
- Development (local)
- Staging (AWS)
- Production (AWS)
- DR site (Azure)
Each with specific patches and transforms" \
  --output k8s/kustomize/overlays/
```

Complete CI/CD Pipeline

```yaml
# Generated by Claude Code
name: Deploy to Production

on:
  push:
    tags:
      - 'v*'

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: myapp
  EKS_CLUSTER: production-cluster

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Tests
        run: |
          npm ci
          npm test
          npm run test:integration
      - name: Security Scan
        run: |
          npm audit
          docker run --rm -v "$PWD":/src \
            aquasec/trivy fs --severity HIGH,CRITICAL /src

  build:
    needs: test
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.version }}
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - name: Configure kubectl
        run: |
          aws eks update-kubeconfig \
            --region ${{ env.AWS_REGION }} \
            --name ${{ env.EKS_CLUSTER }}
      - name: Deploy with Helm
        run: |
          helm upgrade --install myapp ./helm/myapp \
            --namespace production \
            --create-namespace \
            --values helm/myapp/values-prod.yaml \
            --set image.tag=${{ needs.build.outputs.image-tag }} \
            --wait \
            --timeout 10m
      - name: Verify Deployment
        run: |
          kubectl rollout status deployment/myapp -n production
          kubectl get pods -n production
      - name: Run Smoke Tests
        run: |
          npm run test:smoke -- --env=production
```
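The semver patterns in the workflow above expand one release tag into several image tags. A rough sketch of that expansion (the exact strings docker/metadata-action emits can differ; the short-sha length here is an assumption):

```python
# Sketch: the image tags pushed for a release tag like v1.4.2 under the
# semver + sha patterns configured in the workflow above.
def image_tags(git_tag: str, sha: str) -> list[str]:
    """Expand a vX.Y.Z git tag into the image tags the pipeline pushes."""
    version = git_tag.removeprefix("v")
    major, minor, _ = version.split(".")
    return [version, f"{major}.{minor}", f"sha-{sha[:7]}"]

print(image_tags("v1.4.2", "9f86d081884c7d659a2feaa0c55ad015a"))
# ['1.4.2', '1.4', 'sha-9f86d08']
```

This is also why the build job exports a single `version` output for the deploy stage: `--set image.tag=` needs one tag, not the whole list.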
.gitlab-ci.yml
```yaml
stages:
  - test
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test
    - npm run lint
  coverage: '/Coverage: \d+\.\d+%/'

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

deploy-staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl config use-context $K8S_CONTEXT_STAGING
    - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -n staging
    - kubectl rollout status deployment/myapp -n staging
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop

deploy-production:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl config use-context $K8S_CONTEXT_PRODUCTION
    - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -n production
    - kubectl rollout status deployment/myapp -n production
  environment:
    name: production
    url: https://example.com
  only:
    - tags
  when: manual
```
  1. Generate blue-green scripts

    ```bash
    claude "Create blue-green deployment scripts:
    - Health check verification
    - Traffic switching logic
    - Rollback procedures
    - Database migration handling
    For both Kubernetes and AWS ECS" \
      --output deployment/blue-green/
    ```
  2. Traffic management

    ```bash
    claude "Generate traffic switching configuration:
    - AWS ALB target group switching
    - Kubernetes service selector updates
    - Istio VirtualService for canary
    - CloudFlare load balancer rules" \
      --output deployment/traffic-management/
    ```

Progressive Rollout

Terminal window
# Generate Flagger configuration for canary deployments
claude "Create Flagger canary deployment config:
- Progressive traffic shifting (10%, 25%, 50%, 100%)
- Automated rollback on failures
- Custom metrics for business KPIs
- Slack notifications
- Load testing during canary
Include for both Istio and AWS App Mesh" \
--output deployment/canary/flagger-config.yaml
# Generate custom metrics
claude "Create Prometheus queries for canary analysis:
- Request success rate
- P95 latency
- Error rate by status code
- Business metrics (orders, signups)
Format for Flagger MetricTemplate" \
--output deployment/canary/metrics.yaml
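The promotion logic Flagger automates can be pictured as stepping through the traffic weights above and rolling back the moment a metric breaches its threshold. A simplified sketch using success rate only (the 99% threshold is an assumption, not a Flagger default):

```python
# Sketch: progressive canary analysis over the weights from the prompt above.
STEPS = [10, 25, 50, 100]  # traffic weights per analysis step

def analyze(success_rates: list[float], min_rate: float = 99.0) -> str:
    """Walk the canary steps against the observed success rate at each step."""
    for weight, rate in zip(STEPS, success_rates):
        if rate < min_rate:
            return f"rollback at {weight}% (success rate {rate}%)"
    return "promoted to 100%"

print(analyze([99.8, 99.6, 99.5, 99.9]))  # promoted to 100%
print(analyze([99.8, 97.2]))              # rollback at 25% (success rate 97.2%)
```

In a real setup each step also waits out an analysis interval and evaluates several metrics (latency, error rate, business KPIs) before shifting more traffic.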

Zero-Downtime Migrations

```bash
claude "Create zero-downtime migration plan:
- Backward compatible schema changes
- Data migration scripts
- Rollback procedures
- Verification queries
For PostgreSQL with millions of records" \
  --output migrations/zero-downtime/
```

Multi-Version Support

```bash
claude "Generate migration strategy for:
- Supporting old and new schema
- Feature flags for gradual rollout
- Data sync between versions
- Cleanup procedures
Include example code" \
  --output migrations/multi-version/
```
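A common shape for the "support both schemas" phase is a flag-gated read path: reads prefer the new schema when the flag is on, falling back to the old one while backfill is still running. A sketch with illustrative field names and in-memory stores standing in for the two schema versions:

```python
# Sketch: flag-gated read path during a multi-version migration.
# The old schema split the user's name; the new one stores full_name.
def read_user(user_id: int, new_store: dict, old_store: dict,
              use_new_schema: bool) -> dict:
    """Read from the new schema behind a flag, with fallback to the old."""
    if use_new_schema and user_id in new_store:
        record = new_store[user_id]
        return {"id": user_id, "name": record["full_name"]}
    legacy = old_store[user_id]
    # Normalize old-schema rows to the new shape at the boundary,
    # so callers never see two formats.
    return {"id": user_id, "name": f"{legacy['first']} {legacy['last']}"}

old = {1: {"first": "Ada", "last": "Lovelace"}}
new = {1: {"full_name": "Ada Lovelace"}}
print(read_user(1, new, old, use_new_schema=True))
print(read_user(2, {}, {2: {"first": "Alan", "last": "Turing"}}, use_new_schema=True))
```

Once backfill completes and the flag has been on everywhere, the fallback branch and the old store can be removed in the cleanup phase.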
  1. Create monitoring stack

    ```bash
    claude "Generate monitoring configuration:
    - Prometheus scrape configs
    - Grafana dashboards for deployments
    - Alert rules for failures
    - Custom metrics collection
    - Log aggregation queries" \
      --output monitoring/deployment/
    ```
  2. Automated rollback

    ```bash
    claude "Create automated rollback system:
    - Health check definitions
    - Failure detection logic
    - Rollback triggers
    - State preservation
    - Notification system
    Support Kubernetes, ECS, and Lambda" \
      --output deployment/rollback/
    ```

Comprehensive Health Checks

```python
# Generated health check system
import asyncio
import json
from datetime import datetime
from typing import Dict

import aiohttp

class HealthChecker:
    def __init__(self, config_file: str):
        with open(config_file) as f:
            self.config = json.load(f)
        self.results = {}

    async def check_endpoint(self, endpoint: Dict) -> Dict:
        """Check individual endpoint health."""
        try:
            async with aiohttp.ClientSession() as session:
                async with session.get(
                    endpoint['url'],
                    timeout=aiohttp.ClientTimeout(total=endpoint.get('timeout', 30))
                ) as response:
                    # Basic health
                    health = {
                        'url': endpoint['url'],
                        'status_code': response.status,
                        'response_time': response.headers.get('X-Response-Time'),
                        'healthy': response.status == endpoint.get('expected_status', 200)
                    }
                    # Custom checks
                    if 'expected_response' in endpoint:
                        body = await response.json()
                        health['matches_expected'] = (
                            body == endpoint['expected_response']
                        )
                    return health
        except Exception as e:
            return {
                'url': endpoint['url'],
                'healthy': False,
                'error': str(e)
            }

    async def check_all(self) -> Dict:
        """Run all health checks in parallel."""
        tasks = [
            self.check_endpoint(endpoint)
            for endpoint in self.config['endpoints']
        ]
        results = await asyncio.gather(*tasks)
        # Aggregate results
        return {
            'timestamp': datetime.now().isoformat(),
            'overall_health': all(r['healthy'] for r in results),
            'endpoints': results,
            'summary': {
                'total': len(results),
                'healthy': sum(1 for r in results if r['healthy']),
                'unhealthy': sum(1 for r in results if not r['healthy'])
            }
        }

    async def continuous_monitoring(self, interval: int = 60):
        """Continuously monitor health."""
        while True:
            results = await self.check_all()
            # Store results
            self.results[results['timestamp']] = results
            # Alert if unhealthy
            if not results['overall_health']:
                await self.send_alert(results)
            await asyncio.sleep(interval)

    async def send_alert(self, results: Dict):
        """Send alerts for failures."""
        # Implement Slack, PagerDuty, email alerts
        pass

# Configuration file
health_config = {
    "endpoints": [
        {
            "name": "API Gateway",
            "url": "https://api.example.com/health",
            "expected_status": 200,
            "timeout": 30
        },
        {
            "name": "Auth Service",
            "url": "https://auth.example.com/health",
            "expected_status": 200,
            "expected_response": {"status": "healthy"}
        },
        {
            "name": "Database",
            "url": "https://api.example.com/health/db",
            "expected_status": 200,
            "critical": True
        }
    ],
    "alerts": {
        "slack_webhook": "https://hooks.slack.com/...",
        "pagerduty_key": "..."
    }
}
```
```bash
# Generate environment configs
for env in dev staging prod; do
  claude "Create ConfigMap for $env environment:
  - API endpoints
  - Feature flags
  - Cache settings
  - Log levels
  - Third-party service URLs
  Use template variables for secrets" \
    --output "k8s/configs/configmap-$env.yaml"
done
```
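The loop above generates one ConfigMap per environment; what those manifests boil down to can be sketched like this (key names and the namespace-per-environment layout are illustrative, and secrets stay out of the ConfigMap entirely):

```python
# Sketch: the per-environment ConfigMap the loop above produces,
# rendered as a minimal manifest with flat key/value data.
ENV_SETTINGS = {
    "dev":  {"LOG_LEVEL": "debug", "API_URL": "http://api.dev.svc"},
    "prod": {"LOG_LEVEL": "info",  "API_URL": "https://api.example.com"},
}

def render_configmap(env: str) -> str:
    """Emit a minimal ConfigMap manifest for one environment."""
    lines = [
        "apiVersion: v1",
        "kind: ConfigMap",
        "metadata:",
        f"  name: app-config-{env}",
        f"  namespace: {env}",
        "data:",
    ]
    lines += [f'  {k}: "{v}"' for k, v in sorted(ENV_SETTINGS[env].items())]
    return "\n".join(lines)

print(render_configmap("dev"))
```

Keeping the data flat and environment-specific like this is what lets the same Deployment manifest run unchanged across dev, staging, and prod.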

Master Deployment Script

```bash
#!/bin/bash
# deploy.sh - Generated by Claude Code
set -euo pipefail

# Configuration
ENVIRONMENT="${1:-staging}"
VERSION="${2:-latest}"
IMAGE="${IMAGE:-myapp}"   # image repository checked and scanned below
DRY_RUN="${DRY_RUN:-false}"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log() {
  echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}

error() {
  echo -e "${RED}[ERROR]${NC} $1" >&2
  exit 1
}

warn() {
  echo -e "${YELLOW}[WARN]${NC} $1"
}

# Pre-deployment checks
pre_deploy_checks() {
  log "Running pre-deployment checks..."

  # Check cluster connectivity
  if ! kubectl cluster-info &>/dev/null; then
    error "Cannot connect to Kubernetes cluster"
  fi

  # Verify namespace exists
  if ! kubectl get namespace "$ENVIRONMENT" &>/dev/null; then
    warn "Namespace $ENVIRONMENT does not exist, creating..."
    kubectl create namespace "$ENVIRONMENT"
  fi

  # Check image exists
  if ! docker manifest inspect "$IMAGE:$VERSION" &>/dev/null; then
    error "Docker image $IMAGE:$VERSION not found"
  fi

  # Run security scan
  log "Running security scan..."
  trivy image "$IMAGE:$VERSION" --severity HIGH,CRITICAL

  log "Pre-deployment checks passed ✓"
}

# Deploy application
deploy() {
  log "Deploying version $VERSION to $ENVIRONMENT..."

  if [[ "$DRY_RUN" == "true" ]]; then
    log "DRY RUN - would execute:"
    echo "helm upgrade --install myapp ./helm/myapp \\"
    echo "  --namespace $ENVIRONMENT \\"
    echo "  --values helm/myapp/values-$ENVIRONMENT.yaml \\"
    echo "  --set image.tag=$VERSION"
    return 0
  fi

  # Backup current state
  kubectl get all -n "$ENVIRONMENT" -o yaml > "backup-$ENVIRONMENT-$(date +%s).yaml"

  # Deploy with Helm
  helm upgrade --install myapp ./helm/myapp \
    --namespace "$ENVIRONMENT" \
    --values "helm/myapp/values-$ENVIRONMENT.yaml" \
    --set image.tag="$VERSION" \
    --wait \
    --timeout 10m \
    --atomic

  log "Deployment completed ✓"
}

# Post-deployment verification
verify_deployment() {
  log "Verifying deployment..."

  # Wait for rollout
  kubectl rollout status deployment/myapp -n "$ENVIRONMENT"

  # Check pod status
  READY_PODS=$(kubectl get pods -n "$ENVIRONMENT" -l app=myapp \
    -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' | wc -w)
  if [[ "$READY_PODS" -lt 1 ]]; then
    error "No running pods found"
  fi

  # Run health checks
  log "Running health checks..."
  ./scripts/health-check.sh "$ENVIRONMENT"

  # Run smoke tests
  log "Running smoke tests..."
  npm run test:smoke -- --env="$ENVIRONMENT"

  log "Verification completed ✓"
}

# Main execution
main() {
  log "Starting deployment process"
  log "Environment: $ENVIRONMENT"
  log "Version: $VERSION"

  pre_deploy_checks
  deploy
  verify_deployment

  log "Deployment successful! 🚀"
}

# Run main function
main "$@"
```

Failed Health Checks

```bash
claude "Create troubleshooting guide for:
- Analyzing health check failures
- Common causes and fixes
- Debug commands
- Log locations
Format as runbook" \
  --output docs/troubleshooting/health-checks.md
```

Resource Constraints

```bash
claude "Generate resource debugging scripts:
- Check cluster capacity
- Identify resource bottlenecks
- Recommend scaling solutions
- Cost optimization tips" \
  --output scripts/debug-resources.sh
```

Continue improving your deployment workflows with the patterns above.

Remember: Good deployment practices are about reliability, repeatability, and rapid recovery. Use Claude Code to generate robust deployment configurations that handle edge cases and failures gracefully.