Modern DevOps requires managing complex infrastructure, CI/CD pipelines, monitoring systems, and deployment strategies. This lesson demonstrates how Cursor IDE’s AI capabilities transform DevOps workflows, making infrastructure as code (IaC) more accessible and reliable.
Traditional DevOps demands deep knowledge of many tools, platforms, and best practices. AI assistance democratizes that expertise, helping developers write infrastructure code, create deployment pipelines, and implement monitoring with confidence.
Infrastructure Complexity
AI generates cloud-agnostic IaC with best practices built-in
Pipeline Automation
AI creates sophisticated CI/CD pipelines tailored to your stack
Security Configuration
AI implements security best practices and compliance requirements
Cost Optimization
AI suggests cost-effective infrastructure configurations
Project Structure Setup
# Ask AI to create Terraform project structure
"Create a Terraform project structure for:
- Multi-environment setup (dev, staging, prod)
- AWS infrastructure
- Modular design with reusable components
- Remote state management
- Variable management best practices"
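One plausible layout for that prompt looks like the tree below; the directory and module names are illustrative assumptions, not guaranteed output:

terraform-project/
  modules/            # Reusable components (vpc, eks, rds, ...)
    vpc/
    eks/
    rds/
  environments/       # One tfvars file and backend config per environment
    dev/
    staging/
    prod/
  backend.tf          # Remote state (e.g. an S3 bucket with DynamoDB locking)
  variables.tf
  versions.tf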
Generate Base Infrastructure
# AI creates main infrastructure
"Generate Terraform configuration for:
- VPC with public/private subnets
- EKS cluster with node groups
- RDS PostgreSQL with read replicas
- Redis cluster for caching
- S3 buckets with proper encryption
- IAM roles and policies"
Environment Configuration
# AI implements environment-specific configs
"Create environment-specific configurations:
- Development: minimal resources, cost-optimized
- Staging: production-like but smaller
- Production: highly available, auto-scaling
Include proper tagging strategy"
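The per-environment variable files usually differ only in sizing, availability, and tags. A minimal sketch, assuming variable names like those in the module example below:

# environments/dev/terraform.tfvars -- cost-optimized, single-AZ (illustrative values)
environment     = "dev"
task_cpu        = 256
task_memory     = 512
enable_multi_az = false

default_tags = {
  Project     = "web-app"
  Environment = "dev"
  ManagedBy   = "terraform"
}

# environments/prod/terraform.tfvars -- highly available, auto-scaling (illustrative values)
environment     = "production"
task_cpu        = 1024
task_memory     = 2048
enable_multi_az = true

default_tags = {
  Project     = "web-app"
  Environment = "production"
  ManagedBy   = "terraform"
}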
# AI generates AWS infrastructure
"Create AWS infrastructure for a web application:
- Application Load Balancer with WAF
- ECS Fargate for containerized services
- Aurora Serverless for database
- CloudFront CDN for static assets
- Secrets Manager for credentials
- CloudWatch monitoring and alarms"
# AI provides complete implementation
module "web_app" {
  source = "./modules/web-application"

  environment = var.environment
  region      = var.aws_region

  vpc_config = {
    cidr_block         = var.vpc_cidr
    availability_zones = data.aws_availability_zones.available.names
    enable_nat_gateway = var.environment == "production"
    single_nat_gateway = var.environment != "production"
  }

  ecs_config = {
    task_cpu      = var.task_cpu
    task_memory   = var.task_memory
    desired_count = var.environment == "production" ? 3 : 1

    autoscaling = {
      min_capacity = var.environment == "production" ? 3 : 1
      max_capacity = var.environment == "production" ? 10 : 3

      target_cpu_utilization    = 70
      target_memory_utilization = 80
    }
  }

  # ... complete configuration
}
# AI creates Azure infrastructure
"Set up Azure infrastructure for microservices:
- AKS cluster with multiple node pools
- Azure Database for PostgreSQL
- Application Gateway with WAF
- Key Vault for secrets
- Container Registry
- Monitor and Log Analytics"
# AI implements Azure-specific patterns
resource "azurerm_kubernetes_cluster" "main" {
  name                = "${var.project_name}-aks-${var.environment}"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  dns_prefix          = "${var.project_name}-${var.environment}"

  default_node_pool {
    name                = "system"
    node_count          = var.system_node_count
    vm_size             = "Standard_D2_v3"
    availability_zones  = ["1", "2", "3"]
    enable_auto_scaling = true
    min_count           = 1
    max_count           = 3
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin    = "azure"
    network_policy    = "calico"
    load_balancer_sku = "standard"
  }

  # ... additional configuration
}
# AI implements GCP infrastructure
"Create GCP infrastructure for data pipeline:
- GKE Autopilot cluster
- Cloud SQL with high availability
- Pub/Sub for messaging
- Dataflow for stream processing
- BigQuery for analytics
- Cloud Storage for data lake"
# AI provides GCP-optimized configuration
module "data_platform" {
  source = "./modules/gcp-data-platform"

  project_id  = var.gcp_project_id
  region      = var.gcp_region
  environment = var.environment

  gke_config = {
    autopilot_enabled = true
    release_channel   = "STABLE"

    workload_identity_config = {
      enabled = true
    }
  }

  bigquery_config = {
    dataset_id = "${var.project_name}_${var.environment}"
    location   = "US"

    tables = {
      events = {
        time_partitioning = {
          type  = "DAY"
          field = "timestamp"
        }
        clustering = ["user_id", "event_type"]
      }
    }
  }

  # ... complete implementation
}
# AI helps with Kubernetes manifests
"Create Kubernetes manifests for:
- Microservices deployment with HPA
- Service mesh (Istio) configuration
- Ingress with TLS termination
- ConfigMaps and Secrets
- Network policies
- RBAC rules"
# AI generates production-ready manifests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
        version: v1
    spec:
      serviceAccountName: api-service
      containers:
        - name: api
          image: myregistry/api:latest
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: database-credentials
                  key: connection-string
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
# AI creates comprehensive CI/CD pipeline
"Create GitHub Actions workflow for:
- Multi-service monorepo
- Docker builds with layer caching
- Automated testing (unit, integration, e2e)
- Security scanning (SAST, dependency check)
- Multi-environment deployment
- Rollback capabilities"
name: CI/CD Pipeline
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      services: ${{ steps.detect.outputs.services }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Detect changed services
        id: detect
        run: |
          # AI generates change detection logic
          CHANGED_SERVICES=$(git diff --name-only ${{ github.event.before }}..${{ github.sha }} | \
            grep -E '^services/' | \
            cut -d'/' -f2 | \
            sort -u | \
            jq -R -s -c 'split("\n")[:-1]')
          echo "services=$CHANGED_SERVICES" >> $GITHUB_OUTPUT

  build-and-test:
    needs: detect-changes
    strategy:
      matrix:
        service: ${{ fromJson(needs.detect-changes.outputs.services) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and test
        run: |
          # AI implements sophisticated build process
          docker buildx build \
            --target test \
            --load \
            --cache-from type=gha \
            --cache-to type=gha,mode=max \
            -t ${{ matrix.service }}-test \
            ./services/${{ matrix.service }}

          docker run --rm ${{ matrix.service }}-test

      - name: Security scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ matrix.service }}-test
          format: 'sarif'
          output: 'trivy-results.sarif'

      - name: Upload scan results
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'

  deploy:
    needs: build-and-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production
    steps:
      # AI implements deployment strategy
      - name: Deploy to Kubernetes
        run: |
          # ... deployment logic
# AI creates GitLab CI pipeline
"Generate GitLab CI pipeline with:
- Parallel job execution
- Docker-in-Docker builds
- Kubernetes deployment
- Review apps for MRs
- Scheduled security scans"
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  KUBERNETES_MEMORY_REQUEST: 1Gi
  KUBERNETES_MEMORY_LIMIT: 2Gi

stages:
  - build
  - test
  - security
  - deploy
  - cleanup

.build_template:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHA ./services/$SERVICE
    - docker push $CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHA
# ... complete pipeline configuration
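The prompt also asks for review apps per merge request, which the excerpt above leaves out. A hedged sketch of such a job (the job name, image, manifest path, and domain are assumptions, not generated output):

# Sketch only: one review environment per merge request
deploy_review:
  stage: deploy
  image: bitnami/kubectl:latest
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  environment:
    name: review/$CI_MERGE_REQUEST_IID
    url: https://mr-$CI_MERGE_REQUEST_IID.example.com
  script:
    # Create a namespace per MR, apply the manifests, then roll the service to the MR build
    - kubectl create namespace review-$CI_MERGE_REQUEST_IID --dry-run=client -o yaml | kubectl apply -f -
    - kubectl -n review-$CI_MERGE_REQUEST_IID apply -f k8s/
    - kubectl -n review-$CI_MERGE_REQUEST_IID set image deployment/api api=$CI_REGISTRY_IMAGE/api:$CI_COMMIT_SHA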
// AI creates Jenkinsfile
"Create Jenkins pipeline for:
- Declarative pipeline syntax
- Parallel stages
- Shared libraries
- Blue Ocean compatible
- Slack notifications"
@Library('shared-library') _
pipeline {
  agent {
    kubernetes {
      yaml """
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: docker
            image: docker:dind
            securityContext:
              privileged: true
          - name: kubectl
            image: bitnami/kubectl:latest
            command: ['cat']
            tty: true
      """
    }
  }

  environment {
    DOCKER_REGISTRY = credentials('docker-registry')
    KUBECONFIG = credentials('kubeconfig')
  }

  stages {
    stage('Build') {
      steps {
        container('docker') {
          script {
            // AI implements build logic
            docker.build("${env.IMAGE_NAME}:${env.BUILD_ID}")
          }
        }
      }
    }

    // ... additional stages
  }

  post {
    success {
      slackSend(color: 'good', message: "Deploy succeeded: ${env.JOB_NAME} ${env.BUILD_NUMBER}")
    }
    failure {
      slackSend(color: 'danger', message: "Deploy failed: ${env.JOB_NAME} ${env.BUILD_NUMBER}")
    }
  }
}
# AI creates Azure DevOps pipeline
"Generate Azure DevOps pipeline with:
- Multi-stage YAML pipeline
- Template reuse
- Approval gates
- Azure Key Vault integration
- Artifact publishing"
trigger:
  branches:
    include:
      - main
      - develop
  paths:
    include:
      - services/*

variables:
  - group: production-variables
  - name: buildConfiguration
    value: 'Release'

stages:
  - stage: Build
    jobs:
      - job: BuildServices
        strategy:
          matrix:
            service1:
              serviceName: 'api'
            service2:
              serviceName: 'web'
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: Docker@2
            inputs:
              containerRegistry: 'ACR-Connection'
              repository: '$(serviceName)'
              command: 'buildAndPush'
              Dockerfile: 'services/$(serviceName)/Dockerfile'
              tags: |
                $(Build.BuildId)
                latest

  - stage: Deploy
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployToAKS
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                # ... deployment steps
# AI creates optimized Dockerfiles
"Create multi-stage Dockerfile for Node.js app with:
- Minimal final image size
- Layer caching optimization
- Security best practices
- Non-root user
- Health checks"
# AI generates optimized Dockerfile
# Build stage
FROM node:18-alpine AS builder

# Install build dependencies
RUN apk add --no-cache python3 make g++

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./
COPY yarn.lock ./

# Install dependencies with cache mount
RUN --mount=type=cache,target=/root/.yarn \
    yarn install --frozen-lockfile --production=false

# Copy source code
COPY . .

# Build application
RUN yarn build

# Prune dev dependencies
RUN yarn install --frozen-lockfile --production=true && \
    yarn cache clean

# Runtime stage
FROM node:18-alpine AS runtime

# Install runtime dependencies
RUN apk add --no-cache tini

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Set working directory
WORKDIR /app

# Copy built application
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./

# Switch to non-root user
USER nodejs

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node healthcheck.js

# Use tini for proper signal handling
ENTRYPOINT ["/sbin/tini", "--"]

# Start application
CMD ["node", "dist/index.js"]
# AI creates production Helm charts
"Generate Helm chart for microservices with:
- Configurable resource limits
- HPA configuration
- Ingress with TLS
- Service mesh integration
- Secrets management
- Monitoring integration"
# AI provides complete chart structure
# values.yaml
global:
  environment: production
  domain: example.com

services:
  api:
    enabled: true
    replicaCount: 3
    image:
      repository: myregistry/api
      tag: latest
      pullPolicy: IfNotPresent

    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"

    autoscaling:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70
      targetMemoryUtilizationPercentage: 80

    service:
      type: ClusterIP
      port: 80
      targetPort: 8080

    ingress:
      enabled: true
      className: nginx
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
      hosts:
        - host: api.example.com
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: api-tls
          hosts:
            - api.example.com
# templates/deployment.yaml
{{- range $name, $service := .Values.services }}
{{- if $service.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $name }}
  labels:
    {{- include "microservices.labels" $ | nindent 4 }}
    app.kubernetes.io/component: {{ $name }}
spec:
  {{- if not $service.autoscaling.enabled }}
  replicas: {{ $service.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "microservices.selectorLabels" $ | nindent 6 }}
      app.kubernetes.io/component: {{ $name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") $ | sha256sum }}
      labels:
        {{- include "microservices.selectorLabels" $ | nindent 8 }}
        app.kubernetes.io/component: {{ $name }}
    spec:
      {{- with $.Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "microservices.serviceAccountName" $ }}
      containers:
        - name: {{ $name }}
          image: "{{ $service.image.repository }}:{{ $service.image.tag | default $.Chart.AppVersion }}"
          imagePullPolicy: {{ $service.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ $service.service.targetPort }}
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            {{- toYaml $service.resources | nindent 12 }}
          env:
            - name: ENVIRONMENT
              value: {{ $.Values.global.environment }}
            {{- if $service.env }}
            {{- toYaml $service.env | nindent 12 }}
            {{- end }}
---
{{- end }}
{{- end }}
# AI implements monitoring stack
"Set up Prometheus monitoring with:
- Service discovery for Kubernetes
- Custom metrics and alerts
- Grafana dashboards
- AlertManager configuration
- Long-term storage with Thanos"
# AI creates comprehensive monitoring
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s

    alerting:
      alertmanagers:
        - static_configs:
            - targets:
                - alertmanager:9093

    rule_files:
      - "alerts/*.yml"

    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https

      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
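The rule_files entry above references alert definitions that are not shown. A minimal example of what one such file could contain; the metric name http_requests_total and the thresholds are assumptions about the application being monitored:

# alerts/api.yml -- illustrative alert rule; metric names and thresholds are assumptions
groups:
  - name: api-availability
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "More than 5% of requests have returned 5xx responses for 10 minutes"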
Log Collection
Fluentd/Fluent Bit for log aggregation
Log Storage
Elasticsearch or Loki for storage
Log Analysis
Kibana or Grafana for visualization
Log Alerting
Real-time alerts on log patterns
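As a concrete sketch of the collection-to-storage path, a minimal Fluent Bit configuration that tails container logs, enriches them with Kubernetes metadata, and ships them to Loki might look like this; the Loki service address and label values are assumptions:

# Sketch only: Fluent Bit -> Loki (service address and labels are assumptions)
[INPUT]
    Name      tail
    Path      /var/log/containers/*.log
    Tag       kube.*

[FILTER]
    Name      kubernetes
    Match     kube.*
    Merge_Log On

[OUTPUT]
    Name      loki
    Match     *
    Host      loki.logging.svc.cluster.local
    Port      3100
    Labels    job=fluent-bit, cluster=production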
# AI implements security scanning
"Create security scanning pipeline with:
- SAST (Static Application Security Testing)
- DAST (Dynamic Application Security Testing)
- Container scanning
- Dependency vulnerability scanning
- Infrastructure compliance checks"
# AI generates security workflow
name: Security Scanning

on:
  schedule:
    - cron: '0 2 * * *'  # Daily at 2 AM
  workflow_dispatch:

jobs:
  sast-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Semgrep
        uses: returntocorp/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/owasp-top-ten
            p/r2c-security-audit

      - name: Run CodeQL
        uses: github/codeql-action/analyze@v2
        with:
          languages: javascript, python

      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high

      - name: OWASP Dependency Check
        uses: dependency-check/Dependency-Check_Action@main
        with:
          project: 'app'
          path: '.'
          format: 'ALL'

  container-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Run Trivy
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'

      - name: Upload Trivy results
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'
# AI creates infrastructure tests
"Generate Terratest suite for:
- Module testing
- Integration testing
- Compliance validation
- Cost estimation
- Destroy testing"
// AI implements Go tests for Terraform
package test

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestTerraformWebAppModule(t *testing.T) {
    t.Parallel()

    terraformOptions := &terraform.Options{
        TerraformDir: "../modules/web-app",
        Vars: map[string]interface{}{
            "environment": "test",
            "region":      "us-east-1",
        },
    }

    defer terraform.Destroy(t, terraformOptions)

    terraform.InitAndApply(t, terraformOptions)

    // Validate outputs
    albDns := terraform.Output(t, terraformOptions, "alb_dns_name")
    assert.NotEmpty(t, albDns)

    // Test actual infrastructure
    validateALBIsWorking(t, albDns)
    validateSecurityGroups(t, terraformOptions)
    validateTags(t, terraformOptions)
}
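The validation helpers referenced above are left to the reader. As one hedged sketch, validateALBIsWorking could poll the load balancer with Terratest's http-helper module; the /health path and the "OK" body are assumptions about the deployed service:

// Sketch of one referenced helper, assuming the ALB serves /health and returns 200 "OK".
// Additional imports needed: "fmt", "time", and
// http_helper "github.com/gruntwork-io/terratest/modules/http-helper".
func validateALBIsWorking(t *testing.T, albDns string) {
    url := fmt.Sprintf("http://%s/health", albDns)

    // Retry for a few minutes while the load balancer and its targets become healthy
    http_helper.HttpGetWithRetry(t, url, nil, 200, "OK", 30, 10*time.Second)
}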
Version Everything
Keep all infrastructure code in version control
Test Infrastructure
Test infrastructure changes before production
Monitor Everything
Comprehensive monitoring from day one
Automate Security
Security scanning in every pipeline
Monitoring Deep Dive
Advanced monitoring and observability
Migration Strategies
Migrating legacy infrastructure
Architecture Patterns
Cloud-native architecture design