
DevOps and Infrastructure as Code with AI


Modern DevOps requires managing complex infrastructure, CI/CD pipelines, monitoring systems, and deployment strategies. This lesson demonstrates how Cursor IDE’s AI capabilities transform DevOps workflows, making infrastructure as code (IaC) more accessible and reliable.

Traditional DevOps involves deep knowledge of multiple tools, platforms, and best practices. AI assistance democratizes this expertise, helping developers write infrastructure code, create deployment pipelines, and implement monitoring with confidence.

Infrastructure Complexity

AI generates cloud-agnostic IaC with best practices built-in

Pipeline Automation

AI creates sophisticated CI/CD pipelines tailored to your stack

Security Configuration

AI implements security best practices and compliance requirements

Cost Optimization

AI suggests cost-effective infrastructure configurations

  1. Project Structure Setup

    # Ask AI to create Terraform project structure
    "Create a Terraform project structure for:
    - Multi-environment setup (dev, staging, prod)
    - AWS infrastructure
    - Modular design with reusable components
    - Remote state management
    - Variable management best practices"
    (A sketch of the remote state backend from this step follows these steps.)
  2. Generate Base Infrastructure

    # AI creates main infrastructure
    "Generate Terraform configuration for:
    - VPC with public/private subnets
    - EKS cluster with node groups
    - RDS PostgreSQL with read replicas
    - Redis cluster for caching
    - S3 buckets with proper encryption
    - IAM roles and policies"
  3. Environment Configuration

    # AI implements environment-specific configs
    "Create environment-specific configurations:
    - Development: minimal resources, cost-optimized
    - Staging: production-like but smaller
    - Production: highly available, auto-scaling
    Include proper tagging strategy"
# AI generates AWS infrastructure
"Create AWS infrastructure for a web application:
- Application Load Balancer with WAF
- ECS Fargate for containerized services
- Aurora Serverless for database
- CloudFront CDN for static assets
- Secrets Manager for credentials
- CloudWatch monitoring and alarms"
# AI provides complete implementation
module "web_app" {
  source      = "./modules/web-application"
  environment = var.environment
  region      = var.aws_region

  vpc_config = {
    cidr_block         = var.vpc_cidr
    availability_zones = data.aws_availability_zones.available.names
    enable_nat_gateway = var.environment == "production"
    single_nat_gateway = var.environment != "production"
  }

  ecs_config = {
    task_cpu      = var.task_cpu
    task_memory   = var.task_memory
    desired_count = var.environment == "production" ? 3 : 1

    autoscaling = {
      min_capacity              = var.environment == "production" ? 3 : 1
      max_capacity              = var.environment == "production" ? 10 : 3
      target_cpu_utilization    = 70
      target_memory_utilization = 80
    }
  }

  # ... complete configuration
}
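
One way the environment split from the earlier steps can feed this module is through per-environment variable files; a hypothetical environments/production.tfvars might contain:

# environments/production.tfvars (illustrative values)
environment = "production"
aws_region  = "us-east-1"
vpc_cidr    = "10.0.0.0/16"
task_cpu    = 1024
task_memory = 2048

Applied with terraform apply -var-file=environments/production.tfvars, the conditionals above then select the highly available production settings.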
# AI helps with Kubernetes manifests
"Create Kubernetes manifests for:
- Microservices deployment with HPA
- Service mesh (Istio) configuration
- Ingress with TLS termination
- ConfigMaps and Secrets
- Network policies
- RBAC rules"
# AI generates production-ready manifests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
        version: v1
    spec:
      serviceAccountName: api-service
      containers:
        - name: api
          image: myregistry/api:latest
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: database-credentials
                  key: connection-string
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
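
The prompt above also asks for network policies, which the Deployment and HPA shown here don't cover. A minimal sketch restricting ingress to the api-service pods (labels match the manifests above; the ingress-nginx namespace label is an assumption about your cluster):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-service-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-service
  policyTypes:
    - Ingress
  ingress:
    # Assumes the ingress controller runs in a namespace labeled name=ingress-nginx
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080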
# AI creates comprehensive CI/CD pipeline
"Create GitHub Actions workflow for:
- Multi-service monorepo
- Docker builds with layer caching
- Automated testing (unit, integration, e2e)
- Security scanning (SAST, dependency check)
- Multi-environment deployment
- Rollback capabilities"
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      services: ${{ steps.detect.outputs.services }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Detect changed services
        id: detect
        run: |
          # AI generates change detection logic
          CHANGED_SERVICES=$(git diff --name-only ${{ github.event.before }}..${{ github.sha }} | \
            grep -E '^services/' | \
            cut -d'/' -f2 | \
            sort -u | \
            jq -R -s -c 'split("\n")[:-1]')
          echo "services=$CHANGED_SERVICES" >> $GITHUB_OUTPUT

  build-and-test:
    needs: detect-changes
    strategy:
      matrix:
        service: ${{ fromJson(needs.detect-changes.outputs.services) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Log in to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and test
        run: |
          # AI implements sophisticated build process
          docker buildx build \
            --target test \
            --load \
            --cache-from type=gha \
            --cache-to type=gha,mode=max \
            -t ${{ matrix.service }}-test \
            ./services/${{ matrix.service }}
          docker run --rm ${{ matrix.service }}-test
      - name: Security scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ matrix.service }}-test
          format: 'sarif'
          output: 'trivy-results.sarif'
      - name: Upload scan results
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'

  deploy:
    needs: build-and-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production
    steps:
      # AI implements deployment strategy
      - name: Deploy to Kubernetes
        run: |
          # ... deployment logic
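
The prompt asks for rollback capabilities, which the deploy job above leaves elided. If deployments go through kubectl, one simple approach (sketched here, not taken from the generated pipeline) is Kubernetes' built-in rollout history:

# Roll back the api-service Deployment to the previous revision
kubectl rollout undo deployment/api-service -n production

# Or inspect revisions first and target a specific one
kubectl rollout history deployment/api-service -n production
kubectl rollout undo deployment/api-service -n production --to-revision=2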
# AI creates GitLab CI pipeline
"Generate GitLab CI pipeline with:
- Parallel job execution
- Docker-in-Docker builds
- Kubernetes deployment
- Review apps for MRs
- Scheduled security scans"
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  KUBERNETES_MEMORY_REQUEST: 1Gi
  KUBERNETES_MEMORY_LIMIT: 2Gi

stages:
  - build
  - test
  - security
  - deploy
  - cleanup

.build_template:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHA ./services/$SERVICE
    - docker push $CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHA

# ... complete pipeline configuration
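
For the review apps requested in the prompt, a job along these lines is the usual GitLab pattern; the deploy script, the domain, and the matching stop_review job are placeholders left to the reader:

deploy_review:
  stage: deploy
  script:
    - ./scripts/deploy.sh review  # hypothetical deploy script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop_review  # requires a corresponding stop_review job
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"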
# AI creates optimized Dockerfiles
"Create multi-stage Dockerfile for Node.js app with:
- Minimal final image size
- Layer caching optimization
- Security best practices
- Non-root user
- Health checks"
# AI generates optimized Dockerfile
# Build stage
FROM node:18-alpine AS builder

# Install build dependencies
RUN apk add --no-cache python3 make g++

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./
COPY yarn.lock ./

# Install dependencies with cache mount
RUN --mount=type=cache,target=/root/.yarn \
    yarn install --frozen-lockfile --production=false

# Copy source code
COPY . .

# Build application
RUN yarn build

# Prune dev dependencies
RUN yarn install --frozen-lockfile --production=true && \
    yarn cache clean

# Runtime stage
FROM node:18-alpine AS runtime

# Install runtime dependencies
RUN apk add --no-cache tini

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Set working directory
WORKDIR /app

# Copy built application (healthcheck.js must be copied too, since the
# HEALTHCHECK below runs it inside the runtime image)
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./
COPY --from=builder --chown=nodejs:nodejs /app/healthcheck.js ./

# Switch to non-root user
USER nodejs

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node healthcheck.js

# Use tini for proper signal handling
ENTRYPOINT ["/sbin/tini", "--"]

# Start application
CMD ["node", "dist/index.js"]
# AI creates production Helm charts
"Generate Helm chart for microservices with:
- Configurable resource limits
- HPA configuration
- Ingress with TLS
- Service mesh integration
- Secrets management
- Monitoring integration"
# AI provides complete chart structure
# values.yaml
global:
  environment: production
  domain: example.com

services:
  api:
    enabled: true
    replicaCount: 3
    image:
      repository: myregistry/api
      tag: latest
      pullPolicy: IfNotPresent
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"
    autoscaling:
      enabled: true
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70
      targetMemoryUtilizationPercentage: 80
    service:
      type: ClusterIP
      port: 80
      targetPort: 8080
    ingress:
      enabled: true
      className: nginx
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
      hosts:
        - host: api.example.com
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: api-tls
          hosts:
            - api.example.com
# templates/deployment.yaml
{{- range $name, $service := .Values.services }}
{{- if $service.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $name }}
  labels:
    {{- include "microservices.labels" $ | nindent 4 }}
    app.kubernetes.io/component: {{ $name }}
spec:
  {{- if not $service.autoscaling.enabled }}
  replicas: {{ $service.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "microservices.selectorLabels" $ | nindent 6 }}
      app.kubernetes.io/component: {{ $name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") $ | sha256sum }}
      labels:
        {{- include "microservices.selectorLabels" $ | nindent 8 }}
        app.kubernetes.io/component: {{ $name }}
    spec:
      {{- with $.Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "microservices.serviceAccountName" $ }}
      containers:
        - name: {{ $name }}
          image: "{{ $service.image.repository }}:{{ $service.image.tag | default $.Chart.AppVersion }}"
          imagePullPolicy: {{ $service.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ $service.service.targetPort }}
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            {{- toYaml $service.resources | nindent 12 }}
          env:
            - name: ENVIRONMENT
              value: {{ $.Values.global.environment }}
            {{- if $service.env }}
            {{- toYaml $service.env | nindent 12 }}
            {{- end }}
---
{{- end }}
{{- end }}
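
Assuming the chart is named microservices (matching the template helpers above) and lives at ./charts/microservices, rendering and deploying it looks like:

# Sanity-check the rendered manifests locally first
helm template microservices ./charts/microservices

# Install or upgrade the release with an environment override
helm upgrade --install microservices ./charts/microservices \
  --namespace production --create-namespace \
  --set global.environment=production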
# AI implements monitoring stack
"Set up Prometheus monitoring with:
- Service discovery for Kubernetes
- Custom metrics and alerts
- Grafana dashboards
- AlertManager configuration
- Long-term storage with Thanos"
# AI creates comprehensive monitoring
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s

    alerting:
      alertmanagers:
        - static_configs:
            - targets:
                - alertmanager:9093

    rule_files:
      - "alerts/*.yml"

    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https

      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
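The rule_files entry above expects alert definitions under alerts/. A minimal example rule, assuming the services expose a conventional http_requests_total counter (metric name and thresholds are illustrative):

# alerts/api.yml (sketch)
groups:
  - name: api-service
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{job="api-service", status=~"5.."}[5m]))
            / sum(rate(http_requests_total{job="api-service"}[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "API 5xx error rate above 5% for 10 minutes"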

Log Collection

Fluentd/Fluent Bit for log aggregation (see the configuration sketch after this list)

Log Storage

Elasticsearch or Loki for storage

Log Analysis

Kibana or Grafana for visualization

Log Alerting

Real-time alerts on log patterns
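
To make the collection stage concrete, here is a minimal Fluent Bit configuration sketch that tails container logs, enriches them with Kubernetes metadata, and ships them to Loki; the namespace, Loki host, and labels are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [INPUT]
        Name    tail
        Path    /var/log/containers/*.log
        Parser  docker
        Tag     kube.*

    [FILTER]
        Name       kubernetes
        Match      kube.*
        Merge_Log  On

    [OUTPUT]
        Name    loki
        Match   *
        Host    loki.logging.svc.cluster.local
        Port    3100
        Labels  job=fluent-bit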

# AI implements security scanning
"Create security scanning pipeline with:
- SAST (Static Application Security Testing)
- DAST (Dynamic Application Security Testing)
- Container scanning
- Dependency vulnerability scanning
- Infrastructure compliance checks"
# AI generates security workflow
name: Security Scanning

on:
  schedule:
    - cron: '0 2 * * *' # Daily at 2 AM
  workflow_dispatch:

jobs:
  sast-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Semgrep
        uses: returntocorp/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/owasp-top-ten
            p/r2c-security-audit
      - name: Run CodeQL
        uses: github/codeql-action/analyze@v2
        with:
          languages: javascript, python
      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high
      - name: OWASP Dependency Check
        uses: dependency-check/Dependency-Check_Action@main
        with:
          project: 'app'
          path: '.'
          format: 'ALL'

  container-scan:
    runs-on: ubuntu-latest
    steps:
      # Checkout is required before a filesystem scan of '.'
      - uses: actions/checkout@v3
      - name: Run Trivy
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
      - name: Upload Trivy results
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'
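
The prompt also lists infrastructure compliance checks, which none of the jobs above perform. A sketch of an additional job under the same jobs: key, using the Checkov action (the directory path is a placeholder for wherever your IaC lives):

  iac-compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Scan Terraform for compliance issues
        uses: bridgecrewio/checkov-action@master
        with:
          directory: infrastructure/  # placeholder path to IaC code
          framework: terraform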
# AI creates infrastructure tests
"Generate Terratest suite for:
- Module testing
- Integration testing
- Compliance validation
- Cost estimation
- Destroy testing"
// AI implements Go tests for Terraform
package test

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestTerraformWebAppModule(t *testing.T) {
    t.Parallel()

    terraformOptions := &terraform.Options{
        TerraformDir: "../modules/web-app",
        Vars: map[string]interface{}{
            "environment": "test",
            "region":      "us-east-1",
        },
    }

    defer terraform.Destroy(t, terraformOptions)
    terraform.InitAndApply(t, terraformOptions)

    // Validate outputs
    albDns := terraform.Output(t, terraformOptions, "alb_dns_name")
    assert.NotEmpty(t, albDns)

    // Test actual infrastructure
    validateALBIsWorking(t, albDns)
    validateSecurityGroups(t, terraformOptions)
    validateTags(t, terraformOptions)
}
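
The three validate helpers at the end are left unimplemented in the generated test. A minimal sketch of the first one, using Terratest's http-helper module (the endpoint path and retry budget are assumptions):

// Additional imports needed in the same test file:
//   "fmt"
//   "time"
//   http_helper "github.com/gruntwork-io/terratest/modules/http-helper"

// validateALBIsWorking polls the ALB until its health endpoint returns 200,
// allowing time for the load balancer to finish provisioning.
func validateALBIsWorking(t *testing.T, albDns string) {
    url := fmt.Sprintf("http://%s/health", albDns)
    http_helper.HttpGetWithRetryWithCustomValidation(t, url, nil, 30, 10*time.Second,
        func(status int, _ string) bool { return status == 200 },
    )
}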
Exercise 1: Infrastructure Pipeline

  1. Design infrastructure architecture with AI
  2. Implement IaC with Terraform/CloudFormation
  3. Create CI/CD pipeline
  4. Set up monitoring and logging
  5. Implement security scanning

Exercise 2: Kubernetes Platform

  1. Create Kubernetes manifests with AI
  2. Implement Helm charts
  3. Set up GitOps with ArgoCD
  4. Configure service mesh
  5. Implement observability stack

Exercise 3: Disaster Recovery

  1. Design backup strategy
  2. Implement automated backups
  3. Create recovery procedures
  4. Test failover scenarios
  5. Document RTO/RPO

Version Everything

Keep all infrastructure code in version control

Test Infrastructure

Test infrastructure changes before production

Monitor Everything

Comprehensive monitoring from day one

Automate Security

Security scanning in every pipeline

Monitoring Deep Dive

Advanced monitoring and observability

Migration Strategies

Migrating legacy infrastructure

Architecture Patterns

Cloud-native architecture design