# DevOps for Beginners: Understanding CI/CD and Automation
## Table of Contents

1. [Introduction to DevOps](#introduction-to-devops)
2. [Version Control Systems](#version-control-systems)
3. [Continuous Integration (CI)](#continuous-integration-ci)
4. [Continuous Delivery and Deployment (CD)](#continuous-delivery-and-deployment-cd)
5. [Containerization with Docker](#containerization-with-docker)
6. [Container Orchestration with Kubernetes](#container-orchestration-with-kubernetes)
7. [Automation Tools and Practices](#automation-tools-and-practices)
8. [Building Your First CI/CD Pipeline](#building-your-first-cicd-pipeline)
9. [Best Practices and Common Pitfalls](#best-practices-and-common-pitfalls)
10. [Conclusion and Next Steps](#conclusion-and-next-steps)

## Introduction to DevOps
DevOps represents a cultural and technical revolution in software development, bridging the gap between development (Dev) and operations (Ops) teams. At its core, DevOps is about breaking down silos, improving collaboration, and automating processes to deliver software faster, more reliably, and with higher quality.
### What is DevOps?
DevOps is a set of practices, tools, and cultural philosophies that automate and integrate the processes between software development and IT operations teams. The primary goal is to shorten the development lifecycle while delivering features, fixes, and updates frequently in close alignment with business objectives.
### The DevOps Lifecycle
```
[Plan] → [Code] → [Build] → [Test] → [Release] → [Deploy] → [Operate] → [Monitor]
   ↑                                                                        ↓
   ←←←←←←←←←←←←←←←←←←←←←←←←← [Feedback Loop] ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
```
This continuous cycle ensures that:

- **Planning** involves collaboration between teams
- **Code** is version-controlled and reviewed
- **Building** is automated and consistent
- **Testing** happens continuously
- **Release** and **Deployment** are automated
- **Operations** and **Monitoring** provide feedback for improvement

### Benefits of DevOps

1. **Faster Time to Market**: Automated processes reduce manual bottlenecks
2. **Improved Quality**: Continuous testing catches issues early
3. **Better Collaboration**: Teams work together toward common goals
4. **Increased Reliability**: Automated deployments reduce human error
5. **Enhanced Security**: Security practices are integrated throughout the pipeline
## Version Control Systems
Version control is the foundation of modern software development and DevOps practices. It tracks changes to code over time, enables collaboration, and provides the ability to revert to previous versions when needed.
### Understanding Git
Git is the most popular distributed version control system. Unlike centralized systems, Git allows every developer to have a complete copy of the project history.
#### Basic Git Workflow
```
Working Directory  →  Staging Area  →  Local Repository  →  Remote Repository
        ↓                  ↓                  ↓                    ↓
    [git add]         [git commit]       [git push]        [GitHub/GitLab]
```
### Essential Git Commands

```bash
# Initialize a new repository
git init

# Clone an existing repository
git clone https://github.com/username/repository.git

# Check status of files
git status

# Add files to staging area
git add filename.txt
git add .  # Add all files

# Commit changes
git commit -m "Add new feature"

# Push to remote repository
git push origin main

# Pull latest changes
git pull origin main

# Create and switch to new branch
git checkout -b feature-branch

# Merge branches
git merge feature-branch
```

### Branching Strategies
#### Git Flow Model
```
main branch:    ●────●────●────●────●
                     ↑    ↑         ↑
develop branch: ●────●────●────●────●
                          ↑    ↑
feature branch:      ●────●────●
                               ↑
hotfix branch:            ●────●
```
**Branch Types:**

- **Main/Master**: Production-ready code
- **Develop**: Integration branch for features
- **Feature**: Individual feature development
- **Hotfix**: Quick fixes for production issues
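In day-to-day work, the model above reduces to a handful of commands. Here is a rough sketch of one feature-and-hotfix cycle; branch names like `feature/login` are made up for illustration:

```bash
# Start the integration branch off main (done once per repository)
git checkout -b develop main

# Begin a feature off develop, commit work, then merge it back
git checkout -b feature/login develop
# ...edit, git add, git commit...
git checkout develop
git merge --no-ff feature/login   # --no-ff keeps the feature visible in history
git branch -d feature/login

# Hotfix: branch from main, fix, then merge into BOTH main and develop
git checkout -b hotfix/1.0.1 main
# ...fix and commit...
git checkout main && git merge --no-ff hotfix/1.0.1
git checkout develop && git merge --no-ff hotfix/1.0.1
```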
### Real Example: Setting Up a Repository
Let's create a simple web application repository:
```bash
# Create project directory
mkdir my-web-app
cd my-web-app

# Initialize Git repository
git init

# Create basic files
echo "# My Web Application" > README.md
echo "console.log('Hello, DevOps!');" > app.js
echo "node_modules/" > .gitignore

# Add and commit files
git add .
git commit -m "Initial commit: Add basic project structure"

# Add remote repository
git remote add origin https://github.com/yourusername/my-web-app.git

# Push to remote
git push -u origin main
```

## Continuous Integration (CI)
Continuous Integration is the practice of merging code changes from multiple contributors into a shared repository frequently, typically several times a day, with each change automatically built and tested.
### CI Workflow

```
Developer → Code Change → Git Push → CI Server → Build → Test → Notification
    ↑                                                               ↓
    ←←←←←←←←←←←←←←←← Feedback (Success/Failure) ←←←←←←←←←←←←←←←←←←←
```
### Benefits of CI

1. **Early Bug Detection**: Issues are caught quickly
2. **Reduced Integration Problems**: Small, frequent changes are easier to debug
3. **Improved Code Quality**: Automated testing ensures standards
4. **Faster Development**: Quick feedback loops accelerate development
### CI Tools and Platforms

#### Popular CI Platforms

1. **Jenkins**: Open-source automation server
2. **GitHub Actions**: Integrated with GitHub repositories
3. **GitLab CI/CD**: Built into GitLab
4. **CircleCI**: Cloud-based CI/CD platform
5. **Travis CI**: Simple CI for open-source projects
### GitHub Actions Example

Create `.github/workflows/ci.yml` in your repository:
```yaml
name: CI Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x, 16.x, 18.x]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run linting
        run: npm run lint
      - name: Run tests
        run: npm test
      - name: Run build
        run: npm run build
      - name: Upload coverage reports
        uses: codecov/codecov-action@v3
```
### Jenkins Pipeline Example

Create a `Jenkinsfile` in your repository:
```groovy
pipeline {
    agent any

    tools {
        nodejs 'NodeJS-16'
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Install Dependencies') {
            steps {
                sh 'npm ci'
            }
        }

        stage('Lint') {
            steps {
                sh 'npm run lint'
            }
        }

        stage('Test') {
            steps {
                sh 'npm test'
            }
            post {
                always {
                    // Publish JUnit-format test results
                    junit 'test-results.xml'
                }
            }
        }

        stage('Build') {
            steps {
                sh 'npm run build'
            }
        }

        stage('Archive Artifacts') {
            steps {
                archiveArtifacts artifacts: 'dist/**/*', allowEmptyArchive: false
            }
        }
    }

    post {
        failure {
            emailext (
                subject: "Build Failed: ${env.JOB_NAME} - ${env.BUILD_NUMBER}",
                body: "Build failed. Check console output at ${env.BUILD_URL}",
                to: "${env.CHANGE_AUTHOR_EMAIL}"
            )
        }
    }
}
```
## Continuous Delivery and Deployment (CD)
Continuous Delivery (CD) extends CI by automatically deploying code changes to testing and staging environments. Continuous Deployment goes further by automatically deploying to production.
### Continuous Delivery vs. Continuous Deployment: Understanding the Difference
```
Continuous Delivery:
Code → Build → Test → Staging → [Manual Approval] → Production

Continuous Deployment:
Code → Build → Test → Staging → [Automated Checks] → Production
```
### Deployment Strategies
#### 1. Blue-Green Deployment
```
Before Switch:
Blue Environment (Live):   [Load Balancer] → [Blue Servers] (v1.0)
Green Environment (New):                     [Green Servers] (v1.1)

After Switch:
Green Environment (Live):  [Load Balancer] → [Green Servers] (v1.1)
Blue Environment (Idle):                     [Blue Servers] (v1.0)
```
**Benefits:**

- Zero downtime deployments
- Easy rollback capability
- Full testing of production environment
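On Kubernetes, one lightweight way to approximate blue-green is two Deployments behind a single Service, switching the Service's label selector. This is a minimal sketch; the names (`myapp-blue`, `myapp-green`, the `version` label) are hypothetical, not part of any standard:

```bash
# Roll the new version out to the idle (green) environment
kubectl set image deployment/myapp-green myapp=myregistry.com/myapp:v1.1.0
kubectl rollout status deployment/myapp-green

# Flip live traffic: re-point the Service selector from blue to green
kubectl patch service myapp \
  -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'

# Rollback is the same patch with "version":"blue"
```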
#### 2. Canary Deployment
```
Initial Split:
90% → [Stable Version] (v1.0)
10% → [Canary Version] (v1.1)

Gradual Rollout:
70% → [Stable]   →   50% → [Stable]   →    0% → [Stable]
30% → [Canary]   →   50% → [Canary]   →  100% → [Canary]
```
**Benefits:**

- Reduced risk of widespread issues
- Real user feedback on new features
- Gradual performance monitoring
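A crude but workable canary on Kubernetes runs stable and canary Deployments with a shared label behind one Service, so the replica ratio approximates the traffic split. Deployment names here are illustrative:

```bash
# ~90/10 split: the Service round-robins across all matching pods
kubectl scale deployment myapp-stable --replicas=9
kubectl scale deployment myapp-canary --replicas=1

# Metrics look healthy? Shift to ~50/50
kubectl scale deployment myapp-stable --replicas=5
kubectl scale deployment myapp-canary --replicas=5

# Finish the rollout: all traffic to the new version
kubectl scale deployment myapp-stable --replicas=0
kubectl scale deployment myapp-canary --replicas=10
```

A service mesh or ingress controller gives you exact percentage splits; replica ratios are only an approximation.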
#### 3. Rolling Deployment
```
Initial State: [Server 1: v1.0] [Server 2: v1.0] [Server 3: v1.0] [Server 4: v1.0]
Step 1:        [Server 1: v1.0] [Server 2: v1.0] [Server 3: v1.0] [Server 4: v1.1]*
Step 2:        [Server 1: v1.0] [Server 2: v1.0] [Server 3: v1.1]* [Server 4: v1.1]
Step 3:        [Server 1: v1.0] [Server 2: v1.1]* [Server 3: v1.1] [Server 4: v1.1]
Final State:   [Server 1: v1.1]* [Server 2: v1.1] [Server 3: v1.1] [Server 4: v1.1]

(* = server updated in that step)
```
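Kubernetes Deployments implement this strategy natively. A minimal sketch, assuming a Deployment named `myapp`:

```bash
# Constrain the rollout: at most one extra pod, at most one pod down at a time
kubectl patch deployment myapp -p \
  '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":1}}}}'

# Changing the image triggers the rolling replacement, one pod at a time
kubectl set image deployment/myapp myapp=myregistry.com/myapp:v1.1.0

# Watch old pods drain as new ones pass their readiness checks
kubectl rollout status deployment/myapp
```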
### GitLab CI/CD Pipeline Example

Create `.gitlab-ci.yml`:

```yaml
stages:
  - build
  - test
  - deploy-staging
  - deploy-production

variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

build:
  stage: build
  script:
    - docker build -t $DOCKER_IMAGE .
    - docker push $DOCKER_IMAGE
  only:
    - main
    - develop

test:
  stage: test
  script:
    - npm ci
    - npm run test:coverage
  coverage: '/Lines\s*:\s*(\d+\.\d+)%/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

deploy-staging:
  stage: deploy-staging
  script:
    - kubectl set image deployment/myapp myapp=$DOCKER_IMAGE -n staging
    - kubectl rollout status deployment/myapp -n staging
  environment:
    name: staging
    url: https://staging.myapp.com
  only:
    - develop

deploy-production:
  stage: deploy-production
  script:
    - kubectl set image deployment/myapp myapp=$DOCKER_IMAGE -n production
    - kubectl rollout status deployment/myapp -n production
  environment:
    name: production
    url: https://myapp.com
  when: manual
  only:
    - main
```
## Containerization with Docker
Docker revolutionizes application deployment by packaging applications and their dependencies into lightweight, portable containers.
### What is Docker?
Docker is a platform that uses containerization technology to package applications with all their dependencies, ensuring they run consistently across different environments.
### Docker Architecture
```
[Docker Client] ←→ [Docker Daemon] ←→ [Docker Registry]
                         ↓
                  [Docker Images]
                         ↓
                [Docker Containers]
                         ↓
             [Host Operating System]
```
### Key Docker Concepts

1. **Image**: A read-only template used to create containers
2. **Container**: A runnable instance of an image
3. **Dockerfile**: A text file containing instructions to build an image
4. **Registry**: A service for storing and distributing Docker images
### Creating Your First Dockerfile
Here's a complete example for a Node.js application:
```dockerfile
# Use official Node.js runtime as base image
FROM node:16-alpine

# Set working directory in container
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production && npm cache clean --force

# Copy application code
COPY . .

# Create non-root user for security
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

# Change ownership of app directory
RUN chown -R nextjs:nodejs /app

# Switch to non-root user
USER nextjs

# Expose port
EXPOSE 3000

# Health check (alpine ships wget via busybox; curl is not installed by default)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

# Start application
CMD ["npm", "start"]
```

### Essential Docker Commands
```bash
# Build an image
docker build -t myapp:latest .

# Run a container
docker run -d -p 3000:3000 --name myapp-container myapp:latest

# List running containers
docker ps

# List all containers
docker ps -a

# View container logs
docker logs myapp-container

# Execute command in running container
docker exec -it myapp-container /bin/sh

# Stop container
docker stop myapp-container

# Remove container
docker rm myapp-container

# Remove image
docker rmi myapp:latest

# Pull image from registry
docker pull nginx:alpine

# Push image to registry
docker push myregistry.com/myapp:latest
```

### Docker Compose for Multi-Container Applications
Create `docker-compose.yml`:

```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@db:5432/myapp
    depends_on:
      - db
      - redis
    volumes:
      - ./logs:/app/logs
    restart: unless-stopped

  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    restart: unless-stopped

  redis:
    image: redis:6-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - app
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:

networks:
  default:
    driver: bridge
```
### Docker Compose Commands

```bash
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Scale a service
docker-compose up -d --scale app=3

# Stop all services
docker-compose down

# Rebuild and start
docker-compose up -d --build

# Execute command in service
docker-compose exec app /bin/sh
```

## Container Orchestration with Kubernetes
Kubernetes (K8s) is a powerful container orchestration platform that automates the deployment, scaling, and management of containerized applications.
### Kubernetes Architecture

```
Kubernetes Cluster:

Master Node:
├── API Server
├── etcd (Key-Value Store)
├── Controller Manager
└── Scheduler

Worker Nodes:
├── Node 1
│   ├── kubelet
│   ├── kube-proxy
│   └── Pods
├── Node 2
│   ├── kubelet
│   ├── kube-proxy
│   └── Pods
└── Node N
    ├── kubelet
    ├── kube-proxy
    └── Pods
```
### Key Kubernetes Concepts

1. **Pod**: Smallest deployable unit, contains one or more containers
2. **Deployment**: Manages replicas of pods
3. **Service**: Provides stable network endpoint for pods
4. **ConfigMap**: Stores configuration data
5. **Secret**: Stores sensitive data like passwords
6. **Ingress**: Manages external access to services
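Before writing manifests, it helps to see the imperative equivalents. These `kubectl create` one-liners (with made-up values) produce the same kinds of objects the YAML files below declare:

```bash
# Deployment managing 3 pod replicas
kubectl create deployment myapp --image=myregistry.com/myapp:v1.0.0 --replicas=3

# Service giving those pods a stable endpoint
kubectl expose deployment myapp --port=80 --target-port=3000

# ConfigMap for plain configuration, Secret for sensitive values
kubectl create configmap myapp-config --from-literal=log.level=info
kubectl create secret generic myapp-secrets --from-literal=api-key=changeme
```

Adding `--dry-run=client -o yaml` to any of these prints the equivalent manifest instead of creating it, which is a handy way to bootstrap the YAML files that follow.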
### Kubernetes Manifest Examples
#### Deployment
Create `deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry.com/myapp:v1.0.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
```
#### Service
Create `service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
```
#### ConfigMap
Create `configmap.yaml`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  app.properties: |
    log.level=info
    cache.enabled=true
    cache.ttl=3600
  nginx.conf: |
    server {
      listen 80;
      server_name myapp.com;
      location / {
        proxy_pass http://myapp-service:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }
    }
```
#### Secret
Create `secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
type: Opaque
data:
  database-url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc3dvcmRAZGI6NTQzMi9teWFwcA==
  api-key: bXlfc2VjcmV0X2FwaV9rZXk=
```
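The `data` values above are base64-encoded, not encrypted. You can reproduce or inspect them from the shell; the first command below yields exactly the `database-url` value shown:

```bash
# Encode a value for the Secret manifest (-n avoids a trailing newline)
echo -n 'postgresql://user:password@db:5432/myapp' | base64

# Decode a value pulled from an existing Secret
echo 'cG9zdGdyZXNxbDovL3VzZXI6cGFzc3dvcmRAZGI6NTQzMi9teWFwcA==' | base64 -d
```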
#### Ingress
Create `ingress.yaml`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - myapp.com
      secretName: myapp-tls
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
```
### Essential kubectl Commands

```bash
# Apply configurations
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f configmap.yaml

# Get resources
kubectl get pods
kubectl get services
kubectl get deployments
kubectl get nodes

# Describe resources
kubectl describe pod <pod-name>

# View logs
kubectl logs <pod-name>

# Execute commands in pods
kubectl exec -it <pod-name> -- /bin/sh

# Scale deployment
kubectl scale deployment myapp-deployment --replicas=5

# Update deployment
kubectl set image deployment/myapp-deployment myapp=myregistry.com/myapp:v1.1.0

# Check rollout status
kubectl rollout status deployment/myapp-deployment

# Rollback deployment
kubectl rollout undo deployment/myapp-deployment

# Port forwarding for testing
kubectl port-forward service/myapp-service 8080:80

# Create namespace
kubectl create namespace staging

# Switch context
kubectl config set-context --current --namespace=staging
```

## Automation Tools and Practices
Automation is the backbone of DevOps, reducing manual effort, minimizing errors, and ensuring consistency across environments.
### Infrastructure as Code (IaC)
Infrastructure as Code treats infrastructure configuration as software code, making it version-controlled, testable, and repeatable.
#### Terraform Example
Create `main.tf`:

```hcl
# Configure AWS Provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

# Variables
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

variable "environment" {
  description = "Environment name"
  type        = string
  default     = "production"
}

# VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.environment}-vpc"
    Environment = var.environment
  }
}

# Internet Gateway
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name        = "${var.environment}-igw"
    Environment = var.environment
  }
}

# Subnets
resource "aws_subnet" "public" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${count.index + 1}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]

  map_public_ip_on_launch = true

  tags = {
    Name        = "${var.environment}-public-${count.index + 1}"
    Environment = var.environment
  }
}

# Security Group
resource "aws_security_group" "web" {
  name        = "${var.environment}-web-sg"
  description = "Security group for web servers"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.environment}-web-sg"
    Environment = var.environment
  }
}

# Launch Template
resource "aws_launch_template" "app" {
  name_prefix   = "${var.environment}-app-"
  image_id      = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"

  vpc_security_group_ids = [aws_security_group.web.id]

  user_data = base64encode(templatefile("${path.module}/user_data.sh", {
    environment = var.environment
  }))

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name        = "${var.environment}-app"
      Environment = var.environment
    }
  }
}

# Auto Scaling Group
resource "aws_autoscaling_group" "app" {
  name                = "${var.environment}-app-asg"
  vpc_zone_identifier = aws_subnet.public[*].id
  target_group_arns   = [aws_lb_target_group.app.arn]
  health_check_type   = "ELB"

  min_size         = 2
  max_size         = 10
  desired_capacity = 3

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }

  tag {
    key                 = "Name"
    value               = "${var.environment}-app-asg"
    propagate_at_launch = false
  }
}

# Application Load Balancer
resource "aws_lb" "app" {
  name               = "${var.environment}-app-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web.id]
  subnets            = aws_subnet.public[*].id

  enable_deletion_protection = false

  tags = {
    Name        = "${var.environment}-app-alb"
    Environment = var.environment
  }
}

# Target Group
resource "aws_lb_target_group" "app" {
  name     = "${var.environment}-app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    enabled             = true
    healthy_threshold   = 2
    interval            = 30
    matcher             = "200"
    path                = "/health"
    port                = "traffic-port"
    protocol            = "HTTP"
    timeout             = 5
    unhealthy_threshold = 2
  }

  tags = {
    Name        = "${var.environment}-app-tg"
    Environment = var.environment
  }
}

# Load Balancer Listener
resource "aws_lb_listener" "app" {
  load_balancer_arn = aws_lb.app.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

# Data Sources
data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Outputs
output "load_balancer_dns" {
  description = "DNS name of the load balancer"
  value       = aws_lb.app.dns_name
}

output "vpc_id" {
  description = "VPC ID"
  value       = aws_vpc.main.id
}
```
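To turn this configuration into real infrastructure, the standard Terraform workflow applies (run from the directory containing `main.tf`):

```bash
terraform init                # download the AWS provider, set up state
terraform fmt                 # normalize formatting
terraform validate            # catch syntax and reference errors early
terraform plan -out=tfplan    # preview exactly what will change
terraform apply tfplan        # apply the reviewed plan
terraform destroy             # tear everything down when finished
```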
#### Ansible Playbook Example
Create `playbook.yml`:

```yaml
---
- name: Deploy Web Application
  hosts: webservers
  become: yes
  vars:
    app_name: myapp
    app_version: "latest"  # example default; override with -e app_version=...
    app_port: 3000

  tasks:
    - name: Update system packages
      yum:
        name: "*"
        state: latest

    - name: Install Docker
      yum:
        name: docker
        state: present

    - name: Start Docker service
      systemd:
        name: docker
        state: started
        enabled: yes

    - name: Add user to docker group
      user:
        name: "{{ ansible_user }}"
        groups: docker
        append: yes

    - name: Install Docker Compose
      pip:
        name: docker-compose
        state: present

    - name: Create application directory
      file:
        path: "/opt/{{ app_name }}"
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0755'

    - name: Copy docker-compose file
      template:
        src: docker-compose.yml.j2
        dest: "/opt/{{ app_name }}/docker-compose.yml"
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0644'
      notify: restart application

    - name: Copy application configuration
      template:
        src: app.conf.j2
        dest: "/opt/{{ app_name }}/app.conf"
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0644'
      notify: restart application

    - name: Pull Docker images
      docker_image:
        name: "myregistry.com/{{ app_name }}:{{ app_version }}"
        source: pull

    - name: Start application
      docker_compose:
        project_src: "/opt/{{ app_name }}"
        state: present

    - name: Configure firewall
      firewalld:
        port: "{{ app_port }}/tcp"
        permanent: yes
        state: enabled
        immediate: yes

  handlers:
    - name: restart application
      docker_compose:
        project_src: "/opt/{{ app_name }}"
        restarted: yes
```
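Running the playbook is a single command; the inventory file name and host pattern below are placeholders for your own setup:

```bash
# Full run against the webservers group
ansible-playbook -i inventory.ini playbook.yml

# Dry run against one host first, showing what would change
ansible-playbook -i inventory.ini playbook.yml --limit web1 --check --diff

# Override the default app_version at the command line
ansible-playbook -i inventory.ini playbook.yml -e app_version=1.2.0
```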
### Monitoring and Alerting
#### Prometheus Configuration
Create `prometheus.yml`:

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "alert_rules.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'myapp'
    static_configs:
      - targets: ['myapp:3000']
    metrics_path: '/metrics'
    scrape_interval: 30s

  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
```
#### Alert Rules
Create `alert_rules.yml`:

```yaml
groups:
  - name: application_alerts
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.1
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "High error rate detected"
          description: "Error rate is {{ $value }} errors per second"

      - alert: HighMemoryUsage
        expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage"
          description: "Memory usage is above 80%"

      - alert: ServiceDown
        expr: up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Service is down"
          description: "{{ $labels.instance }} service is down"
```
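Prometheus ships with `promtool`, which can validate both files before you reload the server, catching YAML and PromQL mistakes up front:

```bash
promtool check config prometheus.yml   # also verifies referenced rule files
promtool check rules alert_rules.yml   # checks each alerting rule's PromQL
```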
## Building Your First CI/CD Pipeline
Let's create a complete CI/CD pipeline for a Node.js application using GitHub Actions.
### Project Structure

```
my-web-app/
├── .github/
│ └── workflows/
│ └── ci-cd.yml
├── src/
│ ├── app.js
│ ├── routes/
│ └── middleware/
├── tests/
│ ├── unit/
│ └── integration/
├── docker/
│ ├── Dockerfile
│ └── docker-compose.yml
├── k8s/
│ ├── deployment.yml
│ ├── service.yml
│ └── ingress.yml
├── package.json
├── .dockerignore
├── .gitignore
└── README.md
```
### Complete CI/CD Pipeline

Create `.github/workflows/ci-cd.yml`:

```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  # Continuous Integration
  test:
    name: Test and Build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [16.x, 18.x]
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run linting
        run: npm run lint
      - name: Run unit tests
        run: npm run test:unit
      - name: Run integration tests
        run: npm run test:integration
      - name: Generate test coverage
        run: npm run test:coverage
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage/lcov.info
      - name: Build application
        run: npm run build
      - name: Archive build artifacts
        uses: actions/upload-artifact@v3
        with:
          name: build-files
          path: dist/

  # Security Scanning
  security:
    name: Security Scan
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Run npm audit
        run: npm audit --audit-level moderate
      - name: Run Snyk to check for vulnerabilities
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

  # Docker Build and Push
  build-and-push:
    name: Build and Push Docker Image
    runs-on: ubuntu-latest
    needs: [test, security]
    # Build for both deployable branches so the staging job can run too
    if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop'
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Log in to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=sha,prefix={{branch}}-
            type=raw,value=latest,enable={{is_default_branch}}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          file: ./docker/Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

  # Deploy to Staging
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build-and-push
    if: github.ref == 'refs/heads/develop'
    environment:
      name: staging
      url: https://staging.myapp.com
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.25.0'
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Update kube config
        run: aws eks update-kubeconfig --name staging-cluster
      - name: Deploy to staging
        run: |
          sed -i 's|IMAGE_TAG|${{ github.sha }}|g' k8s/deployment.yml
          kubectl apply -f k8s/ -n staging
          kubectl rollout status deployment/myapp -n staging

  # Deploy to Production
  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: build-and-push
    if: github.ref == 'refs/heads/main'
    environment:
      name: production
      url: https://myapp.com
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.25.0'
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Update kube config
        run: aws eks update-kubeconfig --name production-cluster
      - name: Deploy to production
        run: |
          sed -i 's|IMAGE_TAG|${{ github.sha }}|g' k8s/deployment.yml
          kubectl apply -f k8s/ -n production
          kubectl rollout status deployment/myapp -n production
      - name: Run smoke tests
        run: |
          curl -f https://myapp.com/health || exit 1
          curl -f https://myapp.com/api/status || exit 1

  # Notify
  notify:
    name: Notify Team
    runs-on: ubuntu-latest
    needs: [deploy-production]
    if: always()
    steps:
      - name: Notify Slack
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          channel: '#deployments'
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
        if: always()
```
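The workflow above assumes several repository secrets (`SNYK_TOKEN`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `SLACK_WEBHOOK`) already exist. One way to set them, sketched here with placeholder values, is the GitHub CLI:

```bash
gh secret set SNYK_TOKEN --body "your-snyk-token"
gh secret set AWS_ACCESS_KEY_ID --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "your-secret-key"
gh secret set SLACK_WEBHOOK --body "https://hooks.slack.com/services/..."
```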
### Application Code Example

Create `src/app.js`:

```javascript
const express = require('express');
const prometheus = require('prom-client');

const app = express();
const port = process.env.PORT || 3000;

// Prometheus metrics
const collectDefaultMetrics = prometheus.collectDefaultMetrics;
collectDefaultMetrics({ timeout: 5000 });

const httpRequestDuration = new prometheus.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status_code'],
  buckets: [0.1, 0.3, 0.5, 0.7, 1, 3, 5, 7, 10]
});

const httpRequestTotal = new prometheus.Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'route', 'status_code']
});

// Middleware
app.use(express.json());

app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = (Date.now() - start) / 1000;
    const labels = {
      method: req.method,
      route: req.route?.path || req.path,
      status_code: res.statusCode
    };
    httpRequestDuration.observe(labels, duration);
    httpRequestTotal.inc(labels);
  });
  next();
});

// Routes
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime()
  });
});

app.get('/ready', (req, res) => {
  // Add readiness checks here (database, external services, etc.)
  res.status(200).json({
    status: 'ready',
    timestamp: new Date().toISOString()
  });
});

app.get('/metrics', async (req, res) => {
  res.set('Content-Type', prometheus.register.contentType);
  res.end(await prometheus.register.metrics()); // metrics() is async in prom-client v13+
});

app.get('/', (req, res) => {
  res.json({
    message: 'Hello, DevOps World!',
    version: process.env.APP_VERSION || '1.0.0',
    environment: process.env.NODE_ENV || 'development'
  });
});

app.get('/api/status', (req, res) => {
  res.json({
    status: 'running',
    timestamp: new Date().toISOString(),
    memory: process.memoryUsage(),
    uptime: process.uptime()
  });
});

// Error handling
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).json({
    error: 'Something went wrong!',
    timestamp: new Date().toISOString()
  });
});

// 404 handler
app.use((req, res) => {
  res.status(404).json({
    error: 'Not found',
    path: req.path,
    timestamp: new Date().toISOString()
  });
});

// Graceful shutdown
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(() => {
    console.log('Process terminated');
  });
});

const server = app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

module.exports = app;
```
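Before pushing, you can hit the same endpoints the pipeline's smoke tests use, locally (assuming dependencies are installed with `npm ci`):

```bash
npm start &          # run the app in the background
sleep 2              # give it a moment to boot

curl -s http://localhost:3000/health           # liveness endpoint
curl -s http://localhost:3000/api/status       # status endpoint
curl -s http://localhost:3000/metrics | head   # Prometheus metrics
```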
## Best Practices and Common Pitfalls

### DevOps Best Practices
#### 1. Start Small and Iterate
```
Traditional Approach:
[Planning] → [Big Bang Implementation] → [Hope for Success]

DevOps Approach:
[Small Change] → [Test] → [Learn] → [Iterate] → [Improve]
```
#### 2. Implement Proper Monitoring
```
Monitoring Pyramid:

      Application Metrics (Business KPIs)
                      ↑
  Infrastructure Metrics (CPU, Memory, Disk)
                      ↑
     Log Aggregation (Centralized Logging)
                      ↑
      Health Checks (Basic Availability)
```
#### 3. Security Integration (DevSecOps)
```
Traditional Security:
[Develop] → [Test] → [Security Review] → [Deploy]

DevSecOps:
[Develop] → [Security Scan] → [Test] → [Security Test] → [Deploy] → [Monitor]
```
### Common Pitfalls and Solutions
#### 1. Inadequate Testing
**Problem**: Rushing to automate without proper testing
**Solution**: Implement a comprehensive testing strategy
```yaml
# Testing strategy
testing:
  unit_tests:
    coverage_threshold: 80%
    tools: [jest, mocha]
  integration_tests:
    database: true
    external_apis: true
  e2e_tests:
    critical_paths: true
    browser_testing: [chrome, firefox]
  performance_tests:
    load_testing: true
    stress_testing: true
  security_tests:
    vulnerability_scanning: true
    dependency_checking: true
```

#### 2. Ignoring Rollback Strategies
**Problem**: No plan for when deployments go wrong
**Solution**: Always have a rollback plan
```bash
# Kubernetes rollback example
kubectl rollout undo deployment/myapp

# Blue-green deployment rollback:
# simply switch traffic back to the previous version

# Database migration rollback:
# always write reversible migrations
```

#### 3. Poor Secret Management
**Problem**: Hardcoding secrets in code or configs
**Solution**: Use proper secret management
```yaml
# Kubernetes secrets
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database-password: <base64-encoded-password>

---
# Using secrets in deployment
spec:
  containers:
    - name: app
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-password
```
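In practice you rarely hand-encode base64; `kubectl` can build the Secret directly from literals or files that never enter version control:

```bash
# kubectl base64-encodes the value for you
kubectl create secret generic app-secrets \
  --from-literal=database-password='s3cr3t'

# Or read the value from a local file kept out of Git
kubectl create secret generic app-secrets \
  --from-file=database-password=./db-password.txt
```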
#### 4. Lack of Documentation

**Problem**: Undocumented processes and configurations
**Solution**: Document everything as code
```markdown
# README.md Template

## Project Overview
- Purpose and goals
- Architecture diagram
- Technology stack

## Development Setup
- Prerequisites
- Installation steps
- Configuration

## Deployment
- Environments
- Deployment process
- Rollback procedures

## Monitoring and Troubleshooting
- Key metrics
- Common issues
- Support contacts
```

### Performance Optimization
#### 1. Docker Image Optimization
```dockerfile
# Multi-stage build for smaller images
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force

FROM node:16-alpine AS runtime
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --chown=nextjs:nodejs . .

USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
```
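You can verify the payoff by building both stages and comparing sizes; the tag names here are arbitrary:

```bash
# Build only the builder stage, then the final runtime image
docker build --target builder -t myapp:builder .
docker build -t myapp:slim .

# Compare image sizes side by side
docker images myapp
```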
#### 2. Pipeline Optimization
```yaml
# Parallel job execution
jobs:
  test:
    strategy:
      matrix:
        test-type: [unit, integration, e2e]
    runs-on: ubuntu-latest
    steps:
      - name: Run ${{ matrix.test-type }} tests
        run: npm run test:${{ matrix.test-type }}

      # Cache dependencies
      - name: Cache Node modules
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
```
## Conclusion and Next Steps
DevOps is a journey, not a destination. This guide has covered the fundamental concepts and practices that form the foundation of modern DevOps:
### Key Takeaways

1. **Culture First**: DevOps is as much about culture and collaboration as it is about tools
2. **Automation**: Automate repetitive tasks to reduce errors and increase efficiency
3. **Continuous Learning**: The DevOps landscape evolves rapidly; stay current with new tools and practices
4. **Measure Everything**: Use metrics to drive decisions and improvements
5. **Start Small**: Begin with simple implementations and gradually increase complexity
### Your DevOps Learning Path

```
Foundation (Months 1-2):
├── Git and Version Control
├── Basic Linux/Command Line
├── Understanding CI/CD Concepts
└── Docker Basics

Intermediate (Months 3-6):
├── Advanced Docker and Docker Compose
├── Kubernetes Fundamentals
├── Infrastructure as Code (Terraform)
├── Monitoring and Logging
└── Security Best Practices

Advanced (Months 6+):
├── Advanced Kubernetes (Helm, Operators)
├── Service Mesh (Istio, Linkerd)
├── Advanced Monitoring (Prometheus, Grafana)
├── GitOps (ArgoCD, Flux)
└── Cloud-Native Technologies
```
### Recommended Next Steps

1. **Practice**: Set up your own CI/CD pipeline using the examples in this guide
2. **Experiment**: Try different tools and approaches to find what works best for your use case
3. **Join Communities**: Engage with DevOps communities, forums, and local meetups
4. **Certifications**: Consider pursuing relevant certifications (AWS, Azure, Kubernetes, Docker)
5. **Read and Learn**: Stay updated with DevOps blogs, books, and documentation
### Resources for Continued Learning

- **Books**: "The Phoenix Project", "The DevOps Handbook", "Accelerate"
- **Online Platforms**: Kubernetes Academy, Docker Training, Cloud Provider Training
- **Communities**: DevOps subreddit, CNCF Slack, local DevOps meetups
- **Conferences**: KubeCon, DockerCon, DevOps Enterprise Summit
Remember, DevOps is about continuous improvement. Start with the basics, practice regularly, and gradually expand your knowledge and skills. The journey may seem overwhelming at first, but with consistent effort and hands-on practice, you'll develop the expertise needed to implement effective DevOps practices in your organization.
The future of software development lies in the seamless integration of development and operations, and by mastering these concepts, you're positioning yourself at the forefront of this technological evolution. Good luck on your DevOps journey!