Introduction
In today's fast-paced software development landscape, the ability to ship code quickly and reliably isn't just a competitive advantage—it's a necessity. DevOps practices have transformed how teams build, test, and deploy applications, but knowing where to start can be overwhelming.
This guide walks you through building a real-world DevOps pipeline using three powerful tools: GitHub Actions for automation, Docker for containerization, and Kubernetes for orchestration. By the end, you'll understand how these technologies work together to create a seamless deployment workflow that can scale with your needs.
Whether you're a developer looking to understand DevOps better or a team lead evaluating CI/CD solutions, this practical approach will help you ship faster without sacrificing quality.
Understanding the DevOps Trio
GitHub Actions: Your Automation Engine
GitHub Actions is a CI/CD platform that automates your build, test, and deployment pipeline directly from your GitHub repository. Think of it as your personal DevOps assistant that watches your code and executes predefined workflows whenever specific events occur.
Why GitHub Actions?
- Native integration with GitHub (no third-party services needed)
- Free tier includes 2,000 CI/CD minutes per month
- Massive marketplace of pre-built actions
- Easy to start, powerful enough to scale
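To get a feel for the syntax, a workflow is just a YAML file under .github/workflows/ in your repository. A minimal example (job and step names here are illustrative) that runs your tests on every push:

```yaml
# .github/workflows/ci.yaml — minimal illustrative workflow
name: CI
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```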
Docker: Packaging Your Application
Docker solves the classic "works on my machine" problem by packaging your application and all its dependencies into a standardized container. This container runs identically whether it's on your laptop, a test server, or production.
Key Benefits:
- Consistent environments across development, staging, and production
- Faster deployment times (seconds vs. minutes)
- Efficient resource utilization
- Simplified dependency management
Kubernetes: Orchestrating at Scale
Kubernetes (K8s) is a container orchestration platform that manages where and how your Docker containers run. It handles scaling, load balancing, self-healing, and rolling updates automatically.
When You Need Kubernetes:
- Running multiple containerized services
- Need auto-scaling based on traffic
- Require zero-downtime deployments
- Managing complex microservices architectures
Building Your First DevOps Pipeline
Let's build a complete pipeline for a Node.js web application. We'll automate testing, build a Docker image, push it to a registry, and deploy to Kubernetes—all triggered by a single git push.
Step 1: Dockerizing Your Application
First, create a Dockerfile in your project root:
# Use official Node.js LTS image
FROM node:20-alpine AS builder
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install all dependencies (dev dependencies are needed for the build step)
RUN npm ci
# Copy application code
COPY . .
# Build the application (if needed)
RUN npm run build
# Drop dev dependencies so only runtime deps are copied to the final image
RUN npm prune --omit=dev
# Production stage
FROM node:20-alpine
WORKDIR /app
# Copy from builder
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001 -G nodejs
USER nodejs
EXPOSE 3000
CMD ["node", "dist/index.js"]
This multi-stage Dockerfile optimizes image size and security by separating build and runtime dependencies.
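Because COPY . . sends the entire build context to the Docker daemon, it's worth pairing the Dockerfile with a .dockerignore file. A typical starting point (adjust to your project):

```
# .dockerignore — keep the build context (and image) small
node_modules
dist
.git
*.md
.env
```

Without this, the local node_modules directory would be copied over the freshly installed one, defeating the point of npm ci inside the image.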
Step 2: Creating Kubernetes Manifests
Create a k8s/deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: your-registry/my-app:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
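With a cluster configured in your current kubectl context, you can try these manifests by hand before wiring up automation (assumes the k8s/ directory described above):

```
# Apply both manifests (Deployment + Service)
kubectl apply -f k8s/deployment.yaml

# Watch the three replicas come up
kubectl get pods -l app=my-app --watch

# Find the external IP assigned to the LoadBalancer Service
kubectl get service my-app-service
```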
Step 3: GitHub Actions Workflow
Create .github/workflows/deploy.yaml:
name: Build and Deploy
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Run linter
        run: npm run lint
  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          # format=long makes the sha tag match the full github.sha used in deploy
          tags: |
            type=sha,format=long,prefix={{branch}}-
            type=ref,event=branch
            type=semver,pattern={{version}}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - name: Set up kubectl
        uses: azure/setup-kubectl@v4
        with:
          version: 'v1.28.0'
      - name: Configure Kubernetes context
        run: |
          echo "${{ secrets.KUBECONFIG }}" | base64 -d > kubeconfig
          # Persist KUBECONFIG for later steps (a plain `export` is step-local)
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"
      - name: Update deployment image
        run: |
          kubectl set image deployment/my-app \
            my-app=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:main-${{ github.sha }}
      - name: Wait for rollout
        run: |
          kubectl rollout status deployment/my-app --timeout=5m
      - name: Verify deployment
        run: |
          kubectl get pods -l app=my-app
          kubectl get services my-app-service
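The deploy job assumes a KUBECONFIG repository secret holding your base64-encoded kubeconfig. A sketch of preparing it, using a stand-in string for the file contents (the gh command assumes an authenticated GitHub CLI):

```shell
# Stand-in for the kubeconfig file contents
config='apiVersion: v1
kind: Config'

# Encode it (this is what goes into the KUBECONFIG secret)...
encoded=$(printf '%s' "$config" | base64 -w0)

# ...and the workflow's `base64 -d` recovers the original file
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$config" ] && echo "round trip OK"

# For the real file: gh secret set KUBECONFIG --body "$(base64 -w0 ~/.kube/config)"
```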
Real-World Best Practices
1. Use Multi-Stage Docker Builds
Always use multi-stage builds to keep your production images lean:
# Bad: Single stage (includes build tools in production)
FROM node:20
COPY . .
RUN npm install
CMD ["node", "index.js"]
# Good: Multi-stage (production image is minimal)
FROM node:20 AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
2. Implement Health Checks
Both Docker and Kubernetes need to know if your app is healthy:
// Express.js example
app.get('/health', (req, res) => {
res.status(200).json({ status: 'healthy' });
});
app.get('/ready', async (req, res) => {
try {
// Check database connection
await db.ping();
res.status(200).json({ status: 'ready' });
} catch (error) {
res.status(503).json({ status: 'not ready' });
}
});
3. Use Secrets Management Properly
Never hardcode secrets. Use GitHub Secrets and Kubernetes Secrets:
# In GitHub Actions
- name: Deploy
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
    API_KEY: ${{ secrets.API_KEY }}
# In Kubernetes
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database-url: <base64-encoded-value>
---
# Reference in deployment
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: database-url
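Hand-encoding base64 values is error-prone; kubectl can create the same Secret directly from literals or files (the connection string here is a placeholder):

```
kubectl create secret generic app-secrets \
  --from-literal=database-url='postgres://user:pass@db:5432/app'
```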
4. Set Resource Limits
Always define CPU and memory limits to prevent resource starvation:
resources:
  requests:
    memory: "128Mi"   # Minimum guaranteed
    cpu: "100m"
  limits:
    memory: "256Mi"   # Maximum allowed
    cpu: "200m"
5. Implement Rolling Updates
Configure rolling updates to ensure zero-downtime deployments:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # Allow 1 extra pod during update
      maxUnavailable: 0   # Keep all pods running
Common Mistakes to Avoid
❌ Running Containers as Root
Problem: Security risk and violation of least privilege principle.
# Bad
FROM node:20
COPY . .
CMD ["node", "index.js"] # Runs as root
# Good
FROM node:20-alpine
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001 -G nodejs
WORKDIR /app
USER nodejs
COPY --chown=nodejs:nodejs . .
CMD ["node", "index.js"]
❌ Using latest Tag in Production
Problem: Makes deployments unpredictable and hard to rollback.
# Bad
image: my-app:latest
# Good
image: my-app:v1.2.3
# or
image: my-app:main-abc123def
❌ Not Setting Up Proper Logging
Problem: Can't debug issues in production.
// Bad: console.log everywhere
console.log('User logged in');
// Good: Structured logging
import logger from './logger';
logger.info('User logged in', {
userId: user.id,
timestamp: new Date().toISOString(),
action: 'login'
});
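The logger import above stands in for whichever structured logger you prefer (pino and winston are common choices); the essential idea is one JSON object per line on stdout, so log collectors can index the fields. A hand-rolled sketch of the same interface:

```javascript
// Minimal structured logger: one JSON document per line on stdout.
// Stand-in for a real library such as pino or winston.
const logger = {
  log(level, message, fields = {}) {
    const entry = { level, message, time: new Date().toISOString(), ...fields };
    console.log(JSON.stringify(entry));
    return entry;
  },
  info(message, fields) { return this.log('info', message, fields); },
  error(message, fields) { return this.log('error', message, fields); },
};

logger.info('User logged in', { userId: 42, action: 'login' });
```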
❌ Ignoring Docker Image Size
Problem: Slow deployments and wasted resources.
# Bad: 1.2GB image
FROM node:20
COPY . .
RUN npm install # Includes dev dependencies
# Good: ~150MB image
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
❌ Forgetting to Clean Up
Always clean up package manager caches:
RUN apt-get update && \
apt-get install -y --no-install-recommends python3 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
Advanced Patterns
Blue-Green Deployments
Maintain two identical production environments (blue and green) and switch traffic between them:
# Service can point to either blue or green
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue  # Switch to 'green' when ready
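Switching traffic is then just a selector update on the Service, for example with kubectl patch (assumes the Service above and a green Deployment already running):

```
kubectl patch service my-app \
  -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'
```

If the new version misbehaves, flipping the selector back to blue is an instant rollback.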
Canary Deployments
Roll out changes to a small subset of users first:
# 10% of traffic goes to new version
apiVersion: v1
kind: Service
metadata:
  name: my-app-canary
spec:
  selector:
    app: my-app
    track: canary
GitOps with ArgoCD
Use ArgoCD to sync your Kubernetes cluster with your Git repository automatically. Your Git repo becomes the single source of truth.
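A sketch of an ArgoCD Application that watches the k8s/ directory from this guide (the repo URL and target namespace are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/my-app.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true     # Delete resources removed from Git
      selfHeal: true  # Revert manual drift back to the Git state
```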
🚀 Pro Tips
- Cache Docker Layers Wisely: Copy package*.json before source code to leverage layer caching:
COPY package*.json ./
RUN npm ci
COPY . . # Source changes won't invalidate npm ci layer
- Use GitHub Actions Cache: Speed up workflows by caching dependencies:
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
- Monitor Everything: Integrate Prometheus and Grafana for observability:
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "3000"
  prometheus.io/path: "/metrics"
- Use Namespace Isolation: Separate environments using Kubernetes namespaces:
kubectl create namespace production
kubectl create namespace staging
- Implement Auto-Scaling: Let Kubernetes handle traffic spikes:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
- Tag Images with Git SHA: Makes rollbacks and debugging easier:
tags: |
  type=sha,prefix={{branch}}-
- Test Your Dockerfile Locally: Use Docker Compose for local development that mirrors production:
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
📌 Key Takeaways
- Start Simple: You don't need Kubernetes on day one. Docker + GitHub Actions can take you far.
- Automate Everything: Manual deployments are error-prone and slow. Let CI/CD handle repetitive tasks.
- Security First: Never commit secrets, always run as non-root, scan images for vulnerabilities.
- Monitor and Measure: You can't improve what you don't measure. Add logging and metrics from the start.
- Document Your Pipeline: Your future self (and teammates) will thank you.
- Iterate and Improve: DevOps is a journey, not a destination. Start with the basics and add complexity as needed.
Conclusion
Building a modern DevOps pipeline with GitHub Actions, Docker, and Kubernetes might seem daunting at first, but breaking it down into manageable pieces makes it achievable. Start by containerizing your application with Docker, automate your testing and builds with GitHub Actions, and scale intelligently with Kubernetes.
The key is to start small, learn from each deployment, and continuously improve your process. The pipeline we've built here is production-ready but also flexible enough to grow with your needs.
Remember: the goal isn't perfection on day one—it's shipping reliably and learning from each iteration. Your first pipeline might be simple, but with each deployment, you'll gain insights that make the next one better.
Now it's your turn. Take these patterns, adapt them to your project, and start shipping faster. The tools are here, the path is clear—all that's left is to begin.
Happy deploying! 🚀