Without Containers (Traditional Setup):

```bash
# On production server:
1. Install Node.js 18.15.0 manually
2. Install PostgreSQL 14 manually
3. Configure environment variables
4. Clone your code from Git
5. Install dependencies: npm install
6. Start the app: npm start
7. Hope nothing breaks 🤞
```

Problems:
- Takes 2-3 hours to set up
- Different versions = different bugs
- New server = redo all steps
- Server updates might break your app
With Containers (Modern Setup):
```dockerfile
# Dockerfile (defines your container)

# Start with Node.js 18.15.0
FROM node:18.15.0

# Set the working directory
WORKDIR /app

# Copy dependency files and install dependencies
COPY package*.json ./
RUN npm install

# Copy app code
COPY . .

# App listens on port 3000
EXPOSE 3000

# Start the app
CMD ["npm", "start"]
```
```bash
# On any server with Docker:
docker build -t myblog:1.0 .        # Build the image (about a minute)
docker run -p 3000:3000 myblog:1.0  # Run the container (seconds)
```

Benefits:
- Works identically everywhere
- Starts in seconds
- Isolated from other apps
- Easy to update (new container = new version)
Production Server 1:
- Install Node.js 18
- Install nginx web server
- Install MongoDB 6
- Install Redis 7
- Deploy web app code
- Configure everything manually

Problem: Takes 3-4 hours, prone to human error

New developer joins:
- Spends 2 days setting up a local environment
- "It doesn't work on my machine!" ← wastes days of debugging
```bash
# On ANY server (dev, staging, production):
docker-compose up -d
```

Result:
- Starts 3 containers in about 10 seconds
- Works identically everywhere
- New developer: 5 minutes to run the entire stack locally
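The stack above (web app, MongoDB, Redis) could be described in a `docker-compose.yml` roughly like this. This is a minimal sketch: the service names, ports, and image tags are illustrative, not from the original setup.

```yaml
services:
  web:
    build: .            # Build from the Dockerfile in this repo
    ports:
      - "3000:3000"     # host:container
    depends_on:
      - mongo
      - redis

  mongo:
    image: mongo:6      # MongoDB 6, matching the manual setup above

  redis:
    image: redis:7      # Redis 7
```

One file, one command (`docker-compose up -d`), and every environment runs the same three containers.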
❌ Mistake 1: Thinking containers are just lightweight VMs
✅ Reality: Containers share the host OS kernel; VMs don't.

❌ Mistake 2: Storing data inside containers
✅ Reality: Containers are ephemeral (temporary). Use volumes for persistent data.

❌ Mistake 3: Running multiple apps in one container
✅ Reality: One container = one process (web server OR database, not both).

❌ Mistake 4: Using containers for everything
✅ Reality: Sometimes VMs are better (need a different OS, stronger isolation).
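Mistake 2 in practice: persistent data belongs in a volume, not in the container's writable layer. A sketch using Docker Compose (the volume name and database image are illustrative):

```yaml
services:
  db:
    image: postgres:14
    volumes:
      - pgdata:/var/lib/postgresql/data  # Data survives container removal

volumes:
  pgdata:  # Named volume managed by Docker, independent of any container
```

Delete and recreate the `db` container and the data in `pgdata` is still there.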
Use Containers When:
✅ You want fast deployment (seconds)
✅ You need to run many copies of the same app
✅ You want consistent environments (dev = production)
✅ Your app runs on Linux

Use VMs When:
✅ You need complete isolation (security, compliance)
✅ You need different operating systems (Windows + Linux on same hardware)
✅ You have legacy apps that can’t be containerized
✅ You need full control over the operating system
You own 20 shipping containers (your app containers)
You have 5 trucks (your servers)

Every day you manually:
- Decide which containers go on which trucks
- Check if containers fell off trucks (crashed)
- Put fallen containers back on trucks (restart)
- Tell customers which truck has their container
- Swap trucks when they're full

Result: A full-time job, errors, slowness
With Kubernetes (Automated):
Kubernetes = Smart logistics system

You tell Kubernetes:
- "I need 20 containers of my app running"
- "Each container needs 500 MB RAM"
- "Don't run more than 5 on one truck (server)"

Kubernetes automatically:
✅ Places containers on trucks (servers) optimally
✅ Monitors all containers
✅ Restarts crashed containers instantly
✅ Gives customers one address (myblog.com)
✅ Routes customers to healthy containers
✅ Replaces containers during updates (zero downtime)
✅ Adds/removes containers based on traffic

Result: You focus on your app, Kubernetes handles operations
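Those "orders" map almost directly onto a Deployment manifest. A sketch, assuming the `myblog:1.0` image from earlier; names and labels are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myblog
spec:
  replicas: 20                  # "I need 20 containers running"
  selector:
    matchLabels:
      app: myblog
  template:
    metadata:
      labels:
        app: myblog
    spec:
      containers:
        - name: myblog
          image: myblog:1.0
          resources:
            requests:
              memory: "500Mi"   # "Each container needs 500 MB RAM"
```

(The "no more than 5 per server" rule would be expressed with `topologySpreadConstraints`, omitted here for brevity.)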
Kubernetes (K8s) = An open-source container orchestration platform that automates deployment, scaling, and management of containerized applications.

Think of Kubernetes as: an operating system for your containers across many servers.

Core Features:
Self-Healing: Container crashed? Kubernetes restarts it automatically
Load Balancing: Distributes traffic across containers
Auto-Scaling: More traffic? Kubernetes adds containers. Less traffic? Removes them.
Rolling Updates: Update app without downtime
Service Discovery: Containers find each other automatically
Storage Orchestration: Attach storage to containers automatically
Your App = Passengers that need to travel
Container = Car (carries your app)

Kubernetes = Smart Transportation System:
- Monitors all cars (containers)
- Dispatches new cars when needed
- Removes cars when traffic is light
- Redirects passengers if a car breaks down
- Shows passengers one address (the airport) instead of 20 car locations
- Replaces old cars with new models (updates) while passengers keep arriving
Black Friday Sale (Without Kubernetes):

5:00 AM: Normal traffic (5 containers)
8:00 AM: Traffic increases (need 20 containers)

Your manual actions:
1. Notice the website is slow (user complaints pour in)
2. Log into the Azure portal
3. Manually create 15 new VMs (20 minutes)
4. Manually start 15 containers
5. Manually update the load balancer configuration
6. Total time: 45 minutes of website slowness

4:00 PM: Traffic decreases (back to 5 containers needed)
7. Manually stop 15 containers
8. Manually delete 15 VMs (to save money)
9. Total time: 30 minutes of manual work

Result: Lost sales, frustrated customers, exhausted you
Black Friday Sale (With Kubernetes):
5:00 AM: Normal traffic (5 containers)
8:00 AM: Traffic increases

Kubernetes automatically:
1. Detects high CPU usage (70%+)
2. Creates 15 new containers in 2 minutes
3. Distributes traffic across all 20 containers
4. Total time: 2 minutes, zero human intervention ✅

4:00 PM: Traffic decreases

Kubernetes automatically:
1. Detects low CPU usage (<30%)
2. Gracefully stops 15 containers (waits for in-flight requests to finish)
3. Scales back to 5 containers
4. Total time: 5 minutes, zero human intervention ✅

Result: Happy customers, maximized sales, you sleep peacefully
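This scale-out/scale-in behavior is configured with a HorizontalPodAutoscaler. A sketch matching the numbers in the scenario above; the Deployment name is illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myblog-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myblog
  minReplicas: 5        # Normal traffic
  maxReplicas: 20       # Black Friday peak
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # Add pods when average CPU exceeds ~70%
```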
Self-Managed Kubernetes (DIY):

Your responsibilities:
- Install Kubernetes on VMs (complex, 50+ steps)
- Configure networking, storage, and security
- Upgrade Kubernetes versions manually
- Monitor the control plane (master nodes)
- Pay for control plane VMs
- Fix control plane issues (3 AM outages)

Time investment: 40-80 hours/month
Azure Kubernetes Service (AKS) (Managed):
Your responsibilities:
- Click "Create AKS Cluster" in the Azure portal
- Deploy your containers

Azure's responsibilities:
✅ Installs and configures Kubernetes
✅ Manages the control plane (free!)
✅ Auto-upgrades Kubernetes
✅ Monitors control plane health 24/7
✅ Fixes control plane issues
✅ Provides enterprise features (security, compliance)

Time investment: 4-8 hours/month
Cost: The control plane is FREE; you only pay for worker nodes (the VMs that run your containers)
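The portal steps above can also be scripted with the Azure CLI. A sketch against a live subscription; the resource group and cluster names are illustrative:

```shell
# Create a resource group, then a small two-node AKS cluster
az group create --name myblog-rg --location eastus
az aks create --resource-group myblog-rg --name myblog-aks \
  --node-count 2 --generate-ssh-keys

# Fetch credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group myblog-rg --name myblog-aks
kubectl get nodes
```

This requires an authenticated Azure subscription, so it cannot be run standalone.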
kubelet: The agent that takes orders from the control plane and starts containers.
kube-proxy: Handles networking and load balancing between pods.
Container Runtime: Usually containerd (the same runtime Docker itself is built on).
[!IMPORTANT]
Pro Insight: The ‘Free’ Control Plane
In the standard AKS tier, the control plane is free. However, if you run a large cluster (100+ nodes), you should upgrade to the Uptime SLA tier ($0.10/cluster/hour). This gives you a financially backed 99.95% availability guarantee for the API server itself, with service credits if Azure misses it.
Running 20 Containers for a Web App:

Option 1: Traditional VMs (No Containers):
- 20 VMs × $50/month = $1,000/month
- Slow to scale (5-10 minutes to provision a new VM)
- Manual management required
Option 2: Plain Kubernetes (DIY):
- 3 Control Plane VMs × $50 = $150/month ← You pay for this
- 5 Worker VMs × $80 = $400/month ← You pay for this
- Total: $550/month
- You manage the control plane (time cost)
- Complex setup and maintenance
Option 3: Azure Kubernetes Service (AKS):
- Control Plane: $0/month ← Azure manages it for free!
- 5 Worker VMs × $80 = $400/month ← You only pay this
- Total: $400/month
- Azure manages the control plane
- Enterprise-grade security and updates
- Scales in seconds
- Integrates with Azure services
Winner: AKS saves $150/month + hundreds of hours of management time
[!WARNING]
Gotcha: System Node Pools
Every AKS cluster needs at least one “System Node Pool” to run Kubernetes itself (CoreDNS, Metrics Server). You cannot delete this pool or scale it to 0. It will always cost you money (usually 1-3 VMs).
[!TIP]
Jargon Alert: Pod vs Node

Node: A Virtual Machine (the house).
Pod: A running process/container (The tenant living in the house).
A single Node (VM) usually hosts many Pods.
A minimal Helm `Chart.yaml`:

```yaml
apiVersion: v2
name: myapp
description: A Helm chart for my application
type: application
version: 1.0.0       # Chart version
appVersion: "2.5.1"  # Application version
```
Q1: What is the difference between a Pod and a Deployment?
Answer:

Pod:
Smallest deployable unit in Kubernetes
One or more containers running together
Ephemeral (dies when node fails)
No self-healing
Deployment:
Manages a set of identical Pods (ReplicaSet)
Ensures desired number of Pods are running
Self-healing (recreates failed Pods)
Supports rolling updates and rollbacks
In production: Always use Deployments, never bare Pods.
Q2: Explain Kubernetes namespaces
Answer:

Namespaces = Virtual clusters within a physical cluster.

Use cases:
Environment separation: dev, staging, prod
Team isolation: team-a, team-b
Resource quotas: Limit CPU/memory per namespace
Default namespaces:
default: Default namespace for resources
kube-system: Kubernetes system components
kube-public: Public resources (readable by all)
Example:
```bash
kubectl create namespace production
kubectl get pods -n production
```
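The resource quotas mentioned above are themselves Kubernetes objects. A sketch of a per-namespace quota; the name and limits are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: prod-quota
  namespace: production
spec:
  hard:
    requests.cpu: "10"       # Total CPU all pods in the namespace may request
    requests.memory: 20Gi    # Total memory all pods may request
    pods: "50"               # Maximum number of pods in the namespace
```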
Q3: What is a Service in Kubernetes?
Answer:

Service = Stable network endpoint for a set of Pods.

Problem: Pods have dynamic IPs (they change on restart)
Solution: A Service provides a stable IP and DNS name

Types:
- ClusterIP (default): internal IP, reachable only inside the cluster
- NodePort: exposes the Service on a static port on every node
- LoadBalancer: provisions a cloud load balancer (an Azure Load Balancer on AKS)
- ExternalName: a DNS alias pointing to an external service
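A LoadBalancer Service might look like this. A sketch; the names and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myblog
spec:
  type: LoadBalancer      # On AKS, provisions an Azure Load Balancer
  selector:
    app: myblog           # Must exactly match the Pod labels
  ports:
    - port: 80            # Port the Service exposes
      targetPort: 3000    # Port the container listens on
```

Pods come and go, but `myblog` keeps the same cluster IP and DNS name.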
Q5: Explain the difference between Kubenet and Azure CNI
Answer:

| Feature | Kubenet | Azure CNI |
| --- | --- | --- |
| Pod IP | Private (10.244.x.x) | VNet IP (10.0.1.x) |
| IP Consumption | Low (NAT used) | High (1 IP per pod) |
| Performance | Slight overhead (NAT) | Direct routing (faster) |
| VNet Integration | No | Yes (pods directly in VNet) |
| Network Policies | Calico required | Native support |
| Use Case | Small clusters, IP conservation | Enterprise, VNet integration |
Recommendation: Use Azure CNI for production (better integration, performance).
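The network plugin is chosen at cluster creation time. A sketch with the Azure CLI; the resource group and cluster names are illustrative:

```shell
# Azure CNI: pods get IPs directly from the VNet subnet
az aks create --resource-group myblog-rg --name myblog-aks \
  --network-plugin azure

# Kubenet is the alternative when you need to conserve VNet IPs:
#   --network-plugin kubenet
```

Requires an authenticated Azure subscription; the plugin cannot be changed on an existing cluster without recreating it.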
Q6: How do you implement zero-downtime deployments in AKS?
Answer:

Strategy: Rolling Update with readiness probes
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # Max 1 pod down at a time
      maxSurge: 1        # Max 1 extra pod during the update
  template:
    spec:
      containers:
        - name: app
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
```
Process:
Create 1 new pod (v2)
Wait for readiness probe to pass
Terminate 1 old pod (v1)
Repeat until all pods are v2
Result: Always 4-6 pods running (never less than 4).
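Driving and verifying that rollout with kubectl might look like this (a sketch against a live cluster; the deployment and image names are illustrative):

```shell
# Trigger a rolling update to v2 of the container image
kubectl set image deployment/app app=myapp:2.0

# Watch the rollout; exits non-zero if it fails or stalls
kubectl rollout status deployment/app

# Roll back to the previous ReplicaSet if something is wrong
kubectl rollout undo deployment/app
```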
Service but no Response: The service is running, but you get a 504 timeout.
The Pro Check: Do the selectors in your Service YAML exactly match the labels in your Deployment YAML? If not, the Load Balancer is sending traffic into a black hole.
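A quick way to run that check (a sketch; the Service name is illustrative):

```shell
# If ENDPOINTS is empty, the Service selector matches zero pods
kubectl get endpoints myblog

# Compare the Service selector against the actual pod labels
kubectl get service myblog -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels
```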
Evicted Pods: Your pods are being killed randomly.
The Pro Check: Your Node is out of disk space or RAM. Check “Azure Monitor for Containers” to see which app is leaking memory.
[!TIP]
Pro Tool: Lens & k9s
While kubectl is the standard, Principal Engineers often use Lens (Desktop UI) or k9s (Terminal UI) to visualize cluster health in real-time. These tools make it instantly obvious when a deployment is failing across multiple zones.