Module 18: Container Networking
Containers have transformed how we deploy applications, but they bring unique networking challenges. This module covers Docker networking, Kubernetes networking, and service mesh architectures.
Estimated Time: 4-5 hours
Difficulty: Intermediate to Advanced
Prerequisites: Module 10 (NAT), Module 13 (Load Balancing)
18.1 Container Networking Fundamentals
The Challenge
Each container needs:
Its own network namespace (isolated network stack)
An IP address
Ability to communicate with other containers
(Sometimes) Access to the outside world
Linux Network Namespaces
Containers use Linux namespaces for isolation:
┌─────────────────────────────────────────────────────────┐
│ Host Machine │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Container A │ │ Container B │ │
│ │ ┌───────────┐ │ │ ┌───────────┐ │ │
│ │ │ Network │ │ │ │ Network │ │ │
│ │ │ Namespace │ │ │ │ Namespace │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ eth0 │ │ │ │ eth0 │ │ │
│ │ │ 172.17.0.2│ │ │ │ 172.17.0.3│ │ │
│ │ └───────────┘ │ │ └───────────┘ │ │
│ └────────┬────────┘ └────────┬────────┘ │
│ │ │ │
│ └───────────┬───────────┘ │
│ │ │
│ ┌────────┴────────┐ │
│ │ docker0 bridge │ │
│ │ 172.17.0.1 │ │
│ └────────┬────────┘ │
│ │ │
│ ┌────────┴────────┐ │
│ │ Host eth0 │ │
│ │ 192.168.1.100 │ │
│ └─────────────────┘ │
└─────────────────────────────────────────────────────────┘
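You can reproduce this plumbing by hand with iproute2. The sketch below (run as root; the names "demo"/"veth-*" and the 10.200.0.0/24 subnet are arbitrary choices) is roughly what Docker's bridge driver automates for you:

# Create an isolated network namespace and wire it to the host
ip netns add demo
ip link add veth-host type veth peer name veth-demo   # a virtual "cable"
ip link set veth-demo netns demo                      # move one end inside

ip addr add 10.200.0.1/24 dev veth-host               # host side
ip link set veth-host up

ip netns exec demo ip addr add 10.200.0.2/24 dev veth-demo
ip netns exec demo ip link set veth-demo up
ip netns exec demo ip link set lo up
ip netns exec demo ip route add default via 10.200.0.1

ip netns exec demo ping -c 1 10.200.0.1               # namespace <-> host works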
18.2 Docker Networking Modes
1. Bridge Network (Default)
Containers connect to a virtual bridge.
# Create container on default bridge
docker run -d nginx
# Inspect network
docker network inspect bridge
┌───────────────────────────────────────────────────────────┐
│ docker0 (172.17.0.1) │
│ │ │
│ ┌───────────────────────┼───────────────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────┐ ┌──────┐ ┌──────┐ │
│ │nginx │ │redis │ │mysql │ │
│ │.0.2 │ │.0.3 │ │.0.4 │ │
│ └──────┘ └──────┘ └──────┘ │
└───────────────────────────────────────────────────────────┘
Container-to-Container: Direct via bridge
Container-to-Internet: NAT through docker0
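If you're curious what "NAT through docker0" looks like on the host, Docker installs a masquerade rule for the bridge subnet. A quick way to spot it (output is illustrative; exact rules vary by Docker version and distro):

sudo iptables -t nat -L POSTROUTING -n
# Look for a line like:
#   MASQUERADE  all  --  172.17.0.0/16  0.0.0.0/0
# Traffic leaving the bridge subnet is source-NATed to the host's IP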
2. User-Defined Bridge
An improvement over the default bridge: containers on a user-defined bridge get automatic DNS resolution by container name.
# Create custom network
docker network create myapp
# Run containers on it
docker run -d --name web --network myapp nginx
docker run -d --name api --network myapp node sleep infinity  # keep the container alive
# Containers can reach each other by name!
docker exec web ping api  # "api" resolves via Docker's embedded DNS
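If the application image doesn't ship ping or curl, a throwaway debugging container attached to the same network works just as well (nicolaka/netshoot, also used in section 18.10, bundles the usual tools):

docker run --rm --network myapp nicolaka/netshoot nslookup api   # embedded DNS answers
docker run --rm --network myapp nicolaka/netshoot curl -sI http://web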
3. Host Network
Container shares host’s network stack.
docker run --network host nginx
┌────────────────────────────────────────┐
│ Host Machine │
│ │
│ Container uses host's eth0 directly │
│ No NAT, no bridge │
│ Port 80 in container = Port 80 on host│
│ │
└────────────────────────────────────────┘
Use Case: Maximum network performance (no overhead)
Drawback: No port mapping, no network isolation; container ports can collide with services already bound on the host
4. None Network
Container has no network connectivity.
docker run --network none alpine
Use Case: Maximum isolation, batch processing
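Easy to verify: the only interface inside a none-network container is loopback:

docker run --rm --network none alpine ip addr
# Prints just "1: lo: <LOOPBACK,UP,LOWER_UP> ..." - no eth0 at all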
5. Overlay Network (Multi-Host)
Spans multiple Docker hosts (Docker Swarm).
┌─────────────────┐ ┌─────────────────┐
│ Host 1 │ │ Host 2 │
│ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ Container A │ │ │ │ Container B │ │
│ │ 10.0.0.2 │ │ │ │ 10.0.0.3 │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │
│ Overlay Network (VXLAN tunnels) │
│ └────────┼─────┼────────┘ │
└─────────────────┘ └─────────────────┘
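A minimal setup sketch, assuming a Swarm has already been initialized with docker swarm init:

# On a manager node: create an overlay network spanning the cluster
docker network create -d overlay --attachable my-overlay
# Replicas may land on different hosts yet share one subnet
docker service create --name web --replicas 2 --network my-overlay nginx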
18.3 Port Publishing
Expose container ports to the host:
# Map host 8080 to container 80
docker run -p 8080:80 nginx
# Map to specific interface
docker run -p 127.0.0.1:8080:80 nginx
# Random host port
docker run -p 80 nginx
docker port <container>  # See assigned port
Port Publishing Flow
External Request → Host:8080
│
▼ (iptables DNAT)
Container:80 (172.17.0.2:80)
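That DNAT rule is ordinary iptables, visible in the DOCKER chain on the host (output is illustrative):

sudo iptables -t nat -L DOCKER -n
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:172.17.0.2:80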
18.4 Kubernetes Networking Model
Kubernetes has specific networking requirements:
All Pods can communicate with all other Pods without NAT
All Nodes can communicate with all Pods without NAT
The IP a Pod sees itself as is the same IP others see it as
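A quick way to see this flat, NAT-free Pod network in action (pod names and IPs below are placeholders, and the source Pod's image must include ping):

kubectl get pods -o wide                      # note Pod IPs across nodes
kubectl exec pod-a -- ping -c 1 10.244.2.3    # reach pod-b on another node by Pod IP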
Pod Networking
Each Pod gets a unique IP:
┌─────────────────────────────────────────────────────────┐
│ Node │
│ │
│ ┌────────────────────────┐ ┌────────────────────────┐ │
│ │ Pod A │ │ Pod B │ │
│ │ ┌───────┬───────┐ │ │ ┌───────────────┐ │ │
│ │ │ nginx │ sidecar│ │ │ │ redis │ │ │
│ │ └───┬───┴───┬───┘ │ │ └───────┬───────┘ │ │
│ │ │ lo │ │ │ │ │ │
│ │ └───┬───┘ │ │ ┌─────┘ │ │
│ │ eth0: 10.244.1.5 │ │ eth0: 10.244.1.6 │ │
│ └──────────────┬──────────┘ └──────────┬────────────┘ │
│ │ │ │
│ └──────────┬─────────────┘ │
│ │ │
│ ┌───────┴───────┐ │
│ │ CNI Plugin │ │
│ │ (Calico,etc) │ │
│ └───────────────┘ │
└─────────────────────────────────────────────────────────┘
Containers in the same Pod share network namespace (can use localhost).
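Sharing a namespace is easy to confirm: from any container in the Pod, the other containers' ports answer on localhost. A sketch with assumed names (a Pod "mypod" with an nginx container on port 80 and a "sidecar" container):

kubectl exec mypod -c sidecar -- curl -s http://localhost:80   # hits the nginx container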
18.5 Kubernetes Services
Services provide stable endpoints for Pods.
ClusterIP (Default)
Internal cluster access only:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
┌───────────────────────────────┐
│ Service: my-service │
│ ClusterIP: 10.96.50.100 │
│ Port: 80 │
└───────────────┬───────────────┘
│
┌──────────────────┼──────────────────┐
│ │ │
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Pod 1 │ │ Pod 2 │ │ Pod 3 │
│ :8080 │ │ :8080 │ │ :8080 │
└──────────┘ └──────────┘ └──────────┘
How it works: kube-proxy configures iptables/IPVS rules to DNAT ClusterIP to Pod IPs.
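In iptables mode you can see those rules on any node; kube-proxy tags its chains with the service name in a comment (output is illustrative):

sudo iptables -t nat -L KUBE-SERVICES -n | grep my-service
# KUBE-SVC-...  tcp  --  0.0.0.0/0  10.96.50.100  /* default/my-service */ tcp dpt:80
# That chain then DNATs to one Pod's 8080, picked with random probability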
NodePort
Exposes service on each node’s IP:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # must be in 30000-32767
External → Node1:30080 ─┐
External → Node2:30080 ─┼──→ Service ──→ Any Pod
External → Node3:30080 ─┘
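Reaching it requires nothing more than any node's address (192.168.1.10 below is a placeholder node IP; the Pod need not run on that node):

curl http://192.168.1.10:30080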
LoadBalancer
Provisions cloud load balancer:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
Internet → Cloud LB (public IP) → NodePort → Service → Pods
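After applying the manifest, watch EXTERNAL-IP move from <pending> to the provisioned address (output is illustrative):

kubectl get svc my-service
# NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# my-service   LoadBalancer   10.96.50.100   <pending>     80:31234/TCP   10s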
Headless Service
Direct Pod IPs, no load balancing:
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  clusterIP: None   # Headless!
  selector:
    app: my-app
DNS returns all Pod IPs. Useful for StatefulSets.
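You can see the difference with a lookup from inside the cluster: a normal Service name resolves to its single ClusterIP, while a headless one returns an A record per ready Pod (output is illustrative):

kubectl run dns-test -it --rm --image=nicolaka/netshoot -- nslookup my-headless
# Name:      my-headless.default.svc.cluster.local
# Addresses: 10.244.1.7, 10.244.2.4, ...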
18.6 CNI (Container Network Interface)
CNI plugins implement Kubernetes networking:
Popular CNI Plugins
Plugin        Features
Calico        Network policies, BGP, IPIP tunnels
Flannel       Simple overlay, VXLAN
Cilium        eBPF-based, advanced observability
Weave         Mesh overlay, encryption
AWS VPC CNI   Native VPC IPs for pods
Calico Example
┌────────────────────────────────────────────────────────────┐
│ Cluster │
│ │
│ Node 1 Node 2 │
│ ┌────────────────┐ ┌────────────────┐ │
│ │ Pod: 10.244.1.5│ │ Pod: 10.244.2.3│ │
│ └───────┬────────┘ └───────┬────────┘ │
│ │ │ │
│ ┌──────┴──────┐ ┌──────┴──────┐ │
│ │ Calico Agent│ │ Calico Agent│ │
│ │ (Felix) │ │ (Felix) │ │
│ └──────┬──────┘ └──────┬──────┘ │
│ │ BGP peering │ │
│ └────────────┬──────────────────────┘ │
│ │ │
│ Routes learned via BGP │
│ Node1 knows: 10.244.2.0/24 via Node2 │
│ Node2 knows: 10.244.1.0/24 via Node1 │
└────────────────────────────────────────────────────────────┘
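On a Calico node running in BGP mode, those learned routes show up in the ordinary routing table, installed by the BIRD daemon (output is illustrative; the next hop is the peer node's IP):

ip route | grep 10.244
# 10.244.2.0/24 via 192.168.1.11 dev eth0 proto bird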
18.7 Network Policies
Control traffic between Pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 5432
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: backend
┌─────────────────────────────────────────┐
│ Network Policy: db-policy               │
│ Only pods with app=backend may connect  │
│ to app=database on TCP port 5432        │
└─────────────────────────────────────────┘

┌──────────────┐               ┌──────────────┐
│   Frontend   │               │   Backend    │
│ app=frontend │               │ app=backend  │
└──────┬───────┘               └──────┬───────┘
       │  ╳ Blocked!                  │  ✓ Allowed
       └──────────────┬───────────────┘
                      ▼
              ┌───────────────┐
              │   Database    │
              │ app=database  │
              │     :5432     │
              └───────────────┘
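A smoke test of the policy, assuming hypothetical deployments named frontend and backend, a database Service named database, and images that ship nc:

kubectl exec deploy/backend  -- nc -zv -w 2 database 5432   # should connect
kubectl exec deploy/frontend -- nc -zv -w 2 database 5432   # should time out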
18.8 Ingress
Manage external access to services (L7):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
                 Internet
                    │
                    ▼
          ┌────────────────────┐
          │ Ingress Controller │
          │  (nginx, traefik)  │
          └─────────┬──────────┘
                    │
      ┌─────────────┼─────────────┐
      │             │             │
   /users        /orders      /products
      │             │             │
      ▼             ▼             ▼
┌──────────┐  ┌──────────┐  ┌───────────┐
│ user-svc │  │order-svc │  │product-svc│
└──────────┘  └──────────┘  └───────────┘
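Once DNS (or a local /etc/hosts entry) points api.example.com at the ingress controller's address, the path-based routing is easy to exercise:

curl http://api.example.com/users    # routed to user-service
curl http://api.example.com/orders   # routed to order-service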
18.9 Service Mesh
The Problem
As microservices grow, networking becomes complex:
Service discovery
Load balancing
Encryption (mTLS)
Observability (traces, metrics)
Retries, timeouts, circuit breakers
Service Mesh Solution
Add a sidecar proxy to every pod:
┌──────────────────────────────────────────────────────────┐
│ Pod │
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ App │◄─────────►│ Sidecar │◄────┐ │
│ │ Container │ localhost │ (Envoy) │ │ │
│ └─────────────┘ └─────────────┘ │ │
│ │ │
└──────────────────────────────────────────────────┼───────┘
│
All external traffic goes through sidecar
│
▼
Other Pods
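How the sidecar gets there varies by mesh. In Istio, for example, injection is typically switched on per namespace, after which newly created Pods automatically gain an Envoy container (my-app below is a placeholder deployment name):

kubectl label namespace default istio-injection=enabled
kubectl rollout restart deployment my-app   # recreate Pods to pick up the sidecar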
Popular Service Meshes
Mesh             Sidecar          Features
Istio            Envoy            Full-featured, complex
Linkerd          linkerd2-proxy   Lightweight, simple
Consul Connect   Envoy            HashiCorp ecosystem
AWS App Mesh     Envoy            AWS native
Istio Architecture
┌─────────────────────────────────────────────────────────────┐
│ Control Plane (Istiod) │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Pilot │ │ Citadel │ │ Galley │ │
│ │ (config) │ │ (certs) │ │ (validation)│ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└────────────────────────────┬────────────────────────────────┘
│ Config + Certs
▼
┌────────────────────────────────────────────────────────────┐
│ Data Plane │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Pod A │ │ Pod B │ │
│ │ ┌─────┐ ┌────┐ │ mTLS │ ┌────┐ ┌─────┐ │ │
│ │ │ App │←│Envoy│◄────────────►│Envoy│→│ App │ │ │
│ │ └─────┘ └────┘ │ │ └────┘ └─────┘ │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
18.10 Debugging Container Networks
Docker
# Inspect container network
docker inspect <container> | jq '.[0].NetworkSettings'
# Exec into container for debugging
docker exec -it <container> sh
# Inside: ping, curl, nc, etc.
# View bridge
docker network inspect bridge
# Test connectivity
docker run --rm nicolaka/netshoot ping <target>
Kubernetes
# Get pod IPs
kubectl get pods -o wide
# Describe service endpoints
kubectl describe svc <service-name>
# Debug pod networking
kubectl run debug --image=nicolaka/netshoot -it --rm -- bash
# Inside debug pod:
nslookup my-service
curl my-service:80
ping <pod-ip>
# Check CNI
kubectl logs -n kube-system -l k8s-app=calico-node
18.11 Key Takeaways
Pods Get IPs: Every Kubernetes Pod gets a unique, routable IP address.
Services Abstract Pods: Services provide stable endpoints; Pods come and go.
CNI Does the Work: CNI plugins implement the actual networking.
Service Mesh for Complex Needs: Use a service mesh for mTLS, observability, and traffic management.
Course Completion
Congratulations! You’ve completed the Networking Mastery course. You now have deep knowledge of:
IP addressing, subnetting, and CIDR
NAT and how private networks access the internet
Routing protocols and how packets find their way
DNS and domain name resolution
Load balancing and reverse proxies
Network troubleshooting tools
VPNs and secure tunneling
Firewalls and security groups
Container and Kubernetes networking
Practice Resources
Set up a home lab with VMs/containers
Get hands-on with AWS VPC
Deploy a Kubernetes cluster and explore networking
Capture and analyze packets with Wireshark