Distributed Networking Fundamentals
Networks are the backbone of distributed systems. Understanding how networks behave, fail, and recover is essential for building reliable systems.
Module Duration: 10-14 hours
Key Topics: TCP/UDP, RPC, Message Delivery Semantics, Failure Detection, Service Mesh
Interview Focus: Network partitions, idempotency, exactly-once delivery
The Network Reality
The First Fallacy: The network is reliable. It's not, and your system design must account for this.
┌─────────────────────────────────────────────────────────────────────────────┐
│ NETWORK FAILURE MODES │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ PACKET-LEVEL FAILURES: │
│ ────────────────────── │
│ ▸ Lost packets (congestion, queue overflow, bit errors) │
│ ▸ Delayed packets (routing changes, congestion) │
│ ▸ Duplicated packets (retransmissions) │
│ ▸ Reordered packets (multi-path routing) │
│ ▸ Corrupted packets (bit flips, usually caught by checksum) │
│ │
│ CONNECTION-LEVEL FAILURES: │
│ ────────────────────────── │
│ ▸ Connection refused (server down, port not listening) │
│ ▸ Connection reset (server crashed, firewall) │
│ ▸ Connection timeout (network partition, server overloaded) │
│ ▸ Half-open connections (one side thinks it's connected) │
│ │
│ NETWORK PARTITIONS: │
│ ─────────────────── │
│ ▸ Full partition (no traffic between segments) │
│ ▸ Partial partition (some nodes can communicate, others can't) │
│ ▸ Asymmetric partition (A→B works, B→A doesn't) │
│ │
│ REAL INCIDENTS: │
│ ─────────────── │
│ • AWS 2011: Network misconfiguration → hours of outage │
│ • Azure 2014: Network packet storm → global degradation │
│ • Google 2015: Router firmware bug → traffic blackhole │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
TCP vs UDP for Distributed Systems
TCP Guarantees and Limitations
┌─────────────────────────────────────────────────────────────────────────────┐
│ TCP GUARANTEES │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ WHAT TCP PROVIDES: │
│ ────────────────── │
│ ✓ Reliable delivery (retransmissions) │
│ ✓ In-order delivery (sequence numbers) │
│ ✓ Flow control (receiver can slow sender) │
│ ✓ Congestion control (fair bandwidth sharing) │
│ ✓ Connection-oriented (handshake, teardown) │
│ │
│ WHAT TCP DOES NOT PROVIDE: │
│ ────────────────────────── │
│ ✗ Message boundaries (it's a byte stream) │
│ ✗ Delivery confirmation to application │
│ ✗ Bounded delivery time │
│ ✗ Protection against network partitions │
│ │
│ TCP STATE MACHINE (simplified): │
│ ───────────────────────────── │
│ │
│ CLOSED ──SYN──► SYN_SENT ──SYN+ACK──► ESTABLISHED │
│ ▲ │ │
│ │ │ │
│ └─────────── FIN/ACK ◄──────────────────┘ │
│ │
│ DANGER: TIME_WAIT (2MSL ≈ 60-120 seconds) │
│ Port exhaustion under high connection churn! │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
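Because TCP exposes a byte stream with no message boundaries, distributed systems almost always add their own framing on top. Below is a minimal length-prefix framing sketch; the helper names (send_msg, recv_exact, recv_msg) are illustrative, not part of any standard library API.

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prefix every message with its 4-byte big-endian length.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # recv() may return fewer bytes than asked for; loop until we have n.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

This is the layer that higher-level protocols (HTTP/2 frames, gRPC messages) handle for you; if you write raw TCP services, you own it.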
UDP Use Cases
┌─────────────────────────────────────────────────────────────────────────────┐
│ UDP IN DISTRIBUTED SYSTEMS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ UDP CHARACTERISTICS: │
│ ──────────────────── │
│ ▸ Connectionless (no handshake) │
│ ▸ Unreliable (no retransmissions) │
│ ▸ Unordered (no sequence numbers) │
│ ▸ Message-oriented (preserves boundaries) │
│ ▸ Low overhead (8 bytes vs 20+ for TCP) │
│ │
│ USE CASES: │
│ ────────── │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Gossip protocols (Cassandra, Consul) │ │
│ │ • Tolerate lost messages (redundant propagation) │ │
│ │ • Low latency important │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Failure detection heartbeats │ │
│ │ • Missing heartbeat = node might be dead │ │
│ │ • Don't want retransmission (defeats purpose) │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ QUIC (HTTP/3) │ │
│ │ • Built on UDP for flexibility │ │
│ │ • Implements own reliability layer │ │
│ │ • Reduces head-of-line blocking │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Advanced: Kernel-Bypass Networking (RDMA & DPDK)
For most applications, the standard Linux TCP/IP stack is sufficient. However, at Staff/Principal level, when building ultra-high-performance systems (HFT platforms, or distributed databases like FaRM or eRPC), the OS kernel itself becomes the bottleneck.
The Problem: The “Kernel Tax”
Traditional networking suffers from three major overheads:
Context Switching: Moving between user space and kernel space for every send() or recv().
Data Copying: Packets are copied from the NIC to kernel buffers, then to user-space buffers (CPU-intensive).
Interrupt Handling: The CPU is interrupted for every incoming packet, thrashing caches.
1. RDMA (Remote Direct Memory Access)
RDMA allows one computer to read or write directly into the memory of another computer without involving either system’s OS or CPU.
┌─────────────────────────────────────────────────────────────────────────────┐
│ RDMA vs. TRADITIONAL TCP │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ TRADITIONAL TCP: RDMA (Kernel Bypass): │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ User App │ │ User App │ │ User App │ │ User App │ │
│ └────┬─────┘ └────▲─────┘ └────┬─────┘ └────▲─────┘ │
│ │ (Copy) │ (Copy) │ │ │
│ ┌────▼─────┐ ┌────┴─────┐ │ (Zero Copy) │ │
│ │ Kernel │ │ Kernel │ │ │ │
│ └────┬─────┘ └────▲─────┘ ┌────▼─────┐ ┌────┴─────┐ │
│ │ │ │ NIC │──────► NIC │ │
│ ┌────▼─────┐ ┌────┴─────┐ │ (RDMA) │ │ (RDMA) │ │
│ │ NIC │──────► NIC │ └──────────┘ └──────────┘ │
│ └──────────┘ └──────────┘ │
│ │
│ Latency: ~50-100μs Latency: ~1-5μs │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Key RDMA Verbs:
SEND/RECEIVE: Two-sided (both CPUs are briefly involved).
READ/WRITE: One-sided (the remote CPU is never even notified). This is the fastest but hardest to program.
2. DPDK (Data Plane Development Kit)
DPDK moves the entire networking stack into user space.
Poll-Mode Drivers: Instead of waiting for interrupts, the CPU constantly "polls" the NIC for new packets.
Hugepages: Reduces TLB misses by using 2MB or 1GB pages instead of the default 4KB pages.
Zero-Copy: Applications read directly from the NIC's ring buffers.
Comparison: When to Bypass the Kernel?
| Feature | Standard TCP/IP | DPDK | RDMA (RoCE/InfiniBand) |
|---|---|---|---|
| Complexity | Low (standard sockets) | High (special libraries) | Very high (hardware-dependent) |
| Latency | 50μs - 1ms | 10μs - 50μs | < 5μs |
| CPU Usage | High (kernel overhead) | Medium (polling) | Low (offloaded to NIC) |
| Hardware | Anything | Standard NICs | RDMA-capable NICs (RNICs) |
Staff Tip: Only reach for RDMA/DPDK if your p99 latency requirements are sub-100 microseconds. For most “at-scale” systems, optimizing your serialization (Protobuf) and batching is more impactful than kernel bypass.
Hardware Offloading (SmartNICs & DPUs)
As we scale beyond 100Gbps networking, even Kernel-Bypass (DPDK) starts to consume significant CPU cycles just for processing packets. The modern solution is to move the entire Networking Data Plane into dedicated hardware.
1. The DPU (Data Processing Unit)
A DPU (or SmartNIC) is a “computer in front of the computer.” It contains its own ARM cores, memory, and specialized hardware accelerators.
AWS Nitro: Perhaps the most famous example. AWS moved VPC networking, EBS storage encryption, and management logic off the main Xeon CPU onto dedicated Nitro cards.
Offloaded Tasks:
Encryption (TLS/IPSec): Zero CPU cost for the application.
Storage Virtualization: NVMe-over-Fabrics offload.
Network Policy: Firewalls and security groups enforced in hardware.
2. Infrastructure-as-Code in Hardware
By using DPUs, cloud providers can offer “Bare Metal” instances that still have full VPC networking and EBS support, because the infrastructure logic lives on the DPU, not in the host OS.
| Level | Networking Logic | CPU Cost | Latency |
|---|---|---|---|
| Standard | Kernel stack | High | 100μs+ |
| Advanced | User space (DPDK) | Medium | 10μs+ |
| Principal | Hardware (DPU/Nitro) | Zero | < 2μs |
Staff Tip: When designing for "Cloud Native" scale, the DPU is the boundary that allows you to separate the Tenant workload from the Provider infrastructure.
Message Delivery Semantics
Critical Interview Topic: Understanding at-most-once, at-least-once, and exactly-once semantics is fundamental for distributed systems design.
At-Most-Once Delivery
┌─────────────────────────────────────────────────────────────────────────────┐
│ AT-MOST-ONCE DELIVERY │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ BEHAVIOR: Send message and don't retry on failure │
│ MESSAGE DELIVERED: 0 or 1 time │
│ │
│ Client Network Server │
│ │ │ │ │
│ │──── Request ──►│ │ │
│ │ │──── ✓ ──────►│ │
│ │ │ │ Process │
│ │ │◄─── Response─│ │
│ │◄─── Response ──│ │ │
│ │ │ │ │
│ │
│ FAILURE CASE: │
│ ───────────── │
│ │──── Request ──►│ │ │
│ │ │──── ✗ ───────│ (packet lost) │
│ │ │ │ │
│ │ (give up) │ │ MESSAGE NEVER DELIVERED │
│ │
│ USE CASES: │
│ ────────── │
│ • Metrics collection (losing a few is OK) │
│ • Log shipping (redundant logs cover gaps) │
│ • UDP-based protocols │
│ │
│ IMPLEMENTATION: No retries, fire-and-forget │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
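At-most-once is the simplest semantic to implement: send once and never retry. A minimal fire-and-forget sketch over UDP, in the spirit of the metrics use case above; the collector address is a made-up placeholder.

```python
import json
import socket
import time

METRICS_ADDR = ("127.0.0.1", 8125)  # placeholder collector address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def emit_metric(name: str, value: float) -> None:
    """Fire-and-forget: delivered 0 or 1 times, never retried."""
    datagram = json.dumps({"name": name, "value": value, "ts": time.time()}).encode()
    try:
        sock.sendto(datagram, METRICS_ADDR)
    except OSError:
        pass  # at-most-once: a lost metric is acceptable, so swallow send errors

emit_metric("requests.latency_ms", 42.0)
```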
At-Least-Once Delivery
┌─────────────────────────────────────────────────────────────────────────────┐
│ AT-LEAST-ONCE DELIVERY │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ BEHAVIOR: Retry until acknowledged │
│ MESSAGE DELIVERED: 1 or more times │
│ │
│ Client Network Server │
│ │ │ │ │
│ │──── Request ──►│──── ✓ ──────►│ │
│ │ │ │ Process │
│ │ │◄─── ✗ ───────│ (response lost) │
│ │ (timeout) │ │ │
│ │ │ │ │
│ │──── Retry ────►│──── ✓ ──────►│ │
│ │ │ │ Process AGAIN! │
│ │ │◄─── Response─│ │
│ │◄─── Response ──│ │ │
│ │
│ PROBLEM: DUPLICATES! │
│ ──────────────────── │
│ If server processes request but response is lost: │
│ • Client retries │
│ • Server processes AGAIN │
│ • Side effects happen twice! │
│ │
│ EXAMPLES OF DUPLICATE PROBLEMS: │
│ ─────────────────────────────── │
│ • Payment processed twice → double charge │
│ • Email sent twice → spam │
│ • Counter incremented twice → wrong count │
│ │
│ SOLUTION: Idempotency (see next section) │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
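On the client side, at-least-once is simply "retry until acknowledged." A minimal sketch; send_request is an assumed transport callable that raises TimeoutError when no response arrives. Note that the client cannot distinguish "request lost" from "response lost", which is exactly why the server may process duplicates.

```python
import random
import time

class DeliveryFailed(Exception):
    pass

def deliver_at_least_once(send_request, message, max_attempts=5, base_delay=0.2):
    """Retry until the server acknowledges; the server may see duplicates."""
    for attempt in range(max_attempts):
        try:
            ack = send_request(message)   # assumed to raise TimeoutError on timeout
            if ack is not None:
                return ack                # delivered at least once (maybe more!)
        except TimeoutError:
            pass  # the request OR the response was lost -- we cannot tell which
        # Exponential backoff with jitter before the next attempt
        time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
    raise DeliveryFailed("no acknowledgement after retries")
```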
Exactly-Once Delivery
┌─────────────────────────────────────────────────────────────────────────────┐
│ EXACTLY-ONCE DELIVERY │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ THE TRUTH: True exactly-once is IMPOSSIBLE in distributed systems │
│ What we achieve is "effectively exactly-once" │
│ │
│ APPROACHES: │
│ │
│ 1. IDEMPOTENCY KEY │
│ ───────────────── │
│ Client Server │
│ │ │ │
│ │─ Request + Key ────►│ │
│ │ │ Check: seen this key? │
│ │ │ NO → Process, store result │
│ │ │ YES → Return stored result │
│ │◄── Response ────────│ │
│ │
│ ```python │
│ def process_payment(idempotency_key: str, amount: float): │
│ # Check if already processed │
│ existing = cache.get(f"payment:{idempotency_key}") │
│ if existing: │
│ return existing # Return cached result │
│ │
│ # Process payment │
│ result = payment_gateway.charge(amount) │
│ │
│ # Store result atomically │
│ cache.set(f"payment:{idempotency_key}", result, ttl=86400)  # 24 hours │
│ return result │
│ ``` │
│ │
│ 2. TRANSACTIONAL OUTBOX │
│ ──────────────────────── │
│ • Write data + event in same database transaction │
│ • Separate process reads outbox, publishes events │
│ • Deletes from outbox after confirmed delivery │
│ │
│ 3. KAFKA TRANSACTIONS (EOS) │
│ ───────────────────────── │
│ • Producer writes to multiple partitions atomically │
│ • Consumers read only committed messages │
│ • Kafka handles deduplication internally │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
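The transactional outbox from the box above is straightforward to sketch. Here is a minimal illustration with SQLite standing in for the service's database; the table names and the publish callable are assumptions for the example, not a prescribed schema.

```python
import json
import sqlite3

db = sqlite3.connect("orders.db")
db.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, amount REAL)")
db.execute("CREATE TABLE IF NOT EXISTS outbox (seq INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")

def create_order(order_id: str, amount: float) -> None:
    # Business row and outbox event commit in ONE transaction:
    # either both exist or neither does, so no event is lost or orphaned.
    with db:
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        db.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (json.dumps({"type": "order_created", "id": order_id, "amount": amount}),),
        )

def relay_outbox(publish) -> None:
    # Separate relay process: publish first, delete only after the broker confirms.
    # Crashing between the two steps causes a re-publish on the next run,
    # i.e. at-least-once delivery -- consumers still need to be idempotent.
    rows = db.execute("SELECT seq, payload FROM outbox ORDER BY seq").fetchall()
    for seq, payload in rows:
        publish(payload)  # assumed to raise on failure
        with db:
            db.execute("DELETE FROM outbox WHERE seq = ?", (seq,))
```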
Global Traffic Management: Anycast vs. GSLB
When your system is distributed across the planet, the first problem is: How do I get the user to the nearest healthy datacenter?
1. DNS-based GSLB (Global Server Load Balancing)
The traditional way. The DNS server detects the user’s IP (or their resolver’s IP) and returns the IP of the closest DC.
Pros: Simple to implement.
Cons: DNS caching. Even with low TTLs, some resolvers or browsers cache results for minutes. If a DC fails, users may still be routed to it until the cache expires.
2. IP Anycast (The Modern Way)
Anycast allows multiple physical servers (in different parts of the world) to share the same IP address.
┌─────────────────────────────────────────────────────────────────────────────┐
│ HOW ANYCAST WORKS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ User in London [Internet (BGP)] User in Tokyo │
│ │ │ │ │
│ │ (Request to 1.1.1.1) │ (Request to 1.1.1.1) │ │
│ ▼ │ ▼ │
│ ┌──────────────┐ │ ┌──────────────┐│
│ │ London Edge │◄────────────────────┴─────────────────────►│ Tokyo Edge ││
│ │ (IP 1.1.1.1) │ │ (IP 1.1.1.1) ││
│ └──────────────┘ └──────────────┘│
│ │
│ BGP (Border Gateway Protocol) ensures that the user's packet takes the │
│ shortest "path" (AS hops) to the nearest router advertising that IP. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Benefits of Anycast:
Lowest Latency: Packets naturally flow to the closest network edge.
Instant Failover: If the London Edge goes down, BGP stops advertising that IP from there. The internet automatically routes the next packet to the next closest DC (e.g., Paris).
DDoS Mitigation: Anycast naturally spreads the load of a flood attack across many global points of presence (PoPs).
Comparison Matrix
| Feature | Geo-DNS | Anycast |
|---|---|---|
| Control | High (choose exact IP per user) | Lower (BGP decides the path) |
| Failover Speed | Slow (minutes, DNS TTL) | Fast (seconds, BGP convergence) |
| Complexity | Low | High (requires BGP / an Autonomous System) |
| Used By | Most mid-sized apps | Cloudflare, Google, AWS (Global Accelerator) |
Staff Tip: For a global, high-availability service, use Anycast for your entry point (Edge/LB) and use DNS as a secondary layer for steering traffic between specific backends.
Remote Procedure Calls (RPC)
RPC Fundamentals
┌─────────────────────────────────────────────────────────────────────────────┐
│ RPC ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ GOAL: Make remote calls look like local function calls │
│ (But remember: they're NOT the same!) │
│ │
│ ┌──────────────────────────────────────────────────────────────────────┐ │
│ │ CLIENT SERVER │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Client Code │ │ Server Code │ │ │
│ │ │ │ │ │ │ │
│ │ │ result = │ │ def get_user │ │ │
│ │ │ get_user() │ │ (id): │ │ │
│ │ └──────┬───────┘ └──────▲───────┘ │ │
│ │ │ │ │ │
│ │ ┌──────▼───────┐ ┌──────┴───────┐ │ │
│ │ │ Client Stub │ │ Server Stub │ │ │
│ │ │ (Generated) │ │ (Generated) │ │ │
│ │ │ │ │ │ │ │
│ │ │ • Serialize │ │ • Deserialize│ │ │
│ │ │ • Send │ │ • Call impl │ │ │
│ │ │ • Wait │ │ • Serialize │ │ │
│ │ └──────┬───────┘ └──────▲───────┘ │ │
│ │ │ │ │ │
│ │ ┌──────▼─────────────────────────────────────────┴───────┐ │ │
│ │ │ NETWORK │ │ │
│ │ │ (HTTP/2, TCP, Unix Socket) │ │ │
│ │ └────────────────────────────────────────────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Popular RPC Frameworks
gRPC: Google's high-performance RPC framework.

```protobuf
// user.proto - Define your service
syntax = "proto3";

service UserService {
  // Unary RPC
  rpc GetUser(GetUserRequest) returns (User);

  // Server streaming
  rpc ListUsers(ListUsersRequest) returns (stream User);

  // Client streaming
  rpc CreateUsers(stream User) returns (CreateUsersResponse);

  // Bidirectional streaming
  rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}

message GetUserRequest {
  string id = 1;
}

message User {
  string id = 1;
  string name = 2;
  string email = 3;
}
```
Pros:
Binary protocol (efficient)
Strong typing with Protocol Buffers
Streaming support
Generated client/server code
Cons:
Binary = harder to debug
Browser support requires grpc-web
Steeper learning curve
REST (HTTP + JSON): HTTP-based APIs with JSON payloads.

```python
# Client
import requests

response = requests.get(
    "https://api.example.com/users/123",
    headers={"Authorization": "Bearer token"}
)
user = response.json()

# Server (FastAPI)
from fastapi import FastAPI

app = FastAPI()

@app.get("/users/{user_id}")
async def get_user(user_id: str):
    user = await db.get_user(user_id)
    return user
```
Pros:
Human readable
Universal browser support
Easy to debug (curl, browser)
Extensive tooling
Cons:
Text-based = larger payloads
No built-in streaming
Schema enforcement is optional
Apache Thrift - Facebook's RPC framework.

```thrift
// user.thrift
struct User {
  1: string id,
  2: string name,
  3: string email,
}

service UserService {
  User getUser(1: string id),
  list<User> listUsers(),
}
```
Pros:
Multi-language support
Binary or JSON protocols
Mature and battle-tested
Cons:
Less active development
Smaller community than gRPC
RPC Failure Modes
Interview Question: "What can go wrong with an RPC call?"
┌─────────────────────────────────────────────────────────────────────────────┐
│ RPC FAILURE SCENARIOS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. REQUEST LOST │
│ ───────────── │
│ Client──►Network──X │
│ Result: Server never sees request │
│ Solution: Timeout + retry │
│ │
│ 2. REQUEST DELIVERED, SERVER CRASHED BEFORE PROCESSING │
│ ──────────────────────────────────────────────────── │
│ Client──►Server (crashes) │
│ Result: Request lost │
│ Solution: Timeout + retry │
│ │
│ 3. SERVER PROCESSED, RESPONSE LOST │
│ ───────────────────────────────── │
│ Client──►Server──►Processing──►Response──X │
│ Result: Client times out, retries │
│ Problem: WORK DONE TWICE! │
│ Solution: Idempotency │
│ │
│ 4. SERVER SLOW (not dead) │
│ ─────────────────────── │
│ Client──►Server (processing slowly)──►Timeout │
│ Result: Client thinks server is dead, retries │
│ Problem: Request processed multiple times! │
│ Solution: Idempotency + longer timeouts │
│ │
│ 5. SERVER OVERLOADED │
│ ───────────────── │
│ Client──►Server (queue full)──►429 / Connection refused │
│ Result: Request rejected │
│ Solution: Retry with backoff + circuit breaker │
│ │
│ KEY INSIGHT: │
│ ──────────── │
│ The client can NEVER know for sure if the server processed the request. │
│ This is the fundamental uncertainty of distributed systems. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Failure Detection
Heartbeat-Based Detection
┌─────────────────────────────────────────────────────────────────────────────┐
│ HEARTBEAT FAILURE DETECTION │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ BASIC HEARTBEAT: │
│ ──────────────── │
│ Node periodically sends "I'm alive" message │
│ If no heartbeat for X seconds → declare dead │
│ │
│ Timeline: │
│ ───────── │
│ Node A ──♥──────♥──────♥────────────────────────────────── │
│ t=0 t=1 t=2 t=3 t=4 t=5 │
│ │ │ │ │
│ timeout DEAD! actually dead │
│ │
│ PROBLEM: FALSE POSITIVES │
│ ──────────────────────── │
│ • Network delay: Heartbeat arrives late │
│ • GC pause: Node stopped for garbage collection │
│ • CPU spike: Node too busy to send heartbeat │
│ │
│ SOLUTIONS: │
│ │
│ 1. LONGER TIMEOUTS │
│ Tradeoff: Slower failure detection │
│ │
│ 2. MULTIPLE HEARTBEAT MISSES │
│ Rule: Dead after 3 consecutive misses │
│ │
│ 3. PHI ACCRUAL DETECTOR │
│ • Track heartbeat arrival times │
│ • Build statistical model of arrival intervals │
│ • Calculate probability node is dead │
│ • Adapt to network conditions │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Phi Accrual Failure Detector
```python
# Phi Accrual Failure Detector Implementation
import math
import time
from collections import deque


class PhiAccrualDetector:
    """
    Phi Accrual Failure Detector (Cassandra, Akka use this).

    Instead of a binary "dead/alive", outputs a suspicion level (phi).
    Higher phi = more suspicious that the node is dead.
    """

    def __init__(self, threshold: float = 8.0, window_size: int = 1000):
        self.threshold = threshold      # phi > threshold → consider dead
        self.window_size = window_size
        self.intervals = deque(maxlen=window_size)
        self.last_heartbeat = None

    def heartbeat(self):
        """Record a heartbeat arrival."""
        now = time.time()
        if self.last_heartbeat is not None:
            interval = now - self.last_heartbeat
            self.intervals.append(interval)
        self.last_heartbeat = now

    def phi(self) -> float:
        """
        Calculate the suspicion level.

        Returns infinity if no heartbeats have been received.
        Returns a value near 0 if a heartbeat just arrived.
        """
        if self.last_heartbeat is None or len(self.intervals) == 0:
            return float('inf')

        now = time.time()
        time_since_last = now - self.last_heartbeat

        # Calculate mean and variance of observed intervals
        mean = sum(self.intervals) / len(self.intervals)
        variance = sum((x - mean) ** 2 for x in self.intervals) / len(self.intervals)
        std_dev = math.sqrt(variance) if variance > 0 else mean / 4

        # Calculate phi using the normal distribution CDF:
        # P(X > time_since_last) where X ~ N(mean, std_dev)
        y = (time_since_last - mean) / std_dev
        probability = 1 - 0.5 * (1 + math.erf(y / math.sqrt(2)))

        # Convert the tail probability to phi
        if probability == 0:
            return float('inf')
        return -math.log10(probability)

    def is_available(self) -> bool:
        """Check if the node is considered available."""
        return self.phi() < self.threshold


# Usage
detector = PhiAccrualDetector(threshold=8.0)

# Simulate heartbeats
for _ in range(10):
    detector.heartbeat()
    time.sleep(1.0)  # normal 1-second heartbeat

print(f"Phi: {detector.phi()}")                 # low phi
print(f"Available: {detector.is_available()}")  # True

time.sleep(5.0)  # miss several heartbeats
print(f"Phi after delay: {detector.phi()}")     # high phi
print(f"Available: {detector.is_available()}")  # likely False
```
Gossip Protocols
How Gossip Works
┌─────────────────────────────────────────────────────────────────────────────┐
│ GOSSIP PROTOCOL │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ANALOGY: Like how rumors spread in a social network │
│ │
│ ALGORITHM: │
│ ────────── │
│ Every T seconds: │
│ 1. Pick random peer from known nodes │
│ 2. Exchange state information │
│ 3. Merge received state with local state │
│ │
│ EXAMPLE: Membership gossip │
│ ───────────────────────── │
│ │
│ Round 1: │
│ ┌───┐ ┌───┐ ┌───┐ ┌───┐ │
│ │ A │────►│ B │ │ C │ │ D │ │
│ │{A}│ │{B}│ │{C}│ │{D}│ │
│ └───┘ └───┘ └───┘ └───┘ │
│ │
│ Round 2: After gossip │
│ ┌───┐ ┌───┐ ┌───┐ ┌───┐ │
│ │ A │ │ B │────►│ C │ │ D │ │
│ │A,B│ │A,B│ │ C │ │ D │ │
│ └───┘ └───┘ └───┘ └───┘ │
│ │
│ After O(log N) rounds: All nodes know about all others │
│ │
│ PROPERTIES: │
│ ─────────── │
│ ✓ Scalable: O(log N) rounds to converge │
│ ✓ Fault-tolerant: Works with node failures │
│ ✓ Decentralized: No single point of failure │
│ ✓ Eventually consistent: All nodes converge │
│ │
│ USED BY: Cassandra, Consul, SWIM, Serf │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
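A toy simulation of the membership gossip shown above: each round, every node exchanges its membership set with one random peer, and full knowledge spreads in roughly O(log N) rounds. Purely illustrative, with no real networking or failures.

```python
import random

def gossip_until_converged(n_nodes: int = 16, seed: int = 1) -> int:
    """Simulate push-pull membership gossip; return the number of rounds to converge."""
    random.seed(seed)
    membership = {node: {node} for node in range(n_nodes)}  # each node knows only itself

    rounds = 0
    while any(len(known) < n_nodes for known in membership.values()):
        rounds += 1
        for node in range(n_nodes):
            peer = random.choice([p for p in range(n_nodes) if p != node])
            merged = membership[node] | membership[peer]   # exchange and merge state
            membership[node] = set(merged)
            membership[peer] = set(merged)
    return rounds

print(gossip_until_converged(16))   # typically converges in a handful of rounds (~log N)
```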
SWIM Protocol
┌─────────────────────────────────────────────────────────────────────────────┐
│ SWIM (Scalable Weakly-consistent Infection-style │
│ Membership) PROTOCOL │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ MEMBERSHIP DETECTION: │
│ ───────────────────── │
│ │
│ Every T seconds, node M: │
│ │
│ 1. DIRECT PROBE │
│ ┌───┐ ping ┌───┐ │
│ │ M │────────►│ J │ (random node J) │
│ └───┘ └───┘ │
│ ◄──ack──── │
│ │
│ If ack received: J is alive │
│ │
│ 2. INDIRECT PROBE (if no ack) │
│ ┌───┐ ping-req ┌───┐ ping ┌───┐ │
│ │ M │───────────►│ K │───────►│ J │ │
│ └───┘ └───┘ └───┘ │
│ ◄─────────────ack───────── │
│ │
│ Ask K nodes to probe J on M's behalf │
│ Reduces false positives from network issues │
│ │
│ 3. SUSPECT STATE │
│ If still no response: │
│ - Mark J as "suspect" │
│ - Gossip suspicion to other nodes │
│ - J has timeout to refute │
│ - If not refuted: mark as "dead" │
│ │
│ DISSEMINATION: │
│ ───────────── │
│ Piggyback membership updates on protocol messages │
│ No extra bandwidth for gossip! │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
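The probe sequence above can be sketched in a few lines. Here, ping and ping_req stand in for the real UDP messages (assumed callables returning True/False); a full SWIM implementation would also gossip the suspicion and give the target a refutation window before declaring it dead.

```python
import random

def swim_probe(target, members, ping, ping_req, k=3):
    """One SWIM protocol period from this node's point of view."""
    # 1. Direct probe
    if ping(target):
        return "alive"

    # 2. Indirect probe: ask up to k other members to ping the target for us.
    candidates = [m for m in members if m != target]
    for helper in random.sample(candidates, k=min(k, len(candidates))):
        if ping_req(helper, target):
            return "alive"   # reachable via someone else: likely a local network issue

    # 3. Nobody could reach it: mark as suspect (gossiped; target may still refute)
    return "suspect"
```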
Service Mesh and Sidecars
Modern Networking Patterns
┌─────────────────────────────────────────────────────────────────────────────┐
│ SERVICE MESH ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ TRADITIONAL: Library-based networking │
│ ───────────────────────────────────── │
│ ┌────────────────────────────────────────┐ │
│ │ APPLICATION │ │
│ │ ┌──────────────────────────────────┐ │ │
│ │ │ • Retry logic │ │ │
│ │ │ • Circuit breaker │ │ │
│ │ │ • Load balancing │ │ │
│ │ │ • Service discovery │ │ │
│ │ │ • TLS/mTLS │ │ ← Duplicated in every service │
│ │ │ • Tracing │ │ │
│ │ │ • Metrics │ │ │
│ │ └──────────────────────────────────┘ │ │
│ └────────────────────────────────────────┘ │
│ │
│ SERVICE MESH: Sidecar proxy handles networking │
│ ────────────────────────────────────────────── │
│ ┌────────────────────────────────────────────────────────────────────┐ │
│ │ POD │ │
│ │ ┌───────────────────┐ ┌───────────────────────────────────┐ │ │
│ │ │ APPLICATION │───►│ SIDECAR PROXY │ │ │
│ │ │ │ │ (Envoy, Linkerd-proxy) │ │ │
│ │ │ Just business │ │ │ │ │
│ │ │ logic! │ │ • Retries, timeouts │ │ │
│ │ │ │ │ • Circuit breaking │ │ │
│ │ └───────────────────┘ │ • Load balancing │ │ │
│ │ │ • mTLS (automatic) │ │ │
│ │ │ • Metrics & tracing │ │ │
│ │ │ • Rate limiting │ │ │
│ │ └───────────────────────────────────┘ │ │
│ └────────────────────────────────────────────────────────────────────┘ │
│ │
│ CONTROL PLANE: │
│ ────────────── │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ CONTROL PLANE (Istiod, Linkerd) │ │
│ │ • Service discovery │ │
│ │ • Traffic policies │ │
│ │ • Certificate management │ │
│ │ • Configuration distribution │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Service Mesh Comparison
| Feature | Istio | Linkerd | Consul Connect |
|---|---|---|---|
| Proxy | Envoy | Linkerd-proxy (Rust) | Envoy |
| Complexity | High | Low | Medium |
| Performance | Good | Best | Good |
| mTLS | Yes | Yes | Yes |
| Traffic Management | Advanced | Basic | Medium |
| Observability | Comprehensive | Good | Good |
| Best For | Large enterprises | Simplicity-focused teams | HashiCorp stack |
Interview Practice
Q1: Design a robust RPC client
Question: Design an RPC client that handles transient failures gracefully.

Key Points:
Retry with exponential backoff and jitter
Idempotency keys for non-idempotent operations
Circuit breaker to fail fast when service is down
Deadline propagation across service calls
Connection pooling for efficiency
```python
import asyncio
import random
from datetime import datetime

# Assumes CircuitBreaker, ConnectionPool, and the exception types
# (ServiceUnavailable, DeadlineExceeded, TransientError, PermanentError)
# are provided elsewhere in the codebase.


def backoff_with_jitter(attempt: int, base: float = 0.1, cap: float = 5.0) -> float:
    """Exponential backoff with full jitter."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))


class RobustRPCClient:
    def __init__(self, service_url: str):
        self.service_url = service_url
        self.circuit_breaker = CircuitBreaker(
            failure_threshold=5,
            recovery_timeout=30
        )
        self.connection_pool = ConnectionPool(max_size=10)

    async def call(
        self,
        method: str,
        payload: dict,
        idempotency_key: str = None,
        deadline: datetime = None
    ):
        # Fail fast if the downstream service is known to be unhealthy
        if self.circuit_breaker.is_open():
            raise ServiceUnavailable("Circuit breaker open")

        for attempt in range(3):
            # Re-check the propagated deadline before every attempt
            if deadline is not None:
                remaining_time = (deadline - datetime.now()).total_seconds()
                if remaining_time <= 0:
                    raise DeadlineExceeded("Request deadline passed")
            else:
                remaining_time = None  # no deadline → rely on transport defaults

            conn = await self.connection_pool.acquire()
            try:
                response = await conn.call(
                    method,
                    payload,
                    headers={"Idempotency-Key": idempotency_key},
                    timeout=remaining_time
                )
                self.circuit_breaker.record_success()
                return response
            except TransientError:
                self.circuit_breaker.record_failure()
                await asyncio.sleep(backoff_with_jitter(attempt))
            except PermanentError:
                raise  # don't retry
            finally:
                self.connection_pool.release(conn)

        raise ServiceUnavailable("All retries exhausted")
```
Q2: Explain the Two Generals Problem
Question: What is the Two Generals Problem, and how does it relate to distributed systems?

Answer: The Two Generals Problem demonstrates the impossibility of achieving consensus over an unreliable channel.

Scenario:
Two armies (A and B) must attack simultaneously to win
They communicate via messengers through enemy territory
Messengers can be captured (messages lost)
The Problem:
A sends “Attack at dawn” to B
A doesn’t know if B received it
B sends “Acknowledged” to A
B doesn’t know if A received the ack
A sends “Acknowledged your ack” to B
This continues infinitely!
Implications for Distributed Systems:
No protocol can guarantee agreement with unreliable messaging
This is why we need:
Timeouts and retries
Idempotency
Eventual consistency (accept uncertainty)
Consensus protocols (which rely on partial synchrony for liveness)
Q3: Design a distributed notification system
Question: Design a system that sends notifications to millions of users with at-least-once delivery.

Architecture:

┌─────────────────────────────────────────────────────────────────┐
│ │
│ Event Source ──► Kafka ──► Notification Workers ──► Channels │
│ (scaled horizontally) │
│ │
│ Channels: Push, Email, SMS, In-App │
│ │
└─────────────────────────────────────────────────────────────────┘
Key Design Decisions:
Message Queue: Kafka for durability and replay
Partitioning: By user_id for ordering per user
Deduplication: Store sent notification IDs in Redis (see the worker sketch below)
Retry Strategy: Exponential backoff per channel
Dead Letter Queue: For failed notifications
Rate Limiting: Per user and per channel
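A minimal worker sketch for the design above, assuming the kafka-python and redis client libraries: Kafka redelivers on failure (at-least-once), and a Redis SET NX key per notification ID suppresses duplicate sends. The topic name and the send_notification helper are illustrative placeholders.

```python
import json

import redis
from kafka import KafkaConsumer  # kafka-python package (assumed)

r = redis.Redis(host="localhost", port=6379)
consumer = KafkaConsumer(
    "notifications",                      # illustrative topic name
    bootstrap_servers="localhost:9092",
    group_id="notification-workers",
    enable_auto_commit=False,             # commit offsets only after handling
)

def send_notification(event: dict) -> None:
    ...  # call the push/email/SMS provider here (stub)

for msg in consumer:
    event = json.loads(msg.value)
    dedup_key = f"sent:{event['notification_id']}"

    # SET NX: only the first delivery of this notification ID claims the key.
    if r.set(dedup_key, 1, nx=True, ex=7 * 24 * 3600):
        try:
            send_notification(event)
        except Exception:
            r.delete(dedup_key)  # release the claim so a later retry can send it
            raise

    # Commit after handling: a crash before this line means Kafka redelivers,
    # and the Redis key prevents a duplicate send (at-least-once + dedup).
    consumer.commit()
```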
Key Takeaways
Networks Fail: Design for packet loss, delays, partitions, and duplicates. Never assume reliability.
Idempotency is Essential: Make operations safe to retry. Use idempotency keys for non-idempotent operations.
Choose Semantics Wisely: Understand at-most-once, at-least-once, and exactly-once. Pick based on your use case.
Detect Failures Carefully: False positives are dangerous. Use adaptive failure detection like Phi Accrual.
Next Steps