```python
# Cache-Aside (Lazy Loading)
def get_user(user_id):
    # 1. Check cache first
    cached = redis.get(f"user:{user_id}")
    if cached:
        return json.loads(cached)

    # 2. Cache miss - fetch from DB
    user = db.query("SELECT * FROM users WHERE id = ?", user_id)

    # 3. Store in cache for next time
    redis.setex(f"user:{user_id}", 3600, json.dumps(user))
    return user


# Write-Through
def update_user(user_id, data):
    # 1. Update database
    db.update("UPDATE users SET ... WHERE id = ?", data, user_id)

    # 2. Update cache immediately
    redis.setex(f"user:{user_id}", 3600, json.dumps(data))


# Cache Invalidation
def delete_user(user_id):
    db.delete("DELETE FROM users WHERE id = ?", user_id)
    redis.delete(f"user:{user_id}")
```
```sql
-- ❌ Slow: Full table scan
SELECT * FROM orders WHERE YEAR(created_at) = 2024;

-- ✅ Fast: Use index-friendly query
SELECT * FROM orders
WHERE created_at >= '2024-01-01' AND created_at < '2025-01-01';

-- ❌ Slow: SELECT *
SELECT * FROM users WHERE id = 1;

-- ✅ Fast: Select only needed columns
SELECT id, name, email FROM users WHERE id = 1;

-- Explain query plan
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 123;
```
```sql
-- Single column index
CREATE INDEX idx_users_email ON users(email);

-- Composite index (order matters!)
CREATE INDEX idx_orders_user_status ON orders(user_id, status);
-- This index helps:  WHERE user_id = 1 AND status = 'pending'
-- Also helps:        WHERE user_id = 1
-- Does NOT help:     WHERE status = 'pending'

-- Covering index (includes all needed columns)
CREATE INDEX idx_users_covering ON users(email) INCLUDE (name, created_at);
```
The N+1 query problem is one of the most common performance issues in applications: one query fetches a list of rows, then one additional query runs for each row to load its related data.
```python
# ❌ N+1 Problem: 1 query for orders + N queries for users
orders = Order.objects.all()               # 1 query
for order in orders:
    print(order.user.name)                 # N queries (one per order)

# ✅ Eager loading with select_related: a single query using a SQL JOIN
orders = Order.objects.select_related('user').all()
for order in orders:
    print(order.user.name)                 # No additional queries

# For many-to-many (or reverse foreign key) relationships, use prefetch_related,
# which issues 2 queries total: one for orders, one for the related items
orders = Order.objects.prefetch_related('items').all()
```
```javascript
// Code splitting with dynamic imports (React.lazy)
import { lazy } from 'react';

const HeavyComponent = lazy(() => import('./HeavyComponent'));

// Tree shaking - import only what you need
import { debounce } from 'lodash-es';  // ✅ Tree-shakeable
import _ from 'lodash';                // ❌ Imports everything
```
```sql
-- PostgreSQL: Enable slow query log
ALTER SYSTEM SET log_min_duration_statement = 1000;  -- Log queries > 1s

-- Analyze query execution plan
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders
WHERE user_id = 123
ORDER BY created_at DESC
LIMIT 10;

-- Look for:
-- - Seq Scan (table scan, consider index)
-- - High actual time
-- - Large rows removed by filter
-- - Sort operations on large datasets
```
```python
# Rule of thumb: connections = (core_count * 2) + effective_spindle_count
# For SSD: connections ≈ (cores * 2) + 1
from sqlalchemy import create_engine

engine = create_engine(
    DATABASE_URL,
    pool_size=20,        # Base pool size
    max_overflow=10,     # Extra connections allowed
    pool_timeout=30,     # Wait time for a connection (seconds)
    pool_recycle=1800,   # Recycle connections after 30 min
    pool_pre_ping=True,  # Test connection before use
)
```
Remember: “Premature optimization is the root of all evil” - Donald Knuth. Focus on clean code first, then optimize the actual bottlenecks. Always measure before and after optimization.
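One simple way to measure before and after a change is Python's built-in `timeit` module. The sketch below assumes a hypothetical `fetch_orders()` function standing in for whatever code path you are tuning.

```python
import timeit

def fetch_orders():
    # Hypothetical stand-in for the code path being optimized,
    # e.g. a query before and after adding an index or a cache.
    ...

# Run the same measurement before and after the change and compare the numbers.
elapsed = timeit.timeit(fetch_orders, number=100)
print(f"100 runs: {elapsed:.3f}s total, {elapsed / 100 * 1000:.2f} ms per call")
```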
Common Mistake: Optimizing based on assumptions. Always profile first, optimize the actual hot paths, and verify improvements with benchmarks.
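For finding the actual hot paths, the standard-library `cProfile` and `pstats` modules are usually enough for a first pass. This is a minimal sketch; `handle_request()` is an assumed entry point, not part of the examples above.

```python
import cProfile
import pstats

def handle_request():
    # Assumed application entry point to profile.
    ...

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Print the 10 functions with the most cumulative time - these are the hot paths.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```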