Migration Patterns
Migrating from a monolith to microservices is a journey, not a destination. Learn battle-tested patterns for safe, incremental migration.
Learning Objectives:
- Understand why incremental migration beats big-bang rewrites
- Master the Strangler Fig pattern
- Implement Branch by Abstraction
- Learn database migration strategies
- Handle dual-writes and data synchronization
Why Migrations Fail
┌─────────────────────────────────────────────────────────────────────────────┐
│ MIGRATION ANTI-PATTERNS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ❌ BIG BANG REWRITE │
│ ───────────────────── │
│ │
│ "Let's rewrite everything from scratch" │
│ │
│ Problems: │
│ • Takes years (Netscape took 3 years, lost market share) │
│ • No value delivered until complete │
│ • Original system still needs maintenance │
│ • Requirements change during rewrite │
│ • Team burnout │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ ❌ WRONG SERVICE BOUNDARIES │
│ ────────────────────────────── │
│ │
│ "Let's split by technical layer" │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ UI Service │ ──▶ │ API Service │ ──▶ │ DB Service │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ Problems: │
│ • One change requires updating all three │
│ • Tight coupling disguised as services │
│ • Distributed monolith │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ ❌ MIGRATING DATA TOO EARLY │
│ ──────────────────────────── │
│ │
│ "Let's first migrate all the data, then the code" │
│ │
│ Problems: │
│ • Data coupling remains │
│ • Complex synchronization │
│ • Risk of data inconsistency │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
The Strangler Fig Pattern
The pattern takes its name from the strangler fig, a plant that grows around a host tree and eventually replaces it entirely. In the same way, we gradually replace monolith functionality with microservices until the monolith can be retired.
┌─────────────────────────────────────────────────────────────────────────────┐
│ STRANGLER FIG PATTERN │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ PHASE 1: INTERCEPT │
│ ────────────────────── │
│ │
│ ┌───────────────────┐ │
│ Client ─▶│ Facade/Proxy │──────────────────────▶ Monolith │
│ └───────────────────┘ │
│ │
│ All traffic goes through facade (no code changes yet) │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ PHASE 2: EXTRACT │
│ ────────────────── │
│ │
│ ┌───────────────────┐ ┌──────────────────┐ │
│ Client ─▶│ Facade/Proxy │────────▶│ New User Svc │ │
│ │ │ └──────────────────┘ │
│ │ │ │
│ └───────────────────┘──────────────────────▶ Monolith │
│ │ (minus users) │
│ │ │
│ └──▶ /users/* → New Service │
│ /* → Monolith │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ PHASE 3: REPEAT │
│ ───────────────── │
│ │
│ ┌───────────────────┐ ┌──────────────────┐ │
│ Client ─▶│ Facade/Proxy │────────▶│ User Service │ │
│ │ │ ├──────────────────┤ │
│ │ │────────▶│ Order Service │ │
│ │ │ ├──────────────────┤ │
│ │ │────────▶│ Payment Service │ │
│ └───────────────────┘ └──────────────────┘ │
│ │ │
│ └──▶ Monolith (shrinking) │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ PHASE 4: RETIRE │
│ ────────────────── │
│ │
│ ┌───────────────────┐ ┌──────────────────┐ │
│ Client ─▶│ API Gateway │────────▶│ User Service │ │
│ │ │ ├──────────────────┤ │
│ │ │────────▶│ Order Service │ │
│ │ │ ├──────────────────┤ │
│ │ │────────▶│ Payment Service │ │
│ │ │ ├──────────────────┤ │
│ │ │────────▶│ Catalog Service │ │
│ └───────────────────┘ └──────────────────┘ │
│ │
│ Monolith is dead! 🎉 │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Implementation
// strangler-facade.js - The routing layer that enables migration
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');
const app = express();
// Configuration: which routes go where
const routingConfig = {
// New microservices
microservices: {
'/api/users': 'http://user-service:3001',
'/api/orders': 'http://order-service:3002',
'/api/payments': 'http://payment-service:3003'
},
// Everything else goes to monolith
monolith: 'http://monolith:8080'
};
// Feature flags for gradual rollout
const featureFlags = {
'users-service': { enabled: true, percentage: 100 },
'orders-service': { enabled: true, percentage: 50 }, // 50% traffic
'payments-service': { enabled: false, percentage: 0 } // Still in monolith
};
// Determine if request should go to microservice
function shouldRouteToMicroservice(serviceName, req) {
const flag = featureFlags[serviceName];
if (!flag || !flag.enabled) return false;
// Percentage rollout based on user ID or random
if (flag.percentage < 100) {
const userId = req.headers['x-user-id'] || req.ip;
const hash = hashCode(userId);
return (hash % 100) < flag.percentage;
}
return true;
}
function hashCode(str) {
let hash = 0;
for (let i = 0; i < str.length; i++) {
hash = ((hash << 5) - hash) + str.charCodeAt(i);
hash |= 0;
}
return Math.abs(hash);
}
// Dynamic routing middleware
app.use((req, res, next) => {
const path = req.path;
// Find matching microservice route
for (const [route, target] of Object.entries(routingConfig.microservices)) {
if (path.startsWith(route)) {
const serviceName = route.split('/').pop() + '-service';
if (shouldRouteToMicroservice(serviceName, req)) {
// Route to microservice
req.routingDecision = { target, type: 'microservice' };
} else {
// Route to monolith
req.routingDecision = { target: routingConfig.monolith, type: 'monolith' };
}
break;
}
}
// Default to monolith
if (!req.routingDecision) {
req.routingDecision = { target: routingConfig.monolith, type: 'monolith' };
}
// Log for analysis
console.log(`[Strangler] ${req.method} ${path} -> ${req.routingDecision.type}`);
next();
});
// One dynamic proxy handles every route, honouring the per-request routing
// decision (a single handler avoids duplicating proxy code per service)
app.use(createProxyMiddleware({
  target: routingConfig.monolith, // default target; router() overrides it per request
  changeOrigin: true,
  router: (req) => req.routingDecision.target,
  // Strip the route prefix only when forwarding to a microservice
  pathRewrite: (path, req) => {
    if (req.routingDecision.type !== 'microservice') return path;
    const prefix = Object.keys(routingConfig.microservices)
      .find((route) => path.startsWith(route));
    return prefix ? path.slice(prefix.length) || '/' : path;
  },
  onError: (err, req, res) => {
    // A production facade might retry against the monolith here instead
    console.error('[Strangler] Proxy error:', err.message);
    if (!res.headersSent) {
      res.status(502).json({ error: 'Upstream unavailable' });
    }
  }
}));
app.listen(3000);
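Because rollout percentages change frequently during a migration, it helps to adjust flags without redeploying the facade. A minimal sketch of a hypothetical admin endpoint (not part of the listing above; it must be registered before the proxy middleware, or the facade will proxy it too):
// Hypothetical: adjust rollout at runtime, e.g.
//   curl -X POST localhost:3000/admin/flags/orders-service \
//        -H 'Content-Type: application/json' -d '{"percentage": 75}'
app.post('/admin/flags/:service', express.json(), (req, res) => {
  const flag = featureFlags[req.params.service];
  if (!flag) return res.status(404).json({ error: 'unknown service' });
  Object.assign(flag, req.body); // e.g. { enabled: true, percentage: 75 }
  res.json(flag);
});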
Branch by Abstraction
Use when you need to replace a component inside the monolith before extracting it. Create an abstraction layer, swap implementations, then extract.
┌─────────────────────────────────────────────────────────────────────────────┐
│ BRANCH BY ABSTRACTION │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ STEP 1: IDENTIFY COMPONENT │
│ ───────────────────────────── │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ MONOLITH │ │
│ │ │ │
│ │ Code A ──┬──▶ Notification Module ◀──┬── Code C │ │
│ │ │ │ │ │
│ │ Code B ──┘ (direct dependency) └── Code D │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ STEP 2: CREATE ABSTRACTION │
│ ──────────────────────────── │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ MONOLITH │ │
│ │ │ │
│ │ Code A ──┬──▶ NotificationInterface ◀──┬── Code C │ │
│ │ │ │ │ │ │
│ │ Code B ──┘ ▼ └── Code D │ │
│ │ OldNotificationImpl │ │
│ │ (existing code) │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ STEP 3: IMPLEMENT NEW VERSION │
│ ──────────────────────────────── │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ MONOLITH │ │
│ │ ┌──────────────────┐ │ │
│ │ Code ──▶ NotificationInterface │ │ │
│ │ │ │ toggle │ │ │
│ │ ┌───────┴───────┐ ▼ ▼ │ │
│ │ │ │ │ │
│ │ OldNotificationImpl NewNotificationImpl │ │
│ │ (calls microservice) │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ └────────────────▶ Notification Microservice │
│ │
│ STEP 4: MIGRATE AND REMOVE │
│ ────────────────────────────── │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ MONOLITH │ │
│ │ │ │
│ │ Code ──▶ NotificationClient ─────────▶ Notification Microservice │ │
│ │ │ │
│ │ (Old implementation deleted) │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Implementation Example
// branch-by-abstraction.js
// STEP 1: Original code (tightly coupled)
// Before: Direct usage of notification module
class OrderService_Before {
async placeOrder(order) {
// ... order logic ...
// Direct dependency - hard to change
const emailer = require('./email-sender');
emailer.send(order.userId, 'Order placed', `Order ${order.id} confirmed`);
const sms = require('./sms-sender');
sms.send(order.phone, 'Your order is placed');
}
}
// =========================================================================
// STEP 2: Create abstraction interface
class NotificationService {
async sendOrderConfirmation(order) {
throw new Error('Not implemented');
}
async sendShippingUpdate(order, trackingInfo) {
throw new Error('Not implemented');
}
async sendDeliveryNotification(order) {
throw new Error('Not implemented');
}
}
// STEP 3: Implement with old code (same behavior)
class LegacyNotificationService extends NotificationService {
constructor() {
super();
this.emailer = require('./email-sender');
this.sms = require('./sms-sender');
}
// formatOrderEmail / formatShippingEmail are the pre-existing template helpers (elided here)
async sendOrderConfirmation(order) {
await this.emailer.send(
order.userId,
'Order Confirmation',
this.formatOrderEmail(order)
);
if (order.phone) {
await this.sms.send(order.phone, `Order ${order.id} confirmed!`);
}
}
async sendShippingUpdate(order, trackingInfo) {
await this.emailer.send(
order.userId,
'Your order has shipped',
this.formatShippingEmail(order, trackingInfo)
);
}
}
// STEP 4: Implement with new microservice
class MicroserviceNotificationService extends NotificationService {
constructor(serviceUrl) {
super();
this.baseUrl = serviceUrl;
}
async sendOrderConfirmation(order) {
await fetch(`${this.baseUrl}/notifications/order-confirmation`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
userId: order.userId,
orderId: order.id,
items: order.items,
total: order.total
})
});
}
async sendShippingUpdate(order, trackingInfo) {
await fetch(`${this.baseUrl}/notifications/shipping`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
userId: order.userId,
orderId: order.id,
trackingNumber: trackingInfo.number,
carrier: trackingInfo.carrier
})
});
}
}
// STEP 5: Feature flag to switch implementations
class NotificationServiceFactory {
static create() {
const useNewService = process.env.USE_NOTIFICATION_MICROSERVICE === 'true';
if (useNewService) {
return new MicroserviceNotificationService(
process.env.NOTIFICATION_SERVICE_URL
);
}
return new LegacyNotificationService();
}
}
// STEP 6: Updated order service using abstraction
class OrderService {
constructor() {
this.notificationService = NotificationServiceFactory.create();
}
async placeOrder(order) {
// ... order logic ...
// Now uses abstraction - can switch implementations
await this.notificationService.sendOrderConfirmation(order);
}
}
// Gradual rollout with comparison
class ComparingNotificationService extends NotificationService {
constructor() {
super();
this.legacy = new LegacyNotificationService();
this.modern = new MicroserviceNotificationService(
process.env.NOTIFICATION_SERVICE_URL
);
}
async sendOrderConfirmation(order) {
// Run both, compare results, use legacy as source of truth.
// Caveat: for side-effecting calls like notifications, point the new
// implementation at a sandbox, or customers receive duplicate messages.
const [legacyResult, modernResult] = await Promise.allSettled([
this.legacy.sendOrderConfirmation(order),
this.modern.sendOrderConfirmation(order)
]);
// Log differences for analysis
if (modernResult.status === 'rejected') {
console.error('Modern notification failed:', modernResult.reason);
}
// Return legacy result (source of truth during migration)
if (legacyResult.status === 'rejected') {
throw legacyResult.reason;
}
return legacyResult.value;
}
}
Database Migration Strategies
┌─────────────────────────────────────────────────────────────────────────────┐
│ DATABASE MIGRATION PATTERNS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ PATTERN 1: SHARED DATABASE (TEMPORARY) │
│ ────────────────────────────────────── │
│ │
│ ┌───────────┐ ┌───────────┐ │
│ │ Monolith │ │ New Svc │ │
│ └─────┬─────┘ └─────┬─────┘ │
│ │ │ │
│ └────────┬─────────┘ │
│ ▼ │
│ ┌───────────────┐ │
│ │ Shared DB │ ← Both read/write │
│ └───────────────┘ │
│ │
│ Quick start, but creates coupling. Use as stepping stone only. │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ PATTERN 2: DATABASE VIEW (READ-ONLY) │
│ ──────────────────────────────────── │
│ │
│ ┌───────────┐ ┌───────────┐ │
│ │ Monolith │ │ New Svc │ │
│ └─────┬─────┘ └─────┬─────┘ │
│ │ R/W │ Read-only │
│ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Main DB │──▶│ View │ │
│ └─────────────┘ └─────────────┘ │
│ │
│ New service reads from view, writes go to monolith via API. │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ PATTERN 3: DATA SYNCHRONIZATION │
│ ───────────────────────────────── │
│ │
│ ┌───────────┐ events ┌───────────┐ │
│ │ Monolith │──────────▶│ New Svc │ │
│ └─────┬─────┘ └─────┬─────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Main DB │ sync │ Service DB │ │
│ └─────────────┘ ─────▶ └─────────────┘ │
│ │
│ Change Data Capture (CDC) keeps databases in sync. │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ PATTERN 4: DATABASE PER SERVICE (TARGET STATE) │
│ ─────────────────────────────────────────────── │
│ │
│ ┌───────────┐ ┌───────────┐ │
│ │ Service A │ events │ Service B │ │
│ └─────┬─────┘◀─────────▶└─────┬─────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Database A │ │ Database B │ │
│ └─────────────┘ └─────────────┘ │
│ │
│ Complete data ownership. Communication via APIs/events. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
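Pattern 2 needs little more than a view and a read-only role. A minimal sketch, assuming a PostgreSQL monolith database (the table, view, and role names are illustrative):
// create-read-view.js - read-only access for the new service
const { Pool } = require('pg');
const db = new Pool({ connectionString: process.env.MONOLITH_DB_URL });

async function exposeReadOnlyView() {
  // Expose only the columns the new service actually needs
  await db.query(`
    CREATE OR REPLACE VIEW user_profiles_view AS
    SELECT id, email, display_name, created_at FROM users
  `);
  // The new service connects as a role that can only SELECT from the view
  await db.query('GRANT SELECT ON user_profiles_view TO user_service_ro');
}

exposeReadOnlyView().then(() => db.end()).catch(console.error);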
Change Data Capture Implementation
// cdc-sync.js - Database synchronization using CDC
const { Kafka } = require('kafkajs');
const { Pool } = require('pg');
// Debezium-style CDC connector (simplified)
class ChangeDataCaptureSync {
constructor(config) {
this.sourceDb = new Pool(config.source);
this.targetDb = new Pool(config.target);
this.kafka = new Kafka(config.kafka);
this.producer = this.kafka.producer();
this.consumer = this.kafka.consumer({ groupId: 'cdc-sync' });
this.lastPosition = null;
}
async startCapture() {
await this.producer.connect();
// Poll for changes (simplified; in production, use Debezium or similar,
// which also guards against overlapping polls)
setInterval(() => this.captureChanges().catch(console.error), 1000);
}
async captureChanges() {
// Using PostgreSQL logical replication slot
const changes = await this.sourceDb.query(`
SELECT * FROM pg_logical_slot_get_changes(
'cdc_slot',
NULL,
NULL,
'include-xids', '1',
'include-timestamp', '1'
)
`);
for (const change of changes.rows) {
const parsed = this.parseChange(change.data);
if (parsed) {
await this.producer.send({
topic: `cdc.${parsed.table}`,
messages: [{
key: parsed.key,
value: JSON.stringify({
operation: parsed.operation,
before: parsed.before,
after: parsed.after,
timestamp: parsed.timestamp
})
}]
});
}
}
}
async startSync() {
await this.consumer.connect();
await this.consumer.subscribe({ topic: /cdc\..*/ });
await this.consumer.run({
eachMessage: async ({ topic, message }) => {
const tableName = topic.replace('cdc.', '');
const change = JSON.parse(message.value.toString());
await this.applyChange(tableName, change);
}
});
}
async applyChange(tableName, change) {
const { operation, before, after } = change;
try {
switch (operation) {
case 'INSERT':
await this.targetDb.query(
`INSERT INTO ${tableName} SELECT * FROM json_populate_record(NULL::${tableName}, $1)`,
[after]
);
break;
case 'UPDATE':
const setClauses = Object.keys(after)
.map((key, i) => `${key} = $${i + 1}`)
.join(', ');
const values = Object.values(after);
await this.targetDb.query(
`UPDATE ${tableName} SET ${setClauses} WHERE id = $${values.length + 1}`,
[...values, after.id]
);
break;
case 'DELETE':
await this.targetDb.query(
`DELETE FROM ${tableName} WHERE id = $1`,
[before.id]
);
break;
}
} catch (error) {
console.error(`Failed to apply change to ${tableName}:`, error);
// Send to dead letter queue for retry
await this.sendToDeadLetter(tableName, change, error);
}
}
// Minimal stub: real parsing depends on the slot's output plugin (e.g.
// wal2json emits JSON); mapping raw changes into {table, key, operation,
// before, after, timestamp} is elided here
parseChange(data) {
  try {
    return JSON.parse(data);
  } catch {
    return null; // skip entries we cannot parse
  }
}
// Failed changes go to a dead-letter topic for later inspection and retry
async sendToDeadLetter(tableName, change, error) {
  await this.producer.send({
    topic: 'cdc.dead-letter',
    messages: [{
      value: JSON.stringify({ tableName, change, error: error.message })
    }]
  });
}
}
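// Setup note (assumption): pg_logical_slot_get_changes() requires the source
// database to run with wal_level = logical and a slot created up front, e.g.:
//   SELECT pg_create_logical_replication_slot('cdc_slot', 'test_decoding');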
// Data Migration with verification
class DataMigrator {
constructor(sourceDb, targetDb) {
this.sourceDb = sourceDb;
this.targetDb = targetDb;
}
async migrateTable(tableName, options = {}) {
const batchSize = options.batchSize || 1000;
let offset = 0;
let totalMigrated = 0;
console.log(`Starting migration of ${tableName}...`);
while (true) {
// Fetch batch from source
const batch = await this.sourceDb.query(
`SELECT * FROM ${tableName} ORDER BY id LIMIT $1 OFFSET $2`,
[batchSize, offset]
);
if (batch.rows.length === 0) break;
// Transform if needed
const transformed = options.transform
? batch.rows.map(options.transform)
: batch.rows;
// Insert into target
for (const row of transformed) {
await this.targetDb.query(
`INSERT INTO ${tableName} (${Object.keys(row).join(', ')})
VALUES (${Object.keys(row).map((_, i) => `$${i + 1}`).join(', ')})
ON CONFLICT (id) DO UPDATE SET
${Object.keys(row).map((k, i) => `${k} = $${i + 1}`).join(', ')}`,
Object.values(row)
);
}
totalMigrated += batch.rows.length;
offset += batchSize;
console.log(`Migrated ${totalMigrated} rows from ${tableName}`);
}
// Verify migration
await this.verify(tableName);
return totalMigrated;
}
async verify(tableName) {
const sourceCount = await this.sourceDb.query(
`SELECT COUNT(*) FROM ${tableName}`
);
const targetCount = await this.targetDb.query(
`SELECT COUNT(*) FROM ${tableName}`
);
const sourceTotal = parseInt(sourceCount.rows[0].count);
const targetTotal = parseInt(targetCount.rows[0].count);
if (sourceTotal !== targetTotal) {
throw new Error(
`Verification failed for ${tableName}: ` +
`source=${sourceTotal}, target=${targetTotal}`
);
}
console.log(`✓ Verified ${tableName}: ${targetTotal} rows`);
// Sample verification
const sampleRows = await this.sourceDb.query(
`SELECT * FROM ${tableName} ORDER BY RANDOM() LIMIT 10`
);
for (const sourceRow of sampleRows.rows) {
const targetRow = await this.targetDb.query(
`SELECT * FROM ${tableName} WHERE id = $1`,
[sourceRow.id]
);
if (targetRow.rows.length === 0) {
throw new Error(`Missing row ${sourceRow.id} in target ${tableName}`);
}
// Deep comparison
const diff = this.compareRows(sourceRow, targetRow.rows[0]);
if (diff.length > 0) {
console.warn(`Row ${sourceRow.id} differs:`, diff);
}
}
console.log(`✓ Sample verification passed for ${tableName}`);
}
compareRows(source, target) {
const diffs = [];
for (const key of Object.keys(source)) {
if (JSON.stringify(source[key]) !== JSON.stringify(target[key])) {
diffs.push({ field: key, source: source[key], target: target[key] });
}
}
return diffs;
}
}
module.exports = { ChangeDataCaptureSync, DataMigrator };
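Wiring the pieces together might look like the sketch below. Connection settings, broker addresses, and the users table are placeholders, and the replication slot must exist before capture starts (see the setup note above):
// run-cdc.js - hypothetical wiring of the classes above
const { ChangeDataCaptureSync, DataMigrator } = require('./cdc-sync');

const sync = new ChangeDataCaptureSync({
  source: { connectionString: process.env.MONOLITH_DB_URL },
  target: { connectionString: process.env.SERVICE_DB_URL },
  kafka: { clientId: 'cdc-sync', brokers: ['kafka:9092'] }
});

async function main() {
  // One-off backfill first, then continuous sync takes over
  const migrator = new DataMigrator(sync.sourceDb, sync.targetDb);
  await migrator.migrateTable('users', { batchSize: 500 });

  await sync.startCapture(); // monolith DB -> Kafka
  await sync.startSync();    // Kafka -> service DB
}

main().catch(console.error);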
Dual-Write Patterns
┌─────────────────────────────────────────────────────────────────────────────┐
│ DUAL-WRITE DURING MIGRATION │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ❌ NAIVE DUAL-WRITE (DON'T DO THIS) │
│ ────────────────────────────────── │
│ │
│ ┌──────────────────────────────────────────────────────────────────┐ │
│ │ async function saveOrder(order) { │ │
│ │ await monolithDb.save(order); // What if this succeeds... │ │
│ │ await newServiceDb.save(order); // ...but this fails? │ │
│ │ } │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │
│ Problem: No atomicity = data inconsistency │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ ✅ TRANSACTIONAL OUTBOX PATTERN │
│ ────────────────────────────── │
│ │
│ ┌──────────────────────────────────────────────────────────────────┐ │
│ │ BEGIN TRANSACTION │ │
│ │ INSERT INTO orders (...); │ │
│ │ INSERT INTO outbox (event_type, payload); ← Same transaction │ │
│ │ COMMIT │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │
│ Background job reads outbox → publishes events → marks processed │
│ │
│ ═══════════════════════════════════════════════════════════════════════ │
│ │
│ ✅ CHANGE DATA CAPTURE (CDC) │
│ ───────────────────────────── │
│ │
│ ┌───────────────────────────────────────────────────────────────────┐ │
│ │ Monolith DB ──▶ CDC (Debezium) ──▶ Kafka ──▶ New Service DB │ │
│ └───────────────────────────────────────────────────────────────────┘ │
│ │
│ Changes captured from DB transaction log. Guaranteed delivery. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Safe Dual-Write Implementation
// safe-dual-write.js - Using Outbox Pattern
const { Pool } = require('pg');
class OutboxDualWrite {
constructor(db) {
this.db = db;
}
// Safe dual-write using outbox
async saveOrder(order) {
const client = await this.db.connect();
try {
await client.query('BEGIN');
// 1. Save to main table
const result = await client.query(
`INSERT INTO orders (customer_id, items, total, status)
VALUES ($1, $2, $3, $4)
RETURNING *`,
[order.customerId, JSON.stringify(order.items), order.total, 'pending']
);
const savedOrder = result.rows[0];
// 2. Write to outbox (same transaction!)
await client.query(
`INSERT INTO outbox (aggregate_type, aggregate_id, event_type, payload, created_at)
VALUES ($1, $2, $3, $4, NOW())`,
[
'Order',
savedOrder.id,
'OrderCreated',
JSON.stringify({
orderId: savedOrder.id,
customerId: savedOrder.customer_id,
items: order.items,
total: order.total
})
]
);
await client.query('COMMIT');
return savedOrder;
} catch (error) {
await client.query('ROLLBACK');
throw error;
} finally {
client.release();
}
}
}
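// A possible schema for the outbox table used above (assumption: adjust the
// column types to match your own aggregate IDs and payloads):
//
//   CREATE TABLE outbox (
//     id             BIGSERIAL   PRIMARY KEY,
//     aggregate_type TEXT        NOT NULL,
//     aggregate_id   BIGINT      NOT NULL,
//     event_type     TEXT        NOT NULL,
//     payload        JSONB       NOT NULL,
//     created_at     TIMESTAMPTZ NOT NULL DEFAULT NOW(),
//     processed_at   TIMESTAMPTZ
//   );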
// Outbox processor - publishes events to message broker
class OutboxProcessor {
constructor(db, eventPublisher) {
this.db = db;
this.eventPublisher = eventPublisher;
this.running = false;
}
async start() {
this.running = true;
while (this.running) {
await this.processOutbox();
await this.sleep(100); // Poll interval
}
}
async processOutbox() {
const client = await this.db.connect();
try {
// Lock and fetch unprocessed events
await client.query('BEGIN');
const events = await client.query(
`SELECT * FROM outbox
WHERE processed_at IS NULL
ORDER BY created_at
LIMIT 100
FOR UPDATE SKIP LOCKED`
);
for (const event of events.rows) {
try {
// Publish to message broker
await this.eventPublisher.publish({
topic: `${event.aggregate_type.toLowerCase()}.events`,
key: event.aggregate_id.toString(),
value: {
eventId: event.id,
eventType: event.event_type,
aggregateType: event.aggregate_type,
aggregateId: event.aggregate_id,
payload: event.payload,
timestamp: event.created_at
}
});
// Mark as processed
await client.query(
`UPDATE outbox SET processed_at = NOW() WHERE id = $1`,
[event.id]
);
} catch (error) {
console.error(`Failed to process outbox event ${event.id}:`, error);
// Will be retried on next poll
}
}
await client.query('COMMIT');
} catch (error) {
await client.query('ROLLBACK');
console.error('Outbox processing error:', error);
} finally {
client.release();
}
}
stop() {
this.running = false;
}
sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
}
// Migration-specific dual-write with comparison
class MigrationDualWrite {
constructor(legacyService, newService, metrics) {
  this.legacy = legacyService;
  this.new = newService;
  // Metrics sink is optional; defaults to a no-op recorder
  this.metrics = metrics || { recordDiff() {}, recordError() {} };
  this.mode = 'shadow'; // shadow | primary-legacy | primary-new | new-only
}
async write(data) {
switch (this.mode) {
case 'shadow':
// Write to legacy (primary), shadow write to new, compare
return this.shadowWrite(data);
case 'primary-legacy':
// Write to both, legacy is source of truth
return this.dualWriteLegacyPrimary(data);
case 'primary-new':
// Write to both, new is source of truth
return this.dualWriteNewPrimary(data);
case 'new-only':
// Only write to new service
return this.new.write(data);
}
}
async shadowWrite(data) {
// Primary write
const legacyResult = await this.legacy.write(data);
// Shadow write (fire and forget, with logging)
this.new.write(data)
.then(newResult => {
// Compare results
const diff = this.compare(legacyResult, newResult);
if (diff.length > 0) {
console.warn('Shadow write difference detected:', diff);
this.metrics.recordDiff('write', diff);
}
})
.catch(error => {
console.error('Shadow write failed:', error);
this.metrics.recordError('shadow-write', error);
});
return legacyResult;
}
async read(id) {
switch (this.mode) {
case 'shadow':
case 'primary-legacy':
// Read from legacy, shadow read from new for comparison
const legacyData = await this.legacy.read(id);
this.new.read(id).then(newData => {
const diff = this.compare(legacyData, newData);
if (diff.length > 0) {
console.warn(`Read difference for ${id}:`, diff);
}
}).catch(() => {});
return legacyData;
case 'primary-new':
case 'new-only':
return this.new.read(id);
}
}
compare(legacy, current) {
const diffs = [];
const keys = new Set([...Object.keys(legacy), ...Object.keys(current)]);
for (const key of keys) {
if (JSON.stringify(legacy[key]) !== JSON.stringify(current[key])) {
diffs.push({
field: key,
legacy: legacy[key],
new: current[key]
});
}
}
return diffs;
}
}
module.exports = { OutboxDualWrite, OutboxProcessor, MigrationDualWrite };
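A typical rollout then steps through the modes one at a time. A sketch, where legacyStore and newStore are hypothetical adapters exposing write(data) and read(id):
// rollout.js - hypothetical mode progression during migration
const { MigrationDualWrite } = require('./safe-dual-write');
const legacyStore = require('./legacy-order-store'); // hypothetical adapter
const newStore = require('./order-service-client');  // hypothetical adapter

const store = new MigrationDualWrite(legacyStore, newStore);

store.mode = 'shadow';          // 1. legacy serves traffic; diffs only logged
// ...once shadow diffs stay at zero for a sustained period...
store.mode = 'primary-legacy';  // 2. dual-write; legacy is source of truth
store.mode = 'primary-new';     // 3. new service becomes source of truth
store.mode = 'new-only';        // 4. legacy write path retired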
Interview Questions
Q1: What is the Strangler Fig pattern?
Answer: A migration pattern where new functionality is built around the existing system, gradually replacing it.
Steps:
- Add a facade/proxy in front of monolith
- Extract one feature to microservice
- Route that feature’s traffic to new service
- Repeat until monolith is empty
- Remove monolith
Benefits:
- Zero-downtime migration
- Gradual, low-risk
- Can stop/pause anytime
- Immediate value from extracted services
Q2: How do you handle the database during migration?
Answer: Take a progressive approach:
1. Shared database (temporary)
  - Quick start
  - Both read/write the same DB
2. Database view
  - New service reads from the view
  - Writes go via API to the monolith
3. CDC synchronization
  - Capture changes from the source
  - Replay to the new database
  - Eventually consistent
4. Database per service
  - Full data ownership
  - Communication via APIs/events
Q3: What's wrong with dual-writes?
Answer: The core problem is that there is no atomicity between the two writes:
await db1.save(data); // Succeeds
await db2.save(data); // Fails! Data inconsistent
Solutions:
- Outbox pattern:
  - Single DB transaction
  - Write data + event to outbox
  - Background processor publishes events
- Change Data Capture (CDC):
  - Capture changes from DB transaction log
  - Stream to message broker
  - New service consumes and applies
- Saga pattern:
  - Compensating transactions
  - Eventually consistent
Q4: What is Branch by Abstraction?
Answer: A technique for safely replacing a component inside a running system.
Steps:
- Create abstraction interface
- Implement interface with existing code
- Change callers to use interface
- Create new implementation
- Switch implementations (feature flag)
- Remove old implementation
Use when:
- Replacing internal components
- You need to run old and new in parallel
- You want to compare implementations
Strangler Fig vs Branch by Abstraction:
- Strangler: external facade, route traffic
- Branch: internal abstraction, swap implementations
Chapter Summary
Key Takeaways:
- Never do big-bang rewrites - use incremental patterns
- Strangler Fig: Route traffic gradually to new services
- Branch by Abstraction: Swap internal implementations safely
- Database migration is the hardest part - plan carefully
- Use CDC or Outbox pattern, never naive dual-writes
- Always have rollback capability