

Event Driven Architecture

Synchronous calls (REST) couple services: if the Email Service is down, the Registration Service fails with it. The solution is events.

1. Spring Cloud Stream

An abstraction over message brokers. You write code that produces and consumes messages; Spring handles the broker details (Kafka or RabbitMQ). Dependency: spring-cloud-starter-stream-rabbit (or spring-cloud-starter-stream-kafka).
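For a Maven build, the starter can be declared like this (a sketch; the version is normally managed by the Spring Cloud BOM rather than pinned here):

```xml
<!-- RabbitMQ binder; swap the artifactId for
     spring-cloud-starter-stream-kafka to target Kafka instead -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
```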

2. The Functional Style (Spring Boot 3)

No more @EnableBinding (removed in Spring Cloud Stream 4.x, the line used with Spring Boot 3). Instead, you expose beans of type java.util.function.Supplier, Function, or Consumer, and Spring binds them to the broker automatically.
@Configuration
public class StreamConfig {

    // PRODUCER: Supplier<T>
    // Executed by Spring continuously (polling) or manually triggered
    @Bean
    public Supplier<OrderEvent> orderSupplier() {
        return () -> new OrderEvent(123L, "CREATED");
    }

    // CONSUMER: Consumer<T>
    @Bean
    public Consumer<OrderEvent> orderConsumer() {
        return event -> {
            System.out.println("Received Event: " + event);
        };
    }

    // PROCESSOR: Function<T, R>
    @Bean
    public Function<String, String> uppercaseProcessor() {
        return String::toUpperCase;
    }
}
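The OrderEvent payload is not defined above; here is a minimal sketch, assuming a plain Java record (the exact shape of the event class is an assumption, not part of the original):

```java
// Hypothetical event payload for the Supplier/Consumer beans shown above.
// A record gives an immutable carrier with equals/hashCode/toString for
// free, which makes events easy to log and to assert on in tests.
public record OrderEvent(Long orderId, String status) {}
```

By default, Spring Cloud Stream serializes such payloads to JSON on the wire.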

3. Reducing Boilerplate with StreamBridge

For REST-triggered events (e.g., the user clicks “Buy”), a Supplier is awkward: it is polled on a fixed schedule rather than invoked on demand. Use StreamBridge to send imperatively.
@RestController
@RequiredArgsConstructor
public class OrderController {

    private final StreamBridge streamBridge;

    @PostMapping("/orders")
    public String createOrder(@RequestBody Order order) {
        // Save to DB...
        
        // Publish Event
        streamBridge.send("order-out-0", new OrderCreatedEvent(order.getId()));
        
        return "Order Placed";
    }
}
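The "order-out-0" binding name used with StreamBridge still has to be mapped to a destination in configuration, along these lines (the destination name is illustrative):

```yaml
spring:
  cloud:
    stream:
      bindings:
        order-out-0:
          destination: orders-topic
```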

4. Configuration (application.yml)

Map the functions to actual queues/topics.
spring:
  cloud:
    stream:
      function:
        definition: orderConsumer;uppercaseProcessor
      bindings:
        orderConsumer-in-0:
          destination: orders-topic
          group: inventory-service-group # Consumer group (competing consumers)
        uppercaseProcessor-out-0:
          destination: notifications-topic

5. Kafka vs RabbitMQ

Feature     | RabbitMQ                     | Kafka
------------|------------------------------|------------------------------
Model       | Smart broker, dumb consumer  | Dumb broker, smart consumer
Use case    | Complex routing, low latency | High throughput, event replay
Persistence | Queue based                  | Log based (retention)
  • Kafka: High throughput, persistent log. Better for event streaming.
  • RabbitMQ: Traditional message broker. Better for task queues.

6. The Dual Write Problem

In Microservices, you often need to:
  1. Update the database (e.g., save order).
  2. Send an event (e.g., publish “OrderCreated” to Kafka).
Problem: These are two separate systems. What if one fails?
// DANGEROUS CODE
@Transactional
public void createOrder(Order order) {
    orderRepository.save(order); // Success
    kafkaTemplate.send("orders", order); // FAILS!
    // DB is committed, but event is NOT sent. Data inconsistency!
}
Even if you reverse the order, you have the opposite problem: the event is published, but the database transaction then fails to commit, so consumers react to an order that was never saved.
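The failure mode can be demonstrated without any infrastructure. In this toy sketch (all names hypothetical), a Map stands in for the database and a List for the broker; when the "broker" is down, the two stores diverge exactly as described above:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of the dual-write problem. The Map plays the
// database, the List plays the broker. Step 1 commits, step 2 throws,
// and the two systems are left inconsistent.
public class DualWriteDemo {
    static final Map<Long, String> db = new HashMap<>();
    static final List<String> broker = new ArrayList<>();

    static void createOrder(long id, boolean brokerUp) {
        db.put(id, "CREATED");                    // step 1: DB write succeeds
        if (!brokerUp) {
            throw new IllegalStateException("broker unreachable");
        }
        broker.add("OrderCreated:" + id);         // step 2: never reached
    }
}
```

A real @Transactional method cannot fix this, because the broker send is not part of the database transaction.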

7. The Transactional Outbox Pattern (Solution)

Idea: Write the event to the same database transaction as the business data.

Implementation

  1. Create an Outbox table.
CREATE TABLE outbox (
    id UUID PRIMARY KEY,
    aggregate_type VARCHAR(50),
    aggregate_id VARCHAR(50),
    event_type VARCHAR(50),
    payload JSONB,
    created_at TIMESTAMP
);
  2. Save both Order AND Event in the same transaction.
@Transactional
public void createOrder(Order order) {
    orderRepository.save(order);
    
    OutboxEvent event = new OutboxEvent(
        UUID.randomUUID(),
        "Order",
        order.getId().toString(),
        "OrderCreated",
        toJson(order),
        Instant.now()
    );
    outboxRepository.save(event); // Same transaction!
}
  3. A background worker (scheduled task) reads from the outbox, publishes to Kafka, then deletes the row.
@Scheduled(fixedDelay = 5000)
public void publishEvents() {
    // Oldest first, so events are published in the order they occurred
    List<OutboxEvent> events = outboxRepository.findAll(Sort.by("createdAt"));
    events.forEach(event -> {
        kafkaTemplate.send(event.getEventType(), event.getPayload());
        outboxRepository.delete(event); // if this fails, the event is re-sent next run
    });
}

Why it Works

  • Atomicity: the business write and the outbox write commit in one database transaction.
  • Eventual Consistency: events are published asynchronously, but delivery is guaranteed.
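One consequence worth noting: the relay gives at-least-once delivery, not exactly-once. This toy sketch (hypothetical stand-ins: a Deque plays the outbox table, a List plays Kafka) shows why consumers should be idempotent:

```java
import java.util.Deque;
import java.util.List;

// If the worker crashes after publishing but before deleting the row,
// the same row is re-read on restart and the event is sent twice.
public class OutboxRelayDemo {
    static void publishOnce(Deque<String> outbox, List<String> kafka,
                            boolean crashBeforeDelete) {
        String event = outbox.peek();
        if (event == null) return;
        kafka.add(event);                  // 1. publish to the broker
        if (crashBeforeDelete) return;     // simulated crash between steps
        outbox.poll();                     // 2. delete the outbox row
    }
}
```

Consumers therefore need to tolerate duplicates, e.g. by tracking processed event IDs.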

Flow Diagram

Controller → DB transaction (order row + outbox row) → relay worker → Kafka → downstream consumers

8. Dead Letter Queues (DLQ)

What if a consumer keeps failing to process a message (e.g., bad JSON, NPE)? After N retries, move the message to a Dead Letter Queue for manual inspection.
spring:
  cloud:
    stream:
      bindings:
        process-in-0:
          destination: orders
          group: order-service
          consumer:
            max-attempts: 3
      rabbit:
        bindings:
          process-in-0:
            consumer:
              auto-bind-dlq: true
Now, after 3 failed attempts, the message goes to orders.order-service.dlq.
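The retry-then-dead-letter behaviour the binder applies can be sketched in plain Java (hypothetical, not the actual Spring Cloud Stream implementation): retry a handler up to max-attempts, then park the message for manual inspection.

```java
import java.util.Deque;
import java.util.function.Consumer;

// Plain-Java sketch of retry-then-DLQ semantics: the handler is retried
// up to maxAttempts; if every attempt throws, the message is moved to
// the dead-letter queue instead of being lost or retried forever.
public class DlqSketch {
    static <T> void deliver(T message, Consumer<T> handler,
                            int maxAttempts, Deque<T> dlq) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                handler.accept(message);
                return; // processed successfully, no DLQ involved
            } catch (RuntimeException e) {
                // log and retry until attempts are exhausted
            }
        }
        dlq.add(message); // park for manual inspection / replay
    }
}
```

In production the broker does this work; the key point is that a poison message costs max-attempts tries, then stops blocking the main queue.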