AWS EventBridge: Event Rules, Cross-Account Routing, Schema Registry, and Terraform
Build event-driven architectures with AWS EventBridge: custom event buses, rule patterns with content-based filtering, fan-out to multiple targets, typed event publishing, cross-account event routing, archive and replay, and Terraform IaC.
EventBridge is the glue layer for event-driven AWS architectures. Where SQS is a queue (one consumer), SNS is a fan-out (many consumers of the same message), and Kinesis is a stream (ordered, replay), EventBridge is a rule-based router: events flow in, rules match against the event payload, and matched events are routed to one or more targets. The pattern is powerful because producers and consumers don't know about each other — a new consumer just adds a rule.
This post covers custom event buses, rule pattern matching, fan-out to multiple targets, publishing typed events from TypeScript, cross-account event routing, event archive and replay, and the Terraform to wire it all together.
EventBridge vs SQS vs SNS
| | EventBridge | SQS | SNS |
|---|---|---|---|
| Consumer model | Rule-matched routing | Single consumer queue | Fan-out broadcast |
| Schema validation | ✅ Schema Registry | ❌ | ❌ |
| Cross-account | ✅ Native | ⚠️ Via queue policy | ✅ |
| Event replay | ✅ Archive + replay | ❌ | ❌ |
| Filtering | ✅ Content-based | ✅ Message attributes | ✅ Message attributes |
| Max payload | 256KB | 256KB | 256KB |
| Ordering | ❌ | ✅ (FIFO queues) | ✅ (FIFO topics) |
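Rules match against the standard EventBridge envelope, so it helps to see one before writing patterns. Here is a TypeScript sketch of that envelope with an illustrative `order.created` event (the field names are the standard ones EventBridge delivers to targets; the sample values are made up):

```typescript
// Shape of an EventBridge event as delivered to targets.
// Rules match on top-level fields (source, detail-type) and on
// nested keys inside `detail`.
interface EventBridgeEvent<TDetail> {
  version: string;
  id: string;
  'detail-type': string;
  source: string;
  account: string;
  time: string; // ISO 8601
  region: string;
  resources: string[];
  detail: TDetail;
}

// Illustrative event: this is what an order.created rule would see.
const sample: EventBridgeEvent<{ order_id: string; total_cents: number }> = {
  version: '0',
  id: 'a1b2c3d4-0000-0000-0000-000000000000',
  'detail-type': 'order.created',
  source: 'com.viprasol.orders',
  account: '123456789012',
  time: '2026-12-01T12:00:00Z',
  region: 'us-east-1',
  resources: [],
  detail: { order_id: 'ord_123', total_cents: 12500 },
};
```

Everything the producer puts in `Detail` lands under `detail`; `source` and `detail-type` come from the `PutEvents` entry.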
1. Custom Event Bus and Rules
```hcl
# infrastructure/eventbridge/main.tf

# Custom event bus per domain (better than using the default bus)
resource "aws_cloudwatch_event_bus" "orders" {
  name = "${var.project}-orders"
  tags = { Environment = var.environment }
}

resource "aws_cloudwatch_event_bus" "payments" {
  name = "${var.project}-payments"
  tags = { Environment = var.environment }
}

# Rule: route order.created events to the fulfillment Lambda
resource "aws_cloudwatch_event_rule" "order_created" {
  name           = "${var.project}-order-created"
  event_bus_name = aws_cloudwatch_event_bus.orders.name
  description    = "Route order.created events to fulfillment service"

  event_pattern = jsonencode({
    source      = ["com.viprasol.orders"]
    detail-type = ["order.created"]
    detail = {
      # Content-based filtering: only orders over $100 (10,000 cents)
      total_cents = [{ numeric = [">", 10000] }]
    }
  })
}

resource "aws_cloudwatch_event_target" "fulfillment_lambda" {
  rule           = aws_cloudwatch_event_rule.order_created.name
  event_bus_name = aws_cloudwatch_event_bus.orders.name
  target_id      = "FulfillmentLambda"
  arn            = aws_lambda_function.fulfillment.arn

  # Transform the event before sending it to Lambda
  input_transformer {
    input_paths = {
      orderId    = "$.detail.order_id"
      userId     = "$.detail.user_id"
      totalCents = "$.detail.total_cents"
    }
    input_template = <<-EOF
      {
        "orderId": "<orderId>",
        "userId": "<userId>",
        "totalCents": <totalCents>,
        "source": "eventbridge"
      }
    EOF
  }

  # Deliveries that exhaust the retry policy land in the DLQ.
  # Note: dead_letter_config and retry_policy are attached to the
  # target being delivered to — not defined as a separate target.
  dead_letter_config {
    arn = aws_sqs_queue.order_dlq.arn
  }

  retry_policy {
    maximum_event_age_in_seconds = 86400 # Retry for up to 24 hours
    maximum_retry_attempts       = 185   # EventBridge maximum
  }
}

# Dead-letter queue for failed deliveries (its queue policy must allow
# events.amazonaws.com to SendMessage)
resource "aws_sqs_queue" "order_dlq" {
  name                      = "${var.project}-order-events-dlq"
  message_retention_seconds = 1209600 # 14 days
}

# Lambda permission
resource "aws_lambda_permission" "eventbridge_invoke" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.fulfillment.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.order_created.arn
}
```
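On the consuming side, the input transformer means the fulfillment Lambda receives the flattened payload, not the full envelope. A minimal handler sketch (the validation logic is illustrative, not taken from a real service):

```typescript
// Payload produced by the input_transformer above
interface FulfillmentInput {
  orderId: string;
  userId: string;
  totalCents: number;
  source: 'eventbridge';
}

// Minimal fulfillment handler: validates the transformed payload.
// Throwing marks the invocation as failed, which triggers
// EventBridge retries and, eventually, the DLQ.
export async function handler(event: FulfillmentInput) {
  if (!event.orderId || !event.userId) {
    throw new Error(`Malformed fulfillment event: ${JSON.stringify(event)}`);
  }
  return { fulfilled: true, orderId: event.orderId };
}
```

Because the transformer already extracted the fields, the handler never needs to know the EventBridge envelope shape.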
2. Multiple Targets Per Rule
```hcl
# Fan-out: one event → multiple services
# Rule: payment.succeeded → analytics + email + metrics
resource "aws_cloudwatch_event_rule" "payment_succeeded" {
  name           = "${var.project}-payment-succeeded"
  event_bus_name = aws_cloudwatch_event_bus.payments.name

  event_pattern = jsonencode({
    source      = ["com.viprasol.payments"]
    detail-type = ["payment.succeeded"]
  })
}

# Target 1: analytics Lambda
resource "aws_cloudwatch_event_target" "analytics" {
  rule           = aws_cloudwatch_event_rule.payment_succeeded.name
  event_bus_name = aws_cloudwatch_event_bus.payments.name
  target_id      = "AnalyticsLambda"
  arn            = aws_lambda_function.analytics.arn
}

# Target 2: external email service via an API destination
# (API destinations call HTTP endpoints and require an IAM role)
resource "aws_cloudwatch_event_target" "email_api" {
  rule           = aws_cloudwatch_event_rule.payment_succeeded.name
  event_bus_name = aws_cloudwatch_event_bus.payments.name
  target_id      = "EmailApiDestination"
  arn            = aws_cloudwatch_event_api_destination.email_service.arn
  role_arn       = aws_iam_role.eventbridge_invoke.arn
}

# Target 3: SQS for async processing (the queue policy must allow
# events.amazonaws.com to SendMessage)
resource "aws_cloudwatch_event_target" "metrics_queue" {
  rule           = aws_cloudwatch_event_rule.payment_succeeded.name
  event_bus_name = aws_cloudwatch_event_bus.payments.name
  target_id      = "MetricsSQS"
  arn            = aws_sqs_queue.metrics.arn
}
```
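The matching semantics are worth internalizing: within a pattern, an array ORs its listed values, and sibling fields AND together. A deliberately simplified matcher makes that concrete (exact matches only — it ignores EventBridge's numeric, prefix, and anything-but operators):

```typescript
type Pattern = { [key: string]: string[] | Pattern };

// Simplified EventBridge matcher: arrays OR their values,
// sibling fields AND together. Real EventBridge also supports
// numeric, prefix, and anything-but operators (not modeled here).
function matches(event: Record<string, unknown>, pattern: Pattern): boolean {
  return Object.entries(pattern).every(([key, expected]) => {
    const actual = event[key];
    if (Array.isArray(expected)) {
      return expected.includes(actual as string); // OR over listed values
    }
    // Nested pattern: recurse into the detail object
    return (
      typeof actual === 'object' &&
      actual !== null &&
      matches(actual as Record<string, unknown>, expected)
    );
  });
}

const event = {
  source: 'com.viprasol.payments',
  'detail-type': 'payment.succeeded',
  detail: { currency: 'USD' },
};

// Matches: source AND detail-type are both satisfied
const hit = matches(event, {
  source: ['com.viprasol.payments'],
  'detail-type': ['payment.succeeded', 'payment.refunded'],
});
```

So `'detail-type': ["payment.succeeded", "payment.refunded"]` matches either event type, while adding a `source` key narrows the match further.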
3. Publishing Events from TypeScript
```typescript
// src/lib/events/publisher.ts
import {
  EventBridgeClient,
  PutEventsCommand,
  type PutEventsRequestEntry,
} from '@aws-sdk/client-eventbridge';

const client = new EventBridgeClient({ region: process.env.AWS_REGION });

// Typed event definitions matching your schema registry
interface OrderCreatedEvent {
  order_id: string;
  user_id: string;
  tenant_id: string;
  total_cents: number;
  items: Array<{ product_id: string; quantity: number; price_cents: number }>;
  created_at: string;
}

interface PaymentSucceededEvent {
  payment_id: string;
  order_id: string;
  user_id: string;
  amount_cents: number;
  currency: string;
  stripe_payment_intent_id: string;
}

type EventMap = {
  'order.created': OrderCreatedEvent;
  'payment.succeeded': PaymentSucceededEvent;
};

export async function publishEvent<T extends keyof EventMap>(
  bus: string,
  detailType: T,
  detail: EventMap[T]
): Promise<void> {
  const entry: PutEventsRequestEntry = {
    EventBusName: bus,
    Source: `com.viprasol.${bus.split('-').pop()}`, // e.g. 'com.viprasol.orders'
    DetailType: detailType,
    Detail: JSON.stringify(detail),
    Time: new Date(),
  };

  const result = await client.send(new PutEventsCommand({ Entries: [entry] }));

  // Check for failures — PutEvents can partially succeed
  if (result.FailedEntryCount && result.FailedEntryCount > 0) {
    const failures = result.Entries?.filter((e) => e.ErrorCode);
    throw new Error(`EventBridge publish failed: ${JSON.stringify(failures)}`);
  }
}

// Batch publishing (max 10 events per PutEvents call)
export async function publishEvents<T extends keyof EventMap>(
  bus: string,
  events: Array<{ detailType: T; detail: EventMap[T] }>
): Promise<void> {
  const BATCH_SIZE = 10;
  for (let i = 0; i < events.length; i += BATCH_SIZE) {
    const batch = events.slice(i, i + BATCH_SIZE);
    const entries: PutEventsRequestEntry[] = batch.map((e) => ({
      EventBusName: bus,
      Source: `com.viprasol.${bus.split('-').pop()}`,
      DetailType: e.detailType,
      Detail: JSON.stringify(e.detail),
      Time: new Date(),
    }));

    const result = await client.send(new PutEventsCommand({ Entries: entries }));

    // Surface partial failures here too, not just in publishEvent
    if (result.FailedEntryCount && result.FailedEntryCount > 0) {
      const failures = result.Entries?.filter((e) => e.ErrorCode);
      throw new Error(`EventBridge batch publish failed: ${JSON.stringify(failures)}`);
    }
  }
}

// Usage:
// await publishEvent(
//   process.env.ORDERS_EVENT_BUS!,
//   'order.created',
//   { order_id: order.id, user_id: order.userId, ... }
// );
```
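Because `PutEvents` can partially succeed, production publishers often retry only the failed entries rather than the whole batch (the `Entries` array in the response is positional). A sketch of that pattern — the `send` callback and `putWithRetry` name are mine, not part of the AWS SDK; in practice `send` would wrap `client.send(new PutEventsCommand(...))`:

```typescript
interface Entry { DetailType: string; Detail: string }
interface EntryResult { EventId?: string; ErrorCode?: string }

// Retry only the entries that failed, up to maxAttempts passes.
// `send` stands in for a PutEvents call; results are positional,
// so results[i] corresponds to pending[i].
async function putWithRetry(
  entries: Entry[],
  send: (entries: Entry[]) => Promise<EntryResult[]>,
  maxAttempts = 3
): Promise<void> {
  let pending = entries;
  for (let attempt = 0; attempt < maxAttempts && pending.length > 0; attempt++) {
    const results = await send(pending);
    pending = pending.filter((_, i) => results[i].ErrorCode !== undefined);
  }
  if (pending.length > 0) {
    throw new Error(`${pending.length} entries failed after ${maxAttempts} attempts`);
  }
}
```

Adding jittered backoff between attempts would be the obvious next refinement, since throttling is the most common `ErrorCode`.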
4. Cross-Account Event Routing
```hcl
# Account B (consumer): allow Account A to put events onto B's default bus.
# The resource policy lives on the RECEIVING bus, not the sender's.
resource "aws_cloudwatch_event_bus_policy" "allow_account_a" {
  provider       = aws.account_b
  event_bus_name = "default"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowAccountAPutEvents"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::ACCOUNT_A_ID:root" }
        Action    = "events:PutEvents"
        Resource  = "arn:aws:events:${var.aws_region}:ACCOUNT_B_ID:event-bus/default"
      }
    ]
  })
}

# Account B (consumer): rule that matches events forwarded from Account A
resource "aws_cloudwatch_event_rule" "from_account_a" {
  provider       = aws.account_b
  name           = "orders-from-account-a"
  event_bus_name = "default"

  event_pattern = jsonencode({
    source      = ["com.viprasol.orders"]
    detail-type = ["order.created"]
  })
}

# Account A (producer): forward matched events to Account B's default bus.
# The role must allow events:PutEvents on the destination bus ARN.
resource "aws_cloudwatch_event_target" "cross_account" {
  rule           = aws_cloudwatch_event_rule.order_created.name
  event_bus_name = aws_cloudwatch_event_bus.orders.name
  target_id      = "AccountBBus"
  arn            = "arn:aws:events:${var.aws_region}:ACCOUNT_B_ID:event-bus/default"
  role_arn       = aws_iam_role.eventbridge_cross_account.arn
}
```
5. Event Archive and Replay
```hcl
# Archive: store all events for 90 days (enables replay)
resource "aws_cloudwatch_event_archive" "orders" {
  name           = "${var.project}-orders-archive"
  source_arn     = aws_cloudwatch_event_bus.orders.arn
  retention_days = 90

  event_pattern = jsonencode({
    source = ["com.viprasol.orders"]
  })
}

# Replay: re-process archived events (useful for debugging or
# backfilling a new consumer). Triggered via the AWS Console or CLI:
#
#   aws events start-replay \
#     --replay-name "debug-2026-12-01" \
#     --source-arn <archive-arn> \
#     --event-start-time "2026-12-01T00:00:00Z" \
#     --event-end-time "2026-12-01T23:59:59Z" \
#     --destination '{"Arn": "<bus-arn>"}'
```
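One detail worth planning for: replayed events carry an extra `replay-name` field on the envelope, which lets consumers distinguish replays from live traffic — useful for skipping non-idempotent side effects like sending emails. A small sketch:

```typescript
// Replayed events carry an extra `replay-name` field on the
// envelope (per the AWS replay docs); live events do not have it.
interface MaybeReplayedEvent {
  id: string;
  'detail-type': string;
  source: string;
  'replay-name'?: string;
  detail: unknown;
}

function isReplay(event: MaybeReplayedEvent): boolean {
  return typeof event['replay-name'] === 'string';
}
```

A consumer might, for example, still update its read model on replay but gate outbound notifications behind `!isReplay(event)`.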
Cost Reference
| Usage | Monthly cost | Notes |
|---|---|---|
| Events published | $1.00/million | Custom bus events |
| Schema discovery | $0.10/million | Auto-discover from events |
| Pipes | $0.40/million events | Filtering + enrichment |
| Archive storage | $0.023/GB/month | 1M events ≈ 50 MB |
| Cross-account events | Same pricing | No extra charge |
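A quick back-of-the-envelope using the table above, sketched in TypeScript (the prices and the 50 MB-per-million estimate are hardcoded from the table and should be checked against current AWS pricing; the helper is illustrative):

```typescript
// Monthly cost estimate from the pricing table above (illustrative;
// verify against current AWS pricing before budgeting).
const PRICE_PER_MILLION_EVENTS = 1.0;    // custom bus PutEvents
const ARCHIVE_PRICE_PER_GB_MONTH = 0.023;
const MB_PER_MILLION_EVENTS = 50;        // rough estimate from the table

function monthlyCost(eventsPerMonth: number, archiveMonthsRetained: number): number {
  const millions = eventsPerMonth / 1_000_000;
  const publishCost = millions * PRICE_PER_MILLION_EVENTS;
  // Rolling archive holds roughly retention-months' worth of events
  const archiveGb = (millions * MB_PER_MILLION_EVENTS * archiveMonthsRetained) / 1024;
  const archiveCost = archiveGb * ARCHIVE_PRICE_PER_GB_MONTH;
  return publishCost + archiveCost;
}

// 50M events/month with ~3 months retained in the archive
const estimate = monthlyCost(50_000_000, 3);
```

The takeaway: publishing dominates the bill; archive storage is effectively noise at these volumes.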
See Also
- AWS Step Functions: State Machines and Lambda Orchestration
- AWS Lambda Layers: Shared Dependencies and Custom Runtimes
- Event-Driven Architecture: EventBridge, SNS, and SQS Patterns
- Terraform Modules: Reusable Infrastructure and Remote State
- SaaS Webhook System: Delivery, Retry, and Subscriber Management
Working With Viprasol
Decoupling your microservices with event-driven architecture? We design and implement AWS EventBridge event buses with typed event schemas, rule-based routing, cross-account fan-out, dead-letter queues, and archive/replay — with full Terraform IaC so your event topology is version-controlled and reproducible.
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.