Microservices Migration in 2026: Strangler Fig Pattern and Decomposing a Monolith
Migrate from monolith to microservices without downtime: strangler fig pattern, identifying service boundaries, data ownership, async event communication, and when not to decompose.
Most teams that migrate to microservices don't need to. They need to modularize their monolith, fix their deployment pipeline, and address the specific bottleneck they're experiencing. Microservices solve real problems (independent deployment, language heterogeneity, team autonomy at scale), but they introduce distributed-systems complexity that costs more than it saves at most team sizes.
That said, the migration question comes up at every growing company. This post covers how to do it correctly: the strangler fig pattern for incremental migration without a rewrite, identifying real service boundaries using Domain-Driven Design, and the data ownership problem that breaks most migrations.
## Before You Migrate: The Real Checklist
Questions to answer before starting any microservices migration:
### Do you actually need microservices?
- [ ] Do different parts of your system need to scale independently?
- [ ] Do you have 5+ teams who deploy conflicting changes to the same repo?
- [ ] Do you need to run different services in different languages?
- [ ] Is your monolith deployment so slow/risky it's blocking velocity?
If you answered "no" to all four: don't migrate. Fix the monolith first.
### Is your monolith ready to be decomposed?
- [ ] Do you have clean module boundaries (not spaghetti dependencies)?
- [ ] Do you have comprehensive test coverage (>70%)?
- [ ] Can you deploy the monolith multiple times per day?
- [ ] Do you have observability (tracing, metrics, logs)?
If you answered "no" to any: fix these first. Decomposing a messy monolith creates multiple messy services.
## The Strangler Fig Pattern
The strangler fig tree grows around its host, gradually replacing it. The migration pattern works the same way: you route traffic to a new service for specific functionality, while the monolith continues handling everything else. Over time, the monolith shrinks and the new services grow.
```
Step 1: Monolith handles everything
[Client] → [Monolith] → [Database]

Step 2: Route /payments to a new service (others unchanged)
[Client] → [Router/Proxy] → [Monolith]               (for /users, /orders, ...)
                          → [Payments Service]       (for /payments/*)

Step 3: Route /notifications to another new service
[Client] → [Router/Proxy] → [Monolith]               (for /users, /orders)
                          → [Payments Service]       (for /payments/*)
                          → [Notifications Service]  (for /notifications/*)

Step N: Monolith hollowed out or deleted
```
## Implementing the Router (API Gateway)

```typescript
// src/gateway/router.ts: Fastify API gateway
import Fastify from 'fastify';
import httpProxy from '@fastify/http-proxy';

const app = Fastify({ logger: true });

// Route payments to the new service
app.register(httpProxy, {
  upstream: process.env.PAYMENTS_SERVICE_URL!, // e.g. http://payments-service:3001
  prefix: '/api/payments',
  rewritePrefix: '/payments',
  http2: false,
});

// Route notifications to the new service
app.register(httpProxy, {
  upstream: process.env.NOTIFICATIONS_SERVICE_URL!,
  prefix: '/api/notifications',
  rewritePrefix: '/notifications',
});

// Everything else still goes to the monolith. Fastify matches the most
// specific prefix first, so /api/payments/* never falls through here.
app.register(httpProxy, {
  upstream: process.env.MONOLITH_URL!, // e.g. http://monolith:3000
  prefix: '/api',
  rewritePrefix: '',
});

// Health check aggregates all services
app.get('/health', async () => {
  const checks = await Promise.allSettled([
    fetch(`${process.env.MONOLITH_URL}/health`),
    fetch(`${process.env.PAYMENTS_SERVICE_URL}/health`),
  ]);
  const allHealthy = checks.every((c) => c.status === 'fulfilled' && c.value.ok);
  return { status: allHealthy ? 'ok' : 'degraded', services: checks.map((c) => c.status) };
});

await app.listen({ port: 8080, host: '0.0.0.0' });
```
## Identifying Service Boundaries
Service boundaries should follow business domain boundaries, not technical layers. Domain-Driven Design's bounded context concept is the right tool:
### Bounded Context Analysis for a SaaS Product

Identify aggregates (things that change together):

- User aggregate: User (root), UserProfile, AuthCredentials. Change together when: user edits profile, changes password.
- Order aggregate: Order (root), OrderItems[], ShippingAddress. Change together when: order is placed, modified, fulfilled.
- Subscription aggregate: Subscription (root), BillingPeriod, UsageRecord[]. Change together when: subscription renewed, upgraded, cancelled.
- Payment aggregate: Payment (root), PaymentMethod, RefundRecord[]. Change together when: payment processed, disputed, refunded.

Rule: each aggregate is a candidate service.
Counter-rule: don't split aggregates that communicate synchronously and can't tolerate network latency.

Communication patterns between bounded contexts:

- User ↔ Subscription: user has subscription status (read-only, sync OK)
- Order → Payment: order triggers payment charge (sync, must succeed atomically)
- Payment → Notification: payment success triggers email (async, fire-and-forget)
- Subscription → Notification: renewal reminder (async, scheduled)

Conclusion for this SaaS:

- Service 1: Identity Service (users, auth)
- Service 2: Orders + Payments (too tightly coupled for the synchronous requirement to split)
- Service 3: Billing/Subscriptions (Stripe integration)
- Service 4: Notifications (email, push: a pure async consumer)
- Monolith remainder: product catalog, search, dashboard
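The async edges between these bounded contexts are worth writing down as an explicit event contract before any code moves. A minimal TypeScript sketch, where the event names, payload fields, and queue names are illustrative assumptions rather than a fixed schema:

```typescript
// Illustrative event contract for the async edges between bounded contexts.
type DomainEvent =
  | { eventType: 'payment.succeeded'; payload: { paymentIntentId: string; orderId: string } }
  | { eventType: 'payment.failed'; payload: { orderId: string; reason: string } }
  | { eventType: 'subscription.renewal_due'; payload: { subscriptionId: string; dueAt: string } };

// Map each async event to the queue its consumers read from.
const queueByEvent: Record<DomainEvent['eventType'], string> = {
  'payment.succeeded': 'payment-events',        // consumed by orders + notifications
  'payment.failed': 'payment-events',
  'subscription.renewal_due': 'subscription-events', // consumed by notifications
};

export function queueFor(event: DomainEvent): string {
  return queueByEvent[event.eventType];
}
```

Keeping the contract in one place means a new consumer can be added without touching the producer, which is the property that makes the async edges safe to split.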
---
## Data Ownership: The Hard Part
The biggest mistake in microservices migrations: sharing the database. If two services read and write the same tables, you haven't created services; you've created a distributed monolith with all the downsides of both worlds.
```sql
-- ❌ Anti-pattern: two services share the payments table
--    payments-service reads/writes payments
--    orders-service reads payments to check payment status

-- ✅ Correct: orders-service keeps its own view of payment status
--    payments table owned by payments-service
--    orders table owned by orders-service
--    orders-service stores a payment_status column, updated via events

-- In orders-service database:
CREATE TABLE orders (
  id UUID PRIMARY KEY,
  customer_id UUID NOT NULL,
  total_cents INTEGER NOT NULL,
  payment_status TEXT NOT NULL DEFAULT 'pending',
  -- payment_intent_id is a reference to payments-service, not a FK
  payment_intent_id TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- When payments-service emits a payment.succeeded event,
-- orders-service updates its own payment_status column.
-- No cross-service database joins needed.
```
### Event-Based Data Synchronization
```typescript
// payments-service: emit event on payment success
// src/events/publisher.ts
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';
import { randomUUID } from 'node:crypto';

const sqs = new SQSClient({ region: 'us-east-1' });

export async function publishPaymentSucceeded(payload: {
  paymentIntentId: string;
  orderId: string;
  amountCents: number;
  customerId: string;
}): Promise<void> {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.PAYMENT_EVENTS_QUEUE_URL!,
    MessageBody: JSON.stringify({
      eventType: 'payment.succeeded',
      eventId: randomUUID(),
      occurredAt: new Date().toISOString(),
      payload,
    }),
    MessageGroupId: payload.orderId,                 // FIFO queue: one order's events stay in sequence
    MessageDeduplicationId: payload.paymentIntentId, // FIFO dedup on the payment intent
  }));
}
```
```typescript
// orders-service: consume payment events
// src/consumers/payment-events.ts
import type { Message as SQSMessage } from '@aws-sdk/client-sqs';
import { db } from '../db'; // the service's own query client

interface PaymentEvent {
  eventType: 'payment.succeeded' | 'payment.failed' | 'payment.refunded';
  eventId: string;
  occurredAt: string;
  payload: { paymentIntentId: string; orderId: string };
}

export async function handlePaymentEvent(message: SQSMessage): Promise<void> {
  const event = JSON.parse(message.Body!) as PaymentEvent;
  switch (event.eventType) {
    case 'payment.succeeded':
      await db.query(
        `UPDATE orders
            SET payment_status = 'paid',
                payment_intent_id = $1,
                paid_at = now()
          WHERE id = $2`,
        [event.payload.paymentIntentId, event.payload.orderId],
      );
      break;
    case 'payment.failed':
      await db.query(
        `UPDATE orders SET payment_status = 'failed' WHERE id = $1`,
        [event.payload.orderId],
      );
      break;
    case 'payment.refunded':
      await db.query(
        `UPDATE orders SET payment_status = 'refunded' WHERE id = $1`,
        [event.payload.orderId],
      );
      break;
  }
}
```
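SQS delivers at-least-once, so the consumer above can receive the same event twice. The usual guard is to record processed event IDs and skip duplicates. A minimal in-memory sketch of that idea (the `handleOnce` helper is hypothetical; a real service would use a `processed_events` table checked inside the same transaction as the UPDATE):

```typescript
// Sketch: idempotent event handling. The Set stands in for a
// processed_events table checked in the same DB transaction as the update.
const processed = new Set<string>();

export async function handleOnce(
  eventId: string,
  apply: () => Promise<void>,
): Promise<boolean> {
  if (processed.has(eventId)) return false; // duplicate delivery: skip
  await apply();
  processed.add(eventId);                   // mark only after success
  return true;
}
```

In production the dedup record and the state change must commit atomically; if they are separate writes, a crash between them reintroduces the duplicate problem.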
## Migration Execution: Step-by-Step Strangler Fig Playbook
### Phase 1: Prepare (2–4 weeks)
1. Add comprehensive integration tests for functionality being extracted
2. Document all current behavior (edge cases, error handling)
3. Identify all callers of the code being extracted
4. Add observability (logging, metrics) to target module
### Phase 2: Build the new service (2–6 weeks)
1. Create new repository with same test coverage as monolith module
2. Replicate (copy, then refactor) the target module's logic
3. Set up own database schema, migrations, and seeding
4. Run both side-by-side in staging: monolith still handles traffic
### Phase 3: Dual-write / shadow mode (1–2 weeks)
1. Monolith writes to BOTH its DB and the new service via API
2. New service handles no real traffic yet
3. Compare results between monolith and new service continuously
4. Fix discrepancies before proceeding
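Step 3 of the shadow phase can be as simple as diffing the monolith's response against the new service's for the same request and logging any mismatch. A minimal sketch, where the monolith's result stays authoritative and field names are illustrative:

```typescript
// Sketch: shadow-mode comparison. The monolith's result is served to the
// client; the new service's result is only compared and logged.
export interface Discrepancy {
  field: string;
  monolith: unknown;
  service: unknown;
}

export function compareShadow(
  monolithResult: Record<string, unknown>,
  serviceResult: Record<string, unknown>,
): Discrepancy[] {
  const fields = new Set([...Object.keys(monolithResult), ...Object.keys(serviceResult)]);
  const diffs: Discrepancy[] = [];
  for (const field of fields) {
    const a = monolithResult[field];
    const b = serviceResult[field];
    // JSON comparison is crude but catches shape and value drift
    if (JSON.stringify(a) !== JSON.stringify(b)) {
      diffs.push({ field, monolith: a, service: b });
    }
  }
  return diffs;
}
```

Feed the discrepancy count into a dashboard; cutover waits until it stays at zero under real traffic.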
### Phase 4: Cut over (1 day)
1. Route 1% of traffic to new service; monitor error rates
2. Route 10% → 50% → 100% over hours if metrics are healthy
3. Keep monolith's code path available for 1 week as rollback option
4. Monitor: error rates, latency, business metrics (no drop in conversions)
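The 1% → 10% → 100% ramp needs a deterministic way to pick which requests hit the new service, so a given user stays on one side while the percentage climbs. A minimal hash-bucketing sketch (the choice of key and where the rollout percentage comes from are assumptions):

```typescript
// Sketch: deterministic canary bucketing. Hash a stable key (e.g. user ID)
// into a bucket 0–99 and route to the new service if it falls under the
// current rollout percentage.
export function routeToNewService(stableKey: string, rolloutPercent: number): boolean {
  let hash = 0;
  for (let i = 0; i < stableKey.length; i++) {
    hash = (hash * 31 + stableKey.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100 < rolloutPercent;
}
```

Because the bucket depends only on the key, raising the percentage only adds users to the new path; nobody flaps back and forth between implementations mid-ramp.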
### Phase 5: Clean up (1 week)
1. Remove extracted code from monolith
2. Remove dual-write logic
3. Delete dead code paths
4. Update documentation
## Shared Libraries vs Service APIs

```typescript
// ❌ Shared database models (creates coupling)
// @company/shared-models: both services import User, Order, etc.
// Problem: changing the User model requires coordinating both services

// ✅ Shared utility code (no coupling)
// @company/shared: pure utilities with no business logic or DB calls

// OK to share:
export { generateId } from './id';               // ID generation
export { formatCurrency } from './money';        // Currency formatting
export { validateEmail } from './validation';    // Input validation
export { logger } from './logging';              // Logging setup
export type { PaginatedResponse } from './pagination'; // Shared API types

// NOT OK to share:
// - Database models / Prisma schemas
// - Service-specific business logic
// - Shared database connections
```
## Migration Cost Estimates

| Team Size | Monolith Size | Migration Duration | Engineering Cost |
|---|---|---|---|
| 5–10 engineers | Small (<50 modules) | 3–6 months | $150K–$400K |
| 10–20 engineers | Medium (50–200 modules) | 6–12 months | $400K–$1M |
| 20–50 engineers | Large (200+ modules) | 12–24 months | $1M–$3M |
| 50+ engineers | Enterprise | 2–5 years | $3M+ |
Key insight: These costs are why you shouldn't migrate unless the business case is clear. A well-structured monolith with good CI/CD and observability handles most scaling needs up to hundreds of engineers.
## Working With Viprasol

We design and execute microservices migrations using the strangler fig pattern: from bounded context analysis through API gateway setup, event infrastructure, and data migration.
What we deliver:
- Bounded context analysis and service boundary design
- Strangler fig implementation with API gateway routing
- Event-driven communication infrastructure (SQS/SNS or Kafka)
- Database separation strategy and dual-write migration plan
- Monitoring and rollback plan for each migration phase
→ Discuss your architecture migration: software architecture and consulting
## About the Author

Viprasol Tech Team, Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.