The evolution from monolithic architectures to microservices has created a critical need for incremental migration strategies. Organizations cannot afford the luxury of a complete stop-the-world migration, especially those operating at scale with Oracle or SQL Server legacy systems. This question emerged from real-world scenarios where enterprises needed to modernize without sacrificing years of historical data integrity or accepting maintenance windows that lasted hours.
The core challenge lies in the impedance mismatch between monolithic ACID transactions spanning multiple domains and the distributed nature of microservices. When decomposing a database, you face the split-brain scenario where updates occur in both the legacy system and new services simultaneously. Maintaining referential integrity across network boundaries while keeping both systems operational creates a distributed consensus problem that cannot be solved with simple database replication.
Implement an Event-Driven Architecture utilizing Change Data Capture (CDC) with an Outbox Pattern to ensure reliable event publishing. Deploy Debezium connectors to capture row-level changes from the legacy database transaction log, streaming events to Apache Kafka as the central nervous system. Concurrently, implement the Saga Pattern in the microservices layer to handle distributed transactions, ensuring eventual consistency while maintaining operational autonomy of each service.
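As a concrete starting point, a Debezium Oracle source connector registered with Kafka Connect looks roughly like the following. Hostnames, credentials, and table names here are illustrative, and the property names follow the Debezium 2.x Oracle connector:

```json
{
  "name": "legacy-orders-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "legacy-oracle.internal",
    "database.port": "1521",
    "database.dbname": "ORCLCDB",
    "database.user": "dbzuser",
    "database.password": "<redacted>",
    "topic.prefix": "legacy",
    "table.include.list": "ORDERS.ORDER_HEADER,ORDERS.ORDER_LINE",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.legacy"
  }
}
```

Each captured table becomes a Kafka topic under the `legacy` prefix, which downstream services subscribe to independently.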
A Fortune 500 e-commerce platform needed to migrate their order management system from a decade-old Oracle monolith to PostgreSQL-based microservices. The inventory, pricing, and order fulfillment modules were tightly coupled with foreign key constraints across twelve major tables. During holiday seasons, the system processed 50,000 transactions per minute with zero tolerance for data loss or downtime.
Solution A: Dual Write Strategy
The engineering team initially considered modifying legacy application code to write simultaneously to both Oracle and the new PostgreSQL services. This approach promised simplicity by keeping writes synchronous and consistent. However, it introduced catastrophic coupling risks: if the new service experienced latency or failure, legacy write paths would stall or fail along with it. Additionally, implementing distributed transactions via the XA protocol would severely degrade performance, potentially increasing response times by 400% during peak load.
Solution B: Database Triggers and Views
Another option involved creating database triggers in Oracle that would invoke REST endpoints directly upon row modifications. This seemed attractive because it required no application changes. Yet it tightly coupled database infrastructure to network topology, making the system fragile. If the microservice endpoint were unreachable, the trigger would fail, causing the entire legacy transaction to roll back, a violation of the zero-downtime requirement. Furthermore, managing schema migrations became nearly impossible when triggers depended on specific column structures.
Solution C: Change Data Capture with Event Sourcing
The chosen architecture leveraged Debezium to monitor Oracle's redo logs, capturing every insert, update, and delete as immutable events published to Apache Kafka. The microservices consumed these events via Kafka Streams, transforming and persisting them into PostgreSQL; an Outbox-style inbox with idempotent consumers provided effectively-once processing on top of Kafka's at-least-once delivery. A Confluent Schema Registry enforced backward and forward compatibility using Avro schemas. This decoupled the legacy system from migration complexity: Oracle remained oblivious to the new architecture while services consumed events at their own pace.
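The consumer-side guarantee comes from committing the data change and the record of the consumed event id in one local transaction, so a redelivered event is detected and skipped. A minimal sketch, with SQLite standing in for PostgreSQL and invented table and event names:

```python
import sqlite3

# Idempotent CDC consumer sketch: the row upsert and the dedup record
# commit in one local transaction. Kafka delivers at-least-once, so a
# redelivered event must be recognized and skipped. SQLite stands in
# for PostgreSQL; table and field names are illustrative.

def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER)")
    conn.execute("CREATE TABLE processed_events (event_id TEXT PRIMARY KEY)")
    return conn

def apply_event(conn, event):
    """Apply one CDC event; return False if it was already processed."""
    cur = conn.cursor()
    # Duplicate-delivery check against the inbox table.
    cur.execute("SELECT 1 FROM processed_events WHERE event_id = ?",
                (event["id"],))
    if cur.fetchone():
        return False
    cur.execute("INSERT OR REPLACE INTO inventory (sku, qty) VALUES (?, ?)",
                (event["sku"], event["qty"]))
    cur.execute("INSERT INTO processed_events (event_id) VALUES (?)",
                (event["id"],))
    conn.commit()  # upsert and dedup record become visible atomically
    return True
```

Because the dedup insert shares the transaction with the upsert, a crash between the two cannot leave the event half-applied.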
Chosen solution and rationale
The team selected Solution C because it respected the Single Responsibility Principle and provided fault isolation. Unlike dual writes, the legacy system's performance remained unaffected by microservice latency. Compared to triggers, Debezium operated asynchronously without blocking transactions. The event log provided an immutable audit trail, and Kafka's retention policies allowed replaying historical data if microservices needed reprocessing during schema evolution.
Result
After an eight-month migration, the platform successfully moved 200TB of transactional data with 99.97% uptime. The system handled Black Friday traffic with 40% lower latency than the previous year. When a pricing calculation bug was discovered in the new services, the team replayed three days of events from Kafka without touching the legacy Oracle system, correcting 2.3 million records without downtime. The CDC pipeline now serves as the backbone for real-time analytics using Apache Flink.
How do you handle schema evolution when the monolith changes its table structure while microservices consume CDC events?
Candidates often suggest freezing the schema during migration, which is impractical for agile businesses. The correct approach involves the Confluent Schema Registry with Avro schemas in forward- and backward-compatibility modes. When Oracle tables change, the Debezium connector publishes events with updated schemas, and the registry enforces the compatibility rules. Services should implement the schema-on-read pattern using Avro's schema-resolution rules, ignoring unknown fields and substituting defaults for missing ones. Additionally, deploy a CQRS pattern so read models can evolve independently of the source schema, using Kafka Connect Single Message Transforms (SMTs) to flatten nested structures before they reach consumers.
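The reader-side resolution rules can be sketched in a few lines. This is a toy stand-in for Avro's actual resolution algorithm (real consumers rely on the Avro library), with invented field names:

```python
# Toy model of Avro reader-side schema resolution: unknown writer
# fields are dropped, and fields absent from the writer's record fall
# back to the reader schema's defaults. These two rules are what make
# forward and backward compatibility work. Field names are invented.

READER_SCHEMA = {"order_id": None, "amount": 0.0, "currency": "USD"}

def resolve(record, reader_schema=READER_SCHEMA):
    # Keep only fields the reader knows, filling defaults for absent ones.
    return {field: record.get(field, default)
            for field, default in reader_schema.items()}
```

An old writer that predates the `currency` column and a new writer with an extra column both resolve cleanly against the same reader schema.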
What happens when both systems update the same entity simultaneously during the transition period?
This creates a split-brain scenario that wall-clock timestamps cannot resolve reliably. Architects should use Vector Clocks to detect concurrent updates, or CRDTs (Conflict-free Replicated Data Types) to merge them deterministically. For the write-back path, deploy a bi-directional sync component that consumes microservice events and writes back to Oracle via the Kafka Connect JDBC Sink, resolving conflicts with Last-Write-Wins (LWW) semantics keyed on hybrid logical clocks rather than machine timestamps.
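A hybrid logical clock and the LWW merge it enables fit in a few lines. This is a minimal sketch with invented field names, not the case study's implementation:

```python
import time

# A hybrid logical clock (HLC) timestamp is (physical_ms, logical
# counter, node_id); Python tuple comparison gives a total order, so
# concurrent updates to the same entity resolve deterministically on
# both sides. Names are illustrative.

def hlc_now(prev, node_id, wall_ms=None):
    """Advance an HLC strictly past `prev`, tolerating clock skew."""
    wall = wall_ms if wall_ms is not None else int(time.time() * 1000)
    phys, log, _ = prev
    if wall > phys:
        return (wall, 0, node_id)
    return (phys, log + 1, node_id)  # skewed wall clock: bump the counter

def lww_merge(a, b):
    """Keep the version with the greater HLC timestamp."""
    return a if a["hlc"] >= b["hlc"] else b
```

Because the logical counter breaks ties when physical clocks disagree, a node whose wall clock lags still produces timestamps that order after what it has already seen.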
More importantly, implement Domain-Driven Design boundaries—during migration, assign sole write ownership to either the monolith or microservice per aggregate root, never both. Use Database Flags in Oracle to indicate migration state, routing write traffic accordingly through an API Gateway using the Strangler Fig Pattern.
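The single-writer rule reduces to a small routing table at the gateway. A sketch, assuming a per-aggregate migration flag mirrored from Oracle (aggregate names and handlers are invented):

```python
# Strangler-fig write routing sketch: each aggregate root has exactly
# one write owner at a time, looked up from a migration-state flag
# (mirroring the Oracle flag column). Names are illustrative.

MIGRATION_STATE = {
    "inventory": "monolith",      # not yet migrated
    "pricing": "microservice",    # already cut over
}

def route_write(aggregate, payload, monolith, microservice):
    """Dispatch a write to the aggregate's current owner."""
    owner = MIGRATION_STATE.get(aggregate, "monolith")  # default: legacy
    handler = microservice if owner == "microservice" else monolith
    return handler(aggregate, payload)
```

Flipping one flag moves ownership of an aggregate without redeploying either system, which is the essence of the incremental cutover.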
Describe the pattern for ensuring transactional integrity when a business operation spans both the legacy database and new microservices.
Most candidates incorrectly suggest distributed transactions using Two-Phase Commit (2PC) across heterogeneous systems, which creates brittle coupling and availability issues. The proper solution employs the Saga Pattern with Compensating Transactions. When a user action requires updates to both Oracle (legacy) and PostgreSQL (new), orchestrate this through a Saga Orchestrator built on Camunda or Temporal. The process executes local transactions sequentially: first update Oracle, then publish a domain event, then execute the microservice operation. If any step fails, execute compensating transactions—if the microservice commit fails, trigger a rollback event that the legacy system consumes to revert the Oracle change. This maintains eventual consistency without locking resources across network boundaries.
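The orchestration loop itself is simple; what Camunda or Temporal add is durable state and retries. A minimal in-memory sketch with invented step names:

```python
# Saga orchestrator sketch: each step pairs a local transaction with a
# compensating action. On failure, the compensations of the steps that
# completed run in reverse order; the failing step itself is never
# compensated because it never committed. Step names are illustrative.

def run_saga(steps):
    """steps: list of (action, compensate) callables. Returns success."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):  # compensate in reverse order
                undo()
            return False
    return True
```

In the migration scenario, the first step's action is the Oracle update and its compensation is the revert event the legacy system consumes.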