Transition tables, introduced in PostgreSQL 10, provide this capability through the REFERENCING clause in statement-level triggers. This mechanism exposes the entire result set of a modification as temporary table structures—NEW TABLE for INSERT/UPDATE operations and OLD TABLE for DELETE/UPDATE operations—allowing the trigger function to perform set-based processing. Instead of iterating through rows individually, you can execute a single SQL statement that operates on all affected rows simultaneously.
CREATE TRIGGER bulk_audit_trigger
    AFTER UPDATE ON inventory
    REFERENCING NEW TABLE AS updated_items OLD TABLE AS previous_items
    FOR EACH STATEMENT
    EXECUTE FUNCTION log_inventory_changes();
Inside log_inventory_changes(), querying updated_items returns all rows touched by the triggering UPDATE, enabling efficient bulk operations like INSERT INTO audit_log SELECT * FROM updated_items;.
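A minimal sketch of what log_inventory_changes() might look like; the audit_log schema and the item_id/quantity column names are assumptions for illustration, not part of the original example:

```sql
CREATE OR REPLACE FUNCTION log_inventory_changes() RETURNS trigger AS $$
BEGIN
    -- One set-based INSERT covering every row the UPDATE touched.
    -- Joining both transition tables captures before and after values.
    INSERT INTO audit_log (item_id, old_qty, new_qty, changed_at)
    SELECT n.item_id, p.quantity, n.quantity, now()
    FROM updated_items n
    JOIN previous_items p USING (item_id);
    RETURN NULL;  -- the return value is ignored for AFTER statement triggers
END;
$$ LANGUAGE plpgsql;
```

The transition table names (updated_items, previous_items) are referenced like ordinary relations inside the function body; no cursor or loop is needed.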
An e-commerce platform suffered severe performance degradation during nightly bulk price adjustments affecting 100,000 product records. The existing architecture used a row-level AFTER trigger to write audit entries, causing 100,000 sequential INSERT executions into the price_history table and exhausting connection pool resources.
Solution 1: Application-side batch processing
The team considered removing the trigger and handling audits within the Java application using JDBC batch inserts. While this would reduce database CPU load, it introduced critical consistency risks: if the application crashed mid-batch, committed price updates would permanently lack corresponding audit records, violating SOX compliance. Additionally, this approach required complex distributed transaction management between the app server and PostgreSQL.
Solution 2: Asynchronous message queue
Another proposal involved writing row identifiers to a Redis stream during the update, then processing audits via a background worker. This decoupled the write paths but sacrificed immediate transactional consistency. The asynchronous worker could lag during high load, creating temporary gaps in the audit trail that regulatory auditors would flag. Furthermore, ensuring exactly-once semantics across PostgreSQL and Redis added significant infrastructure complexity.
Solution 3: Statement-level trigger with transition tables
The selected approach replaced the row trigger with a statement-level trigger declared with REFERENCING OLD TABLE AS old_prices NEW TABLE AS new_prices. Note that both transition tables are needed here: the new-row table alone does not contain the pre-update price. The trigger function performed a single set-based operation joining the two: INSERT INTO price_history SELECT n.product_id, o.price, n.price, NOW() FROM new_prices n JOIN old_prices o USING (product_id);. This maintained strict ACID compliance within the same transaction while processing all modifications in one pass.
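Put together, the replacement looks roughly like the following; the products table name and its price column are assumptions for illustration (the original names only price_history and the transition tables):

```sql
CREATE OR REPLACE FUNCTION audit_price_changes() RETURNS trigger AS $$
BEGIN
    -- Single set-based pass over all rows touched by the UPDATE.
    INSERT INTO price_history (product_id, old_price, new_price, changed_at)
    SELECT n.product_id, o.price, n.price, now()
    FROM new_prices n
    JOIN old_prices o USING (product_id);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER price_audit_trigger
    AFTER UPDATE ON products
    REFERENCING OLD TABLE AS old_prices NEW TABLE AS new_prices
    FOR EACH STATEMENT
    EXECUTE FUNCTION audit_price_changes();
```

Because the trigger fires once per statement, a 100,000-row UPDATE produces one function call and one bulk INSERT instead of 100,000 of each.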
Result: The nightly batch completion time decreased from 45 seconds to 300 milliseconds. WAL (Write-Ahead Log) generation dropped by 90%, and the system eliminated the lock contention spikes previously caused by 100,000 per-row trigger invocations.
How do transition tables interact with BEFORE triggers and INSTEAD OF triggers on views?
Transition tables are exclusively available in AFTER triggers on ordinary tables. BEFORE triggers operate on individual rows before the statement's final result set materializes, so the complete collection of modified rows does not yet exist. INSTEAD OF triggers on views cannot use transition tables because they define alternative execution paths rather than observing the actual outcome of DML operations on underlying base tables.
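The restriction is enforced at trigger creation time, so attempting the combination fails immediately with an error along these lines (table and function names here are the hypothetical ones from the earlier example):

```sql
-- Fails: transition tables require an AFTER trigger.
CREATE TRIGGER bad_trigger
    BEFORE UPDATE ON inventory
    REFERENCING NEW TABLE AS updated_items
    FOR EACH STATEMENT
    EXECUTE FUNCTION log_inventory_changes();
-- ERROR:  transition table name can only be specified for an AFTER trigger
```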
Can the trigger function modify data within the NEW TABLE or OLD TABLE to alter the final result?
No, transition tables are read-only snapshots accessible only during trigger execution. They represent immutable views of the rows affected by the statement and cannot be modified using UPDATE, DELETE, or INSERT operations. To change values before they are committed, you must use row-level BEFORE triggers that manipulate the NEW record variable directly, or perform separate DML against permanent tables.
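A minimal sketch of that BEFORE-trigger alternative, assuming a price column on the inventory table from the earlier example:

```sql
CREATE OR REPLACE FUNCTION clamp_price() RETURNS trigger AS $$
BEGIN
    -- NEW is mutable here, unlike a transition table in an AFTER trigger.
    IF NEW.price < 0 THEN
        NEW.price := 0;
    END IF;
    RETURN NEW;  -- the (possibly modified) row proceeds through the statement
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER clamp_price_trigger
    BEFORE INSERT OR UPDATE ON inventory
    FOR EACH ROW
    EXECUTE FUNCTION clamp_price();
```

Returning the modified NEW record is what alters the final result; returning NULL instead would silently skip the row.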
Why do TRUNCATE operations not populate the OLD TABLE transition table?
Although PostgreSQL supports statement-level triggers on TRUNCATE, the command removes all rows by deallocating the table's storage rather than deleting tuples individually, so no per-row transition state exists from which to build a transition relation, and declaring REFERENCING on a TRUNCATE trigger is rejected. Auditing TRUNCATE operations requires alternative mechanisms such as a plain statement-level TRUNCATE trigger that records the event, or logical decoding streams.
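A plain statement-level TRUNCATE trigger can at least record that the event occurred, even though the removed rows themselves are unavailable; the truncate_log table here is an assumption for illustration:

```sql
CREATE OR REPLACE FUNCTION log_truncate() RETURNS trigger AS $$
BEGIN
    -- TG_TABLE_NAME identifies which table is being truncated.
    INSERT INTO truncate_log (table_name, truncated_at)
    VALUES (TG_TABLE_NAME, now());
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER inventory_truncate_audit
    BEFORE TRUNCATE ON inventory
    FOR EACH STATEMENT
    EXECUTE FUNCTION log_truncate();
```

This captures who truncated what and when; recovering the deleted row contents would require logical decoding or a prior snapshot.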