PostgreSQL WAL File Retention

Mydbops | Feb 5, 2026 | 5 Mins to Read

Behind the Scenes: How PostgreSQL Logical Replication Manages WAL File Retention

When implementing PostgreSQL logical replication, one critical detail is often overlooked: how it impacts your storage through WAL retention.

While logical replication provides granular, row-level control—allowing you to sync specific data from a publisher to multiple subscribers—it introduces a specific storage risk. If a subscriber slows down, disconnects, or fails, PostgreSQL will "pin" every WAL file that the subscriber still needs. Left unmonitored, this can lead to massive storage bloat and eventual disk exhaustion.

WAL Retention Lifecycle

[Infographic: visualizing why "archived" isn't enough to free disk space. Changes flow freely from the publisher to Sub 3 (active, caught up) and into the WAL archive, yet an archived segment still exists on the publisher. Sub 2 is lagging with a failed connection, so its slot "pins" (📍) the old WAL and stops the publisher from recycling segments, and the backlog on disk keeps growing.]

Let’s walk through a practical scenario to see this retention mechanism in action.

Step 1: Verifying Active Replication Slots

Before troubleshooting storage, we must identify the active replication channels. We use the pg_replication_slots view to check the status of our subscribers.

testdb=# SELECT slot_name, plugin, slot_type, database, active, restart_lsn, confirmed_flush_lsn FROM pg_replication_slots;

 slot_name |  plugin  | slot_type | database | active | restart_lsn | confirmed_flush_lsn 
-----------+----------+-----------+----------+--------+-------------+---------------------
 sub3      | pgoutput | logical   | testdb   | t      | 0/1F33A78   | 0/1F38A10
 sub2      | pgoutput | logical   | testdb   | t      | 0/1F389D8   | 0/1F38A10

This confirms that both logical replication slots—sub2 and sub3—are active and receiving changes as expected. Monitoring this view ensures that all subscribers are healthy and participating in the replication stream (Read: pg_replication_slots).
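
To make unhealthy slots stand out, a simple filter on the same view works. This is a minimal sketch; add whatever columns your monitoring needs:

-- List only the slots that are not currently streaming; these are the ones
-- most likely to pin WAL on the publisher
SELECT slot_name, slot_type, active, restart_lsn
FROM pg_replication_slots
WHERE NOT active;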

Step 2: Inspecting WAL Files and Archiving

During this step of the test, the pg_wal directory held the following WAL files. This is the directory where PostgreSQL keeps its active WAL segments before they are archived or recycled.

root@ip-172-31-3-181:/var/lib/postgresql/16/main/pg_wal# ls -lh
total 33M
-rw------- 1 postgres postgres  16M Feb 02 05:11 000000010000000000000002
-rw------- 1 postgres postgres  16M Feb 02 05:07 000000010000000000000003
drwx------ 2 postgres postgres 4.0K Feb 02 05:11 archive_status

Configuration context:

The size of the pg_wal directory is governed by max_wal_size and min_wal_size. Under normal circumstances, PostgreSQL recycles old WAL files to stay within these limits. However, logical replication slots can override the recycling process if a subscriber falls behind, forcing the directory to grow beyond max_wal_size until the disk is full or the subscriber catches up. (Read more on WAL Settings).
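
To see where those limits currently sit on your publisher, query pg_settings. Note that max_slot_wal_keep_size (PostgreSQL 13+, default -1 for unlimited) is the only hard cap on how much WAL a lagging slot may pin; setting it trades unbounded disk growth for the risk of invalidating the slot.

-- Inspect the WAL sizing knobs discussed above
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('max_wal_size', 'min_wal_size', 'max_slot_wal_keep_size');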

Step 3: Archived WAL Files

To determine which WAL files have been archived so far, we can look at the pg_stat_archiver view or simply browse the archive directory. Both approaches clearly show what has been stored.

SELECT archived_count, last_archived_wal, last_archived_time, failed_count
FROM pg_stat_archiver;

 archived_count | last_archived_wal        | last_archived_time  | failed_count
----------------+--------------------------+---------------------+--------------
              1 | 000000010000000000000001 | 2025-12-10 05:07:18 |            0

Configuration context:

Archiving relies on the archive_command setting (See docs: Archiving). Even if files are successfully archived to the backup location, PostgreSQL must retain the physical WAL files in the pg_wal directory as long as a replication slot requires them.
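
As a quick sanity check, the archiver configuration can also be read from pg_settings; the values will of course depend on your setup:

-- Confirm archiving is enabled and see which command ships the segments
SELECT name, setting
FROM pg_settings
WHERE name IN ('archive_mode', 'archive_command');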

Step 4: WAL Progress

We can see which WAL segment PostgreSQL is currently writing to:

SELECT pg_walfile_name(pg_current_wal_lsn());

 pg_walfile_name
------------------
 000000010000000000000002

Since PostgreSQL is currently writing to 000000010000000000000002, this segment stays in the pg_wal directory. The earlier file, 000000010000000000000001, has already been archived and removed from pg_wal, since none of the subscribers need it anymore. This shows that WAL files are being archived correctly.
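
To relate the current write position to what each subscriber still needs, pg_walfile_name() can also be applied to a slot's restart_lsn. A minimal sketch:

-- Oldest WAL segment each slot still forces the publisher to keep
SELECT slot_name,
       restart_lsn,
       pg_walfile_name(restart_lsn) AS oldest_required_segment
FROM pg_replication_slots;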

Simulating a Lagging Subscriber

To see how a single node can cause storage issues, we disabled one subscriber:

ALTER SUBSCRIPTION sub2 DISABLE;

Slot status now shows:

 slot_name | active
-----------+--------
 sub3      | t
 sub2      | f

With sub2 inactive, the publisher continues to generate new WAL files as data is written. However, it cannot discard segments from the point where sub2 stopped.
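
The gap can be quantified straight from the slot view. A sketch using pg_wal_lsn_diff(), which reports how many bytes of WAL each slot is holding back:

-- WAL retained on behalf of each slot, largest first
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots
ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;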

The Result:

Row counts illustrate the difference:


Publisher: 420006 rows

Subscriber 3 (active): 420006 rows

Subscriber 2 (inactive): 220006 rows

In the pg_wal folder, you will now see files ...03, ...04, and ...05 persisting. Even if files 03 and 04 are safely in your S3 or disk archive, the publisher keeps them in the live directory specifically for sub2.
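
On PostgreSQL 13 and later, the wal_status and safe_wal_size columns of pg_replication_slots make this pinning visible directly (safe_wal_size is only populated when max_slot_wal_keep_size is set):

-- How close each slot is to losing required WAL (PostgreSQL 13+)
SELECT slot_name, active, wal_status, safe_wal_size
FROM pg_replication_slots;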

Observing WAL Retention with a Lagging Slot

After generating additional WAL activity, we check the publisher and subscribers:

Publisher:

Current WAL file: 000000010000000000000005

Subscriber 2 (inactive):

Current WAL file: 000000010000000000000003

Subscriber 3 (active):

Current WAL file: 000000010000000000000005

Supporting output:

Archive folder:

#/var/lib/postgresql/16/archive
000000010000000000000001  000000010000000000000003
000000010000000000000002  000000010000000000000004

pg_wal folder:

# /var/lib/postgresql/16/main/pg_wal/
-rw------- 1 postgres postgres  16M Feb 02 06:04 000000010000000000000003
-rw------- 1 postgres postgres  16M Feb 02 06:05 000000010000000000000004
-rw------- 1 postgres postgres  16M Feb 02 06:09 000000010000000000000005
-rw------- 1 postgres postgres  16M Feb 02 06:09 000000010000000000000006

Observation:

The publisher retains all WAL files from 000000010000000000000003 onward, even if they are archived, until the lagging slot catches up. This explains why logical replication slots can cause WAL bloat when subscribers fall behind. For more details on how slots manage retention, see the official documentation in Replication Slots.
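
Recovery is straightforward once the lagging consumer is dealt with. The commands below sketch the two usual paths; which one applies depends on whether the subscriber is coming back.

-- Option 1: on the subscriber, re-enable sub2 so its slot can catch up
-- and the publisher can recycle the pinned WAL
ALTER SUBSCRIPTION sub2 ENABLE;

-- Option 2: if the subscriber is gone for good, drop the subscription on the
-- subscriber side, then remove the abandoned slot on the publisher
SELECT pg_drop_replication_slot('sub2');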

Key Takeaways

  1. Lagging Subscribers Retain WAL: Any subscriber that stops causes PostgreSQL to hold WAL files indefinitely, regardless of your archive settings.
  2. Monitor Your Slots: Regularly audit pg_replication_slots to ensure all subscribers are active and advancing.
  3. Disk Alerts: Set alerts for disk usage on the pg_wal partition, as replication lag can bypass standard WAL size limits (see the query below).
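
A simple way to feed that alert is to track the on-disk size of pg_wal from SQL. pg_ls_waldir() is restricted to superusers and suitably privileged monitoring roles, so treat this as a sketch to adapt to your environment:

-- Total size of the pg_wal directory, suitable for a disk-usage alert
SELECT pg_size_pretty(sum(size)) AS pg_wal_size
FROM pg_ls_waldir();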

Don't let replication lag bring your operations to a halt. Mydbops provides expert oversight to keep your data flowing and your storage lean. From monitoring and scaling with Mydbops Managed PostgreSQL to tailored replication strategies through our Consulting and 24/7 expert support Services, we ensure your production databases remain optimized and resilient.

