PostgreSQL Transaction Isolation Levels Explained

Mydbops
Mar 18, 2026
8 min read

Transactions are a core part of working with PostgreSQL, and isolation is what keeps them from stepping on each other. Isolation is the "I" in ACID. It determines how much one transaction can see of what another transaction is doing, and when. Understanding PostgreSQL transaction isolation levels is essential for any developer or DBA building reliable, concurrent database applications.

PostgreSQL supports the four isolation levels defined in the SQL standard, but implements them through Multiversion Concurrency Control (MVCC) rather than traditional locking. This makes PostgreSQL highly performant under concurrency, but knowing exactly how each isolation level behaves in practice is what separates a well-tuned PostgreSQL deployment from one that silently produces inconsistent data.

This guide covers all four PostgreSQL isolation levels (Read Uncommitted, Read Committed, Repeatable Read, and Serializable) with real two-session examples, a comparison table of concurrency anomalies, and a full section on savepoints for fine-grained transaction control. It pairs well with a deeper look at how PostgreSQL MVCC works internally.

What are transaction isolation levels in PostgreSQL?

When multiple transactions run concurrently in PostgreSQL, they can interfere with each other in three well-defined ways. The isolation level controls which of these concurrency anomalies are permitted:

  • Dirty read: reading data that another transaction has modified but not yet committed. If that transaction rolls back, the data you read never actually existed.
  • Non-repeatable read: reading the same row twice within a single transaction and getting different values, because another transaction committed a change in between.
  • Phantom read: running the same query twice and getting a different set of rows, because another transaction inserted or deleted rows in between.

Higher isolation prevents more anomalies but introduces more potential for transaction conflicts, serialization errors, and retry overhead. Choosing the right isolation level for your PostgreSQL workload is a key part of PostgreSQL performance tuning.
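For reference, the isolation level can be inspected and set per transaction or per session. The statements below are standard PostgreSQL syntax; run them in psql against any database:

```sql
-- Show the level new transactions in this session will use
SHOW transaction_isolation;

-- Set the level for a single transaction
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- ... queries ...
COMMIT;

-- Equivalent shorthand
BEGIN ISOLATION LEVEL SERIALIZABLE;
ROLLBACK;

-- Change the session-wide default
SET default_transaction_isolation = 'serializable';
```

SET TRANSACTION must run before the first query of the transaction; once a snapshot has been taken, the level can no longer be changed.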

How PostgreSQL implements isolation levels with MVCC

Unlike databases that use lock-based concurrency control, PostgreSQL uses MVCC (Multiversion Concurrency Control) to serve consistent snapshots to concurrent transactions without blocking reads. Each row version is stamped with transaction IDs, and PostgreSQL determines which version a transaction can see based on its isolation level and snapshot timestamp. This is why PostgreSQL can avoid dirty reads at every isolation level. The MVCC engine simply never exposes uncommitted row versions to other sessions.
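To make the visibility rule concrete, here is a toy Python model (an illustration under simplified assumptions, not PostgreSQL's actual tuple-visibility code): each row version carries the XID that created it and the XID that superseded it, and a snapshot sees only versions whose creating transaction committed before the snapshot was taken.

```python
from dataclasses import dataclass

@dataclass
class RowVersion:
    xmin: int        # XID of the transaction that created this version
    xmax: int        # XID that deleted/superseded it (0 = still live)
    balance: int

def visible(version, snapshot_xid, in_flight):
    """A version is visible to a snapshot if its creator committed before
    the snapshot was taken, and no such transaction has deleted it."""
    created_ok = version.xmin < snapshot_xid and version.xmin not in in_flight
    deleted = (version.xmax != 0 and version.xmax < snapshot_xid
               and version.xmax not in in_flight)
    return created_ok and not deleted

# A committed version of a row, plus an uncommitted rewrite by XID 1046.
old = RowVersion(xmin=1040, xmax=1046, balance=1000)
new = RowVersion(xmin=1046, xmax=0, balance=1500)

# A reader whose snapshot (taken while 1046 was still in flight) is 1045
# sees only the old version: no dirty read is possible.
in_flight = {1046}
assert visible(old, 1045, in_flight)
assert not visible(new, 1045, in_flight)
```

This is also why VACUUM exists: once no snapshot can see the old version any more, it becomes a dead tuple and can be reclaimed.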

[Diagram: MVCC row visibility. A reader whose snapshot was taken at XID 1045 does not see row versions written by the in-flight transaction XID 1046; VACUUM later reclaims the dead tuples.]

Setting up the test table

All session examples in this guide use the same accounts table. Run the following to set it up:

CREATE TABLE accounts
  (
     id      SERIAL PRIMARY KEY,
     balance INT
  );

INSERT INTO accounts
            (balance)
VALUES      (1000),
            (2000),
            (3000); 

Now open two separate PostgreSQL sessions, Session A and Session B. Start by verifying that both sessions see the same initial data.

Session A (time: 11:10:00)

postgres=> select * from accounts;
id | balance
----+---------
 1 |    1000
 2 |    2000
 3 |    3000
(3 rows)

Session B (time: 11:11:00)

postgres=> select * from accounts;
id | balance
----+---------                                 
 1 |    1000
 2 |    2000
 3 |    3000
(3 rows)

Both sessions see the same committed state. The examples below show how each isolation level changes what each session sees as concurrent changes happen.

1. Read Uncommitted

In the SQL standard, READ UNCOMMITTED is the weakest isolation level and explicitly permits dirty reads. PostgreSQL, however, does not truly implement this level. When a session runs SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, PostgreSQL silently treats it as READ COMMITTED (covered in the next section). This means that even at this lowest level, your transaction only ever sees data that other transactions have successfully committed.

Dirty reads simply cannot happen in PostgreSQL under any isolation level, because the MVCC engine does not expose uncommitted row versions to other transactions.

The session example below shows what the SQL standard says should happen at READ UNCOMMITTED, and why it does not apply in PostgreSQL.

Session A                                  Session B       

time: 11:12:00
postgres=> begin;                 
BEGIN
postgres=> update accounts 
           set balance = 1500 
           where id = 1;
UPDATE 1
                                           time: 11:13:00
                                           postgres=> select balance 
                                                      from accounts 
                                                      where id = 1;
                                           balance
                                           ---------
                                              1500
                                           (1 row)

                                           -- That is the SQL-standard
                                           -- outcome. In PostgreSQL this
                                           -- query returns 1000 instead:
                                           -- uncommitted data is never
                                           -- visible.

In Read Uncommitted isolation (per the SQL standard), Session B can read changes made by Session A before they are committed. If Session A rolls back, Session B has read data that never actually existed in the database. This is called a dirty read and it violates data consistency. PostgreSQL prevents this entirely through MVCC, regardless of the isolation level set.

Running PostgreSQL in production? Mydbops Managed PostgreSQL handles version upgrades, replication, configuration tuning, and 24/7 monitoring so your engineering team can focus on the application. Get Managed DB

2. Read Committed: the PostgreSQL default

READ COMMITTED is the default PostgreSQL transaction isolation level. Every SQL statement within a transaction sees only data that was committed before that statement began, not before the transaction began. This means the visibility snapshot refreshes with every new query inside the transaction.

Concurrency anomalies allowed

Non-repeatable reads and phantom reads are both possible under READ COMMITTED. Dirty reads are not possible.

Session example: non-repeatable read under Read Committed

Session A                                  Session B

time: 11:14:00                  
postgres=> begin;
BEGIN
postgres=*> select balance 
            from accounts 
            where id = 1;
balance
---------
   1000
(1 row)
                                           time: 11:16:00
                                           postgres=> begin;
                                           BEGIN
                                           postgres=*> update accounts 
                                                       set balance = 1500 
                                                       where id = 1;
                                           UPDATE 1
                                           postgres=*> commit;
                                           COMMIT



time: 11:18:00
postgres=*> select balance 
             from accounts 
             where id = 1;
balance
---------
   1500
(1 row)

-- Session A now sees 1500, even though it read 1000 earlier
-- in the same transaction. This is a non-repeatable read.

postgres=*> commit;
COMMIT

Session A reads a row, Session B updates and commits that row, and when Session A reads it again it sees the new value. This is expected and correct behaviour for READ COMMITTED. It is well-suited for general OLTP workloads where each query needs the latest committed state, but multi-step reports requiring a consistent point-in-time snapshot need a higher isolation level. Read more about PostgreSQL transaction isolation in the official documentation.

3. Repeatable Read

At the REPEATABLE READ isolation level, PostgreSQL takes a consistent snapshot of the database at the first query of the transaction (not at BEGIN itself). All queries within that transaction see data as it existed at that snapshot, regardless of any commits made by other sessions during the transaction's lifetime.

Concurrency anomalies prevented

Dirty reads and non-repeatable reads are both prevented. Phantom reads are also prevented in PostgreSQL's MVCC implementation, which is stronger behaviour than the SQL standard strictly requires for this level. Understanding how PostgreSQL handles concurrent data changes is also closely related to how it manages locks and deadlocks.

Session example: consistent snapshot under Repeatable Read

Session A                                    Session B


time: 11:21:00
postgres=> begin;
BEGIN
postgres=*> set transaction 
            isolation level 
            repeatable read;
SET
postgres=*> select balance 
            from accounts 
            where id = 2;
balance
---------
   2000
(1 row)

                

                                             time: 11:23:00
                                             postgres=> begin;
                                             BEGIN
                                             postgres=*> update accounts 
                                                         set balance = 1500 
                                                         where id = 2;
                                             UPDATE 1
                                             postgres=*> commit;
                                             COMMIT

time: 11:24:00
postgres=*> select balance 
            from accounts 
            where id = 2;
balance
---------
   2000
(1 row)

-- Session A still sees 2000, not 1500.
-- The snapshot is frozen at the point Session A began.

postgres=*> commit;
COMMIT

-- After Session A commits, a fresh query outside any transaction returns:

postgres=> select balance
           from accounts
           where id = 2;
balance
---------
   1500
(1 row)

-- Session B's committed change is now visible.

The snapshot is scoped to the transaction, not the session. Once Session A commits, the next standalone query reflects Session B's committed change. REPEATABLE READ is ideal for generating multi-step reports, financial summaries, or any workload where query consistency within a transaction matters more than seeing the absolute latest data.
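The difference between the two levels comes down to when the snapshot is taken. A small Python sketch (a deliberately simplified model, not server code) captures the timing rule: READ COMMITTED re-snapshots before every statement, while REPEATABLE READ keeps the snapshot from its first query.

```python
# Shared committed state, as if maintained by other sessions.
committed = {1: 1000}

class Txn:
    """Toy transaction: a 'snapshot' is just a frozen copy of the table."""
    def __init__(self, level):
        self.level = level
        self.snap = None

    def select(self, row_id):
        # READ COMMITTED: fresh snapshot per statement.
        # REPEATABLE READ: reuse the snapshot from the first query.
        if self.level == "read committed" or self.snap is None:
            self.snap = dict(committed)
        return self.snap[row_id]

rc = Txn("read committed")
rr = Txn("repeatable read")
first_rc, first_rr = rc.select(1), rr.select(1)

committed[1] = 1500            # another session commits an update

second_rc, second_rr = rc.select(1), rr.select(1)
assert (first_rc, second_rc) == (1000, 1500)   # non-repeatable read
assert (first_rr, second_rr) == (1000, 1000)   # stable snapshot
```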

Need help designing the right isolation strategy for your PostgreSQL workload? Mydbops consulting can audit your transaction patterns, identify concurrency risks, and recommend configuration changes. Talk to an Expert

4. Serializable: the strictest isolation level

SERIALIZABLE is the highest PostgreSQL isolation level. It guarantees that the result of any set of concurrent serializable transactions is identical to some serial execution, as if each transaction ran one at a time, in some order.

PostgreSQL implements this using Serializable Snapshot Isolation (SSI), an advanced algorithm that tracks read/write dependencies between concurrent transactions. When PostgreSQL detects a combination of dependencies that would produce a serialization anomaly, it rolls back one of the conflicting transactions with a serialization error, forcing the application to retry.

[Diagram: SSI detecting a dangerous dependency cycle. Transactions A and B each read the aggregate sums and insert a new row; A commits first, and B's commit is rejected with "ERROR: could not serialize access".]

This is fundamentally different from the traditional approach of taking explicit locks, which is why PostgreSQL's Serializable isolation performs better under concurrency than lock-based implementations. For the technical specification, see the PostgreSQL SSI documentation.

What this means for application design

Your application must be prepared to catch ERROR: could not serialize access due to concurrent update and retry the affected transaction. Serialization errors are not bugs. They are the mechanism by which SSI enforces correctness, and any application using SERIALIZABLE isolation in PostgreSQL must implement retry logic.
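A minimal Python sketch of that retry loop follows. SerializationFailure here is a stand-in for the driver's real error class (psycopg2, for example, maps SQLSTATE 40001 to psycopg2.errors.SerializationFailure), and flaky_transfer stands in for your transactional work; both names are illustrative.

```python
import random
import time

class SerializationFailure(Exception):
    """Stand-in for the driver error carrying SQLSTATE 40001."""

def with_serializable_retry(run_transaction, max_attempts=5):
    """Run a transaction, retrying on serialization failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_transaction()
        except SerializationFailure:
            if attempt == max_attempts:
                raise
            # Brief randomized backoff before retrying from the start.
            time.sleep(random.uniform(0, 0.01 * attempt))

# Demo: a transaction that fails once, then succeeds on retry.
attempts = {"n": 0}
def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise SerializationFailure("could not serialize access")
    return "committed"

assert with_serializable_retry(flaky_transfer) == "committed"
assert attempts["n"] == 2
```

The key point is that the whole transaction is rerun from the beginning, not just the failing statement: the snapshot and every read must be repeated.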

Session example: serialization failure

Session A                                      Session B

11:28:00
postgres=> select * from accounts;
id | balance
----+---------
 3 |    3000
 1 |    1500
 2 |    1500
(3 rows)


postgres=> begin;
BEGIN
postgres=*> set transaction
           isolation level
           serializable;
SET
postgres=*> select sum(balance)
           from accounts;
sum 
------
6000
(1 row)


postgres=*> update accounts
           set balance = -1500
           where id = 3;
UPDATE 1
                                               11:29:00
                                               postgres=> select * from accounts;
                                               id | balance
                                               ----+---------
                                               3 |    3000
                                               1 |    1500
                                               2 |    1500
                                                (3 rows)


                                               postgres=> begin;
                                               BEGIN
                                               postgres=*> set transaction 
                                                           isolation level 
                                                           serializable;
                                               SET
                                               postgres=*> update accounts 
                                                           set balance = 1500 
                                                           where id = 3;
11:30:00
postgres=*> commit;
COMMIT
                                               11:30:19
                                               ERROR: could not serialize access
                                                      due to concurrent update
                                               postgres=!> rollback;
                                               ROLLBACK
                                               
                                               -- Session B must be retried from the beginning.
                                               
                                               postgres=> select * from accounts;
                                               id | balance
                                               ----+---------
                                                1 |    1500
                                                2 |    1500
                                                3 |   -1500
                                                (3 rows)

PostgreSQL detected a read/write dependency conflict between the two transactions and forced Session B to retry. The final database state is consistent with serial execution.

Every transaction runs with a consistent snapshot of the database, similar to REPEATABLE READ. PostgreSQL monitors read/write dependencies among concurrent transactions. If it detects a combination of dependencies that could lead to a serialization anomaly, one of the transactions is rolled back with a serialization failure error.

Not sure if your team is handling PostgreSQL serialization errors and retry logic correctly? Mydbops Remote DBA services include ongoing query review, transaction auditing, and concurrency pattern analysis. Talk to a Certified PostgreSQL DBA

PostgreSQL isolation levels: comparison table

Isolation Level    Dirty Read     Non-Repeatable Read   Phantom Read     PostgreSQL Behavior
Read Uncommitted   Not possible   Possible              Possible         Same as Read Committed
Read Committed     Not possible   Possible              Possible         Default level
Repeatable Read    Not possible   Not possible          Not possible *   Snapshot at start
Serializable       Not possible   Not possible          Not possible     Full serializability

* PostgreSQL prevents phantom reads at Repeatable Read, which is stronger than the SQL standard requires for this level.

Savepoints in PostgreSQL

A savepoint is a named checkpoint within an active PostgreSQL transaction. If an error or unexpected result occurs after a savepoint, you can roll back to that checkpoint without abandoning the entire transaction. Everything committed before the savepoint remains intact, and savepoints give you fine-grained recovery within a longer transaction.

Savepoints are particularly valuable in complex multi-step transactions, batch processing pipelines, and application-level error handling where you want to undo one operation without losing the work done by earlier operations in the same transaction. They work at all PostgreSQL isolation levels.

[Diagram: the savepoint rescue pipeline. BEGIN, then SAVEPOINT sp1; an ERROR leaves the transaction in an aborted state; ROLLBACK TO sp1 recovers it, the failed step is retried, and the transaction COMMITs.]

Savepoint commands

SAVEPOINT sp_name;               -- create a named savepoint
ROLLBACK TO SAVEPOINT sp_name;   -- undo all changes since the savepoint
RELEASE SAVEPOINT sp_name;       -- delete the savepoint (cannot roll back to it after this)

For the full syntax reference, see the PostgreSQL SAVEPOINT documentation.

Practical savepoint example: partial rollback in a banking transaction

Starting state:

postgres=> select * from accounts;
id | balance
----+---------
 1 |    1500
 2 |    1500
 3 |   -1500
(3 rows)

Begin the transaction and create a savepoint after the first update:

postgres=> begin;
BEGIN

-- Deduct 200 from account 1
postgres=*> update accounts set balance = balance - 200 where id = 1;
UPDATE 1

-- Create a savepoint before the next two updates
postgres=*> savepoint sp1;
SAVEPOINT

Apply two more updates, one of which has a mistake:

-- Update account 2 correctly
postgres=*> UPDATE accounts SET balance = balance + 200 WHERE id = 2;
UPDATE 1

-- Accidentally update account 3 with a wrong value
postgres=*> UPDATE accounts SET balance = balance + 99999 WHERE id = 3;
UPDATE 1

postgres=*> SELECT * FROM accounts; 
id | balance
----+---------
 1 |    1300
 2 |    1700
 3 |   98499
(3 rows)

-- Account 3 is wrong. Roll back only to sp1.

Roll back to the savepoint and verify what was preserved:

postgres=*> ROLLBACK TO SAVEPOINT sp1;
ROLLBACK

postgres=*> SELECT * FROM accounts; 
id | balance
----+---------
 2 |    1500
 3 |   -1500
 1 |    1300
(3 rows)

-- Account 1 deduction (-200) is preserved.
-- Accounts 2 and 3 updates are fully undone.

Redo the account 2 update correctly and commit:

postgres=*> UPDATE accounts SET balance = balance + 200 WHERE id = 2;
UPDATE 1

postgres=*> COMMIT;
COMMIT

postgres=> SELECT * FROM accounts;
id | balance
----+---------
 3 |   -1500
 1 |    1300
 2 |    1700
(3 rows)

Account 1's deduction was preserved through the rollback to sp1. Only the changes after sp1 were undone. The erroneous update to account 3 was reversed without touching the rest of the transaction.

Key rules for PostgreSQL savepoints

  • Savepoints only exist inside an active transaction (BEGIN ... COMMIT or BEGIN ... ROLLBACK).
  • You can create multiple savepoints within a single transaction, each with a unique name.
  • ROLLBACK without a savepoint name cancels the entire transaction, not just the last savepoint.
  • After RELEASE SAVEPOINT, you can no longer roll back to that savepoint. It is deleted.
  • Savepoints work at all isolation levels, including SERIALIZABLE.
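The rules above can be modeled with a tiny in-memory Python sketch (an illustration of savepoint semantics, not the server implementation): a savepoint records the transaction's pending state, ROLLBACK TO restores it, and RELEASE merely forgets the name.

```python
class Txn:
    """Toy transaction over a dict of account balances."""
    def __init__(self, committed):
        self.state = dict(committed)
        self.savepoints = {}

    def update(self, key, delta):
        self.state[key] += delta

    def savepoint(self, name):
        self.savepoints[name] = dict(self.state)   # record pending state

    def rollback_to(self, name):
        self.state = dict(self.savepoints[name])   # undo work after it

    def release(self, name):
        del self.savepoints[name]   # can no longer roll back to it

# Mirror the banking example above.
txn = Txn({1: 1500, 2: 1500, 3: -1500})
txn.update(1, -200)            # keep this
txn.savepoint("sp1")
txn.update(2, +200)            # would be correct...
txn.update(3, +99999)          # ...but this one is wrong
txn.rollback_to("sp1")         # undo everything after sp1
assert txn.state == {1: 1300, 2: 1500, 3: -1500}
```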

For connection-level transaction management at scale, savepoints often work in conjunction with connection pooling. See our guide on PostgreSQL connection pooling with PgBouncer for how to structure your connection strategy effectively.

Concerned about data integrity, privilege escalation, and access control in your PostgreSQL environment? Mydbops Security Audit covers encryption, audit logging, role management, and transaction-level access patterns. Request Audit

Choosing the right PostgreSQL isolation level

Most PostgreSQL applications operate correctly and efficiently at READ COMMITTED. It prevents dirty reads while allowing maximum concurrency, making it the right default for OLTP workloads, API backends, and most transactional use cases.

Switch to REPEATABLE READ when you need a stable, point-in-time view of data across multiple queries within a transaction. Generating a multi-table financial report, running an export job, or performing analytics that span several related queries are all good examples.

Use SERIALIZABLE when the business logic of your application genuinely requires that concurrent transactions cannot interfere, such as double-spend prevention, inventory reservation systems, or financial ledger operations. Make sure your application implements retry logic for serialization errors before deploying at this level.

Avoid defaulting to higher isolation levels for performance-sensitive, high-concurrency workloads unless correctness genuinely requires it. The overhead of SSI dependency tracking and the cost of serialization error retries can be significant at scale.

Running PostgreSQL in production?

Mydbops offers fully managed PostgreSQL services with expert support, proactive monitoring, and performance tuning built in.

