Online DDL in TiDB 

Mydbops | Jan 31, 2026 | 7 Mins to Read

Online Data Definition Language (DDL) is essential for maintaining modern databases. It enables administrators to modify the database schema without triggering extended outages or heavy locking mechanisms that interrupt daily operations. Practically, this ensures the database remains online, processing incoming reads and writes while structural changes occur in the background.

While traditional databases often lock tables during these operations, TiDB—a distributed, MySQL-compatible HTAP database—takes a different approach. It utilizes Online DDL, a system designed to apply schema changes without blocking read or write traffic.

In this post, we will look at the technical workings of Online DDL in TiDB.

TiDB’s DDL Architecture

Online DDL in TiDB is not handled by a single process but is orchestrated by a few major components working in unison across the cluster. You can read the official TiDB DDL Introduction for a high-level overview, but here is the architectural breakdown:

TiDB Cluster Orchestration

  • DDL Owner: single node coordinating all changes
  • Job Queue: persistent job tracking in system tables
  • Workers: distributed nodes handling data chunks
1. DDL Owner

TiDB selects one node in the cluster to act as the DDL Owner. This node coordinates all schema changes. Only the owner node can execute DDL jobs to prevent different nodes from having conflicting schema versions.
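
To check whether the TiDB instance your session is connected to currently holds the owner role, you can use the built-in TIDB_IS_DDL_OWNER() function; a minimal sketch:

SELECT TIDB_IS_DDL_OWNER();  -- returns 1 on the DDL owner node, 0 elsewhere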

2. DDL Jobs

When a DDL command runs, TiDB generates a "DDL job" record in its internal system tables (mysql.schema_version and mysql.tidb_ddl_job). This record tracks the job as it progresses through different stages.

DDL Job Record Structure

| Field Name   | Description                                                 |
|--------------|-------------------------------------------------------------|
| job_id       | The unique identifier assigned to the task.                 |
| type         | The specific operation type (e.g., ADD INDEX, DROP COLUMN). |
| schema_state | The current processing phase of the DDL operation.          |
| start_time   | Timestamp recording when the DDL operation began.           |
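
Because each job record carries fields such as its type and state, you can filter the job list on them. A minimal sketch (sbtest1 is the example table used later in this post; the WHERE clause here filters on the output column names):

ADMIN SHOW DDL JOBS 5 WHERE state != 'synced';
ADMIN SHOW DDL JOBS 10 WHERE table_name = 'sbtest1';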

3. Background Reorganization Workers

Once a job reaches the reorganization phase, background workers take over. They process data in chunks—handling tasks like index building and schema propagation—while keeping data consistent.

(Illustration: continuous chunk scan, automatic background data reorganization.)
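
How aggressively these workers backfill can be tuned through system variables; a minimal sketch, assuming you are willing to trade more cluster load for a faster reorg (defaults vary by version):

SET GLOBAL tidb_ddl_reorg_worker_cnt = 8;    -- concurrent backfill workers
SET GLOBAL tidb_ddl_reorg_batch_size = 1024; -- rows processed per batch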

You can view active background DDL operations using the admin command:

→ admin show ddl

tidb:4000> admin show ddl \G 
--------------
admin show ddl
--------------

*************************** 1. row ***************************
   SCHEMA_VER: 87
     OWNER_ID: 65785f73-d647-4049-b3bb-d50ab9c6c87a
OWNER_ADDRESS: 127.0.0.1:4000
 RUNNING_JOBS: ID:133, Type:add index, State:running, SchemaState:write reorganization, SchemaID:125, TableID:127, RowCount:0, ArgLen:0, start time: 2026-01-14 23:07:05.835 +0530 IST, Err:<nil>, ErrCount:0, SnapshotVersion:463578654620516358, Version: v2, UniqueWarnings:0
      SELF_ID: 65785f73-d647-4049-b3bb-d50ab9c6c87a
        QUERY: alter table sbtest1 add index idx_k (k)
1 row in set (0.13 sec)

tidb:4000>

Output Breakdown:

  • OWNER_ADDRESS: Shows which node is the DDL owner.
  • RUNNING_JOBS: Displays the current job ID, Type, and State (e.g., write reorganization).
  • QUERY: The SQL statement being processed.

To see a history of current and past DDL jobs, run:

→ ADMIN SHOW DDL JOBS;

tidb:4000> ADMIN SHOW DDL JOBS ;
--------------
ADMIN SHOW DDL JOBS
--------------

+--------+---------+------------+---------------+--------------+-----------+----------+-----------+----------------------------+----------------------------+----------------------------+--------+-------------+
| JOB_ID | DB_NAME | TABLE_NAME | JOB_TYPE      | SCHEMA_STATE | SCHEMA_ID | TABLE_ID | ROW_COUNT | CREATE_TIME                | START_TIME                 | END_TIME                   | STATE  | COMMENTS    |
+--------+---------+------------+---------------+--------------+-----------+----------+-----------+----------------------------+----------------------------+----------------------------+--------+-------------+
|    135 | sbtest  | sbtest1    | add index     | public       |       125 |      127 |   1165864 | 2026-01-14 23:12:42.284000 | 2026-01-14 23:12:42.335000 | 2026-01-14 23:12:51.083000 | synced | ingest, DXF |
|    134 | sbtest  | sbtest1    | drop index    | none         |       125 |      127 |         0 | 2026-01-14 23:12:22.933000 | 2026-01-14 23:12:23.134000 | 2026-01-14 23:12:23.434000 | synced |             |
|    133 | sbtest  | sbtest1    | add index     | public       |       125 |      127 |   1165864 | 2026-01-14 23:07:05.835000 | 2026-01-14 23:07:05.986000 | 2026-01-14 23:07:15.283000 | synced | ingest, DXF |
|    132 | sbtest  | sbtest1    | drop index    | none         |       125 |      127 |         0 | 2026-01-14 23:07:01.963000 | 2026-01-14 23:07:02.085000 | 2026-01-14 23:07:02.383000 | synced |             |
|    131 | sbtest  | sbtest1    | add index     | public       |       125 |      127 |   1165864 | 2026-01-14 23:06:34.435000 | 2026-01-14 23:06:34.536000 | 2026-01-14 23:06:42.933000 | synced | ingest, DXF |
|    130 | sbtest  | sbtest1    | drop index    | none         |       125 |      127 |         0 | 2026-01-14 23:06:27.634000 | 2026-01-14 23:06:27.741000 | 2026-01-14 23:06:28.033000 | synced |             |
|    129 | sbtest  | sbtest1    | add index     | public       |       125 |      127 |   1165864 | 2026-01-14 23:05:23.983000 | 2026-01-14 23:05:24.084000 | 2026-01-14 23:05:34.083000 | synced | ingest, DXF |
|    128 | sbtest  | sbtest1    | create table  | public       |       125 |      127 |         0 | 2026-01-14 23:03:37.284000 | 2026-01-14 23:03:37.384000 | 2026-01-14 23:03:37.483000 | synced |             |
|    126 | sbtest  |            | create schema | public       |       125 |        0 |         0 | 2026-01-14 23:03:32.984000 | 2026-01-14 23:03:33.091000 | 2026-01-14 23:03:33.183000 | synced |             |
|    124 | sbtest  |            | drop schema   | none         |       110 |        0 |         0 | 2026-01-14 23:03:24.986000 | 2026-01-14 23:03:25.134000 | 2026-01-14 23:03:25.433000 | synced |             |
+--------+---------+------------+---------------+--------------+-----------+----------+-----------+----------------------------+----------------------------+----------------------------+--------+-------------+
10 rows in set (0.01 sec)


This lists the Job ID, Table Name, State (like synced), and the Schema State (like public), providing a clear view of the operation's history.

How TiDB Performs Schema Changes

TiDB uses a Three-Phase Protocol to handle Online DDL:

  1. Prepare: TiDB updates the metadata and readies the cluster for the change. It ensures the modification is propagated across all nodes.
  2. Reorganize: The actual data changes happen here. TiDB ensures the database stays operational for other reads and writes during this process.
  3. Commit: Once reorganization finishes, the schema change is finalized. Metadata is updated to make the changes active.

How Online DDL Progresses Through Its States

When a DDL executes, TiDB does not immediately lock the table. Instead, it moves through a specific sequence of states to allow queries to continue running.

The typical state progression (for operations like ADD INDEX) is:
None → Delete_only → Write_only → Write_reorg → Public

TiDB DDL Lifecycle: None (Job Initialization) → Delete Only (Safety Phase) → Write Only (Schema Shift) → Write Reorg (Data Backfill) → Public (Fully Online)

  • None: The job is created, but no schema changes are visible.
  • Delete_only: Reads and writes to unaffected fields continue. For the target columns, only delete operations are processed; no data transformation happens yet.
  • Write_only: Insert and Update operations are allowed if they match the new schema. Heavy background work hasn't started yet.
  • Write_reorg: Background workers read data in batches. Indexes are built or rows updated incrementally. New writes are captured and merged on the fly. Note: TiDB scans and reorganizes data batch by batch rather than locking the whole table.
  • Public: Reorganization is finished. The schema is fully usable, and the job is marked complete.
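
You can watch these transitions by re-running the job listing while a reorg-type DDL is in flight; a minimal sketch (the SCHEMA_STATE column moves through delete only, write only, write reorganization, and finally public):

ADMIN SHOW DDL JOBS 1;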

Execution of Online DDL Changes in TiDB

1. Column Addition

During column addition, TiDB avoids table locks and ensures continuous availability.

Example: ALTER TABLE EMPLOYEE ADD COLUMN DEPT VARCHAR(300);

While executing the above DDL, TiDB goes through the phases below.

  • Prepare: TiDB updates the table’s metadata to include the new column definition. This change propagates across all nodes in the cluster to ensure consistency.
  • Reorganize: The new column is added quietly in the background, with TiDB gradually filling in the column values for existing rows. Throughout this process, the table remains fully available for both reads and writes, and any new data written to the table already includes the new column.
  • Commit: Once the backfill finishes, TiDB completes the schema change. It updates the metadata to reflect the new structure, making the new column fully active and visible for all future operations.
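
To make "fully available" concrete, here is a minimal sketch, assuming a hypothetical EMPLOYEE table with ID and NAME columns: DML from another session keeps working while the ALTER runs.

-- Session 1: the online schema change
ALTER TABLE EMPLOYEE ADD COLUMN DEPT VARCHAR(300);

-- Session 2, while the job is still running: existing reads and writes are not blocked
INSERT INTO EMPLOYEE (ID, NAME) VALUES (1001, 'Asha');
SELECT ID, NAME FROM EMPLOYEE WHERE ID = 1001;

-- Once the job reaches the Public state, the new column is visible to every session
SELECT ID, NAME, DEPT FROM EMPLOYEE WHERE ID = 1001;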

2. Index Creation

While an index is being created, TiDB ensures that regular queries and data changes continue to run smoothly without interruption.

Example: ALTER TABLE EMPLOYEE ADD INDEX idx_dept (DEPT);

During index creation, TiDB goes through the phases below.

  • Prepare: TiDB gets ready to create the index by updating the metadata and making sure the change is properly distributed across the cluster.
  • Reorganize: TiDB builds the index quietly in the background, gradually processing existing data to fill the index. During this time, reads and writes continue as usual without interruption.
  • Commit: TiDB finalizes the new index and makes it active. The metadata is updated, and the index becomes available to optimize queries, improving performance, all without causing any downtime.
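
A minimal sketch of the end-to-end flow for the example above, assuming the hypothetical EMPLOYEE table and that ingest-based index backfill (fast reorg) is available, as it is by default in recent TiDB versions:

SET GLOBAL tidb_ddl_enable_fast_reorg = ON;  -- ingest-based index backfill

ALTER TABLE EMPLOYEE ADD INDEX idx_dept (DEPT);

-- Confirm the optimizer can use the new index once the job is public
EXPLAIN SELECT ID, NAME FROM EMPLOYEE WHERE DEPT = 'Engineering';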

DXF (Distributed Execution Framework) in TiDB

DXF (Distributed eXecution Framework) is TiDB’s internal framework that allows heavy operations such as DDL reorganization, index backfilling, and table scans to be distributed across multiple TiDB nodes instead of being handled by a single node. Without DXF, DDL operations on large tables can be slow because all of the reorganization work lands on one node. With DXF, tasks are split by key range: each participating node processes its portion in parallel, and the results are merged safely by the DDL owner.
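
A minimal sketch of enabling and verifying the framework, assuming TiDB v7.1 or later where the tidb_enable_dist_task variable is available (it is enabled by default in recent versions):

SET GLOBAL tidb_enable_dist_task = ON;
SHOW GLOBAL VARIABLES LIKE 'tidb_enable_dist_task';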

Read more: TiDB Distributed Execution Framework

Monitoring and Managing DDL Jobs

TiDB allows you to monitor jobs and control them (pause, resume, or cancel) if they impact performance.

Viewing DDL Job Status

The ADMIN SHOW DDL command provides real-time insights into ongoing DDL operations, including schema versions, DDL owners, and currently running jobs.

Example:

ADMIN SHOW DDL\G

Listing DDL Jobs

The ADMIN SHOW DDL JOBS command retrieves a list of DDL jobs with details such as job ID, table name, job type, execution state, and timestamps.

Example:

ADMIN SHOW DDL JOBS;

Pausing a DDL job

Sometimes you don’t want to cancel a DDL outright; you just want to put its background work on hold temporarily, for example:

  • During peak traffic hours
  • When the cluster is under heavy load

In that case, you can pause the job:

Example: ADMIN PAUSE DDL JOBS 133;

Cancelling a DDL Job

If a job should not continue at all, whether running or paused, you can cancel it entirely.

Example: ADMIN CANCEL DDL JOBS 133;

After you cancel a DDL, it stops and any progress made during reorganization is safely rolled back, leaving the table in a consistent, usable state.

Resuming a DDL job

You can resume a paused DDL at any time using the command below.

Example: ADMIN RESUME DDL JOBS 133;

After resuming, TiDB continues the DDL from where it left off. There is no need to restart the operation or rebuild progress; the job proceeds through reorganization and eventually completes.

Real-World Example of DDL Job Management

Scenario: Adding an Index to a Table

Step 1: Add an Index to the Table

ALTER TABLE ddl_test.sbtest1 ADD INDEX idx_created_at (created_at);

(Execution time may vary based on table size.)

Step 2: Pause the DDL Job

ADMIN PAUSE DDL JOBS 140;

Output:

+--------+------------+
| JOB_ID | RESULT     |
+--------+------------+
| 140    | successful |
+--------+------------+

(The job with ID 140 is paused successfully.)

Step 3: View Paused Jobs

ADMIN SHOW DDL JOBS;

Output:

| JOB_ID | DB_NAME  | TABLE_NAME | JOB_TYPE  | SCHEMA_STATE         | CREATE_TIME         | STATE  |
|--------|----------|------------|-----------|----------------------|---------------------|--------|
| 140    | ddl_test | sbtest1    | add index | write reorganization | 2025-03-24 10:33:19 | paused |

Step 4: Resume the DDL Job

ADMIN RESUME DDL JOBS 140;

Output:

+--------+------------+
| JOB_ID | RESULT     |
+--------+------------+
| 140    | successful |
+--------+------------+

(The job resumes execution successfully.)

Step 5: Verify Completion

ADMIN SHOW DDL JOBS;

Output:

| JOB_ID | DB_NAME  | TABLE_NAME | JOB_TYPE  | SCHEMA_STATE | CREATE_TIME         | STATE  |
|--------|----------|------------|-----------|--------------|---------------------|--------|
| 140    | ddl_test | sbtest1    | add index | public       | 2025-03-24 10:33:19 | synced |

Step 6: Cancel the DDL Job

ADMIN CANCEL DDL JOBS 145;

Output:

+--------+------------+
| JOB_ID | RESULT     |
+--------+------------+
| 145    | successful |
+--------+------------+

Verify Cancellation Status

ADMIN SHOW DDL JOBS;

Output (Before Completion):

| JOB_ID | DB_NAME | TABLE_NAME | JOB_TYPE   | SCHEMA_STATE | CREATE_TIME         | STATE      |
|--------|---------|------------|------------|--------------|---------------------|------------|
| 145    | sbtest  | trx_test   | add column | delete only  | 2025-03-24 14:47:30 | cancelling |

Output (After Completion):

| JOB_ID | DB_NAME | TABLE_NAME | JOB_TYPE   | SCHEMA_STATE | CREATE_TIME         | STATE         |
|--------|---------|------------|------------|--------------|---------------------|---------------|
| 145    | sbtest  | trx_test   | add column | none         | 2025-03-24 14:47:30 | rollback done |

How TiDB Avoids Blocking While Reorganizing 

TiDB keeps the database running during schema changes by using:

  • Versioned Schema Access: Multiple schema versions exist briefly to keep reads consistent.
  • Chunked Background Reorg: Tables are updated in small pieces to save resources.
  • Capture & Merge Writes: New writes are logged and merged into the updated data.
  • Transactional Safety: Ensures all changes are coordinated and safe across the cluster.

Online DDL in TiDB allows for seamless schema changes without blocking reads or writes, ensuring zero downtime even on large distributed tables. Its state-based orchestration and background reorganization make schema evolution safe and efficient.

Managing distributed databases requires specific expertise. Mydbops has the team to help optimize your Online DDL strategies and provide 24/7 support for your TiDB cluster.
