PostgreSQL DDL (modify column type) lock & downtime concept

Hi team,

I have some questions about PostgreSQL DDL behavior (modifying a column type) that I would like to consult the team about.
I have created a sample table and data to better describe the questions.

PostgreSQL version: 14.7, community
Table schema:

CREATE TABLE IF NOT EXISTS public.orders_alter (
    o_orderdate timestamp without time zone,
    o_orderkey numeric NOT NULL,
    o_custkey numeric,
    o_orderpriority character(15) COLLATE pg_catalog."default",
    o_shippriority numeric,
    o_clerk character(15) COLLATE pg_catalog."default",
    o_orderstatus character(1) COLLATE pg_catalog."default",
    o_totalprice numeric,
    o_comment character varying(79) COLLATE pg_catalog."default",
    test_col_1 int,
    CONSTRAINT orders_alter_pkey PRIMARY KEY (o_orderkey)
);

Table size:

schema_table_name    | relkind | table_bytes | toast_bytes | index_bytes | total_bytes

public.orders_alter | r | 43 GB | 8192 bytes | 7905 MB | 51 GB

Q1: I tried to alter the type of column test_col_1 from int to bigint.

tpch=> ALTER TABLE public.orders_alter ALTER COLUMN test_col_1 TYPE bigint;

It needs an AccessExclusiveLock for the whole duration of the DDL (all other read and write transactions on the table are blocked),
and it also triggers a full table rewrite, which is expected because int and bigint are not binary compatible, right~?
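(For context: even when the rewrite itself is unavoidable, one mitigation for the blocking is to set lock_timeout before the DDL, so the ALTER fails fast instead of queuing behind long-running transactions and blocking every new query while it waits. A minimal sketch, with an illustrative timeout value:)

```sql
-- While the ALTER waits for its AccessExclusiveLock, it also blocks every
-- new query on the table, so a short lock_timeout limits the damage.
SET lock_timeout = '5s';  -- illustrative value, tune to your workload
ALTER TABLE public.orders_alter ALTER COLUMN test_col_1 TYPE bigint;
-- If it times out, retry during a quieter period.
```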

Q2: From my research, the only way to reduce the downtime (lock) when modifying a column type on a huge PostgreSQL table is:
add a new column with the new data type → sync the data → switch to the new column. Is that correct~?
Ref: This great blog -

May I know if there is any other possible solution to reduce the lock downtime (without needing to sync data and switch columns)~?
(From my tests, MySQL 5.7 also locks the table for the whole duration of a column-type-modifying DDL.)
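For reference, the add-column/sync/switch approach I mean would look roughly like this (a sketch; the trigger, column name, and batch range are illustrative, not taken from the blog):

```sql
-- 1. Add the new column (fast: metadata-only, no rewrite).
ALTER TABLE public.orders_alter ADD COLUMN test_col_1_new bigint;

-- 2. Keep new writes in sync with a trigger (illustrative names).
CREATE OR REPLACE FUNCTION orders_alter_sync() RETURNS trigger AS $$
BEGIN
    NEW.test_col_1_new := NEW.test_col_1;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_alter_sync_trg
    BEFORE INSERT OR UPDATE ON public.orders_alter
    FOR EACH ROW EXECUTE FUNCTION orders_alter_sync();

-- 3. Backfill existing rows in small batches to keep row locks short, e.g.:
UPDATE public.orders_alter
   SET test_col_1_new = test_col_1
 WHERE o_orderkey BETWEEN 1 AND 10000
   AND test_col_1_new IS DISTINCT FROM test_col_1;

-- 4. Switch over in one short transaction
--    (brief AccessExclusiveLock, but no rewrite).
BEGIN;
ALTER TABLE public.orders_alter DROP COLUMN test_col_1;
ALTER TABLE public.orders_alter RENAME COLUMN test_col_1_new TO test_col_1;
COMMIT;
```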

Q3: Regarding changing a string column's length from character(15) to character(30):

tpch=> ALTER TABLE public.orders_alter ALTER COLUMN o_clerk TYPE character(30);

It should be binary compatible, right~?
But in my lab there is still an AccessExclusiveLock for the whole duration of the DDL, and a table rewrite as well.
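One way to confirm whether a table rewrite actually happened is to compare pg_class.relfilenode before and after the DDL; a changed relfilenode means the table was written into a new on-disk file (sketch):

```sql
-- Before the ALTER:
SELECT relfilenode FROM pg_class WHERE relname = 'orders_alter';
-- Run the ALTER, then check again:
SELECT relfilenode FROM pg_class WHERE relname = 'orders_alter';
-- A changed relfilenode means a full rewrite, which also explains the
-- temporary ~2x disk usage (old file + new file) during the DDL.
```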

tpch=> SELECT pg_class.relname, pg_locks.locktype, pg_locks.pid, pg_locks.mode, pg_locks.granted
FROM pg_locks INNER JOIN pg_class ON pg_locks.relation = pg_class.oid
WHERE relkind IN ('r') AND relname !~ '^pg_' AND relname <> 'active';
relname | locktype | pid | mode | granted
orders_alter | relation | 5845 | AccessExclusiveLock | t

The DB size increases by around 40~50 GB during the DDL, then returns to the original size after the DDL finishes.

Is this normal~? Or is there any other way to modify a string column's length with less lock downtime~?

Highly appreciate it.




is there any other possible solution to reduce the lock downtime (no need to sync data & switch column)

There is another way that guarantees no downtime, but as with all no-downtime solutions it involves more work. The method uses table inheritance.

  • Create an updatable view, which acts as the parent.
  • Attach the original table in question as the first child.
  • Attach a second child table with the desired definition changes.

Add the necessary triggers to the parent view for INSERT and UPDATE, such that all new or updated records are placed in the newer child table.

Create and execute a background process that migrates the data from the original table to the newer one in small batches.
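A rough sketch of the setup described above, with illustrative names (here the "parent" is implemented as a view over the old and new tables with an INSTEAD OF trigger, which is a simplified stand-in for the inheritance wiring; the batch size is arbitrary):

```sql
-- New table with the desired definition change (illustrative names).
CREATE TABLE public.orders_alter_new (LIKE public.orders_alter INCLUDING ALL);
ALTER TABLE public.orders_alter_new ALTER COLUMN test_col_1 TYPE bigint;

-- The "parent": a view that applications read and write through.
CREATE VIEW public.orders_v AS
    SELECT * FROM public.orders_alter
    UNION ALL
    SELECT * FROM public.orders_alter_new;

-- Route all new writes to the new table.
CREATE OR REPLACE FUNCTION orders_v_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO public.orders_alter_new VALUES (NEW.*);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_v_insert_trg
    INSTEAD OF INSERT ON public.orders_v
    FOR EACH ROW EXECUTE FUNCTION orders_v_insert();

-- Background batch mover: run repeatedly until the old table is empty.
-- The DELETE ... RETURNING + INSERT pair is one atomic statement.
WITH moved AS (
    DELETE FROM public.orders_alter
     WHERE o_orderkey IN (SELECT o_orderkey FROM public.orders_alter
                           ORDER BY o_orderkey LIMIT 1000)
    RETURNING *
)
INSERT INTO public.orders_alter_new SELECT * FROM moved;
```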

Please refer here for more information.

Hope this helps.
