Logical replication help on Postgres DB

Hello Gurus,

We have some large tables that were created a long time ago WITH OIDS. Are there any replication methods/tools available to copy large tables (within the same DB) without a performance hit? Once the new tables are caught up, I can take a short downtime to swap the large tables for the new ones (without OIDs).

The end goal is to upgrade from version 11.9 to 12.4. Right now, OIDs are blocking the upgrade, since tables WITH OIDS are not supported in version 12. I came up with a script to get rid of the OIDs (as we are not using them) and run it in parallel, but we have some big tables and this process alone takes 4 to 6 hours. We can’t afford more than 1 hour of downtime.
All my script does is run the following for each table:

ALTER TABLE ONLY <table_name> SET WITHOUT OIDS;
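In case it helps anyone with the same problem, the remaining tables WITH OIDS can be enumerated from the catalog on 11.x (the pg_class.relhasoids column exists through version 11 and was removed in 12). A minimal query sketch:

SELECT c.oid::regclass AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relhasoids                -- true only for tables created WITH OIDS
  AND c.relkind = 'r'             -- ordinary tables
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');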

This is a busy DB with 24x7 availability. Any help in this regard is appreciated.

Thanks
-Bheem


Regarding your question

Are there any replication methods/tools available for large tables (within the same DB) without any performance hit?

The regular approach is to capture all changes to a log table using a trigger (CDC, change data capture) while the required data is copied over to a new table, and then apply the captured changes to the new table. Extensions like pg_repack work with the same logic. I recently blogged about how it works internally: Understanding pg_repack: What Can Go Wrong - and How to Avoid It
You could use a similar approach to get almost-zero-downtime swapping of tables, but it definitely has its own cost and performance implications.
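To make the idea concrete, here is a minimal sketch of the trigger-based CDC pattern. All names are hypothetical, and this omits things a production migration needs (the locking that avoids losing changes between trigger creation and the copy, batching, conflict handling), which is exactly what pg_repack takes care of for you:

-- Log table capturing changes to big_table while it is being copied
CREATE TABLE big_table_log (
    id       bigserial PRIMARY KEY,
    op       "char",            -- 'I', 'U' or 'D'
    row_data big_table          -- full row image, using the table's row type
);

CREATE FUNCTION big_table_log_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO big_table_log (op, row_data) VALUES ('D', OLD);
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO big_table_log (op, row_data) VALUES ('U', NEW);
    ELSE
        INSERT INTO big_table_log (op, row_data) VALUES ('I', NEW);
    END IF;
    RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER big_table_cdc
AFTER INSERT OR UPDATE OR DELETE ON big_table
FOR EACH ROW EXECUTE FUNCTION big_table_log_fn();

-- Copy the existing rows; the new table comes out WITHOUT OIDS,
-- since LIKE does not carry the WITH OIDS property over
CREATE TABLE big_table_new (LIKE big_table INCLUDING ALL);
INSERT INTO big_table_new SELECT * FROM big_table;

-- Finally: replay big_table_log into big_table_new in id order until
-- caught up, then swap the two tables with ALTER TABLE ... RENAME
-- inside one short transaction.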
Regards,
Jobin.
