Postgres 13.2 disk was full, postgres: archiver failed

The disk was full. Space was added and Postgres was started, but archiving is stuck: WAL files are being retained in pg_wal on the primary, and .ready files are accumulating in pg_wal/archive_status. We have a streaming replication setup with replication slots, and replication appears to be working fine.
Output of ps -ef | grep post on the primary:

postgres 4241 4225 0 00:01:15 postgres: archiver failed on 000000020000001F00000033

SELECT * FROM pg_stat_archiver is not getting updated and is stuck on last_failed_wal | 000000020000001F00000033
How can we resolve this issue?
Any help will be appreciated.


Is this the proper forum to post this?


Hi Kumar - yes, it’s the correct forum.

Is this still happening? If so, can you check whether the Postgres user has the correct permissions for the archive location?


Since WAL archiving is failing, the WAL segments will be retained until the problem is fixed. Please check your archive_command.
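If it helps, the quickest check is SHOW archive_command; in psql; the setting can also be pulled straight out of postgresql.conf. A small self-contained sketch (the conf file contents and the archive command here are made up for the demo, not taken from your setup):

```shell
# Write a throwaway postgresql.conf with a hypothetical archive_command.
tmp=$(mktemp -d)
cat > "$tmp/postgresql.conf" <<'EOF'
archive_mode = on
archive_command = 'test ! -f /backup/%f && cp %p /backup/%f'
EOF

# Extract the archive_command value (the part between the single quotes).
cmd=$(sed -n "s/^archive_command *= *'\(.*\)'/\1/p" "$tmp/postgresql.conf")
echo "$cmd"   # test ! -f /backup/%f && cp %p /backup/%f
rm -rf "$tmp"
```

Whatever that command turns out to be, running it by hand (with %p and %f substituted for the failing segment) usually shows the exact error the archiver is hitting.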


Along with the above, verify that the WAL file in question, ‘000000020000001F00000033’, is present in the pg_wal/ directory. As you mentioned, the issue started after the disk got full, so the most probable cause is that PostgreSQL was not able to save the WAL file, and now the archiver requires that file, along with the corresponding .ready file in pg_wal/archive_status, to proceed further.


@muhammad.usama The 000000020000001F00000033 file is in pg_wal with 16MB size, and the copy in the pgarchive folder is 14MB. Will renaming 000000020000001F00000033.ready archive the remaining files from pg_wal/archive_status? If we have 100GB of files in pg_wal, do we need the same free space in pgarchive? We have one mount point for all files.


Hello @kumarduvvuri ,

You mention that you have replication slots, replication seems to be working fine, and only the archiving process is failing, right? In this case, the archive command you are using is returning an error to Postgres.

What is the archive command you are using? Are you using plain bash commands or any tool? Could you paste it here?

For example, if you are using a “cp” command and the file already exists in the destination, it may fail to copy the file again.
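That failure mode is easy to reproduce in a scratch directory. The sketch below uses made-up paths and the common test ! -f dest && cp src dest pattern (the same shape as typical cp-based archive commands) to show why a leftover file in the destination makes the whole command exit non-zero, which Postgres counts as an archive failure:

```shell
# Simulate a cp-based archive_command in a scratch directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/pg_wal" "$tmp/pgarchive"
printf 'full segment'    > "$tmp/pg_wal/000000020000001F00000033"
printf 'partial segment' > "$tmp/pgarchive/000000020000001F00000033"  # stale copy

# "test ! -f dest && cp src dest" exits non-zero when dest already
# exists, so the segment is never archived and keeps being retried.
if test ! -f "$tmp/pgarchive/000000020000001F00000033" && \
   cp "$tmp/pg_wal/000000020000001F00000033" "$tmp/pgarchive/000000020000001F00000033"; then
  status=archived
else
  status=failed
fi
echo "$status"   # failed, because the destination file already exists
rm -rf "$tmp"
```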


It is a cp command in postgresql.conf.
The below is from the logfile:
[4241] DETAIL: The failed archive command was: test ! -f /local/pgsql/13.2/pgbackup/pgarchive/000000020000001F00000033 && cp pg_wal/000000020000001F00000033 /local/pgsql/13.2/pgbackup/pgarchive/000000020000001F00000033
[4241] WARNING: archiving write-ahead log file “000000020000001F00000033” failed too many times, will try again later

The file exists with 16MB in pg_wal:
pgdata]$ ls pg_wal/000000020000001F00000033

A file with 14MB exists in pgbackup/pgarchive:
pgdata]$ ls /local/pgsql/13.2/pgbackup/pgarchive/000000020000001F00000033

The .ready file 000000020000001F00000033.ready exists in pg_wal/archive_status.

Does renaming the partial 14MB file in /local/pgsql/13.2/pgbackup/pgarchive/ resolve the issue?


Hello @kumarduvvuri

The first part of your archive command checks whether the file already exists in the destination folder:

This test will fail because the file indeed exists in the folder /local/pgsql/13.2/pgbackup/pgarchive/. You can move the file “000000020000001F00000033” from the archive folder /local/pgsql/13.2/pgbackup/pgarchive to another folder (for example, your /home folder); that should solve this issue and let the archive command continue.
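A minimal sketch of that fix in a scratch directory (hypothetical paths): move the stale partial copy aside rather than deleting it, and the next retry of the same test ! -f ... && cp ... command then succeeds:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/pg_wal" "$tmp/pgarchive" "$tmp/stash"
printf 'full segment'    > "$tmp/pg_wal/000000020000001F00000033"
printf 'partial segment' > "$tmp/pgarchive/000000020000001F00000033"  # stale 14MB copy

# Move the stale copy out of the way (keep it, in case it's needed later).
mv "$tmp/pgarchive/000000020000001F00000033" "$tmp/stash/"

# The archiver's next retry of the same command now succeeds.
if test ! -f "$tmp/pgarchive/000000020000001F00000033" && \
   cp "$tmp/pg_wal/000000020000001F00000033" "$tmp/pgarchive/000000020000001F00000033"; then
  retry=archived
else
  retry=failed
fi
echo "$retry"   # archived
rm -rf "$tmp"
```

On the real server, the archiver retries on its own, so once the stale file is moved the backlog of .ready segments should start draining without a restart.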


@charly.batista Renaming the file /local/pgsql/13.2/p/000000020000001F00000033 fixed the issue in the lab environment. We require the same amount of free space in pgbackup/pgarchive as the pending files in pg_wal.
Noticed another thing, as an option 2, if we do not have pg_wal space:
a) after running checkpoint a few times on the primary, used pg_controldata -D /local/pgsql/13.2/pgdata
b) cleaned up files before the Latest checkpoint’s REDO WAL file from pg_wal using pg_archivecleanup -d /local/pgsql/13.2/pgdata/pg_wal, leaving around 100 files before the checkpoint
c) noticed the ~100 leftover files from before pg_archivecleanup are not getting cleaned up; no issues with archiving new files
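For anyone following along, steps a)-c) can be sketched roughly as below. Paths are the examples from this thread, the awk extraction of the REDO file name is my own assumption about the pg_controldata output format, and this needs the cluster's binaries and data directory, so treat it as a sketch and dry-run first rather than pasting blindly:

```shell
# Example paths from this thread; adjust to your environment.
PGDATA=/local/pgsql/13.2/pgdata

# a) After a few CHECKPOINTs on the primary, find the latest
#    checkpoint's REDO WAL file from pg_controldata output.
redo_wal=$(pg_controldata -D "$PGDATA" | awk '/REDO WAL file/ {print $NF}')
echo "Latest checkpoint's REDO WAL file: $redo_wal"

# b) Dry run first: -n only prints the segments that would be removed.
pg_archivecleanup -n "$PGDATA/pg_wal" "$redo_wal"

# c) Actually remove segments older than the REDO file (-d is verbose).
pg_archivecleanup -d "$PGDATA/pg_wal" "$redo_wal"
```

Note that pg_archivecleanup removes only segments logically older than the file you name, which matches what you saw: files at or after the checkpoint's REDO segment are left in place.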

Thanks for helping
