Streaming backups with xtrabackup failing mid-transfer due to broken pipe errors

Hello Percona Community,

I’m running Percona MySQL 5.7.36 with xtrabackup 2.4.29. I’m attempting to stream backups from one host to a remote server using nc and xbstream.

On the source host, I run:

xtrabackup --backup \
  --user=host_user \
  --password=host_pwd \
  --stream=xbstream \
  --parallel=2 \
  --use-memory=2G 2>> /data/archive/ncat_backup_$(date +%F).log | nc remote_server_ip 9999

On the remote server, I run:

nc -l 9999 | xbstream -x -C /data
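For reference, an ssh-based variant of the same pipeline (a sketch I have not verified on this setup; user, host, and paths are placeholders carried over from the commands above) would replace the raw nc transport with a connection that sends keepalives, so a brief network stall is less likely to drop the reader and trigger a broken pipe on the writing side:

```shell
# Sketch: same backup stream, but over ssh instead of nc.
# ServerAliveInterval/ServerAliveCountMax make ssh probe the peer so a
# brief stall does not silently kill the connection (a dead reader is
# what produces "Broken pipe" errors on the writing side).
xtrabackup --backup \
  --user=host_user \
  --password=host_pwd \
  --stream=xbstream \
  --parallel=2 \
  --use-memory=2G \
  2>> /data/archive/ncat_backup_$(date +%F).log \
| ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=6 \
    backup_user@remote_server_ip 'xbstream -x -C /data'
```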

The stream starts fine and gets to about 30% before failing with repeated errors like:

260321 01:56:23 >> log scanned up to (323904557000424)
260321 01:56:24 >> log scanned up to (323904557802303)
260321 01:56:25 >> log scanned up to (323904558587381)
xtrabackup: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xb_stream_write_data() failed.
xtrabackup: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xb_stream_write_data() failed.
xtrabackup: Error: write to logfile failed
xtrabackup: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xb_stream_write_data() failed.
xtrabackup: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
[02] xtrabackup: Error: xtrabackup_copy_datafile() failed.
[02] xtrabackup: Error: failed to copy datafile.
xtrabackup: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
[01] xtrabackup: Error: xtrabackup_copy_datafile() failed.
[01] xtrabackup: Error: failed to copy datafile.
xtrabackup: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xtrabackup: Error: xtrabackup_copy_logfile() failed.
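For context on the error itself: Errcode 32 (EPIPE, "Broken pipe") means the reading end of the pipe, here nc and the TCP connection behind it, went away while xtrabackup was still writing. A minimal bash illustration of the same failure mode:

```shell
# "yes" writes forever; "head" reads one line and exits, closing the pipe.
# The next write by "yes" then fails with SIGPIPE (signal 13), which bash
# reports as exit status 128 + 13 = 141 -- the same EPIPE condition that
# xtrabackup reports as Errcode 32.
yes | head -n 1 > /dev/null
echo "writer exit status: ${PIPESTATUS[0]}"   # prints: writer exit status: 141
```

So the interesting question is why the receiving side (nc or the connection) is closing first, not anything about the data being written.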

There is sufficient storage space on the remote server, but the stream still breaks.
Has anyone encountered this issue before? Are there recommended optimizations for streaming backups with nc and xbstream?

Any guidance on stabilizing the stream or best practices for large backups would be greatly appreciated.
The database is about 5 TB.

@shevy

There are some reported issues; however, the version you're using, PXB 2.4.29, already has the fix.

You can try the multi-threading and compression options as well:

--parallel=4

--compress --compress-threads=4
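Putting those options together, an example source-side invocation could look like this (credentials, paths, and the remote host are placeholders carried over from the original post; note that the compressed .qp files need qpress available for decompression before the prepare step):

```shell
# Example only: stream with more parallelism and compression, which
# shrinks the amount of data crossing the network.
xtrabackup --backup \
  --user=host_user \
  --password=host_pwd \
  --stream=xbstream \
  --parallel=4 \
  --compress --compress-threads=4 \
  --use-memory=2G \
| nc remote_server_ip 9999

# Receiving side is unchanged; afterwards, decompress before preparing:
# nc -l 9999 | xbstream -x -C /data
# xtrabackup --decompress --target-dir=/data   # requires qpress in PATH
```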

Did you check whether the network and OS resources are already saturated? Can you please share the OS/kernel logs as well?

What happens if you try at some other time? Did you try running it again?
