Joining a new node to the cluster

I also noticed Percona is ignoring the temporary directory tmpdir=/db3/tmp and writing files into datadir=/db1 instead, which is filling up my datadir and causing it to run out of space. That might be the cause of some of my issues.
Percona created /db1/sst-xb-tmpdir/ instead of using the tmpdir to create the path /db3/tmp/sst-xb-tmpdir/.


Try adding this to your my.cnf:

[sst]
tmpdir=/db3/tmp
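
For reference, a sketch of how the two settings fit together on this setup (the /db1 and /db3/tmp paths are the ones from this thread; adjust for your own layout). The tmpdir under [mysqld] alone is not enough here, since the SST script reads its own [sst] section:

[mysqld]
datadir=/db1
tmpdir=/db3/tmp

[sst]
tmpdir=/db3/tmp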

Thanks, I just found that in the documentation.


Reviewing my logs, I noticed this message:

xtrabackup: Redo Log Archiving is not set up.
WARNING: unknown option --binlog-info=ON

I don’t have the binlog-info option set in my.cnf.
The only bin- or log-related options I have are the following:

max_binlog_size = 128M
log-bin=binlog
log_slave_updates
binlog_expire_logs_seconds=604800
binlog_format=ROW
log-error=/var/log/mysql/error.log

--binlog-info is an option to xtrabackup, not mysql. Are you sure you’ve installed xtrabackup v8?
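
A quick way to double-check which xtrabackup binary is installed and where it lives (assuming it is on the PATH):

which xtrabackup
xtrabackup --version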


Yes.

xtrabackup -v
xtrabackup: recognized server arguments: --server-id=5 --tmpdir=/db3/tmp --datadir=/db1 --innodb_buffer_pool_size=1G --log_bin=binlog
xtrabackup version 8.0.26-18 based on MySQL server 8.0.26 Linux (x86_64) (revision id: 4aecf82)

Steps to this point: on node1, installed Percona Server 5.x and restored the data to its directory, then removed Percona Server 5.x, installed XtraDB Cluster 8.x, edited my.cnf, and started the server. On node2, installed Percona 8.x and edited my.cnf to join node1. Starting node2 fails with errors.
The error WARNING: unknown option --binlog-info=ON is coming from the innobackup.backup.log file on the donor.
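To watch that log live on the donor during the next SST attempt (the path here assumes the donor keeps it under its /db1 datadir; adjust if it lives elsewhere):

tail -f /db1/innobackup.backup.log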

Streaming ./projects/data_stats.ibd
log scanned up to (10790818701060)
...
xtrabackup: Error writing file '<unopen fd>' (OS errno 32 - Broken pipe)

It seems to always fail on that database table. It is a large table.
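
As a rough size check, that table’s tablespace file can be inspected on the donor (the ./projects/data_stats.ibd path comes from the log excerpt above, assuming it is relative to the /db1 datadir):

ls -lh /db1/projects/data_stats.ibd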

Is there a way to view the innobackup.backup.log on the joiner?


Is there any way to control the memory or the size of the data transfer when bringing a node online? One of my tables is large and I think that is the issue. When bringing the node online it fails on the same table.


“large” is quite subjective. I’ve personally worked on PXCs with single tables around 1TB in size without any SST issues.

Yes, on the joiner go into $datadir/.sst/
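
With the datadir from this thread, that would be something like the following on the joiner while (or shortly after) the SST runs; the exact log file names inside can vary by version:

ls -la /db1/.sst/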


I’m not sure what you mean by “go into $datadir/.sst/”.
What I have done on the joiner is:

[sst]
inno-apply-opts="--use-memory=500M"

If that is what you meant, it failed as well. My thought was that if I could control the memory or the size of the data transfer, maybe the connection would not time out. Is there a sizing option?
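
For reference, this is the joiner-side [sst] section from this thread with both settings combined and the closing quote added (a sketch using only the values mentioned above):

[sst]
tmpdir=/db3/tmp
inno-apply-opts="--use-memory=500M"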
