MySQL data directory move

We’ve recently built a 3-node XtraDB cluster and got all nodes online and syncing. My issue now is that I want to move the datadir from the default location to a new RAID disk set. I’ve followed the link below for the process, which has worked for previous builds, yet with this new cluster it does not. I’m not getting much info as to why the mysql@bootstrap service is failing to start, so I will provide as much as I can. If you know of something else you’d like to see, please let me know.

mysql@bootstrap.service - Percona XtraDB Cluster with config /etc/default/mysql.bootstrap
Loaded: loaded (/lib/systemd/system/mysql@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-04-28 15:50:56 UTC; 3s ago
Process: 731630 ExecStartPre=/usr/bin/mysql-systemd start-pre (code=exited, status=0/SUCCESS)
Process: 731667 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0>
Process: 731669 ExecStartPre=/bin/sh -c VAR=bash /usr/bin/mysql-systemd galera-recovery; [ $? -eq 0 ] && syste>
Process: 731717 ExecStart=/usr/sbin/mysqld $EXTRA_ARGS $_WSREP_START_POSITION (code=exited, status=1/FAILURE)
Process: 731720 ExecStopPost=/usr/bin/mysql-systemd stop-post (code=exited, status=0/SUCCESS)
Main PID: 731717 (code=exited, status=1/FAILURE)
Status: "Server startup in progress"

Starting Percona XtraDB Cluster with config /etc/default/mysql.bootstrap...
mysql@bootstrap.service: Main process exited, code=exited, status=1/FAILURE
WARNING: mysql pid file /var/run/mysqld/mysqld.pid empty or not readable
WARNING: mysql may be already dead
mysql@bootstrap.service: Failed with result 'exit-code'.
Failed to start Percona XtraDB Cluster with config /etc/default/mysql.bootstrap.

I have chowned both the old and new locations to mysql:mysql and did a stare-and-compare of the individual file permissions. Any help you can provide is much appreciated.
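For reference, the stare-and-compare can be automated; this is a sketch assuming the paths from this thread and GNU find:

```shell
# Dump mode/owner/group for every file under each datadir, then diff the lists.
(cd /var/lib/mysql && find . -printf '%M %u %g %p\n' | sort) > /tmp/old.perms
(cd /mnt/md0/mysql && find . -printf '%M %u %g %p\n' | sort) > /tmp/new.perms
diff /tmp/old.perms /tmp/new.perms && echo "permissions match"
```

Any line that shows up in the diff is a file whose mode or ownership differs between the two locations.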

Something I had also attempted was moving the datadir back to the original location, /var/lib/mysql, which resulted in the same issue: the mysql service is unable to start.

You need to look at MySQL’s error log for more information. Did you edit the my.cnf and reconfigure the correct parameters for moving the datadir?

Hi MatthewB again :).

I was trying to look at the logs, however mysql doesn’t appear to be generating anything when I attempt to start the service. I didn’t update the /etc/mysql/my.cnf file itself, since it just includes /etc/mysql/mysql.conf.d/mysqld.cnf, which is where I updated datadir=.
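When mysqld dies before it can write its own log file, there are still a couple of places to look. A sketch (service name taken from the status output above):

```shell
# systemd captures mysqld's stderr in the journal even when no log file is written
journalctl -u mysql@bootstrap.service --no-pager -n 50

# Ask mysqld where it would write its error log under the current config
mysqld --verbose --help 2>/dev/null | grep -E '^log-error'
```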

my.cnf content
!includedir /etc/mysql/mysql.conf.d/
!includedir /etc/mysql/conf.d/

mysqld.cnf content
[mysqld]
server-id=1
#datadir=/var/lib/mysql   # original
datadir=/mnt/md0/mysql    # new mount location

Hmm. Well, that’s all you should need to do:

  1. Stop mysql
  2. Move the data to new location
  3. Fix ownerships/privs of new location
  4. Update my.cnf with new location
  5. Start mysql
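As a shell sketch of those steps, assuming the paths used earlier in this thread (adjust to your mounts):

```shell
NEW=/mnt/md0/mysql

systemctl stop mysql                 # 1. stop mysql
cp -a /var/lib/mysql/. "$NEW"/       # 2. copy the data (the /. copies directory contents)
chown -R mysql:mysql "$NEW"          # 3. fix ownership of the new location
# 4. set datadir=/mnt/md0/mysql in /etc/mysql/mysql.conf.d/mysqld.cnf
systemctl start mysql                # 5. start mysql
```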
Yeah, that is what I did, and it has worked before. I guess I may need to just throw this server away and, before joining it to a cluster or creating a new cluster, move the mysql data/config first, then build the new cluster.

Quick side question: is it possible to update the /etc/default/mysql.bootstrap file and remove this option from it? EXTRA_ARGS=" --wsrep-new-cluster "

Or should I just ignore that and start over with a new cluster? This is a new cluster, so I have the flexibility to build/destroy as needed right now.

You should not need to take such drastic steps. It’s probably something simple you are overlooking.

Don’t do that. The mysql.bootstrap file is only used when you start in bootstrap mode, and this parameter is needed to tell MySQL to create a new cluster (ex: systemctl start mysql@bootstrap).

If you already have a cluster running (ie: other nodes online and connected), you can simply delete this node’s datadir, set the parameters and start mysql normally (don’t bootstrap). This should cause this node to join the cluster and do an SST into the new datadir.
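Roughly, on the node being rebuilt (destructive for this node only; assumes the other nodes are up and uses the new-datadir path from this thread):

```shell
systemctl stop mysql
# Wipe only THIS node's datadir -- the cluster rebuilds it via SST on join
rm -rf /mnt/md0/mysql/*
chown mysql:mysql /mnt/md0/mysql
systemctl start mysql        # normal start -- do NOT use mysql@bootstrap here
```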

Ok, so I did as you suggested and repointed back to the original location. That worked once I copied the matching .pem files back into the /var/lib/mysql folder. Now the 3 nodes are back up and communicating.

I’m still not 100% sure what got missed, but I will see if I can repeat the process and check whether it fails again.

One last question I am not clear on: the *.pem file locations and the proper config file to place them in, as I am wondering if this may have been the issue. I placed the path info in the /etc/mysql/mysql.conf.d/mysqld.cnf file with all the other options that have been set for the cluster, but that doesn’t seem to work. Permissions have also been set to match what is on the default location.

[mysqld]
wsrep_provider_options="socket.ssl_key=/etc/mysql/certs/server-key.pem;socket.ssl_cert=/etc/mysql/certs/server-cert>

[sst]
encrypt=4
ssl-key=/etc/mysql/certs/server-key.pem
ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/server-cert.pem

I was able to move everything to a different location and now have the mysql services running. Thanks again for your help :)

That should be all the config you need to have SSL everywhere.
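If SSL problems come back, one way to rule out a bad cert/key pair at those paths is with openssl (a sketch; paths taken from the [sst] section above):

```shell
CERT=/etc/mysql/certs/server-cert.pem
KEY=/etc/mysql/certs/server-key.pem
CA=/etc/mysql/certs/ca.pem

# The cert and key belong together if their public-key moduli match
cert_md5=$(openssl x509 -noout -modulus -in "$CERT" | openssl md5)
key_md5=$(openssl rsa -noout -modulus -in "$KEY" | openssl md5)
[ "$cert_md5" = "$key_md5" ] && echo "key/cert match" || echo "MISMATCH"

# The server cert should also verify against the CA
openssl verify -CAfile "$CA" "$CERT"
```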
