
Cluster switches to non-primary after graceful shutdown of node

bart.haalstra@prowise.com
Hello,

We have a Percona XtraDB Cluster running on our servers. It consists of three servers in the Netherlands (10.1.0.5, 10.1.0.6, 10.1.0.7) and two in the US (10.5.0.4, 10.5.0.5).
Normally it runs fine. A week ago our hosting provider updated the servers and they needed a reboot. Due to an error, the disks that contained the database didn't mount. The rebooted servers did an SST, and no one noticed that the databases were on the wrong disk. After a while we noticed that two of the five servers (10.1.0.5, 10.1.0.6) no longer worked, because the main disk (which now contained the full database) was full. On those two servers I remounted the correct disk and waited for a full SST.
Everything was running fine until now.
Because the other three servers also needed their disks remounted, I shut down the MySQL server on 10.1.0.7 (15:56:12). Somehow the node did not get removed from the server pool (503f2a17) and the entire cluster switched to NON-PRIMARY. When I tried to restart the MySQL server after remounting the disk, it failed to start (15:56:40). The node got added to the server pool again (9f79f8cb) and was also never removed.
I've got no clue why this happened. Did I do something wrong? How can I prevent this in the future?
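
For reference, my understanding of the standard way to inspect the cluster state (just the usual Galera/wsrep status variables, not output from this incident) is something like:

    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';       -- should report 'Primary'
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';         -- should report 5 with all nodes up
    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- should report 'Synced'

and, if the surviving nodes ever get stuck in non-Primary again, forcing a new Primary Component on the most up-to-date node with:

    SET GLOBAL wsrep_provider_options = 'pc.bootstrap=true';

Is that the right way to recover, or is there something I should check before gracefully stopping a node?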

Log of 10.1.0.7
http://pastebin.com/kcSyc46h

Log of 10.1.0.6
http://pastebin.com/2z82MWci