It’s good to keep an eye on the node being shut down through the MySQL Error Log (tail -f it). The init script can give you a timeout, but that’s just the init script’s timeout; most of the time the shutdown process is still running behind the scenes.
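For instance, a minimal way to follow that from the shell (the error log path below is an assumption; check the log_error variable for the real location on your setup):

tail -f /var/log/mysqld.log &   # follow the shutdown progress in the error log
service mysql stop              # the init script may time out, mysqld keeps flushing in the background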
BTW, let’s organize this thread and check what’s really going on. I’ve got a three-node Galera Cluster running here on my side. The version information for what I’ve been running is below:
+------------------------+----------------+
| variable_name          | variable_value |
+------------------------+----------------+
| WSREP_PROTOCOL_VERSION | 6              |
| WSREP_PROVIDER_VERSION | 3.8(rf6147dd)  |
+------------------------+----------------+
mysqld Ver 5.6.21-70.1-56 for Linux on x86_64 (Percona XtraDB Cluster (GPL), Release rel70.1, Revision 938, WSREP version 25.8, wsrep_25.8.r4150)
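If you want to pull the same information on your side, something along these lines should do it (the information_schema query is just one convenient way to fetch those two status variables):

mysql -e "SELECT VARIABLE_NAME, VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS WHERE VARIABLE_NAME IN ('WSREP_PROTOCOL_VERSION','WSREP_PROVIDER_VERSION');"
mysqld --version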
After some tests, I’d say that, for the version I’m using, it’s crystal clear that the gvwstate.dat file only survives a node/cluster crash. Even then, that is not a guarantee that the cluster can be brought back online without bootstrapping again. After a clean shutdown the file does not survive and the cluster must be bootstrapped, which is a little bit weird if we recap the docs. All the cluster’s nodes have pc.recovery at its default (true), and no other configuration was added to wsrep_provider_options in my.cnf.
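To make the setup explicit, this is the kind of configuration and restart sequence I mean. The pc.recovery=TRUE line is redundant since it’s already the default, it just documents the intent, and bootstrap-pxc is the action shipped with the PXC 5.6 init script:

[mysqld]
wsrep_provider_options="pc.recovery=TRUE"

# after a full clean shutdown of the cluster, the first node must be bootstrapped:
service mysql bootstrap-pxc
# the remaining nodes can then join normally:
service mysql start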
I’m not sure if I have inconsistencies among the cluster’s nodes, and because of that I’m going to keep investigating this problem.
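One quick check I’ll run is to compare the cluster state UUID and the last committed seqno across the three nodes; if any node reports a different wsrep_cluster_state_uuid, there’s a real inconsistency (the node hostnames below are placeholders):

for node in node1 node2 node3; do
  mysql -h $node -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_state_uuid','wsrep_last_committed');"
done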