Crash recovery - failed cluster restart with missing volume (Kubernetes, Docker, MySQL 5.x)

Hi everyone!
Lately I’ve been playing around with replicated MySQL servers and with Kubernetes. I was trying to “kill and recover” the first node within a Kubernetes StatefulSet, but I couldn’t recover from this scenario:
1. Run a MySQL cluster on 3 Kubernetes nodes (HA)
2. Stop/kill all of the replica nodes
3. Delete the first node’s volume
4. Restart all nodes
Details:
- Docker image: percona/percona-xtradb-cluster:5.7.19, installed on a Kubernetes cluster with Helm
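For reference, the Helm release boils down to a StatefulSet with one PersistentVolumeClaim per pod. The object names below (pxc-0, data-pxc-0, and so on) are only placeholders for whatever the chart actually creates; I’ll reuse them in the steps later in this thread:

```bash
# Standard kubectl commands; only the object names are chart-dependent
kubectl get statefulsets   # e.g. a 3-replica PXC StatefulSet named "pxc"
kubectl get pods           # e.g. pxc-0, pxc-1, pxc-2 (started in order)
kubectl get pvc            # e.g. data-pxc-0, data-pxc-1, data-pxc-2
```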
My question: is it possible to bootstrap all nodes with one volume missing? If yes, how can I achieve this?
If I restart the StatefulSet in Kubernetes, the first node starts from a fresh (empty) data directory, and every node that then connects to it deletes its “old” content and starts mirroring the empty database.
If, however, I start the first node with another node’s volume, it fails with “WSREP: SST failed: 1 (Operation not permitted)” (I couldn’t manage to skip SST).
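For what it’s worth, the only diagnostics I have so far are the pod logs. This is roughly how I look at the failure (the pod name is a placeholder for whatever the chart created):

```bash
# Recreate just the first pod and watch what the Galera/SST layer logs
kubectl delete pod pxc-0                        # the StatefulSet recreates it
kubectl logs -f pxc-0 | grep -i -E 'wsrep|sst'
```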

Thanks for any help. I should mention that I always make automatic backups of my data; this is just a playground for now.

I am not quite sure what you are trying to achieve.
If you delete the volume, then most likely Percona XtraDB Cluster won’t start, as it requires a data directory to work.
If you just want to delete the data from an existing volume, then the Percona XtraDB Cluster node should start and perform SST to repopulate the empty volume, but it still requires a volume where it can store data.

Sorry, you might have misunderstood me.
I want to delete the data inside one volume (not the whole volume). If I delete the first node’s data, SST doesn’t start; instead, the first node spins up normally, as if it were the first startup, and ends up with an empty DB. The other nodes then connect, copy the first node’s data (with SST), and overwrite the valid database with the “most recent” one, which is now the empty database.
I’m wondering: if I lose the first node’s volume (the data inside it), can I still recover the database?

Thanks.
Khaled

Khaled,
I am still not clear on what exactly you are trying to do.
Can you provide your exact steps: how you stop the nodes, how you delete the data, and how you start the cluster?


I’ll try to clarify:
My goal is to test whether, if I lost my first node’s data (e.g. accidentally ran rm on the volume), I would still be able to recover my data.
1. Start 3 nodes with an empty DB (in Kubernetes they start up one by one: first, second, then third)
2. Upload a dummy DB
3. Stop the cluster (in Kubernetes the nodes stop in reverse order: 3rd, 2nd, then 1st)
4. Empty the first node’s volume
5. Start up the cluster again (1st -> 2nd -> 3rd)
What is happening is that when I start up the cluster the second time (step 5), the first node detects an empty volume, so it starts up as a fresh database. The 2nd and 3rd nodes start up, connect to the first node, and overwrite my dummy DB with the “newer version” (which is now the empty DB). Now I have successfully lost all my data. This is my problem.
We would like to use this cluster solution; however, if losing my first node’s volume costs me all the data, then there’s no point in mirroring the DB to multiple nodes.
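To make the steps concrete, here is roughly what I run (again, the StatefulSet/pod/PVC names are placeholders, not my exact commands):

```bash
# 1. Start three nodes; the StatefulSet brings them up in order: pxc-0, pxc-1, pxc-2
kubectl scale statefulset pxc --replicas=3

# 2. Upload a dummy database through any node
kubectl exec pxc-0 -- mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "CREATE DATABASE dummy;"

# 3. Stop the cluster; the StatefulSet scales down in reverse order: pxc-2, pxc-1, pxc-0
kubectl scale statefulset pxc --replicas=0

# 4. Empty the first node's data (not the PVC itself), e.g. by mounting data-pxc-0
#    in a throwaway pod and removing the MySQL datadir contents

# 5. Start the cluster again; pxc-0 now comes up with an empty datadir
kubectl scale statefulset pxc --replicas=3
```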

I hope this clears up my question :slight_smile:
Thanks
Khaled

@Khaled
I understand now. You are correct, this scenario does not work at the moment.
What you have is a full cluster stop, and when you start the nodes, you practically start a new cluster,
and it will start with the empty datadir.
We are working on a tool to handle this automatically, but for now it requires manual intervention.
You need to find the node with the most recent data and start the first node on that volume.
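A rough sketch of that manual step, assuming the default datadir /var/lib/mysql and the placeholder pod names from earlier (adjust everything to your setup; if the pods are not running, do the same through a helper pod that mounts the volumes):

```bash
# 1. Inspect the Galera state file on each surviving volume and compare seqno values;
#    the highest seqno marks the most recent data
kubectl exec pxc-1 -- cat /var/lib/mysql/grastate.dat
kubectl exec pxc-2 -- cat /var/lib/mysql/grastate.dat

# 2. If your Galera version tracks it, mark that copy as safe to bootstrap from
kubectl exec pxc-1 -- sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' \
    /var/lib/mysql/grastate.dat

# 3. Get that data onto the volume the first (bootstrapping) node uses, e.g. by
#    copying it over or rebinding the volume, then start the cluster again; the
#    other nodes, including the emptied one, will rejoin and SST/IST from it
```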

Thanks. How can I migrate it manually? (I don’t think doing it manually is a problem for us.)
When I start the first node with another node’s volume, it fails to start. Maybe I missed something?
(I got the “WSREP: SST failed: 1 (Operation not permitted)” error)
If it helps, I can redo the scenario and copy-paste the log output into a file.
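Is something like this the right direction for the copy? A throwaway pod that mounts the first node’s (emptied) PVC and a surviving node’s PVC and copies the datadir across (the PVC names are my guesses at what the chart created):

```bash
# Sketch only: copy the MySQL datadir from the surviving volume onto the first node's volume
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: datadir-copy
spec:
  restartPolicy: Never
  containers:
  - name: copy
    image: busybox
    command: ["sh", "-c", "rm -rf /target/* && cp -a /source/. /target/"]
    volumeMounts:
    - name: source
      mountPath: /source
      readOnly: true
    - name: target
      mountPath: /target
  volumes:
  - name: source
    persistentVolumeClaim:
      claimName: data-pxc-1   # surviving volume with the most recent data
  - name: target
    persistentVolumeClaim:
      claimName: data-pxc-0   # the first node's emptied volume
EOF
```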
Thanks for your help.