k8s Operator restart after machine failure

Hi,

While testing the k8s operator (with 3 pods each) in my Minikube environment, I had to restart Minikube.
After that the cluster didn't come back up (the pods were stuck in CrashLoopBackOff).

In our current non-k8s environment I made a script to force a bootstrap in such a situation, keeping in mind that data could be lost:
it sets "safe_to_bootstrap" to 1 in grastate.dat and calls "service mysql bootstrap-pxc" on the selected node.
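Roughly, the script boils down to this on the node chosen for bootstrap (a sketch, assuming the default datadir /var/lib/mysql):

sed -i 's/^safe_to_bootstrap.*/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
service mysql bootstrap-pxc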

So I ran a Job that sets this flag on the PVC of the first pod:


apiVersion: batch/v1
kind: Job
metadata:
  name: safe-to-bootstrap
spec:
  template:
    spec:
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: datadir-cluster1-pxc-0
      containers:
        - name: safe-to-bootstrap
          image: busybox
          imagePullPolicy: IfNotPresent
          command:
            - sed
            - -i
            - "s|safe_to_bootstrap.*:.*|safe_to_bootstrap:1|1"
            - /var/lib/mysql/grastate.dat
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysql-data
      restartPolicy: OnFailure
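
I applied the Job and waited for it to finish before touching the pods; something like this (the manifest filename is just what I happened to use):

kubectl apply -f safe-to-bootstrap-job.yaml
kubectl wait --for=condition=complete job/safe-to-bootstrap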

Then I deleted all the "cluster1-pxc-" pods so they would be recreated and the cluster came back up.
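
For completeness, the delete step was roughly this (three PXC pods in my cluster1 setup):

kubectl delete pod cluster1-pxc-0 cluster1-pxc-1 cluster1-pxc-2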

Is there a safer and/or cleaner way to do this?