Testing the Percona XtraDB Cluster Operator with Kind

Kind is an alternative to minikube for running a local Kubernetes cluster. I find it easier to work with and less likely to annoy me by running out of resources in the VM while I’m testing. There is a guide for installing the Percona XtraDB Cluster on minikube using the operator, but it doesn’t work out of the box on kind.

This post describes the two issues I found and how I solved them, in case somebody else runs into them in the future.

(Note that I’ve only tested this with MySQL, but I expect something similar will happen with Postgres.)

There are two issues and one tip:

1) AppArmor configuration on my host was blocking the containers’ access to libpthread.so.

The symptom was that my pods were in an error / respawn / backoff loop.

kubectl logs my-pod

showed the following:

mysqld: error while loading shared libraries: libpthread.so.0

It turns out that my machine’s AppArmor configuration was controlling what could happen inside containers running on it. I didn’t realise it did this, but it does. You can either modify the AppArmor profile for mysqld and reload AppArmor (sketched below), or you can disable AppArmor completely (not recommended). In my case the profile came from having existing percona-server-server packages installed.

This is not a problem in Minikube, as the VM either has no AppArmor config or the config is more permissive.
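For reference, here’s roughly what relaxing the profile looks like on an Ubuntu-style host. This is only a sketch: it assumes the profile lives at /etc/apparmor.d/usr.sbin.mysqld (the path used by the MySQL/Percona server packages) and that apparmor-utils is installed, so adjust for your system.

# Option A: switch the host's mysqld profile to complain mode, so it logs rather than blocks
sudo aa-complain /etc/apparmor.d/usr.sbin.mysqld

# Option B: disable just that profile and unload it from the kernel
sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld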

2) The MySQL instances cannot write to their data directory (so DB initialisation fails).

If you’re getting this error, this applies to you:


mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied) 

Background: The operator provides a volume via a PVC, which is mounted at /var/lib/mysql. You can see that with

kubectl get pod my-pod -o yaml

which shows:


volumeMounts:
- mountPath: /var/lib/mysql
  name: datadir
...
volumes:
- name: datadir
  persistentVolumeClaim:
    claimName: datadir-cluster100-pxc-0
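You can also look at the PVC itself to see which storage class served it and which PersistentVolume it is bound to:

kubectl describe pvc datadir-cluster100-pxc-0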

Importantly though, the containers (correctly) don’t run as root. They run as the user 1001 and the operator enforces a security context:


docker inspect percona/percona-xtradb-cluster-operator:1.2.0-pxc | jq '.[0] | .Config.User'
"1001"

securityContext:
  fsGroup: 1001
  supplementalGroups:
  - 1001

The volume is provided by the standard storage provisioner in Kubernetes. Unfortunately, volumes provided by the standard storage class do not support the fsGroup setting, so the provisioned directory is owned by root and a container running as user 1001 cannot write to it. See https://github.com/kubernetes/kubernetes/issues/2630 for more details. This is not a problem in Minikube, as it can ‘cheat’, and cloud providers can override the standard provisioner with their own storage (e.g. EBS volumes), which can be provisioned with the correct permissions.
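To see what fsGroup is supposed to do on a volume type that does support it, here’s a minimal throwaway pod (the names and image are mine, not part of the operator) using an emptyDir. The volume comes up group-owned by 1001 and writable, which is exactly what the provisioned volumes in Kind were not doing:

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    runAsUser: 1001   # run as a non-root user, like the PXC containers
    fsGroup: 1001     # ask Kubernetes to make volumes group-owned by 1001
  containers:
  - name: check
    image: busybox
    # list the volume's ownership, prove we can write to it, then idle
    command: ["sh", "-c", "ls -ld /data && touch /data/ok && sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: scratch
  volumes:
  - name: scratch
    emptyDir: {}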

For Kind, there’s no way around it yet. I solved this by installing the Rancher Local Path Provisioner:


kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

After that, everything came up as expected.
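One extra thing to check (treat this as an assumption about your setup rather than something spelled out above): the operator’s PVCs need to actually land on the new local-path storage class. If your custom resource doesn’t name a storage class explicitly, you can make local-path the default and demote Kind’s built-in standard class:

kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'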

3) Node anti-affinity

The Minikube instructions suggest modifying the custom resource to remove the node anti-affinity policy. This is because Minikube is a single-node Kubernetes cluster, and the policy prevents Kubernetes from scheduling a pod on the same node as an existing pod. You can do the same thing on Kind, but there's another option: Kind supports the creation of multi-node clusters. To do that, make a file like this, kind-config.yaml:


kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker


Then bring up the cluster with:

kind create cluster --config kind-config.yaml
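Once that finishes, a quick sanity check that all three nodes registered (depending on your kind version, you may first need to point KUBECONFIG at the kubeconfig kind generated):

kind get clusters
kubectl get nodes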