Percona XtraDB MySQL Kubernetes Operator uninstallation

Hi there!

Your MySQL Kubernetes Operator seems to have a nice feature set. I tested it in two different namespaces.
Then I tried to make some changes to the configuration (cr.yaml) regarding upgrades and backups, and applied the changes to the wrong installation (wrong namespace).

Now I’m trying to get rid of both installations, but I couldn’t find an uninstallation manual. Are there instructions on how to completely remove the resources related to the XtraDB cluster, along with the Custom Resources?

Thanks,
Kimmo Katajisto

1 Like

Hey @katajistok ,

  • to remove the CR, just execute something like kubectl delete -f deploy/cr.yaml
  • to remove the Operator, execute kubectl delete -f deploy/bundle.yaml
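
For a fuller clean-up the sequence is roughly the following (a sketch only; the PVC label and the secret name are assumptions based on the default deploy files, so adjust them to your setup):

kubectl delete -f deploy/cr.yaml       # removes the PerconaXtraDBCluster CR; the operator then deletes pods and services
kubectl delete -f deploy/bundle.yaml   # removes the operator deployment, RBAC and the CRDs
kubectl delete pvc -l app.kubernetes.io/instance=cluster1 -n <namespace>   # data volumes are not removed automatically; the label is an assumption, check with kubectl get pvc --show-labels
kubectl delete secret my-cluster-secrets -n <namespace>                    # secrets from deploy/secrets.yaml, if no longer needed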

It is probably a good idea to add this to the docs.

2 Likes

Yes, it would be good. I messed up my setup so that ‘kubectl delete’ didn’t help at all. I managed to remove one installation by editing the cluster (‘kubectl edit pxc cluster1 -n pxc’) and removing the finalizers, but with the other cluster that is not helping.

[root@dbaasjump002 deploy]# kubectl edit pxc cluster1 -n pxc
error: perconaxtradbclusters.pxc.percona.com “cluster1” could not be found on the server
The edits you made on deleted resources have been saved to “/tmp/kubectl-edit-107r9.yaml”

Only one cluster is running now, but I would like to clean that one up as well. Any ideas?

[root@dbaasjump002 deploy]# kubectl get pxc --all-namespaces
NAMESPACE NAME ENDPOINT STATUS PXC PROXYSQL HAPROXY AGE
pxc cluster1 cluster1-haproxy.pxc initializing 1 3 5d2h

[root@dbaasjump002 deploy]# kubectl get all -n pxc
No resources found in pxc namespace.

[root@dbaasjump002 deploy]# kubectl get pvc -n pxc
No resources found in pxc namespace.

[root@dbaasjump002 deploy]# kubectl get pv -n pxc
No resources found

1 Like

It seems you removed the Custom Resource Definition before deleting the Custom Resource, so now Kubernetes does not know what to do with this CR.
Installing the CRD again might solve it.
You can do that with kubectl apply -f deploy/crd.yaml
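
If the CR is still stuck in Terminating after that, clearing its finalizers non-interactively should also let it go away; a rough sketch, using the names from your output:

kubectl patch pxc cluster1 -n pxc --type=merge -p '{"metadata":{"finalizers":[]}}'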

As for kubectl get all - it does not show Custom Resources by design.

1 Like

Hello,

I applied crd.yaml and then tried to edit cluster1, but it didn’t work. Then I tried kubectl delete, but it is stuck:

[root@dbaasjump002 deploy]# kubectl apply -f crd.yaml -n pxc
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Warning: Detected changes to resource perconaxtradbclusterbackups.pxc.percona.com which is currently being deleted.
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com unchanged
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com created
The CustomResourceDefinition “perconaxtradbclusters.pxc.percona.com” is invalid: status.storedVersions[1]: Invalid value: “v1-8-0”: must appear in spec.versions
[root@dbaasjump002 deploy]# kubectl get pxc --all-namespaces
NAMESPACE NAME ENDPOINT STATUS PXC PROXYSQL HAPROXY AGE
pxc cluster1 cluster1-haproxy.pxc initializing 1 3 5d3h
[root@dbaasjump002 deploy]# kubectl edit pxc cluster1 -n pxc
error: perconaxtradbclusters.pxc.percona.com “cluster1” could not be found on the server
The edits you made on deleted resources have been saved to “/tmp/kubectl-edit-h0r1k.yaml”
[root@dbaasjump002 deploy]# kubectl delete pxc cluster1 -n pxc --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
perconaxtradbcluster.pxc.percona.com “cluster1” force deleted

First I tested v1.6, then 1.8. There is a notice about an invalid value in status.storedVersions… This might be because I ran kubectl apply against the wrong namespace after the configuration changes.
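
One possible way out of this half-deleted CRD state (a sketch, not verified here, and only safe if the data in that cluster is disposable) would be to clear the CR finalizers, drop the stuck CRDs entirely and re-apply them:

kubectl patch pxc cluster1 -n pxc --type=merge -p '{"metadata":{"finalizers":[]}}'
kubectl delete crd perconaxtradbclusters.pxc.percona.com perconaxtradbclusterbackups.pxc.percona.com
kubectl apply -f crd.yaml   # re-apply the CRDs matching the operator version in use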

1 Like

Because we are just in the test phase now, we will re-deploy the Kubernetes cluster, and that will solve our problem. I’ll post more if I encounter the same situation again.

1 Like

I deployed XtraDB again after re-deploying the Kubernetes cluster, now with a new version (1.9) and some changes noted below, but it didn’t start at all. I’m now trying to remove it with ‘kubectl delete -f cr.yaml’ and ‘kubectl delete -f bundle.yaml’; the result of that is shown under the installation logs.

Installation:
[root@dbaasjump002 deploy]# kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com created

[root@dbaasjump002 deploy]# kubectl create namespace pxc-v-1-9-0-ceph
namespace/pxc-v-1-9-0-ceph created

[root@dbaasjump002 deploy]# kubectl apply -f rbac.yaml -n pxc-v-1-9-0-ceph
role.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
serviceaccount/percona-xtradb-cluster-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator created

[root@dbaasjump002 deploy]# kubectl apply -f operator.yaml -n pxc-v-1-9-0-ceph
deployment.apps/percona-xtradb-cluster-operator created

[root@dbaasjump002 deploy]# kubectl apply -f secrets.yaml -n pxc-v-1-9-0-ceph
secret/my-cluster-secrets created

Changes made to the default cr.yaml of the Percona XtraDB MySQL cluster before creating the cluster (see the excerpt after this list):
pxc, memory request: 1G → 4G (changes how much memory is initially allocated for each MySQL node)
pxc, storage: 6G → 150G (changes how much storage is initially allocated for each MySQL node)
pmm, enabled: false → true (enables Percona Monitoring and Management)
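
In cr.yaml these correspond roughly to the following fields (paths taken from the default cr.yaml and trimmed to the changed values; treat this as a sketch):

spec:
  pxc:
    resources:
      requests:
        memory: 4G          # was 1G
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 150G   # was 6G
  pmm:
    enabled: true           # was false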

[root@dbaasjump002 deploy]# kubectl apply -f cr.yaml -n pxc-v-1-9-0-ceph
perconaxtradbcluster.pxc.percona.com/xtradb001 created

Situation now:
[root@dbaasjump002 deploy]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/xtradb001-ceph-haproxy-0 1/3 CrashLoopBackOff 26 45m
pod/xtradb001-ceph-pxc-0 3/4 CrashLoopBackOff 13 45m
pod/xtradb001-ceph-pxc-1 2/4 CrashLoopBackOff 25 42m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/xtradb001-ceph-haproxy ClusterIP 10.255.145.162 3306/TCP,3309/TCP,33062/TCP,33060/TCP 45m
service/xtradb001-ceph-haproxy-replicas ClusterIP 10.255.192.212 3306/TCP 45m
service/xtradb001-ceph-pxc ClusterIP None 3306/TCP,33062/TCP,33060/TCP 45m
service/xtradb001-ceph-pxc-unready ClusterIP None 3306/TCP,33062/TCP,33060/TCP 45m

NAME READY AGE
statefulset.apps/xtradb001-ceph-haproxy 0/3 45m
statefulset.apps/xtradb001-ceph-pxc 0/1 45m
[root@dbaasjump002 deploy]# kubectl get pxc
NAME ENDPOINT STATUS PXC PROXYSQL HAPROXY AGE
xtradb001-ceph xtradb001-ceph-haproxy.pxc-v-1-9-0-ceph stopping 46m

1 Like

Hi @katajistok, did you deploy the PMM server and set the correct credentials for it?
Please have a look at the following documentation: Monitor with Percona Monitoring and Management (PMM) - Percona Operator for MySQL based on Percona XtraDB Cluster. There you can find step-by-step instructions on how to enable PMM.
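
In short it comes down to something like this in the pmm section of cr.yaml (a sketch based on the 1.9 defaults; check the documentation above for your exact version), plus setting the pmmserver password in deploy/secrets.yaml before applying it:

pmm:
  enabled: true
  serverHost: monitoring-service   # the Service name of your PMM server
  serverUser: admin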

1 Like

Thanks. I was able to get the PMM server working and the clients connected. I deployed the PMM server to the Kubernetes cluster, but I know that this configuration is not supported (the StatefulSet did not work).

1 Like

@katajistok Yes, you are right. We do not officially support this type of installation for now. You can install the PMM server outside Kubernetes in whatever way suits you. Please have a look at the following links:

1 Like

Dear @Sergey_Pronin,

Where can I find these files?

Thanks: Bela

1 Like

Hey @Beci_Roboz ,

which ones?

1 Like

I meant these ones:

deploy/cr.yaml
deploy/bundle.yaml

1 Like

You can find them in our git repo: GitHub - percona/percona-xtradb-cluster-operator: Percona Operator for MySQL based on Percona XtraDB Cluster

Clone the repo:
git clone -b v1.11.0 https://github.com/percona/percona-xtradb-cluster-operator

Go to folder:
cd percona-xtradb-cluster-operator

In deploy/ you will find these files.

2 Likes

Dear @Sergey_Pronin,

I’m watching.

Thanks: Beci

1 Like