Hi,
I have followed the instructions to install the operator on Kubernetes (Generic Kubernetes installation - Percona Operator for MySQL based on Percona XtraDB Cluster).
I am able to run the operator pod, but the cluster pod fails to come up. Please let me know how to resolve it.
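(For reference, the generic installation boils down to applying the operator bundle and then the cluster custom resource; the commands below are only a rough sketch, assuming a checkout of the percona-xtradb-cluster-operator repository at the matching tag and the pxc namespace used in this thread.)
kubectl create namespace pxc
kubectl apply -f deploy/bundle.yaml -n pxc
kubectl apply -f deploy/cr.yaml -n pxc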
>kubectl get pods
NAME READY STATUS RESTARTS AGE
percona-xtradb-cluster-operator-5c95894bbf-hwv5h 1/1 Running 0 122m
test-cluster-pxc-0 2/3 Running 0 13m
>kubectl describe pod/test-cluster-pxc-0
Events:
Type Reason Age From Message
Warning FailedScheduling 7m28s (x3 over 7m39s) default-scheduler 0/20 nodes are available: 20 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 7m24s default-scheduler Successfully assigned pxc/test-cluster-pxc-0 to 10.47.244.15
Normal SuccessfulAttachVolume 7m24s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ab3faef3-3a15-4592-8d83-774212a9c926"
Normal Pulling 7m6s kubelet Pulling image "percona/percona-xtradb-cluster-operator:1.12.0"
Normal Pulled 7m6s kubelet Successfully pulled image "percona/percona-xtradb-cluster-operator:1.12.0" in 693.444389ms
Normal Created 7m6s kubelet Created container pxc-init
Normal Pulling 7m5s kubelet Pulling image "percona/percona-xtradb-cluster-operator:1.12.0-logcollector"
Normal Started 7m5s kubelet Started container pxc-init
Normal Created 7m4s kubelet Created container logs
Normal Pulled 7m4s kubelet Successfully pulled image "percona/percona-xtradb-cluster-operator:1.12.0-logcollector" in 663.401732ms
Normal Created 7m3s kubelet Created container logrotate
Normal Started 7m3s kubelet Started container logs
Normal Pulling 7m3s kubelet Pulling image "percona/percona-xtradb-cluster-operator:1.12.0-logcollector"
Normal Pulled 7m3s kubelet Successfully pulled image "percona/percona-xtradb-cluster-operator:1.12.0-logcollector" in 645.501357ms
Normal Started 7m2s kubelet Started container logrotate
Normal Pulling 7m2s kubelet Pulling image "percona/percona-xtradb-cluster:8.0.29-21.1"
Normal Pulled 7m2s kubelet Successfully pulled image "percona/percona-xtradb-cluster:8.0.29-21.1" in 646.002133ms
Normal Created 7m2s kubelet Created container pxc
Normal Started 7m1s kubelet Started container pxc
Warning DNSConfigForming 5m52s (x6 over 7m7s) kubelet Search Line limits were exceeded, some search paths have been omitted, the applied search line is: pxc.svc.cluster.local svc.cluster.local cluster.local
Warning Unhealthy 5m17s (x3 over 6m17s) kubelet Readiness probe failed: ERROR 2003 (HY000): Can't connect to MySQL server on '10.20.80.134:33062' (111)
- [[ '' == \P\r\i\m\a\r\y ]]
- exit 1
Warning Unhealthy 112s kubelet Liveness probe failed: ERROR 2003 (HY000): Can't connect to MySQL server on '10.20.80.134:33062' (111)
- [[ -n '' ]]
- exit 1
>kubectl logs pod/test-cluster-pxc-0
Percona XtraDB Cluster: Finding peers
2023/01/19 01:06:22 Peer finder enter
2023/01/19 01:06:22 Determined Domain to be pxc.svc.cluster.local
2023/01/19 01:06:22 lookup test-cluster-pxc-unready on 10.21.0.10:53: no such host
2023/01/19 01:06:23 lookup test-cluster-pxc-unready on 10.21.0.10:53: no such host
Hi @Ravi_Kumar_Pokala ,
As I can see, you have some issues connected with DNS. We need to know more about your k8s setup, e.g. which k8s version you use. So, please tell us more about your k8s deployment.
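(A quick way to verify in-cluster DNS is to resolve the unready service from a throwaway pod; busybox is just one image that ships nslookup, adjust to whatever your cluster allows:)
kubectl run -it --rm dns-test --restart=Never --image=busybox:1.36 -n pxc -- nslookup test-cluster-pxc-unready.pxc.svc.cluster.local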
@Slava_Sarzhan : We are using a vendor-managed (Platform9) Kubernetes deployment. Please find the version information below.
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:02:01Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Cluster creation fails with the below error:
>kubectl get pods
NAME READY STATUS RESTARTS AGE
percona-xtradb-cluster-operator-5c95894bbf-7g52j 1/1 Running 0 5m24s
test-cluster-haproxy-0 0/2 Pending 0 2m26s
Operator logs
k logs pod/percona-xtradb-cluster-operator-5c95894bbf-7g52j
2023-02-04T01:48:40.334Z ERROR Reconciler error {"controller": "perconaxtradbcluster-controller", "object": {"name":"test-cluster","namespace":"pxc"}, "namespace": "pxc", "name": "test-cluster", "reconcileID": "75d74f89-417d-4015-b720-74714e3725ae", "error": "PodDisruptionBudget for test-cluster-haproxy: reconcile pdb: get object: no matches for kind "PodDisruptionBudget" in version "policy/v1"", "errorVerbose": "no matches for kind "PodDisruptionBudget" in version "policy/v1"\nget object\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).createOrUpdate\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:1299\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).reconcilePDB\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:945\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).deploy\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:685\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:312\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:234\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1594\nreconcile 
pdb\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).reconcilePDB\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:945\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).deploy\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:685\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:312\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:234\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1594\nPodDisruptionBudget for test-cluster-haproxy\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).deploy\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:687\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:312\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:234\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1594"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:326
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:273
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:234
@Ravi_Kumar_Pokala PXCO v1.12.0 does not support K8S 1.20. Have a look at our official doc:
It is connected with the PodDisruptionBudget API version. We started to use the PDB policy/v1 API, which requires k8s >= 1.21, in the PXCO v1.12.0 release.
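(You can check which PodDisruptionBudget API versions your cluster serves with:
kubectl api-versions | grep policy
kubectl api-resources | grep -i poddisruptionbudget
On Kubernetes 1.20 this typically lists only policy/v1beta1, while PXCO v1.12.0 expects policy/v1.)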
@Slava_Sarzhan : I have tried with PXCO v1.11.0 and now I am running into another issue: HAProxy stays in the Pending state.
>kubectl get pods
NAME READY STATUS RESTARTS AGE
mytest-cluster-haproxy-0 0/3 Pending 0 14m
mytest-cluster-pxc-0 3/3 Running 0 14m
mytest-cluster-pxc-1 3/3 Running 0 11m
mytest-cluster-pxc-2 3/3 Running 0 8m44s
percona-xtradb-cluster-operator-655769795d-h8q67 1/1 Running 0 112m
>kubectl get sts
NAME READY AGE
mytest-cluster-haproxy 0/3 14m
mytest-cluster-pxc 3/3 14m
>kubectl describe pod/mytest-cluster-haproxy-0
Name: mytest-cluster-haproxy-0
Namespace: pxc
Priority: 0
Service Account: default
Node:
Labels: app.kubernetes.io/component=haproxy
app.kubernetes.io/instance=mytest-cluster
app.kubernetes.io/managed-by=percona-xtradb-cluster-operator
app.kubernetes.io/name=percona-xtradb-cluster
app.kubernetes.io/part-of=percona-xtradb-cluster
controller-revision-hash=mytest-cluster-haproxy-5df959fb5b
rack=rack-22
statefulset.kubernetes.io/pod-name=mytest-cluster-haproxy-0
Annotations: iam.amazonaws.com/role: role-arn
percona.com/configuration-hash: f0cf38deede8503f8be79b49927dba41
Status: Pending
IP:
IPs:
Controlled By: StatefulSet/mytest-cluster-haproxy
Containers:
haproxy:
Image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
Ports: 3306/TCP, 3307/TCP, 3309/TCP, 33062/TCP, 33060/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Limits:
cpu: 700m
memory: 1G
Requests:
cpu: 600m
memory: 1G
Liveness: exec [/usr/local/bin/liveness-check.sh] delay=120s timeout=5s period=30s #success=1 #failure=4
Readiness: exec [/usr/local/bin/readiness-check.sh] delay=135s timeout=1s period=5s #success=1 #failure=3
Environment Variables from:
mytest-cluster-env-vars-haproxy Secret Optional: true
Environment:
PXC_SERVICE: mytest-cluster-pxc
LIVENESS_CHECK_TIMEOUT: 5
READINESS_CHECK_TIMEOUT: 1
Mounts:
/etc/haproxy-custom/ from haproxy-custom (rw)
/etc/haproxy/pxc from haproxy-auto (rw)
/etc/mysql/haproxy-env-secret from mytest-cluster-env-vars-haproxy (rw)
/etc/mysql/mysql-users-secret from mysql-users-secret-file (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9s5n6 (ro)
pxc-monit:
Image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
Port:
Host Port:
Args:
/usr/bin/peer-list
-on-change=/usr/bin/add_pxc_nodes.sh
-service=$(PXC_SERVICE)
Limits:
cpu: 600m
memory: 2G
Requests:
cpu: 500m
memory: 1G
Environment Variables from:
mytest-cluster-env-vars-haproxy Secret Optional: true
Environment:
PXC_SERVICE: mytest-cluster-pxc
Mounts:
/etc/haproxy-custom/ from haproxy-custom (rw)
/etc/haproxy/pxc from haproxy-auto (rw)
/etc/mysql/haproxy-env-secret from mytest-cluster-env-vars-haproxy (rw)
/etc/mysql/mysql-users-secret from mysql-users-secret-file (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9s5n6 (ro)
my-sidecar-1:
Image: busybox
Port:
Host Port:
Command:
/bin/sh
Args:
-c
while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;
Limits:
cpu: 200m
memory: 200M
Requests:
cpu: 100m
memory: 100M
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9s5n6 (ro)
Volumes:
haproxy-custom:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mytest-cluster-haproxy
Optional: true
haproxy-auto:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
mysql-users-secret-file:
Type: Secret (a volume populated by a Secret)
SecretName: internal-mytest-cluster
Optional: false
mytest-cluster-env-vars-haproxy:
Type: Secret (a volume populated by a Secret)
SecretName: mytest-cluster-env-vars-haproxy
Optional: true
default-token-9s5n6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9s5n6
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
@Ravi_Kumar_Pokala Do you see any events? You need to understand why this pod is in the Pending state. Also, the output of this command can be helpful:
kubectl get events --sort-by=.metadata.creationTimestamp -w
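(A narrower variant filters the events for the stuck pod itself; the namespace and pod name below are the ones from this thread:)
kubectl get events -n pxc --field-selector involvedObject.name=mytest-cluster-haproxy-0 --sort-by=.metadata.creationTimestamp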
@Slava_Sarzhan : Below is the event log related to cluster-haproxy-0; it doesn't show any issues as such. I don't see a PVC created for HAProxy either. Let me know if you need any other info to find the cause.
0s Normal SuccessfulCreate statefulset/mytest-cluster-haproxy create Pod mytest-cluster-haproxy-0 in StatefulSet mytest-cluster-haproxy successful
HAProxy is stateless. We do not have PVCs for these pods. In your output of the following command
>kubectl describe pod/mytest-cluster-haproxy-0
I do not see any info in the Events: section. Did you remove it? If yes, please resend it.
@Slava_Sarzhan : Nothing is captured in Events: it shows <none>.
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
@Ravi_Kumar_Pokala try to check the output of the following command
kubectl describe sts mytest-cluster-haproxy
It can help you to understand what is wrong. Also, please provide your CR.
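(The CR applied to the cluster can also be dumped directly; pxc is the short name the operator's CRD registers for PerconaXtraDBCluster, and the output file name is arbitrary:)
kubectl get pxc mytest-cluster -n pxc -o yaml > mytest-cr-from-cluster.yaml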
@Slava_Sarzhan : I checked the 'mytest-cluster-haproxy' statefulset description; it doesn't provide enough information about what the issue is.
Please find the description below. I have attached the cr.yaml for your reference.
kubectl describe sts mytest-cluster-haproxy
Name: mytest-cluster-haproxy
Namespace: pxc
CreationTimestamp: Tue, 07 Feb 2023 11:31:51 -0800
Selector: app.kubernetes.io/component=haproxy,app.kubernetes.io/instance=mytest-cluster,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
Labels:
Annotations: percona.com/last-config-hash:
eyJyZXBsaWNhcyI6Mywic2VsZWN0b3IiOnsibWF0Y2hMYWJlbHMiOnsiYXBwLmt1YmVybmV0ZXMuaW8vY29tcG9uZW50IjoiaGFwcm94eSIsImFwcC5rdWJlcm5ldGVzLmlvL2luc3…
Replicas: 3 desired | 1 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app.kubernetes.io/component=haproxy
app.kubernetes.io/instance=mytest-cluster
app.kubernetes.io/managed-by=percona-xtradb-cluster-operator
app.kubernetes.io/name=percona-xtradb-cluster
app.kubernetes.io/part-of=percona-xtradb-cluster
rack=rack-22
Annotations: iam.amazonaws.com/role: role-arn
percona.com/configuration-hash: f0cf38deede8503f8be79b49927dba41
Service Account: default
Containers:
haproxy:
Image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
Ports: 3306/TCP, 3307/TCP, 3309/TCP, 33062/TCP, 33060/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Limits:
cpu: 700m
memory: 1G
Requests:
cpu: 600m
memory: 1G
Liveness: exec [/usr/local/bin/liveness-check.sh] delay=120s timeout=5s period=30s #success=1 #failure=4
Readiness: exec [/usr/local/bin/readiness-check.sh] delay=135s timeout=1s period=5s #success=1 #failure=3
Environment Variables from:
mytest-cluster-env-vars-haproxy Secret Optional: true
Environment:
PXC_SERVICE: mytest-cluster-pxc
LIVENESS_CHECK_TIMEOUT: 5
READINESS_CHECK_TIMEOUT: 1
Mounts:
/etc/haproxy-custom/ from haproxy-custom (rw)
/etc/haproxy/pxc from haproxy-auto (rw)
/etc/mysql/haproxy-env-secret from mytest-cluster-env-vars-haproxy (rw)
/etc/mysql/mysql-users-secret from mysql-users-secret-file (rw)
pxc-monit:
Image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
Port: <none>
Host Port: <none>
Args:
/usr/bin/peer-list
-on-change=/usr/bin/add_pxc_nodes.sh
-service=$(PXC_SERVICE)
Limits:
cpu: 600m
memory: 2G
Requests:
cpu: 500m
memory: 1G
Environment Variables from:
mytest-cluster-env-vars-haproxy Secret Optional: true
Environment:
PXC_SERVICE: mytest-cluster-pxc
Mounts:
/etc/haproxy-custom/ from haproxy-custom (rw)
/etc/haproxy/pxc from haproxy-auto (rw)
/etc/mysql/haproxy-env-secret from mytest-cluster-env-vars-haproxy (rw)
/etc/mysql/mysql-users-secret from mysql-users-secret-file (rw)
my-sidecar-1:
Image: busybox
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;
Limits:
cpu: 200m
memory: 200M
Requests:
cpu: 100m
memory: 100M
Environment: <none>
Mounts: <none>
Volumes:
haproxy-custom:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mytest-cluster-haproxy
Optional: true
haproxy-auto:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
mysql-users-secret-file:
Type: Secret (a volume populated by a Secret)
SecretName: internal-mytest-cluster
Optional: false
mytest-cluster-env-vars-haproxy:
Type: Secret (a volume populated by a Secret)
SecretName: mytest-cluster-env-vars-haproxy
Optional: true
Volume Claims: <none>
Events: <none>
mytest-cryaml.txt (15.9 KB)
@Ravi_Kumar_Pokala do you have a custom scheduler? As I can see, the schedulerName option is uncommented for haproxy.
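(For context, this is the relevant part of the haproxy section in cr.yaml; a sketch based on the 1.11.0 sample, with the option commented out again so the default scheduler is used:)
haproxy:
  enabled: true
  size: 3
  image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
#  schedulerName: mycustom-scheduler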
@Slava_Sarzhan : No custom scheduler. I have commented it out and now cluster creation works, thank you. I am running into another issue while restoring a new cluster from a backup.
Steps followed:
- Created a cluster and added a new table to the MySQL database.
- Ran a backup to store it in a PVC.
- Created another cluster named 'restore-mytest-cluster'.
- Applied the restore spec file below.
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: restore-mytest-cluster
  backupName: backup-to-restore-emp-table
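(To apply the spec and follow the restore, assuming it is saved as restore.yaml; pxc-restore is the short name the restore CRD registers:)
kubectl apply -f restore.yaml -n pxc
kubectl get pxc-restore -n pxc
kubectl describe pxc-restore restore1 -n pxc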
'restore-mytest-cluster' was stopped and the pxc pod got recreated, but it failed to run successfully due to a health check failure.
restore-mytest-cluster-haproxy-0 2/3 Running 3 12m
restore-mytest-cluster-pxc-0 2/3 Running 5 13m
The event logs show a login failure for user 'monitor':
>kubectl describe pod/restore-mytest-cluster-pxc-0
Name: restore-mytest-cluster-pxc-0
Namespace: pxc
Priority: 0
Service Account: default
Node: 10.47.244.15/10.47.244.15
Start Time: Fri, 10 Feb 2023 13:31:10 -0800
Labels: app.kubernetes.io/component=pxc
app.kubernetes.io/instance=restore-mytest-cluster
app.kubernetes.io/managed-by=percona-xtradb-cluster-operator
app.kubernetes.io/name=percona-xtradb-cluster
app.kubernetes.io/part-of=percona-xtradb-cluster
controller-revision-hash=restore-mytest-cluster-pxc-6cc754c9f5
statefulset.kubernetes.io/pod-name=restore-mytest-cluster-pxc-0
Annotations: cni.projectcalico.org/podIP: 10.20.80.154/32
cni.projectcalico.org/podIPs: 10.20.80.154/32
kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory limit for container logs; cpu, memory limit for container logrotate
Percona d41d8cd98f00b204e9800998ecf8427e
Percona eca633ffe87d12049ad0f444e2485101
Percona cc20d1bd6e04d4d34816e6f944711423
Status: Running
IP: 10.20.80.154
IPs:
IP: 10.20.80.154
Controlled By: StatefulSet/restore-mytest-cluster-pxc
Init Containers:
pxc-init:
Container ID: docker://544436ecae60bad7b8de3c88a486d056c1b48c66da3ce2690225c57137a19aad
Image: percona/percona-xtradb-cluster-operator:1.11.0
Image ID: docker-pullable://percona/percona-xtradb-cluster-operator@sha256:69501813d433aba1b9bd0babf7b7033000696da2a4c7fc582dac00c79a100c82
Port:
Host Port:
Command:
/pxc-init-entrypoint.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 10 Feb 2023 13:31:20 -0800
Finished: Fri, 10 Feb 2023 13:31:20 -0800
Ready: True
Restart Count: 0
Limits:
cpu: 1
ephemeral-storage: 1400M
memory: 1400M
Requests:
cpu: 600m
ephemeral-storage: 1G
memory: 1G
Environment:
Mounts:
/var/lib/mysql from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9s5n6 (ro)
Containers:
logs:
Container ID: docker://34436a072f376763f9d9f3aa00cfe0ac520d57ab3ad27595bf679cc5ca0b894b
Image: percona/percona-xtradb-cluster-operator:1.11.0-logcollector
Image ID: docker-pullable://percona/percona-xtradb-cluster-operator@sha256:fda6ca8f7bf95e86808ae60e305cda8007f7a886688f9bcfa5a566d2b43d05c0
Port:
Host Port:
State: Running
Started: Fri, 10 Feb 2023 13:31:22 -0800
Ready: True
Restart Count: 0
Limits:
cpu: 1500m
memory: 512Mi
Requests:
cpu: 200m
memory: 100M
Environment Variables from:
restore-mytest-cluster-log-collector Secret Optional: true
Environment:
LOG_DATA_DIR: /var/lib/mysql
POD_NAMESPASE: pxc (v1:metadata.namespace)
POD_NAME: restore-mytest-cluster-pxc-0 (v1:metadata.name)
Mounts:
/var/lib/mysql from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9s5n6 (ro)
logrotate:
Container ID: docker://400e529139c3d32d51383ab16d0373c3aeb90569d8695fc9e87026a3eadefe0b
Image: percona/percona-xtradb-cluster-operator:1.11.0-logcollector
Image ID: docker-pullable://percona/percona-xtradb-cluster-operator@sha256:fda6ca8f7bf95e86808ae60e305cda8007f7a886688f9bcfa5a566d2b43d05c0
Port:
Host Port:
Args:
logrotate
State: Running
Started: Fri, 10 Feb 2023 13:31:23 -0800
Ready: True
Restart Count: 0
Limits:
cpu: 1500m
memory: 512Mi
Requests:
cpu: 200m
memory: 100M
Environment:
SERVICE_TYPE: mysql
MONITOR_PASSWORD: <set to the key 'monitor' in secret 'internal-restore-mytest-cluster'> Optional: false
Mounts:
/var/lib/mysql from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9s5n6 (ro)
pxc:
Container ID: docker://8e22f96e5325f9531797c7edc8e87c376150c192ecf7c46f7cbe3e03e1977dd9
Image: percona/percona-xtradb-cluster:8.0.27-18.1
Image ID: docker-pullable://percona/percona-xtradb-cluster@sha256:a0fced75ecd2cd164dd9937917440911aed972476d48a2b8a84fe832bc67e43a
Ports: 3306/TCP, 4444/TCP, 4567/TCP, 4568/TCP, 33062/TCP, 33060/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Command:
/var/lib/mysql/pxc-entrypoint.sh
Args:
mysqld
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 10 Feb 2023 13:45:23 -0800
Finished: Fri, 10 Feb 2023 13:47:43 -0800
Ready: False
Restart Count: 6
Limits:
cpu: 1
ephemeral-storage: 1400M
memory: 1400M
Requests:
cpu: 600m
ephemeral-storage: 1G
memory: 1G
Liveness: exec [/var/lib/mysql/liveness-check.sh] delay=100s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [/var/lib/mysql/readiness-check.sh] delay=125s timeout=15s period=30s #success=1 #failure=5
Environment Variables from:
restore-mytest-cluster-env-vars-pxc Secret Optional: true
Environment:
PXC_SERVICE: restore-mytest-cluster-pxc-unready
MONITOR_HOST: %
MYSQL_ROOT_PASSWORD: <set to the key 'root' in secret 'internal-restore-mytest-cluster'> Optional: false
XTRABACKUP_PASSWORD: <set to the key 'xtrabackup' in secret 'internal-restore-mytest-cluster'> Optional: false
MONITOR_PASSWORD: <set to the key 'monitor' in secret 'internal-restore-mytest-cluster'> Optional: false
LOG_DATA_DIR: /var/lib/mysql
IS_LOGCOLLECTOR: yes
CLUSTER_HASH: 2828535
OPERATOR_ADMIN_PASSWORD: <set to the key 'operator' in secret 'internal-restore-mytest-cluster'> Optional: false
LIVENESS_CHECK_TIMEOUT: 5
READINESS_CHECK_TIMEOUT: 15
Mounts:
/etc/my.cnf.d from auto-config (rw)
/etc/mysql/mysql-users-secret from mysql-users-secret-file (rw)
/etc/mysql/ssl from ssl (rw)
/etc/mysql/ssl-internal from ssl-internal (rw)
/etc/mysql/vault-keyring-secret from vault-keyring-secret (rw)
/etc/percona-xtradb-cluster.conf.d from config (rw)
/tmp from tmp (rw)
/var/lib/mysql from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9s5n6 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-restore-mytest-cluster-pxc-0
ReadOnly: false
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: restore-mytest-cluster-pxc
Optional: true
ssl-internal:
Type: Secret (a volume populated by a Secret)
SecretName: restore-mytest-cluster-ssl-internal
Optional: true
ssl:
Type: Secret (a volume populated by a Secret)
SecretName: restore-mytest-cluster-ssl
Optional: false
auto-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: auto-restore-mytest-cluster-pxc
Optional: true
vault-keyring-secret:
Type: Secret (a volume populated by a Secret)
SecretName: restore-mytest-cluster-vault
Optional: true
mysql-users-secret-file:
Type: Secret (a volume populated by a Secret)
SecretName: internal-restore-mytest-cluster
Optional: false
default-token-9s5n6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9s5n6
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 16m default-scheduler Successfully assigned pxc/restore-mytest-cluster-pxc-0 to 10.47.244.15
Warning FailedAttachVolume 16m attachdetach-controller Multi-Attach error for volume "pvc-d761e245-3cca-4d15-bfc3-3846556fae35" Volume is already exclusively attached to one node and can't be attached to another
Normal SuccessfulAttachVolume 16m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-d761e245-3cca-4d15-bfc3-3846556fae35"
Normal Pulling 16m kubelet Pulling image "percona/percona-xtradb-cluster-operator:1.11.0"
Normal Pulled 16m kubelet Successfully pulled image "percona/percona-xtradb-cluster-operator:1.11.0" in 432.030729ms
Normal Created 16m kubelet Created container pxc-init
Normal Started 16m kubelet Started container pxc-init
Normal Pulled 16m kubelet Successfully pulled image "percona/percona-xtradb-cluster-operator:1.11.0-logcollector" in 379.571455ms
Normal Pulling 16m kubelet Pulling image "percona/percona-xtradb-cluster-operator:1.11.0-logcollector"
Normal Created 16m kubelet Created container logs
Normal Started 16m kubelet Started container logs
Normal Pulling 16m kubelet Pulling image "percona/percona-xtradb-cluster-operator:1.11.0-logcollector"
Normal Pulled 16m kubelet Successfully pulled image "percona/percona-xtradb-cluster-operator:1.11.0-logcollector" in 382.095939ms
Normal Created 16m kubelet Created container logrotate
Normal Started 16m kubelet Started container logrotate
Normal Pulling 16m kubelet Pulling image "percona/percona-xtradb-cluster:8.0.27-18.1"
Normal Pulled 16m kubelet Successfully pulled image "percona/percona-xtradb-cluster:8.0.27-18.1" in 371.509446ms
Normal Created 16m kubelet Created container pxc
Normal Started 16m kubelet Started container pxc
Warning Unhealthy 14m (x2 over 14m) kubelet Liveness probe failed: ERROR 1045 (28000): Access denied for user 'monitor'@'restore-mytest-cluster-pxc-0.restore-mytest-cluster-pxc.pxc.svc.' (using password: YES)
@Ravi_Kumar_Pokala According to our documentation, you need to use the same secrets (user passwords) as in the original cluster. Do you have them?
When restoring to a new Kubernetes-based environment, make sure it has a Secrets object with the same user passwords as in the original cluster. More details about secrets can be found in System Users.
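(A minimal sketch of reusing the original passwords, assuming both clusters run in the pxc namespace and the original Secrets object is named mytest-cluster-secrets; in practice use whatever each CR's spec.secretsName points to:)
kubectl get secret mytest-cluster-secrets -n pxc -o yaml > original-secrets.yaml
Then edit original-secrets.yaml so that metadata.name matches the new cluster's secretsName, drop the uid, resourceVersion and creationTimestamp fields, and apply it before creating the new cluster:
kubectl apply -n pxc -f original-secrets.yaml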
@Slava_Sarzhan: Thank you. It worked after using the same secrets.