Hi,
I'm getting this error in the operator log:
{"level":"error","ts":1605766525.9552927,"logger":"controller_perconaxtradbcluster","msg":"Update status","error":"send update: Operation cannot be fulfilled on perconaxtradbclusters.pxc.percona.com \"cluster1\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:189\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:494\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/percona/percona-xtradb-clus
ter-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
$ k get pods
NAME                                               READY   STATUS                  RESTARTS   AGE
cluster1-proxysql-0                                3/3     Running                 0          28m
cluster1-proxysql-1                                3/3     Running                 0          27m
cluster1-proxysql-2                                3/3     Running                 0          27m
cluster1-pxc-0                                     0/1     Init:CrashLoopBackOff   2          52s
percona-xtradb-cluster-operator-749b86b678-h8jlp   1/1     Running                 0          13m
$ k describe pod cluster1-pxc-0
Name:           cluster1-pxc-0
Namespace:      pxc
Priority:       0
Node:           node-1.internal/192.168.250.112
Start Time:     Thu, 19 Nov 2020 06:15:51 +0000
Labels:         app.kubernetes.io/component=pxc
                app.kubernetes.io/instance=cluster1
                app.kubernetes.io/managed-by=percona-xtradb-cluster-operator
                app.kubernetes.io/name=percona-xtradb-cluster
                app.kubernetes.io/part-of=percona-xtradb-cluster
                controller-revision-hash=cluster1-pxc-74f85d4868
                statefulset.kubernetes.io/pod-name=cluster1-pxc-0
Annotations:    percona.com/configuration-hash: d41d8cd98f00b204e9800998ecf8427e
                percona.com/ssl-hash: 57ef53b333e4d4eedbb9aad081fd4ef6
                percona.com/ssl-internal-hash: ee70e6b562c7e24292590033ce7c4652
Status:         Pending
IP:             10.233.100.16
IPs:
  IP:  10.233.100.16
Controlled By:  StatefulSet/cluster1-pxc
Init Containers:
  pxc-init:
    Container ID:   docker://b0b4a6440649c341a2843236af75acbe127342d3abbc4e7258d67aee78b598ca
    Image:          percona/percona-xtradb-cluster-operator:1.6.0
    Image ID:       docker-pullable://percona/percona-xtradb-cluster-operator@sha256:9871d6fb960b4ec498430a398a44eca08873591a6b6efb8a35349e79e24f3072
    Port:           <none>
    Host Port:      <none>
    Command:
      /pxc-init-entrypoint.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 19 Nov 2020 06:19:06 +0000
      Finished:     Thu, 19 Nov 2020 06:19:06 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:        2
      memory:     2G
    Environment:  <none>
    Mounts:
      /var/lib/mysql from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l2p5j (ro)
Containers:
  pxc:
    Container ID:
    Image:          percona/percona-xtradb-cluster:8.0.20-11.1
    Image ID:
    Ports:          3306/TCP, 4444/TCP, 4567/TCP, 4568/TCP, 33062/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /var/lib/mysql/pxc-entrypoint.sh
    Args:
      mysqld
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:     2
      memory:  2G
    Liveness:   exec [/var/lib/mysql/liveness-check.sh] delay=300s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/var/lib/mysql/readiness-check.sh] delay=15s timeout=15s period=30s #success=1 #failure=5
    Environment:
      PXC_SERVICE:              cluster1-pxc-unready
      MONITOR_HOST:             %
      MYSQL_ROOT_PASSWORD:      <set to the key 'root' in secret 'internal-cluster1'>          Optional: false
      XTRABACKUP_PASSWORD:      <set to the key 'xtrabackup' in secret 'internal-cluster1'>    Optional: false
      MONITOR_PASSWORD:         <set to the key 'monitor' in secret 'internal-cluster1'>       Optional: false
      CLUSTERCHECK_PASSWORD:    <set to the key 'clustercheck' in secret 'internal-cluster1'>  Optional: false
      OPERATOR_ADMIN_PASSWORD:  <set to the key 'operator' in secret 'internal-cluster1'>      Optional: false
    Mounts:
      /etc/my.cnf.d from auto-config (rw)
      /etc/mysql/ssl from ssl (rw)
      /etc/mysql/ssl-internal from ssl-internal (rw)
      /etc/mysql/vault-keyring-secret from vault-keyring-secret (rw)
      /etc/percona-xtradb-cluster.conf.d from config (rw)
      /tmp from tmp (rw)
      /var/lib/mysql from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l2p5j (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  datadir:
    Type:          HostPath (bare host directory volume)
    Path:          /app/test/mysql-operator
    HostPathType:  Directory
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cluster1-pxc
    Optional:  true
  ssl-internal:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-ssl-internal
    Optional:    true
  ssl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-ssl
    Optional:    false
  auto-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      auto-cluster1-pxc
    Optional:  true
  vault-keyring-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  keyring-secret-vault
    Optional:    true
  default-token-l2p5j:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-l2p5j
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/unreachable:NoExecute for 30s
                 node.kubernetes.io/not-ready:NoExecute for 30s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m27s                  default-scheduler  Successfully assigned pxc/cluster1-pxc-0 to eranet-node-1.internal
  Normal   Pulled     4m30s (x4 over 5m23s)  kubelet            Successfully pulled image "percona/percona-xtradb-cluster-operator:1.6.0"
  Normal   Created    4m30s (x4 over 5m23s)  kubelet            Created container pxc-init
  Normal   Started    4m30s (x4 over 5m22s)  kubelet            Started container pxc-init
  Normal   Pulling    3m38s (x5 over 5m25s)  kubelet            Pulling image "percona/percona-xtradb-cluster-operator:1.6.0"
  Warning  BackOff    23s (x23 over 5m18s)   kubelet            Back-off restarting failed container
This is a standard K8s 1.18 cluster built with Kubespray on bare metal, running operator version 1.6.0.
Any hint as to what could be wrong? Is it the init container that fails while running /pxc-init-entrypoint.sh? No log is produced at this phase.
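For reference, this is how I'm looking for init-container output (pod and namespace names as above); both log commands come back empty:

```shell
# Current and previous logs of the failing pxc-init init container
kubectl -n pxc logs cluster1-pxc-0 -c pxc-init
kubectl -n pxc logs cluster1-pxc-0 -c pxc-init --previous

# Exit code of the last terminated run of the init container
kubectl -n pxc get pod cluster1-pxc-0 \
  -o jsonpath='{.status.initContainerStatuses[0].lastState.terminated.exitCode}'
```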
The only version that works for me is 1.4.0; I haven't had any luck running 1.5.0 or 1.6.0 in several tries.
Many thanks