Hello,
Percona Helm chart 1.17.0 (pxc-db), MySQL 5.7: the pxc-db-haproxy pods keep restarting. What are the likely causes, and how can this be resolved or optimized?
1. The mysql-pxc-db-haproxy pods are restarting and report the following errors:
error: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
[WARNING] (298) : Server galera-admin-nodes/mysql-pxc-db-pxc-0 is UP, reason: External check passed, code: 0, check duration: 131ms. 1 active and 1 backup servers online. 0 sessions requeued, 0 total in queue.
error: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
Backup Server galera-admin-nodes/mysql-pxc-db-pxc-1 is DOWN, reason: External check timeout, code: 0, check duration: 9999ms.
Example pod listing:
$ kubectl -n percona-operator-mysql get pod
NAME READY STATUS RESTARTS AGE
mysql-pxc-db-haproxy-0 2/2 Running 0 91m
mysql-pxc-db-haproxy-1 2/2 Running 0 88m
mysql-pxc-db-haproxy-2 2/2 Running 2 (27m ago) 86m
mysql-pxc-db-pxc-0 1/1 Running 0 91m
mysql-pxc-db-pxc-1 1/1 Running 0 88m
mysql-pxc-db-pxc-2 1/1 Running 0 87m
mysql-pxc-operator-7957785f46-w92fl 1/1 Running 14 (5d2h ago) 5d7h
2. The describe output for mysql-pxc-db-haproxy-0 shows the pod as unhealthy:
Warning Unhealthy 39m (x5 over 81m) kubelet Liveness probe failed: command timed out: "/opt/percona/haproxy_liveness_check.sh" timed out after 15s
Warning Unhealthy 39m (x3 over 81m) kubelet Liveness probe failed: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Warning Unhealthy 38m (x14 over 84m) kubelet Readiness probe failed: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Warning Unhealthy 37m (x11 over 85m) kubelet Readiness probe failed: command timed out: "/opt/percona/haproxy_readiness_check.sh" timed out after 15s
Warning Unhealthy 3m21s (x4 over 70m) kubelet Readiness probe failed: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Warning Unhealthy 3m21s (x5 over 69m) kubelet Liveness probe failed: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
The full describe output follows:
$ kubectl describe -n percona-operator-mysql pod mysql-pxc-db-haproxy-0
Name: mysql-pxc-db-haproxy-0
Namespace: percona-operator-mysql
Priority: 0
Service Account: default
Node: k16l09-h240-worker03/192.168.9.207
Start Time: Mon, 11 Aug 2025 17:40:23 +0800
Labels: app.kubernetes.io/component=haproxy
controller-revision-hash=mysql-pxc-db-haproxy-5b5cc5f988
Annotations: cni.projectcalico.org/containerID: f66865a512d7dc17b6b8b0fe8e4cc0f81473807dd62653b4f259760a833fc8a5
cni.projectcalico.org/podIP: 10.9.166.88/32
cni.projectcalico.org/podIPs: 10.9.166.88/32
kubectl.kubernetes.io/default-container: haproxy
percona.com/configuration-hash: d41d8cd98f00b204e9800998ecf8427e
Status: Running
IP: 10.9.166.88
IPs:
IP: 10.9.166.88
Controlled By: StatefulSet/mysql-pxc-db-haproxy
Init Containers:
pxc-init:
Container ID: containerd://2c01e9f7a3096453772218ae3daf602d8bce9ab048f282876b293bddbb89974c
Image: harbor.shandy.com/percona/percona-xtradb-cluster-operator:1.17.0
Image ID: harbor.shandy.com/percona/percona-xtradb-cluster-operator@sha256:8305f55e485d6899ced1b640c9aeb84bc325d83019c841d68593085227617e05
Port:
Host Port:
Command:
/pxc-init-entrypoint.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 11 Aug 2025 17:40:25 +0800
Finished: Mon, 11 Aug 2025 17:40:29 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 50m
memory: 50M
Requests:
cpu: 50m
memory: 50M
Environment:
Mounts:
/var/lib/mysql from bin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxgd8 (ro)
haproxy-init:
Container ID: containerd://d90176d44c0b5e32b3519ebe99ddaad617aa04f7f16188915bcdfd32437b2945
Image: harbor.shandy.com/percona/percona-xtradb-cluster-operator:1.17.0
Image ID: harbor.shandy.com/percona/percona-xtradb-cluster-operator@sha256:8305f55e485d6899ced1b640c9aeb84bc325d83019c841d68593085227617e05
Port:
Host Port:
Command:
/haproxy-init-entrypoint.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 11 Aug 2025 17:40:30 +0800
Finished: Mon, 11 Aug 2025 17:40:34 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 50m
memory: 50M
Requests:
cpu: 50m
memory: 50M
Environment:
Mounts:
/opt/percona from bin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxgd8 (ro)
Containers:
haproxy:
Container ID: containerd://47fe416f9a82fafc4de251b939373d44d2c6c03d6b27cc7dd4cc1aa411d543c2
Image: shandy.com
Image ID: shandy.com
Ports: 3306/TCP, 3307/TCP, 3309/TCP, 33062/TCP, 33060/TCP, 8404/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Command:
/opt/percona/haproxy-entrypoint.sh
Args:
haproxy
State: Running
Started: Mon, 11 Aug 2025 17:40:35 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 2G
Requests:
cpu: 600m
memory: 1G
Liveness: exec [/opt/percona/haproxy_liveness_check.sh] delay=300s timeout=15s period=30s #success=1 #failure=5
Readiness: exec [/opt/percona/haproxy_readiness_check.sh] delay=50s timeout=15s period=10s #success=1 #failure=5
Environment Variables from:
mysql-pxc-db-env-vars-haproxy Secret Optional: true
Environment:
PXC_SERVICE: mysql-pxc-db-pxc
LIVENESS_CHECK_TIMEOUT: 15
READINESS_CHECK_TIMEOUT: 15
Mounts:
/etc/haproxy-custom/ from haproxy-custom (rw)
/etc/haproxy/pxc from haproxy-auto (rw)
/etc/mysql/haproxy-env-secret from mysql-pxc-db-env-vars-haproxy (rw)
/etc/mysql/mysql-users-secret from mysql-users-secret-file (rw)
/opt/percona from bin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxgd8 (ro)
pxc-monit:
Container ID: containerd://5a290824adc7389588f700eb0d88c90d0919dfd855c8471c53b9a7b1522cec5c
Image: shandy.com
Image ID: shandy.com
Port:
Host Port:
Command:
/opt/percona/haproxy-entrypoint.sh
Args:
/opt/percona/peer-list
-on-change=/opt/percona/haproxy_add_pxc_nodes.sh
-service=$(PXC_SERVICE)
State: Running
Started: Mon, 11 Aug 2025 20:06:43 +0800
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 11 Aug 2025 19:32:11 +0800
Finished: Mon, 11 Aug 2025 20:06:42 +0800
Ready: True
Restart Count: 3
Environment Variables from:
mysql-pxc-db-env-vars-haproxy Secret Optional: true
Environment:
PXC_SERVICE: mysql-pxc-db-pxc
REPLICAS_SVC_ONLY_READERS: false
Mounts:
/etc/haproxy-custom/ from haproxy-custom (rw)
/etc/haproxy/pxc from haproxy-auto (rw)
/etc/mysql/haproxy-env-secret from mysql-pxc-db-env-vars-haproxy (rw)
/etc/mysql/mysql-users-secret from mysql-users-secret-file (rw)
/opt/percona from bin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxgd8 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
haproxydata:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
haproxy-custom:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mysql-pxc-db-haproxy
Optional: true
haproxy-auto:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
mysql-users-secret-file:
Type: Secret (a volume populated by a Secret)
SecretName: internal-mysql-pxc-db
Optional: false
mysql-pxc-db-env-vars-haproxy:
Type: Secret (a volume populated by a Secret)
SecretName: mysql-pxc-db-env-vars-haproxy
Optional: true
bin:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
kube-api-access-gxgd8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: mysql=dedicated
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Pulling 44m (x4 over 3h10m) kubelet Pulling image "shandy.com
Normal Created 44m (x4 over 3h10m) kubelet Created container: pxc-monit
Normal Started 44m (x4 over 3h10m) kubelet Started container pxc-monit
Normal Pulled 44m kubelet Successfully pulled image "shandy.com in 158ms (158ms including waiting). Image size: 101394890 bytes.
Warning Unhealthy 39m (x5 over 81m) kubelet Liveness probe failed: command timed out: "/opt/percona/haproxy_liveness_check.sh" timed out after 15s
Warning Unhealthy 39m (x3 over 81m) kubelet Liveness probe failed: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Warning Unhealthy 38m (x14 over 84m) kubelet Readiness probe failed: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Warning Unhealthy 37m (x11 over 85m) kubelet Readiness probe failed: command timed out: "/opt/percona/haproxy_readiness_check.sh" timed out after 15s
Events:
Type Reason Age From Message
Warning Unhealthy 50m (x3 over 58m) kubelet Readiness probe failed: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Warning Unhealthy 9m10s (x3 over 57m) kubelet Liveness probe failed: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
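For context on the events above: both probes are exec scripts with a 15s timeout, and a check that exceeds its deadline is killed and counted as a failure; five consecutive failures (failureThreshold=5) restart the container. A minimal, self-contained shell illustration of that timeout behavior (scaled down to a 1s deadline and a 2s check; this is not the actual probe wrapper):

```shell
# A "probe" that sleeps past its deadline: `timeout` kills it and returns
# non-zero, so the check is recorded as failed, just as the kubelet does
# with the 15s exec-probe timeout in the events above.
if timeout 1 sh -c 'sleep 2'; then
  echo "check passed"
else
  echo "check timed out"
fi
```

The script never gets a chance to report success once the deadline passes, which is why a temporarily slow backend shows up as repeated probe failures rather than one slow success.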
3. Error log example:
{"time":"11/Aug/2025:14:05:53.238", "backend_source_ip": "10.9.166.74", "backend_source_port": "33062", "message": "The following values are used for PXC node 10.9.166.74 in backend galera-replica-nodes: wsrep_local_state is 4; pxc_maint_mod is DISABLED; wsrep_cluster_status is Primary; wsrep_reject_queries is NONE; wsrep_sst_donor_rejects_queries is OFF; 3 nodes are available"}
{"time":"11/Aug/2025:14:05:53.242", "backend_source_ip": "10.9.166.74", "backend_source_port": "33062", "message": "PXC node 10.9.166.74 for backend galera-replica-nodes is ok"}
{"time":"11/Aug/2025:14:05:55.216", "backend_source_ip": "10.9.166.74", "backend_source_port": "33062", "message": "The following values are used for PXC node 10.9.166.74 in backend galera-admin-nodes: wsrep_local_state is 4; pxc_maint_mod is DISABLED; wsrep_cluster_status is Primary; wsrep_reject_queries is NONE; wsrep_sst_donor_rejects_queries is OFF; 3 nodes are available"}
{"time":"11/Aug/2025:14:05:55.221", "backend_source_ip": "10.9.166.74", "backend_source_port": "33062", "message": "PXC node 10.9.166.74 for backend galera-admin-nodes is ok"}
[WARNING] (1154) : kill 3234
[WARNING] (1) : Process 3237 exited with code 0 (Exit)
{"time":"11/Aug/2025:14:06:01.063", "backend_source_ip": "10.9.106.79", "backend_source_port": "33062", "message": "The following values are used for PXC node 10.9.106.79 in backend galera-mysqlx-nodes: wsrep_local_state is 4; pxc_maint_mod is DISABLED; wsrep_cluster_status is Primary; wsrep_reject_queries is NONE; wsrep_sst_donor_rejects_queries is OFF; 3 nodes are available"}
{"time":"11/Aug/2025:14:06:01.068", "backend_source_ip": "10.9.106.79", "backend_source_port": "33062", "message": "PXC node 10.9.106.79 for backend galera-mysqlx-nodes is ok"}
{"time":"11/Aug/2025:14:06:01.274", "backend_source_ip": "10.9.106.79", "backend_source_port": "33062", "message": "The following values are used for PXC node 10.9.106.79 in backend galera-replica-nodes: wsrep_local_state is 4; pxc_maint_mod is DISABLED; wsrep_cluster_status is Primary; wsrep_reject_queries is NONE; wsrep_sst_donor_rejects_queries is OFF; 3 nodes are available"}
{"time":"11/Aug/2025:14:06:01.279", "backend_source_ip": "10.9.106.79", "backend_source_port": "33062", "message": "PXC node 10.9.106.79 for backend galera-replica-nodes is ok"}
[WARNING] (1154) : Server galera-replica-nodes/mysql-pxc-db-pxc-2 is UP, reason: External check passed, code: 0, check duration: 9456ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
{"time":"11/Aug/2025:14:06:03.453", "backend_source_ip": "10.9.70.132", "backend_source_port": "33062", "message": "The following values are used for PXC node 10.9.70.132 in backend galera-mysqlx-nodes: wsrep_local_state is 4; pxc_maint_mod is DISABLED; wsrep_cluster_status is Primary; wsrep_reject_queries is NONE; wsrep_sst_donor_rejects_queries is OFF; 3 nodes are available"}
{"time":"11/Aug/2025:14:06:03.457", "backend_source_ip": "10.9.70.132", "backend_source_port": "33062", "message": "PXC node 10.9.70.132 for backend galera-mysqlx-nodes is ok"}
[WARNING] (1154) : kill 3275
[WARNING] (1) : Process 3278 exited with code 0 (Exit)
[WARNING] (1154) : kill 3286
[WARNING] (1) : Process 3289 exited with code 0 (Exit)
[WARNING] (8373) : kill 35450
[WARNING] (8373) : Server galera-replica-nodes/mysql-pxc-db-pxc-0 is DOWN, reason: External check timeout, code: 0, check duration: 10002ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] (8373) : kill 35451
[WARNING] (8373) : Backup Server galera-admin-nodes/mysql-pxc-db-pxc-1 is DOWN, reason: External check timeout, code: 0, check duration: 9999ms. 0 active and 1 backup servers left. Running on backup. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] (1) : Process 35456 exited with code 0 (Exit)
[WARNING] (1) : Process 35460 exited with code 0 (Exit)
{"time":"11/Aug/2025:14:47:45.880", "backend_source_ip": "10.9.106.79", "backend_source_port": "33062", "message": "The following values are used for PXC node 10.9.106.79 in backend galera-nodes: wsrep_local_state is 4; pxc_maint_mod is DISABLED; wsrep_cluster_status is Primary; wsrep_reject_queries is NONE; wsrep_sst_donor_rejects_queries is OFF; 3 nodes are available"}
{"time":"11/Aug/2025:14:47:45.885", "backend_source_ip": "10.9.106.79", "backend_source_port": "33062", "message": "PXC node 10.9.106.79 for backend galera-nodes is ok"}
[WARNING] (8373) : Backup Server galera-nodes/mysql-pxc-db-pxc-2 is UP, reason: External check passed, code: 0, check duration: 2925ms. 1 active and 2 backup servers online. 0 sessions requeued, 0 total in queue.
{"time":"11/Aug/2025:14:47:40.020", "client_ip": "10.9.252.134", "client_port": "55564", "backend_source_ip": "10.9.106.78", "backend_source_port": "35660", "frontend_name": "galera-in", "backend_name": "galera-nodes", "server_name": "mysql-pxc-db-pxc-0", "tw": "1", "tc": "0", "Tt": "7944", "bytes_read": "2758", "termination_state": "--", "actconn": "1", "feconn": "1", "beconn": "0", "srv_conn": "0", "retries": "0", "srv_queue": "0", "backend_queue": "0" }
{"time":"11/Aug/2025:14:47:48.018", "backend_source_ip": "10.9.70.132", "backend_source_port": "33062", "message": "The following values are used for PXC node 10.9.70.132 in backend galera-nodes: wsrep_local_state is 4; pxc_maint_mod is DISABLED; wsrep_cluster_status is Primary; wsrep_reject_queries is NONE; wsrep_sst_donor_rejects_queries is OFF; 3 nodes are available"}
{"time":"11/Aug/2025:14:47:48.023", "backend_source_ip": "10.9.70.132", "backend_source_port": "33062", "message": "PXC node 10.9.70.132 for backend galera-nodes is ok"}
[WARNING] (8373) : kill 35468
[WARNING] (1) : Process 35471 exited with code 0 (Exit)
[WARNING] (8373) : kill 35486
[WARNING] (1) : Process 35489 exited with code 0 (Exit)
[WARNING] (8373) : kill 35495
[WARNING] (8373) : kill 35496
[WARNING] (8373) : kill 35499
[WARNING] (1) : Process 35501 exited with code 0 (Exit)
[WARNING] (1) : Process 35508 exited with code 0 (Exit)
[WARNING] (1) : Process 35516 exited with code 0 (Exit)
[WARNING] (8373) : kill 35545
[WARNING] (1) : Process 35548 exited with code 0 (Exit)
{"time":"11/Aug/2025:14:47:50.724", "client_ip": "127.0.0.1", "client_port": "51374", "backend_source_ip": "10.9.106.78", "backend_source_port": "39168", "frontend_name": "galera-admin-in", "backend_name": "galera-admin-nodes", "server_name": "mysql-pxc-db-pxc-2", "tw": "1", "tc": "0", "Tt": "6355", "bytes_read": "3071", "termination_state": "--", "actconn": "1", "feconn": "1", "beconn": "0", "srv_conn": "0", "retries": "0", "srv_queue": "0", "backend_queue": "0" }
[WARNING] (8373) : kill 35565
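The external check durations in these logs range from 131ms up to 9999-10002ms, and every DOWN transition coincides with a duration hitting the 10s check timeout, so the flapping looks like slow checks rather than truly dead nodes. A small sketch for scanning a captured HAProxy log for durations near the cutoff (the sample lines are copied from the logs above; the 8000ms threshold is an arbitrary assumption):

```shell
# Build a sample file from three of the log lines above, then extract the
# "check duration: Nms" values and flag any within ~80% of the 10s timeout.
cat > /tmp/haproxy-sample.log <<'EOF'
[WARNING] (298) : Server galera-admin-nodes/mysql-pxc-db-pxc-0 is UP, reason: External check passed, code: 0, check duration: 131ms.
[WARNING] (1154) : Server galera-replica-nodes/mysql-pxc-db-pxc-2 is UP, reason: External check passed, code: 0, check duration: 9456ms.
[WARNING] (8373) : Backup Server galera-admin-nodes/mysql-pxc-db-pxc-1 is DOWN, reason: External check timeout, code: 0, check duration: 9999ms.
EOF
grep -o 'check duration: [0-9]*' /tmp/haproxy-sample.log \
  | awk '$3 >= 8000 { print $3 "ms: close to the 10s external-check timeout" }'
```

Run against the real `kubectl logs` output of a haproxy pod, this kind of filter makes it easy to see whether check durations creep upward before each DOWN event.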
4. The cluster status is ready.
$ kubectl get perconaxtradbcluster mysql-pxc-db -n percona-operator-mysql -o yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
annotations:
meta.helm.sh/release-name: mysql-pxc-db
meta.helm.sh/release-namespace: percona-operator-mysql
creationTimestamp: "2025-08-11T13:57:20Z"
finalizers:
- percona.com/delete-pxc-pods-in-order
generation: 1
labels:
app.kubernetes.io/instance: mysql-pxc-db
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: pxc-db
app.kubernetes.io/version: 1.17.0
helm.sh/chart: pxc-db-1.17.1
name: mysql-pxc-db
namespace: percona-operator-mysql
resourceVersion: "35537002"
uid: 085e5185-4dfe-4c3e-9f0f-9f3a9fda837b
spec:
crVersion: 1.17.0
enableCRValidationWebhook: false
enableVolumeExpansion: false
haproxy:
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
annotations: {}
enabled: true
gracePeriod: 30
image: shandy.com
labels: {}
livenessProbes:
failureThreshold: 5
initialDelaySeconds: 120
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 30
nodeSelector:
mysql: dedicated
podDisruptionBudget:
maxUnavailable: 1
readinessProbes:
failureThreshold: 5
initialDelaySeconds: 50
periodSeconds: 39
successThreshold: 1
timeoutSeconds: 30
resources:
limits:
cpu: 2
memory: 2.5G
requests:
cpu: 900m
memory: 1G
sidecarPVCs:
sidecarResources:
limits: {}
requests: {}
sidecarVolumes:
sidecars:
size: 3
tolerations:
volumeSpec:
emptyDir: {}
logCollectorSecretName: mysql-pxc-db-log-collector
logcollector:
enabled: false
pause: false
pmm:
enabled: false
proxysql:
enabled: false
pxc:
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
annotations: {}
autoRecovery: true
configuration: |
[mysqld]
connect_timeout=30 # default 10s, raised to 30s
net_read_timeout=60
net_write_timeout=60
wsrep_debug=ON
wsrep_provider_options="gcache.size=2G; gcache.recover=yes"
pxc_strict_mode=PERMISSIVE
max_allowed_packet=64M
max_connections=300
wait_timeout=900
innodb_lock_wait_timeout=60
innodb_buffer_pool_size=2G
gracePeriod: 600
image: harbor.shandy.com/percona/percona-xtradb-cluster:5.7.44-31.65
labels: {}
livenessProbes:
failureThreshold: 5
initialDelaySeconds: 120
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 30
nodeSelector:
mysql: dedicated
podDisruptionBudget:
maxUnavailable: 1
readinessProbes:
failureThreshold: 5
initialDelaySeconds: 50
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 30
resources:
limits:
cpu: 2
memory: 2.8Gi
requests:
cpu: 900m
memory: 2.3G
sidecarPVCs:
sidecarResources:
limits: {}
requests: {}
sidecarVolumes:
sidecars:
size: 3
tolerations:
volumeSpec:
persistentVolumeClaim:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 130Gi
storageClassName: longhorn
secretsName: mysql-pxc-db-secrets
sslInternalSecretName: mysql-pxc-db-ssl-internal
sslSecretName: mysql-pxc-db-ssl
tls:
enabled: true
updateStrategy: SmartUpdate
vaultSecretName: mysql-pxc-db-vault
status:
backup: {}
conditions:
- lastTransitionTime: "2025-08-11T13:57:20Z"
  status: enabled
  type: tls
- lastTransitionTime: "2025-08-11T13:57:20Z"
  status: "True"
  type: initializing
- lastTransitionTime: "2025-08-11T14:03:08Z"
  status: "True"
  type: ready
haproxy:
labelSelectorPath: app.kubernetes.io/component=haproxy,app.kubernetes.io/instance=mysql-pxc-db,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
ready: 3
size: 3
status: ready
host: mysql-pxc-db-haproxy.percona-operator-mysql
logcollector: {}
observedGeneration: 1
pmm: {}
proxysql: {}
pxc:
image: harbor.shandy.com/percona/percona-xtradb-cluster:5.7.44-31.65
labelSelectorPath: app.kubernetes.io/component=pxc,app.kubernetes.io/instance=mysql-pxc-db,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
ready: 3
size: 3
status: ready
version: 5.7.44-48-57
ready: 6
size: 6
state: ready
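Given the probe timeouts above, one mitigation to experiment with is relaxing the HAProxy probe settings via the chart values. A hedged sketch follows; the field names mirror the spec.haproxy section shown above, and the numbers are starting points, not recommendations. Note that the running pod still reports timeout=15s and LIVENESS_CHECK_TIMEOUT=15 even though the CR requests 30s, so it is worth confirming the StatefulSet has actually rolled after any change.

```yaml
# Hypothetical pxc-db chart values fragment (assumed to map onto the
# spec.haproxy.livenessProbes / readinessProbes fields of the CR above).
haproxy:
  livenessProbes:
    timeoutSeconds: 30   # running pod still shows 15s; verify it propagates
    periodSeconds: 30
    failureThreshold: 5
  readinessProbes:
    timeoutSeconds: 30
    periodSeconds: 10
    failureThreshold: 5
```

After applying, `kubectl describe` on a freshly restarted haproxy pod should show the new values in its Liveness/Readiness lines.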