Percona MySQL operator kubernetes on EKS

Hi Team,

  1. I am facing an issue while deploying the Percona MySQL operator in Kubernetes on an AWS EKS cluster. After I deploy, the pxc-0 pod does not come up; after about 5 minutes it goes into CrashLoopBackOff, and I see no logs in the other containers (logs, logrotate). I then checked the pxc container logs, and it shows the "no such host" error message below.

  2. I deployed the same YAMLs (cr.yaml) on GCP GKE, and the cluster came up within a few minutes.

  3. I then checked whether our EKS cluster has any firewall issue, but found none. As a further test, I deployed a sample standalone MySQL pod on the same AWS EKS cluster and it works fine, and other applications in the same cluster are also working fine, so a firewall issue seems unlikely.

But I am trying to find the root cause: why does the same cr.yaml work on GCP GKE but not on AWS EKS? Would appreciate any help or pointers.
@egegunes @spronin @matthewb

Operator link : percona-xtradb-cluster-operator/cr.yaml at main · percona/percona-xtradb-cluster-operator · GitHub

pxc container error log:

+ '[' m = - ']'
+ CFG=/etc/mysql/node.cnf
+ wantHelp=
+ for arg in "$@"
+ case "$arg" in
++ mysqld -V
++ awk '{print $3}'

++ awk -F. '{print $1"."$2}'
+ MYSQL_VERSION=8.0
++ mysqld -V
++ awk '{print $3}'
++ awk -F. '{print $3}'
++ awk -F- '{print $1}'
+ MYSQL_PATCH_VERSION=23
+ vault_secret=/etc/mysql/vault-keyring-secret/keyring_vault.conf
+ '[' -f /etc/mysql/vault-keyring-secret/keyring_vault.conf ']'
+ '[' -f /usr/lib64/mysql/plugin/binlog_utils_udf.so ']'
+ sed -i '/\[mysqld\]/a plugin_load="binlog_utils_udf=binlog_utils_udf.so"' /etc/mysql/node.cnf
+ sed -i '/\[mysqld\]/a gtid-mode=ON' /etc/mysql/node.cnf
+ sed -i '/\[mysqld\]/a enforce-gtid-consistency' /etc/mysql/node.cnf
+ grep -q '^progress=' /etc/mysql/node.cnf
+ sed -i 's|^progress=.*|progress=1|' /etc/mysql/node.cnf
+ grep -q '^\[sst\]' /etc/mysql/node.cnf
+ grep -q '^cpat=' /etc/mysql/node.cnf
+ sed '/^\[sst\]/a cpat=.*\\.pem$\\|.*init\\.ok$\\|.*galera\\.cache$\\|.*wsrep_recovery_verbose\\.log$\\|.*readiness-check\\.sh$\\|.*liveness-check\\.sh$\\|.*sst_in_progress$\\|.*sst-xb-tmpdir$\\|.*\\.sst$\\|.*gvwstate\\.dat$\\|.*grastate\\.dat$\\|.*\\.err$\\|.*\\.log$\\|.*RPM_UPGRADE_MARKER$\\|.*RPM_UPGRADE_HISTORY$\\|.*pxc-entrypoint\\.sh$\\|.*unsafe-bootstrap\\.sh$\\|.*pxc-configure-pxc\\.sh\\|.*peer-list$' /etc/mysql/node.cnf
+ [[ 8.0 == \8\.\0 ]]
+ [[ 23 -ge 26 ]]
+ grep -q '^skip_slave_start=ON' /etc/mysql/node.cnf
+ sed -i '/\[mysqld\]/a skip_slave_start=ON' /etc/mysql/node.cnf
+ file_env XTRABACKUP_PASSWORD xtrabackup xtrabackup
+ set +o xtrace
+ file_env CLUSTERCHECK_PASSWORD '' clustercheck
+ set +o xtrace
++ hostname -f
+ NODE_NAME=percona-mysql-pxc-0.percona-mysql-pxc.percona-mysql.svc.cluster.local
+ NODE_PORT=3306
Percona XtraDB Cluster: Finding peers
+ '[' -n percona-mysql-pxc-unready ']'
+ echo 'Percona XtraDB Cluster: Finding peers'
+ /var/lib/mysql/peer-list -on-start=/var/lib/mysql/pxc-configure-pxc.sh -service=percona-mysql-pxc-unready
2022/03/10 04:07:14 Peer finder enter
2022/03/10 04:07:14 Determined Domain to be percona-mysql.svc.cluster.local
2022/03/10 04:07:14 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:15 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:16 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:17 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:18 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:19 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:20 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:21 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:22 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:23 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:24 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
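The repeated `no such host` errors above mean the pod cannot resolve the `percona-mysql-pxc-unready` headless service through the cluster DNS at 10.100.0.10. A sketch of the checks I would run to narrow this down (the namespace is taken from the log above; the busybox image tag is an assumption, any image with DNS tools works):

```shell
# Verify the services the operator is expected to create actually exist:
kubectl -n percona-mysql get svc

# The log shows a lookup of the short name "percona-mysql-pxc-unready",
# which relies on the pod's resolv.conf search path. Try the fully
# qualified name from a throwaway pod that has DNS tools:
kubectl run -it --rm dnsdebug --image=busybox:1.36 --restart=Never -- \
  nslookup percona-mysql-pxc-unready.percona-mysql.svc.cluster.local
```

If the FQDN resolves but the short name does not, the problem is in the search-path handling; if neither resolves, CoreDNS or the service itself is the issue.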

@mohamedkashifuddin we need more details about your EKS cluster and cr.yaml.

Anything specific about your EKS deployment?
Which version is it? How did you deploy it (if through eksctl, please share the YAML or full command)? Is CoreDNS up and running?

On PXC - please share the cr.
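To check whether CoreDNS is healthy, something like this should do (the `k8s-app=kube-dns` label is the EKS default for CoreDNS):

```shell
# Are the CoreDNS pods running, and do they log any errors?
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50

# Does the kube-dns service have endpoints?
kubectl -n kube-system get endpoints kube-dns
```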


Hi @spronin

Anything specific about your EKS deployment? No, it is just a general cluster for testing the operator. I tried a few things, and CoreDNS appears to be up:

  1. Kubernetes version : 1.21 and eks.4

  2. CoreDNS : v1.8.4-eksbuild.1

  3. kubectl -n kube-system get endpoints kube-dns — this does not return empty endpoints:

NAME       ENDPOINTS                                                             AGE
kube-dns   192.168.11XXXX ,192.168.XX.XXX.XX,192.168.XXX.XX.XXX + 1 more...   5d21h
  4. kubectl exec -it percona-mysql-pxc-0 -c pxc -- sh
sh-4.4$ cat /etc/resolv.conf
nameserver 10.100.0.10
search percona-mysql.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
options ndots:5
  5. I tried to run nslookup, but it failed since it is not present in the pxc container:
sh-4.4$ nslookup kubernetes 10.100.0.10
sh: nslookup: command not found
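Since the pxc image does not ship nslookup, the lookup can be tested from a throwaway pod in the same namespace instead (the busybox image tag is an assumption; note that busybox's nslookup can be inconsistent with search domains, so testing both the short name and the FQDN is safer):

```shell
# Test the exact short-name lookup that peer-list performs:
kubectl -n percona-mysql run -it --rm dnsutils --image=busybox:1.36 --restart=Never -- \
  nslookup percona-mysql-pxc-unready

# And the fully qualified name, to rule out search-path problems:
kubectl -n percona-mysql run -it --rm dnsutils --image=busybox:1.36 --restart=Never -- \
  nslookup percona-mysql-pxc-unready.percona-mysql.svc.cluster.local
```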

  6. I am having some trouble uploading the cr in YAML format, so I am attaching it as .txt.

Cr.yaml below :

cr.txt (14.9 KB)

  7. CoreDNS:

[screenshot of CoreDNS pods]


Hi @spronin, please let me know if you need any more details.


@mohamedkashifuddin how did you create the EKS cluster? Did you use eksctl? Could you please share the command?


@spronin
We created the cluster using the documentation linked below, via the AWS Management Console; we did not use eksctl to create the cluster.
Please let me know if you need any more details.

Doc link : Creating an Amazon EKS cluster - Amazon EKS
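For reference, if we had used eksctl, roughly equivalent settings would look like this (version and region copied from the details above; node count is an assumption):

```shell
eksctl create cluster \
  --name percona-test \
  --region us-east-2 \
  --version 1.21 \
  --nodes 3
```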
