Hi Team,
I'm facing an issue deploying the Percona XtraDB Cluster (PXC) operator on an AWS EKS cluster. After I deploy, the pxc-0 pod doesn't come up; after about 5 minutes it goes into CrashLoopBackOff, and I don't see any output from the sidecar containers (logs, logrotate). When I check the pxc container log, it shows the error message below:
no such host
So I deployed the same manifests (cr.yaml) on GCP GKE, and within a few minutes the cluster came up.
Then I checked whether our EKS cluster has any firewall issue, but I found none. As a test, I deployed a standalone MySQL pod on the same EKS cluster and it works fine, and other applications in the same cluster are also working fine, so a firewall issue seems unlikely.
I'm trying to find the root cause of why the same cr.yaml works on GCP GKE but not on AWS EKS. Would appreciate any help or pointers.
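For reference, this is the kind of in-cluster DNS check I can run next (just a sketch — the namespace and service names are taken from the log below, and the dnsutils image is only an example):

```shell
# Start a throwaway pod with DNS tools in the same namespace as the cluster
# (image name is an example; any image with nslookup/dig works)
kubectl -n percona-mysql run dnsutils --rm -it --restart=Never \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- sh

# Inside the pod: try resolving the headless "unready" service that
# peer-list queries (names taken from the error log)
nslookup percona-mysql-pxc-unready
nslookup percona-mysql-pxc-unready.percona-mysql.svc.cluster.local

# Back on the workstation: confirm the headless service exists and
# publishes addresses for not-ready pods (required for peer discovery)
kubectl -n percona-mysql get svc percona-mysql-pxc-unready -o yaml \
  | grep -i publishNotReadyAddresses
```

If the short name fails but the FQDN resolves, that would point at the pod's search domains in /etc/resolv.conf rather than at the service itself.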
@Ege_Gunes @Sergey_Pronin @matthewb
Operator link: percona-xtradb-cluster-operator/cr.yaml at main · percona/percona-xtradb-cluster-operator · GitHub
screenshot:
pxc container error log:
+ '[' m = - ']'
+ CFG=/etc/mysql/node.cnf
+ wantHelp=
+ for arg in "$@"
+ case "$arg" in
++ mysqld -V
++ awk '{print $3}'
++ awk -F. '{print $1"."$2}'
+ MYSQL_VERSION=8.0
++ mysqld -V
++ awk '{print $3}'
++ awk -F. '{print $3}'
++ awk -F- '{print $1}'
+ MYSQL_PATCH_VERSION=23
+ vault_secret=/etc/mysql/vault-keyring-secret/keyring_vault.conf
+ '[' -f /etc/mysql/vault-keyring-secret/keyring_vault.conf ']'
+ '[' -f /usr/lib64/mysql/plugin/binlog_utils_udf.so ']'
+ sed -i '/\[mysqld\]/a plugin_load="binlog_utils_udf=binlog_utils_udf.so"' /etc/mysql/node.cnf
+ sed -i '/\[mysqld\]/a gtid-mode=ON' /etc/mysql/node.cnf
+ sed -i '/\[mysqld\]/a enforce-gtid-consistency' /etc/mysql/node.cnf
+ grep -q '^progress=' /etc/mysql/node.cnf
+ sed -i 's|^progress=.*|progress=1|' /etc/mysql/node.cnf
+ grep -q '^\[sst\]' /etc/mysql/node.cnf
+ grep -q '^cpat=' /etc/mysql/node.cnf
+ sed '/^\[sst\]/a cpat=.*\\.pem$\\|.*init\\.ok$\\|.*galera\\.cache$\\|.*wsrep_recovery_verbose\\.log$\\|.*readiness-check\\.sh$\\|.*liveness-check\\.sh$\\|.*sst_in_progress$\\|.*sst-xb-tmpdir$\\|.*\\.sst$\\|.*gvwstate\\.dat$\\|.*grastate\\.dat$\\|.*\\.err$\\|.*\\.log$\\|.*RPM_UPGRADE_MARKER$\\|.*RPM_UPGRADE_HISTORY$\\|.*pxc-entrypoint\\.sh$\\|.*unsafe-bootstrap\\.sh$\\|.*pxc-configure-pxc\\.sh\\|.*peer-list$' /etc/mysql/node.cnf
+ [[ 8.0 == \8\.\0 ]]
+ [[ 23 -ge 26 ]]
+ grep -q '^skip_slave_start=ON' /etc/mysql/node.cnf
+ sed -i '/\[mysqld\]/a skip_slave_start=ON' /etc/mysql/node.cnf
+ file_env XTRABACKUP_PASSWORD xtrabackup xtrabackup
+ set +o xtrace
+ file_env CLUSTERCHECK_PASSWORD '' clustercheck
+ set +o xtrace
++ hostname -f
+ NODE_NAME=percona-mysql-pxc-0.percona-mysql-pxc.percona-mysql.svc.cluster.local
+ NODE_PORT=3306
Percona XtraDB Cluster: Finding peers
+ '[' -n percona-mysql-pxc-unready ']'
+ echo 'Percona XtraDB Cluster: Finding peers'
+ /var/lib/mysql/peer-list -on-start=/var/lib/mysql/pxc-configure-pxc.sh -service=percona-mysql-pxc-unready
2022/03/10 04:07:14 Peer finder enter
2022/03/10 04:07:14 Determined Domain to be percona-mysql.svc.cluster.local
2022/03/10 04:07:14 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:15 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:16 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:17 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:18 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:19 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:20 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:21 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:22 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:23 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
2022/03/10 04:07:24 lookup percona-mysql-pxc-unready on 10.100.0.10:53: no such host
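All the lookups above fail against 10.100.0.10:53, which is the cluster DNS (CoreDNS) address on this EKS cluster, so these are the checks I'd run next (a sketch, assuming the default kube-system labels and names):

```shell
# Are the CoreDNS pods healthy, and do their logs show errors?
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50

# Does the kube-dns service ClusterIP match the resolver the pod is
# using (10.100.0.10 in the log)?
kubectl -n kube-system get svc kube-dns

# Was the headless peer-discovery service actually created by the operator?
kubectl -n percona-mysql get svc | grep unready
```

If CoreDNS itself is healthy and the kube-dns ClusterIP matches, the remaining suspect would be the unready service not existing or not resolving in this namespace.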