HAProxy error and continuous restarts after HPA scale-down

Hello everyone,

We have deployed the Percona XtraDB Cluster Operator 1.9 on Kubernetes and added an HPA to the cluster (a sketch of such an HPA is shown below). Everything works fine; however, we have noticed that almost every time a scale-down happens, HAProxy reports an error and restarts.
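A minimal sketch of how an HPA can be attached to the PXC StatefulSet; the target, namespace placeholder, and thresholds here are illustrative, not our exact values:

# Illustrative only: attach an HPA to the PXC StatefulSet created by the operator
kubectl -n <namespace> autoscale statefulset db-cluster-pxc --min=3 --max=5 --cpu-percent=70

After a scale-down triggered by the HPA, we see the following in the pxc-monit logs of the HAProxy pod: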

+ cat
+ IFS='
'
+ echo 'server db-cluster-pxc-0 db-cluster-pxc-0.db-cluster-pxc.-m2.svc.cluster.local:33060 check inter 10000 rise 1 fall 2 weight 1 on-marked-up shutdown-backup-sessions
server db-cluster-pxc-3 db-cluster-pxc-3.db-cluster-pxc.-m2.svc.cluster.local:33060 check inter 10000 rise 1 fall 2 weight 1 backup
server db-cluster-pxc-2 db-cluster-pxc-2.db-cluster-pxc.-m2.svc.cluster.local:33060 check inter 10000 rise 1 fall 2 weight 1 backup
server db-cluster-pxc-1 db-cluster-pxc-1.db-cluster-pxc.-m2.svc.cluster.local:33060 check inter 10000 rise 1 fall 2 weight 1 backup'
+ SOCKET=/etc/haproxy/pxc/haproxy.sock
+ path_to_custom_global_cnf=/etc/haproxy-custom
+ '[' -f /etc/haproxy-custom/haproxy-global.cfg ']'
+ '[' -f /etc/haproxy-custom/haproxy-global.cfg -a -z '' ']'
+ haproxy -c -f /etc/haproxy/haproxy-global.cfg -f /etc/haproxy/pxc/haproxy.cfg
[NOTICE] 312/170344 (402) : haproxy version is 2.3.10
[NOTICE] 312/170344 (402) : path to executable is /usr/sbin/haproxy
[ALERT] 312/170344 (402) : parsing [/etc/haproxy/pxc/haproxy.cfg:8] : 'server db-cluster-pxc-3' : could not resolve address 'db-cluster-pxc-3.db-cluster-pxc.-m2.svc.cluster.local'.
[ALERT] 312/170344 (402) : parsing [/etc/haproxy/pxc/haproxy.cfg:18] : 'server db-cluster-pxc-3' : could not resolve address 'db-cluster-pxc-3.db-cluster-pxc.-m2.svc.cluster.local'.
[ALERT] 312/170344 (402) : parsing [/etc/haproxy/pxc/haproxy.cfg:30] : 'server db-cluster-pxc-3' : could not resolve address 'db-cluster-pxc-3.db-cluster-pxc.-m2.svc.cluster.local'.
[ALERT] 312/170344 (402) : parsing [/etc/haproxy/pxc/haproxy.cfg:38] : 'server db-cluster-pxc-3' : could not resolve address 'db-cluster-pxc-3.db-cluster-pxc.-m2.svc.cluster.local'.
[ALERT] 312/170344 (402) : Failed to initialize server(s) addr.
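The ALERT lines all point at db-cluster-pxc-3, the pod that was just removed by the scale-down. A quick way to confirm that the generated configuration still references it while its DNS record is already gone (assuming the HAProxy pod is named db-cluster-haproxy-0, its container is named haproxy, and busybox's nslookup is acceptable for the DNS check):

# Is the removed pod still listed in the generated HAProxy config?
kubectl -n <namespace> exec db-cluster-haproxy-0 -c haproxy -- \
  grep pxc-3 /etc/haproxy/pxc/haproxy.cfg

# Does its per-pod DNS record still resolve after the scale-down?
kubectl -n <namespace> run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup db-cluster-pxc-3.db-cluster-pxc.<namespace>.svc.cluster.local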

It seems that HAProxy's configuration is not updated correctly when a scale-down happens. The operator logs contain what looks like a related error:

{"level":"error","ts":1636477422.5753634,"logger":"controller","msg":"Reconciler error","controller":"perconaxtradbcluster-controller","name":"db-cluster","namespace":"-m2","error":"pxc upgrade error: update error: Operation cannot be fulfilled on statefulsets.apps \"db-cluster-pxc\": the object has been modified; please apply your changes to the latest version and try again","errorVerbose":"Operation cannot be fulfilled on statefulsets.apps \"db-cluster-pxc\": the object has been modified; please apply your changes to the latest version and try again\nupdate error\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).updatePod\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/upgrade.go:201\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:339\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371\npxc upgrade 
error\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:341\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:237\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}
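The conflict above suggests that two writers (the operator and, presumably, the HPA) are updating the db-cluster-pxc StatefulSet at the same time. A quick way to compare what the StatefulSet is currently scaled to with what the custom resource declares (the pxc short name and the field paths are assumptions based on a standard PXC custom resource):

# Replica count currently set on the StatefulSet (what the HPA scales)
kubectl -n <namespace> get statefulset db-cluster-pxc -o jsonpath='{.spec.replicas}{"\n"}'

# PXC size declared in the PerconaXtraDBCluster custom resource (what the operator reconciles)
kubectl -n <namespace> get pxc db-cluster -o jsonpath='{.spec.pxc.size}{"\n"}'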

Is there something we are missing? Any ideas?

Thanks in advance!

Hi @jcarretie,

This issue has been fixed in the upcoming release, which will be available in one or two weeks.

Thank you, Slava, we are looking forward to testing it.

Best regards,

@jcarretie

JFYI: Percona Kubernetes Operator for Percona XtraDB Cluster v1.10.0 has been released.
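A minimal upgrade sketch, assuming the default object names from deploy/operator.yaml and the standard upgrade flow; please follow the official 1.10.0 upgrade guide for your installation method, and update the component images in the custom resource as well:

# Apply the 1.10.0 CRDs and RBAC
kubectl apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v1.10.0/deploy/crd.yaml
kubectl -n <namespace> apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v1.10.0/deploy/rbac.yaml

# Point the operator deployment at the 1.10.0 image
kubectl -n <namespace> set image deployment/percona-xtradb-cluster-operator \
  percona-xtradb-cluster-operator=percona/percona-xtradb-cluster-operator:1.10.0

# Tell the operator to reconcile the cluster at 1.10.0
kubectl -n <namespace> patch pxc db-cluster --type=merge -p '{"spec":{"crVersion":"1.10.0"}}'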
