Using Percona XtraDB on an IPv6-only cluster

Description:

I have a homelab Kubernetes cluster that I have set up as IPv6-only. On it I am trying to use the Percona pxc-operator-1.18.0 Helm chart. The problem is that the cluster never reaches a ready state, because the SST/Galera ports are listening on IPv4-only addresses:

bash-5.1$ bash /tmp/netcat.sh
COMMAND PID USER LOCAL ADDRESS REMOTE ADDRESS STATE
mysqld 1 mysql 0.0.0.0:3306 0.0.0.0:0 LISTEN
mysqld 1 mysql 0.0.0.0:4567 0.0.0.0:0 LISTEN
mysqld 1 mysql [0:0:0:0:0:0:0:0]:33060 [0:0:0:0:0:0:0:0]:0 LISTEN
mysqld 1 mysql [fd40:10:0:0:0:0:0:275]:33062 [0:0:0:0:0:0:0:0]:0 LISTEN

I used the pcng.sh script from the mpiscaer/portcheck repository on GitHub (patch-1 branch) to get this output.
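For a quick check without the external script, a small awk filter over `ss -tln`-style output can flag the IPv4-only wildcard binds. This is just a sketch; it assumes the iproute2 `ss` column layout (local address:port in column 4) and would be run inside the pxc container:

```shell
# Sketch: flag listeners bound to the IPv4 wildcard 0.0.0.0, which an
# IPv6-only pod network cannot reach. Feed it `ss -tln` output, e.g.:
#   kubectl -n openstack exec percona-xtradb-pxc-0 -c pxc -- ss -tln | awk -f flag_v4_only.awk
# In `ss -tln` output, $4 is the local "address:port" column.
awk 'match($4, /^0\.0\.0\.0:/) {
       split($4, a, ":")
       print "IPv4-only listener on port " a[2]
     }'
```

Against the output above, this would flag ports 3306 and 4567 while leaving the `[::]` / IPv6 binds alone.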

root@node1:~# kubectl get pods -A -owide | grep percona
openstack percona-xtradb-haproxy-0 2/2 Running 1 (62m ago) 71m fd40:10::1dd node2
openstack percona-xtradb-haproxy-1 2/2 Running 1 (62m ago) 69m fd40:10::2a4 node3
openstack percona-xtradb-haproxy-2 2/2 Running 1 (62m ago) 68m fd40:10::87 node1
openstack percona-xtradb-pxc-0 2/2 Running 0 37m fd40:10::275 node3
openstack percona-xtradb-pxc-1 1/2 CrashLoopBackOff 10 (103s ago) 36m fd40:10::9a node1

root@node1:~# kubectl -n openstack logs pod/percona-xtradb-pxc-1

2026-01-02T21:01:30.548929Z 0 [Note] [MY-000000] [Galera] (36421b4f-8259, 'ssl://0.0.0.0:4567') listening at ssl://0.0.0.0:4567
2026-01-02T21:01:30.548979Z 0 [Note] [MY-000000] [Galera] (36421b4f-8259, 'ssl://0.0.0.0:4567') multicast: , ttl: 1
2026-01-02T21:01:30.550051Z 0 [Note] [MY-000000] [Galera] EVS version 1
2026-01-02T21:01:30.550306Z 0 [Note] [MY-000000] [Galera] gcomm: connecting to group 'percona-xtradb-pxc', peer 'percona-xtradb-pxc-0.percona-xtradb-pxc:'
2026-01-02T21:01:30.551623Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:31.551323Z 0 [Note] [MY-000000] [Galera] EVS version upgrade 0 → 1
2026-01-02T21:01:31.551728Z 0 [Note] [MY-000000] [Galera] PC protocol upgrade 0 → 1
2026-01-02T21:01:31.551973Z 0 [Note] [MY-000000] [Galera] No nodes coming from primary view, primary view is not possible
2026-01-02T21:01:31.552159Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view (view_id(NON_PRIM,36421b4f-8259,1)
memb {
36421b4f-8259,0
}
joined {
}
left {
}
partitioned {
}
)
2026-01-02T21:01:31.554086Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:32.052624Z 0 [Warning] [MY-000000] [Galera] last inactive check more than PT1.5S (3*evs.inactive_check_period) ago (PT1.50258S), skipping check
2026-01-02T21:01:33.054247Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:33.556087Z 0 [Note] [MY-000000] [Galera] announce period timed out (pc.announce_timeout)
2026-01-02T21:01:34.554568Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:36.055037Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:37.555227Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:39.056594Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:40.555912Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:42.058009Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused
2026-01-02T21:01:43.558334Z 0 [Note] [MY-000000] [Galera] Failed to establish connection: Connection refused

It would be great if one of you could give me any tips to troubleshoot this issue.

Currently I have the following percona-xtradb-cluster-operator configuration:

kind: PerconaXtraDBCluster
metadata:
  creationTimestamp: "2026-01-03T17:21:02Z"
  generation: 11
  name: percona-xtradb
  namespace: openstack
  resourceVersion: "1170568"
  uid: 50985ad0-e528-4503-bd79-6f65a71cfef3
spec:
  allowUnsafeConfigurations: true
  backup:
    image: docker.io/percona/percona-xtrabackup:8.0.35-33.1
  crVersion: 1.18.0
  enableVolumeExpansion: true
  haproxy:
    configuration: |
      global
        log stdout format raw local0
        maxconn 8192
        external-check
        insecure-fork-wanted
        hard-stop-after 10s
        stats socket /etc/haproxy/pxc/haproxy.sock mode 600 expose-fd listeners level admin

      defaults
        no option dontlognull
        log-format '{"time":"%t", "client_ip": "%ci", "client_port":"%cp", "backend_source_ip": "%bi", "backend_source_port": "%bp",  "frontend_name": "%ft", "backend_name": "%b", "server_name":"%s", "tw": "%Tw", "tc": "%Tc", "Tt": "%Tt", "bytes_read": "%B", "termination_state": "%ts", "actconn": "%ac", "feconn" :"%fc", "beconn": "%bc", "srv_conn": "%sc", "retries": "%rc", "srv_queue": "%sq", "backend_queue": "%bq" }'
        default-server init-addr last,libc,none
        log global
        mode tcp
        retries 10
        timeout client 28800s
        timeout connect 100500
        timeout server 28800s

      resolvers kubernetes
        parse-resolv-conf

      frontend galera-in
        bind [::]:3309 accept-proxy
        bind [::]:3306
        mode tcp
        option clitcpka
        default_backend galera-nodes

      frontend galera-admin-in
        bind [::]:33062
        mode tcp
        option clitcpka
        default_backend galera-admin-nodes

      frontend galera-replica-in
        bind [::]:3307
        mode tcp
        option clitcpka
        default_backend galera-replica-nodes

      frontend galera-mysqlx-in
        bind [::]:33060
        mode tcp
        option clitcpka
        default_backend galera-mysqlx-nodes

      frontend stats
        bind [::]:8404
        mode http
        http-request use-service prometheus-exporter if { path /metrics }
    enabled: true
    image: docker.io/percona/haproxy:2.8.17
    nodeSelector:
      openstack-control-plane: enabled
    size: 3
  pxc:
    autoRecovery: true
    configuration: |
      [mysqld]
      bind_address=::
      wsrep_node_address=[AUTO]
      wsrep_provider_options="gmcast.listen_addr=tcp://[::]:4567"

      max_connections=8192
      innodb_buffer_pool_size=4096M
      # Skip reverse DNS lookup of clients
      skip-name-resolve
      pxc_strict_mode=MASTER
    image: docker.io/percona/percona-xtradb-cluster:8.0.42-33.1
    livenessProbes:
      failureThreshold: 100
      timeoutSeconds: 60
    nodeSelector:
      openstack-control-plane: enabled
    sidecars:
    - args:
      - --mysqld.username=monitor
      - --collect.info_schema.processlist
      env:
      - name: MYSQLD_EXPORTER_PASSWORD
        valueFrom:
          secretKeyRef:
            key: monitor
            name: percona-xtradb
      image: quay.io/prometheus/mysqld-exporter:v0.17.0
      name: exporter
      ports:
      - containerPort: 9104
        name: metrics
        protocol: TCP
      readinessProbe:
        httpGet:
          path: /
          port: metrics
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 160Gi
  secretsName: percona-xtradb
status:
  backup: {}
  conditions:
  - lastTransitionTime: "2026-01-03T17:21:02Z"
    status: disabled
    type: tls
  - lastTransitionTime: "2026-01-03T17:21:03Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2026-01-03T17:23:26Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2026-01-06T17:06:25Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2026-01-06T17:07:28Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2026-01-06T17:09:59Z"
    status: "True"
    type: initializing
  haproxy:
    labelSelectorPath: app.kubernetes.io/component=haproxy,app.kubernetes.io/instance=percona-xtradb,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    size: 3
    status: initializing
  host: percona-xtradb-haproxy.openstack
  logcollector: {}
  observedGeneration: 11
  pmm: {}
  proxysql: {}
  pxc:
    image: docker.io/percona/percona-xtradb-cluster:8.0.42-33.1
    labelSelectorPath: app.kubernetes.io/component=pxc,app.kubernetes.io/instance=percona-xtradb,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    size: 3
    status: initializing
    version: 8.0.42-33.1
  ready: 0
  size: 6
  state: initializing

With this configuration, when the second database server tries to sync, the logs give me the following error:

2026-01-06T21:15:59.262692Z 2 [Warning] [MY-000000] [Galera] Failed to prepare for incremental state transfer: Failed to open IST listener at tcp://[AUTO]:4568', asio error 'Failed to listen: resolve: Host not found (authoritative): System error: 1 (Operation not permitted)

The logs:

2026-01-06T21:15:57.937670Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [141, 141]
2026-01-06T21:15:57.937690Z 0 [Note] [MY-000000] [Galera] Shifting OPEN -> PRIMARY (TO: 1232)
2026-01-06T21:15:57.937819Z 2 [Note] [MY-000000] [Galera] ####### processing CC 1232, local, ordered
2026-01-06T21:15:57.937886Z 2 [Note] [MY-000000] [Galera] Maybe drain monitors from -1 upto current CC event 1232 upto:-1
2026-01-06T21:15:57.937936Z 2 [Note] [MY-000000] [Galera] Drain monitors from -1 up to -1
2026-01-06T21:15:57.937979Z 2 [Note] [MY-000000] [Galera] Process first view: a1490146-e8c8-11f0-befb-978a5f67f81f my uuid: e451151f-eb44-11f0-b1ca-2f7dab2d9415
2026-01-06T21:15:57.938036Z 2 [Note] [MY-000000] [Galera] Server percona-xtradb-pxc-1 connected to cluster at position a1490146-e8c8-11f0-befb-978a5f67f81f:1232 with ID e451151f-eb44-11f0-b1ca-2f7dab2d9415
2026-01-06T21:15:57.938074Z 2 [Note] [MY-000000] [WSREP] Server status change disconnected -> connected
2026-01-06T21:15:57.960736Z 2 [Note] [MY-000000] [Galera] ####### My UUID: e451151f-eb44-11f0-b1ca-2f7dab2d9415
2026-01-06T21:15:57.960841Z 2 [Note] [MY-000000] [Galera] Cert index reset to 00000000-0000-0000-0000-000000000000:-1 (proto: 11), state transfer needed: yes
2026-01-06T21:15:57.960930Z 0 [Note] [MY-000000] [Galera] Service thread queue flushed.
2026-01-06T21:15:57.960985Z 2 [Note] [MY-000000] [Galera] ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:-1, protocol version: -1
2026-01-06T21:15:57.961011Z 2 [Note] [MY-000000] [Galera] State transfer required: 
        Group state: a1490146-e8c8-11f0-befb-978a5f67f81f:1232
        Local state: 00000000-0000-0000-0000-000000000000:-1
2026-01-06T21:15:57.961028Z 2 [Note] [MY-000000] [WSREP] Server status change connected -> joiner
2026-01-06T21:15:57.984331Z 0 [Note] [MY-000000] [WSREP] Initiating SST/IST transfer on JOINER side (wsrep_sst_xtrabackup-v2 --role 'joiner' --address '[AUTO]' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib64/mysql/plugin/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '1' --mysqld-version '8.0.42-33.1'  --binlog 'binlog' )
2026-01-06T21:15:58.756574Z 0 [Warning] [MY-000000] [WSREP-SST] Found a stale sst_in_progress file: /var/lib/mysql//sst_in_progress
2026-01-06T21:15:59.252883Z 2 [Note] [MY-000000] [WSREP] Prepared SST request: xtrabackup-v2|[AUTO]:4444/xtrabackup_sst//1
2026-01-06T21:15:59.253049Z 2 [Note] [MY-000000] [Galera] Check if state gap can be serviced using IST
2026-01-06T21:15:59.253112Z 2 [Note] [MY-000000] [Galera] Local UUID: 00000000-0000-0000-0000-000000000000 != Group UUID: a1490146-e8c8-11f0-befb-978a5f67f81f
2026-01-06T21:15:59.253162Z 2 [Note] [MY-000000] [Galera] ####### IST uuid:00000000-0000-0000-0000-000000000000 f: 0, l: 1232, STRv: 3
2026-01-06T21:15:59.253474Z 2 [Note] [MY-000000] [Galera] IST receiver addr using tcp://[AUTO]:4568
2026-01-06T21:15:59.262549Z 2 [Note] [MY-000000] [Galera] State gap can't be serviced using IST. Switching to SST
2026-01-06T21:15:59.262692Z 2 [Warning] [MY-000000] [Galera] Failed to prepare for incremental state transfer: Failed to open IST listener at tcp://[AUTO]:4568', asio error 'Failed to listen: resolve: Host not found (authoritative): System error: 1 (Operation not permitted)
         at ../../../../percona-xtradb-cluster-galera/galerautils/src/gu_asio_stream_react.cpp:listen():922'
         at ../../../../percona-xtradb-cluster-galera/galera/src/ist.cpp:prepare():357. IST will be unavailable.
2026-01-06T21:15:59.263790Z 0 [Note] [MY-000000] [Galera] Member 1.0 (percona-xtradb-pxc-1) requested state transfer from '*any*'. Selected 0.0 (percona-xtradb-pxc-0)(SYNCED) as donor.
2026-01-06T21:15:59.263853Z 0 [Note] [MY-000000] [Galera] Shifting PRIMARY -> JOINER (TO: 1232)
2026-01-06T21:15:59.263907Z 2 [Note] [MY-000000] [Galera] Requesting state transfer: success, donor: 0
2026-01-06T21:15:59.263962Z 2 [Note] [MY-000000] [Galera] Resetting GCache seqno map due to different histories.
2026-01-06T21:15:59.264008Z 2 [Note] [MY-000000] [Galera] GCache history reset: a1490146-e8c8-11f0-befb-978a5f67f81f:0 -> a1490146-e8c8-11f0-befb-978a5f67f81f:1232
2026-01-06T21:15:59.265594Z 0 [Warning] [MY-000000] [Galera] 0.0 (percona-xtradb-pxc-0): State transfer to 1.0 (percona-xtradb-pxc-1) failed: No message of desired type
2026-01-06T21:15:59.265639Z 0 [ERROR] [MY-000000] [Galera] ../../../../percona-xtradb-cluster-galera/gcs/src/gcs_group.cpp:gcs_group_handle_join_msg():1334: Will never receive state. Need to abort.
2026-01-06T21:15:59.265669Z 0 [Note] [MY-000000] [Galera] gcomm: terminating thread
2026-01-06T21:15:59.265717Z 0 [Note] [MY-000000] [Galera] gcomm: joining thread
2026-01-06T21:15:59.265879Z 0 [Note] [MY-000000] [Galera] gcomm: closing backend
2026-01-06T21:16:00.274161Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view (view_id(NON_PRIM,ca5c544d-8a16,2)
memb {
        e451151f-b1ca,0
        }
joined {
        }
left {
        }
partitioned {
        ca5c544d-8a16,0
        }
)
2026-01-06T21:16:00.274320Z 0 [Note] [MY-000000] [Galera] (e451151f-b1ca, 'tcp://[::]:4567') turning message relay requesting off
2026-01-06T21:16:00.274365Z 0 [Note] [MY-000000] [Galera] PC protocol downgrade 1 -> 0
2026-01-06T21:16:00.274385Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view ((empty))
2026-01-06T21:16:00.274765Z 0 [Note] [MY-000000] [Galera] gcomm: closed
2026-01-06T21:16:00.274818Z 0 [Note] [MY-000000] [Galera] mysqld: Terminated.
2026-01-06T21:16:00.274834Z 0 [Note] [MY-000000] [WSREP] Initiating SST cancellation
2026-01-06T21:16:00.274843Z 0 [Note] [MY-000000] [WSREP] Terminating SST process
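It looks like the `[AUTO]` placeholder is what the IST/SST side fails to resolve on an IPv6-only pod. A sketch of the extra `[mysqld]` settings I would experiment with, assuming Galera's `ist.recv_addr` provider option and the `wsrep_sst_receive_address` variable behave the same inside the operator-managed container (the `fd40:10::9a` address below is just this pod's IP, used for illustration):

```ini
[mysqld]
bind_address=::
# Pin the node address explicitly so [AUTO] detection is bypassed (per-pod value)
wsrep_node_address=[fd40:10::9a]
# Tell the SST script where the joiner listens (4444 is the SST default port)
wsrep_sst_receive_address=[fd40:10::9a]:4444
# gmcast.listen_addr = Galera replication listener; ist.recv_addr = IST listener
wsrep_provider_options="gmcast.listen_addr=tcp://[::]:4567;ist.recv_addr=[fd40:10::9a]:4568"
```

Since the CR's `configuration:` block is shared by all pods, the literal per-pod address would have to be injected some other way (e.g. an init script substituting the pod IP); it is written out literally here only to show the intended shape.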

Hi @mpiscaer. The PXC Kubernetes Operator supports IPv4 + IPv6 dual-stack clusters, but it does not support IPv6-only clusters. As far as I know, it is currently not possible to deploy GKE, Azure, EKS, or OpenShift clusters (all officially supported by PXCO) using IPv6 only.