Percona XtraDB Cluster 8.0 - cluster size as 1 at node 2 and 3

Hi there…

When I checked the cluster size, it shows as below on node 2 and node 3. What could be the problem?

Attaching the config file of 3 nodes.

Config file - PXC 8.0 .pdf

Node -2

mysql@pxc2> show status like '%cluster_size%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
1 row in set (0.00 sec)

Node -3

mysql@pxc3> show status like '%cluster_size%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
1 row in set (0.00 sec)

Below is the status when the clustercheck command is issued.

root@pxc-1:~# clustercheck

HTTP/1.1 503 Service Unavailable

Content-Type: text/plain

Connection: close


Percona XtraDB Cluster Node is not synced or non-PRIM.
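For context, clustercheck replies 503 whenever the node is not in the Synced state. A simplified sketch of that decision follows (the real script queries SHOW STATUS LIKE 'wsrep_local_state' over MySQL; the sample value here is made up):

```shell
# Simplified sketch of clustercheck's decision: wsrep_local_state 4 means Synced.
WSREP_LOCAL_STATE=1   # hypothetical sample value; 1 = Joining, 4 = Synced
if [ "$WSREP_LOCAL_STATE" -eq 4 ]; then
    echo "HTTP/1.1 200 OK"
else
    echo "HTTP/1.1 503 Service Unavailable"
fi
```

So a 503 here is consistent with the size-1 output above: each node is alone and not synced with a primary component.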


In your configs you should have

wsrep_cluster_address = gcomm://,,

on ALL your nodes, not only on Node 1
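For illustration, a minimal [mysqld] fragment with this set identically on every node might look like the following (the IP addresses are placeholders; substitute your nodes' real addresses):

```
[mysqld]
wsrep_cluster_name    = pxc-cluster
wsrep_cluster_address = gcomm://192.168.70.61,192.168.70.62,192.168.70.63
wsrep_node_address    = 192.168.70.61   # this node's own IP, different on each node
```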

Hi @vadimtk

I added wsrep_cluster_address = gcomm://,, on all three nodes (1, 2, 3).

After bootstrapping the first node, I started the mysql service on node 2 and got the error below.

Attaching the error log file for your reference.

node 2 error.log

Note: I have changed pxc-encrypt-cluster-traffic from ON to OFF in all the config files.

root@pxc-2:~# service mysql start

Job for mysql.service failed because the control process exited with error code.

See "systemctl status mysql.service" and "journalctl -xe" for details.

root@pxc-2:~# systemctl status mysql.service

● mysql.service - Percona XtraDB Cluster

Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)

Active: failed (Result: exit-code) since Fri 2020-10-23 17:54:16 UTC; 1min 21s ago

Process: 6245 ExecStopPost=/usr/bin/mysql-systemd stop-post (code=exited, status=0/SUCCESS)

Process: 3748 ExecStop=/usr/bin/mysql-systemd stop (code=exited, status=0/SUCCESS)

Process: 2375 ExecStartPost=/usr/bin/mysql-systemd start-post $MAINPID (code=exited, status=0/SUCCESS)

Process: 2373 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)

Process: 6240 ExecStart=/usr/sbin/mysqld $_WSREP_START_POSITION (code=exited, status=1/FAILURE)

Process: 6192 ExecStartPre=/bin/sh -c VAR=bash /usr/bin/mysql-systemd galera-recovery; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, s

Process: 6186 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)

Process: 6155 ExecStartPre=/usr/bin/mysql-systemd check-grastate (code=exited, status=0/SUCCESS)

Process: 6106 ExecStartPre=/usr/bin/mysql-systemd start-pre (code=exited, status=0/SUCCESS)

Main PID: 6240 (code=exited, status=1/FAILURE)

Status: "Server startup in progress"

Oct 23 17:53:43 pxc-2 systemd[1]: Starting Percona XtraDB Cluster…

Oct 23 17:54:16 pxc-2 systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILURE

Oct 23 17:54:16 pxc-2 mysql-systemd[6245]: WARNING: mysql pid file /var/run/mysqld/ empty or not readable

Oct 23 17:54:16 pxc-2 mysql-systemd[6245]: WARNING: mysql may be already dead

Oct 23 17:54:16 pxc-2 systemd[1]: mysql.service: Failed with result 'exit-code'.

Oct 23 17:54:16 pxc-2 systemd[1]: Failed to start Percona XtraDB Cluster.



Hi there …

Can anyone look into this issue?

Hi @AneeshBabu. As I can see from the log:

2020-10-23T17:06:18.566323Z 0 [ERROR] [MY-000000] [Galera] failed to open gcomm backend connection: 110: failed to reach primary view (pc.wait_prim_timeout): 110 (Connection timed out)
     at gcomm/src/pc.cpp:connect():159
2020-10-23T17:06:18.566350Z 0 [ERROR] [MY-000000] [Galera] gcs/src/gcs_core.cpp:gcs_core_open():220: Failed to open backend connection: -110 (Connection timed out)
2020-10-23T17:06:18.566426Z 0 [ERROR] [MY-000000] [Galera] gcs/src/gcs.cpp:gcs_open():1700: Failed to open channel 'pxc-cluster' at 'gcomm://,,': -110 (Connection timed out)
2020-10-23T17:06:18.566444Z 0 [ERROR] [MY-000000] [Galera] gcs connect failed: Connection timed out
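A pc.wait_prim_timeout failure like this usually means node 2 could not reach any other cluster member over the Galera ports, so it is worth confirming basic TCP reachability first. A quick sketch (the target IP is a placeholder for node 1's address):

```shell
# Galera needs these ports reachable between nodes:
# 3306 (MySQL), 4444 (SST), 4567 (group communication), 4568 (IST).
TARGET=192.168.70.61   # placeholder: node 1's real IP
for PORT in 3306 4444 4567 4568; do
    if timeout 2 bash -c "echo > /dev/tcp/$TARGET/$PORT" 2>/dev/null; then
        echo "port $PORT reachable"
    else
        echo "port $PORT unreachable"
    fi
done
```

If any of these ports are unreachable, check the firewall (and SELinux, below) on the receiving node.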

It might be connected with SELinux policies.

According to the documentation, the recommended solution is to change the mode from enforcing to permissive by running the following command:

setenforce 0

This only changes the mode at runtime. To run SELinux in permissive mode after a reboot, set SELINUX=permissive in the /etc/selinux/config configuration file.
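To illustrate the persistent change safely, here is a demo of the edit on a scratch copy of the file (the real file is /etc/selinux/config; the sample contents below are assumed):

```shell
# Demo on a temporary copy; on the real system, edit /etc/selinux/config as root.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"   # -> SELINUX=permissive
```

Running the same sed against /etc/selinux/config (as root, after `setenforce 0`) makes permissive mode survive a reboot.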