
3-node cluster but wsrep_cluster_size = 1

matfel, Entrant, Current User Role: Beginner
Dear community,

I did maybe two Percona installs over the past year, but since May I haven't been able to get an installation that works.
I've already tried everything I could think of, but I'm still stuck.

Maybe some of you can help me?

I have followed the official documentation.

Here is what I have and what I tried:
3 Ubuntu 16.04 freshly installed
SQL-NODE01 192.168.1.101/24
SQL-NODE02 192.168.1.102/24
SQL-NODE03 192.168.1.103/24
All three servers can communicate with each other.

AppArmor service stopped (I also tried with the service running, and with it removed, but reached the same conclusion)
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 3306 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4567 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4568 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4444 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol udp --match udp --dport 4567 --source 192.168.1.0/24 --jump ACCEPT
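
To confirm the rules are actually in place (a quick check; note that rules added this way are not persistent across reboots unless saved, for example with the iptables-persistent package):
iptables -S INPUT | grep 192.168.1.0/24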

Repo configured as described here: https://www.percona.com/doc/percona-repo-config/index.html
Right after that: "apt-get install percona-xtradb-cluster-57"

The installation goes well; I then stop the mysql service on the 3 nodes with "sudo service mysql stop".

Here is my my.cnf for NODE01:
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=db-cluster
wsrep_cluster_address=gcomm://192.168.1.101,192.168.1.102,192.168.1.103
wsrep_node_name=SQL-NODE01
wsrep_node_address=192.168.1.101
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/
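
To check which values mysqld will actually end up with once the two !includedir directories are read on top of this file, the effective options can be listed (my_print_defaults ships with the server packages and reads the same option files in the same order as the server; when an option appears twice, the last occurrence wins):
my_print_defaults mysqld | grep -iE 'wsrep|pxc'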

Time to bootstrap the first node: "/etc/init.d/mysql bootstrap-pxc"
Everything seems fine except for the PID errors in the syslog:
Jun  5 15:33:03 SQL-NODE01 systemd[1]: Stopping LSB: AppArmor initialization...
Jun  5 15:33:03 SQL-NODE01 apparmor[2107]:  * Clearing AppArmor profiles cache
Jun  5 15:33:03 SQL-NODE01 apparmor[2107]:    ...done.
Jun  5 15:33:03 SQL-NODE01 apparmor[2107]: All profile caches have been cleared, but no profiles have been unloaded.
Jun  5 15:33:03 SQL-NODE01 apparmor[2107]: Unloading profiles will leave already running processes permanently
Jun  5 15:33:03 SQL-NODE01 apparmor[2107]: unconfined, which can lead to unexpected situations.
Jun  5 15:33:03 SQL-NODE01 apparmor[2107]: To set a process to complain mode, use the command line tool
Jun  5 15:33:03 SQL-NODE01 apparmor[2107]: 'aa-complain'. To really tear down all profiles, run the init script
Jun  5 15:33:03 SQL-NODE01 apparmor[2107]: with the 'teardown' option."
Jun  5 15:33:03 SQL-NODE01 systemd[1]: Stopped LSB: AppArmor initialization.
Jun  5 15:37:04 SQL-NODE01 systemd[1]: Starting Cleanup of Temporary Directories...
Jun  5 15:37:04 SQL-NODE01 systemd-tmpfiles[3052]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Jun  5 15:37:10 SQL-NODE01 systemd[1]: Started Cleanup of Temporary Directories.
Jun  5 15:38:54 SQL-NODE01 systemd[1]: Reloading.
Jun  5 15:38:54 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun  5 15:42:37 SQL-NODE01 systemd[1]: Reloading.
Jun  5 15:42:37 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun  5 15:42:37 SQL-NODE01 systemd[1]: Reloading.
Jun  5 15:42:37 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun  5 15:42:37 SQL-NODE01 systemd[1]: Starting LSB: Start and stop the mysql (Percona XtraDB Cluster)daemon...
Jun  5 15:42:37 SQL-NODE01 mysql[4312]:  * Starting MySQL (Percona XtraDB Cluster) database server mysqld
Jun  5 15:42:37 SQL-NODE01 /etc/init.d/mysql[4357]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun  5 15:42:43 SQL-NODE01 mysql[4312]:    ...done.
Jun  5 15:42:43 SQL-NODE01 systemd[1]: Started LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
Jun  5 15:42:50 SQL-NODE01 systemd[1]: Reloading.
Jun  5 15:42:50 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun  5 15:49:37 SQL-NODE01 systemd[1]: Stopping LSB: Start and stop the mysql (Percona XtraDB Cluster)daemon...
Jun  5 15:49:37 SQL-NODE01 mysql[4859]:  * Stopping MySQL (Percona XtraDB Cluster) mysqld
Jun  5 15:49:53 SQL-NODE01 /etc/init.d/mysql[4987]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun  5 15:49:53 SQL-NODE01 /etc/init.d/mysql[4991]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun  5 15:49:53 SQL-NODE01 mysql[4859]:    ...done.
Jun  5 15:49:53 SQL-NODE01 systemd[1]: Stopped LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
Jun  5 15:54:37 SQL-NODE01 /etc/init.d/mysql[5047]: ERROR: The partition with /var/lib/mysql is too full!
Jun  5 15:55:00 SQL-NODE01 /etc/init.d/mysql[5109]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid

Here is the result of "show status like 'wsrep%';" (the first lines already have values because I took this copy after trying to add the second node):
+---------------------------------+----------------------------------------+
| Variable_name | Value |
+---------------------------------+----------------------------------------+
| wsrep_local_state_uuid | 508226c7-68c6-11e8-b4e3-2faae9ed4ea1 |
| wsrep_protocol_version | 8 |
| wsrep_last_applied | 3 |
| wsrep_last_committed | 3 |
| wsrep_replicated | 3 |
| wsrep_replicated_bytes | 728 |
| wsrep_repl_keys | 3 |
| wsrep_repl_keys_bytes | 96 |
| wsrep_repl_data_bytes | 425 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 2 |
.....
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.1.101:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 0a96d502-68c8-11e8-87ae-12c3077b44a0 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 508226c7-68c6-11e8-b4e3-2faae9ed4ea1 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.26(rac090bc) |
| wsrep_ready | ON |
+---------------------------------+----------------------------------------+
68 rows in set (0.01 sec)

So far everything seems to be fine, so I'll create the SST user:
mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;
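
To make sure the account actually works before relying on it for the SST (a quick local check; passw0rd is the placeholder used above):
mysql -u sstuser -ppassw0rd -e "SHOW GRANTS;"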

Time to start the second node via "/etc/init.d/mysql start".
The service starts properly, and here is what I got:
mysql> show status like 'wsrep%';
+---------------------------------+----------------------------------------+
| Variable_name | Value |
+---------------------------------+----------------------------------------+
| wsrep_local_state_uuid | 3bb3744c-68c6-11e8-8782-e32c0e432b82 |
| wsrep_protocol_version | 8 |
| wsrep_last_applied | 0 |
| wsrep_last_committed | 0 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 2 |
| wsrep_received_bytes | 155 |
....
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.1.102:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 4.391e-06/9.1114e-06/1.9822e-05/5.63194e-06/5 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 440afb89-68c8-11e8-b82c-976d9c2e0aa3 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 3bb3744c-68c6-11e8-8782-e32c0e432b82 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.26(rac090bc) |
| wsrep_ready | ON |
+---------------------------------+----------------------------------------+
68 rows in set (0.01 sec)

Has anyone encountered this error before? Did I do something wrong?
I'll wait for your answers :)

Comments

  • Peter, Percona CEO, Percona Moderator
    Hi,

    "Jun 5 15:54:37 SQL-NODE01 /etc/init.d/mysql[5047]: ERROR: The partition with /var/lib/mysql is too full!"

    This does not look like a normal successful start to me.
  • matfel, Entrant, Current User Role: Beginner
    Dear Peter,

    I hope that you won't lose precious time on this thread ^^'

    As you can see I have a lot of free space :/
    root@SQL-NODE01:~# df -h
    Filesystem                   Size  Used Avail Use% Mounted on
    udev                         2,0G     0  2,0G   0% /dev
    tmpfs                        396M  5,6M  390M   2% /run
    /dev/mapper/ubuntu--vg-root   97G  2,3G   90G   3% /
    tmpfs                        2,0G     0  2,0G   0% /dev/shm
    tmpfs                        5,0M     0  5,0M   0% /run/lock
    tmpfs                        2,0G     0  2,0G   0% /sys/fs/cgroup
    /dev/sda1                    472M  107M  342M  24% /boot
    tmpfs                        396M     0  396M   0% /run/user/1000
    
    root@SQL-NODE01:~# ls -alh /var/lib/mysql
    total 252M
    drwxr-x---  5 mysql mysql 4,0K juin   5 15:55 .
    drwxr-xr-x 42 root  root  4,0K juin   5 15:37 ..
    -rw-r-----  1 mysql mysql   56 juin   5 15:39 auto.cnf
    -rw-------  1 mysql mysql 1,7K juin   5 15:39 ca-key.pem
    -rw-r--r--  1 mysql mysql 1,1K juin   5 15:39 ca.pem
    -rw-r--r--  1 mysql mysql 1,1K juin   5 15:39 client-cert.pem
    -rw-------  1 mysql mysql 1,7K juin   5 15:39 client-key.pem
    -rw-r-----  1 mysql mysql 129M juin   5 15:56 galera.cache
    -rw-r-----  1 mysql mysql  113 juin   5 15:56 grastate.dat
    -rw-r-----  1 mysql mysql  170 juin   5 15:55 gvwstate.dat
    -rw-r-----  1 mysql mysql  362 juin   5 15:49 ib_buffer_pool
    -rw-r-----  1 mysql mysql  12M juin   5 15:56 ibdata1
    -rw-r-----  1 mysql mysql  48M juin   5 15:56 ib_logfile0
    -rw-r-----  1 mysql mysql  48M juin   5 15:39 ib_logfile1
    -rw-r-----  1 mysql mysql  12M juin   5 15:55 ibtmp1
    drwxr-x---  2 mysql mysql 4,0K juin   5 15:41 mysql
    -rw-rw----  1 root  root     5 juin   5 15:55 mysqld_safe.pid
    drwxr-x---  2 mysql mysql 4,0K juin   5 15:41 performance_schema
    -rw-------  1 mysql mysql 1,7K juin   5 15:39 private_key.pem
    -rw-r--r--  1 mysql mysql  452 juin   5 15:39 public_key.pem
    -rw-r--r--  1 mysql mysql 1,1K juin   5 15:39 server-cert.pem
    -rw-------  1 mysql mysql 1,7K juin   5 15:39 server-key.pem
    -rw-r-----  1 mysql mysql  854 juin   5 15:42 SQL-NODE01-bin.000001
    -rw-r-----  1 mysql mysql  177 juin   5 15:49 SQL-NODE01-bin.000002
    -rw-r-----  1 mysql mysql  800 juin   5 15:56 SQL-NODE01-bin.000003
    -rw-r-----  1 mysql mysql   72 juin   5 15:55 SQL-NODE01-bin.index
    drwxr-x---  2 mysql mysql  12K juin   5 15:42 sys
    -rw-r-----  1 mysql mysql 3,8M juin   5 15:56 xb_doublewrite
    
  • limurchick, Entrant, Inactive User Role: Beginner
    You wrote your config into /etc/mysql/my.cnf.
    Have you cleaned up the "duplicated" values in the additional files under the /etc/mysql/percona-xtradb-cluster.conf.d/ directory? There are some default settings in there, and they can override yours, as the comment in my.cnf mentions.
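    A quick way to spot such duplicates (a sketch, assuming the stock Debian/Ubuntu layout):
    grep -rn -E 'wsrep_|pxc_' /etc/mysql/percona-xtradb-cluster.conf.d/
    Anything listed there that also appears in /etc/mysql/my.cnf is a candidate for the override.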
  • matfel, Entrant, Current User Role: Beginner
    Dear limurchick,

    I just rechecked, and it seems I hadn't commented out some of the wsrep options in the files under "/etc/mysql/percona-xtradb-cluster.conf.d/".
    Fixing that also resolved the PID error I had!
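    To confirm, a quick check on any node (each one should now report size 3):
    mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"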

    My cluster is now running properly.

    Thanks a lot!
