Dear community,
Over the past year I have done maybe two Percona installs, but since May I have not been able to get a working installation.
I have already tried everything I could think of, but I am still stuck.
Maybe some of you can help me?
I followed the official documentation.
Here is what I have and what I tried:
Three freshly installed Ubuntu 16.04 servers:
SQL-NODE01 192.168.1.101/24
SQL-NODE02 192.168.1.102/24
SQL-NODE03 192.168.1.103/24
All servers can reach each other.
AppArmor service stopped (I also tried with the service running, and with it removed entirely, same result).
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 3306 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4567 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4568 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4444 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol udp --match udp --dport 4567 --source 192.168.1.0/24 --jump ACCEPT
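For reference, the five rules above can be generated from a port list (the `pxc_fw_rules` helper name is my own, not from the doc). It only prints the commands; pipe the output into `sh` as root to actually apply them:

```shell
# Print the ACCEPT rule for every port PXC needs (3306 client, 4567 group
# communication tcp+udp, 4568 IST, 4444 SST). "echo" only prints the
# commands; pipe into "sh" as root to apply them.
pxc_fw_rules() {
  for rule in "tcp 3306" "tcp 4567" "tcp 4568" "tcp 4444" "udp 4567"; do
    set -- $rule   # $1 = protocol, $2 = port
    echo iptables --append INPUT --in-interface ens18 --protocol "$1" \
         --match "$1" --dport "$2" --source 192.168.1.0/24 --jump ACCEPT
  done
}
pxc_fw_rules
```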
Repo configured as described here: [url]https://www.percona.com/doc/percona-repo-config/index.html[/url]
Then I run “apt-get install percona-xtradb-cluster-57”.
The installation goes well, and I stop the mysql service on the three nodes with “sudo service mysql stop”.
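The stop step can be looped over the three nodes; a sketch (the `stop_all_nodes` name and root ssh access are my assumptions). It only echoes the commands; pipe into `sh` to run them:

```shell
# Hypothetical helper: print the stop command for every node.
# Pipe the output into "sh" to actually execute it.
stop_all_nodes() {
  for host in 192.168.1.101 192.168.1.102 192.168.1.103; do
    echo ssh "root@$host" service mysql stop
  done
}
stop_all_nodes
```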
Here is my my.cnf for NODE01:
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=db-cluster
wsrep_cluster_address=gcomm://192.168.1.101,192.168.1.102,192.168.1.103
wsrep_node_name=SQL-NODE01
wsrep_node_address=192.168.1.101
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
# Please make any edits and changes to the appropriate sectional files
# included below.
#
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/
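One detail I am not sure about (my assumption, not from the doc): the two !includedir lines come after my [mysqld] settings, and as far as I understand MySQL lets options read later override earlier ones, so any packaged file under /etc/mysql/percona-xtradb-cluster.conf.d/ could override the wsrep values above. A variant would be to put the same settings into that directory instead, for example:

```ini
# /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf
# (filename follows the packaged layout; contents mirror my settings above)
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=db-cluster
wsrep_cluster_address=gcomm://192.168.1.101,192.168.1.102,192.168.1.103
wsrep_node_name=SQL-NODE01
wsrep_node_address=192.168.1.101
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
```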
Time to bootstrap the first node: “/etc/init.d/mysql bootstrap-pxc”.
Everything looks fine except a PID error in the syslog:
Jun 5 15:33:03 SQL-NODE01 systemd[1]: Stopping LSB: AppArmor initialization...
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: * Clearing AppArmor profiles cache
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: ...done.
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: All profile caches have been cleared, but no profiles have been unloaded.
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: Unloading profiles will leave already running processes permanently
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: unconfined, which can lead to unexpected situations.
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: To set a process to complain mode, use the command line tool
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: 'aa-complain'. To really tear down all profiles, run the init script
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: with the 'teardown' option."
Jun 5 15:33:03 SQL-NODE01 systemd[1]: Stopped LSB: AppArmor initialization.
Jun 5 15:37:04 SQL-NODE01 systemd[1]: Starting Cleanup of Temporary Directories...
Jun 5 15:37:04 SQL-NODE01 systemd-tmpfiles[3052]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Jun 5 15:37:10 SQL-NODE01 systemd[1]: Started Cleanup of Temporary Directories.
Jun 5 15:38:54 SQL-NODE01 systemd[1]: Reloading.
Jun 5 15:38:54 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Reloading.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Reloading.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Starting LSB: Start and stop the mysql (Percona XtraDB Cluster)daemon...
Jun 5 15:42:37 SQL-NODE01 mysql[4312]: * Starting MySQL (Percona XtraDB Cluster) database server mysqld
Jun 5 15:42:37 SQL-NODE01 /etc/init.d/mysql[4357]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun 5 15:42:43 SQL-NODE01 mysql[4312]: ...done.
Jun 5 15:42:43 SQL-NODE01 systemd[1]: Started LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
Jun 5 15:42:50 SQL-NODE01 systemd[1]: Reloading.
Jun 5 15:42:50 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun 5 15:49:37 SQL-NODE01 systemd[1]: Stopping LSB: Start and stop the mysql (Percona XtraDB Cluster)daemon...
Jun 5 15:49:37 SQL-NODE01 mysql[4859]: * Stopping MySQL (Percona XtraDB Cluster) mysqld
Jun 5 15:49:53 SQL-NODE01 /etc/init.d/mysql[4987]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun 5 15:49:53 SQL-NODE01 /etc/init.d/mysql[4991]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun 5 15:49:53 SQL-NODE01 mysql[4859]: ...done.
Jun 5 15:49:53 SQL-NODE01 systemd[1]: Stopped LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
Jun 5 15:54:37 SQL-NODE01 /etc/init.d/mysql[5047]: ERROR: The partition with /var/lib/mysql is too full!
Jun 5 15:55:00 SQL-NODE01 /etc/init.d/mysql[5109]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
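The syslog also complains that the partition with /var/lib/mysql is too full, so the free space on the datadir partition is worth checking (the path is the stock default; adjust if your datadir differs):

```shell
# The init script refuses to start mysqld when the datadir partition is
# almost full. Fall back to /var if the default datadir path does not exist
# on this machine.
df -h /var/lib/mysql 2>/dev/null || df -h /var
```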
Here is the result of “show status like 'wsrep%';” (the first lines already have some values because I took this copy after trying to add the second node):
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| wsrep_local_state_uuid | 508226c7-68c6-11e8-b4e3-2faae9ed4ea1 |
| wsrep_protocol_version | 8 |
| wsrep_last_applied | 3 |
| wsrep_last_committed | 3 |
| wsrep_replicated | 3 |
| wsrep_replicated_bytes | 728 |
| wsrep_repl_keys | 3 |
| wsrep_repl_keys_bytes | 96 |
| wsrep_repl_data_bytes | 425 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 2 |
…
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.1.101:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 0a96d502-68c8-11e8-87ae-12c3077b44a0 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 508226c7-68c6-11e8-b4e3-2faae9ed4ea1 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.26(rac090bc) |
| wsrep_ready | ON |
+----------------------------------+--------------------------------------+
68 rows in set (0.01 sec)
So far everything seems fine, so I create the SST user:
mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
mysql@pxc1> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql@pxc1> FLUSH PRIVILEGES;
Time to start a second node via “/etc/init.d/mysql start”.
The service starts properly, and here is what I get:
mysql> show status like 'wsrep%';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| wsrep_local_state_uuid | 3bb3744c-68c6-11e8-8782-e32c0e432b82 |
| wsrep_protocol_version | 8 |
| wsrep_last_applied | 0 |
| wsrep_last_committed | 0 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 2 |
| wsrep_received_bytes | 155 |
…
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.1.102:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 4.391e-06/9.1114e-06/1.9822e-05/5.63194e-06/5 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 440afb89-68c8-11e8-b82c-976d9c2e0aa3 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 3bb3744c-68c6-11e8-8782-e32c0e432b82 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.26(rac090bc) |
| wsrep_ready | ON |
+----------------------------------+--------------------------------------+
68 rows in set (0.01 sec)
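One thing I notice when comparing the two outputs above: each node reports wsrep_cluster_size = 1 and a different wsrep_cluster_state_uuid, so it looks like the second node started its own cluster instead of joining the first. A quick sanity check on the two UUIDs I pasted:

```shell
# Compare the wsrep_cluster_state_uuid values copied from the two outputs
# above; nodes that belong to the same cluster must report the same UUID.
node1_uuid="508226c7-68c6-11e8-b4e3-2faae9ed4ea1"  # from SQL-NODE01
node2_uuid="3bb3744c-68c6-11e8-8782-e32c0e432b82"  # from SQL-NODE02
if [ "$node1_uuid" = "$node2_uuid" ]; then
  echo "same cluster"
else
  echo "separate clusters"
fi
# → separate clusters
```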
Has anyone already encountered this? Did I do something wrong?
I'll be waiting for your answers.