SSL issue with async replication between 2 PXC clusters

Hi All,
I am trying to set up async (master/slave) replication between two XtraDB Clusters on 8.0.28. I have made one node on the Primary cluster the master and one node on DR the slave, following the steps below.

CREATE USER 'replica1'@'xx.xx.xx.xx' IDENTIFIED WITH caching_sha2_password BY 'xxxxxxxx';
GRANT REPLICATION SLAVE ON *.* TO 'replica1'@'xx.xx.xx.xx';
CHANGE REPLICATION SOURCE TO SOURCE_HOST='xx.xx.xx.xx', SOURCE_USER='replica1', SOURCE_PASSWORD='txxxxxxxxxx', SOURCE_LOG_FILE='mysql-bin.000010', SOURCE_LOG_POS=1xxx;

But on both the Primary and DR environments, the certificates in my.cnf (under the [mysqld] and [client] sections) come from the bootstrapped node of each environment. Because of this I get the error below when trying to log in to the master node of the Primary cluster. If I copy the client certs from the Primary master node to the DR slave node I am able to log in, but then I can no longer log in as root@localhost on the slave server and get the same error.

ERROR 2026 (HY000): SSL connection error: error:0407008A:rsa routines:RSA_padding_check_PKCS1_type_1:invalid padding
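For context: each independently initialized cluster generates its own CA, so a certificate issued by one cluster's CA will not verify against the other cluster's CA, which commonly produces SSL errors like the one above. A minimal sketch of the mismatch with openssl (filenames are illustrative, not the actual PXC paths):

```shell
# Two independently initialized clusters end up with two unrelated CAs.
# Simulate that and show a client cert from CA1 failing against CA2.
set -e
workdir=$(mktemp -d)
cd "$workdir"

openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca1-key.pem -out ca1.pem -subj "/CN=Primary-CA" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca2-key.pem -out ca2.pem -subj "/CN=DR-CA" 2>/dev/null

# Client cert signed by CA1 (the "Primary" CA)
openssl req -newkey rsa:2048 -nodes \
  -keyout client-key.pem -out client.csr -subj "/CN=replica1" 2>/dev/null
openssl x509 -req -in client.csr -CA ca1.pem -CAkey ca1-key.pem \
  -CAcreateserial -out client-cert.pem -days 1 2>/dev/null

openssl verify -CAfile ca1.pem client-cert.pem          # expected: client-cert.pem: OK
openssl verify -CAfile ca2.pem client-cert.pem || true  # fails: issued by a different CA
```

For replication across the two clusters, the replica therefore needs to trust the source's CA (and present a cert that CA signed), not its own cluster's CA.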

I also tried the "Replication Between Two Percona XtraDB Clusters, GTIDs and Schema Changes - Percona Database Performance Blog" doc provided to me in a previous forum post, but I am hitting issues at the very first step. After creating a table in the test DB with
create table toi1(id int) engine=innodb;
and afterwards running
show global variables like 'gtid_executed'\G
the doc shows a GTID value, but on my env it's blank:

mysql> create table toi1(id int) engine=innodb;
Query OK, 0 rows affected (0.05 sec)

mysql> show global variables like 'gtid_executed'\G
*************************** 1. row ***************************
Variable_name: gtid_executed
        Value: 
1 row in set (0.01 sec)

So not sure how to proceed.
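For reference, an empty `gtid_executed` is what you would expect when `gtid_mode` is OFF (the MySQL default): no GTIDs are generated at all, and the blog's procedure relies on them. A sketch of checking this and enabling GTIDs online, following the staged procedure from the MySQL manual:

```sql
-- Check whether GTIDs are enabled at all
SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
SHOW GLOBAL VARIABLES LIKE 'enforce_gtid_consistency';

-- If gtid_mode is OFF, it must be raised one step at a time on a live server:
SET GLOBAL enforce_gtid_consistency = WARN;  -- watch the error log for violations
SET GLOBAL enforce_gtid_consistency = ON;
SET GLOBAL gtid_mode = OFF_PERMISSIVE;
SET GLOBAL gtid_mode = ON_PERMISSIVE;
SET GLOBAL gtid_mode = ON;
```

The final values (`gtid_mode=ON`, `enforce_gtid_consistency=ON`) should also be persisted in my.cnf on every node so they survive a restart.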


Hi Aditya,

Replication is not configured to use SSL. Please follow the indications on this page of the documentation; it explains how to configure SSL replication.

https://dev.mysql.com/doc/refman/8.0/en/replication-encrypted-connections.html
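As a sketch of what that page describes (using the 8.0.23+ SOURCE_* syntax; the paths here are illustrative):

```sql
-- Illustrative only: replica-side SSL settings per the manual page above.
-- The cert/key must be signed by a CA the source trusts, and SOURCE_SSL_CA
-- must be the CA that signed the source's server certificate.
STOP REPLICA;
CHANGE REPLICATION SOURCE TO
  SOURCE_SSL = 1,
  SOURCE_SSL_CA   = '/path/to/source-ca.pem',
  SOURCE_SSL_CERT = '/path/to/replication-client-cert.pem',
  SOURCE_SSL_KEY  = '/path/to/replication-client-key.pem',
  SOURCE_SSL_VERIFY_SERVER_CERT = 1;
START REPLICA;
```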

If you need more help, please add the following information to this post:

show global variables like '%ssl%';
show global variables like 'wsrep%';
show replica status\G

Before posting, make sure to edit out any information that could identify your databases (public IP addresses, usernames, passwords, hostnames…) or could be used by an attacker.

Thanks

Hi @Pep_Pla
Thanks for replying and for your valuable time. I have to use SSL for replication because my PXC Primary and DR clusters are enabled with SSL/TLS 1.2, and my.cnf on Primary and DR contains the PEM entries below; without these entries I was unable to start the PXC cluster.

[client]
ssl-ca=/var/lib/mysql/ca.pem
ssl-cert=/var/lib/mysql/client-cert.pem
ssl-key=/var/lib/mysql/client-key.pem

[mysqld]
ssl-ca=/var/lib/mysql/ca.pem
ssl-cert=/var/lib/mysql/server-cert.pem
ssl-key=/var/lib/mysql/server-key.pem

So now when I enable replication from the 1st node of the Primary PXC to the 1st node of the DR PXC, the replication user is unable to access the 1st node of the DR PXC. This is resolved by adding the **1st node Primary PXC** client PEMs to my.cnf of the 1st node DR PXC under [client].

After that, I have to access the 1st node DR PXC using its own certs.

For accessing the 1st node DR PXC after adding the client certs of the Primary PXC:
mysql -u root -p --ssl-ca=/DR_certs/ca.pem --ssl-cert=/DR_certs/server-cert.pem --ssl-key=/DR_certs/server-key.pem
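(An alternative to swapping certs in the global [client] section is to keep one client option file per cluster and select it at connect time; a sketch, with illustrative paths:)

```shell
# Keep one client option file per cluster instead of editing the global
# [client] section; pick the right one at connect time.
# (Paths below are illustrative, not the real cert locations.)
confdir=$(mktemp -d)
cat > "$confdir/dr-client.cnf" <<'EOF'
[client]
ssl-ca=/DR_certs/ca.pem
ssl-cert=/DR_certs/client-cert.pem
ssl-key=/DR_certs/client-key.pem
EOF

# Then connect with:
#   mysql --defaults-extra-file="$confdir/dr-client.cnf" -u root -p
cat "$confdir/dr-client.cnf"
```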

Can I have both client certs (Primary and DR) in my.cnf at the same time, so that I can log in to DR with a plain mysql -u root -p?

Please find the output of the requested commands.

show global variables like '%ssl%';
+-------------------------------------+-----------------------------+
| Variable_name                       | Value                       |
+-------------------------------------+-----------------------------+
| admin_ssl_ca                        |                             |
| admin_ssl_capath                    |                             |
| admin_ssl_cert                      |                             |
| admin_ssl_cipher                    |                             |
| admin_ssl_crl                       |                             |
| admin_ssl_crlpath                   |                             |
| admin_ssl_key                       |                             |
| have_openssl                        | YES                         |
| have_ssl                            | YES                         |
| mysqlx_ssl_ca                       |                             |
| mysqlx_ssl_capath                   |                             |
| mysqlx_ssl_cert                     |                             |
| mysqlx_ssl_cipher                   |                             |
| mysqlx_ssl_crl                      |                             |
| mysqlx_ssl_crlpath                  |                             |
| mysqlx_ssl_key                      |                             |
| performance_schema_show_processlist | OFF                         |
| ssl_ca                              | /node1_cert/ca.pem          |
| ssl_capath                          |                             |
| ssl_cert                            | /node1_cert/server-cert.pem |
| ssl_cipher                          |                             |
| ssl_crl                             |                             |
| ssl_crlpath                         |                             |
| ssl_fips_mode                       | OFF                         |
| ssl_key                             | /node1_cert/server-key.pem  |
+-------------------------------------+-----------------------------+

show global variables like 'wsrep%'\G
**** 1. row ****
Variable_name: wsrep_OSU_method
        Value: TOI
**** 2. row ****
Variable_name: wsrep_RSU_commit_timeout
        Value: 5000
**** 3. row ****
Variable_name: wsrep_SR_store
        Value: table
**** 4. row ****
Variable_name: wsrep_applier_FK_checks
        Value: ON
**** 5. row ****
Variable_name: wsrep_applier_UK_checks
        Value: OFF
**** 6. row ****
Variable_name: wsrep_applier_threads
        Value: 8
**** 7. row ****
Variable_name: wsrep_auto_increment_control
        Value: ON
**** 8. row ****
Variable_name: wsrep_causal_reads
        Value: OFF
**** 9. row ****
Variable_name: wsrep_certification_rules
        Value: strict
**** 10. row ****
Variable_name: wsrep_certify_nonPK
        Value: ON
**** 11. row ****
Variable_name: wsrep_cluster_address
        Value: gcomm://XX.XX.XX.XX, XX.XX.XX.XX, XX.XX.XX.XX
**** 12. row ****
Variable_name: wsrep_cluster_name
        Value: pxc-cluster
**** 13. row ****
Variable_name: wsrep_data_home_dir
        Value: /var/lib/mysql/
**** 14. row ****
Variable_name: wsrep_dbug_option
        Value: 
**** 15. row ****
Variable_name: wsrep_debug
        Value: NONE
**** 16. row ****
Variable_name: wsrep_desync
        Value: OFF
**** 17. row ****
Variable_name: wsrep_dirty_reads
        Value: OFF
**** 18. row ****
Variable_name: wsrep_ignore_apply_errors
        Value: 0
**** 19. row ****
Variable_name: wsrep_load_data_splitting
        Value: OFF
**** 20. row ****
Variable_name: wsrep_log_conflicts
        Value: ON
**** 21. row ****
Variable_name: wsrep_max_ws_rows
        Value: 0
**** 22. row ****
Variable_name: wsrep_max_ws_size
        Value: 2147483647
**** 23. row ****
Variable_name: wsrep_min_log_verbosity
        Value: 3
**** 24. row ****
Variable_name: wsrep_node_address
        Value: XX.XX.XX.XX
**** 25. row ****
Variable_name: wsrep_node_incoming_address
        Value: AUTO
**** 26. row ****
Variable_name: wsrep_node_name
        Value: rxxxxxxxx
**** 27. row ****
Variable_name: wsrep_notify_cmd
        Value: 
**** 28. row ****
Variable_name: wsrep_provider
        Value: /usr/lib64/galera4/libgalera_smm.so
**** 29. row ****
Variable_name: wsrep_provider_options
        Value: base_dir = /var/lib/mysql/; base_host = XX.XX.XX.XX; base_port = 4567; cert.log_conflicts = no; cert.optimistic_pa = no; debug = no; evs.auto_evict = 0; evs.causal_keepalive_period = PT1S; evs.debug_log_mask = 0x1; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.info_log_mask = 0; evs.install_timeout = PT7.5S; evs.join_retrans_period = PT1S; evs.keepalive_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 10; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.use_aggregate = true; evs.user_send_window = 4; evs.version = 1; evs.view_forget_timeout = P1D; gcache.dir = /var/lib/mysql/; gcache.freeze_purge_at_seqno = -1; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = galera.cache; gcache.page_size = 128M; gcache.recover = yes; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 100; gcs.fc_master_slave = no; gcs.fc_single_primary = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.listen_addr = ssl://0.0.0.0:4567; gmcast.mcast_addr = ; gmcast.mcast_ttl = 1; gmcast.peer_timeout = PT3S; gmcast.segment = 0; gmcast.time_wait = PT5S; gmcast.version = 0; ist.recv_addr = XX.XX.XX.XX; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.linger = PT20S; pc.npvo = false; pc.recovery = true; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = PT30S; pc.weight = 1; protonet.backend = asio; protonet.version = 0; repl.causal_read_timeout = PT30S; repl.commit_order = 3; repl.key_format = FLAT8; repl.max_ws_size = 2147483647; repl.proto_max = 10; socket.checksum = 2; socket.recv_buf_size = auto; socket.send_buf_size = auto; socket.ssl = YES; socket.ssl_ca = /node1_cert/ca.pem; socket.ssl_cert = 
/node1_cert/server-cert.pem; socket.ssl_cipher = ; socket.ssl_compression = YES; s
**** 30. row ****
Variable_name: wsrep_recover
        Value: OFF
**** 31. row ****
Variable_name: wsrep_reject_queries
        Value: NONE
**** 32. row ****
Variable_name: wsrep_replicate_myisam
        Value: OFF
**** 33. row ****
Variable_name: wsrep_restart_replica
        Value: OFF
**** 34. row ****
Variable_name: wsrep_restart_slave
        Value: OFF
**** 35. row ****
Variable_name: wsrep_retry_autocommit
        Value: 1
**** 36. row ****
Variable_name: wsrep_slave_FK_checks
        Value: ON
**** 37. row ****
Variable_name: wsrep_slave_UK_checks
        Value: OFF
**** 38. row ****
Variable_name: wsrep_slave_threads
        Value: 8
**** 39. row ****
Variable_name: wsrep_sst_allowed_methods
        Value: xtrabackup-v2
**** 40. row ****
Variable_name: wsrep_sst_donor
        Value: 
**** 41. row ****
Variable_name: wsrep_sst_donor_rejects_queries
        Value: OFF
**** 42. row ****
Variable_name: wsrep_sst_method
        Value: xtrabackup-v2
**** 43. row ****
Variable_name: wsrep_sst_receive_address
        Value: AUTO
**** 44. row ****
Variable_name: wsrep_start_position
        Value: e3828439-XXXX-11ed-XXXX-fXXXXXXXXXXXX:36
**** 45. row ****
Variable_name: wsrep_sync_wait
        Value: 0
**** 46. row ****
Variable_name: wsrep_trx_fragment_size
        Value: 0
**** 47. row ****
Variable_name: wsrep_trx_fragment_unit
        Value: bytes
47 rows in set (0.01 sec)


show replica status\G
*************************** 1. row ***************************
             Replica_IO_State: Waiting for source to send event
                  Source_Host: xx.xx.xx.xx
                  Source_User: replica
                  Source_Port: 3306
                Connect_Retry: 60
              Source_Log_File: binlog.000018
          Read_Source_Log_Pos: 197
               Relay_Log_File: xxxxx-relay-bin.000004
                Relay_Log_Pos: 367
        Relay_Source_Log_File: binlog.000018
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 197
              Relay_Log_Space: 745
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File: 
           Source_SSL_CA_Path: 
              Source_SSL_Cert: 
            Source_SSL_Cipher: 
               Source_SSL_Key: 
        Seconds_Behind_Source: 0
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Source_Server_Id: 1
                  Source_UUID: 34xxxxx-45f9-xxxxx-92b3-xxxxxxadf8c
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Source_Retry_Count: 86400
                  Source_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Source_SSL_Crl: 
           Source_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Source_TLS_Version: 
       Source_public_key_path: 
        Get_Source_public_key: 0
            Network_Namespace: 
1 row in set (0.00 sec)



Sir, the details requested below have been provided from the 1st node of the DR PXC:

show global variables like '%ssl%';
show global variables like 'wsrep%';
show replica status\G

Thanks
Adi


Update:
Hi @Pep_Pla,
I found some serious issues with both my PXC clusters. Today all nodes on both clusters crashed. When I searched the logs I found handshake errors, because of which the services on all nodes crashed. This is also happening on the test env, where async replication is not configured between the PXC clusters.

I am not sure of the reason, but I suspect it is because of the client PEM entries (ssl-ca, ssl-cert and ssl-key) under the [client] section of my.cnf. So I have removed these entries, and I have also added require_secure_transport=ON and socket.ssl=ON in wsrep_provider_options.

Note: adding require_secure_transport=ON to my.cnf also broke async replication between the PXC clusters. I was able to fix it by altering the replication user on the master with REQUIRE SSL, and by changing the slave config command to the following:

CHANGE MASTER TO MASTER_HOST='xx.xx.xx.xx', MASTER_USER='replica_user', MASTER_PASSWORD='xxxxxxxxxx', MASTER_LOG_FILE='binlog.0000xx', MASTER_LOG_POS=xxxx, MASTER_SSL=1, MASTER_SSL_CA='/path/ca.pem', MASTER_SSL_CERT='/path/client-cert.pem', MASTER_SSL_KEY='/path/client-key.pem';
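One way to confirm the channel really negotiated TLS after a change like this (a sketch; the performance_schema query is generic MySQL, not PXC-specific):

```sql
SHOW REPLICA STATUS\G
-- Source_SSL_Allowed should now be Yes, with the CA/cert/key paths populated.

-- On the source, the TLS version of each connected session can be checked:
SELECT t.processlist_user, sbt.variable_value AS tls_version
  FROM performance_schema.status_by_thread sbt
  JOIN performance_schema.threads t ON t.thread_id = sbt.thread_id
 WHERE sbt.variable_name = 'Ssl_version';
```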

I am also attaching my my.cnf and the error logs I have found.

I have also noticed that these handshake-failed errors occur about 24 hrs after starting the mysql service; before these errors, replication and the cluster work fine. Please help.

2022-10-05T19:09:08.570239Z 2 [Note] [MY-000000] [Galera] Non-primary view
2022-10-05T19:09:08.570258Z 2 [Note] [MY-000000] [WSREP] Server status change connected -> connected
2022-10-05T19:09:08.570282Z 2 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.
2022-10-05T19:09:49.164600Z 0 [Note] [MY-000000] [Galera] (c4178a5a-a826, 'ssl://0.0.0.0:4567') reconnecting to 524f1316-a18e (ssl://XX.XX.XX.XX:4567), attempt 30
2022-10-05T19:10:21.372476Z 0 [Warning] [MY-000000] [Galera] Handshake failed: wrong version number
2022-10-05T19:10:34.177166Z 0 [Note] [MY-000000] [Galera] (c4178a5a-a826, 'ssl://0.0.0.0:4567') reconnecting to 524f1316-a18e (ssl://XX.XX.XX.XX:4567), attempt 60
2022-10-05T19:10:36.123405Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unsupported protocol
2022-10-05T19:10:36.427602Z 0 [Warning] [MY-000000] [Galera] Handshake failed: version too low
2022-10-05T19:10:36.731975Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unexpected message
2022-10-05T19:10:36.890275Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unsupported protocol
2022-10-05T19:10:37.347029Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:37.796862Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:38.263682Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:38.425001Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:38.610108Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:38.610352Z 0 [Warning] [MY-000000] [Galera] Handshake failed: wrong version number
2022-10-05T19:10:38.766126Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:40.415602Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:40.416064Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unknown protocol
2022-10-05T19:10:40.730927Z 0 [Warning] [MY-000000] [Galera] Handshake failed: http request
2022-10-05T19:10:41.012670Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:41.013066Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unknown protocol
2022-10-05T19:10:41.477549Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:41.645134Z 0 [Warning] [MY-000000] [Galera] Handshake failed: peer did not return a certificate
2022-10-05T19:10:41.645607Z 0 [Warning] [MY-000000] [Galera] Handshake failed: version too low
2022-10-05T19:10:41.993786Z 0 [Warning] [MY-000000] [Galera] Handshake failed: version too low
2022-10-05T19:10:42.408583Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unsupported protocol
2022-10-05T19:10:42.701874Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unsupported protocol
2022-10-05T19:10:43.997672Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unsupported protocol
2022-10-05T19:10:45.291258Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unsupported protocol
2022-10-05T19:10:46.923371Z 0 [Warning] [MY-000000] [Galera] Handshake failed: no shared cipher
2022-10-05T19:10:46.926175Z 0 [Warning] [MY-000000] [Galera] Handshake failed: no shared cipher
2022-10-05T19:10:46.937637Z 0 [Warning] [MY-000000] [Galera] Handshake failed: no shared cipher
2022-10-05T19:10:49.244751Z 0 [Warning] [MY-000000] [Galera] Handshake failed: unsupported protocol
2022-10-05T19:11:18.696780Z 0 [Note] [MY-000000] [Galera] (c4178a5a-a826, 'ssl://0.0.0.0:4567') reconnecting to 524f1316-XXX (ssl://XX.XX.XX.XX:4567), attempt 90
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::system_error> >'
  what():  remote_endpoint: Transport endpoint is not connected
2022-10-05T19:11:32.388564Z 0 [Note] [MY-000000] [WSREP] Initiating SST cancellation
19:11:32 UTC - mysqld got signal 6 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.

Build ID: 5aaeb8aff2f9757ae471361dbf4fa4ba945f6104
Server Version: 8.0.28-19.1 Percona XtraDB Cluster (GPL), Release rel19, Revision f544540, WSREP version 26.4.3, wsrep_26.4.3

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...

Thanks


Hi Sir,
Please also find my my.cnf file.

[client]
socket=/var/lib/mysql/mysql.sock
#####ssl-ca=/node1_cert/ca.pem
#####ssl-cert=/node1_cert/client-cert.pem
#####ssl-key=/node1_cert/client-key.pem

[mysqld]
server-id=2
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

binlog_expire_logs_seconds=604800

wsrep_provider=/usr/lib64/galera4/libgalera_smm.so

wsrep_cluster_address=gcomm://XX.XX.XX.XX,XX.XX.XX.XX,XX.XX.XX.XX

default_storage_engine=InnoDB

tls_version=TLSv1.2


wsrep_provider_options="socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem;socket.ssl=ON"

require_secure_transport=ON

ssl-ca=/node1_cert/ca.pem
ssl-cert=/node1_cert/server-cert.pem
ssl-key=/node1_cert/server-key.pem

binlog_format=ROW

wsrep_slave_threads=8

wsrep_log_conflicts

innodb_autoinc_lock_mode=2

wsrep_node_address=XX.XX.XX.XX
wsrep_cluster_name=pxc-cluster

wsrep_node_name=rxxxxx

pxc_strict_mode=ENFORCING

wsrep_sst_method=xtrabackup-v2

[sst]
encrypt=4
ssl-key=/node1_cert/server-key.pem
ssl-ca=/node1_cert/ca.pem
ssl-cert=/node1_cert/server-cert.pem


I am missing something. It should be working fine if you did not change anything in the Primary cluster. I could understand the Secondary cluster being affected by the changes you made, but this should not break the Primary cluster.

Restore all the original certificates and configuration files to a safe state and make both clusters work independently. Make sure they are running without errors, and then we can work on configuring replication.

The errors you are reporting seem to be caused by SSL misconfiguration; if you restore all the original certificates and configurations, it should work.


Hi @Pep_Pla,

Sure, thanks Sir. Currently in the logs I only see one troublesome line, containing the string "blacklist" (in the second line of the attached logs), but I am not sure whether it is normal or a problem; can you please confirm? For more logs, I think I need to wait 24 hrs, as the handshake error occurs approximately after that, but I am hoping it doesn't occur.
But Galera replication seems to work fine on all nodes.

[MY-000000] [Galera] (2eb57887-xxxx, 'ssl://0.0.0.0:4567') Found matching local endpoint for a connection, blacklisting address ssl://xx.xx.xx.xx:4567

[MY-000000] [Galera] gcomm: connecting to group 'pxc-cluster', peer 'xx.xx.xx.xx:,xx.xx.xx.xx:,xx.xx.xx.xx:'
[MY-000000] [Galera] (2eb57887-xxxx, 'ssl://0.0.0.0:4567') Found matching local endpoint for a connection, blacklisting address ssl://xx.xx.xx.xx:4567
[MY-000000] [Galera] (2eb57887-xxxx, 'ssl://0.0.0.0:4567') connection established to a2270799-xxxx ssl://xx.xx.xx.xx:4567
[MY-000000] [Galera] (2eb57887-xxxx, 'ssl://0.0.0.0:4567') turning message relay requesting on, nonlive peers:
[MY-000000] [Galera] (2eb57887-xxxx, 'ssl://0.0.0.0:4567') connection established to ef15e977-xxxxx ssl://xx.xx.xx.xx:4567
[MY-000000] [Galera] EVS version upgrade 0 -> 1
[MY-000000] [Galera] declaring a2270799-xxxx at ssl://xx.xx.xx.xx:4567 stable
[MY-000000] [Galera] declaring ef15e977-xxxx at ssl://xx.xx.xx.xx:4567 stable
[MY-000000] [Galera] PC protocol upgrade 0 -> 1
[MY-000000] [Galera] Node a2270799-a826 state primary
[MY-000000] [Galera] Current view of cluster as seen by this node
view (view_id(PRIM,2eb57887-xxxx,45)
 memb {
         2eb57887-93dd,0
         a2270799-a826,0
         ef15e977-a452,0
         }
 joined {
         }
 left {
         }
 partitioned {
         }
 )

Hi @Pep_Pla,
Looks like it's a bug, as per the error message on line 18.
This happens when the security team scans MySQL for vulnerabilities using root: the SSL error occurs and the PXC node the scan is running against crashes. The log message says it's a bug, and my my.cnf and the other outputs shared above look fine.

Even if they try to log in multiple times, the mysql service should not crash, correct? Or is it a safety feature in PXC that the service crashes if too many attempts are made?

The scanning tool attempts to connect approx 160 times, about 6 connections per second, after which the mysql service crashes.

In the general query log, 99% of those ~160 entries are just connect and quit, repeated on each new line; the scan tool also checks whether the root and anonymous users are enabled without a password.

Does the PXC cluster get overloaded and crash?

Can you please help here?

Handshake failed: wrong version number
Handshake failed: unsupported protocol
Handshake failed: version too low
Handshake failed: unexpected message
Handshake failed: peer did not return a certificate
[libprotobuf ERROR /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.28/extra/protobuf/protobuf-3.11.4/src/google/protobuf/message_lite.cc:123] Can't parse message of type "Mysqlx.Connection.CapabilitiesSet" because it is missing required fields: (cannot determine missing fields for lite message)
Handshake failed: peer did not return a certificate
[libprotobuf ERROR /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.28/extra/protobuf/protobuf-3.11.4/src/google/protobuf/message_lite.cc:123] Can't parse message of type "Mysqlx.Prepare.Prepare" because it is missing required fields: (cannot determine missing fields for lite message)
Handshake failed: wrong version number
[libprotobuf ERROR /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.28/extra/protobuf/protobuf-3.11.4/src/google/protobuf/message_lite.cc:123] Can't parse message of type "Mysqlx.Crud.DropView" because it is missing required fields: (cannot determine missing fields for lite message)
Handshake failed: unknown protocol
Handshake failed: version too low
Handshake failed: no shared cipher
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::system_error> >'
  what():  remote_endpoint: Transport endpoint is not connected
2022-10-09T09:26:40.868149Z 0 [Note] [MY-000000] [WSREP] Initiating SST cancellation
09:26:40 UTC - mysqld got signal 6 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.

Build ID: 5aaeb8aff2f9757ae471361dbf4fa4ba945f6104
Server Version: 8.0.28-19.1 Percona XtraDB Cluster (GPL), Release rel19, Revision f544540, WSREP version 26.4.3, wsrep_26.4.3

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x100000
/usr/sbin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x41) [0x217bee1]
/usr/sbin/mysqld(print_fatal_signal(int)+0x323) [0x11a0993]
/usr/sbin/mys

Sir, I have just created a new thread, as the context has changed from what this one was started for, so that anyone with a similar issue in the future can get directly to the correct thread.

Thanks
Adi


Ok. Thank you for posting here. I will close this one.
