
Replicate from XtraDB Cluster 5.5 to Cluster 5.6

zucon
Hi,

I'm trying to replicate from XtraDB Cluster 5.5 to XtraDB Cluster 5.6. I attempted it with the following steps:

1. Copy the data from the old cluster to one node in the new cluster with innobackupex
2. Prepare the data with xtrabackup
3. Start Node 1
4. Run mysql_upgrade
5. Join the other node to the new cluster.
6. Start the replication.
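For reference, the steps above correspond roughly to the following commands (paths, credentials, and service names are placeholders, not taken from the original post):

```shell
# On one node of the old 5.5 cluster: take a full backup
innobackupex --user=backup --password=secret /backups

# After copying the backup to the new 5.6 node: prepare it (apply the redo log)
innobackupex --apply-log /backups/2017-02-06_16-00-00

# Move the prepared data into the 5.6 node's datadir and fix ownership
innobackupex --copy-back /backups/2017-02-06_16-00-00
chown -R mysql:mysql /var/lib/mysql

# Bootstrap the first node of the new cluster, then upgrade the system tables
service mysql bootstrap-pxc
mysql_upgrade -u root -p
```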

The replication runs, but after a few seconds the other node in the new cluster shuts down:

2017-02-06 16:39:53 21724 [ERROR] Slave SQL: Could not execute Write_rows event on table userdatadb.resulting_tracks_state; Cannot add or update a child row: a foreign key constraint fails (`userdatadb`.`resu
lting_tracks_state`, CONSTRAINT `wish_state_id_idx` FOREIGN KEY (`wish_state_id`) REFERENCES `wishes_state` (`wish_state_id`) ON DELETE CASCADE ON UPDATE NO ACTION), Error_code: 1452; handler error HA_ERR_NO_
REFERENCED_ROW; the event's master log FIRST, end_log_pos 282, Error_code: 1452
2017-02-06 16:39:53 21724 [Warning] WSREP: RBR event 3 Write_rows apply warning: 151, 333958
2017-02-06 16:39:53 21724 [Warning] WSREP: Failed to apply app buffer: seqno: 333958, status: 1
at galera/src/trx_handle.cpp:apply():351
Retrying 4th time
2017-02-06 16:39:53 21724 [ERROR] Slave SQL: Could not execute Write_rows event on table userdatadb.resulting_tracks_state; Cannot add or update a child row: a foreign key constraint fails (`userdatadb`.`resu
lting_tracks_state`, CONSTRAINT `wish_state_id_idx` FOREIGN KEY (`wish_state_id`) REFERENCES `wishes_state` (`wish_state_id`) ON DELETE CASCADE ON UPDATE NO ACTION), Error_code: 1452; handler error HA_ERR_NO_
REFERENCED_ROW; the event's master log FIRST, end_log_pos 282, Error_code: 1452
2017-02-06 16:39:53 21724 [Warning] WSREP: RBR event 3 Write_rows apply warning: 151, 333958
2017-02-06 16:39:53 21724 [ERROR] WSREP: Failed to apply trx: source: 871100a1-ec71-11e6-8dd4-36374173d6d5 version: 3 local: 0 state: APPLYING flags: 1 conn_id: 60 trx_id: 36314758891 seqnos (l: 8952, g: 3339
58, s: 333957, d: 333852, ts: 4847777374835513)
2017-02-06 16:39:53 21724 [ERROR] WSREP: Failed to apply trx 333958 4 times
2017-02-06 16:39:53 21724 [ERROR] WSREP: Node consistency compromized, aborting...
2017-02-06 16:39:53 21724 [Note] WSREP: Closing send monitor...
2017-02-06 16:39:53 21724 [Note] WSREP: Closed send monitor.
2017-02-06 16:39:53 21724 [Note] WSREP: gcomm: terminating thread
2017-02-06 16:39:53 21724 [Note] WSREP: gcomm: joining thread
2017-02-06 16:39:53 21724 [Note] WSREP: gcomm: closing backend

I'm wondering why this node mentions 'Slave SQL', since I replicate to the other node in the new cluster. All nodes in the old cluster share one server_id, and all nodes in the new cluster share another, different server_id.

Any ideas what causes this behaviour?

Matthias

Comments

  • zucon
    After some experiments I've noticed that it seems to work when I set wsrep_slave_threads to 1; it was 16 before.

    And for the record, I'm using Version 5.6.32-78.1-56-log / Percona XtraDB Cluster (GPL), Release rel78.1, Revision 979409a, WSREP version 25.17, wsrep_25.17 on the slave cluster and 5.5.41-37.0-55-log / Percona XtraDB Cluster (GPL), Release rel37.0, Revision 855, WSREP version 25.12, wsrep_25.12.r4027 on the master cluster.
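    The workaround above amounts to a my.cnf change along these lines (a minimal sketch; the section name and comment are assumptions, not from the original post):

    ```
    [mysqld]
    # Apply replicated write sets with a single thread to avoid the
    # out-of-order foreign-key failures described above (was 16)
    wsrep_slave_threads = 1
    ```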
  • zucon
    An Update:

    The problem is not the master-slave replication itself. When the MySQL 5.6 cluster is hit by live traffic, all nodes shut down within minutes with foreign key errors.

    Am I right that this happens because the required constraint row is still in some queue when I increase the number of parallel worker threads? The fact that there are hundreds of commits per second might make things worse. How can I mitigate the issue? Some ideas:

    - Only one wsrep applier (or less than 16)
    - Set wsrep_slave_FK_checks to OFF (How dangerous is this?)
    - Allow more attempts? It seems to shut the node down after 4 attempts.
    - ...
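    A sketch of what the first two options might look like in my.cnf (values are illustrative only; whether disabling FK checks on the applier threads is safe depends entirely on whether the incoming replication stream can contain inconsistent data):

    ```
    [mysqld]
    # Fewer parallel appliers reduces the chance of FK parent/child
    # rows being applied out of order
    wsrep_slave_threads = 4

    # Skip foreign key checks in applier threads -- risky, since a
    # genuinely inconsistent write set would then go undetected
    wsrep_slave_FK_checks = OFF
    ```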

    Matthias
  • zucon
    Another Update:

    A single wsrep applier doesn't help either; the failure just happens less often.
  • bdelmedico
    Hi,

    I upgraded from 5.5 to 5.7 by following the steps in the guide below:

    https://www.percona.com/doc/percona-...ide_55_56.html

    Note: you have to remove some configuration lines that are not compatible between versions; one of them for me was cache_table.

    Before starting, you have to remove the ib_logfile* files. Once that's done, start the node with wsrep_provider=none, then run mysql_upgrade.

    If mysql_upgrade runs normally, just remove wsrep_provider=none and start the node normally.

    *** If you don't use a load balancer, you have to put the node in read-only mode.
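    The per-node procedure described above would look roughly like this (the datadir path and service name are assumptions):

    ```shell
    # Stop the node and remove the InnoDB redo logs so the new
    # server version can recreate them at the right size
    service mysql stop
    rm /var/lib/mysql/ib_logfile*

    # Temporarily disable Galera in my.cnf:
    #   wsrep_provider = none
    # then start the node standalone and upgrade the system tables
    service mysql start
    mysql_upgrade -u root -p

    # Restore the real wsrep_provider line in my.cnf and restart
    service mysql restart
    ```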