Hi!
I have a fully working MariaDB Replication Environment:
Master: MariaDB 10.3.27 (about 450 databases, all with the MyISAM engine, almost 500GB of total storage)
Slave: MariaDB 10.6.10 (using the ‘old’ binlog coordinates, the ‘new’ GTID is not used)
I’m planning to set up a second Slave, and I would like to use Percona 8 for this purpose (keeping the current MariaDB 10.3 Master); the main issue is that I cannot lock, temporarily shut down, or make a full backup of the current Master (it is a live system with a huge load of traffic), so I would like to ‘build’ the second replica starting from the current Slave (which can be paused, stopped, rebooted, and so on)!
Now, online I’ve found many solutions/tutorials on how to:
- create a Percona Slave from a MariaDB Master (via xtrabackup, not applicable in my case, since the Master can’t be touched);
- create a MariaDB Slave from an existing MariaDB Slave (via mariabackup, good… but then I will not be using Percona, and I would like to use it!);
- create a MySQL Slave from an existing MySQL Slave (simply copying the data dir between the replicated servers… it would be great, but I’m not using MySQL!);
…but I found nothing on how to create a Percona Slave starting from an existing MariaDB Slave!
Do you have any suggestion on which process to follow to achieve this?
Any help would be appreciated!
Thank you in advance!
Hello @laluigino,
I would do the following:
- mydumper against the mariadb replica. This will perform a full logical backup of the replica, and it will also grab the binary log position relative to the source mariadb server.
- Use myloader to restore this backup to the new percona mysql.
- Configure replication from percona mysql → mariadb source, using the replication information collected by mydumper. (in the metadata file)
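For steps 2 and 3, a minimal sketch (host, credentials and dump directory are placeholders):
myloader --threads 4 --host <percona_host> --user <user> --password <password> --directory ./dump --overwrite-tables --verbose 3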
I would highly recommend that you also take this opportunity on the Percona MySQL to convert all the tables from myisam to InnoDB. MyISAM is a dead engine and should not be used in any production environment. This is the main reason you cannot lock the tables. If your tables were InnoDB, you’d be able to take hot, non-blocking backups on the source.
Hi @matthewb, thank you for your suggestion!
Sorry for the late answer; in the meantime I’ve created a new Percona Server for MySQL instance and started mydumper against the MariaDB 10.6 replica (it has been running for about two hours now):
mydumper --threads 7 --host $MYSQL_HOST --user $MYSQL_USER --password $MYSQL_PASSWORD --compress --verbose 3 --regex '^(?!(test))' --outputdir ./dbr --logfile ./backup-dbr.log
Now I’ve just realized that I did not use the “--enable-binlog” option… does it matter? Or should I restart the process adding that option? (I can see that a metadata file is present in the backup dir, after all)
Then, regarding your suggestion to convert all the tables from MyISAM to InnoDB on the Percona MySQL after running myloader… can InnoDB keep working as a replica of the current MyISAM Master?! Or are you suggesting that the conversion from MyISAM to InnoDB should be made only after the Percona replica becomes the new Master?
Thank you again for your support!
You don’t need --enable-binlog on mydumper. You would use that option on myloader only if you wanted the restore process contained within the binlogs, which usually you don’t on a replica.
Yes, tables on a replica can be InnoDB while tables on source can be MyISAM.
Make the conversion to InnoDB after you’ve loaded all the tables into the replica. Then just change them one by one. Or you can use sed on the *-schema.sql files from mydumper to do a simple search/replace.
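For example, something along these lines (just a sketch; the dump path is a placeholder, and if the dump was taken with --compress the files need to be decompressed first):
sed -i 's/ENGINE=MyISAM/ENGINE=InnoDB/g' ./dump/*-schema.sql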
Hi @matthewb, I’m still fighting with myloader to make it work!
Right now it is processing the dump again; it should finish in about 10 hours…
Unfortunately, I had to do some search/replace with sed to make the dump of my MariaDB 10.3 databases (MyISAM only) ‘compatible’ with Percona 8 (thank you for your suggestions!).
More precisely I had to:
- remove NO_AUTO_CREATE_USER from sql_mode definitions;
- change all the ‘SET NAMES binary’ to ‘SET NAMES utf8mb4’;
- apply directly the MyISAM->InnoDB conversion, replacing ‘ENGINE=MyISAM’ with ‘ENGINE=InnoDB’;
- delete all the backup files of the ‘mysql’ database.
I’ve reported these here, just in case they are useful to someone else.
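Roughly, besides the ENGINE=MyISAM replace suggested above, the extra adjustments looked like this (a sketch; the exact patterns may have differed slightly, and I had to decompress the files first since the dump was taken with --compress):
sed -i 's/NO_AUTO_CREATE_USER,//g; s/,NO_AUTO_CREATE_USER//g' ./dbr/*.sql
sed -i 's/SET NAMES binary/SET NAMES utf8mb4/g' ./dbr/*.sql
rm ./dbr/mysql.*   # drop the dump files of the ‘mysql’ system database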
Hoping that this time myloader manages to finish restoring all the databases, I will only have to set up the replication from the master. I saw that all the relevant info should be in the ‘metadata’ file, but I’m not sure which values I need.
Doing this:
CHANGE MASTER TO
MASTER_HOST='$sourceip',
MASTER_USER='repl',
MASTER_PASSWORD='$replicapass',
MASTER_LOG_FILE='Source-bin.000001',
MASTER_LOG_POS=481;
should I take the MASTER_LOG_FILE and MASTER_LOG_POS from the following lines taken from metadata?
# Master_Log_File = 'master1-bin.023682'
# Read_Master_Log_Pos = 23689974
Thank you, I’ll keep you updated on this migration!
You need to use Exec_Master_Log_Pos, not Read_Master_Log_Pos.
Hi @matthewb, thanks to your help I finally managed to start the replica… but for the past 2 days I keep receiving the error:
[ERROR] [MY-010596] [Repl] Error reading relay log event for channel '': Found invalid event in binary log
[ERROR] [MY-013121] [Repl] Replica SQL for channel '': Relay log read failure: Could not parse relay log event entry. The possible reasons are: the source's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the replica's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, the server was unable to fetch a keyring key required to open an encrypted relay log file, or a bug in the source's or replica's MySQL code. If you want to check the source's binary log or replica's relay log, you will be able to know their names by issuing 'SHOW REPLICA STATUS' on this replica. Error_code: MY-013121
[MY-010586] [Repl] Error running query, replica SQL thread aborted. Fix the problem, and restart the replica SQL thread with "START REPLICA". We stopped at log 'master1-bin.023683' position 39511846
I tried everything. I also restored the dbdata folder several times (I made a backup after using myloader, just before starting the replica), ran START REPLICA, and everything seems to go OK… until it reaches position 39511846, then it stops with the error above.
I checked that the master binlogs are not corrupted, and neither are the replica relay logs.
I also tried to parse the master binlogs, extract the query, and execute it manually, with no success.
Then I tried to RESET the replica and move the position to the next event… no luck.
Please help me… do you have any suggestions on other ways I can try to keep this replica (Percona 8, replica-only mode, InnoDB tables) in sync with the master (MariaDB 10.3, MyISAM tables)?
Thank you in advance!
I would purge all binary logs on the source, take a new/fresh dump, and restore to new Percona MySQL. This would erase any potentially corrupted binary logs on the source. After you’ve done all that, if replication is still breaking, there is a possibility that MariaDB has changed their binlog format making it incompatible with MySQL. I haven’t heard of that happening, but it’s possible.
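A sketch of the purge step on the MariaDB source (the file name is a placeholder; make sure no replica still needs the older files before purging them):
SHOW MASTER STATUS;
PURGE BINARY LOGS TO 'master1-bin.0xxxxx';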
Hi @matthewb, always thanks for your support.
I’m going to make one last attempt with:
START SLAVE UNTIL MASTER_LOG_FILE='master1-bin.023683', MASTER_LOG_POS=39511846;
to see if I manage to skip that event, hoping that I will not find more.
If this attempt fails as well, then I’m going to make a new dump and restart from scratch.
Since the dump is made from a replica running MariaDB 10.6, do you think it is a good idea to make that replica an intermediate master/replica instance, and let Percona use it as its source?
Then it would be like this: A (MariaDB 10.3 master) → B (MariaDB 10.6 replica of A/master) → C (Percona 8 replica of B)
Thank you again!
Hi @matthewb, it seems that I managed to skip the events that were causing the error!
Now the new replica is slowly syncing with the master, and I have to manually skip events every time.
When it stops, I get the SOURCE_LOG_FILE and the SOURCE_LOG_POS from SHOW REPLICA STATUS, then I use mysqlbinlog on the master to find the very next valid position after SOURCE_LOG_POS (looking for the “at” line preceding the second START TRANSACTION after the “at #SOURCE_LOG_POS” line).
With this new value, I do:
STOP REPLICA IO_THREAD FOR CHANNEL '';
CHANGE REPLICATION SOURCE TO SOURCE_LOG_FILE='master1-bin.0xxxxx', SOURCE_LOG_POS=new_value_xxxxxxxxx;
START SLAVE;
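For reference, the lookup on the master is along these lines (run from the binlog directory on the source; file and position are the ones reported by SHOW REPLICA STATUS):
mysqlbinlog --base64-output=decode-rows --verbose --start-position=39511846 master1-bin.023683 | grep '^# at ' | head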
Obviously this is not a real solution, but it seems that the events that cause the error are always UPDATEs with some newline (NL) / carriage return (CR) chars in a mediumtext field… in the logs these characters appear as \x0D and \x0A.
Luckily, these events are rare (just a couple dozen), but I need a way to ‘ignore’ these chars and keep the replica working without the need for a manual skip.
Is there any configuration option to achieve this?
Thank you again
You should do it like this. I would not do chain replication; it is unnecessary.
A (Maria 10.3) -> B (Maria 10.6)
              \-> C (Percona 8, a direct replica of the 10.3 source)
Hmm. I would make sure this table is utf8mb4, and in your code, strip those characters and use correct newlines.
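A sketch of the table-side change (table name and collation are placeholders):
ALTER TABLE mydb.mytable CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;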
No, there is no way to filter specific row events from replication. Table-level filtering is the finest granularity you can configure, and it would ignore everything for that table.
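For completeness, table-level filtering on the replica would look something like this (database and table names are placeholders), but again, it would skip every event for that table:
# in my.cnf on the replica, requires a restart
replicate-ignore-table = mydb.problem_table
# or online on Percona/MySQL 8, with the replica SQL thread stopped:
# CHANGE REPLICATION FILTER REPLICATE_IGNORE_TABLE = (mydb.problem_table);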