
wsrep_max_ws_size limit or how to work with importing large datasets

aesellars
I'm having some issues getting large datasets into a Percona cluster. We're trying to convert two standalone database servers into a single cluster, and finding a way to export our data so that the import doesn't fail is proving difficult. I've tried mysqldump with and without extended inserts, and exporting to value-separated files and loading with LOAD DATA - and we wind up with some kind of failure either way. Some of our tables have tens of millions of records, and our raw dataset is about 300GB cumulatively. How is everyone else handling datasets this size?
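
For reference, a per-table export along these lines is roughly the shape of what gets attempted here - the database name and paths below are placeholders, not the poster's actual setup:

    # Dump each table to its own file, without extended inserts, so one
    # oversized statement or transaction doesn't sink the whole import.
    DB=mydb
    OUT=/backups/dumps
    mkdir -p "$OUT/$DB"
    for t in $(mysql -N -e "show tables from $DB"); do
            mysqldump --skip-extended-insert --single-transaction "$DB" "$t" > "$OUT/$DB/$t.sql"
    done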

How are other people managing this kind of migration, and doing it efficiently? Ideally, we'd like to find a way to get the datasets exported and imported in an 8-hour window, or at least broken into sets that can be done across multiple 8-hour windows.

Is there just a 32-bit limitation in Galera that blocks write sets bigger than 4GB?
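
On the size question, the relevant knob is wsrep_max_ws_size (and the Galera provider option repl.max_ws_size), and both take a signed 32-bit byte count, so roughly 2GB is the practical ceiling for a single write set no matter what you configure. A sketch of checking and raising it, assuming your PXC version allows setting the variable dynamically - the value shown is the maximum:

    # Check the current write-set limit and raise it to the 2GB ceiling.
    mysql -e "show variables like 'wsrep_max_ws_size'"
    mysql -e "set global wsrep_max_ws_size = 2147483647"
    # The same ceiling applies to the Galera provider option in my.cnf:
    #   wsrep_provider_options = "repl.max_ws_size=2147483647"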

Comments

  • aesellars
    An XtraBackup partial backup doesn't help in this scenario, as the write sets for the individual tables to be restored are still bigger than the buffer. A full backup and restore can't be done either, because the source database servers are not part of an existing cluster - and they contain different databases. So full backups using XtraBackup are out as well.

    As for modifying the wsrep_max_ws_size limit: that limit is arbitrary, as we're running servers with 32-64GB of RAM - there's plenty to spare for larger write sets.

    LOAD DATA INFILE and splitting might work, but I'd need to see an example of how others are exporting the data into a working format - something along the lines of the sketch below.
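
    A minimal version of that export/split/import pattern might look like this - the database, table, chunk size and paths are illustrative, not from this thread:

    # On the source server: tab-separated export of one large table.
    mysql -e "select * from mydb.big_table into outfile '/tmp/big_table.tsv'"
    # Split into chunks so each LOAD DATA runs as its own, smaller transaction
    # (and therefore a smaller write set).
    split -l 500000 /tmp/big_table.tsv /tmp/big_table.part.
    # Copy the chunks to a cluster node, then load them one at a time.
    for f in /tmp/big_table.part.*; do
            mysql -e "load data infile '$f' into table mydb.big_table"
    done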
  • rolls
    I have a problem with the difference between the cluster backup and the live data, because of how fast the data grows; I use wsrep_sst_method=rsync.
  • aesellars
    I wound up solving my problem by using the following method:

    Dump structure only of all databases I want to migrate

    Create a full backup using innobackupex

    Perform an --apply-log and --export against the new backup

    Import the structure-only backups into the target Percona cluster

    Discard the tablespace for all of the tables in my imported structures

    Copy the .ibd, .exp, and .cfg files from the appropriate database folders in my innobackupex backup into the corresponding folders in my MySQL data dir

    Set ownership to the mysql user on the copied files

    Import table space for each table
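
    In command form, the preparation steps above come down to something like this - the database names and paths are examples, and the innobackupex syntax assumes Percona XtraBackup 2.x:

    # 1. Structure-only dumps, one file per database, named after the database:
    for db in fizzy_widgets other_db; do
            mysqldump --no-data "$db" > /backups/structure/$db
    done
    # 2. Full backup with innobackupex (run on the source server):
    innobackupex --no-timestamp /backups/full
    # 3. Prepare the backup and generate the per-table export files (.exp/.cfg):
    innobackupex --apply-log --export /backups/full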

    The whole process, start to finish and including taking the backup, took about 4.5 hours to converge and import 350GB of databases.

    I'm including a bash script I wrote to loop through the process. It reads the locations of your structure dumps and your innobackupex backup as command-line parameters and loops through the whole process.

    THIS SCRIPT IS DESTRUCTIVE TO YOUR CLUSTER FOR ANY EXISTING TABLES LISTED IN YOUR STRUCTURE EXPORT

    It assumes your structure dump files are named identically to the database they contain, i.e. if your database is called fizzy_widgets, your dump file is called /path/to/structure/fizzy_widgets
    #!/bin/bash
    structure=${1%/}
    backup=${2%/}
    
    for i in $(ls $structure)
            do
                    echo "Cleaning $i"
                    rm -Rf /var/lib/mysql/$i/*.cfg
                    rm -Rf /var/lib/mysql/$i/*.exp
                    echo "Dropping $i"
                    mysql -e "drop database if exists $i"
                    echo "Creating $i"
                    mysql -e "create database $i"
                    echo "Constructing $i"
                    mysql $i < $structure/$i
    
                            for t in $(mysql -e "use $i;show tables;" | grep -v "Tables_in")
                                    do
                                            echo "Working on $i.$t"
                                            echo "Discarding tablespace"
                                            mysql -e "alter table $i.$t discard tablespace"
                                            echo "Replacing tablespace"
                                            cp $backup/$i/$t.exp /var/lib/mysql/$i/
                                            cp $backup/$i/$t.ibd /var/lib/mysql/$i/
                                            cp $backup/$i/$t.cfg /var/lib/mysql/$i/
                                            chown -R mysql:mysql /var/lib/mysql/$i
                                            echo "Importing tablespace"
                                            mysql -e "alter table $i.$t import tablespace"
                            done
    
    done
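
    Saved as, say, import_tablespaces.sh (the name is arbitrary), a run against the layout described above would look like this - the paths are illustrative:

    # Structure-only dumps in /backups/structure/<dbname>,
    # prepared innobackupex copy in /backups/full.
    # Run as root, since it writes into /var/lib/mysql.
    ./import_tablespaces.sh /backups/structure /backups/full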
    
