
MySQL data dump too large to import: a 20GB mysqldump file

DBA100 (Patron)
hi,

Right now I have only two very big MySQL tables left to import from the mysqldump file, and they are the remaining problem.

Both are near 20GB as .sql, and I use this command to restore the zipped dump file:

nohup bash -c "gunzip < UAT_fmkLogDB_fmklog_tfmk_seq_no_bak.sql.gz | mysql -u root -p fmklog"

so that when the PuTTY session ends, the command keeps going in the background. However, the restore command above still can't finish, so may I know if I can do the following:
1) ONLY export the data and import the data only?

If yes, what is the command to do it? Something like:

nohup bash -c "mysqldump -uroot -p <dbname> <table name> --no-create-info --replace | sed 's/NO_AUTO_CREATE_USER//g' | gzip -9 > <dump file>.gz"

?

2) Export data only to .CSV files from Percona XtraDB Cluster 5.7.x and import those CSVs back into the target Percona XtraDB Cluster 8.0.19?

If yes, what is the command to do it?

3) Copy everything in the data directory path (e.g. the /var/data/mysql folder) of the Percona XtraDB Cluster 5.7.x node to the respective path (e.g. /var/data/mysql) on the Percona XtraDB Cluster 8.0.19 node, and then start the 8.0.19 cluster?

What if it is just one database?

Can I simply copy the fmklog folder from /var/data/mysql on the 5.7.x cluster to the respective path on the 8.0.19 cluster and then start it, using a command like:

  1. rsync -Pavzxl /var/data/mysql/fmklog/*.* /var/data/mysql/fmklog

But for this, the Percona XtraDB Clusters on both sides have to be stopped, right? Please note that /var/data/mysql on the target 8.0.19 cluster already has some data inside.

Is it possible? I am not sure if we can transfer a database this way between Percona XtraDB Cluster 5.7.x and Percona XtraDB Cluster 8.0.19.
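For reference on option 2, the stock MySQL mechanism for getting CSV out of 5.7 and back into 8.0 is SELECT ... INTO OUTFILE plus LOAD DATA INFILE. A hedged sketch follows: the table name tfmk_seq_no is guessed from the dump filename in this thread, the output path is a placeholder, and secure_file_priv on each server must permit the directory.

```shell
# Export one table to CSV on the 5.7 source. INTO OUTFILE runs server-side,
# so the path is on the database host and secure_file_priv must allow it:
mysql -uroot -p fmklog -e "
  SELECT * INTO OUTFILE '/var/lib/mysql-files/tfmk_seq_no.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
  LINES TERMINATED BY '\n'
  FROM tfmk_seq_no;"

# Import on the 8.0.19 target. The table must already exist there:
mysql -uroot -p fmklog -e "
  LOAD DATA INFILE '/var/lib/mysql-files/tfmk_seq_no.csv'
  INTO TABLE tfmk_seq_no
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
  LINES TERMINATED BY '\n';"
```

Note that this moves rows only; the table definitions, triggers, views, and routines still have to be created on the 8.0 side by some other means.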

Answers

  • jrivera (Percona Support Engineer)
    You can use mysqldump to export only and then use the mysql client to import the dumped data. You can find examples on how to export data here: https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#mysqldump-option-examples
    For example:
    mysqldump --single-transaction --master-data=2 --databases $dbname > dbname.sql
    Restore with:
    mysql -uroot -p $dbname < dbname.sql

    2) When you say data only do you mean you do not need the CREATE DATABASE/TABLE commands included in the dump? If so then use --no-create-info in the mysqldump command. You can also take a dump of the table structure only with --no-data.

    3) No! It is not advisable to copy a single folder from the mysql data directory and expect that it will work on the new host.

    4) Since you are migrating the data from 5.7 to 8.0 it is best to use mysqldump also considering that the 8.0 cluster already has existing data.

  • DBA100 (Patron)
    hi,

    "mysqldump --single-transaction --master-data=2 --databases $dbname > dbname.sql": I knew this, and I even gzip the output, but that is not the whole point. The point is that the restore takes a long time and keeps making the PuTTY session drop out.

    "2) When you say data only do you mean you do not need the CREATE DATABASE/TABLE commands included in the dump? If so then use --no-create-info in the mysqldump command. You can also take a dump of the table structure only with --no-data.": Data only, to save space; the table schema and all other logic can be scripted out and redeployed in the target 8.0.19 cluster. How? And how do I leave out all the other DB logic, e.g. triggers, views, and stored procedures? What I am missing at this moment is the data, and the mysqldump file is 20GB (around 10-15GB gzipped, in one single file). Importing with mysql -uxxx -p < ... hits a lot of errors on 8.0.19 when the dump from 5.7.x recreates triggers, views, functions, and stored procedures, and that stops the whole import!

    "3) No! It is not advisable to copy a single folder from the mysql data directory and expect that it will work on the new host.": So rsync IS NOT GOING TO WORK? If I want rsync to work that way, is there an intermediate version between 5.7.x and 8.0.19 I should try first, and then repeat the same step from that version to upgrade to 8.0.19? E.g.: 5.7.x -> a stable version, migrating the data by rsync (which will upgrade the metadata) -> 8.0.19?

    "Since you are migrating the data from 5.7 to 8.0 it is best to use mysqldump also considering that the 8.0 cluster already has existing data.": So I should not mysqldump/import the mysql, sys, and performance_schema databases, which differ between MySQL versions, right? The rest of the databases are OK to import, right?

    "If so then use --no-create-info in the mysqldump command.": Something like:

    mysqldump -uroot -p --databases <dbname> --no-create-info --replace | sed 's/NO_AUTO_CREATE_USER//g' | gzip -9 > UAT_fDB_bak.sql.gz

    Previously we did:

    mysqldump -uxxx -p --databases <dbname> | gzip > UAT_bak.sql.gz

    So there is no need to export to a .CSV file and import the .CSV file back?
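On skipping the version-specific system schemas, one possible sketch is to filter the output of SHOW DATABASES before dumping. The loop is shown as a comment because it needs live, non-interactive credentials (e.g. via ~/.my.cnf); database names other than fmklog in the demo list are made up.

```shell
# Dump every user database, skipping the system schemas that differ
# between MySQL versions. Shown as a comment since it needs a live server:
#   mysql -uroot -N -e "SHOW DATABASES" \
#     | grep -vE '^(mysql|sys|information_schema|performance_schema)$' \
#     | while read db; do
#         mysqldump -uroot --single-transaction "$db" | gzip -9 > "$db.sql.gz"
#       done
# The schema filter itself, demonstrated on a sample list:
printf 'mysql\nsys\nfmklog\nperformance_schema\ninformation_schema\n' \
  | grep -vE '^(mysql|sys|information_schema|performance_schema)$'
# prints: fmklog
```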
  • jrivera (Percona Support Engineer)
    Instead of using nohup, try GNU screen instead. It will allow you to "exit" putty and leave the dump/restore to continue processing. You still need to check if the dump was successful or not.
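    A minimal screen workflow for the restore in this thread might look like the following; the session name "restore" is arbitrary.

```shell
# Start a named screen session:
screen -S restore
# Inside the session, run the restore as usual:
gunzip < UAT_fmkLogDB_fmklog_tfmk_seq_no_bak.sql.gz | mysql -u root -p fmklog
# Detach with Ctrl-a then d; PuTTY can now be closed safely.
# Later, reattach to check on progress:
screen -r restore
```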
    Data only dump sample, please test to confirm it works.
    mysqldump --single-transaction --no-create-info --no-create-db --skip-triggers --master-data=2 $dbname > data.sql
    Regardless, it will still be a relatively large file even without table objects.

    Using rsync to transfer a specific folder inside the mysql data directory is not advisable and it does not work for this case. MySQL stores metadata in other files like ibdata1 and copying the folder and all files in that specific folder will render the database as "inconsistent".

    If the 8.0 cluster is a fresh install it is still advisable to use mysqldump and restore procedure.
  • DBA100 (Patron)
    "Instead of using nohup, try GNU screen instead"
    you mean SCREEN ? 

    "It will allow you to "exit" putty and leave the dump/restore to continue processing. You still need to check if the dump was successful or not."

    yeah, but how do I check whether it is working or not? Redirect with > logfile.log and check that log file later? So you are saying screen is better than nohup?
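    One way to make the nohup run checkable afterwards is to log both the output and the pipeline's exit status; a sketch, where restore.log is an arbitrary name and the real restore is commented out because it needs the server.

```shell
# Real restore, logged (commented out here since it needs the server):
#   nohup bash -c 'gunzip < UAT_fmkLogDB_fmklog_tfmk_seq_no_bak.sql.gz \
#     | mysql -uroot -p fmklog; echo "exit=$?"' > restore.log 2>&1 &
# Check later with:  tail restore.log   (exit=0 means mysql finished cleanly)
# The status-capture pattern itself:
bash -c 'true; echo "exit=$?"'
# prints: exit=0
```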

    "mysqldump --single-transaction --no-create-info --no-create-db --skip-triggers --master-data=2 $dbname > data.sql"

    so for me , it will be:

    nohup bash -c " mysqldump -uroot -p  dbname tablename  --single-transaction --no-create-info --no-create-db --skip-triggers  --replace | sed  's/NO_AUTO_CREATE_USER//g' | gzip -9 >  UAT_fmkLogDB_fmklog_tfmk_seq_no_bak.sql.gz"

    ?
    "Regardless, it will still be a relatively large file even without table objects."
    so mysqldump still can't handle 60GB of data dump and restore ?

  • DBA100 (Patron)
    hi,

    this one:

    nohup bash -c "mysqldump -uroot -p dbname tablename --single-transaction --no-create-info --no-create-db --skip-triggers --replace | sed 's/NO_AUTO_CREATE_USER//g' | gzip -9 > UAT_fmkLogDB_fmklog_tfmk_seq_no_bak.sql.gz"

    seems to give me only a 6MB output file; I don't think that is all the data!
  • jrivera (Percona Support Engineer)
    That is data for only one table though - dbname tablename was used. You can check the sql file to confirm if entire data was dumped with that command.
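    To check what actually landed in the 6MB file, the compressed dump can be inspected in place. A sketch: the grep pattern assumes standard mysqldump INSERT formatting, and the real commands are shown as comments next to a runnable sample.

```shell
# Peek at the real dump without fully extracting it:
#   gunzip -c UAT_fmkLogDB_fmklog_tfmk_seq_no_bak.sql.gz | head -n 40
#   gunzip -c UAT_fmkLogDB_fmklog_tfmk_seq_no_bak.sql.gz | grep -c '^INSERT'
# The same counting pattern on a tiny sample:
printf 'INSERT INTO t VALUES (1);\nINSERT INTO t VALUES (2);\n' \
  | gzip -9 > /tmp/sample.sql.gz
gunzip -c /tmp/sample.sql.gz | grep -c '^INSERT'
# prints: 2
```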
  • DBA100 (Patron)
    and I can't see why the imported data is smaller than the original.
  • jrivera (Percona Support Engineer)
    I also noticed you used --replace in your mysqldump command. This will use REPLACE INTO instead of INSERT INTO statements. Are you sure this is what you want to use?
  • DBA100 (Patron)
    edited August 17
    "This will use REPLACE INTO instead of INSERT INTO statement. Are you sure this is what you want to use?"
    Actually yes, assuming that there is data inside; so you suggest I do not add --replace at all?
    But the target table also has nothing inside, so replace is also fine, right? I think this option is about replacing any existing objects.

    What if I just want the table schema?
    My plan is:
    1) only migrate the tables first, with no data. The --all-databases option seems to export a lot of things the new Percona MySQL does not like, and it makes the dump file BIG! Why is our full dump .gz file 30GB when we have very little MySQL logic, while the data-only dump file is only 6MB?!
    2) then manually recreate the MySQL logic using MySQL Workbench! A lot of the earlier errors happened because the format in which Percona XtraDB Cluster 5.7 mysqldump writes the MySQL logic is not accepted by Percona XtraDB Cluster 8.0.19! I had to open the .sql and remove the offending parts one by one, like NO_AUTO_CREATE_USER, which MySQL 8.0.19 rejects.
    3) then dump the data only, as .sql or .gz, and import it, just as I did!

    Given this:

    "why is our full dump .gz file 30GB when we have very little MySQL logic, while the data-only dump file is only 6MB"

    the import of data can be very quick!
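    The NO_AUTO_CREATE_USER cleanup described above can be automated with the same sed approach used elsewhere in this thread. A self-contained check of the substitution; the sample line only mimics what a 5.7 dump can contain.

```shell
# Remove the 5.7-only sql_mode token (and its trailing comma, if any)
# so that MySQL 8.0 accepts the statement:
echo "SET sql_mode='NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'" \
  | sed 's/NO_AUTO_CREATE_USER,\{0,1\}//g'
# prints: SET sql_mode='NO_ENGINE_SUBSTITUTION'
```

Piping the whole dump through this sed, as in the commands earlier in the thread, avoids editing the 20GB file by hand.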






  • DBA100 (Patron)
    One more thing: is there any newer edition of mysqldump for Percona XtraDB Cluster 5.7.x? Is there a standalone mysqldump binary RPM that can be downloaded and installed so that the import into 8.0.19 is much easier?
  • DBA100 (Patron)
    Sir, what if the data migration size is 6TB? Is using mysqldump to back up from Percona XtraDB Cluster 5.7.x and restore to 8.0.19 still the best method?
  • DBA100 (Patron)
    hi,
    any idea for me ?

MySQL, InnoDB, MariaDB and MongoDB are trademarks of their respective owners.
Copyright ©2005 - 2020 Percona LLC. All rights reserved.