
mysqldump speed for a DB with thousands of tables

quentin (Entrant, Inactive User, Role: Beginner)
Hello!
I am running WordPress MultiUser (WPMU) to host a fairly large French blog-hosting network (close to 100,000 users). WPMU creates 10 tables per user (it basically duplicates WordPress's tables for each user).
That adds up to a lot of tables, as you can imagine.
Now I'm migrating my DB to a custom multi-DB setup, basically separating users across multiple DBs instead of keeping them all in one.
The thing is, when I run mysqldump for just a few tables on the initial DB (trying to migrate one user's data, for instance), the mysqldump takes *really long* to complete.
It takes me roughly 1 hour to migrate 50 users, and given the total number of users, that's going to take forever to move everyone.
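For concreteness, migrating one user means dumping that user's ten tables by name. A minimal sketch of building such a command (the `wp_<blog_id>_` prefix, the table list, and the database name are assumptions for illustration, not necessarily the exact WPMU schema):

```python
# Sketch: build the mysqldump command for one WPMU user's tables.
# The wp_<blog_id>_ prefix and this table list are assumptions,
# not necessarily the exact WPMU schema.
WPMU_TABLES = [
    "posts", "comments", "options", "postmeta", "commentmeta",
    "terms", "term_taxonomy", "term_relationships", "links", "categories",
]

def dump_command(db, blog_id):
    """Return the argv for dumping one user's tables from `db`."""
    tables = [f"wp_{blog_id}_{t}" for t in WPMU_TABLES]
    return ["mysqldump", db] + tables

print(" ".join(dump_command("wpmu", 123)))
```

mysqldump accepts an explicit table list after the database name, so one invocation per user is the natural shape of this migration.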

So I've done some investigating, and using SHOW PROCESSLIST I've seen that most of mysqldump's time seems to be spent running queries like:
SHOW TABLE STATUS LIKE 'tablename'
SHOW TABLES LIKE 'tablename'

These are probably taking very long because of the large number of tables in the DB.
Is there a way I can make these faster? Or have mysqldump use something else entirely?
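Since the cost appears to include per-table metadata lookups plus per-invocation startup, one hedged mitigation is to batch many users into a single mysqldump run instead of one run per user, amortizing that overhead. A sketch of how the batching could be organized (prefix, table names, and batch size are all assumptions):

```python
# Sketch: group users into batches so each mysqldump invocation
# covers many users' tables at once. Prefix/table names assumed.
def batched_dump_commands(db, blog_ids, tables_per_user, batch_size=50):
    """Return one mysqldump argv per batch of blog_ids."""
    cmds = []
    for i in range(0, len(blog_ids), batch_size):
        batch = blog_ids[i:i + batch_size]
        tables = [f"wp_{b}_{t}" for b in batch for t in tables_per_user]
        cmds.append(["mysqldump", db] + tables)
    return cmds

cmds = batched_dump_commands("wpmu", list(range(1, 101)), ["posts", "comments"])
print(len(cmds))  # → 2 (100 users in batches of 50)
```

Each dump file then holds a batch of users, which is convenient when the target multi-DB layout also groups users into shards.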

Thanks!


  • joemasters (Entrant, Current User, Role: Beginner)
    Have you thought about skipping mysqldump and creating the tables in your new database from a SELECT that UNIONs the tables you want combined? If the general structures are the same, that should be much quicker.
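joemasters's server-side copy could look roughly like the statements below, generated here as strings (the database and table names are placeholders; `CREATE TABLE ... LIKE` plus `INSERT ... SELECT` is one way to express it, avoiding the mysqldump round-trip entirely):

```python
# Sketch of a server-side table copy, per joemasters's suggestion:
# no mysqldump round-trip. All names are hypothetical placeholders.
def copy_statements(src_db, dst_db, table):
    """Return SQL statements that recreate and fill `table` in dst_db."""
    return [
        f"CREATE TABLE {dst_db}.{table} LIKE {src_db}.{table};",
        f"INSERT INTO {dst_db}.{table} SELECT * FROM {src_db}.{table};",
    ]

for stmt in copy_statements("wpmu", "wpmu_shard3", "wp_123_posts"):
    print(stmt)
```

Because the data never leaves the server, this sidesteps both the dump/reload cost and the per-table SHOW TABLE STATUS lookups mysqldump performs.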

MySQL, InnoDB, MariaDB and MongoDB are trademarks of their respective owners.
Copyright ©2005 - 2020 Percona LLC. All rights reserved.