mysqldump speed for a DB with thousands of tables

Hello!
I am running WordPress MultiUser (WPMU) to host a fairly large French blog hosting network (close to 100,000 users). WPMU creates 10 tables per user (basically duplicating WordPress's standard tables for each user).
This makes quite a lot of tables as you can imagine.
Now I'm migrating my DB to a custom multi-DB setup, basically separating users into multiple DBs instead of keeping them all in one.
The thing is, when I run mysqldump for just a few tables on the initial DB (to migrate one user's data, for instance), the dump takes a really long time.
It takes roughly 1 hour to migrate 50 users, and with the total number of users, it's going to take forever to move everyone.
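
For context, each user migration is basically a per-table dump along these lines (the DB name and table names are only illustrative, and only a few of the ten per-user tables are shown):

mysqldump -u root -p wpmu_db wp_123_posts wp_123_postmeta wp_123_comments wp_123_options wp_123_links > user_123.sql

The resulting file then gets loaded into whichever of the new DBs that user is assigned to.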

So I've done some investigating, and using SHOW PROCESSLIST I've seen that most of the time spent by mysqldump seems to go into queries like
SHOW TABLE STATUS LIKE 'tablename'
or
SHOW TABLES LIKE 'tablename'

These are probably taking a very long time due to the large number of tables in the DB.
Is there a way I can make these faster? Or have mysqldump use something else entirely?
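
For reference, the same statements can be reproduced by hand in the mysql client to confirm that this is really where the time goes (the table name is just an example):

-- run while mysqldump is working, to see what it is waiting on
SHOW FULL PROCESSLIST;
-- time the statement mysqldump issues for a single table
SHOW TABLE STATUS LIKE 'wp_123_posts';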

Thanks!

Have you thought about skipping mysqldump and creating the tables in your new database directly from a SELECT that unions the tables you want combined? If the general structure is the same, that should be much quicker.
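
For example, something along these lines (DB and table names are just placeholders):

-- create the target table in the new database with the same structure
CREATE TABLE newdb.wp_posts LIKE olddb.wp_1_posts;
-- fill it straight from the per-user tables in the old database
INSERT INTO newdb.wp_posts
SELECT * FROM olddb.wp_1_posts
UNION ALL
SELECT * FROM olddb.wp_2_posts;

And if you are only splitting users out rather than combining them, the same pattern without the UNION gives you a straight server-side copy of each table, with no dump file in between.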