I am upgrading some extremely large MySQL DBs from 5.0 to Percona 5.1. These contain hundreds of databases, hundreds of GB of data, and many tens of thousands of tables.
The upgrade went fine, new binaries are in place and everything is working including replication with the still-5.0 master.
However, during the mysql_upgrade script (or if I run mysqlcheck -A --check-upgrade directly) I get the "needs repair" message for perhaps 10% of my tables, as described in Peter's post here:
Following his instructions to run "alter table X engine=InnoDB" does fix these up, but the problem is that I have 10k tables needing this done and the process is extremely slow.
I've enabled expand_fast_index_creation and moved tmpdir to an array of SSDs (following advice from Alexey at http://www.mysqlperformanceblog.com/2011/11/06/improved-innodb-fast-index-creation/), and that combination did make things faster, but nowhere near fast enough - at the current pace it will take weeks to complete. I am running 15 alters at a time by piping the list of tables that need them through xargs. I have binary logging disabled and the MySQL data directory itself is also on SSDs.
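For reference, the setup before kicking off the alters was roughly this (the SET GLOBAL call is just one way to enable the Percona feature on my build, and the paths are placeholders; the tmpdir and log-bin changes live in my.cnf and need a restart):

mysql -e "SET GLOBAL expand_fast_index_creation = ON;"
# tmpdir pointed at the SSD array and log-bin commented out in my.cnf, then mysqld restarted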
Is there anything else I can do to make this faster? Is this process necessary and what are the consequences of skipping it?
The upgraded DB appears to work perfectly, including the tables that mysqlcheck says need this slow process performed, so I don't understand why it is necessary.
Any help is greatly appreciated. I am doing this migration on an extremely hefty box so assume I have all options available in terms of RAM/CPU/Disk.
As Peter responded in a comment on the same thread you are referring to, the answer is something like "it will probably work, but down the line it might return wrong data", which doesn't feel very reassuring from a user's point of view.
So I would recommend continuing with the repairs even though it's painfully slow.
An alternative, if you can live with the risk, is to start running production on the server and fix the tables in the background - but I'm guessing that is something like what you are doing at the moment?
Thanks, I had missed that comment on Peter's post. I'm letting the alters run to completion. The expand_fast_index_creation setting and the faster tmpdir location have helped enough. I was also basing my timings on some especially large databases that were front-loaded in the list, so now that those are complete it's moving faster. It looks like perhaps 2-3 days to complete the process, which is easier to live with for a MySQL instance this large.
For anyone else needing to do this with a large DB containing a lot of tables, this is the approach I ended up with.
turn on expand_fast_index_creation, and put tmpdir on the fastest disks available
write the list of tables that need to be altered to a file by grepping the output of
mysqlcheck -A --check-upgrade
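Something along these lines works; the exact pattern depends on the wording mysqlcheck prints for the flagged tables, so inspect the raw output first (file names are just placeholders):

mysqlcheck -A --check-upgrade > upgrade_check.txt 2>&1
grep -B1 -iE "repair|upgrade" upgrade_check.txt | awk '$1 ~ /\./ {print $1}' | sort -u > tables_to_alter.txt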
write a little process script, something like this (assuming all your tables are InnoDB - you don't want to accidentally convert tables you wish to keep as MyISAM for some reason):
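A minimal sketch of what that can look like; credentials are assumed to come from ~/.my.cnf (or add -u/-p), the file names just match what I mention below, and the information_schema check is one way to guard against touching non-InnoDB tables:

#!/bin/bash
# process.sh - invoked by xargs with one "db.table" argument per run
TABLE="$1"
DB="${TABLE%%.*}"
TBL="${TABLE#*.}"

# Skip anything that is not already InnoDB so MyISAM tables are left alone
ENGINE=$(mysql -N -B -e "SELECT ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA='$DB' AND TABLE_NAME='$TBL'")
if [ "$ENGINE" != "InnoDB" ]; then
    exit 0
fi

# Rebuild the table and record it so an interrupted run can be resumed
if mysql -e "ALTER TABLE \`$DB\`.\`$TBL\` ENGINE=InnoDB"; then
    echo "$TABLE" >> finished_tables
else
    echo "$TABLE" >> failed_tables    # convenience log of anything that errored
fi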
then cat your list of tables that need to be altered through
xargs -n 1 -P CORE_COUNT ./process.sh
This way, if the script is interrupted or you need to stop it occasionally, you can remove the tables in "finished_tables" from your main list and start it back up later. You could also just run mysqlcheck again, but in my case that takes 30+ minutes, so it was a lot faster to maintain my own list.
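Resuming after a stop is then just a matter of filtering the finished tables out of the main list and re-running the xargs line, e.g. (file names match the sketches above):

# -F fixed strings, -x whole-line match, -v invert, -f read the finished list as patterns
grep -F -x -v -f finished_tables tables_to_alter.txt > remaining_tables.txt
cat remaining_tables.txt | xargs -n 1 -P CORE_COUNT ./process.sh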