I’m running a MySQL InnoDB database for a SugarCRM installation, and it’s currently suffering due to limited hardware.
We implemented a custom email-capture system, and now we have some large tables. Three are 1 GB each with about 4 million rows each, and one table is 26 GB with 1 million rows. Together these make up the majority of the whole database (which is about 32 GB).
The database occasionally struggles when executing multi-join queries against these tables. We believe this is down to heavy disk I/O plus fragmented indexes, which in turn cause ANALYZE TABLE to produce statistics that lead the optimizer to choose inappropriate query plans.
To address this, we plan to upgrade the database server with lots of RAM so we can ramp up innodb_buffer_pool_size and handle much more of the processing in memory (32 GB?). However, this bug (http://bugs.mysql.com/bug.php?id=29847) makes it look like crash recovery on this database with a large buffer pool would take forever.
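For reference, the relevant settings on the new box would look roughly like this; the sizes are illustrative assumptions for a 32 GB RAM server, not tested values:

```ini
# /etc/mysql/my.cnf -- illustrative fragment, values are assumptions
[mysqld]
# Large pool so the working set of the big tables fits in memory.
# This is the setting that bug #29847 says can make crash recovery slow.
innodb_buffer_pool_size = 24G

# Redo log size also affects how much must be scanned on recovery;
# note that in 5.1 changing it requires a clean shutdown and removing
# the old ib_logfile* files first.
innodb_log_file_size = 256M
```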
For the new server, we would use Ubuntu 10.04 LTS, which ships with MySQL 5.1.41, so we can’t use InnoDB Plugin 1.0.7, which may include a fix for this.
If I restart the MySQL daemon manually, I can temporarily edit the MySQL config file to set innodb_buffer_pool_size low just for the recovery process. However, if the server were to crash, rebooting the box would automatically restart the database daemon with the large buffer pool value, and who knows how long recovery would take?
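The manual procedure I have in mind, sketched as a shell session (paths, log locations, and the small pool size are assumptions for illustration):

```shell
# Stop the daemon (or it has just crashed), then run recovery by hand
# with a deliberately small buffer pool, overriding my.cnf from the
# command line so the config file itself doesn't need editing:
sudo /etc/init.d/mysql stop
sudo mysqld_safe --innodb_buffer_pool_size=512M &

# Watch the error log for InnoDB recovery progress messages:
tail -f /var/log/mysql/error.log

# Once recovery completes, shut down cleanly and restart via the init
# script so the normal (large) buffer pool from my.cnf takes effect:
mysqladmin -u root -p shutdown
sudo /etc/init.d/mysql start
```

The open question is what happens if the init script has already launched recovery with the large pool before I can intervene, which is what the question below is about.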
So the short of it (?!) is: can I safely “kill -9” the MySQL daemon while it’s in the middle of crash recovery, edit the config to reduce the buffer pool size, and then restart it so it recovers more quickly?