Optimize table fails!

Hi everyone,

I have a table with 3 indexes on a slave database. When I issue OPTIMIZE TABLE from the master server on that table (which has no indexes on the master), the optimize goes through OK on the master, but fails on the slave:

110127 6:33:12 [Warning] Warning: Optimize table got errno 2 on log_collector.logs, retrying
110127 6:33:13 [ERROR] myisam_sort_buffer_size is too small

My myisam_sort_buffer_size is set to 4GB on the slave, and I don't think I can set it any larger.
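For reference, this is roughly what I checked and tried on the slave before re-running the optimize (the session-level override is just a sketch, and the value shown is only an example; a replicated OPTIMIZE would use the global value, not a session one):

```sql
-- check the current value on the slave
SHOW VARIABLES LIKE 'myisam_sort_buffer_size';

-- try raising it for this session only, then re-run the optimize manually
SET SESSION myisam_sort_buffer_size = 4294967295;
OPTIMIZE TABLE log_collector.logs;
```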

The table is approximately 80GB with 1 billion rows, split into 12 partitions. The indexes take approximately 30GB of that size.

I understand there is a problem with rebuilding the index, so one option would be to remove indexes to reduce the total index size, but I really need the indexes there.

Running MySQL 5.5.8 here.

Any suggestions would be greatly appreciated.
Thank you.


I would check whether OPTIMIZE works on the slave when run manually; if it still fails, run CHECK TABLE, as it might be some kind of corruption.
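Something along these lines, run directly on the slave rather than through replication (CHECK TABLE's default MEDIUM scan is usually enough; EXTENDED would be very slow on a table this size):

```sql
-- run OPTIMIZE manually on the slave first
OPTIMIZE TABLE log_collector.logs;

-- if it still fails, check the table for corruption
CHECK TABLE log_collector.logs;
```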

Thanks for the reply, Peter.

Before posting I actually tried running OPTIMIZE directly on the slave, and the result was the same.
I did not run check table, will try it today.

Peter, do you think this may be related to MySQL limitations on index size, or to some sort of data corruption within the table? Also, what is supposed to happen if the index being rebuilt is larger than myisam_sort_buffer_size?
Can this also be related to the fact that the table is partitioned?
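One thing I was wondering about trying, since the table is partitioned, is rebuilding it one partition at a time so that each index sort stays much smaller. Just a sketch, and the partition name is a placeholder; I have not verified this avoids the error:

```sql
-- rebuild one partition at a time instead of the whole table
-- (p0 is a placeholder for the actual partition name)
ALTER TABLE log_collector.logs REBUILD PARTITION p0;
```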

Thank you

Hi Peter,

Yes I did, http://bugs.mysql.com/bug.php?id=59925

Hopefully they can confirm it and it will get some traction.

The problem I had is that the LOAD, DELETE, OPTIMIZE sequence takes a very, very long time for tables of this size, and that's on an HP DL580 with 64GB RAM and an 8-disk hardware RAID 5 with write buffer! (I know, RAID 5 :), but there are reasons for it.)

It took so long that I started copying the tables at the OS level; even then, DELETE + OPTIMIZE took 3-3.5 hours. I spent almost 4 days to more or less understand the behavior, but I was not able to pinpoint the exact index/data size ratio at which it fails. I think that would have helped them a lot, but I just don't have any more time :(