I have been using Percona Server with the TokuDB engine for 2 years, and now I have a problem. On LOGSERVER I created a table with the TokuDB LZMA row format (the COMPRESSED option). The table has 200M rows and our disk is now 91% full, so I need to reduce disk usage. I deleted 100M rows, but disk usage did not decrease, and even a simple SELECT on the table is now very slow.
Why is the table so slow? This query takes almost 10 minutes to execute:
select * from LOGTABLE limit 0,10
This is the delete query I used:
delete from LOGTABLE where ID < 100M
ID is the primary key and an auto-increment field.
Deleting rows from the table will not reduce the table size. You should dump your table (e.g. with mysqldump) and load it back to reclaim the space. In the future, consider using partitioning if you need to clean up old data and get the free space back immediately.
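If you go the partitioning route, a minimal sketch might look like the following. The column list, partition boundaries, and compression option are assumptions for illustration; adjust them to your real schema:

```sql
-- Hypothetical schema: only ID is known from the thread,
-- logged_at/message are made-up example columns.
CREATE TABLE LOGTABLE_partitioned (
  ID BIGINT NOT NULL AUTO_INCREMENT,
  logged_at DATETIME NOT NULL,
  message TEXT,
  PRIMARY KEY (ID)
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA
PARTITION BY RANGE (ID) (
  PARTITION p100m VALUES LESS THAN (100000000),
  PARTITION p200m VALUES LESS THAN (200000000),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```

With this layout, removing old data becomes a partition drop, which releases the partition's files immediately instead of rewriting rows one by one.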
This may actually cause the files to grow during the optimization process, up to 2x the actual data size. The files may then slowly shrink over time on each checkpoint as new/dirtied nodes get relocated to the head of the files. Otherwise, Peter is correct: a logical dump and reload of the data will re-create all indexes. Partitioning by age is indeed the best way to handle old/rotating data.
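As a sketch of the rotation itself, assuming the table is already partitioned by RANGE on the auto-increment ID (the partition names below are made up, not from the thread):

```sql
-- Drop the oldest range: its files are freed immediately,
-- unlike a row-by-row DELETE.
ALTER TABLE LOGTABLE DROP PARTITION p100m;

-- Split the open-ended pmax partition to make room for new rows.
ALTER TABLE LOGTABLE
  REORGANIZE PARTITION pmax INTO (
    PARTITION p300m VALUES LESS THAN (300000000),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );
```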
But I ran OPTIMIZE TABLE LOGTABLE on one big table, and unfortunately the files grew to 2-3x the size. I have 10 big tables, all of which are written to constantly online, with no redundant copy. My plan is to create a new table (for example LOGTABLE_1, with partitioning), copy the necessary data into it with INSERT ... SELECT, then drop the old table and rename the new one to the old name. Is this the correct way?
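That plan could be sketched roughly as below. The column list, the 100M ID cutoff, and the assumption that writes can be briefly paused before the rename are all mine, not confirmed by the thread:

```sql
-- Hypothetical target table; adjust columns and boundaries to the real schema.
CREATE TABLE LOGTABLE_1 (
  ID BIGINT NOT NULL AUTO_INCREMENT,
  logged_at DATETIME NOT NULL,
  message TEXT,
  PRIMARY KEY (ID)
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA
PARTITION BY RANGE (ID) (
  PARTITION p200m VALUES LESS THAN (200000000),
  PARTITION p300m VALUES LESS THAN (300000000),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- Copy only the rows worth keeping.
INSERT INTO LOGTABLE_1 SELECT * FROM LOGTABLE WHERE ID >= 100000000;

-- Rows written to LOGTABLE after the copy started are NOT in LOGTABLE_1;
-- pause writes (or re-copy the ID tail) before the swap.
RENAME TABLE LOGTABLE TO LOGTABLE_old, LOGTABLE_1 TO LOGTABLE;
DROP TABLE LOGTABLE_old;  -- dropping the old table frees its files on disk
```

The RENAME TABLE statement swaps both names in one atomic operation, so readers never see a missing table.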
So do I understand correctly that deleting rows alone will never decrease disk usage?