I am using the latest Percona Server 8.0 and am migrating a database that used TokuDB with LZMA compression to RocksDB. The main table has 50 billion rows (two INT columns): 1.1 TB of data uncompressed and around 100 GB compressed (excluding indexes, which total 300 GB compressed).
I was a bit disappointed to see that TokuDB will be discontinued, and I am now testing RocksDB to see whether it can replace it for our use case. I migrated all tables using ALTER TABLE … ENGINE=ROCKSDB. This worked fine except for the main large table, which ran out of disk space during conversion (we have a 1 TB Optane disk), since the server keeps the old table's data until the engine change has fully completed.
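For reference, the per-table conversion was just the straightforward engine change (table name illustrative):

```sql
-- Rewrites the whole table into RocksDB; the old copy is kept
-- on disk until the ALTER completes, so peak usage is roughly
-- old size + new size.
ALTER TABLE my_table ENGINE=ROCKSDB;
```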
I read the advice to drop the secondary indexes and re-add them after conversion, and this seemed like a fine approach. However, running ALTER TABLE … ADD KEY x(x,x) after the table was converted to RocksDB caused the machine to run out of /tmp space.
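The sequence I followed was roughly this (index and column names are placeholders, not the real schema):

```sql
-- 1. Drop secondary indexes so the engine change only has to
--    rewrite the base table data.
ALTER TABLE my_table DROP KEY idx_example;

-- 2. Convert the slimmed-down table.
ALTER TABLE my_table ENGINE=ROCKSDB;

-- 3. Re-create the secondary index on the RocksDB table; this is
--    the step that filled up /tmp.
ALTER TABLE my_table ADD KEY idx_example (col_a, col_b);
```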
I checked what was going on: the ADD KEY command copies all data of the compressed table, uncompressed(!), into a single tmp file in tmpdir, and then builds the index from that data (fortunately the index itself was compressed into SSTs immediately).
I moved tmpdir to a compressed BTRFS mount and ran it again. It ultimately produced a 1.4 TB tmp file, after which the index build started.
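In case anyone wants to reproduce the workaround, this is roughly the setup I used (device, mount point, and compression level are illustrative, not my exact values):

```
# /etc/fstab — BTRFS volume with transparent compression
/dev/nvme1n1  /mnt/btrfs-tmp  btrfs  compress=zstd  0  0

# my.cnf — point the server's tmpdir at the compressed mount
[mysqld]
tmpdir = /mnt/btrfs-tmp
```

With this, the huge uncompressed tmp file still gets written, but BTRFS compresses it transparently so it fits on disk.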
My main question: why is this tmp file not compressed? That sounds like an unwanted 'feature'. Is there an option to avoid it? And why is it generated at all?