RocksDB: ALTER TABLE fails silently and drops all data: Global seqno is required, but disabled

I altered a fairly small RocksDB table yesterday, and it reported that everything went OK, so I was happy. Today I did another, somewhat larger table and added a field there as well.
It reported that all went fine, but I noticed in phpMyAdmin afterwards that all the data was gone…
I checked the logs, and found the following message:

Plugin rocksdb reported: 'Failed to bulk load. status code = 4, status = Invalid argument: Global seqno is required, but disabled, IngestExternalFileOptions={move_files=1,failed_move_fall_back_to_copy=1,snapshot_consistency=0,allow_global_seqno=0,allow_blocking_flush=0,ingest_behind=0,write_global_seqno=1,verify_checksums_before_ingest=0,verify_checksums_readahead_size=0,verify_file_checksum=1,fail_if_not_bottommost_level=0}'

I double-checked yesterday's table change, and sure enough, that one also had no data in it anymore besides the new data from yesterday onwards… The logs contained an identical message, I found out.

Seems like a big issue!

What I did was:

set session rocksdb_bulk_load=1; 
ALTER TABLE `foo` ADD `bar` TINYINT NULL DEFAULT NULL AFTER `foobar`; 

Can I get my data back besides restoring a backup? Is this a bug? Should I set a certain variable for this to work?

My config:

rocksdb-override-cf-options='default={compression=kZSTD;bottommost_compression=kZSTD;compression_opts=-14:12:0;level_compaction_dynamic_level_bytes=true;write_buffer_size=128m;target_file_size_base=32m;max_bytes_for_level_base=512m;block_based_table_factory={format_version=5;index_block_restart_interval=16;cache_index_and_filter_blocks=1;filter_policy=bloomfilter:12:false;whole_key_filtering=1}}'
rocksdb_block_size=16384
rocksdb_max_open_files=-1
rocksdb_max_background_jobs=8
rocksdb_flush_log_at_trx_commit=2
rocksdb_large_prefix=1
open_files_limit = 102400
rocksdb_block_cache_size = 12G
rocksdb_max_total_wal_size=1G
transaction-isolation=READ-COMMITTED
rocksdb_max_row_locks=1073741824

Ver 8.0.34-26 for Linux on x86_64 (Percona Server (GPL), Release '26', Revision '0fe62c85')
Ubuntu 22.04.3 LTS

I performed a restore of the staging DB, but the restore fails as well, with the same error…

Hi @peterdk,
I’m not seeing anything in the manual that says you should set bulk loading ON when doing normal table alters. The manual says you should do this only when converting from other table engines, or when doing LOAD DATA.
The manual also states “The data may not be visible until bulk load mode is ended (i.e. the rocksdb_bulk_load is set to zero again)”

Additionally, “None of the data being bulk loaded can overlap with existing data in the table,” which is pretty much what an ALTER does, since it rewrites the existing rows.

And finally, “Inserting one or more rows out of order will result in an error and may result in some of the data being inserted in the table and some not.”
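
To illustrate, the intended pattern looks roughly like this (file and table names are just placeholders): load new, non-overlapping, primary-key-ordered rows, then turn the mode off again:

set session rocksdb_bulk_load=1;
LOAD DATA INFILE '/path/to/foo.csv' INTO TABLE `foo`;
set session rocksdb_bulk_load=0;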

It is recommended that you do not use bulk loading for a simple ALTER TABLE that adds columns. You should opt to use pt-online-schema-change instead.
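
Roughly, the same column addition with pt-online-schema-change would look something like this (D=mydb is a placeholder for your schema name, and you would normally do a --dry-run first):

pt-online-schema-change --alter "ADD COLUMN bar TINYINT NULL DEFAULT NULL AFTER foobar" D=mydb,t=foo --execute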

Hi @matthewb

Thanks for the reply. It's interesting. I thought I had read that the bulk load setting would speed up ALTER TABLE, but I might indeed have been mistaken. Strangely, I encountered the error as well when importing a logical backup of a table. I spun up an instance of MariaDB, and there all imports went through without issue, so I am not sure whether this was my error or Percona Server's. I'll try to reproduce the import error again in Percona and report back. This might take a few days.

Hi, I experimented a bit more and read more about it, and you are right: bulk load should not be used for ALTER TABLE commands. I guess I thought so because it was needed for TokuDB tables.

I do still get weird errors when importing data into a fresh table with bulk load enabled, but it might be caused by the /tmp folder not having enough space. I'll research a bit more.
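
If it does turn out to be temp space, I believe MyRocks can be pointed at a different directory for its DDL/bulk-load temporary files via rocksdb_tmpdir, something along these lines (the path is just an example):

set session rocksdb_tmpdir='/data/mysql-tmp';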

So, when inserting all data without using bulk_load_unsorted, everything works as expected. Table changes work as well, without setting bulk_load, as indicated by Matthew.
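
For reference, if bulk load is ever needed with rows that are not in primary-key order, my understanding is that rocksdb_bulk_load_unsorted is meant to be enabled together with it, roughly:

set session rocksdb_bulk_load_unsorted=1;
set session rocksdb_bulk_load=1;
-- inserts / LOAD DATA here
set session rocksdb_bulk_load=0;
set session rocksdb_bulk_load_unsorted=0;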

My DB server is humming along again.