Our system copies around 10B rows (each row consists of a few numbers) every day from BigQuery to MySQL. The rows are later fetched by primary key (key-value style queries) at 10-20K reads per second. The ingestion process exports the rows from BigQuery to CSV files and loads them into MySQL with the MyRocks (RocksDB) storage engine, which should be write-optimized, following the guidelines at https://www.percona.com/doc/percona-server/LATEST/myrocks/data_loading.html
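For reference, our load session roughly follows the bulk-load settings from the Percona guide linked above. This is a sketch, not our exact script: the file path and table name are placeholders, and we assume the CSV is already sorted by primary key (a requirement for `rocksdb_bulk_load`):

```sql
-- Sketch of the per-session load settings (per the Percona MyRocks guide).
-- '/data/export.csv' and table `kv` are placeholder names.
SET SESSION unique_checks = 0;
SET SESSION rocksdb_bulk_load = 1;   -- rows must arrive sorted by primary key

LOAD DATA INFILE '/data/export.csv'
  INTO TABLE kv
  FIELDS TERMINATED BY ',';

SET SESSION rocksdb_bulk_load = 0;   -- finalizes the bulk load
SET SESSION unique_checks = 1;
```

If the CSVs cannot be exported in primary-key order, MyRocks also has a `rocksdb_bulk_load_allow_unsorted` option, though sorted input is the recommended path.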
A current benchmark on a 16-core / 84GB RAM server shows a CSV load speed of 150-200K writes/sec, which is not enough: at 200K rows/sec, 10B rows take roughly 14 hours, leaving almost no headroom in a daily cycle. The bottleneck is index creation (loading without the index is much faster).
Any ideas how this can be improved?