Hi Matthew, thanks for the response… as for why it feels "lacking": sorry, that's a longer answer, and maybe it's just lack of experience on my side…
- Basically I feel that many simple, straightforward and "always supported" features don't work as "expected". That's actually on MariaDB, not Percona, so maybe your plugin behaves differently.
It feels like InnoDB, Aria, MyISAM, MEMORY and TokuDB are interchangeable: I can switch a table to the engine with the right performance characteristics and it will work more or less the "same way", and I can't break anything with config…
With MyRocks it seems even getting started is problematic. For example:
ALTER TABLE trading._raw_order_hist ENGINE = ROCKSDB;
fails with:
Operation aborted: Failed to acquire lock due to rocksdb_max_row_locks limit
The same thing happens when you want to "copy" data with INSERT INTO … SELECT.
I know there's bulk load mode, with some constraints. After reading the docs I wasn't even sure whether data loaded in bulk mode would end up correct. OK, unsorted bulk load… I loaded the data and it worked.
Then there's another issue… when doing simple UPDATEs (like setting a column to zero for debugging) I get the same errors even on 800k-row tables, while rocksdb_max_row_locks is set to more than a million. Even on a full-table UPDATE, where it should be easy to just lock the whole table when nothing else is going on…
So the point is: if you have to "micromanage" simple operations this way, it should at least be predictable, to the point that if max row locks > total rows, the operation should succeed every time on a dev box where you're not holding any other locks. Ideally it would just "switch" to a full-table lock when you run "out of" row locks (I know that sounds super simple and is probably horribly complicated to do… other engines manage it, however).
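For reference, the only workarounds I know of are per-session ones (a sketch: the table and column names are placeholders, and I'm assuming the standard rocksdb_max_row_locks / rocksdb_commit_in_the_middle session variables are available in this build):
-- option 1: raise the per-transaction lock limit for just this session
SET SESSION rocksdb_max_row_locks = 10000000;
UPDATE trading._raw_order_hist SET debug_flag = 0;
-- option 2: let MyRocks auto-commit in chunks so it never holds that many locks at once
SET SESSION rocksdb_commit_in_the_middle = 1;
UPDATE trading._raw_order_hist SET debug_flag = 0;
SET SESSION rocksdb_commit_in_the_middle = 0;
Option 2 of course gives up atomicity of the statement, which is its own surprise.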
That's just one aspect. I was testing MyRocks for a long time, so I'm not sure whether all of this is still true, but:
- I read that compaction of the LSM tree is limited by the default settings to some rather small size (something like 100 GB). The other thing is that tree compaction (I tested this) required twice as much storage as the full database, so if I have e.g. 6 TB of data I'd need to keep another 6 TB of SSD space idling, just so the server can "balance" the levels and reclaim free space after deleting rows.
- The compaction process is resource-intensive and affects the whole server, so it's hard to plan because everything is affected. I can't compact e.g. just a single, rarely used table, where it wouldn't matter that compaction is slow (see the sketch after this list). I've also read claims that a large data size (>1 TB) slows down the whole server.
- A couple of times, when I needed to kill the server or the VM was shut down uncleanly by mistake, all the MyRocks tables ended up "broken" and inaccessible (the plugin wasn't able to load at all), unlike even MyISAM, where such issues were limited to a single table and were repaired automatically.
So unlike with all other engines, a single (large) table can affect the performance and validity of other tables. I have data that is large, frequently updated and not very important (it can be regenerated or simply restored), and it will affect small, critical tables.
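For context, this is the sketch mentioned above: as far as I understand, manual compaction in MyRocks is per column family rather than per table, and by default every table lives in the shared 'default' column family, which is exactly why one big table drags everything else along. Assuming the rocksdb_compact_cf trigger works the same in MariaDB's build:
-- kick off a manual compaction of the shared 'default' column family;
-- since all tables live there by default, this touches every MyRocks table on the server
SET GLOBAL rocksdb_compact_cf = 'default';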
As for the other comments: don't expect MyRocks to store data the same way as InnoDB or MyISAM; maybe each (or chosen) table could have its own LSM tree stored in a separate folder. I'm not sure what the overhead of that approach would be, managing e.g. 500 LSM trees per server, so around 100k-200k files per server.
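The closest existing mechanism I'm aware of is pinning an index to its own column family via an index comment, which at least gives that table its own LSM tree (still inside the shared data directory, not a separate folder). A sketch with made-up table and column names:
CREATE TABLE trading.order_hist_cf (
  id BIGINT NOT NULL,
  payload TEXT,
  PRIMARY KEY (id) COMMENT 'cfname=cf_order_hist'  -- dedicated column family = dedicated LSM tree
) ENGINE = ROCKSDB;
-- that one column family can then be compacted on its own:
SET GLOBAL rocksdb_compact_cf = 'cf_order_hist';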
As for asking whether it's being developed: maybe I understand it incorrectly, but I believed RocksDB is a key-value database maintained by Facebook, and then there's this probably very complicated middleware (the MySQL plugin) which lets MySQL use it as a backend for a transactional, row-based database. Does Facebook also develop the MySQL plugin for MariaDB, Oracle MySQL and Percona Server? What I'm afraid of is that TokuDB was deprecated because there was a lack of manpower to maintain it and it got very complicated (as far as I understand the problem with keeping TokuDB alive)… now if I use MyRocks and the MySQL plugin gets abandoned, there's no migration path, because nothing comparable is available…
I believe MyRocks was picked over TokuDB because it was "simpler". However, I also believe MyRocks could become even more complicated than TokuDB later, as features get added… so I'm currently thinking about a proper migration path, which would be InnoDB + ClickHouse, with a possible addition of MyRocks… though I'm not sure whether I shouldn't focus on InnoDB only, as that seems to have the best support, and even MariaDB is slowly becoming "InnoSQL", really.
As for "MyRocks is very reliable, otherwise the companies in the results above would not be using it": so it's just a steep learning curve, and all the issues can be mitigated with the proper level of config/tuning (so that, properly configured, it can be on par with TokuDB, for example).
E.g. from my experience with TokuDB: it has its issues, e.g. something's broken with statistics and sometimes the table row count is totally off, so the optimizer creates a suboptimal execution plan in some cases… but that can be fixed by manually issuing a recount. I never lost any data with it (maybe I started using MyRocks too early, given that I was testing it for a long time)…
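Concretely, the recount I mean is the one from Percona's TokuDB (a sketch; I'm assuming MariaDB's TokuDB build accepts the same variable, and the table name is a placeholder):
-- force TokuDB to recount logical rows instead of trusting stale statistics
SET SESSION tokudb_analyze_mode = TOKUDB_ANALYZE_RECOUNT_ROWS;
ANALYZE TABLE trading.order_hist;
SET SESSION tokudb_analyze_mode = TOKUDB_ANALYZE_STANDARD;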