I’ve been testing out the RC because I like the theory behind fractal tree indexes, so I would prefer to use that storage engine if possible. Initial testing bears out the theory well: we can do many more inserts per second than standard MongoDB once the indexes for the working set no longer fit in memory.
However, there appears to be one show-stopper that prevents us from treating it as more than an experiment for now. Once the oplog is fully populated and capped collection maintenance kicks in on it, replication performance drops off a cliff.
What we see in the logs on the replicas is lots of messages like:
STORAGE PerconaFT: Capped deleter has been optimizing for XX seconds, may be seriously falling behind
PerconaFT: Capped delete optimizer is XXMB behind, waiting for it to catch up somewhat
Performance on the primary remains unaffected, but obviously we can’t consider this viable for a production use case if the problem is unfixable.
We’ve tried various oplog sizes, and replication performance is great right up until the capped deletion starts to kick in.
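For reference, this is roughly how we varied the oplog size between runs; the 10240 MB value here is just illustrative, not a setting we’re recommending:

```shell
# Start mongod with the PerconaFT engine and an explicit oplog size (in MB).
# The replica set name and the 10240 MB value are illustrative examples only;
# we repeated this with several different --oplogSize values.
mongod --storageEngine PerconaFT --replSet rs0 --oplogSize 10240
```

In every case the behaviour was the same: a larger oplog just delayed the point at which the capped deleter kicked in.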
Is this a known issue?