So only removal of the PVC might cause this.
We are using the default settings, which look like this:
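Roughly this, from the mongod section of the operator's cr.yaml (paths as in the 1.x CR schema, so they may sit elsewhere in newer operator versions; the two ratio values are the defaults I quote below):

```yaml
mongod:
  storage:
    engine: wiredTiger
    inMemory:
      engineConfig:
        inMemorySizeRatio: 0.9   # default
    wiredTiger:
      engineConfig:
        cacheSizeRatio: 0.5      # default
```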
The Percona docs state that the in-memory engine should never experience an out-of-memory event (OOMKilled). Well, exactly this happened, and I am wondering how that can be.
- Is this the usual Kubernetes problem where processes in the pod do not respect the memory limits?
- Or is the problem that inMemorySizeRatio: 0.9 + cacheSizeRatio: 0.5 → 1.4, i.e. the two engines together are budgeted at 140% of the limit? (See the arithmetic below.)
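To make the second point concrete, here is the arithmetic if both ratios were applied against the same container memory limit (the 4 GiB limit is purely illustrative, not our actual setting):

```yaml
# Illustrative only: assume a pod memory limit of 4 GiB.
# in-memory engine: 0.9 * 4 GiB = 3.6 GiB
# WiredTiger cache: 0.5 * 4 GiB = 2.0 GiB
# combined budget:  3.6 + 2.0   = 5.6 GiB -> 140% of the 4 GiB limit
```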
a) Is there a way to turn off the in-memory engine?
b) Is there a way to set the cache size in GB instead of a ratio? (Then I would assume it definitely would not use more than the given value.) See the sketch below.
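For b), a minimal sketch of what I have in mind, assuming the CR allows passing raw mongod configuration through replsets[].configuration (the storage options themselves are standard mongod settings; whether the operator honors them this way is exactly my question):

```yaml
replsets:
  - name: rs0
    configuration: |
      storage:
        engine: wiredTiger        # (a) plain WiredTiger, no in-memory engine
        wiredTiger:
          engineConfig:
            cacheSizeGB: 2        # (b) absolute cap in GB instead of a ratio
```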
I don’t think the liveness probe should kick in while recovery is going on.
We currently have ca. 20’000 databases, each with 4 collections, which results in about 80’000 files (collections + indexes). When mongod starts in the container it can take some time, so the default liveness probe value of 30 sec is not sufficient in all cases. We’ve set it to 5 min and that seems to be OK (for now), but on the other hand, when a pod gets moved to another node there are side effects (very long response times); the override we use looks roughly like the sketch below.

What is not clear to me is whether the slow startup is due to the in-memory engine or whether MongoDB simply takes that long to open all those files. Or would it help to keep the cache sizes to a minimum?
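For reference, a sketch of the probe override, assuming the CR exposes livenessProbe settings under replsets (recent operator versions do; apart from initialDelaySeconds, the values shown are illustrative, not something we have tuned):

```yaml
replsets:
  - name: rs0
    livenessProbe:
      initialDelaySeconds: 300   # 5 min instead of the 30 sec default, to survive slow startup
      periodSeconds: 30          # illustrative
      timeoutSeconds: 10         # illustrative
      failureThreshold: 4        # illustrative
```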