Mongod pod does not respect limits.memory

I just tested this, and the pod(s) get OOMKilled when adding "too much" data.

This is the default configuration, and I would expect it to be able to handle anything the disk can store.

storage:
  engine: wiredTiger
  inMemory:
    engineConfig:
      inMemorySizeRatio: 0.9
  wiredTiger:
    collectionConfig:
      blockCompressor: snappy
    engineConfig:
      cacheSizeRatio: 0.5
      directoryForIndexes: false
      journalCompressor: snappy
    indexConfig:
      prefixCompression: true
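
For context, the memory limit itself is set on the replica set resources in the operator CR. A minimal sketch of the relevant part (field names follow the Percona operator CR layout as I remember it, so treat them as an assumption, not a verified snippet):

replsets:
- name: rs0
  resources:
    # this is the limits.memory the mongod container is expected to stay under
    limits:
      memory: 8G
    requests:
      memory: 8G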

What I don't understand is the combination of:

inMemorySizeRatio: 0.9
cacheSizeRatio: 0.5
OS cache

Taken together, that would most likely add up to 150%+ of the container memory set by limits.memory.
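
As a rough back-of-the-envelope calculation against the 8 GB limit (assuming, for the sake of argument, that both ratios were applied to the container limit):

inMemorySizeRatio: 0.9 * 8 GB ≈ 7.2 GB
cacheSizeRatio:    0.5 * 8 GB ≈ 4.0 GB
plus OS page cache and connection overhead
total              ≈ 11+ GB, i.e. well above the 8 GB limit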

Am I missing something?


Just as an example:

{"t":{"$date":"2021-07-14T22:11:48.774+00:00"},"s":"W", "c":"CONTROL", "id":20720, "ctx":"initandlisten","msg":"Available memory is less than system memory","attr":{"availableMemSizeMB":8192,"systemMemSizeMB":52232}}

What is systemMemSizeMB? It looks like this is the GKE node's memory, while the pod's memory limit is 8 GB.
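
If mongod sizes its WiredTiger cache from that detected system memory instead of the container limit, the OOM kills add up. A rough estimate, assuming cacheSizeRatio follows MongoDB's default cache formula of 50% of (RAM - 1 GB):

systemMemSizeMB  = 52232 MB ≈ 51 GB   (the GKE node)
cache target     ≈ 0.5 * (51 GB - 1 GB) ≈ 25 GB
container limit  = 8 GB
→ the cache target alone is roughly 3x the limit, so the kernel OOM-kills the pod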


@jamoser is this case similar to systemMemSizeMB flag - can it be set externally - #4 by jamoser?


See my answer in MongoDB not respecting limits · Issue #936 · mongodb/mongodb-kubernetes-operator · GitHub

→ I think Percona can't do anything about this; it's a MongoDB issue.
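
Until mongod picks up the cgroup limit correctly, one possible workaround is to pin the cache size explicitly, well below the container limit. This is only a sketch: it assumes the CR accepts raw mongod configuration via spec.replsets[].configuration and that an explicit storage.wiredTiger.engineConfig.cacheSizeGB takes precedence over the ratio:

replsets:
- name: rs0
  configuration: |
    storage:
      wiredTiger:
        engineConfig:
          # fixed cache size, far enough below the 8G limit to leave room
          # for connections, in-flight operations and the OS page cache
          cacheSizeGB: 4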


@jamoser it seems they fixed the cgroup v2 issue in MongoDB not respecting limits · Issue #936 · mongodb/mongodb-kubernetes-operator · GitHub. Was this fix also picked up in the latest Percona MongoDB Operator?