Prometheus got killed because of high memory usage

The PMM server has 16 CPUs and 32 GB of RAM with 512 MB of swap, running PMM 1.4.0.
I set METRICS_MEMORY to 20 GB and METRICS_RETENTION to 360h,
but Prometheus gets killed by the kernel roughly every 4 hours.
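For reference, this is roughly how the container was started (a sketch, not the exact command; the pmm-server/pmm-data container names are the ones from the standard PMM install docs, and METRICS_MEMORY is given in kilobytes, so 20 GB is 20971520):

    docker run -d \
      -p 80:80 \
      --volumes-from pmm-data \
      --name pmm-server \
      -e METRICS_MEMORY=20971520 \
      -e METRICS_RETENTION=360h \
      --restart always \
      percona/pmm-server:1.4.0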
Here is the relevant part of /var/log/messages:
Oct 24 09:36:43 10-10-105-82 kernel: Out of memory: Kill process 21655 (prometheus) score 963 or sacrifice child
Oct 24 09:36:43 10-10-105-82 kernel: Killed process 21655 (prometheus) total-vm:34285988kB, anon-rss:31879644kB, file-rss:0kB
Oct 24 09:36:43 10-10-105-82 dockerd: time="2017-10-24T09:36:43.117061551+08:00" level=error msg="libcontainerd: failed to receive event from containerd: rpc error: code = 13 desc = transport is closing"
Oct 24 09:36:43 10-10-105-82 dockerd: time="2017-10-24T09:36:43.189466005+08:00" level=info msg="libcontainerd: new containerd process, pid: 18340"
Oct 24 09:36:43 10-10-105-82 dockerd: time="2017-10-24T09:36:43.989786588+08:00" level=info msg="libcontainerd: new containerd process, pid: 18361"

I need your help. If you need more information, feel free to ask. Thanks in advance.

In addition, we are monitoring over 40 MySQL servers, and the PMM server itself is a virtual machine using a virtual disk.

Can you reduce the value of METRICS_MEMORY (say, to 16777216) and try again?
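Note that METRICS_MEMORY is only read when the container is created, so the container has to be recreated to pick up the new value; your metrics data lives in the pmm-data container and survives this. A minimal sketch, assuming the standard container names from the install docs:

    docker stop pmm-server && docker rm pmm-server
    docker run -d \
      -p 80:80 \
      --volumes-from pmm-data \
      --name pmm-server \
      -e METRICS_MEMORY=16777216 \
      -e METRICS_RETENTION=360h \
      --restart always \
      percona/pmm-server:1.4.0

16777216 KB is 16 GB. As far as I know, Prometheus 1.x treats this value as a target rather than a hard cap, so leaving several GB of headroom below the host's 32 GB should make the OOM killer much less likely to fire.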