Percona PostgreSQL uses too much CPU and RAM while running on an OpenShift cluster

Hello Percona Team,

I am working on an OpenShift cluster with the following configuration: 3 worker nodes, each with 4 CPU cores and 32 GiB of RAM.

Today, I tried to install and set up the Percona Operator to manage our projects’ databases. I installed the Percona Operator for PostgreSQL from the OperatorHub in the OpenShift web console.
After a successful installation, I added a new PerconaPGCluster from the Percona Distribution for PostgreSQL Cluster via the form view. The Percona PostgreSQL database pods were then created and ran as expected.

However, when I checked CPU and RAM usage after the installation, I found that the Percona database cluster was consuming too much of these resources.

After some further analysis, I had to remove the Percona Operator from my OpenShift cluster to make sure our existing projects kept working properly.

Please check this case and let me know whether I made a mistake in the setup or configuration.
I am looking forward to your response.


Hi Kieu,
Given that this is an operator issue, would it be possible to move this topic to the Percona Operator for PostgreSQL category, available here?

Also, would you mind sharing the cr.yaml file that you are using?


Hi Takis,
Thank you for your response.
Here is my cr.yaml. It’s the default cr.yaml from the Percona guidelines.

apiVersion: pg.percona.com/v1
kind: PerconaPGCluster
metadata:
  labels:
    pgo-version: 1.3.0
  name: cluster1
spec:
#  secretsName: cluster1-users
#  sslCA: cluster1-ssl-ca
#  sslSecretName: cluster1-ssl-keypair
#  sslReplicationSecretName: cluster1-ssl-keypair
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: disabled
    schedule: "0 4 * * *"
  database: pgdb
  port: "5432"
  user: pguser
  disableAutofail: false
  tlsOnly: false
  standby: false
  pause: false
  keepData: true
  keepBackups: true
#  pgDataSource:
#    restoreFrom: ""
#    restoreOpts: ""
#  tablespaceStorages:
#    lake:
#      volumeSpec:
#        size: 1G
#        accessmode: ReadWriteOnce
#        storagetype: dynamic
#        storageclass: ""
#        matchLabels: ""
#  walStorage:
#    volumeSpec:
#      size: 1G
#      accessmode: ReadWriteOnce
#      storagetype: dynamic
#      storageclass: ""
#      matchLabels: ""
#  userLabels:
#    pgo-version: "1.3.0"
  pgPrimary:
    image: percona/percona-postgresql-operator:1.3.0-ppg14-postgres-ha
#    imagePullPolicy: Always
    resources:
      requests:
        cpu: 500m
        memory: "256Mi"
#      limits:
#        cpu: 500m
#        memory: "256Mi"
#     affinity:
#       antiAffinityType: preferred
#       nodeAffinityType: required
#       nodeLabel:
#         kubernetes.io/region: us-central1
#       advanced:
#         nodeAffinity:
#           requiredDuringSchedulingIgnoredDuringExecution:
#             nodeSelectorTerms:
#             - matchExpressions:
#               - key: kubernetes.io/e2e-az-name
#                 operator: In
#                 values:
#                 - e2e-az1
#                 - e2e-az2
    tolerations: []
    volumeSpec:
      size: 1G
      accessmode: ReadWriteOnce
      storagetype: dynamic
      storageclass: ""
#      matchLabels: ""
    expose:
      serviceType: ClusterIP
#      loadBalancerSourceRanges:
#      annotations:
#        pg-cluster-annot: cluster1
#      labels:
#        pg-cluster-label: cluster1
#    customconfig: ""
  pmm:
    enabled: false
    image: percona/pmm-client:2.29.0
#    imagePullPolicy: Always
    serverHost: monitoring-service
    serverUser: admin
    pmmSecret: cluster1-pmm-secret
    resources:
      requests:
        memory: 200M
        cpu: 500m
#      limits:
#        cpu: "1"
#        memory: "400M"
  backup:
    image: percona/percona-postgresql-operator:1.3.0-ppg14-pgbackrest
#    imagePullPolicy: Always
    backrestRepoImage: percona/percona-postgresql-operator:1.3.0-ppg14-pgbackrest-repo
    resources:
      requests:
        cpu: "200m"
        memory: "48Mi"
#      limits:
#        cpu: "1"
#        memory: "64Mi"
#     affinity:
#       antiAffinityType: preferred
    volumeSpec:
      size: 1G
      accessmode: ReadWriteOnce
      storagetype: dynamic
      storageclass: ""
#      matchLabels: ""
#    storages:
#      my-gcs:
#        type: gcs
#        bucket: some-gcs-bucket
#    repoPath: ""
    schedule:
      - name: "sat-night-backup"
        schedule: "0 0 * * 6"
        keep: 3
        type: full
        storage: local
  pgBouncer:
    image: percona/percona-postgresql-operator:1.3.0-ppg14-pgbouncer
#    imagePullPolicy: Always
#    exposePostgresUser: false
    size: 3
    resources:
      requests:
        cpu: "1"
        memory: "128Mi"
#      limits:
#        cpu: "2"
#        memory: "512Mi"
#     affinity:
#       antiAffinityType: preferred
    expose:
      serviceType: ClusterIP
#      loadBalancerSourceRanges:
#      annotations:
#        pg-cluster-annot: cluster1
#      labels:
#        pg-cluster-label: cluster1
  pgReplicas:
    hotStandby:
      size: 2
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
#        limits:
#          cpu: "500m"
#          memory: "256Mi"
      volumeSpec:
        accessmode: ReadWriteOnce
        size: 1G
        storagetype: dynamic
        storageclass: ""
#        matchLabels: ""
#      labels:
#        pg-cluster-label: cluster1
#      annotations:
#        pg-cluster-annot: cluster1-1
      enableSyncStandby: false
      expose:
        serviceType: ClusterIP
#        loadBalancerSourceRanges:
#        annotations:
#          pg-cluster-annot: cluster1
#        labels:
#          pg-cluster-label: cluster1
  pgBadger:
    enabled: false
    image: percona/percona-postgresql-operator:1.3.0-ppg14-pgbadger
#    imagePullPolicy: Always
    port: 10000
#  securityContext:
#    fsGroup: 1001
#    supplementalGroups: [1001, 1002, 1003]

Running the Percona Distribution for PostgreSQL on our cluster with this default cr.yaml increased CPU and RAM usage considerably (by more than 20%).
Please let me know if there is any configuration in the operator or in cr.yaml that would reduce CPU and RAM usage while keeping the cluster working normally.
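For context, the resource requests in this default cr.yaml add up to roughly 4.7 CPU cores reserved at scheduling time (500m for the primary, 2 × 500m for the hot-standby replicas, 3 × 1 CPU for pgBouncer, and 200m for backup) on a cluster with 12 cores total. As an illustration only, here is a trimmed variant of the relevant sections — the values below are assumptions for a small test workload, not official Percona recommendations:

```yaml
# Example only: reduced resource requests for a low-traffic test cluster.
# These numbers are assumptions, not Percona-recommended values.
pgBouncer:
  size: 1              # one pgBouncer pod instead of three
  resources:
    requests:
      cpu: "200m"      # down from "1" CPU per pod
      memory: "128Mi"
pgReplicas:
  hotStandby:
    size: 1            # one hot-standby replica instead of two
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
```

Note that requests only reserve scheduling capacity; actual CPU and RAM consumption depends on the workload, so lowering requests alone may not reduce observed usage.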
