Cronjobs not created in 2.3.1

Version 2.3.1 is not creating the backup CronJobs. The same YAML with 2.3.0 does produce the CronJobs.

Using the cr.yaml from the 2.3.1 GitHub source, applying the YAML creates no CronJobs.

Kubernetes 1.27.11

Below is my test YAML. I did have to add a storage class and the minimum volume size required for our environment; the rest of the YAML is the cr.yaml from GitHub.

apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: cluster1
  finalizers:
  - percona.com/delete-pvc
  - percona.com/delete-ssl
spec:
  crVersion: 2.3.1

  secrets:
    customTLSSecret:
      name: cluster1-cert
    customReplicationTLSSecret:
      name: replication1-cert

  standby:
    enabled: true
    host: ""
    port: ""
    repoName: repo1

  openshift: true

  users:
  - name: rhino
    databases:
    - zoo
    options: "SUPERUSER"
    password:
      type: ASCII
    secretName: "rhino-credentials"

  databaseInitSQL:
    key: init.sql
    name: cluster1-init-sql

  pause: true
  unmanaged: true

  dataSource:
    postgresCluster:
      clusterName: cluster1
      repoName: repo1
      options:
      - --type=time
      - --target="2021-06-09 14:15:11-04"
    pgbackrest:
      stanza: db
      configuration:
      - secret:
          name: pgo-s3-creds
      global:
        repo1-path: /pgbackrest/postgres-operator/hippo/repo1
      repo:
        name: repo1
        s3:
          bucket: "my-bucket"
          endpoint: "s3.ca-central-1.amazonaws.com"
          region: "ca-central-1"

  image: artifactory.ssnc.dev/docker-repos/percona/percona-postgresql-operator:2.3.1-ppg16-postgres
  imagePullPolicy: Always
  postgresVersion: 16
  port: 5432

  expose:
    annotations:
      my-annotation: value1
    labels:
      my-label: value2
    type: LoadBalancer
    loadBalancerSourceRanges:
    - 10.0.0.0/8

  instances:
  - name: instance1
    resources:
      limits:
        cpu: 2.0
        memory: 4Gi

    sidecars:
    - name: testcontainer
      image: mycontainer1:latest
    - name: testcontainer2
      image: mycontainer1:latest

    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: my-node-label
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          postgres-operator.crunchydata.com/instance-set: instance1

    tolerations:
    - effect: NoSchedule
      key: role
      operator: Equal
      value: connection-poolers

    priorityClassName: high-priority

    walVolumeClaimSpec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: 1Gi

    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: pvt-vol-storage


  proxy:
    pgBouncer:
      replicas: 3
      image: artifactory.ssnc.dev/docker-repos/percona/percona-postgresql-operator:2.3.1-ppg16-pgbouncer

      exposeSuperusers: true

      resources:
        limits:
          cpu: 200m
          memory: 128Mi

      expose:
        annotations:
          my-annotation: value1
        labels:
          my-label: value2
        type: LoadBalancer
        loadBalancerSourceRanges:
        - 10.0.0.0/8

      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  postgres-operator.crunchydata.com/role: pgbouncer
              topologyKey: kubernetes.io/hostname

      tolerations:
      - effect: NoSchedule
        key: role
        operator: Equal
        value: connection-poolers

      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: my-node-label
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            postgres-operator.crunchydata.com/role: pgbouncer

      sidecars:
      - name: bouncertestcontainer1
        image: mycontainer1:latest

      customTLSSecret:
        name: keycloakdb-pgbouncer.tls

      config:
        global:
          pool_mode: transaction

  backups:
    pgbackrest:
      metadata:
        labels:
      image: artifactory.ssnc.dev/docker-repos/percona/percona-postgresql-operator:2.3.1-ppg16-pgbackrest

      configuration:
      - secret:
          name: cluster1-pgbackrest-secrets

      jobs:
        priorityClassName: high-priority
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
        tolerations:
        - effect: NoSchedule
          key: role
          operator: Equal
          value: connection-poolers

      global:
        repo1-retention-full: "14"
        repo1-retention-full-type: time
        repo1-path: /pgbackrest/postgres-operator/cluster1/repo1
        repo1-cipher-type: aes-256-cbc
        repo1-s3-uri-style: path
        repo2-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo2
        repo3-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo3
        repo4-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo4

      repoHost:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/data: pgbackrest
                topologyKey: kubernetes.io/hostname
        priorityClassName: high-priority
        topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: my-node-label
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              postgres-operator.crunchydata.com/pgbackrest: ""

      manual:
        repoName: repo1
        options:
        - --type=full

      repos:
      - name: repo1
        schedules:
          full: "0 0 * * 6"
          differential: "0 1 * * 1-6"
        volume:
          volumeClaimSpec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: pvt-vol-storage
      - name: repo2
        s3:
          bucket: "<YOUR_AWS_S3_BUCKET_NAME>"
          endpoint: "<YOUR_AWS_S3_ENDPOINT>"
          region: "<YOUR_AWS_S3_REGION>"
      - name: repo3
        gcs:
          bucket: "<YOUR_GCS_BUCKET_NAME>"
      - name: repo4
        azure:
          container: "<YOUR_AZURE_CONTAINER>"

      restore:
        enabled: true
        repoName: repo1
        options:
        # PITR restore in place
        - --type=time
        - --target="2021-06-09 14:15:11-04"
        # restore individual databases
        - --db-include=hippo

  pmm:
    enabled: false
    image: artifactory.ssnc.dev/docker-repos/percona/pmm-client:2.41.0
    imagePullPolicy: IfNotPresent
    secret: cluster1-pmm-secret
    serverHost: monitoring-service

  patroni:
    dynamicConfiguration:
      postgresql:
        parameters:
          max_parallel_workers: 2
          max_worker_processes: 2
          shared_buffers: 1GB
          work_mem: 2MB

  extensions:
    image: percona/percona-postgresql-operator:2.3.1
    imagePullPolicy: Always
    storage:
      type: s3
      bucket: pg-extensions
      region: eu-central-1
      secret:
        name: cluster1-extensions-secret
    builtin:
      pg_stat_monitor: true
      pg_audit: true
    custom:
    - name: pg_cron
      version: 1.6.1

Hello @Lobo_Lobo20919 ,

you are correct, we stopped using CronJobs. The problems we faced were with owner references and with validating whether the pg backup succeeded.
As a result, we switched to an Operator SDK scheduling mechanism, which we use in all our Operators.
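For illustration only (this is a simplified sketch, not the actual Operator code, and the cron library shown is just one common choice for Go operators): with in-process scheduling, the schedule is evaluated inside the operator process itself, so no CronJob objects appear in the cluster. The cron expression below reuses the repo1 "full" schedule from your CR, and the backup-trigger body is a placeholder.

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/robfig/cron/v3"
)

func main() {
	// The scheduler lives inside the operator process, so no CronJob
	// objects are ever created in the cluster.
	scheduler := cron.New()

	// Placeholder schedule: same cron expression as the repo1 "full"
	// schedule in the CR above.
	if _, err := scheduler.AddFunc("0 0 * * 6", func() {
		// Placeholder for the real work: the operator would start a
		// backup here and then verify that it actually succeeded.
		log.Println("triggering scheduled full backup for repo1")
	}); err != nil {
		log.Fatalf("invalid schedule: %v", err)
	}

	scheduler.Start()
	defer scheduler.Stop()

	// Run until the process is asked to stop.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
	<-stop
}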

Please let me know if this causes a problem for you, and why.

No, it's not a problem. It was a lack of understanding on my part, as the documentation I could find, including the .md docs in the project, still talks about Kubernetes CronJobs. We had constant failures with the CronJobs, so I tried the upgrade to 2.3.1 and did not understand how the scheduling was working.

Thanks for your response.