Description:

We are using operator v2.4.0. I configured full and incremental backups to S3 and to local storage (PVC), using the following config in the cr.yaml of the Helm chart:
```yaml
spec:
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: cluster1-pgbackrest-secrets
      global:
        repo1-path: /pgbackrest/repo1
        repo1-retention-full: "3"
        repo1-retention-full-type: count
        repo2-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo2
        repo2-retention-full: "3"
        repo2-retention-full-type: count
      image: perconalab/percona-postgresql-operator:main-ppg16-pgbackrest
      manual:
        options:
          - --type=full
        repoName: repo1
      repoHost:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: db
                      operator: In
                      values:
                        - enabled
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      postgres-operator.crunchydata.com/data: pgbackrest
                  topologyKey: kubernetes.io/hostname
                weight: 1
        tolerations:
          - effect: NoSchedule
            key: db
            operator: Equal
            value: allowed
      repos:
        - name: repo1
          schedules:
            full: 0 0 * * 6
            incremental: 0 1 * * 1-6
          volume:
            volumeClaimSpec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 500Gi
        - name: repo2
          s3:
            bucket: postgres-backup
            endpoint: s3.us-east-2…
            region: us-east-2
          schedules:
            full: 0 0 * * 0
            incremental: 0 3 * * 1-6
```
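For context, the operator turns these schedules into Kubernetes CronJobs. A quick way to list them and check the concurrency policy they were created with (the namespace and CronJob name below are placeholders, not taken from our cluster):

```shell
# List the backup CronJobs generated from the schedules above
kubectl get cronjobs -n <namespace>

# Inspect the concurrency policy of one of them (name is illustrative)
kubectl get cronjob cluster1-repo1-full -n <namespace> \
  -o jsonpath='{.spec.concurrencyPolicy}'
```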
When the backup is scheduled by cron, it works fine the first time, but when the second scheduled run tries to schedule a pod to run the backup, the pod never gets scheduled.

If we check `kubectl get events`, we get this error message:

"not starting job because prior execution is running and concurrency policy is forbid"

And the backup job that ran the first time completed successfully, as you can see in the screenshot.
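A sketch of how we checked this (namespace is a placeholder):

```shell
# The event that explains the skipped run
kubectl get events -n <namespace> | grep -i "concurrency policy"

# The first backup Job shows as Completed but is still present
kubectl get jobs -n <namespace>
```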
Steps to Reproduce:
- Install the Percona Operator for PostgreSQL.
- Configure S3 or local (PVC) backups with cron schedules.
- The backup runs successfully the first time; from the second scheduled run onwards it fails with the error message above. The watch command below shows this while reproducing.
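To observe the failure while reproducing (namespace is a placeholder):

```shell
# The first scheduled run creates a Job; the second run never does
kubectl get jobs -n <namespace> --watch
```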
Version:
v2.4.0
Additional Information:
- If I edit the cronjob.batch resource and add `ttlSecondsAfterFinished: 6000`, it works, because the Job entry is deleted after it finishes; when cron sets up the next Job it no longer finds the old entry and the backup runs successfully (a sketch of the patch is shown after this list).
- But I am unable to set ttlSecondsAfterFinished in cr.yaml to make it persist; the parameter is not recognised there. So whenever I do a new deployment, the parameter is gone.
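As referenced above, a sketch of the manual workaround (CronJob name and namespace are placeholders; the field sits under spec.jobTemplate.spec in the CronJob's Job template):

```shell
# Non-persistent workaround: have Kubernetes garbage-collect finished Jobs
# so the next scheduled run is not blocked. This is lost on redeploy,
# since the operator regenerates the CronJob from the CR.
kubectl patch cronjob cluster1-repo1-full -n <namespace> --type merge \
  -p '{"spec":{"jobTemplate":{"spec":{"ttlSecondsAfterFinished":6000}}}}'
```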