Description:
I’ve set up two clusters in the same namespace and hit an issue where the operator spams messages like:
```
2025-05-22T20:27:55.840Z INFO deleting outdated backup job {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"replicaset-test-cluster","namespace":"percona-mongodb"}, "namespace": "percona-mongodb", "name": "replicaset-test-cluster", "reconcileID": "1c4ef043-1bc8-486b-aec1-0cc1a30789d8", "job": "replicaset-test2-cluster-daily-physical-backup"}
2025-05-22T20:27:59.908Z INFO deleting outdated backup job {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"replicaset-test2-cluster","namespace":"percona-mongodb"}, "namespace": "percona-mongodb", "name": "replicaset-test2-cluster", "reconcileID": "d7f8da95-4a60-42eb-a75b-3ddc7399ff6e", "job": "replicaset-test-cluster-daily-physical-backup"}
2025-05-22T20:28:01.900Z INFO deleting outdated backup job {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"replicaset-test-cluster","namespace":"percona-mongodb"}, "namespace": "percona-mongodb", "name": "replicaset-test-cluster", "reconcileID": "7cb4e87f-4f81-431b-a678-32d120700de2", "job": "replicaset-test2-cluster-daily-physical-backup"}
2025-05-22T20:28:06.681Z INFO deleting outdated backup job {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"replicaset-test2-cluster","namespace":"percona-mongodb"}, "namespace": "percona-mongodb", "name": "replicaset-test2-cluster", "reconcileID": "a4a36133-fc61-4323-87fb-dca2226ebf94", "job": "replicaset-test-cluster-daily-physical-backup"}
```
For some reason the operator tries to delete the outdated backup job of cluster A based on the task from cluster B, and the job of cluster B based on the task from cluster A.
Deleting cluster B reduces the spam to only deletions of cluster A’s backup jobs triggered by cluster B’s task. Recreating the operator and the cluster doesn’t help.
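In case it helps triage, here is a minimal sketch of a mechanism that would produce exactly this cross-deletion pattern. This is purely a guess, not the operator’s actual code: a cleanup pass that lists every scheduled backup job in the namespace and deletes any job whose name is not in the reconciled cluster’s own task list, instead of filtering by the owning cluster first. Only the job names are taken from this report; the logic is an assumption.

```python
def find_outdated_jobs(cluster_tasks, all_jobs_in_namespace):
    """Jobs a namespace-wide (non-owner-filtered) cleanup would flag as outdated."""
    return [job for job in all_jobs_in_namespace if job not in cluster_tasks]

# Both clusters' scheduled jobs live in the same namespace.
all_jobs = [
    "replicaset-test-cluster-daily-physical-backup",
    "replicaset-test2-cluster-daily-physical-backup",
]

# Reconciling cluster A flags cluster B's job, and vice versa.
deleted_by_a = find_outdated_jobs(
    ["replicaset-test-cluster-daily-physical-backup"], all_jobs)
deleted_by_b = find_outdated_jobs(
    ["replicaset-test2-cluster-daily-physical-backup"], all_jobs)

print(deleted_by_a)  # ['replicaset-test2-cluster-daily-physical-backup']
print(deleted_by_b)  # ['replicaset-test-cluster-daily-physical-backup']
```

If the real cleanup is namespace-scoped like this, filtering jobs by the owning cluster (owner reference or cluster label) before deleting would stop the mixing.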
Steps to Reproduce:
Set up cluster A and check the logs; they should be clean:
```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: replicaset-test-cluster
  namespace: percona-mongodb
  finalizers:
    - percona.com/delete-psmdb-pods-in-order
spec:
  clusterServiceDNSMode: Internal
  crVersion: 1.20.0
  image: percona/percona-server-mongodb:7.0.18-11
  secrets:
    users: replicaset-test-cluster-secrets
    encryptionKey: replicaset-test-cluster-mongodb-encryption-key
  replsets:
    - name: rs0
      size: 3
      configuration: |
        operationProfiling:
          mode: all
          slowOpThresholdMs: 100
          rateLimit: 10
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      expose:
        enabled: true
        type: ClusterIP
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
  sharding:
    enabled: false
  users:
    - name: app-test-user
      db: admin
      passwordSecretRef:
        name: app-test-user-secret
        key: password
      roles:
        - name: readWrite
          db: test_db
        - name: read
          db: sample_mflix
        - name: read
          db: sample_airbnb
  backup:
    enabled: true
    image: percona/percona-backup-mongodb:2.9.1
    storages:
      s3-us-east:
        type: s3
        s3:
          bucket: k8s-slice-mongodb-qa
          credentialsSecret: replicaset-test-cluster-backup-s3
          region: us-east-1
          prefix: "replicaset-test-cluster"
    tasks:
      - name: replicaset-test-cluster-daily-physical-backup
        enabled: true
        schedule: "0 0 * * *"
        keep: 3
        storageName: s3-us-east
        compressionType: gzip
        compressionLevel: 6
        type: physical
  pmm:
    enabled: false
    image: percona/pmm-client:2.44.1
    serverHost: pmm-qa.slicetest.com
    mongodParams: --environment=QA --cluster=replicaset-test-cluster
```
Set up cluster B:
```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: replicaset-test2-cluster
  namespace: percona-mongodb
  finalizers:
    - percona.com/delete-psmdb-pods-in-order
spec:
  clusterServiceDNSMode: Internal
  crVersion: 1.20.0
  image: percona/percona-server-mongodb:7.0.18-11
  secrets:
    users: replicaset-test2-cluster-secrets
    encryptionKey: replicaset-test2-cluster-mongodb-encryption-key
  replsets:
    - name: rs0
      size: 3
      configuration: |
        operationProfiling:
          mode: all
          slowOpThresholdMs: 100
          rateLimit: 10
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
      expose:
        enabled: true
        type: ClusterIP
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
  sharding:
    enabled: false
  users:
    - name: app-test-user
      db: admin
      passwordSecretRef:
        name: app-test-user-secret
        key: password
      roles:
        - name: readWrite
          db: test_db
  backup:
    enabled: true
    image: percona/percona-backup-mongodb:2.9.1
    storages:
      s3-us-east:
        type: s3
        s3:
          bucket: k8s-slice-mongodb-qa
          credentialsSecret: replicaset-test2-cluster-backup-s3
          region: us-east-1
          prefix: "replicaset-test2-cluster"
    tasks:
      - name: replicaset-test2-cluster-daily-physical-backup
        enabled: true
        schedule: "0 0 * * *"
        keep: 3
        storageName: s3-us-east
        compressionType: gzip
        compressionLevel: 6
        type: physical
  pmm:
    enabled: false
    image: percona/pmm-client:2.44.1
    serverHost: pmm-qa.slicetest.com
    mongodParams: --environment=QA --cluster=replicaset-test2-cluster
```
As soon as cluster B is created, the operator starts spamming immediately.
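One quick way to confirm the mismatch from a captured log entry (assuming the JSON fields shown in the excerpt above) is to check that the "job" value is not prefixed by the name of the cluster being reconciled:

```python
import json

# Fields copied from one of the log lines in this report: cluster A's
# reconcile is deleting a job named after cluster B.
payload = json.loads(
    '{"name": "replicaset-test-cluster", '
    '"job": "replicaset-test2-cluster-daily-physical-backup"}'
)

# Scheduled backup jobs are named "<cluster>-<task>", so a job owned by
# this cluster should start with "<cluster>-".
mismatch = not payload["job"].startswith(payload["name"] + "-")
print(mismatch)  # True: the job belongs to the other cluster
```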
Version:
Operator: 1.20.0
MongoDB: 7.0.18-11
Backup: 2.9.1
Logs:
The attached logs include:
- creating cluster B
- deleting cluster B, plus some errors
Expected Result:
The operator should associate backup tasks with their own cluster and not mix them.
Actual Result:
The operator mixes backup tasks between the two clusters.