Hi.
I’m testing on AWS EKS with Percona Operator for MongoDB 1.15.0.
As shown in the configuration below, IRSA (IAM Roles for Service Accounts) is applied to send backups to S3.
Because of that I don’t set backup.storages.<storage-name>.s3.credentialsSecret,
and the backup-agent in each pod runs backups fine.
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: 'arn:aws:iam::1234567890:role/irsa--mongodb'
  creationTimestamp: '2024-01-25T10:27:13Z'
  labels:
    app.kubernetes.io/instance: mongodb-operator
  name: mongodb
  namespace: mongodb
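For context, this is why the backup-agent needs no static keys: EKS injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into every pod that uses this ServiceAccount, and the AWS SDK default credential chain resolves them into web-identity credentials. A minimal sketch of that resolution (assuming the aws-sdk-go-v2 modules; purely illustrative, not operator code):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
)

func main() {
	// Inside a pod using the annotated ServiceAccount, EKS injects
	// AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE. LoadDefaultConfig
	// walks the default credential chain and picks those up with no
	// access keys configured anywhere.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	creds, err := cfg.Credentials.Retrieve(context.TODO())
	if err != nil {
		log.Fatalf("retrieve credentials: %v", err)
	}
	// Source reports which provider produced the credentials,
	// e.g. "WebIdentityCredentials" when IRSA is in effect.
	fmt.Println("credential source:", creds.Source)
}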
DB:
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  labels:
    app.kubernetes.io/instance: case-rs
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: case-rs
    app.kubernetes.io/version: 1.15.0
    helm.sh/chart: psmdb-db-1.15.0
  name: case-rs
  namespace: mongodb
spec:
  backup:
    enabled: true
    image: 'percona/percona-backup-mongodb:2.3.0'
    pitr:
      enabled: true
    serviceAccountName: mongodb
    storages:
      s3-ap-northeast-2:
        s3:
          bucket: database-backup-kr-alpha
          prefix: scheduled/case-rs
          region: ap-northeast-2
        type: s3
    tasks:
      - compressionLevel: 6
        compressionType: gzip
        enabled: true
        keep: 7
        name: daily-s3-ap-northeast-2
        schedule: 30 18 * * *
        storageName: s3-ap-northeast-2
        type: physical
  clusterServiceDNSMode: ServiceMesh
  crVersion: 1.15.0
  image: 'percona/percona-server-mongodb:4.4.24-23'
  imagePullPolicy: Always
  multiCluster:
    enabled: false
  pause: false
  pmm:
    enabled: false
    image: 'percona/pmm-client:2.39.0'
    serverHost: monitoring-service
  replsets:
    - affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      arbiter:
        enabled: false
        size: 1
      configuration: |
        net:
          tls:
            mode: preferTLS
            allowConnectionsWithoutCertificates: true
            allowInvalidHostnames: true
            allowInvalidCertificates: true
        security:
          enableEncryption: false
        systemLog:
          verbosity: 0
        setParameter:
          diagnosticDataCollectionDirectoryPath: "/data/db/diagnostic-mongod.data"
          diagnosticDataCollectionPeriodMillis: 10000
          authenticationMechanisms: "SCRAM-SHA-1,MONGODB-X509"
          maxIndexBuildMemoryUsageMegabytes: 250
      expose:
        enabled: true
        exposeType: LoadBalancer
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
          service.beta.kubernetes.io/aws-load-balancer-security-groups: 'eks-default,alpha-mongodb'
          service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=false
          service.beta.kubernetes.io/aws-load-balancer-type: external
      livenessProbe:
        failureThreshold: 9999
        initialDelaySeconds: 300
        periodSeconds: 300
        startupDelaySeconds: 7200
        timeoutSeconds: 30
      name: shard01
      nodeSelector:
        group: mongodb
      nonvoting:
        enabled: false
        size: 1
      podDisruptionBudget:
        maxUnavailable: 1
      readinessProbe:
        failureThreshold: 9
        initialDelaySeconds: 30
        periodSeconds: 60
        successThreshold: 1
        timeoutSeconds: 10
      resources:
        limits:
          memory: 4Gi
      serviceAccountName: mongodb
      size: 3
      splitHorizons:
        case-rs-shard01-0:
          external: >-
            k8s-mongodb-caserssh-c446b747cf-d6619e73f716c4f4.elb.ap-northeast-2.amazonaws.com
        case-rs-shard01-1:
          external: >-
            k8s-mongodb-caserssh-34a286c734-c6920b717490dc70.elb.ap-northeast-2.amazonaws.com
        case-rs-shard01-2:
          external: >-
            k8s-mongodb-caserssh-7289baff1b-cf76537fa00fa7d2.elb.ap-northeast-2.amazonaws.com
      storage:
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            cacheSizeRatio: 0.6
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
      tolerations:
        - effect: NoSchedule
          key: group
          operator: Equal
          value: mongodb
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 150Gi
          storageClassName: gp3-xfs
  secrets:
    users: case-rs-user
  sharding:
    balancer:
      enabled: true
    configsvrReplSet:
      podDisruptionBudget:
        maxUnavailable: 1
      size: 3
      volumeSpec:
        emptyDir: {}
    enabled: false
    mongos:
      expose: {}
      podDisruptionBudget:
        maxUnavailable: 1
      size: 2
  unmanaged: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    apply: Disabled
    schedule: 0 8 * * *
    setFCV: true
    versionServiceEndpoint: 'https://check.percona.com'
The problem is that I am getting the error below in the Operator log.
2024-03-20T08:18:46.337Z ERROR failed to run finalizer {"controller": "psmdbbackup-controller", "object": {"name":"cron-case-rs-20240217000000-2qsln","namespace":"mongodb"}, "namespace": "mongodb", "name": "cron-case-rs-20240217000000-2qsln", "reconcileID": "c786b558-089e-4d06-9d5b-cc6498bd92d6", "finalizer": "delete-backup", "error": "get storage: no s3 credentials specified for the secret name", "errorVerbose": "no s3 credentials specified for the secret name\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).getPBMStorage\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:259\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).deleteBackupFinalizer\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:373\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).checkFinalizers\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:340\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:182\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598\nget 
storage\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).deleteBackupFinalizer\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:375\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).checkFinalizers\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:340\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:182\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598"}
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).checkFinalizers
/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:341
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup.(*ReconcilePerconaServerMongoDBBackup).Reconcile
/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodbbackup/perconaservermongodbbackup_controller.go:182
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:119
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.1/pkg/internal/controller/controller.go:227
Below is the PerconaServerMongoDBBackup resource for which this error occurred.
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  creationTimestamp: "2024-02-17T00:00:00Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2024-03-20T06:55:40Z"
  finalizers:
    - delete-backup
  generateName: cron-case-rs-20240217000000-
  generation: 2
  labels:
    ancestor: daily-s3-ap-northeast-2
    cluster: case-rs
    type: cron
  name: cron-case-rs-20240217000000-2qsln
  namespace: mongodb
  resourceVersion: "270491211"
  uid: d1d72d34-8cce-4d3a-8c4e-66f3bde5cad1
spec:
  clusterName: case-rs
  compressionLevel: 6
  compressionType: gzip
  storageName: s3-ap-northeast-2
  type: logical
status:
  completed: "2024-02-17T00:00:29Z"
  destination: s3://database-backup-kr-alpha/scheduled/case-rs/2024-02-17T00:00:21Z
  lastTransition: "2024-02-17T00:00:29Z"
  pbmName: "2024-02-17T00:00:21Z"
  replsetNames:
    - shard01
  s3:
    bucket: database-backup-kr-alpha
    prefix: scheduled/case-rs
    region: ap-northeast-2
    serverSideEncryption: {}
  start: "2024-02-17T00:00:21Z"
  state: ready
  storageName: s3-ap-northeast-2
  type: logical
It is presumably failing because the delete-backup finalizer has to connect to S3 from the operator pod to remove the backed-up data, and the psmdb-backup controller appears to build that storage client only from a credentials secret, hence "get storage: no s3 credentials specified for the secret name".
I understand that running backups without S3 credentials (i.e. via IRSA) only recently became available.
Still, why doesn’t the psmdb-backup controller check whether a credentials secret is actually configured and, when none is, fall back to the pod’s IAM role instead of failing?
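To make the ask concrete, here is a hypothetical sketch of the check I have in mind. It is not the operator's actual code; resolveS3Creds and s3Creds are illustrative names, and the only assumption carried over from the real behavior is that the credentials secret uses the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY keys:

package storage

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// s3Creds is an illustrative stand-in for the operator's storage config.
type s3Creds struct {
	AccessKeyID     string
	SecretAccessKey string
}

// resolveS3Creds is a hypothetical helper: look up the credentials secret
// only when one is configured; otherwise return empty credentials so the
// S3 client falls back to the SDK default chain (IRSA, instance profile).
func resolveS3Creds(ctx context.Context, c client.Client, ns, secretName string) (*s3Creds, error) {
	if secretName == "" {
		// No credentialsSecret in the CR: rely on the ambient IAM role.
		return &s3Creds{}, nil
	}
	secret := &corev1.Secret{}
	err := c.Get(ctx, types.NamespacedName{Namespace: ns, Name: secretName}, secret)
	if apierrors.IsNotFound(err) {
		// Secret named but absent: still fall back rather than fail the
		// delete-backup finalizer outright (a design choice, debatable).
		return &s3Creds{}, nil
	}
	if err != nil {
		return nil, fmt.Errorf("get credentials secret %s/%s: %w", ns, secretName, err)
	}
	return &s3Creds{
		AccessKeyID:     string(secret.Data["AWS_ACCESS_KEY_ID"]),
		SecretAccessKey: string(secret.Data["AWS_SECRET_ACCESS_KEY"]),
	}, nil
}

With something like this, the finalizer could still clean up the S3 data through IRSA when no secret is configured, instead of erroring out.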