Backup: connection failure to S3-compatible storage

Dear experts,
here is my Helm deployment:

NAME                                    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                 APP VERSION
percona-operator-k8saas-integration     reactivedb-integ        2               2021-11-30 15:00:08.192240068 +0000 UTC deployed        psmdb-operator-1.10.0 1.10.0     
perconadb                               reactivedb-integ        10              2021-11-30 18:06:43.185969276 +0100 CET deployed        psmdb-db-1.10.1       1.10.0     

I am trying to set up a manual S3 backup with these Helm custom values. (For reference, I can connect to my S3 target with the Cyberduck client, so the credentials and endpoint themselves work.)

  enabled: true
  restartOnFailure: false
  serviceAccountName: percona-server-mongodb-operator
      type: s3
        bucket: my-percona-bucket
        credentialsSecret: s3-fe-devflex
        region: eu-west-0
    enabled: false
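
For reference, the psmdb-db chart nests the S3 settings under `backup.storages.<name>.s3`, so the excerpt above with full indentation might look like the sketch below. The storage name `s3-storage` and the `endpointUrl` placeholder are assumptions, not values from this thread; `endpointUrl` is what points PBM at S3-compatible (non-AWS) storage:

```yaml
backup:
  enabled: true
  restartOnFailure: false
  serviceAccountName: percona-server-mongodb-operator
  storages:
    s3-storage:                   # storage name assumed; use your own
      type: s3
      s3:
        bucket: my-percona-bucket
        credentialsSecret: s3-fe-devflex
        region: eu-west-0
        endpointUrl: https://...  # required for S3-compatible storage
  pitr:
    enabled: false
```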

Here is the log from the backup-agent container of Pod perconadb-psmdb-db-rs0-0:

2021-12-01T10:50:16.000+0000 E [agentCheckup] check storage connecion: storage check failed with: get S3 object header: RequestError: send request failed 
caused by: Head "": Service Unavailable 

There is no way such a URL can work … which component appends the bucket name and /pbm.init to the S3 connection URL?

Does the backup-agent container support the standard http(s) proxy environment variables?

Thanks & Best regards.

After further tests, it looks like the http(s) proxy environment variables are supported :grinning:

For comparison, here is the request URL sent by the Cyberduck tool:

Hey there. Just a thought: when you base64-encoded your secret values, did you use the -n option? I.e.: echo -n "mykey" | base64?
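
The difference is a trailing newline byte: without -n, echo appends "\n", which gets base64-encoded right into the credential and then sent to S3. A quick local illustration (the key value is made up):

```shell
# Without -n, echo appends a newline that ends up inside the encoded value:
echo "mykey" | base64       # prints bXlrZXkK  <- encodes "mykey\n"

# With -n, only the key bytes are encoded:
echo -n "mykey" | base64    # prints bXlrZXk=
```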

Thanks @regis for your reply.
I confirm I used the -n option. Also, when checking the related secret in the k8s cluster, I can see that the expected key/value pairs are properly registered.
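
One way to double-check this from the cluster side (assuming kubectl access; the secret name and namespace are the ones from this thread) is to decode the stored value and inspect it byte by byte — a trailing `\n` in the od output would mean the value was encoded without -n:

```shell
kubectl -n reactivedb-integ get secret s3-fe-devflex \
  -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d | od -c
```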

And in your secret, are both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set correctly?

Yes, I believe so, @regis. Here is my secret in the k8s cluster:

apiVersion: v1
kind: Secret
metadata:
  name: s3-fe-devflex
  namespace: reactivedb-integ
  creationTimestamp: "2021-11-30T09:38:45Z"
  resourceVersion: "7243273"
  uid: 27101ec6-0822-4305-8d95-9cf1eb2293bd
data:
  AWS_ACCESS_KEY_ID: RUhUVkhVSFhXSxxxxxxxxxxxxxxxxxxxxxxxx
  AWS_SECRET_ACCESS_KEY: U0w1WDF1Sxxxxxxxxxxxxxxxxxxxxxxxx
type: Opaque
Can you check your region setting in Kubernetes? On your Kubernetes cluster it is set to eu-west0, whereas your other link shows eu-west-0 (mind the - between west and 0).
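
To see what the deployed custom resource actually contains, one could grep the relevant fields straight out of the cluster. The CR name perconadb-psmdb-db below is inferred from the pod name earlier in the thread, so adjust it if yours differs:

```shell
kubectl -n reactivedb-integ get psmdb perconadb-psmdb-db -o yaml \
  | grep -nE 'region|endpointUrl'
```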

Just to be clear, I think your endpoint should be and not

My fault, @regis … once I fixed my eu-west-0 typo, I got my MongoDB backup in my bucket :+1:

Glad to hear your issue is fixed. Don't worry about it, we've all been there; sometimes it's hard to spot small details like this, and a fresh eye can make a big difference. Have a good one.
