Backup: connection failure to S3-compatible storage

Dear experts,
here is my Helm deployment:

NAME                                    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                 APP VERSION
percona-operator-k8saas-integration     reactivedb-integ        2               2021-11-30 15:00:08.192240068 +0000 UTC deployed        psmdb-operator-1.10.0 1.10.0     
perconadb                               reactivedb-integ        10              2021-11-30 18:06:43.185969276 +0100 CET deployed        psmdb-db-1.10.1       1.10.0     

Trying to set up a manual S3 backup with these Helm custom values. By the way, I can connect to my S3 target using the Cyberduck software.

backup:
  enabled: true
  restartOnFailure: false
  serviceAccountName: percona-server-mongodb-operator
  storages:
    s3-fe-devflex:
      type: s3
      s3:
        bucket: my-percona-bucket
        credentialsSecret: s3-fe-devflex
        region: eu-west-0
        endpointUrl: https://oss.eu-west0.prod-cloud-ocb.orange-business.com:443
  pitr:
    enabled: false
  tasks:
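
For the manual backup itself, my understanding is that the tasks: section only defines scheduled backups, and an on-demand backup is requested with a separate PerconaServerMongoDBBackup object. A rough sketch of what I intend to apply (the name and psmdbCluster values are guesses based on the Pod name, and the field names follow the 1.10 examples, so please double-check against your operator version):

kubectl -n reactivedb-integ apply -f - <<EOF
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: manual-backup-1
spec:
  psmdbCluster: perconadb-psmdb-db
  storageName: s3-fe-devflex
EOF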

Here is the log from the backup-agent container of Pod perconadb-psmdb-db-rs0-0:

2021-12-01T10:50:16.000+0000 E [agentCheckup] check storage connecion: storage check failed with: get S3 object header: RequestError: send request failed 
caused by: Head "https://oss.eu-west0.prod-cloud-ocb.orange-business.com/my-percona-bucket/.pbm.init": Service Unavailable 

There is no way such a URL can work … which component is appending the bucket name and /.pbm.init to the S3 connection URL?

Does the backup-agent container support HTTP(S) proxy environment variables such as the following (a quick check is sketched after the list)?

HTTP_PROXY=http://xxxxx:3128
HTTPS_PROXY=https://xxxx:3128
NO_PROXY=localhost,127.0.0.1
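
For reference, here is a quick way to check whether such variables are actually visible inside the backup-agent container (assuming env and grep are available in the image):

kubectl -n reactivedb-integ exec perconadb-psmdb-db-rs0-0 -c backup-agent -- env | grep -i proxy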

Thanks & Best regards.
Richard

After some extra tests, it looks like the HTTP(S) proxy environment variables are supported :grinning:

For comparison, here is the request URL sent by the Cyberduck tool:

https://<AWS_ACCESS_KEY_ID>@oss.eu-west-0.prod-cloud-ocb.orange-business.com/

Hey there. Just a thought: when you base64-encoded your secret values, did you use the -n option? I.e.: echo -n "mykey" | base64?
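
The difference matters because without -n the trailing newline gets encoded along with the key, for example (mykey is just a placeholder):

echo -n "mykey" | base64    # bXlrZXk=
echo "mykey" | base64       # bXlrZXkK  <- the trailing newline is encoded too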

Thanks @regis for your reply.
I confirm I used the -n option. Also, when checking the related secret in the k8s cluster, I can see that the expected key/value pairs are properly registered.
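
For example, decoding the stored values directly (base64 -d as on GNU coreutils) gives back exactly the credentials I expect:

kubectl -n reactivedb-integ get secret s3-fe-devflex -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
kubectl -n reactivedb-integ get secret s3-fe-devflex -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d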

And in your secret you have both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set correctly?

Yes, I believe so, @regis. Here is my secret in the k8s cluster:

apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: RUhUVkhVSFhXSxxxxxxxxxxxxxxxxxxxxxxxx
  AWS_SECRET_ACCESS_KEY: U0w1WDF1Sxxxxxxxxxxxxxxxxxxxxxxxx
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"AWS_ACCESS_KEY_ID":"RUhUVkhVSFhXS0ZPSxxxxxxx","AWS_SECRET_ACCESS_KEY":"U0w1WDF1Sxxxxxxxxxxx"},"kind":"Secret","metadata":{"annotations":{},"name":"s3-fe-devflex","namespace":"reactivedb-integ"},"type":"Opaque"}
  creationTimestamp: "2021-11-30T09:38:45Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:AWS_ACCESS_KEY_ID: {}
        f:AWS_SECRET_ACCESS_KEY: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-11-30T09:38:45Z"
  name: s3-fe-devflex
  namespace: reactivedb-integ
  resourceVersion: "7243273"
  uid: 27101ec6-0822-4305-8d95-9cf1eb2293bd
type: Opaque

Can you check your endpoint setting in Kubernetes? On your cluster the endpoint URL uses eu-west0, whereas your Cyberduck URL shows eu-west-0 (mind the - between west and 0).

Just to be clear, I think your endpoint should be https://oss.eu-west-0.prod-cloud-ocb.orange-business.com:443 and not https://oss.eu-west0.prod-cloud-ocb.orange-business.com:443
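
If it helps, a quick way to sanity-check the corrected hostname from a machine that can reach it (add -x for your proxy if needed) is:

curl -I https://oss.eu-west-0.prod-cloud-ocb.orange-business.com:443/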

My fault, @regis … once I fixed my eu-west-0 typo, I got my MongoDB backup in my bucket :+1:
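
For the record, the finished backup also shows up on the Kubernetes side (if I remember correctly, psmdb-backup is the short name for the PerconaServerMongoDBBackup resource):

kubectl -n reactivedb-integ get psmdb-backup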

Glad to hear your issue is fixed. Don't worry about it; we've all done it. Sometimes it's hard to see small details like this, and a fresh eye can make a big difference. Have a good one.
