Worked example request - on-demand backup after install via Helm

Hi,

Struggling a little with the documentation, which at a guess seems to be biased toward installing the operator via git and kubectl. I’ve taken the Helm approach, and would like to run an on-demand backup to check that everything is working correctly before creating a cron job for nightly backups, and then do a test restore.

I’ve created and applied a Kubernetes secret containing our AWS access key ID and secret, and now just need to put together a custom resource which ties everything together. It would seem that I need to do two things. To run the on-demand backup, I can use the example shown here - Making on-demand backup - Percona Operator for MongoDB
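As far as I can tell, that example boils down to a separate PerconaServerMongoDBBackup resource, roughly like this (the backup and cluster names here are placeholders for our own):

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup1
spec:
  clusterName: my-cluster-name
  storageName: s3-eu-west

which would then be applied with kubectl apply -f backup1.yaml.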

That needs a storageName, which in my case is s3-eu-west. The CR I’ve put together is failing with the error:

error: error parsing s3-ms-cr.yaml: error converting YAML to JSON: yaml: did not find expected node content

s3-ms-cr.yaml

backup:
  enabled: true
  restartOnFailure: true
  image: percona/percona-server-mongodb-operator:1.13.0-backup
  serviceAccountName: perc-mongo-op-psmdb-operator
  serviceAccountName: percona-server-mongodb-operator
  storages:
    s3-eu-west:
      type: s3
      s3:
        bucket: s3://backups.example.com/archive/server20/
        region: eu-west-1
        credentialsSecret: secretname-backup-s3

I’ve tried with both service account names; same error.

Is this the correct way to go, and if so, does anyone have a pointer as to what I might have done wrong? The file appears to be OK, with no strange characters.

Many thanks,

Mike

Hi @Michael_Shield, please attach your whole CR. Thanks.

Hi Slava,

I don’t think that would do any good, as further research tended to back up what I was trying to say in the first place. Changes to a Helm-managed setup appear to have to be made using helm upgrade, passing the params you’d like to change in a file, together with the correct upgrade options so as not to make unwanted alterations to the existing setup. Which, as previously mentioned, isn’t really covered in the current documentation.
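In outline, the pattern seems to be the following (release, namespace, and file name being whatever applies in your case; the full worked example is further down):

helm upgrade <release> percona/psmdb-db -n <namespace> --reuse-values -f changes.yaml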

I’d managed to uncover what values Helm was holding, and the update would probably have been straightforward, as the schedule was already in place in the default values and just needed the comments removing, or copying out again with them removed.

Either way, I’ve run into another issue that takes precedence: the two test clusters I’ve built to check this out on, before modifying the production setup, both ran into certificate issues which caused the liveness probe to fail. I’ll log that separately, and come back to this one once that issue is resolved.

Thanks,

Mike

Solved it, I think.

Worked example to follow. All the details are provided, but the docs don’t put the pieces together as they do for the kubectl example. It would just be nice if a simple example of updating Helm could be included, as it’s rather counter-intuitive to be using “helm upgrade --reuse-values” as an update method.

Helm issue

install

Prod cluster

helm install mongodb-clu1 percona/psmdb-db --version 1.13.0 --namespace mongodb \
  --set "replsets[0].volumeSpec.pvc.storageClassName=local-hostpath-mongo-slow-sc" \
  --set "replsets[0].name=rs0" --set "replsets[0].size=3" \
  --set "replsets[0].volumeSpec.pvc.resources.requests.storage=2000Gi" \
  --set backup.enabled=true --set sharding.enabled=false --set pmm.enabled=true

root@kube-master-1 ~ # helm get values mongodb-clu1 -n mongodb
USER-SUPPLIED VALUES:
backup:
  enabled: true
pmm:
  enabled: true
replsets:
- name: rs0
  size: 3
  volumeSpec:
    pvc:
      resources:
        requests:
          storage: 2000Gi
      storageClassName: local-hostpath-mongo-slow-sc
sharding:
  enabled: false

This shows the user-supplied values from the install command, some of which override existing default values.

Test cluster - slightly smaller storage options available

helm install mongodb-clu1 percona/psmdb-db --version 1.14.0 --namespace mongodb \
  --set "replsets[0].volumeSpec.pvc.storageClassName=local-hostpath-mongo-prod-sc" \
  --set "replsets[0].name=rs0" --set "replsets[0].size=3" \
  --set "replsets[0].volumeSpec.pvc.resources.requests.storage=15Gi" \
  --set backup.enabled=true --set sharding.enabled=false --set pmm.enabled=true

root@kube-1:~# helm get values mongodb-clu1 -n mongodb
USER-SUPPLIED VALUES:
backup:
  enabled: true
pmm:
  enabled: true
replsets:
- name: rs0
  size: 3
  volumeSpec:
    pvc:
      resources:
        requests:
          storage: 15Gi
      storageClassName: local-hostpath-mongo-prod-sc
sharding:
  enabled: false

key commands

helm get all mongodb-clu1 -n mongodb - shows all the values of the current helm release, in this case mongodb-clu1

helm show values percona/psmdb-db - a handy file with all the values we need to do S3 backups and more, but just commented out!
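To get an editable copy, just redirect the output to a file (the file name here is arbitrary):

helm show values percona/psmdb-db > psmdb-values.yaml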

Just copy the backup section to a file, choose between the S3 and MinIO storage options, and select a scheduling option from the tasks section (see the sketch below).
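For the nightly run I’m after, the tasks section would presumably end up something like this (the task name and cron schedule here are my own choices; the field names follow the chart’s commented-out example):

backup:
  tasks:
  - name: nightly-s3-eu-west
    enabled: true
    schedule: "0 2 * * *"
    keep: 3
    storageName: s3-eu-west
    compressionType: gzip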

helm upgrade mongodb-clu1 percona/psmdb-db -n mongodb --reuse-values -f storage.yaml

where

root@kube-1:~# more storage.yaml
backup:
  storages:
    s3-eu-west:
      type: s3
      s3:
        bucket: s3://backups.example.com/archive/servers/
        credentialsSecret: backup-s3
        region: eu-west-1
        prefix: ""
        uploadPartSize: 10485760
        maxUploadParts: 10000
        storageClass: STANDARD
        insecureSkipTLSVerify: false

and remember to use --reuse-values, or the previously supplied values will be lost and reset to the chart defaults

root@kube-1:~# helm get values mongodb-clu1 -n mongodb
USER-SUPPLIED VALUES:
backup:
  enabled: true
  storages:
    s3-eu-west:
      s3:
        bucket: s3://backups.example.com/archive/servers/
        credentialsSecret: backup-s3
        insecureSkipTLSVerify: false
        maxUploadParts: 10000
        prefix: ""
        region: eu-west-1
        storageClass: STANDARD
        uploadPartSize: 10485760
      type: s3
pmm:
  enabled: true
replsets:
- name: rs0
  size: 3
  volumeSpec:
    pvc:
      resources:
        requests:
          storage: 15Gi
      storageClassName: local-hostpath-mongo-prod-sc
sharding:
  enabled: false

@mikes, good news. I will discuss with our technical writer how we can improve our doc.

credentialsSecret refers to the Kubernetes secret previously defined to hold our AWS S3 credentials.
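For completeness, a secret of that shape can be created along these lines (the key names are the ones the operator’s docs use; the literal values are placeholders):

kubectl create secret generic backup-s3 -n mongodb \
  --from-literal=AWS_ACCESS_KEY_ID=<your access key id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<your secret access key>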