How to configure S3 backup for PostgreSQL using values.yaml

How do I configure S3 backup storage using the values.yaml file or Helm chart options? I have:

    backup:
      storages:
        - name: "s3-storage"
          type: s3
          bucket: "my-backup"
          region: default
          endpointUrl: "https://s3.mystorage.local"
      schedule:
        - name: "daily-s3"
          schedule: "0 3 * * *"
          keep: 5
          storage: s3-storage
          type: full

Get an error:
Error: INSTALLATION FAILED: YAML parse error on pg-db/templates/cluster.yaml: error converting YAML to JSON: yaml: line 70: did not find expected key

How to fix this?

Hello everyone. I managed to set up that part in the values, and I even set the S3 parameters in the values of the PG operator, but to no avail.
This seems to work:

    - name: repo1
      type: "s3"
      bucket: "percona-backup"

A backup pod is always running this:

msg="command to execute is [pgbackrest backup --type=full --db-host= --db-path=/pgdata/pg-db]"

I managed to pass arguments using a Pgtask:

    apiVersion: pg.percona.com/v1
    kind: Pgtask
    metadata:
      labels:
        pg-cluster: pg-db
        pgouser: admin
      name: pg-db-backrest-full-backup
    spec:
      name: pg-db-backrest-full-backup
      parameters:
        backrest-command: backup
        # backup type can be:
        # Differential: create a backup of everything since the last full backup was taken
        #   --type=diff
        # Full: back up the entire database
        #   --type=full
        # Incremental: back up everything since the last backup was taken,
        # whether it was full, differential, or incremental
        #   --type=incr
        # backup retention can be:
        #   --repo1-retention-full=2     how many full backups to retain
        #   --repo1-retention-diff=4     how many differential backups to retain
        #   --repo1-retention-archive=6  how many sets of WAL archives to retain
        #                                alongside the retained full and differential backups
        backrest-opts: --type=full --repo1-retention-full=5 --repo1-type=s3 --repo1-s3-bucket=percona-backup --repo1-s3-endpoint=.......
        backrest-s3-verify-tls: "false"
        backrest-storage-type: "s3"
        job-name: pg-db-backrest-full-backup
        pg-cluster: pg-db
      tasktype: backrest

but in this case it's not allowing me to use repo-s3-key on the command line.

I suppose it should be using the environment variables from pg-db-backrest-repo-config (which I configured too), but it just isn't. I don't know what else to try. If anyone can help me, it would be great. Thanks!



Could you please share the helm CLI call for that?


Hello, the helm command is the usual:

helm upgrade --install pg-db percona/pg-db --version 1.2.0 -n percona --create-namespace -f values-pg-db.yaml

And the values.yaml is:

    backup:
      image: perconalab/percona-postgresql-operator:main-ppg14-pgbackrest
    #  imagePullPolicy: Always
      backrestRepoImage: perconalab/percona-postgresql-operator:main-ppg14-pgbackrest-repo
      resources:
        requests:
          cpu: "200m"
          memory: "48Mi"
    #    limits:
    #      cpu: "1"
    #      memory: "64Mi"
    #  affinity:
    #    antiAffinityType: preferred
      volumeSpec:
        size: 1G
        accessmode: ReadWriteOnce
        storagetype: dynamic
        storageclass: "local-path"
    #    matchLabels: ""
      storages:
        - name: repo1
          type: "s3"
          bucket: "percona-backup"
          region: "east-us"
          uriStyle: "host"
          verifyTLS: "false"
          endpointUrl: ""
      schedule:
        - name: "5min-backup"
          schedule: "*/5 * * * *"
          keep: 3
          type: full
          storage: local,s3
And I used this file to add pgBackRest parameters, to try to change the command in the backup pod.

I followed and tried everything from this doc:

It creates a local backup, and everything else works fine, but I can't get the S3 upload to work.
Thank you for your reply


Thank you for the detailed output. We appreciate it.

The Helm chart for the database cluster object has one small difference in the storage configuration: in YAML terms, it is not an array but a map.
Please check this link: percona-helm-charts/ at pg-operator-1.2.0 · percona/percona-helm-charts · GitHub. It has a small hint on how to set the backup storage the way the operator expects.
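Roughly, the difference between the two shapes looks like this (a sketch only; `storage0` is an example map key, and other fields are elided, so check the chart's values.yaml for the exact field names):

    # Array form - a YAML sequence; NOT what the pg-db chart expects:
    backup:
      storages:
        - name: repo1
          type: s3
          bucket: percona-backup

    # Map form - a YAML mapping keyed by storage name, as the chart expects:
    backup:
      storages:
        storage0:
          type: s3
          bucket: percona-backup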


Hello, and thank you Ivan for your response.
I used this to deploy (might be useful for others)

helm upgrade --install pg-db percona/pg-db --version 1.2.0 -n percona --create-namespace -f values-pg-db.yaml \
  --set \
  --set \
  --set \
  --set'' \
  --set'west-us' \
  --set backup.storages.storage0.verifyTLS=false \
  --set backup.storages.storage0.uriStyle='path' \
  --set bucket.key=IK6Oa8DM9HUgP3uC \
  --set bucket.secret=Ec8xH9WU6CCDpXtMC2CNy1ko2b9hQJU9 \
  --set \
  --set "backup.schedule[0].name=sat-night-backup" \
  --set "backup.schedule[0].schedule=*/5 * * * *" \
  --set "backup.schedule[0].keep=5" \
  --set "backup.schedule[0].type=full" \
  --set "backup.schedule[0].storage=s3" \
  --set pgPrimary.volumeSpec.storageclass=local-path \
  --set backup.volumeSpec.storageclass=local-path \
  --set replicas.volumeSpec.storageclass=local-path

So now it does try to use S3, but the stanza won't create. It looks like it is joining the bucket name to the MinIO host domain. I think I set it up correctly, so I'm not sure what is going on.

time="2022-08-05T08:18:51Z" level=info msg="stderr=[ERROR: [039]: HTTP request failed with 404 (Not Found):\n       *** Path/Query ***:\n       GET /?delimiter=%2F&list-type=2&prefix=backrestrepo%2Fpg-db-backrest-shared-repo%2Farchive%2Fdb%2F\n       *** Request Headers ***:\n       authorization: <redacted>\n       content-length: 0\n       host:\n


Try to use uriStyle: host instead. It should not append the bucket name to the overall S3 URI.
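For background (this is general S3 addressing behavior, not specific to this chart, and the hostname below reuses the example endpoint from earlier in the thread): path-style requests put the bucket into the URL path, while host-style (virtual-host) requests put it into the hostname, so the server must resolve and route `bucket.endpoint` names for host style to work:

    # uriStyle: path -> bucket appears in the URL path:
    #   https://s3.mystorage.local/percona-backup/backrestrepo/...
    # uriStyle: host -> bucket becomes part of the hostname:
    #   https://percona-backup.s3.mystorage.local/backrestrepo/...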


Thank you very much, Ivan, I made it work. The problem was that I used the wrong storage name in the code below.
