How to configure S3 backup for PostgreSQL using values.yaml

How do I configure S3 backup storage using the values.yaml file or Helm chart options? This is what I have:

backup:
  storages:
    s3-storage:
      type: s3
      bucket: "my-backup"
      region: default
      endpointUrl: "https://s3.mystorage.local"
  schedule:
    - name: "daily-s3"
      schedule: "0 3 * * *"
      keep: 5
      storage: s3-storage
      type: full

I get this error:
Error: INSTALLATION FAILED: YAML parse error on pg-db/templates/cluster.yaml: error converting YAML to JSON: yaml: line 70: did not find expected key

How to fix this?

Hello everyone. I managed to set up that part in the values file, and I even set the S3 parameters in the values of the pg-operator chart, but to no avail.
This seems to work:

  storages:
    - name: repo1
      s3:
        type: "s3"
        bucket: "percona-backup"
.....

The backup pod is always running this command:

msg="command to execute is [pgbackrest backup --type=full --db-host=10.42.0.184 --db-path=/pgdata/pg-db]"

I managed to pass arguments using a Pgtask:

apiVersion: pg.percona.com/v1
kind: Pgtask
metadata:
  labels:
    pg-cluster: pg-db
    pgouser: admin
  name: pg-db-backrest-full-backup
spec:
  name: pg-db-backrest-full-backup
  parameters:
    backrest-command: backup
# backup type can be:
# Differential: Create a backup of everything since the last full backup was taken
# --type=diff
# Full: Back up the entire database
# --type=full
# Incremental: Back up everything since the last backup was taken, whether it was full, differential, or incremental
# --type=incr
# backup retention can be:
# --repo1-retention-full=2 how many full backups to retain
# --repo1-retention-diff=4 how many differential backups to retain
# --repo1-retention-archive=6 how many sets of WAL archives to retain alongside
# the full and differential backups that are retained
    backrest-opts: --type=full --repo1-retention-full=5 --repo1-type=s3 --repo1-s3-bucket=percona-backup --repo1-s3-endpoint=.......
    backrest-s3-verify-tls: "false"
    backrest-storage-type: "s3"
    job-name: pg-db-backrest-full-backup
    pg-cluster: pg-db
  tasktype: backrest

but in this case it doesn’t let me pass repo1-s3-key on the command line :confused:

I suppose it should be using environment variables from pg-db-backrest-repo-config (which I configured too), but it just isn’t. I don’t know what else to try. If anyone can help me, it would be great. Thanks!


Hi

Could you please share the helm CLI call you used?


Hello, the helm command is the usual:

helm upgrade --install pg-db percona/pg-db --version 1.2.0 -n percona --create-namespace -f values-pg-db.yaml

And the values.yaml is:

backup:
  image: perconalab/percona-postgresql-operator:main-ppg14-pgbackrest
#    imagePullPolicy: Always
  backrestRepoImage: perconalab/percona-postgresql-operator:main-ppg14-pgbackrest-repo
  resources:
    requests:
      cpu: "200m"
      memory: "48Mi"
#      limits:
#        cpu: "1"
#        memory: "64Mi"
#     affinity:
#       antiAffinityType: preferred
  volumeSpec:
    size: 1G
    accessmode: ReadWriteOnce
    storagetype: dynamic
    storageclass: "local-path"
#      matchLabels: ""
  storages:
    - name: repo1
      s3:
        type: "s3"
        bucket: "percona-backup"
        region: "east-us"
        uriStyle: "host"
        verifyTLS: "false"
        endpointUrl: "objectstorage.XXX.host"
  schedule:
    - name: "5min-backup"
      schedule: "*/5 * * * *"
      keep: 3
      type: full
      storage: local,s3

And I used this file to add pgBackRest parameters, to try to change the command in the backup pod:

https://github.com/percona/percona-postgresql-operator/blob/main/deploy/backup/backup.yaml

I followed and tried everything from the docs.

It creates a local backup and everything else works fine, but I can’t get the S3 upload to work.
Thank you for your reply


Thank you for the detailed output. We appreciate it.

The Helm chart for the database cluster object has one small difference inside the storage configuration: in YAML terms it is a map, not an array.
Please check percona-helm-charts/README.md at pg-operator-1.2.0 · percona/percona-helm-charts · GitHub. It has a small hint on how to set the backup storage the way the operator expects.


Hello, and thank you, Ivan, for your response.
I used this to deploy (it might be useful for others):

helm upgrade --install pg-db percona/pg-db --version 1.2.0 -n percona --create-namespace -f values-pg-db.yaml \
  --set backup.storages.my-s3.bucket=percona-backup \
  --set backup.storages.my-s3.type=s3 \
  --set backup.storages.my-s3.name=my-s3 \
  --set backup.storages.my-s3.endpointUrl='objectstorage.XXXXXXXX.host' \
  --set backup.storages.my-s3.region='west-us' \
  --set backup.storages.storage0.verifyTLS=false \
  --set backup.storages.storage0.uriStyle='path' \
  --set bucket.key=IK6Oa8DM9HUgP3uC \
  --set bucket.secret=Ec8xH9WU6CCDpXtMC2CNy1ko2b9hQJU9 \
  --set backup.storages.my-local.type=local \
  --set "backup.schedule[0].name=sat-night-backup" \
  --set "backup.schedule[0].schedule=*/5 * * * *" \
  --set "backup.schedule[0].keep=5" \
  --set "backup.schedule[0].type=full" \
  --set "backup.schedule[0].storage=s3" \
  --set pgPrimary.volumeSpec.storageclass=local-path \
  --set backup.volumeSpec.storageclass=local-path \
  --set replicas.volumeSpec.storageclass=local-path

So now it does try to use S3, but the stanza won’t create. It looks like it is joining the bucket name onto the MinIO host domain. I think I set it up correctly, so I’m not sure what is going on.

time="2022-08-05T08:18:51Z" level=info msg="stderr=[ERROR: [039]: HTTP request failed with 404 (Not Found):
       *** Path/Query ***:
       GET /?delimiter=%2F&list-type=2&prefix=backrestrepo%2Fpg-db-backrest-shared-repo%2Farchive%2Fdb%2F
       *** Request Headers ***:
       authorization: <redacted>
       content-length: 0
       host: percona-backup.objectstorage.XXXXX.host

Hi,

Try using uriStyle: host instead. It should not append the bucket name to the overall S3 URI.


Thank you very much, Ivan, I made it work. The problem was that I used the wrong storage name in the command above (storage0 instead of my-s3 for the verifyTLS and uriStyle flags).


Hi,

I am having a similar issue setting up S3 backup on MinIO with a values file.
My values file looks like this:

finalizers:

crVersion: 2.3.1
repository: percona/percona-postgresql-operator
image: ""
imagePullPolicy: Always
postgresVersion: 15

pause: false
unmanaged: false
standby:
  enabled: false

customTLSSecret:
  name: ""
customReplicationTLSSecret:
  name: ""

instances:
- name: service-1
  replicas: 3

  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchLabels:
              postgres-operator.crunchydata.com/data: postgres
          topologyKey: kubernetes.io/hostname

  walVolumeClaimSpec:
    storageClassName: "default"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi

  dataVolumeClaimSpec:
    storageClassName: "default"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi

proxy:
  pgBouncer:
    replicas: 3
    image: ""

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchLabels:
                postgres-operator.crunchydata.com/role: pgbouncer
            topologyKey: kubernetes.io/hostname

backups:
  pgbackrest:
    image: ""

    global:
      repo1-retention-full: "14"
      repo1-retention-full-type: time

      repo2-retention-full: "14"
      repo2-retention-full-type: time
      repo2-s3-uri-style: path
      repo2-s3-verify-tls: "false"
      repo2-s3-key: "..."
      repo2-s3-key-secret: "..."
      repo2-path: /repo2

    repoHost:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  postgres-operator.crunchydata.com/data: pgbackrest
              topologyKey: kubernetes.io/hostname

      priorityClassName: high-priority

      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: my-node-label
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            postgres-operator.crunchydata.com/pgbackrest: ""

    manual:
      repoName: repo1
      options:
      - --type=full
    repos:
    - name: repo1
      schedules:
        full: "* * * 7 *"
        incremental: "* * * 5 *"
      volume:
        volumeClaimSpec:
          storageClassName: "default"
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
    - name: repo2
      s3:
        bucket: "..."
        endpoint: "..."
        region: "default"
      schedules:
        full: "*/5 * * * *"

pmm:
  enabled: false
  image:
    repository: percona/pmm-client
    tag: 2.41.0
  secret: cluster1-pmm-secret
  serverHost: monitoring-service

secrets:
  primaryuser: test123
  postgres: test123
  pgbouncer:
  pguser: test123

I have been researching this for a week and couldn’t make it work.
The pods aren’t even starting because the pgbackrest-config volume is not mounting, and there are no logs.

Hello @mboncalo.

I do not see any secrets. Have you created any for your S3?