Backup Issues with Google Cloud Storage (GCS)

Description:

I'm trying to get backups to a GCS bucket working with either the S3-compatible pattern or the Google Cloud Storage pattern. Neither pattern seems to work.

Steps to Reproduce:

Backup config:

  backups:
    pgbackrest:
      image: perconalab/percona-postgresql-operator:main-pgbackrest17
      configuration:
        - secret:
            name: dashboard-v2-6-0-pgbackrest-secrets
      global:
        repo1-path: /pgbackrest/postgres-operator/dashboard-v2-6-0/repo1
        repo1-retention-full: "7"
        repo2-path: /pgbackrest/developer-grafana12/postgres/dashboard-v2-6-0
        repo2-retention-full: "7"
      repoHost:
      manual:
        repoName: repo2
        options:
         - --type=full
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: "20Gi"
      - name: repo2
        schedules:
          full: "0 0 * * 6"
          differential: "0 1 * * 1-6"
          incremental: "0 1 * * 1-6"
        gcs:
          bucket: "dev-dashboards"

The secret dashboard-v2-6-0-pgbackrest-secrets contains two data values:

gcs.conf (base64-encoded):

[global]
repo2-gcs-key=/etc/pgbackrest/conf.d/gcs-key.json

gcs-key.json (base64-encoded service account key file)
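Not shown in the thread, but for reference, one way to build a secret in that shape by hand, so the two data values end up base64-encoded the way the operator expects. The filenames match the secret described above; the key-file content here is a placeholder, not a real service account key. The emitted manifest could be piped to kubectl apply -f -.

```shell
# Write the pgBackRest config fragment that points at the mounted key file.
cat > gcs.conf <<'EOF'
[global]
repo2-gcs-key=/etc/pgbackrest/conf.d/gcs-key.json
EOF

# Placeholder for the real GCP service account key (assumption, for illustration).
printf '%s\n' '{"type": "service_account"}' > gcs-key.json

# Emit a Secret manifest with both values base64-encoded into .data.
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-v2-6-0-pgbackrest-secrets
  namespace: postgres-operator
data:
  gcs.conf: $(base64 < gcs.conf | tr -d '\n')
  gcs-key.json: $(base64 < gcs-key.json | tr -d '\n')
EOF
```

Equivalently, `kubectl create secret generic dashboard-v2-6-0-pgbackrest-secrets --from-file=gcs.conf --from-file=gcs-key.json` does the encoding for you.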

Trigger backup:

apiVersion: pgv2.percona.com/v2
kind: PerconaPGBackup
metadata:
  name: backuptest
spec:
  pgCluster: dashboard-v2-6-0
  repoName: repo2
  options:
    - --type=full
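For completeness, the request above can be applied and inspected like this (assuming the manifest is saved as backuptest.yaml; that file name is my own):

```shell
# Submit the on-demand backup request to the operator.
kubectl -n postgres-operator apply -f backuptest.yaml

# Inspect the object's status and events if the backup stalls.
kubectl -n postgres-operator describe perconapgbackup backuptest
```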

kubectl -n postgres-operator get PerconaPGBackup

backuptest   dashboard-v2-6-0   repo2                                             18m

Version:

2.6.0 Operator
Postgres 17

Logs:

kubectl -n postgres-operator logs pod/dashboard-v2-6-0-repo-host-0

Defaulted container "pgbackrest" out of: pgbackrest, pgbackrest-config, pgbackrest-log-dir (init), nss-wrapper-init (init)
P00   INFO: server command begin 2.55.0: --exec-id=71-94e46f60 --log-level-console=detail --log-level-file=off --log-level-stderr=error --log-path=/pgbackrest/repo1/log --no-log-timestamp --tls-server-address=0.0.0.0 --tls-server-auth=pgbackrest@9094c020-80d0-427a-ad1b-ad9df66de08e=* --tls-server-ca-file=/etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt --tls-server-cert-file=/etc/pgbackrest/server/server-tls.crt --tls-server-key-file=/etc/pgbackrest/server/server-tls.key

kubectl -n postgres-operator logs pod/dashboard-v2-6-0-repo-host-0 -c pgbackrest-config

environment: line 8: pkill: command not found
environment: line 2: pkill: command not found
environment: line 8: pkill: command not found
environment: line 2: pkill: command not found
environment: line 8: pkill: command not found
environment: line 2: pkill: command not found
environment: line 8: pkill: command not found
environment: line 2: pkill: command not found
environment: line 8: pkill: command not found

Expected Result:

Working backup

Actual Result:

No backup succeeded.

Additional Information:

I’m currently testing this on K3D.

k3d version v5.6.0
k3s version v1.27.4-k3s1 (default)

Hi @samir_esnet,

I have noticed that you are using a pgbackrest main image from perconalab: perconalab/percona-postgresql-operator:main-pgbackrest17. The perconalab repository is used for test images and is not recommended for use. The main images are the ones in development (at this moment targeting 2.7.0), and these images can be incompatible with version 2.6.0. The list of officially supported images for 2.6.0 can be found at: Percona certified images - Percona Operator for PostgreSQL. Could you check with an image supported for 2.6.0 and let us know whether it works? Thanks.


Ah, good to know. I had pulled the deploy/cr.yml from main. Fair enough.

I updated the cr to match the 2.6.0 tag, and I'm still having similar issues.

PostgresVersion: 16

  backups:
    pgbackrest:
      image: percona/percona-postgresql-operator:2.6.0-ppg16.8-pgbackrest2.54.2
      configuration:
        - secret:
            name: dashboard-v2-6-0-pgbackrest-secrets
      global:
        repo1-path: /pgbackrest/postgres-operator/dashboard-v2-6-0/repo1
        repo1-retention-full: "7"
        repo2-path: /pgbackrest/dev-staging/postgres/dashboard-v2-6-0
        repo2-storage-verify-tls: "true"
        repo2-s3-uri-style: path
        repo2-retention-full: "7"

kubectl -n postgres-operator get pg-backup

NAME         CLUSTER            REPO    DESTINATION                                                            STATUS     TYPE   COMPLETED   AGE
backuptest   dashboard-v2-6-0   repo2   gs://dev-dashboards/pgbackrest/dev-staging/postgres/dashboard-v2-6-0   Starting                      8m43s

kubectl -n postgres-operator logs pod/dashboard-v2-6-0-repo-host-0

no output

Hi @samir_esnet,

Could you provide the whole cr.yaml you use and check the operator log for ERRORs?

Thanks!

cr.yml

---
apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: dashboard-v2-6-0
spec:
  crVersion: 2.6.0
  users:
    - name: postgres
    - name: grafana
      databases:
        - dashboard
      password:
        type: ASCII

  image: percona/percona-postgresql-operator:2.6.0-ppg16.8-postgres
  imagePullPolicy: Always
  postgresVersion: 16
  instances:
  - name: dashboard
    replicas: 3
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchLabels:
                postgres-operator.crunchydata.com/data: postgres
            topologyKey: kubernetes.io/hostname
    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: "20Gi"

  proxy:
    pgBouncer:
      replicas: 3
      image: percona/percona-postgresql-operator:2.6.0-ppg16.8-pgbouncer1.24.0
      exposeSuperusers: true
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  postgres-operator.crunchydata.com/role: pgbouncer
              topologyKey: kubernetes.io/hostname
  backups:
    pgbackrest:
      image: percona/percona-postgresql-operator:2.6.0-ppg16.8-pgbackrest2.54.2
      configuration:
        - secret:
            name: dashboard-v2-6-0-pgbackrest-secrets
      global:
        repo1-path: /pgbackrest/postgres-operator/dashboard-v2-6-0/repo1
        repo1-retention-full: "7"
        repo2-path: /pgbackrest/dev-staging/postgres/dashboard-v2-6-0
        repo2-storage-verify-tls: "y"
        repo2-s3-uri-style: path
        repo2-retention-full: "7"
      repoHost:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
             - weight: 1
               podAffinityTerm:
                 labelSelector:
                   matchLabels:
                     postgres-operator.crunchydata.com/data: pgbackrest
                 topologyKey: kubernetes.io/hostname
      manual:
        repoName: repo1
        options:
         - --type=full
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: "20Gi"
      - name: repo2
        schedules:
          full: "0 0 * * 6"
          differential: "0 1 * * 1-6"
          incremental: "0 1 * * 1-6"
        gcs:
          bucket: "dev-dashboards"


  pmm:
    enabled: false
    image: perconalab/pmm-client:dev-latest
    secret: cluster1-pmm-secret
    serverHost: monitoring-service

backup request:

apiVersion: pgv2.percona.com/v2
kind: PerconaPGBackup
metadata:
  name: backuptest
spec:
  pgCluster: dashboard-v2-6-0
  repoName: repo2
  options:
    - --type=full

Here’s the output from the operator:

2025-07-07T15:28:46.998Z        INFO    Superusers are exposed through PGBouncer        {"controller": "postgrescluster", "controllerGroup": "postgres-operator.crunchydata.com", "controllerKind": "PostgresCluster", "PostgresCluster": {"name":"dashboard-v2-6-0","namespace":"postgres-operator"}, "namespace": "postgres-operator", "name": "dashboard-v2-6-0", "reconcileID": "aa52c21d-db90-49bc-82d8-85a408e0e74d"}
2025-07-07T15:28:47.073Z        INFO    Waiting for backup to start     {"controller": "perconapgbackup", "controllerGroup": "pgv2.percona.com", "controllerKind": "PerconaPGBackup", "PerconaPGBackup": {"name":"backuptest","namespace":"postgres-operator"}, "namespace": "postgres-operator", "name": "backuptest", "reconcileID": "d881cd4f-c8d6-4242-819a-2b0cd4085877", "request": {"name":"backuptest","namespace":"postgres-operator"}}
2025-07-07T15:28:52.074Z        INFO    Waiting for backup to start     {"controller": "perconapgbackup", "controllerGroup": "pgv2.percona.com", "controllerKind": "PerconaPGBackup", "PerconaPGBackup": {"name":"backuptest","namespace":"postgres-operator"}, "namespace": "postgres-operator", "name": "backuptest", "reconcileID": "5d3eb4ea-5f23-465a-8975-629b9c97f8b7", "request": {"name":"backuptest","namespace":"postgres-operator"}}
2025-07-07T15:28:57.078Z        INFO    Waiting for backup to start     {"controller": "perconapgbackup", "controllerGroup": "pgv2.percona.com", "controllerKind": "PerconaPGBackup", "PerconaPGBackup": {"name":"backuptest","namespace":"postgres-operator"}, "namespace": "postgres-operator", "name": "backuptest", "reconcileID": "a31da13b-5838-49c9-8fc2-18e9624ba4b7", "request": {"name":"backuptest","namespace":"postgres-operator"}}
2025-07-07T15:28:57.648Z        ERROR   unable to create stanza {"controller": "postgrescluster", "controllerGroup": "postgres-operator.crunchydata.com", "controllerKind": "PostgresCluster", "PostgresCluster": {"name":"dashboard-v2-6-0","namespace":"postgres-operator"}, "namespace": "postgres-operator", "name": "dashboard-v2-6-0", "reconcileID": "26a42cd1-bd07-482b-a4e2-de5696d33d15", "reconciler": "pgBackRest", "error": "command terminated with exit code 32: repo1-path = /pgbackrest/postgres-operator/dashboard-v2-6-0/repo1\nP00  ERROR: [032]: boolean option 'repo2-storage-verify-tls' must be 'y' or 'n'\nP00  ERROR: [032]: boolean option 'repo2-storage-verify-tls' must be 'y' or 'n'\n ", "errorVerbose": "command terminated with exit code 32: repo1-path = /pgbackrest/postgres-operator/dashboard-v2-6-0/repo1\nP00  ERROR: [032]: boolean option 'repo2-storage-verify-tls' must be 'y' or 'n'\nP00  ERROR: [032]: boolean option 'repo2-storage-verify-tls' must be 'y' or 'n'\n \ngithub.com/percona/percona-postgresql-operator/internal/pgbackrest.Executor.StanzaCreateOrUpgrade\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/pgbackrest/pgbackrest.go:105\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcileStanzaCreate\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:2759\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcilePGBackRest\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:1487\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).Reconcile\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/controller.go:383\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1
/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:328\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:288\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:249\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_arm64.s:1223\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcileStanzaCreate\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:2765\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcilePGBackRest\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:1487\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).Reconcile\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/controller.go:383\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:328\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:288\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.
2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:249\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_arm64.s:1223"}
github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).Reconcile
        /go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/controller.go:383
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:118
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:328
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:288
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.1/pkg/internal/controller/controller.go:249

It seems like the error calls out the path for repo1, even though I'm trying to back up to repo2.

Even so, they should both work.
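For what it's worth, reading the [032] error literally: pgBackRest expects its boolean options as y/n rather than true/false, and s3-uri-style applies to S3-type repos rather than GCS. A sketch of the global section with that applied, values otherwise copied from the cr.yml above (not verified against 2.6.0):

```yaml
global:
  repo1-path: /pgbackrest/postgres-operator/dashboard-v2-6-0/repo1
  repo1-retention-full: "7"
  repo2-path: /pgbackrest/dev-staging/postgres/dashboard-v2-6-0
  repo2-storage-verify-tls: "y"   # pgBackRest booleans are y/n, not true/false
  repo2-retention-full: "7"
```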