Need help upgrading pg-db from v1.x to 2.2.0

Description:

We have a pg-db v1 cluster in production with S3 backups configured and custom pg_dump pipelines, and I need help with upgrading it, or migrating, to 2.2.0.

Steps to Reproduce:

Upgrade the v1 cluster to 2.2.0, or migrate / restore it to 2.2.0.

Version:

1.3.0

Expected Result:

Upgraded pg-db cluster to 2.2.0

I deployed a new v2 cluster for testing, and it feels much better than the v1 helm chart. But I still have some questions that I don’t know how to solve.
For instance, the first issue is that the chart’s install notes are incorrect in this part:

To get a PostgreSQL prompt inside your new cluster you can run:

  POSTGRES_USER=$(kubectl -n percona get secrets pg-db--secret -o jsonpath="{.data.username}" | base64 --decode)
  POSTGRES_PASSWORD=$(kubectl -n percona get secrets pg-db--secret -o jsonpath="{.data.password}" | base64 --decode)

And then
  $ kubectl run -i --rm --tty percona-client --image=perconalab/percona-distribution-postgresql:15 --restart=Never \
  -- psql "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@pg-db-pgbouncer.percona.svc.cluster.local/"

The only secret I see in the namespace is pg-db-pguser-pg-db, which holds only the pg-db user’s info. When I try to run psql and create a database for pg_restore:

$ kubectl run -i --rm --tty percona-client --image=perconalab/percona-distribution-postgresql:13.2 --restart=Never -- psql "postgresql://pg-db:XXXXXXXXXXXXXXXXX@pg-db-pgbouncer.percona.svc:5432/pg-db" -c "CREATE USER harbor WITH PASSWORD 'XXXXXX';" -c "CREATE DATABASE harbor OWNER harbor;"
If you don't see a command prompt, try pressing enter.
ERROR:  permission denied to create role
ERROR:  role "harbor" does not exist
pod "percona-client" deleted
pod default/percona-client terminated (Error)
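
For reference, this does get me a prompt when I pull the credentials from the secret that does exist; the user and password key names are my assumption from inspecting pg-db-pguser-pg-db, not from the chart notes:

  POSTGRES_USER=$(kubectl -n percona get secrets pg-db-pguser-pg-db -o jsonpath="{.data.user}" | base64 --decode)
  POSTGRES_PASSWORD=$(kubectl -n percona get secrets pg-db-pguser-pg-db -o jsonpath="{.data.password}" | base64 --decode)

  $ kubectl run -i --rm --tty percona-client --image=perconalab/percona-distribution-postgresql:15 --restart=Never \
  -- psql "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@pg-db-pgbouncer.percona.svc.cluster.local:5432/pg-db"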

I also need the postgres superuser password for our custom pipelines, which is what we take pg_dumps with.
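
From what I can tell in the v2 CR reference, explicitly declaring the postgres user under users should make the operator manage it and create a pg-db-pguser-postgres secret, which would solve both the superuser password and the "permission denied to create role" problem. I haven’t verified this yet, and I’m assuming the chart passes users through to spec.users:

  users:
    - name: postgres
    - name: pg-db
      databases:
        - pg-db

and then the password should be readable the same way:

  $ kubectl -n percona get secret pg-db-pguser-postgres -o jsonpath="{.data.password}" | base64 --decode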

The next thing is a clarification on setting up the S3 backup repo.
I’m not sure which paths I should use (my current guess is after the snippet below):

#    global:
#      repo1-retention-full: "14"
#      repo1-retention-full-type: time
#      repo1-path: /pgbackrest/postgres-operator/cluster1/repo1
#      repo1-cipher-type: aes-256-cbc
#      repo1-s3-uri-style: path
#      repo2-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo2
#      repo3-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo3
#      repo4-path: /pgbackrest/postgres-operator/cluster1-multi-repo/repo4
#    repoHost:
#      priorityClassName: high-priority
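
If I read the pgBackRest docs correctly, repoN-path is just the prefix inside the bucket under which the stanza is stored, so for a single S3 repo my current guess is something like this (the path itself is my own choice, and I’m assuming these options nest under backups.pgbackrest as in the values file):

    backups:
      pgbackrest:
        global:
          repo1-path: /pgbackrest/percona/pg-db/repo1
          repo1-retention-full: "14"
          repo1-retention-full-type: time
          repo1-s3-uri-style: path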

I see these params for the S3 bucket, but where do I add the access key and secret key for our MinIO S3 repo? (What I’m currently planning to try is after the snippet below.)

#    - name: repo2
#      s3:
#        bucket: "<YOUR_AWS_S3_BUCKET_NAME>"
#        endpoint: "<YOUR_AWS_S3_ENDPOINT>"
#        region: "<YOUR_AWS_S3_REGION>"
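
From the Crunchy docs this operator is based on, my understanding is that the keys don’t go into the values directly but into a separate Secret holding a pgBackRest config fragment, referenced through the configuration field; the secret and file names here are my own invention:

    apiVersion: v1
    kind: Secret
    metadata:
      name: pg-db-pgbackrest-secrets
      namespace: percona
    stringData:
      s3.conf: |
        [global]
        repo2-s3-key=<YOUR_MINIO_ACCESS_KEY>
        repo2-s3-key-secret=<YOUR_MINIO_SECRET_KEY>

and then reference it in the values:

    backups:
      pgbackrest:
        configuration:
          - secret:
              name: pg-db-pgbackrest-secrets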

Last question: can I run the helm deployment of 2.2.0 over our current v1 deployment (the pods look different, the structure is now primary and replica pods…), or do we have to restore from the old cluster? If so, what are the steps? I suppose there is a better way than recreating users / dbs and restoring from pg_dumps? (A fallback idea I had is sketched below.)
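
If an in-place helm upgrade over v1 isn’t supported, my fallback idea is to bootstrap the new cluster from the old cluster’s pgBackRest repo in S3 via dataSource; this is only my reading of the CR reference, and the repo1-path is my guess at the v1 chart’s default shared-repo layout:

    dataSource:
      pgbackrest:
        stanza: db
        configuration:
          - secret:
              name: pg-db-pgbackrest-secrets
        global:
          repo1-path: /backrestrepo/pg-db-backrest-shared-repo
        repo:
          name: repo1
          s3:
            bucket: <YOUR_OLD_S3_BUCKET_NAME>
            endpoint: <YOUR_MINIO_ENDPOINT>
            region: <YOUR_S3_REGION>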

I wasn’t aware there were guides for this.