Error: UPGRADE FAILED: failed to create resource: PerconaPGCluster.pgv2.percona.com "" is invalid: spec.postgresVersion: Invalid value: 16

Hey,

Could someone please help me? I'm kind of a newbie with Percona. My problem is that after trying to install the percona pg-db chart with PostgreSQL version 16, I got this error message:
Error: UPGRADE FAILED: failed to create resource: PerconaPGCluster.pgv2.percona.com "test-pg-db" is invalid: spec.postgresVersion: Invalid value: 16: spec.postgresVersion in body should be less than or equal to 15
I installed the mentioned chart successfully on another cluster (v1.23.14); however, on the cluster where I want to install it now, there are other versions of namespace-scoped operators and PG clusters already present.
Kubernetes version of the cluster: v1.23.15
crVersion: 2.3.1
repository: percona/percona-postgresql-operator
image: "percona/percona-postgresql-operator:2.3.1-ppg16-postgres-gis"
postgresVersion: 16
dependencies:

My question is: what should I do to resolve the error above and install the chart as I did before?

Best,
Mate

Hi @Mate_Kassa, could you please provide the command you used to install the cluster?

Hi @Slava_Sarzhan, I used a self-made values file with the command "helm upgrade test -f values.yaml . -n test" and the following, almost default, values:

pmm:
  enabled: true
  image:
    repository: percona/pmm-server
    pullPolicy: IfNotPresent
    tag: "2.41.0"
    imagePullSecrets: []
  service:
    name: monitoring-service
    type: NodePort
    ports:
      ## @param service.ports[0].port https port number
      - port: 443
        ## @param service.ports[0].targetPort target port to map for statefulset and ingress
        targetPort: https
        ## @param service.ports[0].protocol protocol for https
        protocol: TCP
        ## @param service.ports[0].name port name
        name: https
      ## @param service.ports[1].port http port number
      - port: 80
        ## @param service.ports[1].targetPort target port to map for statefulset and ingress
        targetPort: http
        ## @param service.ports[1].protocol protocol for http
        protocol: TCP
        ## @param service.ports[1].name port name
        name: http
  pmmEnv:
    DISABLE_UPDATES: "1"
  ingress:
    enabled: false
    nginxInc: false
  secret: 
    create: false
    name: test-pg-db-pmm-secret
    pmm_password: "test"
    serverKey: "test"
  serverHost: monitoring-service
  resources:
    requests:
      memory: 200M
      cpu: 500m
  nameOverride: "test-pmm"
  readyProbeConf:
    initialDelaySeconds: 1
    periodSeconds: 5
    failureThreshold: 6
  storage:
    name: test-pmm-storage
    storageClassName: "vmware-silver"
    size: 40Gi
  serviceAccount:
    create: true
    annotations: {}
    name: "test-pmm-service-account"
  podAnnotations: {}
  extraVolumeMounts: []
  extraVolumes: []

pg-operator:
  enabled: true
  replicaCount: 1
  operatorImageRepository: percona/percona-postgresql-operator
  imagePullPolicy: IfNotPresent
  image: "percona/percona-postgresql-operator:2.3.1"
  watchAllNamespaces: false
  imagePullSecrets: []
  nameOverride: "test-pgo"
  fullnameOverride: ""
  resources:
    limits:
      cpu: 200m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 20Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # disableTelemetry: according to
  # https://docs.percona.com/percona-operator-for-postgresql/2.0/telemetry.html
  # this is how you can disable telemetry collection
  # default is false which means telemetry will be collected
  disableTelemetry: false
  logStructured: false
  logLevel: "INFO"

pg-db:
  enabled: true
  finalizers:
  # Set this if you want that operator deletes the PVCs on cluster deletion
    - percona.com/delete-pvc
  # Set this if you want that operator deletes the ssl objects on cluster deletion
  #  - percona.com/delete-ssl
  crVersion: 2.3.1
  repository: percona/percona-postgresql-operator
  image: "percona/percona-postgresql-operator:2.3.1-ppg16-postgres-gis" # perconalab/percona-postgresql-operator:main-ppg16-postgres
  imagePullPolicy: Always
  postgresVersion: 16
  # port: 5432
  pause: false
  unmanaged: false
  standby:
    enabled: false
    # host: "<primary-ip>"
    # port: "<primary-port>"
    # repoName: repo1
  customTLSSecret:
    name: ""
  customReplicationTLSSecret:
    name: ""
  openshift: false
  users:
    - name: postgres
      databases:
        - postgres
      options: "SUPERUSER"
      password:
        type: ASCII
      secretName: "test-pg-db-postgres-secret"
  expose:
  #   annotations:
  #     my-annotation: value1
  #   labels:
  #     my-label: value2
    type: NodePort
  instances:
  - name: test-pg
    replicas: 2
    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 40Gi
  proxy:
    pgBouncer:
      replicas: 1
      image: "percona/percona-postgresql-operator:2.3.1-ppg16-pgbouncer"
      exposeSuperusers: true
  backups:
    pgbackrest:
  #    metadata:
  #    labels:
      image: "percona/percona-postgresql-operator:2.3.1-ppg16-pgbackrest"
      configuration:
      manual:
        repoName: repo1
        options:
        - --type=full
      repos:
      - name: repo1
        schedules:
          full: "0 0 * * *"
        volume:
          volumeClaimSpec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 40Gi
  pmm:
    enabled: true
    image:
      repository: percona/pmm-client
      tag: 2.41.0
  #  imagePullPolicy: IfNotPresent
    secret: "test-pg-db-pmm-secret"
    serverHost: monitoring-service
  secrets:
    name: 
    # replication user password
    primaryuser:
    # superuser password
    postgres: "test-pg-db-postgres-secret"
    # pgbouncer user password
    pgbouncer:
    # pguser user password
    pguser: 
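
If it helps, the same validation error should be reproducible without changing anything in the cluster by letting the API server validate the rendered manifests (just a sketch, assuming a Helm release recent enough to support server-side dry runs):

helm upgrade test . -f values.yaml -n test --dry-run=server

Since the spec.postgresVersion check comes from the CRD schema, it only shows up when the API server validates the object, not on a client-side render.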

Is it possible that another, already installed, namespace-scoped CRD with a different version number (2.2.0) is causing the problem?
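
For what it's worth, this is roughly how the schema of the currently installed CRD could be inspected (the CRD name is taken from the error message; the jsonpath assumes the v2 schema entry):

kubectl get crd perconapgclusters.pgv2.percona.com \
  -o jsonpath='{.spec.versions[?(@.name=="v2")].schema.openAPIV3Schema.properties.spec.properties.postgresVersion}'

If the schema still comes from the 2.2.0 release, I would expect it to report a maximum of 15, matching the error above.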

OK, we do not support major PG upgrades at the moment. You can't upgrade PG v15 to PG v16. We plan to add this in the next PG operator release.

Well, thank you for your answer. So it only works in a cluster where no older version of the CRD is installed? Because it seems to me that this is the only possible option.
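
The only other option I can think of is replacing the shared CRD itself: CRDs are cluster-scoped, so the single 2.2.0 copy validates every PerconaPGCluster in the cluster. An untested sketch, assuming the CRD manifest published at the v2.3.1 tag of the operator repository:

kubectl apply --server-side --force-conflicts -f \
  https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.3.1/deploy/crd.yaml

But that would also affect the existing 2.2.0 operators and clusters, which is exactly what I want to avoid.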