Cluster-wide installation tips

According to this doc, we have to explicitly list all (already existing) namespaces where we want to deploy PostgreSQL clusters at operator installation time. We can modify that list later, but only by deleting and then reinstalling the operator.

Is there a way to make this operator truly cluster-wide (meaning available to all namespaces), as is possible with the XtraDB one, which this doc describes as setting an empty list of namespaces?

Another question related to this cluster-wide mode: is there a limit to the number of namespaces managed by a single operator? Is this why you write in the doc:

We recommend running Percona Operator for PostgreSQL in a traditional way, limited to a specific namespace.

Thanks

Hello @edd ,

Currently it is not possible to deploy it once and cover all namespaces; you need to specify namespaces explicitly.
We will research how hard it is to provide this feature.

What is currently blocking you from specifying namespaces?

Hi @Sergey_Pronin

Thanks for your answer.
We’re working on a shared OpenShift cluster used by different teams and projects. We’d like to offer a cluster-wide operator in the marketplace (the dev catalog in OpenShift) both to existing projects and to the new ones that are regularly created on the cluster, without having to reinstall the operator each time a project is added.
As far as I understand, this is how the XtraDB operator works for MySQL, and also the Crunchy Data operator for Postgres, and we’re looking for the same kind of feature in your operator.

Hi All,

We have the same use case: different teams in different OpenShift projects want to use the same operator installation, and we don’t want to maintain multiple operator installations for the same purpose.
So it would be good if we didn’t have to name the namespaces, but could use e.g. “*” for all projects.
Furthermore, we’re only allowed to install directly from OperatorHub, so cluster-wide installation would need to be supported there as well.

Is there any plan to include this feature in the future, e.g. in operator version 2.0?

Thanks & best regards,
Martin

@martin.schack yes, in Operator v2 it will work exactly as you need: it will be possible to watch all namespaces at once. Stay tuned for the GA release (it should be this quarter).
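For readers looking ahead: in the v2 operator, the set of watched namespaces is typically controlled through a `WATCH_NAMESPACE` environment variable on the operator Deployment. A minimal sketch, assuming that variable and treating an empty value as “watch all namespaces” (the names, namespace, and image tag here are illustrative, not authoritative):

```yaml
# Illustrative fragment of the v2 operator Deployment (names and tag are examples).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona-postgresql-operator
  namespace: openshift-operators
spec:
  template:
    spec:
      containers:
      - name: operator
        image: percona/percona-postgresql-operator:2.5.0  # example tag
        env:
        - name: WATCH_NAMESPACE
          value: ""  # assumed: "" = all namespaces; "team-a,team-b" = only those
```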

Hi Sergey,

Operator 2.3 is now released and certified for OpenShift.
But we cannot find operator 2.3 in OperatorHub yet; only 1.4 is available.

Will this be available soon?
Will it support cluster-wide installation from OperatorHub?

Thanks & best regards,
Martin

Hello Martin,

yes, the release happened, and we are working on the OperatorHub listing and proper OpenShift certification. It takes longer than our QA cycles.

I will keep you posted on the progress. The expectation is that it will happen in the upcoming week or two.

Hi Sergey,

Any update on this topic?
Is it available in OperatorHub for cluster-wide installation already?

Hello @martin.schack ,

the operator is OpenShift certified now and present in OperatorHub.
As for cluster-wide: right now the bundle in OperatorHub is NOT cluster-wide. We are working with Red Hat to see how we can have it implemented there. The easiest solution seems to be a second bundle (project), but that looks like overkill.
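For background on why this depends on the bundle: with OLM, an operator’s install scope is determined by the OperatorGroup in the namespace where it is installed. An OperatorGroup whose spec omits `targetNamespaces` selects all namespaces, which corresponds to the AllNamespaces install mode a cluster-wide bundle would have to declare. A sketch of both variants, with illustrative names:

```yaml
# OperatorGroup for a cluster-wide (AllNamespaces) installation:
# omitting spec.targetNamespaces selects every namespace.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-operators       # illustrative name
  namespace: openshift-operators
---
# OperatorGroup for a single-namespace (OwnNamespace) installation.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: pg-operator-group      # illustrative name
  namespace: pgo               # illustrative namespace
spec:
  targetNamespaces:
  - pgo
```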

What is the current state of this?

@92fnko you can run both Operators, for MySQL and PostgreSQL, in cluster-wide mode on OpenShift.
For MySQL, we recently added a cw-bundle; use it for cluster-wide mode.

I tried this yesterday by deploying the operator to openshift-operators and then creating a cluster in another namespace: nothing happens. I then tested creating a cluster in the same namespace as the operator, and the cluster is created right away. Am I missing something? Users that are restricted to one or more namespaces should be able to create clusters regardless of where the operator is installed.

@92fnko let’s start with the basics: which operator is that? What parameters do you set? Etc.

I installed the certified Percona Operator for PostgreSQL from OpenShift’s OperatorHub and used the manifest below. It only works if the cluster is deployed in the same namespace as the operator.

apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: cluster1
  namespace: team-a
spec:
  crVersion: 2.5.0
  image: percona/percona-postgresql-operator:2.5.0-ppg16.4-postgres
  imagePullPolicy: Always
  postgresVersion: 16
  instances:
  - name: instance1
    replicas: 3
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchLabels:
                postgres-operator.crunchydata.com/data: postgres
            topologyKey: kubernetes.io/hostname
    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
  proxy:
    pgBouncer:
      replicas: 3
      image: percona/percona-postgresql-operator:2.5.0-ppg16.4-pgbouncer1.23.1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  postgres-operator.crunchydata.com/role: pgbouncer
              topologyKey: kubernetes.io/hostname
  backups:
    pgbackrest:
      image: percona/percona-postgresql-operator:2.5.0-ppg16.4-pgbackrest2.53-1
      repoHost:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/data: pgbackrest
                topologyKey: kubernetes.io/hostname
      manual:
        repoName: repo1
        options:
        - --type=full
      repos:
      - name: repo1
        schedules:
          full: "0 0 * * 6"
        volume:
          volumeClaimSpec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
  pmm:
    enabled: false
    image: percona/pmm-client:2.43.1
    secret: cluster1-pmm-secret
    serverHost: monitoring-service