Primary pod and one of the replicas sit on the same node

Also at: [K8SPG-180] Primary pod and one of the replicas sit on the same node (Percona JIRA)

Good day,

I am testing this operator following https://www.percona.com/doc/kubernetes-operator-for-postgresql/kubernetes.html

I am encountering a strange situation, which I’d like to clarify here.

Upon deployment, the primary and replica pods are as follows:

  • “cluster1-78ff6f9bfd-b4f7h” sits on node11
  • “cluster1-repl1-8c6d4bc57-jtwnr” sits on node09
  • “cluster1-repl2-d669d5555-9f9pp” sits on node11

I believe this situation is not ideal.

Considering I have many nodes, can I prevent replicas from sitting on the same node as the primary instance? I see that anti-affinity should be supported, but I can’t find any documentation on how to configure it.

Could you please help?


Hello @Stefano ,

sorry for not coming back to you earlier on this one. You can change the default antiAffinity rule from “preferred” to “required”, and this will enforce placing the replicas and the primary on different Kubernetes nodes.

Your cr.yaml will look like this:

spec:
...
  pgPrimary:
    antiAffinityType: required
...
  pgReplicas:
    hotStandby:
      antiAffinityType: required
    
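For context, setting `antiAffinityType: required` should make the operator render a `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity rule into the resulting Deployment, along the lines of the sketch below. This is illustrative only: the exact label selector keys depend on the operator version, and the `pg-cluster` label is an assumption.

```yaml
# Sketch of the anti-affinity block the operator may render (not verbatim output).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            pg-cluster: cluster1   # assumption: cluster label used by the operator
        topologyKey: kubernetes.io/hostname   # one pod per node for matching pods
```

With `topologyKey: kubernetes.io/hostname`, the scheduler refuses to place two matching pods on the same node, instead of merely preferring to spread them.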

Hi @Sergey_Pronin ,
I’ve tried to apply the change you suggested, but it doesn’t work: the pgReplicas stay on the same node. In fact, in the k8s object related to pgReplica there is no anti-affinity rule.

In my case I’m trying to use an “advanced” configuration inside pgPrimary.affinity.advanced.
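For reference, the kind of configuration I’m testing looks roughly like this. It is a sketch, not a verified working example: the affinity body follows standard Kubernetes pod anti-affinity syntax, and the `pg-cluster` label selector is an assumption that would need adjusting to the labels the operator actually sets.

```yaml
spec:
  pgPrimary:
    affinity:
      antiAffinityType: required
      advanced:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  pg-cluster: cluster1   # assumption: adjust to your cluster's labels
              topologyKey: kubernetes.io/hostname
```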

Thanks in advance