Not able to set up MongoDB operator on on-prem servers

I am trying to set up the MongoDB operator on my on-prem servers.
I created 4 nodes, set up Kubernetes with kubeadm, and joined all the nodes.
When I run kubectl get nodes I see 3 worker nodes and 1 control-plane node in the Ready state.
I then followed the official docs to set up the MongoDB operator:

  • kubectl apply --server-side -f deploy/crd.yaml

  • Create a namespace and switch the context to it

  • kubectl apply -f deploy/rbac.yaml

  • kubectl apply -f deploy/operator.yaml
    Everything works fine up to this point. But after I run kubectl apply -f deploy/cr.yaml and then kubectl get pods, I see only the operator pod and 3 cfg pods running, and the cfg pods keep restarting one by one.
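For reference, the steps above as a single command sequence (the namespace name "mongodb" is just an example; use whatever namespace you created):

```shell
# Sketch of the setup steps described above; "mongodb" is an example namespace.
kubectl apply --server-side -f deploy/crd.yaml
kubectl create namespace mongodb
kubectl config set-context --current --namespace=mongodb
kubectl apply -f deploy/rbac.yaml
kubectl apply -f deploy/operator.yaml
kubectl apply -f deploy/cr.yaml

# Watch the pods; this is where I see the cfg pods restarting one by one.
kubectl get pods -w
```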

When I check the operator logs I see this:
mongoldb-operator-error-log-for-on-prem-vm-setup (operator log, shared on GitHub)

Any help will be appreciated, thanks.

Hello @Avinash_Rai,

A couple of questions:

  1. What are the versions of Kubernetes and the Operator?
  2. Please share your cr.yaml.
  3. Please show the describe output for one of the crashing cfg pods: kubectl describe pod ...
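The information above can be gathered with commands like these (the pod name my-cluster-cfg-0 is a placeholder; substitute the name of a crashing pod from your cluster):

```shell
kubectl version                            # client and server Kubernetes versions
kubectl get pods -n <namespace>            # find the name of a crashing cfg pod
kubectl describe pod my-cluster-cfg-0      # Events section shows restart reasons
kubectl logs my-cluster-cfg-0 --previous   # logs from the last crashed container
```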

Looking at the errors you shared, it seems you might be using an old version of the Operator, in which the PodDisruptionBudget APIs were still set to the beta version.
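A quick way to confirm this is to check which PodDisruptionBudget API versions your cluster still serves. The beta API (policy/v1beta1) was removed in Kubernetes 1.25, so an Operator build that still requests it will fail on newer clusters:

```shell
# List the policy API versions the cluster serves.
# On Kubernetes >= 1.25 only policy/v1 should appear; an old Operator
# that requests policy/v1beta1 PodDisruptionBudgets will fail there.
kubectl api-versions | grep '^policy/'

# Show any PodDisruptionBudgets already created by the Operator.
kubectl get poddisruptionbudgets -A
```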

Hey @Sergey_Pronin, yes, I think I was using an older version of the operator; I tried the Helm installation and it's working fine.
I think the docs are not updated: I was following the "Generic Kubernetes installation - Percona Operator for MongoDB" docs and cloning the v1.14.0 branch as mentioned there, and that was causing the issue.
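For anyone else hitting this, a sketch of the Helm route that worked, assuming Percona's psmdb-operator and psmdb-db charts (release and namespace names below are just examples):

```shell
# Add Percona's Helm chart repository.
helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update

# Install the operator, then a database cluster managed by it.
# "my-op", "my-db", and "mongodb" are example names.
helm install my-op percona/psmdb-operator --namespace mongodb --create-namespace
helm install my-db percona/psmdb-db --namespace mongodb
```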
Thanks for the quick response.