Pods in Pending state - 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod

Hi,
I am trying to set up a MongoDB cluster on a small Kubernetes cluster using the Percona Operator for MongoDB. I have followed the steps in Generic Kubernetes installation - Percona Operator for MongoDB. While the operator is running fine, I am unable to create the replica set.

❯ kubectl get pods
NAME                                               READY   STATUS    RESTARTS      AGE
minimal-cluster-cfg-0                              0/1     Pending   0             5m38s
minimal-cluster-mongos-0                           0/1     Running   1 (58s ago)   5m27s
minimal-cluster-rs0-0                              0/1     Pending   0             5m27s

When I describe the pod, I see:

Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  39s   default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..

I see PVCs in Pending State.
Am I missing something here?
Any help/Pointers will be extremely helpful.
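For reference, the pending claims can be inspected like this (a sketch; the claim name below is a guess based on the pod names above and may differ on your cluster):

```shell
# List the claims - a Pending claim has not been bound to any volume yet
kubectl get pvc

# The Events section at the bottom usually names the missing provisioner
# or storage class that is blocking the bind
kubectl describe pvc mongod-data-minimal-cluster-rs0-0
```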

Best,
J

Hi @athreyavc, which k8s platform are you using? This warning means that your k8s cluster can’t create the PVC.

When a storageClass is set, Kubernetes tries to use **Dynamic Volume Provisioning**, which does not work with a plain local file system.

Please read our blog post about it: Percona Operator for MongoDB with Local Storage and OpenEBS
Or you can try to use hostPath.
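If you try hostPath, the operator’s `volumeSpec` accepts a `hostPath` stanza in place of `persistentVolumeClaim`. A minimal sketch (the path here is an example and must exist on every node; note that hostPath pins each replica’s data to whichever node it lands on):

```yaml
  replsets:
  - name: rs0
    size: 3
    volumeSpec:
      hostPath:
        path: /data/mongodb   # example path; must exist on each node
        type: Directory
```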

Hi @Slava_Sarzhan, Thanks for the reply.

I am using a cluster that is running locally on my laptop:

❯ kubectl get nodes
NAME              STATUS   ROLES           AGE    VERSION
flatcar-worker1   Ready    control-plane   105d   v1.28.0
flatcar-worker2   Ready    <none>          105d   v1.28.0
flatcar-worker5   Ready    <none>          82d    v1.28.0

I am currently checking with the hostPath configuration.

Best, J

I followed this document, Percona Operator for MongoDB with Local Storage and OpenEBS, though I am trying to use kubectl. I suppose it shouldn’t matter whether I use helm or kubectl.

  allowUnsafeConfigurations: true
  upgradeOptions:
    apply: disabled
    schedule: "0 2 * * *"
  secrets:
    users: minimal-cluster
  replsets:
  - name: rs0
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: local-storage
        resources:
          requests:
            storage: 3Gi

  sharding:
    enabled: false

    configsvrReplSet:
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          storageClassName: local-storage
          resources:
            requests:
              storage: 3Gi

As I understand it, we create the storageClass beforehand and pass it on to the configuration. Still no luck.
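For context: a `local-storage` class is typically defined with the `kubernetes.io/no-provisioner` provisioner, which is exactly why nothing is created dynamically — such a class never provisions volumes itself; it only matches pre-created PersistentVolumes. A typical definition (a sketch; check your own class with `kubectl get storageclass local-storage -o yaml`) looks like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer     # bind only once a pod is scheduled
```

If your class looks like this, the Pending PVCs are expected until matching PVs exist.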

Best, J

Can you create a PVC manually? Your cluster should be able to do it when you request one.
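With a `no-provisioner` class, creating storage manually means pre-creating a PersistentVolume per replica; the operator’s PVCs then bind to them. A sketch, assuming the `local-storage` class from the config above — the PV name and path are hypothetical, and the path must exist on the named node:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-0            # hypothetical name; create one PV per replica
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/mongo0   # example path; must exist on the node below
  nodeAffinity:               # local PVs must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - flatcar-worker5
```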

Thank you @Slava_Sarzhan . I think that made it work.

❯ kubectl get pods -o wide
NAME                                               READY   STATUS    RESTARTS      AGE   IP          NODE              NOMINATED NODE   READINESS GATES
minimal-cluster-rs0-0                              1/1     Running   3 (92s ago)   11m   10.36.0.2   flatcar-worker5   <none>           <none>
minimal-cluster-rs0-1                              0/1     Pending   0             10m   <none>      <none>            <none>           <none>

Best,
J

Creating the PVCs beforehand did help make the pods run, but the drawback was that I had to create all the required PVCs before applying the manifest. So I deployed Longhorn on my cluster and pointed the replica set to the longhorn storageClass.

  - name: rs0
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: longhorn
        resources:
          requests:
            storage: 3Gi

That made everything dynamic.
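For anyone landing here later: once Longhorn is installed it registers its own dynamic provisioner (`driver.longhorn.io`), which you can verify — and optionally make the cluster default so `storageClassName` can be omitted from `volumeSpec` — like this (a sketch; run against your own cluster):

```shell
# The longhorn class should appear with provisioner driver.longhorn.io
kubectl get storageclass

# Optionally mark it as the cluster-wide default storage class
kubectl patch storageclass longhorn \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```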

Thank you, @Slava_Sarzhan, for the directions.