Issue with Percona Everest installation on K8s

We tried to install Percona Everest following the link below, but the "Install Percona OLM Catalog" step does not complete.

We are using kubeadm 1.30.

Hi @cmbagga !

Can you please send some logs for the pods and the Kubernetes events? You can try something like:

kubectl get events --namespace everest-olm

and

kubectl describe pod --namespace everest-olm

This should print the status of all pods in that namespace and hopefully show what the issue is.
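
If the events point to a scheduling problem, it may also be worth checking the nodes themselves, since pods stuck in Pending are often blocked by node taints or NotReady nodes (this is a general Kubernetes check, not specific to Everest):

kubectl get nodes -o wide

and

kubectl describe nodes | grep -i -A2 taints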

Kind regards,
Tomislav


ubuntu@master-node:~$ sudo su
root@master-node:/home/ubuntu# kubectl get events --namespace everest-olm
LAST SEEN   TYPE      REASON             OBJECT                                 MESSAGE
25m         Warning   FailedScheduling   pod/catalog-operator-7464967b7-69bjw   0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
25m         Warning   FailedScheduling   pod/olm-operator-6f7b945cbb-zl9mk      0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.


root@master-node:/home/ubuntu# kubectl describe pod --namespace everest-olm
Name:             catalog-operator-7464967b7-69bjw
Namespace:        everest-olm
Priority:         0
Service Account:  olm-operator-serviceaccount
Node:             <none>
Labels:           app=catalog-operator
                  pod-template-hash=7464967b7
Annotations:      <none>
Status:           Pending
SeccompProfile:   RuntimeDefault
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/catalog-operator-7464967b7
Containers:
  catalog-operator:
    Image:      quay.io/operator-framework/olm@sha256:1b6002156f568d722c29138575733591037c24b4bfabc67946f268ce4752c3e6
    Port:       8080/TCP
    Host Port:  0/TCP
    Command:
      /bin/catalog
    Args:
      --namespace
      everest-olm
      --configmapServerImage=quay.io/operator-framework/configmap-operator-registry:latest
      --util-image
      quay.io/operator-framework/olm@sha256:1b6002156f568d722c29138575733591037c24b4bfabc67946f268ce4752c3e6
      --set-workload-user-id=true
    Requests:
      cpu:        10m
      memory:     80Mi
    Liveness:     http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wmbt (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-8wmbt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  26m (x7146 over 24d)  default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.


Name:             olm-operator-6f7b945cbb-zl9mk
Namespace:        everest-olm
Priority:         0
Service Account:  olm-operator-serviceaccount
Node:             <none>
Labels:           app=olm-operator
                  pod-template-hash=6f7b945cbb
Annotations:      <none>
Status:           Pending
SeccompProfile:   RuntimeDefault
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/olm-operator-6f7b945cbb
Containers:
  olm-operator:
    Image:      quay.io/operator-framework/olm@sha256:1b6002156f568d722c29138575733591037c24b4bfabc67946f268ce4752c3e6
    Port:       8080/TCP
    Host Port:  0/TCP
    Command:
      /bin/olm
    Args:
      --namespace
      $(OPERATOR_NAMESPACE)
      --writeStatusName
      
    Requests:
      cpu:      10m
      memory:   160Mi
    Liveness:   http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      OPERATOR_NAMESPACE:  everest-olm (v1:metadata.namespace)
      OPERATOR_NAME:       olm-operator
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h68hz (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-h68hz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  26m (x7146 over 24d)  default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
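
From the events it looks like neither node can currently schedule these pods: one node carries the node-role.kubernetes.io/control-plane taint, and the other is reported unreachable. Would the right next step be to check why the second node is unreachable, e.g.:

kubectl get nodes

and, assuming it is acceptable to run workloads on the control-plane node, to remove the control-plane taint as a workaround? (master-node is taken from the shell prompt above; substitute the actual node name if it differs.)

kubectl taint nodes master-node node-role.kubernetes.io/control-plane-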


Hi Tomislav,

Thanks for the suggestion! I've run those commands and shared the output above to help diagnose the issue.

Best,
@cmbagga