Pod Crashlooping with "failed to sync secret cache: timed out waiting for the condition"

This is the third of my pods; the other two run fine. The crash looping started suddenly.

  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               23m                   default-scheduler        Successfully assigned default/percona-db-psmdb-db-rs0-2 to horas-pool1-569d79c97c-64pfm
  Warning  FailedMount             23m                   kubelet                  MountVolume.SetUp failed for volume "ssl" : failed to sync secret cache: timed out waiting for the condition
  Normal   SuccessfulAttachVolume  23m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-e64ca399-7831-42d4-821f-1910e5a68fa7"
  Normal   Pulling                 23m                   kubelet                  Pulling image "percona/percona-server-mongodb-operator:1.10.0"
  Normal   Started                 23m                   kubelet                  Started container mongo-init
  Normal   Pulled                  23m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb-operator:1.10.0" in 988.673435ms
  Normal   Created                 23m                   kubelet                  Created container mongo-init
  Normal   Pulled                  23m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb:latest" in 1.00282819s
  Normal   Pulling                 23m                   kubelet                  Pulling image "percona/percona-server-mongodb-operator:1.10.0-backup"
  Normal   Pulled                  23m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb-operator:1.10.0-backup" in 1.033277801s
  Normal   Started                 23m                   kubelet                  Started container backup-agent
  Normal   Created                 23m                   kubelet                  Created container backup-agent
  Normal   Pulled                  22m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb:latest" in 1.000566721s
  Normal   Started                 22m (x2 over 23m)     kubelet                  Started container mongod
  Normal   Pulling                 22m (x3 over 23m)     kubelet                  Pulling image "percona/percona-server-mongodb:latest"
  Normal   Created                 22m (x3 over 23m)     kubelet                  Created container mongod
  Normal   Pulled                  22m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb:latest" in 986.208738ms
  Warning  BackOff                 3m9s (x111 over 22m)  kubelet                  Back-off restarting failed container

Any ideas how to troubleshoot? Looking into the logs of mongo-init has not been very insightful.
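
The only concrete lead I can see is the FailedMount warning on the "ssl" secret volume, so my next step was going to be checking that the secret actually exists and is populated, something like this (the secret name is a guess based on the cluster name, I have not verified it):

kubectl get secrets -n default | grep ssl
kubectl describe secret percona-db-psmdb-db-ssl -n default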

Hi,
Could you please provide more info on it?

kubectl get pods 
kubectl logs pod/failed-pod-name -c mongo-init
kubectl logs pod/failed-pod-name -c mongod
kubectl get pv
kubectl get pvc

and you can share the cr.yaml file as well.
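
Also, since the pod is crash-looping, the logs from the previous container attempt are often more useful than the current ones, so you can add --previous:

kubectl logs pod/failed-pod-name -c mongod --previous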

Thanks for your reply, much appreciated! Here is the output (I could have thought of that myself):

percona-db-psmdb-db-rs0-0                         2/2     Running            0          8d
percona-db-psmdb-db-rs0-1                         2/2     Running            0          19d
percona-db-psmdb-db-rs0-2                         1/2     CrashLoopBackOff   301        16h
percona-operator-psmdb-operator-c5c494f8c-bnv7m   1/1     Running            0          19d
percona-operator-psmdb-operator-c5c494f8c-gqkq4   1/1     Running            0          19d
percona-operator-psmdb-operator-c5c494f8c-ws7zw   1/1     Running            0          19d

mongod

mongo-init

++ id -u
++ id -g
+ install -o 99 -g 99 -m 0755 -D /ps-entry.sh /data/db/ps-entry.sh
++ id -u
++ id -g
+ install -o 99 -g 99 -m 0755 -D /mongodb-healthcheck /data/db/mongodb-healthcheck

pv

pvc-9b1ed968-4fa8-4b49-bb24-fe00b4242391   15Gi       RWO            Delete           Bound    default/mongod-data-percona-db-psmdb-db-rs0-0   longhorn                19d
pvc-e64ca399-7831-42d4-821f-1910e5a68fa7   15Gi       RWO            Delete           Bound    default/mongod-data-percona-db-psmdb-db-rs0-2   longhorn                19d
pvc-f1ab110a-cbcc-45b3-9589-af58c1da41a3   15Gi       RWO            Delete           Bound    default/mongod-data-percona-db-psmdb-db-rs0-1   longhorn                19d

pvc

mongod-data-percona-db-psmdb-db-cfg-0   Pending                                                                        percona-volumes   20d
mongod-data-percona-db-psmdb-db-rs0-0   Bound     pvc-9b1ed968-4fa8-4b49-bb24-fe00b4242391   15Gi       RWO            longhorn          19d
mongod-data-percona-db-psmdb-db-rs0-1   Bound     pvc-f1ab110a-cbcc-45b3-9589-af58c1da41a3   15Gi       RWO            longhorn          19d
mongod-data-percona-db-psmdb-db-rs0-2   Bound     pvc-e64ca399-7831-42d4-821f-1910e5a68fa7   15Gi       RWO            longhorn          19d
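
One more thing I noticed in the pvc output: mongod-data-percona-db-psmdb-db-cfg-0 has been Pending on the percona-volumes storage class for 20 days. I don't know whether that is related to this crash loop, but I can run the following and share the output if it helps:

kubectl describe pvc mongod-data-percona-db-psmdb-db-cfg-0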
