Pod Crashlooping with "failed to sync secret cache: timed out waiting for the condition"

This is the third of my pods; the other two run fine. The crashlooping started suddenly.

  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               23m                   default-scheduler        Successfully assigned default/percona-db-psmdb-db-rs0-2 to horas-pool1-569d79c97c-64pfm
  Warning  FailedMount             23m                   kubelet                  MountVolume.SetUp failed for volume "ssl" : failed to sync secret cache: timed out waiting for the condition
  Normal   SuccessfulAttachVolume  23m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-e64ca399-7831-42d4-821f-1910e5a68fa7"
  Normal   Pulling                 23m                   kubelet                  Pulling image "percona/percona-server-mongodb-operator:1.10.0"
  Normal   Started                 23m                   kubelet                  Started container mongo-init
  Normal   Pulled                  23m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb-operator:1.10.0" in 988.673435ms
  Normal   Created                 23m                   kubelet                  Created container mongo-init
  Normal   Pulled                  23m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb:latest" in 1.00282819s
  Normal   Pulling                 23m                   kubelet                  Pulling image "percona/percona-server-mongodb-operator:1.10.0-backup"
  Normal   Pulled                  23m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb-operator:1.10.0-backup" in 1.033277801s
  Normal   Started                 23m                   kubelet                  Started container backup-agent
  Normal   Created                 23m                   kubelet                  Created container backup-agent
  Normal   Pulled                  22m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb:latest" in 1.000566721s
  Normal   Started                 22m (x2 over 23m)     kubelet                  Started container mongod
  Normal   Pulling                 22m (x3 over 23m)     kubelet                  Pulling image "percona/percona-server-mongodb:latest"
  Normal   Created                 22m (x3 over 23m)     kubelet                  Created container mongod
  Normal   Pulled                  22m                   kubelet                  Successfully pulled image "percona/percona-server-mongodb:latest" in 986.208738ms
  Warning  BackOff                 3m9s (x111 over 22m)  kubelet                  Back-off restarting failed container

Any ideas on how to troubleshoot this? Looking into the logs of mongo-init has not been very insightful.
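
For reference, these are roughly the commands I have been using to check the logs (pod name taken from the events above; --previous shows output from the last crashed attempt):

kubectl logs percona-db-psmdb-db-rs0-2 -c mongo-init
kubectl logs percona-db-psmdb-db-rs0-2 -c mongod --previous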


Hi,
Could you please provide more info on it?

kubectl get pods 
kubectl logs pod/failed-pod-name -c mongo-init
kubectl logs pod/failed-pod-name -c mongod
kubectl get pv
kubectl get pvc

You can share the cr.yaml file as well.
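
Since the FailedMount event points at the "ssl" volume, it may also be worth confirming that the TLS secrets the operator expects actually exist in the namespace. A rough check (the exact secret names depend on your cr.yaml):

kubectl get secrets -n default | grep ssl
kubectl get events -n default --sort-by=.lastTimestamp | grep -i secret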


Problem solved. The pod recovered on its own.
