Hi! Having got the operator working in minikube and OpenShift, I moved on to vanilla k8s, but unfortunately I can't get a working DB up yet. I followed the tutorial and everything seems to be created correctly, but the pods start failing after a while:
$ kubectl get pods
NAME                                               READY   STATUS             RESTARTS   AGE
my-cluster-name-rs0-0                              0/1     CrashLoopBackOff   9          34m
my-cluster-name-rs0-1                              1/1     Running            9          34m
my-cluster-name-rs0-2                              1/1     Running            9          34m
percona-server-mongodb-operator-568f85969c-fl8jh   1/1     Running            0          35m
A few minutes later all three rs0 pods are in CrashLoopBackOff:
$ kubectl get pods
NAME                                               READY   STATUS             RESTARTS   AGE
my-cluster-name-rs0-0                              0/1     CrashLoopBackOff   9          37m
my-cluster-name-rs0-1                              0/1     CrashLoopBackOff   9          36m
my-cluster-name-rs0-2                              0/1     CrashLoopBackOff   9          36m
percona-server-mongodb-operator-568f85969c-fl8jh   1/1     Running            0          37m
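For reference, these are roughly the steps I followed from the tutorial (file names as in my checkout of the operator repo, typed from memory, so treat them as approximate):
$ kubectl create namespace psmdb
$ kubectl apply -f deploy/crd.yaml
$ kubectl -n psmdb apply -f deploy/rbac.yaml
$ kubectl -n psmdb apply -f deploy/operator.yaml
$ kubectl -n psmdb apply -f deploy/secrets.yaml
$ kubectl -n psmdb apply -f deploy/cr.yaml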
This is on k8s v1.17 as per:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:51:13Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
I am seeing errors in the operator pod:
$ kubectl logs percona-server-mongodb-operator-568f85969c-fl8jh
{"level":"info","ts":1587043326.966063,"logger":"cmd","msg":"Git commit: 44e3cb883501c2adb1614df762317911d7bb16eb Git branch: master"}
{"level":"info","ts":1587043326.9661248,"logger":"cmd","msg":"Go Version: go1.12.17"}
{"level":"info","ts":1587043326.9661362,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1587043326.966145,"logger":"cmd","msg":"operator-sdk Version: v0.3.0"}
{"level":"info","ts":1587043326.966367,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1587043327.1066792,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1587043327.112258,"logger":"controller_psmdb","msg":"server version","platform":"kubernetes","version":"v1.17.2"}
{"level":"info","ts":1587043327.1129541,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"psmdb-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1587043327.1132038,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"perconaservermongodbbackup-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1587043327.1134188,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"perconaservermongodbbackup-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1587043327.1136668,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"perconaservermongodbrestore-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1587043327.1138349,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"perconaservermongodbrestore-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1587043327.1138775,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1587043327.214392,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"perconaservermongodbrestore-controller"}
{"level":"info","ts":1587043327.2144375,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"perconaservermongodbbackup-controller"}
{"level":"info","ts":1587043327.2143924,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"psmdb-controller"}
{"level":"info","ts":1587043327.3160355,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"perconaservermongodbrestore-controller","worker count":1}
{"level":"info","ts":1587043327.3161073,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"perconaservermongodbbackup-controller","worker count":1}
{"level":"info","ts":1587043327.316146,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"psmdb-controller","worker count":1}
{"level":"info","ts":1587043352.4551826,"logger":"controller_psmdb","msg":"Created a new mongo key","Request.Namespace":"psmdb","Request.Name":"my-cluster-name","KeyName":"my-cluster-name-mongodb-keyfile"}
{"level":"info","ts":1587043352.4619968,"logger":"controller_psmdb","msg":"Created a new mongo key","Request.Namespace":"psmdb","Request.Name":"my-cluster-name","KeyName":"my-cluster-name-mongodb-encryption-key"}
{"level":"error","ts":1587043352.7108507,"logger":"controller_psmdb",
"msg":"failed to reconcile cluster",
"Request.Namespace":"psmdb",
"Request.Name":"my-cluster-name",
"replset":"rs0",
"error":"handleReplsetInit:: no mongod containers in running state",
"errorVerbose":"no mongod containers in running state ...}
{"level":"error","ts":1587043352.8449605,"logger":"kubebuilder.controller",
"msg":"Reconciler error","controller":"psmdb-controller",
"request":"psmdb/my-cluster-name",
"error":"reconcile StatefulSet for rs0: update StatefulSet my-cluster-name-rs0: StatefulSet.apps \"my-cluster-name-rs0\" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden",
...}
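If it helps, I can pull more detail from a failing pod and from the StatefulSet the operator is complaining about, e.g. with the commands below (I'm assuming the mongod container is named "mongod"), and post the output here:
$ kubectl -n psmdb describe pod my-cluster-name-rs0-0
$ kubectl -n psmdb logs my-cluster-name-rs0-0 -c mongod --previous
$ kubectl -n psmdb get statefulset my-cluster-name-rs0 -o yaml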
Connecting to mongo while the pods are up does work, but only when connecting without credentials. Using userAdmin/userAdmin123456 results in "Authentication Denied". The secrets from deploy/secrets.yaml are available in the mongo pods as env vars, so they look like they are being picked up. Could it be that mongodb-healthcheck can't connect because the mongo user credentials aren't actually being set in the database, and that is what's causing the pods to fail?
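For completeness, this is roughly how I'm testing the connection and checking the secret while a pod is still Running (my-cluster-name-secrets is the default secret name from deploy/secrets.yaml, so adjust if yours differs):
$ kubectl -n psmdb exec -it my-cluster-name-rs0-1 -- mongo --quiet
$ kubectl -n psmdb exec -it my-cluster-name-rs0-1 -- mongo -u userAdmin -p userAdmin123456 --authenticationDatabase admin
$ kubectl -n psmdb get secret my-cluster-name-secrets -o yaml
The first command gets me into the mongo shell without credentials, the second is the one that fails with the authentication error, and the third is just to confirm the secret contents match the credentials I'm using.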