Hi,
I am installing Percona Server for MongoDB (psmdb-db Helm chart 1.19.1) with TLS disabled, sharding enabled, and expose set to true; the relevant settings are condensed right after the pod list below.
The installation is failing: no mongos pod is created, and the cfg and rs0 pods keep restarting:
NAME                              READY   STATUS    RESTARTS       AGE
cluster1-psmdb-db-cfg-0           2/2     Running   3 (100s ago)   11m
cluster1-psmdb-db-cfg-1           2/2     Running   3 (58s ago)    10m
cluster1-psmdb-db-cfg-2           2/2     Running   2 (3m3s ago)   9m41s
cluster1-psmdb-db-rs0-0           2/2     Running   3 (97s ago)    11m
cluster1-psmdb-db-rs0-1           2/2     Running   3 (52s ago)    10m
cluster1-psmdb-db-rs0-2           1/2     Running   3 (10s ago)    9m36s
psmdb-operator-54c8799556-6pmrv   1/1     Running   0              50m
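The relevant settings in the CR, condensed from the full describe output at the end of this post (the YAML field names are my reading of the flattened describe output, so treat them as approximate):

# Condensed CR spec -- only the settings mentioned above.
# Field names inferred from the `kubectl describe` output further down.
spec:
  tls:
    mode: disabled
  unsafeFlags:
    tls: true
  sharding:
    enabled: true
    configsvrReplSet:
      size: 3
      expose:
        enabled: true
        type: ClusterIP
    mongos:
      size: 3
      expose:
        type: ClusterIP
  replsets:
    - name: rs0
      size: 3
      expose:
        enabled: true
        type: ClusterIP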
Operator logs are below:
2025-04-17T19:21:12.439Z ERROR Reconciler error {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"cluster1-psmdb-db","namespace":"mongodb-c1"}, "namespace": "mongodb-c1", "name": "cluster1-psmdb-db", "reconcileID": "c4593b77-0096-4c03-9038-24778c62b510", "error": "reconcile statefulsets: handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t6801548ff2ac686d53656a8c\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2\n / MongoServerSelectionError: Server selection timed out after 2000 ms\n\nhandleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t680154a3b61951b195656a8c\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2\n / MongoServerSelectionError: Server selection timed out after 2000 ms\n", "errorVerbose": "handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t6801548ff2ac686d53656a8c\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2\n / MongoServerSelectionError: Server selection timed out after 2000 ms\n\nhandleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID:\t680154a3b61951b195656a8c\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2\n / MongoServerSelectionError: Server selection timed out after 2000 ms\n\nreconcile statefulsets\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:421\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:116\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:303\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:224\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:224
2025-04-17T19:21:23.590Z INFO initiating replset {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"cluster1-psmdb-db","namespace":"mongodb-c1"}, "namespace": "mongodb-c1", "name": "cluster1-psmdb-db", "reconcileID": "b5e27f81-68a5-4f85-866a-055c555b567c", "replset": "cfg", "pod": "cluster1-psmdb-db-cfg-0"}
2025-04-17T19:21:24.046Z ERROR failed to reconcile cluster {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"cluster1-psmdb-db","namespace":"mongodb-c1"}, "namespace": "mongodb-c1", "name": "cluster1-psmdb-db", "reconcileID": "b5e27f81-68a5-4f85-866a-055c555b567c", "replset": "cfg", "error": "handleReplsetInit: exec --version: command terminated with exit code 137 / / ", "errorVerbose": "exec --version: command terminated with exit code 137 / / \nhandleReplsetInit\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileCluster\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:119\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileReplsets\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:549\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:419\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:116\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:303\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:224\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"}
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileReplsets
/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:551
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile
/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:419
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:224
2025-04-17T19:21:34.134Z INFO initiating replset {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"cluster1-psmdb-db","namespace":"mongodb-c1"}, "namespace": "mongodb-c1", "name": "cluster1-psmdb-db", "reconcileID": "b5e27f81-68a5-4f85-866a-055c555b567c", "replset": "rs0", "pod": "cluster1-psmdb-db-rs0-0"}
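From the error, the operator seems to exec mongosh inside the mongod container to run rs.initiate(), and that local connection is what times out. If I understand it correctly, the equivalent manual check is roughly the command below (the container name mongod is my assumption based on the 2/2 pods; the connection string is copied from the log):

# Roughly reproduce the connection the operator's exec attempts inside the cfg pod.
kubectl -n mongodb-c1 exec -it cluster1-psmdb-db-cfg-0 -c mongod -- \
  mongosh "mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000" \
  --eval 'db.hello()'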
cfg server logs:
{"t":{"$date":"2025-04-17T19:27:42.940+00:00"},"s":"I", "c":"SHARDING", "id":22727, "ctx":"ShardRegistryUpdater","msg":"Error running periodic reload of shard registry","attr":{"error":"NotYetInitialized: Config shard has not been set up yet","shardRegistryReloadIntervalSeconds":30}}
{"t":{"$date":"2025-04-17T19:27:42.940+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}}
{"t":{"$date":"2025-04-17T19:27:42.941+00:00"},"s":"W", "c":"SHARDING", "id":7445900, "ctx":"initandlisten","msg":"Started with ShardServer role, but no shardIdentity document was found on disk.","attr":{"namespace":"admin.system.version"}}
{"t":{"$date":"2025-04-17T19:27:42.942+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigStartingUp","oldState":"ConfigPreStart"}}
{"t":{"$date":"2025-04-17T19:27:42.942+00:00"},"s":"I", "c":"REPL", "id":6005300, "ctx":"initandlisten","msg":"Starting up replica set aware services"}
{"t":{"$date":"2025-04-17T19:27:42.948+00:00"},"s":"I", "c":"REPL", "id":4280500, "ctx":"initandlisten","msg":"Attempting to create internal replication collections"}
{"t":{"$date":"2025-04-17T19:27:43.014+00:00"},"s":"W", "c":"REPL", "id":21533, "ctx":"ftdc","msg":"Rollback ID is not initialized yet"}
{"t":{"$date":"2025-04-17T19:27:43.019+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2025-04-17T19:27:43.034+00:00"},"s":"I", "c":"REPL", "id":4280501, "ctx":"initandlisten","msg":"Attempting to load local voted for document"}
{"t":{"$date":"2025-04-17T19:27:43.034+00:00"},"s":"I", "c":"REPL", "id":21311, "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"}
{"t":{"$date":"2025-04-17T19:27:43.034+00:00"},"s":"I", "c":"REPL", "id":4280502, "ctx":"initandlisten","msg":"Searching for local Rollback ID document"}
{"t":{"$date":"2025-04-17T19:27:43.134+00:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}}
{"t":{"$date":"2025-04-17T19:27:43.134+00:00"},"s":"I", "c":"REPL", "id":21313, "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}}
{"t":{"$date":"2025-04-17T19:27:43.135+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigUninitialized","oldState":"ConfigStartingUp"}}
{"t":{"$date":"2025-04-17T19:27:43.135+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2025-04-17T19:27:43.138+00:00"},"s":"I", "c":"QUERY", "id":7080100, "ctx":"ChangeStreamExpiredPreImagesRemover","msg":"Starting Change Stream Expired Pre-images Remover thread"}
{"t":{"$date":"2025-04-17T19:27:43.140+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2025-04-17T19:27:43.140+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2025-04-17T19:27:43.140+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
{"t":{"$date":"2025-04-17T19:27:43.141+00:00"},"s":"I", "c":"CONTROL", "id":8423403, "ctx":"initandlisten","msg":"mongod startup complete","attr":{"Summary of time elapsed":{"Startup from clean shutdown?":true,"Statistics":{"Transport layer setup":"0 ms","Run initial syncer crash recovery":"0 ms","Create storage engine lock file in the data directory":"0 ms","Get metadata describing storage engine":"0 ms","Validate options in metadata against current startup options":"0 ms","Create storage engine":"11996 ms","Write current PID to file":"9 ms","Initialize FCV before rebuilding indexes":"0 ms","Drop abandoned idents and get back indexes that need to be rebuilt or builds that need to be restarted":"0 ms","Rebuild indexes for collections":"0 ms","Build user and roles graph":"0 ms","Set up the background thread pool responsible for waiting for opTimes to be majority committed":"0 ms","Initialize the sharding components for a config server":"3 ms","Initialize information needed to make a mongod instance shard aware":"0 ms","Start up the replication coordinator":"194 ms","Create an oplog view for tenant migrations":"0 ms","Start transport layer":"0 ms","_initAndListen total elapsed time":"12316 ms"}}}}
{"t":{"$date":"2025-04-17T19:27:43.142+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":400}}
{"t":{"$date":"2025-04-17T19:27:43.144+00:00"},"s":"I", "c":"CONTROL", "id":20712, "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"NotYetInitialized: Config shard has not been set up yet"}}
{"t":{"$date":"2025-04-17T19:27:43.145+00:00"},"s":"I", "c":"CONTROL", "id":20710, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Config shard has not been set up yet"}}
{"t":{"$date":"2025-04-17T19:27:43.543+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":600}}
{"t":{"$date":"2025-04-17T19:27:43.641+00:00"},"s":"I", "c":"ACCESS", "id":20248, "ctx":"conn1","msg":"note: no users configured in admin.system.users, allowing localhost access"}
{"t":{"$date":"2025-04-17T19:27:43.642+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"127.0.0.1:56476","client":"conn1","negotiatedCompressors":[],"doc":{"application":{"name":"pbm-agent"},"driver":{"name":"mongo-go-driver","version":"1.16.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.8","env":{"container":{"orchestrator":"kubernetes"}}}}}
{"t":{"$date":"2025-04-17T19:27:43.645+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"127.0.0.1:56480","client":"conn2","negotiatedCompressors":[],"doc":{"application":{"name":"pbm-agent"},"driver":{"name":"mongo-go-driver","version":"1.16.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.8","env":{"container":{"orchestrator":"kubernetes"}}}}}
{"t":{"$date":"2025-04-17T19:27:44.015+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
rs0 logs:
{"t":{"$date":"2025-04-17T19:27:53.920+00:00"},"s":"I", "c":"CONTROL", "id":20710, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to refresh session cache, will try again at the next refresh interval","attr":{"error":"ShardingStateNotInitialized: sharding state is not yet initialized"}}
{"t":{"$date":"2025-04-17T19:27:53.920+00:00"},"s":"I", "c":"CONTROL", "id":20712, "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"ShardingStateNotInitialized: sharding state is not yet initialized"}}
{"t":{"$date":"2025-04-17T19:27:53.921+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2025-04-17T19:27:53.921+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2025-04-17T19:27:53.921+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
{"t":{"$date":"2025-04-17T19:27:53.922+00:00"},"s":"I", "c":"CONTROL", "id":8423403, "ctx":"initandlisten","msg":"mongod startup complete","attr":{"Summary of time elapsed":{"Startup from clean shutdown?":true,"Statistics":{"Transport layer setup":"0 ms","Run initial syncer crash recovery":"0 ms","Create storage engine lock file in the data directory":"0 ms","Get metadata describing storage engine":"0 ms","Validate options in metadata against current startup options":"0 ms","Create storage engine":"20826 ms","Write current PID to file":"12 ms","Initialize FCV before rebuilding indexes":"1 ms","Drop abandoned idents and get back indexes that need to be rebuilt or builds that need to be restarted":"0 ms","Rebuild indexes for collections":"0 ms","Build user and roles graph":"0 ms","Set up the background thread pool responsible for waiting for opTimes to be majority committed":"0 ms","Initialize information needed to make a mongod instance shard aware":"0 ms","Start up cluster time keys manager with a local/direct keys client":"0 ms","Start up the replication coordinator":"113 ms","Create an oplog view for tenant migrations":"0 ms","Start transport layer":"1 ms","_initAndListen total elapsed time":"21049 ms"}}}}
{"t":{"$date":"2025-04-17T19:27:53.968+00:00"},"s":"I", "c":"ACCESS", "id":20248, "ctx":"conn1","msg":"note: no users configured in admin.system.users, allowing localhost access"}
{"t":{"$date":"2025-04-17T19:27:53.968+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"127.0.0.1:34714","client":"conn1","negotiatedCompressors":[],"doc":{"application":{"name":"pbm-agent"},"driver":{"name":"mongo-go-driver","version":"1.16.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.8","env":{"container":{"orchestrator":"kubernetes"}}}}}
{"t":{"$date":"2025-04-17T19:27:53.982+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"127.0.0.1:34728","client":"conn2","negotiatedCompressors":[],"doc":{"application":{"name":"pbm-agent"},"driver":{"name":"mongo-go-driver","version":"1.16.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.8","env":{"container":{"orchestrator":"kubernetes"}}}}}
{"t":{"$date":"2025-04-17T19:27:54.069+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":400}}
{"t":{"$date":"2025-04-17T19:27:54.071+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2025-04-17T19:27:54.470+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":600}}
{"t":{"$date":"2025-04-17T19:27:55.003+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2025-04-17T19:27:55.072+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":800}}
{"t":{"$date":"2025-04-17T19:27:55.874+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":1000}}
{"t":{"$date":"2025-04-17T19:27:56.003+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2025-04-17T19:27:56.879+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":1200}}
{"t":{"$date":"2025-04-17T19:27:57.069+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2025-04-17T19:27:58.003+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2025-04-17T19:27:58.081+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":1400}}
{"t":{"$date":"2025-04-17T19:27:59.003+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2025-04-17T19:27:59.482+00:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":1600}}
{"t":{"$date":"2025-04-17T19:28:00.069+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
PSMDB cluster config:
k describe psmdb cluster1-psmdb-db -n mongodb-c1
Name: cluster1-psmdb-db
Namespace: mongodb-c1
Labels: app.kubernetes.io/instance=cluster1
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=psmdb-db
app.kubernetes.io/version=1.19.1
helm.sh/chart=psmdb-db-1.19.1
Annotations: meta.helm.sh/release-name: cluster1
meta.helm.sh/release-namespace: mongodb-c1
API Version: psmdb.percona.com/v1
Kind: PerconaServerMongoDB
Metadata:
Creation Timestamp: 2025-04-17T19:14:56Z
Finalizers:
percona.com/delete-psmdb-pods-in-order
Generation: 1
Resource Version: 20940
UID: 18a61dbd-2b50-433d-91ee-8c141cb5e2db
Spec:
Backup:
Enabled: true
Image: percona/percona-backup-mongodb:2.8.0-multi
Pitr:
Enabled: false
Cr Version: 1.19.1
Enable Volume Expansion: false
Image: percona/percona-server-mongodb:7.0.15-9-multi
Image Pull Policy: Always
Multi Cluster:
Enabled: false
Pause: false
Pmm:
Enabled: false
Image: percona/pmm-client:2.44.0
Server Host: monitoring-service
Replsets:
Affinity:
Anti Affinity Topology Key: kubernetes.io/hostname
Arbiter:
Affinity:
Anti Affinity Topology Key: kubernetes.io/hostname
Enabled: false
Size: 1
Expose:
Enabled: true
Type: ClusterIP
Name: rs0
Nonvoting:
Affinity:
Anti Affinity Topology Key: kubernetes.io/hostname
Enabled: false
Pod Disruption Budget:
Max Unavailable: 1
Resources:
Limits:
Cpu: 300m
Memory: 0.5G
Requests:
Cpu: 300m
Memory: 0.5G
Size: 3
Volume Spec:
Persistent Volume Claim:
Resources:
Requests:
Storage: 3Gi
Pod Disruption Budget:
Max Unavailable: 1
Resources:
Limits:
Cpu: 300m
Memory: 0.5G
Requests:
Cpu: 300m
Memory: 0.5G
Size: 3
Volume Spec:
Persistent Volume Claim:
Resources:
Requests:
Storage: 3Gi
Secrets:
Users: cluster1-psmdb-db-secrets
Sharding:
Balancer:
Enabled: true
Configsvr Repl Set:
Affinity:
Anti Affinity Topology Key: kubernetes.io/hostname
Expose:
Enabled: true
Type: ClusterIP
Pod Disruption Budget:
Max Unavailable: 1
Resources:
Limits:
Cpu: 300m
Memory: 0.5G
Requests:
Cpu: 300m
Memory: 0.5G
Size: 3
Volume Spec:
Persistent Volume Claim:
Resources:
Requests:
Storage: 3Gi
Enabled: true
Mongos:
Affinity:
Anti Affinity Topology Key: kubernetes.io/hostname
Expose:
Type: ClusterIP
Pod Disruption Budget:
Max Unavailable: 1
Resources:
Limits:
Cpu: 300m
Memory: 0.5G
Requests:
Cpu: 300m
Memory: 0.5G
Size: 3
Tls:
Mode: disabled
Unmanaged: false
Unsafe Flags:
Backup If Unhealthy: false
Mongos Size: false
Replset Size: false
Termination Grace Period: false
Tls: true
Update Strategy: SmartUpdate
Upgrade Options:
Apply: disabled
Schedule: 0 2 * * *
Set FCV: false
Version Service Endpoint: https://check.percona.com
Status:
Conditions:
Last Transition Time: 2025-04-17T19:14:56Z
Status: True
Type: sharding
Last Transition Time: 2025-04-17T19:15:06Z
Status: True
Type: initializing
Last Transition Time: 2025-04-17T19:17:10Z
Message: handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID: 6801539acf69df140f656a8c
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2
/ MongoServerSelectionError: Server selection timed out after 2000 ms
handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID: 680153b2934d5fead8656a8c
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2
/ MongoServerSelectionError: Server selection timed out after 2000 ms
Reason: ErrorReconcile
Status: True
Type: error
Host: cluster1-psmdb-db-mongos.mongodb-c1.svc.cluster.local
Message: Error: handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID: 680156e55ffaadd7d1656a8c
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2
/ MongoServerSelectionError: Server selection timed out after 2000 ms
handleReplsetInit: exec rs.initiate: command terminated with exit code 1 / Current Mongosh Log ID: 680156f8c46f49591f656a8c
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2
/ MongoServerSelectionError: Server selection timed out after 2000 ms
Mongos:
Ready: 0
Size: 0
Status: initializing
Observed Generation: 1
Ready: 2
Replsets:
Cfg:
Ready: 1
Size: 3
Status: initializing
rs0:
Ready: 1
Size: 3
Status: initializing
Size: 6
State: error
Events: <none>
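In case it helps, I can gather more details on the restarts (exit code 137 in the operator log means the exec'd process was SIGKILLed, possibly an OOM kill given the 0.5G limits) and on the missing mongos objects with something like:

# Last termination state of the containers in a restarting cfg pod.
kubectl -n mongodb-c1 get pod cluster1-psmdb-db-cfg-0 \
  -o jsonpath='{.status.containerStatuses[*].lastState.terminated}'

# Check whether any mongos Deployment/StatefulSet or Service was created at all.
kubectl -n mongodb-c1 get deploy,sts,svc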