Getting errors UserNotFound and ReadConcernMajorityNotAvailableYet; liveness probe fails

Description:

I have a multi-node cluster running Amazon Linux 2023 with FSx storage; the psmdb-db storageClass is gp3, provisioned by the ebs-csi-driver. I configured my own user for MongoDB authentication, but I am getting failures such as UserNotFound and ReadConcernMajorityNotAvailableYet, and the liveness probe is failing.

Steps to Reproduce:

Environment: a multi-node (3) cluster of Amazon Linux 2023; the psmdb-db storageClass is gp3.
When installing psmdb-db with a custom user, the pod crashes with UserNotFound and ReadConcernMajorityNotAvailableYet errors.

My Helm values configuration:

psmdb-db:
  backup:
    enabled: false
  enabled: true
  users:
    - name: sisense
      db: admin
      passwordSecretRef:
        name: sisense-mongodb-secrets
        key: mongo-password
      roles:
        - db: admin
          name: root
        - db: admin
          name: userAdminAnyDatabase
        - db: admin
          name: readWriteAnyDatabase
        - db: admin
          name: dbAdminAnyDatabase
        - db: admin
          name: clusterAdmin
  unsafeFlags:
    replsetSize: true
  sharding:
    enabled: false
  replsets:
    rs0:
      podSecurityContext:
        fsGroup: 1000
      containerSecurityContext:
        runAsUser: 1000
        runAsGroup: 1000
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: "node-Application"
                operator: In
                values:
                - "true"
            - matchExpressions:
              - key: "node-Build"
                operator: In
                values:
                - "true"
      volumeSpec:
        pvc:
          resources:
            requests:
              storage: "20Gi"
          storageClassName: gp3

Version:

psmdb-operator 1.19.0
psmdb-db charts 1.19
mongodb image 6.0.20-17

Logs:

{"t":{"$date":"2025-03-20T14:43:20.204+00:00"},"s":"I",  "c":"ACCESS",   "id":20251,   "ctx":"conn64","msg":"Supported SASL mechanisms requested for unknown user","attr":{"user":{"user":"sisense","db":"admin"}}}
{"t":{"$date":"2025-03-20T14:43:20.204+00:00"},"s":"I",  "c":"ACCESS",   "id":20249,   "ctx":"conn64","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"sisense","authenticationDatabase":"admin","remote":"10.42.120.227:39198","extraInfo":{},"error":"UserNotFound: Could not find user \"sisense\" for db \"admin\""}}
{"t":{"$date":"2025-03-20T14:43:20.204+00:00"},"s":"I",  "c":"ACCESS",   "id":20249,   "ctx":"conn64","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"sisense","authenticationDatabase":"admin","remote":"10.42.120.227:39198","extraInfo":{},"error":"UserNotFound: Could not find user \"sisense\" for db \"admin\""}}
{"t":{"$date":"2025-03-20T14:43:20.205+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn62","msg":"Interrupted operation as its client disconnected","attr":{"opId":15236}}
{"t":{"$date":"2025-03-20T14:43:23.853+00:00"},"s":"W",  "c":"NETWORK",  "id":23235,   "ctx":"conn67","msg":"SSL peer certificate validation failed","attr":{"reason":"certificate signature failure"}}
{"t":{"$date":"2025-03-20T14:43:23.853+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn67","msg":"client metadata","attr":{"remote":"127.0.0.1:54138","client":"conn67","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"1.17.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.23.4","env":{"container":{"orchestrator":"kubernetes"}}}}}
{"t":{"$date":"2025-03-20T14:43:23.858+00:00"},"s":"W",  "c":"NETWORK",  "id":23235,   "ctx":"conn68","msg":"SSL peer certificate validation failed","attr":{"reason":"certificate signature failure"}}
{"t":{"$date":"2025-03-20T14:43:23.858+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn68","msg":"client metadata","attr":{"remote":"127.0.0.1:54150","client":"conn68","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"1.17.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.23.4","env":{"container":{"orchestrator":"kubernetes"}}}}}
{"t":{"$date":"2025-03-20T14:43:27.117+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":7600}}
{"t":{"$date":"2025-03-20T14:43:33.829+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn67","msg":"Interrupted operation as its client disconnected","attr":{"opId":15653}}
{"t":{"$date":"2025-03-20T14:43:34.722+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":7800}}
{"t":{"$date":"2025-03-20T14:43:36.230+00:00"},"s":"I",  "c":"NETWORK",  "id":23838,   "ctx":"conn73","msg":"SSL mode is set to 'preferred' and connection to remote is not using SSL.","attr":{"connectionId":73,"remote":"10.42.61.22:42006"}}
{"t":{"$date":"2025-03-20T14:43:36.230+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn73","msg":"client metadata","attr":{"remote":"10.42.61.22:42006","client":"conn73","negotiatedCompressors":[],"doc":{"application":{"name":"configuration"},"driver":{"name":"nodejs","version":"5.9.2"},"platform":"Node.js v22.14.0, LE","os":{"name":"linux","architecture":"x64","version":"6.1.129-138.220.amzn2023.x86_64","type":"Linux"}}}}
{"t":{"$date":"2025-03-20T14:43:42.529+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":8000}}
{"t":{"$date":"2025-03-20T14:43:46.735+00:00"},"s":"I",  "c":"NETWORK",  "id":23838,   "ctx":"conn77","msg":"SSL mode is set to 'preferred' and connection to remote is not using SSL.","attr":{"connectionId":77,"remote":"10.42.61.22:47042"}}
{"t":{"$date":"2025-03-20T14:43:46.735+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn77","msg":"client metadata","attr":{"remote":"10.42.61.22:47042","client":"conn77","negotiatedCompressors":[],"doc":{"application":{"name":"configuration"},"driver":{"name":"nodejs","version":"5.9.2"},"platform":"Node.js v22.14.0, LE","os":{"name":"linux","architecture":"x64","version":"6.1.129-138.220.amzn2023.x86_64","type":"Linux"}}}}
{"t":{"$date":"2025-03-20T14:43:50.201+00:00"},"s":"I",  "c":"NETWORK",  "id":23838,   "ctx":"conn79","msg":"SSL mode is set to 'preferred' and connection to remote is not using SSL.","attr":{"connectionId":79,"remote":"10.42.120.227:57668"}}
{"t":{"$date":"2025-03-20T14:43:50.201+00:00"},"s":"I",  "c":"NETWORK",  "id":23838,   "ctx":"conn80","msg":"SSL mode is set to 'preferred' and connection to remote is not using SSL.","attr":{"connectionId":80,"remote":"10.42.120.227:57670"}}
{"t":{"$date":"2025-03-20T14:43:50.201+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn79","msg":"client metadata","attr":{"remote":"10.42.120.227:57668","client":"conn79","negotiatedCompressors":[],"doc":{"application":{"name":"mongodb_exporter"},"driver":{"name":"mongo-go-driver","version":"v1.12.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.21.3"}}}
{"t":{"$date":"2025-03-20T14:43:50.201+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn80","msg":"client metadata","attr":{"remote":"10.42.120.227:57670","client":"conn80","negotiatedCompressors":[],"doc":{"application":{"name":"mongodb_exporter"},"driver":{"name":"mongo-go-driver","version":"v1.12.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.21.3"}}}
{"t":{"$date":"2025-03-20T14:43:50.203+00:00"},"s":"I",  "c":"NETWORK",  "id":23838,   "ctx":"conn81","msg":"SSL mode is set to 'preferred' and connection to remote is not using SSL.","attr":{"connectionId":81,"remote":"10.42.120.227:57686"}}
{"t":{"$date":"2025-03-20T14:43:50.203+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn81","msg":"client metadata","attr":{"remote":"10.42.120.227:57686","client":"conn81","negotiatedCompressors":[],"doc":{"application":{"name":"mongodb_exporter"},"driver":{"name":"mongo-go-driver","version":"v1.12.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.21.3"}}}
{"t":{"$date":"2025-03-20T14:43:50.204+00:00"},"s":"I",  "c":"ACCESS",   "id":20251,   "ctx":"conn81","msg":"Supported SASL mechanisms requested for unknown user","attr":{"user":{"user":"sisense","db":"admin"}}}
{"t":{"$date":"2025-03-20T14:43:50.204+00:00"},"s":"I",  "c":"ACCESS",   "id":20249,   "ctx":"conn81","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"sisense","authenticationDatabase":"admin","remote":"10.42.120.227:57686","extraInfo":{},"error":"UserNotFound: Could not find user \"sisense\" for db \"admin\""}}
{"t":{"$date":"2025-03-20T14:43:50.204+00:00"},"s":"I",  "c":"ACCESS",   "id":20249,   "ctx":"conn81","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"sisense","authenticationDatabase":"admin","remote":"10.42.120.227:57686","extraInfo":{},"error":"UserNotFound: Could not find user \"sisense\" for db \"admin\""}}
{"t":{"$date":"2025-03-20T14:43:50.205+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn79","msg":"Interrupted operation as its client disconnected","attr":{"opId":18632}}
{"t":{"$date":"2025-03-20T14:43:50.534+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":8200}}
{"t":{"$date":"2025-03-20T14:43:53.250+00:00"},"s":"I",  "c":"NETWORK",  "id":23838,   "ctx":"conn83","msg":"SSL mode is set to 'preferred' and connection to remote is not using SSL.","attr":{"connectionId":83,"remote":"10.42.120.199:51396"}}
{"t":{"$date":"2025-03-20T14:43:53.250+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn83","msg":"client metadata","attr":{"remote":"10.42.120.199:51396","client":"conn83","negotiatedCompressors":[],"doc":{"application":{"name":"configuration"},"driver":{"name":"nodejs","version":"5.9.2"},"platform":"Node.js v22.14.0, LE","os":{"name":"linux","architecture":"x64","version":"6.1.129-138.220.amzn2023.x86_64","type":"Linux"}}}}
{"t":{"$date":"2025-03-20T14:43:53.882+00:00"},"s":"W",  "c":"NETWORK",  "id":23235,   "ctx":"conn85","msg":"SSL peer certificate validation failed","attr":{"reason":"certificate signature failure"}}
{"t":{"$date":"2025-03-20T14:43:53.883+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn85","msg":"client metadata","attr":{"remote":"127.0.0.1:60266","client":"conn85","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"1.17.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.23.4","env":{"container":{"orchestrator":"kubernetes"}}}}}
{"t":{"$date":"2025-03-20T14:43:53.887+00:00"},"s":"W",  "c":"NETWORK",  "id":23235,   "ctx":"conn86","msg":"SSL peer certificate validation failed","attr":{"reason":"certificate signature failure"}}
{"t":{"$date":"2025-03-20T14:43:53.888+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn86","msg":"client metadata","attr":{"remote":"127.0.0.1:60268","client":"conn86","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"1.17.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.23.4","env":{"container":{"orchestrator":"kubernetes"}}}}}
{"t":{"$date":"2025-03-20T14:43:58.739+00:00"},"s":"I",  "c":"-",        "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":8400}}
{"t":{"$date":"2025-03-20T14:44:03.756+00:00"},"s":"I",  "c":"NETWORK",  "id":23838,   "ctx":"conn90","msg":"SSL mode is set to 'preferred' and connection to remote is not using SSL.","attr":{"connectionId":90,"remote":"10.42.120.199:46188"}}
{"t":{"$date":"2025-03-20T14:44:03.756+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn90","msg":"client metadata","attr":{"remote":"10.42.120.199:46188","client":"conn90","negotiatedCompressors":[],"doc":{"application":{"name":"configuration"},"driver":{"name":"nodejs","version":"5.9.2"},"platform":"Node.js v22.14.0, LE","os":{"name":"linux","architecture":"x64","version":"6.1.129-138.220.amzn2023.x86_64","type":"Linux"}}}}
{"t":{"$date":"2025-03-20T14:44:03.860+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn85","msg":"Interrupted operation as its client disconnected","attr":{"opId":19057}}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"CONTROL",  "id":23377,   "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"CONTROL",  "id":23378,   "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":0,"uid":0}}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"CONTROL",  "id":23381,   "ctx":"SignalHandler","msg":"will terminate after current cmd ends"}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"REPL",     "id":4794602, "ctx":"SignalHandler","msg":"Attempting to enter quiesce mode"}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"-",        "id":6371601, "ctx":"SignalHandler","msg":"Shutting down the FLE Crud thread pool"}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"FLECrudNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"REPL",     "id":40441,   "ctx":"SignalHandler","msg":"Stopping TopologyVersionObserver"}
{"t":{"$date":"2025-03-20T14:44:03.879+00:00"},"s":"I",  "c":"REPL",     "id":40447,   "ctx":"TopologyVersionObserver","msg":"Stopped TopologyVersionObserver"}
{"t":{"$date":"2025-03-20T14:44:03.880+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"MirrorMaestro","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-03-20T14:44:03.880+00:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2025-03-20T14:44:03.880+00:00"},"s":"I",  "c":"CONTROL",  "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"}
{"t":{"$date":"2025-03-20T14:44:03.880+00:00"},"s":"I",  "c":"NETWORK",  "id":20562,   "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2025-03-20T14:44:03.880+00:00"},"s":"I",  "c":"NETWORK",  "id":23017,   "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"CONTROL",  "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"REPL",     "id":4784907, "ctx":"SignalHandler","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"ReplNodeDbWorkerNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"CONTROL",  "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"REPL",     "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"REPL",     "id":5074000, "ctx":"SignalHandler","msg":"Shutting down the replica set aware services."}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"REPL",     "id":5123006, "ctx":"SignalHandler","msg":"Shutting down PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","numInstances":0,"numOperationContexts":0}}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"TenantMigrationRecipientServiceNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"REPL",     "id":5123006, "ctx":"SignalHandler","msg":"Shutting down PrimaryOnlyService","attr":{"service":"ShardSplitDonorService","numInstances":0,"numOperationContexts":0}}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"ShardSplitDonorServiceNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-03-20T14:44:03.881+00:00"},"s":"I",  "c":"REPL",     "id":5123006, "ctx":"SignalHandler","msg":"Shutting down PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","numInstances":0,"numOperationContexts":0}}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"TenantMigrationDonorServiceNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"REPL",     "id":21328,   "ctx":"SignalHandler","msg":"Shutting down replication subsystems"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"ReplNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"REPL",     "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"-",        "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"-",        "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":6}}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"TENANT_M", "id":5093807, "ctx":"SignalHandler","msg":"Shutting down all TenantMigrationAccessBlockers on global shutdown"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"COMMAND",  "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"REPL",     "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"INDEX",    "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2025-03-20T14:44:03.882+00:00"},"s":"I",  "c":"REPL",     "id":4784920, "ctx":"SignalHandler","msg":"Shutting down the LogicalTimeValidator"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"COMMAND",  "id":4784923, "ctx":"SignalHandler","msg":"Shutting down the ServiceEntryPoint"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"CONTROL",  "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"CONTROL",  "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"INDEX",    "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"INDEX",    "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"CONTROL",  "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"QUERY",    "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"QUERY",    "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"CONTROL",  "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"CONTROL",  "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
{"t":{"$date":"2025-03-20T14:44:03.883+00:00"},"s":"I",  "c":"STORAGE",  "id":22320,   "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
{"t":{"$date":"2025-03-20T14:44:03.884+00:00"},"s":"I",  "c":"STORAGE",  "id":22321,   "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
{"t":{"$date":"2025-03-20T14:44:03.884+00:00"},"s":"I",  "c":"STORAGE",  "id":22322,   "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
{"t":{"$date":"2025-03-20T14:44:03.884+00:00"},"s":"I",  "c":"STORAGE",  "id":22323,   "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
{"t":{"$date":"2025-03-20T14:44:03.884+00:00"},"s":"I",  "c":"STORAGE",  "id":22261,   "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
{"t":{"$date":"2025-03-20T14:44:03.884+00:00"},"s":"I",  "c":"STORAGE",  "id":20282,   "ctx":"SignalHandler","msg":"Deregistering all the collections"}
{"t":{"$date":"2025-03-20T14:44:03.884+00:00"},"s":"I",  "c":"STORAGE",  "id":22317,   "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
{"t":{"$date":"2025-03-20T14:44:03.884+00:00"},"s":"I",  "c":"STORAGE",  "id":22318,   "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
{"t":{"$date":"2025-03-20T14:44:03.884+00:00"},"s":"I",  "c":"STORAGE",  "id":22319,   "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
{"t":{"$date":"2025-03-20T14:44:03.885+00:00"},"s":"I",  "c":"STORAGE",  "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
{"t":{"$date":"2025-03-20T14:44:03.885+00:00"},"s":"I",  "c":"WTCHKPT",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1742481843,"ts_usec":885873,"thread":"1:0x7ffb48a70640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 8, snapshot max: 8 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 234"}}}
{"t":{"$date":"2025-03-20T14:44:03.894+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1742481843,"ts_usec":894469,"thread":"1:0x7ffb48a70640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 8 milliseconds"}}}
{"t":{"$date":"2025-03-20T14:44:03.894+00:00"},"s":"I",  "c":"WTRECOV",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1742481843,"ts_usec":894571,"thread":"1:0x7ffb48a70640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG","verbose_level_id":1,"msg":"shutdown was completed successfully and took 9ms, including 0ms for the rollback to stable, and 8ms for the checkpoint."}}}
{"t":{"$date":"2025-03-20T14:44:03.909+00:00"},"s":"I",  "c":"STORAGE",  "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":24}}
{"t":{"$date":"2025-03-20T14:44:03.935+00:00"},"s":"I",  "c":"STORAGE",  "id":22279,   "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
{"t":{"$date":"2025-03-20T14:44:03.935+00:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2025-03-20T14:44:03.935+00:00"},"s":"I",  "c":"FTDC",     "id":20626,   "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
{"t":{"$date":"2025-03-20T14:44:03.940+00:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"SignalHandler","msg":"Now exiting"}
{"t":{"$date":"2025-03-20T14:44:03.940+00:00"},"s":"I",  "c":"CONTROL",  "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":{"Summary of time elapsed":{"Statistics":{"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"0 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"1 ms","Shut down WaitForMajorityService":"0 ms","Shut down the logical session cache":"0 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"1 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"0 ms","Shut down the replica set aware services":"1 ms","Shut down replication":"0 ms","Shut down replication":"0 ms","Shut down external state":"0 ms","Shut down external state":"0 ms","Shut down replication executor":"0 ms","Shut down replication executor":"0 ms","Join replication executor":"0 ms","Join replication executor":"0 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images remover":"0 ms","Shut down the storage engine":"52 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"61 ms"}}}}
{"t":{"$date":"2025-03-20T14:44:03.940+00:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}

Operator logs:

2025-03-20T13:53:02.345Z	INFO	server version	{"platform": "kubernetes", "version": "v1.32.1+rke2r1"}
2025-03-20T13:53:02.349Z	INFO	controller-runtime.metrics	Starting metrics server
2025-03-20T13:53:02.349Z	INFO	starting server	{"name": "health probe", "addr": "[::]:8081"}
2025-03-20T13:53:02.349Z	INFO	controller-runtime.metrics	Serving metrics server	{"bindAddress": ":8080", "secure": false}
I0320 13:53:02.349524       1 leaderelection.go:254] attempting to acquire leader lease sisense/08db0feb.percona.com...
I0320 13:53:02.355067       1 leaderelection.go:268] successfully acquired lease sisense/08db0feb.percona.com
2025-03-20T13:53:02.355Z	INFO	Starting EventSource	{"controller": "psmdb-controller", "source": "kind source: *v1.PerconaServerMongoDB"}
2025-03-20T13:53:02.355Z	INFO	Starting Controller	{"controller": "psmdb-controller"}
2025-03-20T13:53:02.355Z	INFO	Starting EventSource	{"controller": "psmdbrestore-controller", "source": "kind source: *v1.PerconaServerMongoDBRestore"}
2025-03-20T13:53:02.355Z	INFO	Starting EventSource	{"controller": "psmdbrestore-controller", "source": "kind source: *v1.Pod"}
2025-03-20T13:53:02.355Z	INFO	Starting Controller	{"controller": "psmdbrestore-controller"}
2025-03-20T13:53:02.355Z	INFO	Starting EventSource	{"controller": "psmdbbackup-controller", "source": "kind source: *v1.PerconaServerMongoDBBackup"}
2025-03-20T13:53:02.355Z	INFO	Starting EventSource	{"controller": "psmdbbackup-controller", "source": "kind source: *v1.Pod"}
2025-03-20T13:53:02.355Z	INFO	Starting Controller	{"controller": "psmdbbackup-controller"}
2025-03-20T13:53:02.556Z	INFO	Starting workers	{"controller": "psmdb-controller", "worker count": 1}
2025-03-20T13:53:02.560Z	INFO	Starting workers	{"controller": "psmdbrestore-controller", "worker count": 1}
2025-03-20T13:53:02.560Z	INFO	Starting workers	{"controller": "psmdbbackup-controller", "worker count": 1}
2025-03-20T13:53:24.977Z	INFO	Created a new mongo key	{"controller": "psmdb-controller", "object": {"name":"sisense-psmdb-db","namespace":"sisense"}, "namespace": "sisense", "name": "sisense-psmdb-db", "reconcileID": "73525246-72a3-48c6-8faa-605a24548e15", "KeyName": "sisense-psmdb-db-mongodb-keyfile"}
2025-03-20T13:53:24.983Z	INFO	Created a new mongo key	{"controller": "psmdb-controller", "object": {"name":"sisense-psmdb-db","namespace":"sisense"}, "namespace": "sisense", "name": "sisense-psmdb-db", "reconcileID": "73525246-72a3-48c6-8faa-605a24548e15", "KeyName": "sisense-psmdb-db-mongodb-encryption-key"}
2025-03-20T13:53:25.084Z	INFO	Waiting for the pods	{"controller": "psmdb-controller", "object": {"name":"sisense-psmdb-db","namespace":"sisense"}, "namespace": "sisense", "name": "sisense-psmdb-db", "reconcileID": "73525246-72a3-48c6-8faa-605a24548e15", "replset": "rs0", "size": 3, "pods": 1}
2025-03-20T13:53:25.258Z	INFO	add new job	{"controller": "psmdb-controller", "object": {"name":"sisense-psmdb-db","namespace":"sisense"}, "namespace": "sisense", "name": "sisense-psmdb-db", "reconcileID": "73525246-72a3-48c6-8faa-605a24548e15", "name": "ensure-version/sisense/sisense-psmdb-db", "schedule": "0 2 * * *"}

Expected Result:

The cluster should come up healthy. Other deployments and clusters I have (EKS, an Ubuntu cluster, and so on) work fine with the same configuration.

Additional Information:

I really don't understand how to debug this issue. The same configuration works for me on other deployments, yet here I cannot log into MongoDB, and the user appears to be missing from the admin db.
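This is how I have been trying to verify whether the operator actually created the user — a rough sketch; the secret name `sisense-psmdb-db-secrets`, its key names, and the pod name `sisense-psmdb-db-rs0-0` are assumptions based on my values above and may differ:

```shell
# Diagnostic sketch (names are assumptions from the values above, not confirmed):

# 1. Confirm the operator-managed system-users secret exists:
kubectl -n sisense get secret sisense-psmdb-db-secrets

# 2. Pull the operator-created user-admin credentials out of that secret:
ADMIN_USER=$(kubectl -n sisense get secret sisense-psmdb-db-secrets \
  -o jsonpath='{.data.MONGODB_USER_ADMIN_USER}' | base64 -d)
ADMIN_PASS=$(kubectl -n sisense get secret sisense-psmdb-db-secrets \
  -o jsonpath='{.data.MONGODB_USER_ADMIN_PASSWORD}' | base64 -d)

# 3. List users on the admin db from inside a replica set pod,
#    to see whether "sisense" was ever created:
kubectl -n sisense exec -it sisense-psmdb-db-rs0-0 -- mongosh \
  -u "$ADMIN_USER" -p "$ADMIN_PASS" --authenticationDatabase admin \
  --eval 'db.getSiblingDB("admin").getUsers()'
```

These commands require a live cluster, so treat them as a checklist rather than a script.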