Basic cluster with TLS not working, operator k8s/helm setup

Hello!

We are evaluating this operator and server for potential production use, but we have not been able to get TLS working.
We are deploying on Kubernetes, using Helm for the operator, and I have tried both Helm and the CRD for the cluster.

We have cert-manager set up, and it creates the certificates. The pods also appear to pick up and use the certs, but the connections are failing.
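As a first check, the SANs in the operator-generated cert secrets can be decoded and compared against the hostnames the members actually dial. This is a sketch assuming the operator's default `<cluster>-ssl` / `<cluster>-ssl-internal` secret naming; adjust the names and namespace to your deployment:

```shell
# Decode the server and internal certs from the operator-created secrets
# and print their SAN lists (secret names and namespace are examples).
for s in minimal-ssl minimal-ssl-internal; do
  echo "== $s =="
  kubectl -n percona-operator get secret "$s" -o jsonpath='{.data.tls\.crt}' \
    | base64 -d \
    | openssl x509 -noout -text \
    | grep -A1 'Subject Alternative Name'
done
```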

This is happening with the psmdb-db Helm chart with default values.

And this is what we have been testing lately with the CRD, which I would prefer to use when interacting with an operator.

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: minimal
  namespace: percona-operator
spec:
  crVersion: 1.13.0
  image: percona/percona-server-mongodb:latest
  replsets:
  - affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    name: rs0
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: minimal
  sharding:
    enabled: false
  upgradeOptions:
    apply: disabled
    schedule: 0 2 * * *

Tried the following. It picks up the config, but it does not seem to make a difference.

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: minimal
  namespace: percona-operator
spec:
  crVersion: 1.13.0
  image: percona/percona-server-mongodb:latest
  replsets:
  - affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    configuration: |
      net:
        tls:
          allowInvalidHostnames: true
    name: rs0
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: minimal
  sharding:
    enabled: false
  upgradeOptions:
    apply: disabled
    schedule: 0 2 * * *
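One way to confirm the extra configuration actually reached mongod, rather than inferring it from behavior, is to read back the parsed startup options. A sketch with placeholder credentials (take the real values from the `minimal` users secret):

```shell
# Print the net.tls options mongod was actually started with; options supplied
# via the CR's `configuration` field show up under getCmdLineOpts().parsed.
kubectl -n percona-operator exec minimal-rs0-0 -c mongod -- \
  mongo --quiet -u clusterAdmin -p "$CLUSTER_ADMIN_PASSWORD" \
  --eval 'JSON.stringify(db.adminCommand({getCmdLineOpts: 1}).parsed.net.tls)'
```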

Excerpt of logs from the rs0 pod:


{"t":{"$date":"2022-10-04T17:23:44.584+00:00"},"s":"W", "c":"CONTROL", "id":22124, "ctx":"initandlisten","msg":"While invalid X509 certificates may be used to connect to this server, they will not be considered permissible for authentication","tags":["startupWarnings"]}
{"t":{"$date":"2022-10-04T17:24:10.799+00:00"},"s":"W", "c":"NETWORK", "id":23236, "ctx":"conn6","msg":"Client connecting with server's own TLS certificate"}
{"t":{"$date":"2022-10-04T17:24:10.800+00:00"},"s":"W", "c":"ACCESS", "id":20430, "ctx":"conn6","msg":"Client isn't a mongod or mongos, but is connecting with a certificate with cluster membership"}
{"t":{"$date":"2022-10-04T17:24:10.827+00:00"},"s":"W", "c":"NETWORK", "id":23236, "ctx":"conn7","msg":"Client connecting with server's own TLS certificate"}
{"t":{"$date":"2022-10-04T17:24:27.368+00:00"},"s":"W", "c":"NETWORK", "id":23235, "ctx":"conn15","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
{"t":{"$date":"2022-10-04T17:24:27.370+00:00"},"s":"W", "c":"NETWORK", "id":23235, "ctx":"conn16","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
{"t":{"$date":"2022-10-04T17:24:27.372+00:00"},"s":"W", "c":"NETWORK", "id":23236, "ctx":"conn17","msg":"Client connecting with server's own TLS certificate"}
{"t":{"$date":"2022-10-04T17:24:27.374+00:00"},"s":"W", "c":"ACCESS", "id":20430, "ctx":"conn17","msg":"Client isn't a mongod or mongos, but is connecting with a certificate with cluster membership"}
{"t":{"$date":"2022-10-04T17:24:27.406+00:00"},"s":"W", "c":"NETWORK", "id":23236, "ctx":"conn18","msg":"Client connecting with server's own TLS certificate"}
{"t":{"$date":"2022-10-04T17:24:27.440+00:00"},"s":"W", "c":"NETWORK", "id":23235, "ctx":"conn19","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
{"t":{"$date":"2022-10-04T17:24:27.445+00:00"},"s":"W", "c":"NETWORK", "id":23235, "ctx":"conn20","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
{"t":{"$date":"2022-10-04T17:24:27.481+00:00"},"s":"W", "c":"NETWORK", "id":23235, "ctx":"conn22","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
{"t":{"$date":"2022-10-04T17:24:27.482+00:00"},"s":"W", "c":"NETWORK", "id":23235, "ctx":"conn21","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
{"t":{"$date":"2022-10-04T17:24:32.448+00:00"},"s":"W", "c":"NETWORK", "id":23235, "ctx":"conn27","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
{"t":{"$date":"2022-10-04T17:24:32.448+00:00"},"s":"W", "c":"NETWORK", "id":23235, "ctx":"conn26","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}

Redid everything in the default namespace; it did not change the results.


Switching to cluster-wide mode and adding the additional RBAC also did not change the results.


Same problem when deploying cr.yaml from the main branch of percona/percona-server-mongodb-operator on GitHub with crVersion: 1.13.0

:confused:


OK, I have some idea of what is happening.

Everything is using pod IPs, and the pod IPs are not in the TLS cert dnsNames.
When I set up externalNodes and add them to the cert, it no longer complains. It then only complains when the operator pod tries to connect.

I am searching the docs and config options but not finding what I need. Is there some way to switch from IPs to DNS names for pod communication?
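To see exactly which host strings the members advertise to each other (and hence what the certificates must cover), the replica set config can be dumped. A sketch with placeholder credentials from the users secret:

```shell
# List the host field of each replset member: DNS names vs raw pod IPs.
kubectl -n percona-operator exec minimal-rs0-0 -c mongod -- \
  mongo --quiet -u clusterAdmin -p "$CLUSTER_ADMIN_PASSWORD" \
  --eval 'rs.conf().members.forEach(m => print(m.host))'
```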


Experimenting with a LoadBalancer and external names does not help,

and setting

  clusterServiceDNSSuffix: svc.cluster.local
  clusterServiceDNSMode: "Internal"

seems to do nothing either.


Hi @jonathon,

The operator should use DNS names instead of IPs if you don't expose your replset pods. From what I see in your files, you didn't expose them, did you?
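For comparison, an unexposed replset in the CR would look like the fragment below (a sketch assuming the CRD's `expose` field, which defaults to disabled when omitted):

```
spec:
  replsets:
  - name: rs0
    expose:
      enabled: false   # members stay unexposed, so peers should use headless-service DNS names
```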

The operator started using TLS certs everywhere starting from version 1.13. We had another bug report for exposed replsets. You can also create a ticket for your issue.


I’m having the same problem as jonathon right now and would like to know if there’s a fix. We write the data from within the cluster, but would like to access it from outside the cluster.

Adding tlsAllowInvalidHostnames to the config also doesn’t change anything (the operator throws the error “The server certificate does not match the remote host name”). The pods still can’t contact each other and fail their health checks.


Folks, apparently there’s a bug. I just confirmed it without using the Helm chart too. I created the ticket K8SPSMDB-796 for the fix; you can track the status there.


The fix is merged to the main branch. If you deploy from the main branch and see whether it works for you, it’d speed things up significantly. Please do it on your test clusters.


I am still seeing the exact same behavior.
Installed with kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/deploy/bundle.yaml

Then deployed this cluster:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: minimal
  namespace: default
spec:
  image: percona/percona-server-mongodb:latest
  replsets:
  - affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    name: rs0
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: minimal
  sharding:
    enabled: false
  upgradeOptions:
    apply: disabled
    schedule: 0 2 * * *

This is the cert it creates:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: minimal-ssl
  namespace: default
  ownerReferences:
  - apiVersion: psmdb.percona.com/v1
    controller: true
    kind: PerconaServerMongoDB
    name: minimal
    uid: 6271257d-6ece-4da5-8d51-54ecdebc6ad1
  resourceVersion: "18897670"
  uid: 32945f74-5399-4c6a-9375-a23dcf547986
spec:
  commonName: minimal
  dnsNames:
  - localhost
  - minimal-rs0
  - minimal-rs0.default
  - minimal-rs0.default.svc.cluster.local
  - '*.minimal-rs0'
  - '*.minimal-rs0.default'
  - '*.minimal-rs0.default.svc.cluster.local'
  - minimal-rs0.default.svc.clusterset.local
  - '*.minimal-rs0.default.svc.clusterset.local'
  - '*.default.svc.clusterset.local'
  - minimal-mongos
  - minimal-mongos.default
  - minimal-mongos.default.svc.cluster.local
  - '*.minimal-mongos'
  - '*.minimal-mongos.default'
  - '*.minimal-mongos.default.svc.cluster.local'
  - minimal-cfg
  - minimal-cfg.default
  - minimal-cfg.default.svc.cluster.local
  - '*.minimal-cfg'
  - '*.minimal-cfg.default'
  - '*.minimal-cfg.default.svc.cluster.local'
  - minimal-mongos.default.svc.clusterset.local
  - '*.minimal-mongos.default.svc.clusterset.local'
  - minimal-cfg.default.svc.clusterset.local
  - '*.minimal-cfg.default.svc.clusterset.local'
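The failure pattern is consistent with that SAN list: it contains only DNS entries and no IP Address entries, so a peer that dials a raw pod IP can never pass hostname verification. As a self-contained illustration of how such SANs are encoded and inspected (throwaway key/cert, example names):

```shell
# Create a throwaway self-signed cert with DNS-only SANs, then inspect it.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/psmdb-demo.key -out /tmp/psmdb-demo.crt \
  -subj "/CN=minimal/O=PSMDB" \
  -addext "subjectAltName=DNS:minimal-rs0,DNS:*.minimal-rs0.default.svc.cluster.local"

# Shows only DNS: entries -- a client connecting to an IP would need an IP: entry.
openssl x509 -in /tmp/psmdb-demo.crt -noout -text | grep -A1 'Subject Alternative Name'
```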

Not able to upload a file, so here is a partial startup log from one of the containers:

+ exec mongod --bind_ip_all --auth --dbpath=/data/db --port=27017 --replSet=rs0 --storageEngine=wiredTiger --relaxPermChecks --clusterAuthMode=x509 --enableEncryption --encryptionKeyFile=/etc/mongodb-encryption/encryption-key --wiredTigerIndexPrefixCompression=true --tlsMode preferTLS --tlsCertificateKeyFile /tmp/tls.pem --tlsAllowInvalidCertificates --tlsClusterFile /tmp/tls-internal.pem --tlsCAFile /etc/mongodb-ssl/ca.crt --tlsClusterCAFile /etc/mongodb-ssl-internal/ca.crt
"ctx":"-","msg":"Certificate information","attr":{"subject":"CN=minimal,O=PSMDB","issuer":"CN=minimal,O=PSMDB","thumbprint":"D56C979416F41887DF50864C493C5161059436C1","notValidBefore":{"$date":"2022-10-06T14:30:45.000Z"},"notValidAfter":{"$date":"2023-01-04T14:30:45.000Z"},"keyFile":"/tmp/tls.pem","type":"Server"}}
"ctx":"-","msg":"Certificate information","attr":{"subject":"CN=minimal,O=PSMDB","issuer":"CN=minimal,O=PSMDB","thumbprint":"404FF05C34044323A5FC9FBAD12FB045ABEDEFB9","notValidBefore":{"$date":"2022-10-06T14:30:45.000Z"},"notValidAfter":{"$date":"2023-01-04T14:30:45.000Z"},"keyFile":"/tmp/tls-internal.pem","type":"Cluster"}}
"ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
"ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
"ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
"ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
"ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
"ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
"ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}}
"ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}}
"ctx":"main","msg":"Multi threading initialized"}
"ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"minimal-rs0-0"}}
"ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.11-10","gitVersion":"c054e0c94e2a96ad8d03e79a425fb90219fd02b3","openSSLVersion":"OpenSSL 1.1.1k  FIPS 25 Mar 2021","modules":[],"allocator":"tcmalloc","environment":{"distarch":"x86_64","target_arch":"x86_64"}}}}
"ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Oracle Linux Server release 8.6","version":"Kernel 4.18.0-372.19.1.el8_6.x86_64"}}}
"ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*","port":27017,"tls":{"CAFile":"/etc/mongodb-ssl/ca.crt","allowInvalidCertificates":true,"certificateKeyFile":"/tmp/tls.pem","clusterCAFile":"/etc/mongodb-ssl-internal/ca.crt","clusterFile":"/tmp/tls-internal.pem","mode":"preferTLS"}},"replication":{"replSet":"rs0"},"security":{"authorization":"enabled","clusterAuthMode":"x509","enableEncryption":true,"encryptionKeyFile":"/etc/mongodb-encryption/encryption-key","relaxPermChecks":true},"storage":{"dbPath":"/data/db","engine":"wiredTiger","wiredTiger":{"indexConfig":{"prefixCompression":true}}}}}}
"ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
"ctx":"initandlisten","msg":"Initializing KeyDB with wiredtiger_open config: {cfg}","attr":{"cfg":"create,config_base=false,extensions=[local=(entry=percona_encryption_extension_init,early_load=true,config=(cipher=AES256-CBC,rotation=false))],encryption=(name=percona,keyid=\"\"),log=(enabled,file_max=5MB),transaction_sync=(enabled=true,method=fsync),"}}
"ctx":"initandlisten","msg":"Encryption keys DB is initialized successfully"}
"ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=15455M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],encryption=(name=percona,keyid=\"/default\"),extensions=[local=(entry=percona_encryption_extension_init,early_load=true,config=(cipher=AES256-CBC)),],"}}
"ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1665066658:344179][1:0x7f9dc81fab80], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)"}}
"ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1665066658:344270][1:0x7f9dc81fab80], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 0)"}}
"ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":38}}
"ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
"ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}}
"ctx":"initandlisten","msg":"Timestamp monitor starting"}
"ctx":"initandlisten","msg":"While invalid X509 certificates may be used to connect to this server, they will not be considered permissible for authentication","tags":["startupWarnings"]}
"ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'","tags":["startupWarnings"]}
"ctx":"initandlisten","msg":"Clearing temp directory"}
"ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
"ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
"ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.startup_log","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"4a3339df-4393-4723-81f6-4bcb46b60e86"}},"options":{"capped":true,"size":10485760}}}
"ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.startup_log","index":"_id_","commitTimestamp":null}}
"ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigStartingUp","oldState":"ConfigPreStart"}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}}
"ctx":"initandlisten","msg":"Attempting to create internal replication collections"}
"ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.replset.oplogTruncateAfterPoint","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"cf171067-a776-4c31-9c46-f6cc86c2df65"}},"options":{}}}
"ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.replset.oplogTruncateAfterPoint","index":"_id_","commitTimestamp":null}}
"ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.replset.minvalid","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"f9fdc730-b62d-4545-b530-19e8752ab6f7"}},"options":{}}}
"ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.replset.minvalid","index":"_id_","commitTimestamp":null}}
"ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.replset.election","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"6842dca4-bad0-46df-a2c5-61989d7466ae"}},"options":{}}}
"ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.replset.election","index":"_id_","commitTimestamp":null}}
"ctx":"initandlisten","msg":"Attempting to load local voted for document"}
"ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"}
"ctx":"initandlisten","msg":"Searching for local Rollback ID document"}
"ctx":"initandlisten","msg":"Did not find local Rollback ID document at startup. Creating one"}
"ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.system.rollback.id","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"aa613aab-c117-48a3-b25b-6774ba157504"}},"options":{}}}
"ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.system.rollback.id","index":"_id_","commitTimestamp":null}}
"ctx":"initandlisten","msg":"Initialized the rollback ID","attr":{"rbid":1}}
"ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}}
"ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigUninitialized","oldState":"ConfigStartingUp"}}
"ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.system.views","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"81424f7a-1bdd-4033-977b-0e4b06bfbea9"}},"options":{}}}
"ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.system.views","index":"_id_","commitTimestamp":null}}
"ctx":"LogicalSessionCacheRefresh","msg":"Failed to refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}}
"ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"}
"ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"NamespaceNotFound: config.system.sessions does not exist"}}
"ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"}
"ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
"ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
"ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"on"}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2000}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:39590","uuid":"3abdc004-a969-4a4d-bea8-bc4556c5c47d","connectionId":1,"connectionCount":1}}
"ctx":"conn1","msg":"Connection ended","attr":{"remote":"142.44.235.169:39590","uuid":"3abdc004-a969-4a4d-bea8-bc4556c5c47d","connectionId":1,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:39604","uuid":"fd90994e-d137-4730-904a-444817597267","connectionId":2,"connectionCount":1}}
"ctx":"conn2","msg":"Connection ended","attr":{"remote":"142.44.235.169:39604","uuid":"fd90994e-d137-4730-904a-444817597267","connectionId":2,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2800}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:39618","uuid":"3d856166-73ff-452a-bd2a-fc3915568ebc","connectionId":3,"connectionCount":1}}
"ctx":"conn3","msg":"Connection ended","attr":{"remote":"142.44.235.169:39618","uuid":"3d856166-73ff-452a-bd2a-fc3915568ebc","connectionId":3,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3000}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:43470","uuid":"8826ea34-c7e8-429a-8b37-79b8f4cbcbfc","connectionId":4,"connectionCount":1}}
"ctx":"conn4","msg":"Connection ended","attr":{"remote":"142.44.235.169:43470","uuid":"8826ea34-c7e8-429a-8b37-79b8f4cbcbfc","connectionId":4,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3200}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:43474","uuid":"f9b2bbed-8467-416e-a85e-181d92d883a1","connectionId":5,"connectionCount":1}}
"ctx":"conn5","msg":"Connection ended","attr":{"remote":"142.44.235.169:43474","uuid":"f9b2bbed-8467-416e-a85e-181d92d883a1","connectionId":5,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3400}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:43484","uuid":"0f3832f2-c75d-42b2-9071-c3c4a298be95","connectionId":6,"connectionCount":1}}
"ctx":"conn6","msg":"Connection ended","attr":{"remote":"142.44.235.169:43484","uuid":"0f3832f2-c75d-42b2-9071-c3c4a298be95","connectionId":6,"connectionCount":0}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:43498","uuid":"11d11c71-5f95-4ec4-a8db-59489b903262","connectionId":7,"connectionCount":1}}
"ctx":"conn7","msg":"Connection ended","attr":{"remote":"142.44.235.169:43498","uuid":"11d11c71-5f95-4ec4-a8db-59489b903262","connectionId":7,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3600}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:36594","uuid":"cda453dc-7f4d-43fd-a481-dededece2037","connectionId":8,"connectionCount":1}}
"ctx":"conn8","msg":"Connection ended","attr":{"remote":"142.44.235.169:36594","uuid":"cda453dc-7f4d-43fd-a481-dededece2037","connectionId":8,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":3800}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:36596","uuid":"1a09dbd2-ccc0-421c-ac1e-cfa8895aafa8","connectionId":9,"connectionCount":1}}
"ctx":"conn9","msg":"Connection ended","attr":{"remote":"142.44.235.169:36596","uuid":"1a09dbd2-ccc0-421c-ac1e-cfa8895aafa8","connectionId":9,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":4000}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:36608","uuid":"aa401659-7adf-4504-9604-5405b4eabb1d","connectionId":10,"connectionCount":1}}
"ctx":"conn10","msg":"Connection ended","attr":{"remote":"142.44.235.169:36608","uuid":"aa401659-7adf-4504-9604-5405b4eabb1d","connectionId":10,"connectionCount":0}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":4200}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:55396","uuid":"a2ad9bf2-492e-4858-9a4b-8bd57c871629","connectionId":11,"connectionCount":1}}
"ctx":"conn11","msg":"Connection ended","attr":{"remote":"142.44.235.169:55396","uuid":"a2ad9bf2-492e-4858-9a4b-8bd57c871629","connectionId":11,"connectionCount":0}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:58776","uuid":"387af180-a3ab-48a9-adbd-9787f747e9fc","connectionId":12,"connectionCount":1}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:58766","uuid":"f580460d-077f-4893-a633-6cd656c44f18","connectionId":13,"connectionCount":2}}
"ctx":"conn12","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn12","msg":"client metadata","attr":{"remote":"10.42.175.17:58776","client":"conn12","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"conn13","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn13","msg":"client metadata","attr":{"remote":"10.42.175.17:58766","client":"conn13","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:55402","uuid":"3897afd9-c201-4a03-9fd1-6f1eac4f7fe7","connectionId":14,"connectionCount":3}}
"ctx":"conn14","msg":"Connection ended","attr":{"remote":"142.44.235.169:55402","uuid":"3897afd9-c201-4a03-9fd1-6f1eac4f7fe7","connectionId":14,"connectionCount":2}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":4400}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:55410","uuid":"aeb430c9-088e-44bd-beef-8aa730986645","connectionId":15,"connectionCount":3}}
"ctx":"conn15","msg":"Connection ended","attr":{"remote":"142.44.235.169:55410","uuid":"aeb430c9-088e-44bd-beef-8aa730986645","connectionId":15,"connectionCount":2}}
"ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":4600}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:33376","uuid":"cc14a882-2473-4176-afd4-01f3df4498ef","connectionId":16,"connectionCount":3}}
"ctx":"conn16","msg":"Connection ended","attr":{"remote":"142.44.235.169:33376","uuid":"cc14a882-2473-4176-afd4-01f3df4498ef","connectionId":16,"connectionCount":2}}
"ctx":"conn12","msg":"Interrupted operation as its client disconnected","attr":{"opId":4935}}
"ctx":"conn13","msg":"Connection ended","attr":{"remote":"10.42.175.17:58766","uuid":"f580460d-077f-4893-a633-6cd656c44f18","connectionId":13,"connectionCount":1}}
"ctx":"conn12","msg":"Connection ended","attr":{"remote":"10.42.175.17:58776","uuid":"387af180-a3ab-48a9-adbd-9787f747e9fc","connectionId":12,"connectionCount":0}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:41312","uuid":"a13b34ce-51ef-40b6-99a7-fec893ff937e","connectionId":17,"connectionCount":1}}
"ctx":"conn17","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn17","msg":"note: no users configured in admin.system.users, allowing localhost access"}
"ctx":"conn17","msg":"client metadata","attr":{"remote":"127.0.0.1:41312","client":"conn17","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"5.0.11-10"},"os":{"type":"Linux","name":"Oracle Linux Server release 8.6","architecture":"x86_64","version":"Kernel 4.18.0-372.19.1.el8_6.x86_64"}}}}
"ctx":"conn17","msg":"replSetInitiate admin command received from client"}
"ctx":"conn17","msg":"Setting new configuration state","attr":{"newState":"ConfigInitiating","oldState":"ConfigUninitialized"}}
"ctx":"conn17","msg":"createCollection","attr":{"namespace":"admin.system.version","uuidDisposition":"provided","uuid":{"uuid":{"$uuid":"4a04167e-5705-4348-b88a-0f8c31cb10f4"}},"options":{"uuid":{"$uuid":"4a04167e-5705-4348-b88a-0f8c31cb10f4"}}}}
"ctx":"conn17","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.version","index":"_id_","commitTimestamp":null}}
"ctx":"conn17","msg":"Setting featureCompatibilityVersion","attr":{"newVersion":"5.0"}}
"ctx":"conn17","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}}
"ctx":"conn17","msg":"Skip closing connection for connection","attr":{"connectionId":17}}
"ctx":"conn17","msg":"replSetInitiate config object parses ok","attr":{"numMembers":1}}
"ctx":"conn17","msg":"Creating replication oplog","attr":{"oplogSizeMB":990}}
"ctx":"conn17","msg":"createCollection","attr":{"namespace":"local.oplog.rs","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"475d02d4-db29-42a5-abf9-c4ec68a2ddb9"}},"options":{"capped":true,"size":1038090240,"autoIndexId":false}}}
"ctx":"conn17","msg":"The size storer reports that the oplog contains","attr":{"numRecords":0,"dataSize":0}}
"ctx":"conn17","msg":"WiredTiger record store oplog processing finished","attr":{"durationMillis":0}}
"ctx":"conn17","msg":"WiredTiger message","attr":{"message":"[1665066712:495023][1:0x7f9dc81f7700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 68, snapshot max: 68 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
"ctx":"conn17","msg":"createCollection","attr":{"namespace":"local.system.replset","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"e411ad9e-b460-48ea-a51e-8c06a5fc6c8d"}},"options":{}}}
"ctx":"conn17","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.system.replset","index":"_id_","commitTimestamp":{"$timestamp":{"t":1665066712,"i":1}}}}
"ctx":"conn17","msg":"WiredTiger message","attr":{"message":"[1665066712:568273][1:0x7f9dc81f7700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 77, snapshot max: 77 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
"ctx":"conn17","msg":"Taking a stable checkpoint for replSetInitiate"}
"ctx":"conn17","msg":"Updating commit point for initiate","attr":{"_lastCommittedOpTimeAndWallTime":"{ ts: Timestamp(1665066712, 1), t: -1 }, 2022-10-06T14:31:52.566+00:00"}}
"ctx":"conn17","msg":"Triggering the first stable checkpoint","attr":{"initialDataTimestamp":{"$timestamp":{"t":1665066712,"i":1}},"prevStableTimestamp":{"$timestamp":{"t":0,"i":0}},"currStableTimestamp":{"$timestamp":{"t":1665066712,"i":1}}}}
"ctx":"conn17","msg":"WiredTiger message","attr":{"message":"[1665066712:605914][1:0x7f9dc81f7700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 79, snapshot max: 79 snapshot count: 0, oldest timestamp: (1665066712, 1) , meta checkpoint timestamp: (1665066712, 1) base write gen: 1"}}
"ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1665066712:614321][1:0x7f9db3530700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 81, snapshot max: 81 snapshot count: 0, oldest timestamp: (1665066712, 1) , meta checkpoint timestamp: (1665066712, 1) base write gen: 1"}}
"ctx":"conn17","msg":"Setting new configuration state","attr":{"newState":"ConfigSteady","oldState":"ConfigInitiating"}}
"ctx":"conn17","msg":"New replica set config in use","attr":{"config":{"_id":"rs0","version":1,"term":0,"members":[{"_id":0,"host":"minimal-rs0-0.minimal-rs0.default.svc.cluster.local:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1,"tags":{},"secondaryDelaySecs":0,"votes":1}],"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"633ee6d8050e79bc79c10d53"}}}}}
"ctx":"conn17","msg":"Found self in config","attr":{"hostAndPort":"minimal-rs0-0.minimal-rs0.default.svc.cluster.local:27017"}}
"ctx":"conn17","msg":"Replica set state transition","attr":{"newState":"STARTUP2","oldState":"STARTUP"}}
"ctx":"conn17","msg":"Starting replication storage threads"}
"ctx":"conn17","msg":"No initial sync required. Attempting to begin steady replication"}
"ctx":"conn17","msg":"Replica set state transition","attr":{"newState":"RECOVERING","oldState":"STARTUP2"}}
"ctx":"conn17","msg":"createCollection","attr":{"namespace":"local.replset.initialSyncId","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"be573832-c667-4be5-86c4-ff2bec4c8e21"}},"options":{}}}
"ctx":"conn17","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.replset.initialSyncId","index":"_id_","commitTimestamp":null}}
"ctx":"conn17","msg":"Starting replication fetcher thread"}
"ctx":"conn17","msg":"Starting replication applier thread"}
"ctx":"conn17","msg":"Starting replication reporter thread"}
"ctx":"OplogApplier-0","msg":"Starting oplog application"}
"ctx":"conn17","msg":"Slow query","attr":{"type":"command","ns":"local.system.replset","appName":"MongoDB Shell","command":{"replSetInitiate":{"_id":"rs0","version":1,"members":[{"_id":0,"host":"minimal-rs0-0.minimal-rs0.default.svc.cluster.local:27017"}]},"lsid":{"id":{"$uuid":"d00fdd04-83bd-4b34-b014-5af20e9970f8"}},"$db":"admin"},"numYields":0,"reslen":38,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":15}},"FeatureCompatibilityVersion":{"acquireCount":{"r":10,"w":8}},"ReplicationStateTransition":{"acquireCount":{"w":16}},"Global":{"acquireCount":{"r":10,"w":6,"W":2}},"Database":{"acquireCount":{"r":6,"w":6,"R":1}},"Collection":{"acquireCount":{"r":2,"w":5}},"Mutex":{"acquireCount":{"r":14}},"oplog":{"acquireCount":{"w":1}}},"flowControl":{"acquireCount":5,"timeAcquiringMicros":22},"storage":{},"remote":"127.0.0.1:41312","protocol":"op_msg","durationMillis":205}}
"ctx":"OplogApplier-0","msg":"Replica set state transition","attr":{"newState":"SECONDARY","oldState":"RECOVERING"}}
"ctx":"OplogApplier-0","msg":"Starting an election, since we've seen no PRIMARY in election timeout period","attr":{"electionTimeoutPeriodMillis":10000}}
"ctx":"OplogApplier-0","msg":"Conducting a dry run election to see if we could be elected","attr":{"currentTerm":0}}
"ctx":"ReplCoord-0","msg":"Dry election run succeeded, running for election","attr":{"newTerm":1}}
"ctx":"ReplCoord-0","msg":"Storing last vote document in local storage for my election","attr":{"lastVote":{"term":1,"candidateIndex":0}}}
"ctx":"ReplCoord-0","msg":"Election succeeded, assuming primary role","attr":{"term":1}}
"ctx":"ReplCoord-0","msg":"Replica set state transition","attr":{"newState":"PRIMARY","oldState":"SECONDARY"}}
"ctx":"ReplCoord-0","msg":"Resetting sync source to empty","attr":{"previousSyncSource":":27017"}}
"ctx":"ReplCoord-0","msg":"Entering primary catch-up mode"}
"ctx":"ReplCoord-0","msg":"Skipping primary catchup since we are the only node in the replica set."}
"ctx":"ReplCoord-0","msg":"Exited primary catch-up mode"}
"ctx":"ReplCoord-0","msg":"Stopping replication producer"}
"ctx":"ReplBatcher","msg":"Oplog buffer has been drained","attr":{"term":1}}
"ctx":"RstlKillOpThread","msg":"Starting to kill user operations"}
"ctx":"RstlKillOpThread","msg":"Stopped killing user operations"}
"ctx":"RstlKillOpThread","msg":"State transition ops metrics","attr":{"metrics":{"lastStateTransition":"stepUp","userOpsKilled":0,"userOpsRunning":0}}}
"ctx":"OplogApplier-0","msg":"Increment the config term via reconfig"}
"ctx":"OplogApplier-0","msg":"Replication config state is Steady, starting reconfig"}
"ctx":"OplogApplier-0","msg":"Setting new configuration state","attr":{"newState":"ConfigReconfiguring","oldState":"ConfigSteady"}}
"ctx":"OplogApplier-0","msg":"replSetReconfig config object parses ok","attr":{"numMembers":1}}

And these are the kinds of log lines that keep getting spammed:


"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:33378","uuid":"c210e248-fcb1-4448-827a-980bf0bac84f","connectionId":18,"connectionCount":1}}
"ctx":"conn18","msg":"Connection ended","attr":{"remote":"142.44.235.169:33378","uuid":"c210e248-fcb1-4448-827a-980bf0bac84f","connectionId":18,"connectionCount":0}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"142.44.235.169:33392","uuid":"9b47d18e-ff07-4223-94c1-ac3d0fcc32c0","connectionId":19,"connectionCount":1}}
"ctx":"conn19","msg":"Connection ended","attr":{"remote":"142.44.235.169:33392","uuid":"9b47d18e-ff07-4223-94c1-ac3d0fcc32c0","connectionId":19,"connectionCount":0}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:41320","uuid":"117f1b7b-96b6-4e20-9c5e-f6093ee1bbf8","connectionId":20,"connectionCount":1}}
"ctx":"conn20","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn20","msg":"client metadata","attr":{"remote":"127.0.0.1:41320","client":"conn20","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"5.0.11-10"},"os":{"type":"Linux","name":"Oracle Linux Server release 8.6","architecture":"x86_64","version":"Kernel 4.18.0-372.19.1.el8_6.x86_64"}}}}
"ctx":"conn20","msg":"createCollection","attr":{"namespace":"admin.system.users","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"83440bfa-a0a3-4bfa-af84-22c501eddf9d"}},"options":{}}}
"ctx":"conn20","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.users","index":"_id_","commitTimestamp":{"$timestamp":{"t":1665066717,"i":3}}}}
"ctx":"conn20","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.users","index":"user_1_db_1","commitTimestamp":{"$timestamp":{"t":1665066717,"i":3}}}}
"ctx":"conn20","msg":"Connection ended","attr":{"remote":"127.0.0.1:41320","uuid":"117f1b7b-96b6-4e20-9c5e-f6093ee1bbf8","connectionId":20,"connectionCount":0}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:52054","uuid":"908514f0-f370-412e-86f0-f870a793d55e","connectionId":21,"connectionCount":1}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:52066","uuid":"41aa1fd1-2a2c-488a-9451-777474f64efd","connectionId":22,"connectionCount":2}}
"ctx":"conn22","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn22","msg":"client metadata","attr":{"remote":"10.42.175.17:52066","client":"conn22","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"conn21","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn21","msg":"client metadata","attr":{"remote":"10.42.175.17:52054","client":"conn21","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:52076","uuid":"a37d8f71-39a8-410e-980c-4c0c594ed24b","connectionId":23,"connectionCount":3}}
"ctx":"conn23","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn23","msg":"client metadata","attr":{"remote":"10.42.175.17:52076","client":"conn23","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"conn23","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"userAdmin","authenticationDatabase":"admin","remote":"10.42.175.17:52076","extraInfo":{}}}
"ctx":"conn23","msg":"createCollection","attr":{"namespace":"admin.system.roles","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"f6df8423-9bd2-404e-afa9-d110b2d1e8de"}},"options":{}}}
"ctx":"conn23","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.roles","index":"_id_","commitTimestamp":{"$timestamp":{"t":1665066718,"i":2}}}}
"ctx":"conn23","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.roles","index":"role_1_db_1","commitTimestamp":{"$timestamp":{"t":1665066718,"i":2}}}}
"ctx":"conn21","msg":"Interrupted operation as its client disconnected","attr":{"opId":6224}}
"ctx":"conn22","msg":"Connection ended","attr":{"remote":"10.42.175.17:52066","uuid":"41aa1fd1-2a2c-488a-9451-777474f64efd","connectionId":22,"connectionCount":2}}
"ctx":"conn23","msg":"Connection ended","attr":{"remote":"10.42.175.17:52076","uuid":"a37d8f71-39a8-410e-980c-4c0c594ed24b","connectionId":23,"connectionCount":1}}
"ctx":"conn21","msg":"Connection ended","attr":{"remote":"10.42.175.17:52054","uuid":"908514f0-f370-412e-86f0-f870a793d55e","connectionId":21,"connectionCount":0}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:52078","uuid":"bbff6210-e43b-40ee-a71d-f6add4751f4f","connectionId":24,"connectionCount":1}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:52080","uuid":"25e53f9b-c5d5-4bcb-861b-c2bc2f6a0ac6","connectionId":25,"connectionCount":2}}
"ctx":"conn24","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn24","msg":"client metadata","attr":{"remote":"10.42.175.17:52078","client":"conn24","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"conn25","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn25","msg":"client metadata","attr":{"remote":"10.42.175.17:52080","client":"conn25","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:52086","uuid":"22bec8b2-4ac8-45fe-9723-3739e6facf23","connectionId":26,"connectionCount":3}}
"ctx":"conn26","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn26","msg":"client metadata","attr":{"remote":"10.42.175.17:52086","client":"conn26","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"conn26","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"clusterAdmin","authenticationDatabase":"admin","remote":"10.42.175.17:52086","extraInfo":{}}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:52108","uuid":"255c170b-8413-47ed-b3bc-4b39a4a0ee9a","connectionId":27,"connectionCount":4}}
"ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.42.175.17:52102","uuid":"be298a07-873e-42fa-8b41-1de5095fb4f8","connectionId":28,"connectionCount":5}}
"ctx":"conn27","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn27","msg":"client metadata","attr":{"remote":"10.42.175.17:52108","client":"conn27","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}
"ctx":"conn28","msg":"SSL peer certificate validation failed","attr":{"reason":"self signed certificate"}}
"ctx":"conn28","msg":"client metadata","attr":{"remote":"10.42.175.17:52102","client":"conn28","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.0"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.7"}}}

Sorry, for some reason the majority of my report was deleted?


I will repost it as well as I can.
Full details posted here: K8SPSMDB-796: Fix PBM connection if replset is exposed by egegunes · Pull Request #1060 · percona/percona-server-mongodb-operator · GitHub


Installed the operator with:
kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/deploy/bundle.yaml

Set up the cluster with:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: minimal
  namespace: default
spec:
  image: percona/percona-server-mongodb:latest
  replsets:
  - affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    name: rs0
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: minimal
  sharding:
    enabled: false
  upgradeOptions:
    apply: disabled
    schedule: 0 2 * * *

Hi @jonathon,

Are you able to take backups with this patch? Warnings in the mongod logs are expected, since we use insecure options when connecting to mongo.

I’m not sure why your post is deleted. @daniil.bazhenov anything we can do about it?


The operator still throws the following error:

2022-10-07T07:26:31.906Z ERROR controller_psmdb failed to reconcile cluster {"Request.Namespace": "archive", "Request.Name": "percona-server-psmdb", "replset": "rs0", "error": "handleReplsetInit: exec add admin user: command terminated with exit code 252 / Percona Server for MongoDB shell version v5.0.11-10\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n{\"t\":{\"$date\":\"2022-10-07T07:26:31.882Z\"},\"s\":\"W\", \"c\":\"NETWORK\", \"id\":23238, \"ctx\":\"js\",\"msg\":\"The server certificate does not match the remote host name\",\"attr\":{\"remoteHost\":\"127.0.0.1\",\"certificateNames\":\"SAN(s): localhost, percona-server-psmdb-rs0, percona-server-psmdb-rs0.archive, percona-server-psmdb-rs0.archive.svc.cluster.local, *.percona-server-psmdb-rs0, *.percona-server-psmdb-rs0.archive, *.percona-server-psmdb-rs0.archive.svc.cluster.local, percona-server-psmdb-rs0.archive.svc.clusterset.local, *.percona-server-psmdb-rs0.archive.svc.clusterset.local, *.archive.svc.clusterset.local, percona-server-psmdb-mongos, percona-server-psmdb-mongos.archive, percona-server-psmdb-mongos.archive.svc.cluster.local, *.percona-server-psmdb-mongos, *.percona-server-psmdb-mongos.archive, *.percona-server-psmdb-mongos.archive.svc.cluster.local, percona-server-psmdb-cfg, percona-server-psmdb-cfg.archive, percona-server-psmdb-cfg.archive.svc.cluster.local, *.percona-server-psmdb-cfg, *.percona-server-psmdb-cfg.archive, *.percona-server-psmdb-cfg.archive.svc.cluster.local, percona-server-psmdb-mongos.archive.svc.clusterset.local, *.percona-server-psmdb-mongos.archive.svc.clusterset.local, percona-server-psmdb-cfg.archive.svc.clusterset.local, *.percona-server-psmdb-cfg.archive.svc.clusterset.local, CN: percona-server-psmdb\"}}\nImplicit session: session { \"id\" : UUID(\"d6a05630-78cd-4391-bc49-54c162c2457a\") }\nPercona Server for MongoDB server version: v5.0.11-10\nuncaught exception: Error: couldn't add user: command createUser requires 
authentication :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1367:11\n@(shell eval):1:1\nexiting with code -4\n / ", "errorVerbose": "exec add admin user: command terminated with exit code 252 / Percona Server for MongoDB shell version v5.0.11-10\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n{\"t\":{\"$date\":\"2022-10-07T07:26:31.882Z\"},\"s\":\"W\", \"c\":\"NETWORK\", \"id\":23238, \"ctx\":\"js\",\"msg\":\"The server certificate does not match the remote host name\",\"attr\":{\"remoteHost\":\"127.0.0.1\",\"certificateNames\":\"SAN(s): localhost, percona-server-psmdb-rs0, percona-server-psmdb-rs0.archive, percona-server-psmdb-rs0.archive.svc.cluster.local, *.percona-server-psmdb-rs0, *.percona-server-psmdb-rs0.archive, *.percona-server-psmdb-rs0.archive.svc.cluster.local, percona-server-psmdb-rs0.archive.svc.clusterset.local, *.percona-server-psmdb-rs0.archive.svc.clusterset.local, *.archive.svc.clusterset.local, percona-server-psmdb-mongos, percona-server-psmdb-mongos.archive, percona-server-psmdb-mongos.archive.svc.cluster.local, *.percona-server-psmdb-mongos, *.percona-server-psmdb-mongos.archive, *.percona-server-psmdb-mongos.archive.svc.cluster.local, percona-server-psmdb-cfg, percona-server-psmdb-cfg.archive, percona-server-psmdb-cfg.archive.svc.cluster.local, *.percona-server-psmdb-cfg, *.percona-server-psmdb-cfg.archive, *.percona-server-psmdb-cfg.archive.svc.cluster.local, percona-server-psmdb-mongos.archive.svc.clusterset.local, *.percona-server-psmdb-mongos.archive.svc.clusterset.local, percona-server-psmdb-cfg.archive.svc.clusterset.local, *.percona-server-psmdb-cfg.archive.svc.clusterset.local, CN: percona-server-psmdb\"}}\nImplicit session: session { \"id\" : UUID(\"d6a05630-78cd-4391-bc49-54c162c2457a\") }\nPercona Server for MongoDB server version: v5.0.11-10\nuncaught exception: Error: couldn't add user: command createUser requires authentication 
:\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1367:11\n@(shell eval):1:1\nexiting with code -4\n / \nhandleReplsetInit\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileCluster\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:102\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:499\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571"}
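The `The server certificate does not match the remote host name` warning fits the SAN list dumped in that error: the certificate carries DNS SANs (localhost plus the service names) but no IP SAN, and the init shell dials `mongodb://127.0.0.1:27017`, so hostname verification of the IP has nothing to match. That check can be reproduced offline with openssl's `-checkhost`/`-checkip` flags (the subject and SANs below are illustrative, not the operator's exact certificate):

```shell
# Illustrative certificate with DNS SANs only (no IP SAN), mirroring the
# shape of the SAN list shown in the operator error above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=percona-server-psmdb" \
  -addext "subjectAltName=DNS:localhost,DNS:percona-server-psmdb-rs0" 2>/dev/null

# A DNS name present in the SAN list matches...
openssl x509 -in tls.crt -noout -checkhost localhost

# ...but the IP 127.0.0.1 does not, since there is no IP SAN — which is why
# connecting to mongodb://127.0.0.1 trips the hostname check.
openssl x509 -in tls.crt -noout -checkip 127.0.0.1
```

Note that in the trace above the SAN mismatch is only a warning; the hard failure is `couldn't add user: command createUser requires authentication`, which may indicate the localhost exception was no longer available when the operator tried to create the admin user.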


OK, I can see that I can connect to the cluster with Compass and get the replica set status.


I’m also having issues with connectivity between the replica pods, with the same errors as the OP. It’s a generic install of the Helm chart (i.e. mostly the defaults). To me, the Helm chart looks broken, and I’m not sure how or where to look to try to fix it.

Scott

Edit: I’ve now carried out the generic k8s installation, and everything seems stable now.
