Failed liveness probe in an unmanaged cluster

When running the cluster in unmanaged mode (unmanaged: true), the pods restart constantly due to a failed liveness probe until I add them to the MongoDB replica set.
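
For reference, this is roughly how I add the members manually once the pods are up (a sketch; the hostnames, user, and password below are placeholders for my setup):

kubectl -n psmdb-operator exec -it test-rs0-0 -c mongod -- mongo admin \
  -u clusterAdmin -p '<password>' \
  --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "test-rs0-0.psmdb-operator.svc.cluster.local:27017"}]})'

Until that initiation happens, the liveness probe keeps failing and the pods keep restarting.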

My operator version: 1.12.0
Kubernetes version: 1.21.14
Cluster config:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: test
  namespace: psmdb-operator
spec:
  crVersion: 1.12.0
  image: percona/percona-server-mongodb:4.4.13
  unmanaged: true
  updateStrategy: OnDelete
  secrets:
    users: test-users
  replsets:
  - name: rs0
    size: 3
    expose:
      enabled: true
      exposeType: ClusterIP
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
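
As a side note, I understand the CR also allows tuning the liveness probe per replica set; if the defaults are just too aggressive for unmanaged mode, something like the following could relax them. The field names are my reading of the CRD, and the values are arbitrary examples:

spec:
  replsets:
  - name: rs0
    livenessProbe:
      # example values only
      initialDelaySeconds: 60
      periodSeconds: 30
      timeoutSeconds: 10
      failureThreshold: 4
      startupDelaySeconds: 7200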

Running the health-check command inside the mongod container returns this error:

bash-4.4$ /data/db/mongodb-healthcheck k8s liveness --ssl --sslInsecure --sslCAFile /etc/mongodb-ssl/ca.crt --sslPEMKeyFile /tmp/tls.pem
{"level":"info","msg":"Running Kubernetes liveness check for mongod","time":"2022-10-28T10:41:42Z"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x87338b]

goroutine 1 [running]:
main.main()
        /go/src/github.com/percona/percona-server-mongodb-operator/cmd/mongodb-healthcheck/main.go:80 +0x80b
bash-4.4$ /data/db/mongodb-healthcheck --version
mongodb-healthcheck version 0.5.0
git commit 9a969e114b50c5299c25856dff6c23368987be70, branch release-1-12-0
go version go1.17.9
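
For completeness, the probe command the kubelet actually runs can be compared against the manual invocation above; a sketch, assuming the pod and container names from my config:

kubectl -n psmdb-operator get pod test-rs0-0 \
  -o jsonpath='{.spec.containers[?(@.name=="mongod")].livenessProbe.exec.command}'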

Is this normal behavior?
If I understood this comment correctly, they should not restart.
