More cluster creation worked example requests - helm to kubectl

Description:

I’ve used Helm to install a 3 replica cluster, with additions to pick up our custom storage class.

    helm install mongodb-clu1 percona/psmdb-db --version 1.14.0 --namespace mongodb \
      --set "replsets[0].volumeSpec.pvc.storageClassName=local-hostpath-mongo-prod-sc" \
      --set "replsets[0].name=rs0" --set "replsets[0].size=3" \
      --set "replsets[0].volumeSpec.pvc.resources.requests.storage=15Gi" \
      --set backup.enabled=true --set sharding.enabled=false --set pmm.enabled=true

This method has been used on 2 k8s clusters, and then tweaked to add s3 backup params.
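For reference, the s3 tweak was along these lines; the storage name, bucket, region and secret below are placeholders rather than our real values, and the exact chart keys may vary by chart version:

    # Hypothetical s3 backup storage settings (placeholder names, not our real config)
    --set backup.storages.s3-backup.type=s3 \
    --set backup.storages.s3-backup.s3.bucket=my-backup-bucket \
    --set backup.storages.s3-backup.s3.region=us-east-1 \
    --set backup.storages.s3-backup.s3.credentialsSecret=my-cluster-s3-credentials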

A third cluster was built using kubectl and the default settings shown in the Percona documentation, which create 3 replica set pods, 3 mongos pods and 3 cfg (config server) pods.

My question: what changes need to be made to the helm install to produce the same result as the kubectl method, other than removing --set sharding.enabled=false?

I’m testing this myself at the moment, and expect to have to add some params to pick up the custom storage class, but would appreciate any available advice.

Many thanks,

Mike

Hi @Michael_Shield !

Interesting question! I would go with the following approach:

  1. collect yaml from the currently running cluster (cr) into a first file:
    kubectl get psmdb my-cluster-name -oyaml | yq eval 'del(.status) | del(.metadata.annotations)' > kubectl.yaml

Here we delete keys like status and metadata.annotations because they will probably be missing from the helm template output, but you could also leave them in.

  2. collect yaml from the helm template command into a second file:
    helm template psmdb-db percona/psmdb-db --namespace helm-test > helm.yaml

You will probably need to use an existing namespace above. Also, if you are running a specific older version of the operator, you should specify the helm chart version that matches it (otherwise you might get a large/strange diff).
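For example, to pin the chart to the version used in the original install (1.14.0 here is taken from the helm install command earlier in this thread):

    # Render the chart at a pinned version so it matches the running operator
    helm template psmdb-db percona/psmdb-db --version 1.14.0 --namespace helm-test > helm.yaml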

  3. use diff and yq to create a meaningful diff between these yaml files:
    diff <(yq -P 'sort_keys(..)' -o=props kubectl.yaml) <(yq -P 'sort_keys(..)' -o=props helm.yaml)

This is based on the advice here: Tips, Tricks, Troubleshooting - yq

In the end you should get a flattened, sorted list of properties from each file, which makes the two pretty easy to compare (it’s not perfect, but seems good enough to me).

I hope this helps!

Have a nice weekend!

Thanks Tomislav,

I’ve managed to figure out what needed to be changed. My problem was not being able to work out how to set the storage class name for the config pods.

This is what I ended up with:

    allowUnsafeConfigurations: true
    sharding:
      enabled: true
      mongos:
        size: 3
      configrs:
        volumeSpec:
          pvc:
            storageClassName: local-hostpath-mongo-prod-sc
    replsets:
      - name: rs0
        size: 3
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        volumeSpec:
          pvc:
            storageClassName: local-hostpath-mongo-prod-sc
            resources:
              requests:
                storage: 5Gi
    backup:
      enabled: true
    pmm:
      enabled: false
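In case it helps anyone else, I’m feeding these values in from a file; the file name is arbitrary and the release name/namespace match my earlier install:

    # Install (or update) the release using the values file above
    helm upgrade --install mongodb-clu1 percona/psmdb-db --version 1.14.0 \
      --namespace mongodb -f values.yaml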

My issue is this: it’s not really clear how the custom resource definition options become configrs, etc. Perhaps you could clarify?

My personal opinion is that the current worked examples are still unclear; they should show how to achieve the same result with both methods of using the operator.

Cheers,

Mike

Hi @Michael_Shield !

I see your point now. The thing is, some of the option names were shortened in the helm chart (historically, for whatever reason - the only one I can come up with is that they are shorter to type when setting options on the helm command line).

In recent releases we always try to use the same option names in helm and in the cr.
Currently, the best approach when working with the helm chart is to check the README file to see which options are available.
It’s available here: https://github.com/percona/percona-helm-charts/blob/main/charts/psmdb-db/README.md
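If you want to see the mapping for yourself, you can render the chart with one of the shortened options set and inspect the generated cr; this assumes yq is installed, and the configrs key and storage class name below are just illustrative:

    # Render the chart and show how the helm-side "configrs" option
    # lands in the sharding section of the generated custom resource
    helm template psmdb-db percona/psmdb-db \
      --set "sharding.configrs.volumeSpec.pvc.storageClassName=my-sc" \
      | yq eval 'select(.kind == "PerconaServerMongoDB") | .spec.sharding' -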

Thanks Tomislav,

That’s really helpful, I can work with that.

Cheers,

Mike