In the article, the following command is shown as a demo for a simple cluster
helm install mytest percona/psmdb-db --set sharding.enabled=false \
  --set "replsets.volumeSpec.pvc.storageClassName=openebs-hostpath" \
  --set "replsets.volumeSpec.pvc.resources.requests.storage=3Gi" \
  --set "replsets.name=rs0" --set "replsets.size=3"
I have two questions, and for one of them I think I almost have the answer.
We wish to use 3 servers from a group of 6 to form a MongoDB cluster. The main difference between the servers is storage. On one group of 3 servers I've created two groups of LVM volumes, one on NVMe drives and the other on hard disks, and I can easily modify the openebs-hostpath storage class to point to either of the mount points.
Is there a simple affinity option I can add to the above command to pick out the 3 servers I wish to run this on? My efforts at editing the storage class to achieve this have not gone well so far.
The second question is related: is there any way of configuring tiered storage, i.e. mixing the fast and slow volumes?
Hey @Michael_Shield ,
thanks for raising this - love the questions!
- In our operators we provide flexible affinity configuration.
The way I see it working in your case: label the nodes that carry different storage with different labels, then set the appropriate affinity configuration in the custom resource.
If you use the helm chart, you can pass the same affinity configuration through its values.
Please let me know if it helps.
- As for tiered storage - can you tell me more? What topology do you have in mind? A drawing might make it easier to explain.
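On the first point, here is a minimal sketch of what I mean. The label name storage-tier=nvme and the node names are placeholders for illustration; the affinity.advanced field accepts standard Kubernetes affinity syntax:

```yaml
# Label the three target nodes first, e.g.:
#   kubectl label node <node-name> storage-tier=nvme
# Then reference the label in the PerconaServerMongoDB custom resource:
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: mytest
spec:
  replsets:
    - name: rs0
      size: 3
      affinity:
        advanced:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: storage-tier   # placeholder label key
                      operator: In
                      values:
                        - nvme
```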
In the end I went back to something I tried two days earlier, and after removing a couple of stray spaces it all started to work - something you often find when working with Kubernetes.
I used a custom storage class with its BasePath modified to match the mounted LVM volume, and the following addition to pick out the required cluster nodes, though I will look at your option and test it as well.
It might make sense to follow the advice of OpenEBS and replace the hostname with a custom variable too.
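For anyone finding this later, a sketch of the kind of storage class I mean - the name, BasePath, and hostnames below are placeholders, not my actual values:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-nvme   # placeholder name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/mnt/nvme-lvm/openebs"   # placeholder mount point
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
# Restrict provisioning to the three chosen nodes:
allowedTopologies:
  - matchLabelExpressions:
      - key: kubernetes.io/hostname
        values:
          - node-1
          - node-2
          - node-3
```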
As regards the second point: when dealing with some of the smaller cloud providers, it's often difficult to get the optimal storage option.
We had to make a decision based on capacity versus price, and opted for large hard drives in 3 of the 6 servers we've allocated to the Kubernetes cluster nodes. If you look at ClickHouse, which is another of the apps we hope to deploy, it appears to have a tiered option based on S3 or compatible storage for older data. In our case this may be slightly moot, as I'd thought of deploying MinIO for this, but it really needs 4 nodes to work properly. There is always the option to use the real S3, of course.
This article describes how it's done: around ten lines of configuration define a policies block. It was also mentioned 3 years earlier in a couple of Altinity blog articles called amplifying-clickhouse-capacity-with-multi-volume-storage-part-1/2.
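For illustration, the kind of ClickHouse policies block I'm referring to looks roughly like this - the disk paths and the tier/policy names here are placeholders of my own, not taken from the article:

```xml
<clickhouse>
  <storage_configuration>
    <disks>
      <fast>
        <path>/mnt/nvme/clickhouse/</path>
      </fast>
      <slow>
        <path>/mnt/hdd/clickhouse/</path>
      </slow>
    </disks>
    <policies>
      <tiered>
        <volumes>
          <hot>
            <disk>fast</disk>
          </hot>
          <cold>
            <disk>slow</disk>
          </cold>
        </volumes>
        <!-- start moving parts to the next volume when the hot
             volume is ~80% full -->
        <move_factor>0.2</move_factor>
      </tiered>
    </policies>
  </storage_configuration>
</clickhouse>
```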
I have no reason to believe this is possible with MongoDB, but thought that if it were, someone in your group might know.
Keep up the great work, and thanks again.
Hello @Michael_Shield ,
MongoDB does not support such tiering, unfortunately. Perhaps once it moves deeper into analytics we will see something like that.