Helm operator usage with openebs lvm-localpv storage class

Ongoing issue,

I've been asked to change our existing config to be single-sharded and to see if we could expand the associated storage from its initial 2 TB per pod.

I’ve had a lot of help both here and from openebs, and have managed to adjust my test setups to run as required, with value files calling the localpv openebs storage class. Further tests were then completed using the lvm-localpv storage class to demonstrate working online volume expansion.
What I'd like to do now is combine the two, which is where the next roadblock is. I suspect that a small adjustment is required somewhere, as the LVs created using this storage class are mounted on the actual pod in my test. Just switching the storage class in the value file doesn't work: the first of each set of three PVCs starts but is never actually created.

So a couple of questions: first, is this a valid local storage class, and second, if so, what alterations are required to get it to work with the Helm operator? I've installed the latest version of the Percona operator, 1.15, and have openebs 3.9 installed as well. Apologies for all the questions, but I've spent the best part of the last few days looking for an example and so far haven't found one.

Any pointers much appreciated,

Thanks,
Mike

Hello @Michael_Shield ,

thanks for raising it.

Sorry, but I need a bit more clarity here.

  1. You have a running cluster
  2. You want to change a storage class for it on the fly

If that is the situation, then please have a look at this blog post: Change Storage Class on Kubernetes on the Fly

TLDR - a new storage class implies a completely new volume in the k8s world. So it means provisioning a new Persistent Volume, syncing the data, and proceeding with the next node.
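Very roughly, the per-node flow looks something like the sketch below. This is an illustration only - the PVC and pod names are assumptions based on a typical psmdb-db deployment, and the blog post has the authoritative procedure.

# Illustration only: names assume a psmdb-db release called mongodb-clu1 in the
# mongodb namespace, and that the volume claim template already points at the
# new storage class. Follow the blog post for the exact, supported steps.
# Delete the PVC and pod of ONE replica set member; it is re-provisioned with
# the new storage class and MongoDB replication re-syncs the data.
kubectl -n mongodb delete pvc mongod-data-mongodb-clu1-psmdb-db-rs0-2 --wait=false
kubectl -n mongodb delete pod mongodb-clu1-psmdb-db-rs0-2

# Wait for the member to rejoin and finish syncing before repeating with the next one.
kubectl -n mongodb get pods -w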

If I misunderstood you completely, please let me know. We can jump on a quick call to discuss your use case in detail.

Hi Sergey,

Clarification - no, this is a test cluster - destroy and rebuild as required. I’m attempting to modify the cluster params to use a different storage class, but we are starting with a blank canvas.

As you have to prebuild the volume group, there is a 20 GB volume available on all four cluster members. The expansion tests created a 1 GB LV, which was then expanded to 4 GB simply by editing the PVC definition file and reapplying it with kubectl.
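For anyone following along, that expansion was nothing more exotic than raising spec.resources.requests.storage on the PVC and re-applying it; a kubectl patch does the same thing. A minimal sketch, assuming a test PVC named csi-lvmpv (the name used in the openebs example, so adjust to yours):

# Grow the requested size; with lvm-localpv and allowVolumeExpansion: true the
# LV and its filesystem are resized online, with no pod restart needed.
kubectl patch pvc csi-lvmpv --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'

# Watch the PVC until the new capacity is reported.
kubectl get pvc csi-lvmpv -w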

This test cluster defines 3 x 5 GB PVCs for rs0 and 3 x 3 GB PVCs for cfg. I've successfully run this using hostpath localpv as the storage class, and have tried to switch in lvm-localpv instead, as it allows volume expansion.

I'm sure some tweak will be required to get this to work, but I can't find any hints as to what that might be. Hope that makes it a little clearer.

Mike

Sorry, I’m still a bit confused about this part:

As you have to prebuild the volume group, there is a 20 GB volume available on all four cluster members. The expansion tests created a 1 GB LV, which was then expanded to 4 GB simply by editing the PVC definition file and reapplying it with kubectl.

This test cluster defines 3 x 5 GB PVCs for rs0 and 3 x 3 GB PVCs for cfg. I've successfully run this using hostpath localpv as the storage class, and have tried to switch in lvm-localpv instead, as it allows volume expansion.

You have a cluster that was using localpv. Now you want to deploy a similar cluster, but with the lvm-localpv storage class? Are you just looking for a way to set it up properly?

Can you please share your kubectl get sc output and the values.yaml for our Operator?

Not quite.

So far there have been three stages, all on the same cluster, which is made up of Hetzner cloud instances with an additional 20 GB of storage on each.

Stage 1 - Followed the Helm install example; it creates a replica set with 5 GB of storage on 3 cluster members. The openebs hostpath localpv storage class is used, and the replica set storage is mounted on the node.

Stage 2 - Helm install reworked to use sharding. In addition to the 3 replica set PVCs, a 3 GB config PVC is added on each of the 3 nodes. Storage class details are passed in a values file.

Stage 3 - Asked to look at using expandable storage. Chose openebs lvm-localpv as the storage class, reformatted the 20 GB storage to suit its requirements, and created a volume group.
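For context, the node preparation for this stage was roughly the following, run on each member (a sketch assuming the /dev/sdc device and lvmvg volume group names that appear later in this thread):

# WARNING: destroys whatever is on the device. Run on every node that will host LVs.
umount /mnt/mongo-cluster-ssd   # only if the 20 GB device was previously mounted here
wipefs -a /dev/sdc              # clear the old filesystem signature
pvcreate /dev/sdc               # register the device as an LVM physical volume
vgcreate lvmvg /dev/sdc         # volume group name the storage class will reference
vgs                             # confirm lvmvg shows roughly 20 GB free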

Successfully tested the LVM expansion example provided by openebs. Note that this is pod-attached storage now.

Now I'm trying to combine the two by switching the storage class in the values file to one based on lvm-localpv.

Helm install runs, and now nothing appears to happen at all in the mongodb namespace - no pods, no PVCs, etc. No obvious logs either.

I hope that makes sense, but I can provide all the details if required.

Given that this storage class is rather different, I did expect some issues would arise before it works. I really just need some assurance that this can be done, and some hints on how to go about it.

Thanks,

Mike

Please share the steps to reproduce: what you deploy, which lines you change and how, and how you apply the changes.

Then please share the resulting CR manifest.

The way I understood your explanation is the following:

  1. Stage 1 - deploy the cluster, replica set, 3 nodes, local pv storage class
  2. Stage 2 - Update the same cluster to sharded, now you have configsrv and 3 replica sets
  3. Stage 3 - Not sure what happened from the Operator's perspective here. Did you just change the storage class in the values.yaml and apply it? If so, please look at Change Storage Class on Kubernetes on the Fly

Thanks Sergey,

You have Stage 1 and 2 correct. It's the next part that's proving tricky.

For those two stages, the operator is deploying PVCs onto a filesystem mounted at a defined mount point. This is a 20 GB chunk attached to all cluster nodes, so 4 in total.

I can't do an "on the fly" change, as the filesystem has to be unmounted and the device then processed through the pvcreate and vgcreate commands to create a volume group that can be referenced by the lvm-localpv storage class.

From what I can see, we define the size of the LV in the same way as before, but the resulting PVC, when created, is then mounted on the pod, at a default path of /datadir.
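To make that concrete, the standalone test looks roughly like the sketch below (names are made up, the mount path mirrors the openebs example, and the openebs-lvmpv storage class is the one shared further down the thread):

# Standalone check that an lvm-localpv PVC binds and mounts inside a pod.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-test-pvc
spec:
  storageClassName: openebs-lvmpv
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: lvm-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /datadir
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: lvm-test-pvc
EOF

kubectl get pvc lvm-test-pvc                 # should end up Bound
kubectl exec lvm-test-pod -- df -h /datadir  # the LV appears mounted inside the pod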

I have my doubts that this can work, as I don’t see that the operator as it stands would be expecting that.

I'll fill out the details next, so hang on a little while. Hope this is making sense.

Mike

Sergey,

This is how we started off -

helm install mongodb-clu1 percona/psmdb-db --version 1.13.0 --namespace mongodb \
  --set "replsets[0].volumeSpec.pvc.storageClassName=local-hostpath-mongo-slow-sc" \
  --set "replsets[0].name=rs0" \
  --set "replsets[0].size=3" \
  --set "replsets[0].volumeSpec.pvc.resources.requests.storage=2000Gi" \
  --set backup.enabled=true \
  --set sharding.enabled=false \
  --set pmm.enabled=true

This is our production install command, where we chose to use the slower, high-capacity HDD-based storage class. It developed on the test cluster as follows:

helm install mongodb-clu1 percona/psmdb-db --version 1.14.0 --namespace mongodb -f ms-mdb-values.yaml

root@kube-1:~# more ms-mdb-values.yaml
allowUnsafeConfigurations: true
sharding:
  enabled: true
  mongos:
    size: 3
  configrs:
    volumeSpec:
      pvc:
        storageClassName: local-hostpath-mongo-prod-sc
replsets:
  - name: rs0
    size: 3
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    volumeSpec:
      pvc:
        storageClassName: local-hostpath-mongo-prod-sc
        resources:
          requests:
            storage: 5Gi
backup:
  enabled: true
pmm:
  enabled: false

root@kube-1:~# cat local-hostpath-mongo-prod-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-mongo-prod-sc
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /mnt/mongo-cluster-ssd
provisioner: openebs.io/local
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: kubernetes.io/hostname
        values:
          - kube-1
          - kube-2
          - kube-3

The filesystem mounted at /mnt/mongo-cluster-ssd is the 20 GB device mentioned previously. It's in the values file that I had assistance getting the configrs section correct.

This produced a working sharded cluster - so the next step was to switch the storage class

There are two sets of value files, just to pick up a slight change in the storage class definition - a possible openebs issue which is open with them, so the reclaim policy was removed from the second one.

ms-mdb-lvm-values.yaml

allowUnsafeConfigurations: true
sharding:
  enabled: true
  mongos:
    size: 3
  configrs:
    volumeSpec:
      pvc:
        storageClassName: openebs-lvmpv
replsets:
  - name: rs0
    size: 3
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    volumeSpec:
      pvc:
        storageClassName: openebs-lvmpv
        resources:
          requests:
            storage: 5Gi
backup:
  enabled: true
pmm:
  enabled: false

ms-mdb-lvm-int-values.yaml

allowUnsafeConfigurations: true
sharding:
  enabled: true
  mongos:
    size: 3
  configrs:
    volumeSpec:
      pvc:
        storageClassName: test-immediate-lvm
replsets:
  - name: rs0
    size: 3
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    volumeSpec:
      pvc:
        storageClassName: test-immediate-lvm
        resources:
          requests:
            storage: 5Gi
backup:
  enabled: true
pmm:
  enabled: false

storage classes

lvm-localpv-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
allowVolumeExpansion: true
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
reclaimPolicy: Retain
allowedTopologies:

lvm-localpv-int-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-immediate-lvm
allowVolumeExpansion: true
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
allowedTopologies:
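For completeness, a quick way to confirm both classes registered with expansion enabled:

# Both classes should show the local.csi.openebs.io provisioner and
# ALLOWVOLUMEEXPANSION set to true.
kubectl get sc openebs-lvmpv test-immediate-lvm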

root@kube-1:~# pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/sdc   lvmvg lvm2 a--  <20.00g <20.00g

root@kube-1:~# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  lvmvg   1   0   0 wz--n- <20.00g <20.00g

root@kube-1:~# lvs
root@kube-1:~#

So to the current install command:

helm install mongodb-clu1 percona/psmdb-db --version 1.15.0 --namespace mongodb -f ms-mdb-lvm-int-values.yaml

The operator has been updated to the latest version - result as follows

root@kube-1:~# helm install mongodb-clu1 percona/psmdb-db --version 1.15.0 --namespace mongodb -f ms-mdb-lvm-int-values.yaml
NAME: mongodb-clu1
LAST DEPLOYED: Wed Oct 18 13:24:44 2023
NAMESPACE: mongodb
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:

Percona Server for MongoDB cluster is deployed now. Get the username and password:

ADMIN_USER=$(kubectl -n mongodb get secrets mongodb-clu1-psmdb-db-secrets -o jsonpath="{.data.MONGODB_USER_ADMIN_USER}" | base64 --decode)
ADMIN_PASSWORD=$(kubectl -n mongodb get secrets mongodb-clu1-psmdb-db-secrets -o jsonpath="{.data.MONGODB_USER_ADMIN_PASSWORD}" | base64 --decode)

Connect to the cluster:

kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0 --restart=Never \
  -- mongo "mongodb://${ADMIN_USER}:${ADMIN_PASSWORD}@mongodb-clu1-psmdb-db-mongos.mongodb.svc.cluster.local/admin?ssl=false"
root@kube-1:~#

root@kube-1:~# kubectl -n mongodb get all
No resources found in mongodb namespace.
root@kube-1:~# kubectl -n mongodb get pvc
No resources found in mongodb namespace.
root@kube-1:~# kubectl get pvc -A
No resources found

Not even a Pending this time, so I can't even describe the state of the pods. Hope that helps a little.
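In case it helps anyone hitting the same wall, a few checks that usually show where a request like this is stalling (resource names below are assumptions based on the defaults in this thread, so adjust them):

# Was the PerconaServerMongoDB custom resource created, and what does its status say?
kubectl -n mongodb get psmdb
kubectl -n mongodb describe psmdb mongodb-clu1-psmdb-db

# The operator log usually explains why nothing is being reconciled
# (the deployment name depends on how the operator chart was installed).
kubectl -n mongodb logs deploy/psmdb-operator --tail=100

# Confirm the lvm-localpv CSI driver is running and has discovered the volume group
# (assuming it was installed into the default openebs namespace).
kubectl -n openebs get pods
kubectl -n openebs get lvmnodes -o yaml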

Mike

Folks,

Going to have to think about this, but it's possible some confusion was created during work with openebs to sort out some other issues. Let's just say this is all working as expected, including online volume expansion; the only note to make is to wait a little while for the resize to complete, rather than restarting the pod.
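For anyone repeating the expansion, the wait can be watched rather than guessed at, something along these lines (the namespace and PVC name are assumptions based on this deployment):

# After raising spec.resources.requests.storage on a PVC, watch the capacity change
# instead of restarting the pod.
kubectl -n mongodb get pvc -w

# The events on the PVC show the resize being picked up and completed.
kubectl -n mongodb describe pvc mongod-data-mongodb-clu1-psmdb-db-rs0-0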

More details later on, but I'm taking a break for a day or two. For now, everything seems to be working seamlessly, so hurrah to both teams.