Operator blocks PVC creation when requested storage < PV size (differs from standard Kubernetes behavior)
Description:
The Percona Operator for MongoDB rejects PVC creation when the requested storage is smaller than the target PV’s actual size, even though this is valid in standard Kubernetes.
I am migrating from Bitnami MongoDB to the Percona Operator. My cluster pre-provisions 161Gi PVs, but my original configuration requests 10Gi. In standard Kubernetes, the PVC binds successfully to the larger PV; with the Percona Operator, it fails.
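For reference, a plain PVC like the following binds fine in standard Kubernetes (a minimal sketch using the same labels and storage class as the reproduction below; the PVC name is made up for illustration):

```yaml
# Plain Kubernetes, no operator: a 10Gi request binds to the 161Gi PV, and the
# claim's reported capacity becomes 161Gi after binding. With
# volumeBindingMode: WaitForFirstConsumer, binding happens once a pod mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-rs-data # illustrative name
spec:
  storageClassName: local-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: mongo-rs
```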
This blocks:
- Migration from other MongoDB operators
- HA scaling when nodes have different pre-provisioned PV sizes
- Multi-environment deployments
Steps to Reproduce:
Step 1: Create Pre-Existing PV (161Gi)

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-mongo-rs
  labels:
    app: mongo-rs
spec:
  capacity:
    storage: 161Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-volume
  local:
    path: /data/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-0
```
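A quick sanity check that the PV is in place before deploying (output trimmed for readability):

```shell
kubectl get pv local-pv-mongo-rs
# NAME                CAPACITY   STATUS      STORAGECLASS
# local-pv-mongo-rs   161Gi      Available   local-volume
```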
Step 2: Apply Percona Operator CR (Request 10Gi)

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  crVersion: 1.21.1
  image: percona/percona-server-mongodb:7.0.12-7
  replsets:
    - name: rs0
      size: 1
      volumeSpec:
        persistentVolumeClaim:
          storageClassName: local-volume
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi # requesting 10Gi
          selector:
            matchLabels:
              app: "mongo-rs" # targets the 161Gi PV
```
Step 3: Observe Error

```shell
kubectl apply -f cr.yaml
kubectl get psmdb my-cluster-name
# Expected: ready
# Actual:   error
```
Version:
- Operator Version: 1.21.1
- MongoDB Image: percona/percona-server-mongodb:7.0.12-7
- Kubernetes: v1.28.15+k3s1 (k3s)
- Storage: local PVs (static local provisioning) with storageClassName: local-volume
Logs:
Error from Operator:

```text
ERROR: requested storage (10Gi) is less than actual storage (161Gi)
```

- Location: psmdb_controller.go:478
- Status: error (cluster won't start)

Full Error Message:

```text
ERROR Reconciler error { "controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"my-cluster-name","namespace":"default"}, "namespace": "default", "name": "my-cluster-name", "reconcileID": "7d4bf114-e956-4a8d-a5af-05d150b924dd", "error": "reconcile statefulsets: reconcile replset rs0: reconcile StatefulSet for rs0: reconcile PVCs for my-cluster-name-rs0: resize volumes if needed: requested storage (10Gi) is less than actual storage (161Gi)", "errorVerbose": "reconcile StatefulSet for rs0: reconcile PVCs for my-cluster-name-rs0: resize volumes if needed: requested storage (10Gi) is less than actual storage (161Gi)\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileReplset\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:478\n..."}
```
PSMDB Status:

```shell
$ kubectl describe psmdb my-cluster-name
Status:
  Conditions:
    Message:  Error: reconcile replset rs0: reconcile StatefulSet for rs0: reconcile PVCs for my-cluster-name-rs0: resize volumes if needed: requested storage (10Gi) is less than actual storage (161Gi)
    Reason:   ErrorReconcile
    Status:   True
    Type:     error
  State:      error
```
Expected Result:
Standard Kubernetes allows a PVC to request less than the size of the PV it binds to:

| Action | Standard K8s | Percona Operator |
|---|---|---|
| PVC requests 10Gi, PV is 161Gi | ✓ Binds successfully | ✗ Rejected with an error |
| PVC capacity after binding | 161Gi (actual PV size) | N/A (never created) |

Expected: the PVC binds to the 161Gi PV successfully, with final capacity reported as 161Gi.
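For completeness, this is what a successful bind would look like (illustrative output; the PVC name follows the operator's `mongod-data-<cluster>-<replset>-<ordinal>` naming, which is an assumption on my part):

```shell
kubectl get pvc mongod-data-my-cluster-name-rs0-0
# NAME                                STATUS   VOLUME              CAPACITY   STORAGECLASS
# mongod-data-my-cluster-name-rs0-0   Bound    local-pv-mongo-rs   161Gi      local-volume
```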
Actual Result:
- The operator pre-validates the request against the actual PV size (requested < actual)
- Validation fails → an error is returned
- The PVC is never created (verified below)
- The cluster is stuck in the “error” state
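This is easy to confirm from the reproduction: no PVC exists and the PV stays Available:

```shell
kubectl get pvc
# No resources found in default namespace.
kubectl get pv local-pv-mongo-rs -o jsonpath='{.status.phase}'
# Available
```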
This prevents:
- ✗ Migration from Bitnami with original configs
- ✗ HA scaling when nodes have different PV sizes (illustrated below)
- ✗ Multi-environment deployments with varying storage
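To make the HA point concrete, here is a hypothetical three-node layout (capacities invented for illustration) where no single `storage:` request can equal every node's PV size:

```shell
kubectl get pv -l app=mongo-rs
# NAME                  CAPACITY   STATUS      STORAGECLASS
# local-pv-mongo-rs-0   161Gi      Available   local-volume
# local-pv-mongo-rs-1   200Gi      Available   local-volume
# local-pv-mongo-rs-2   180Gi      Available   local-volume
```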
Additional Information:
Attempted Workarounds
1. Manual Scaling Approach
   Percona docs: Manual scaling without Volume Expansion
   Result: didn’t work - the operator validates before creating any PVCs
2. Delete StatefulSet Approach
   Forum thread: PVC size reconcile errors
   Result: didn’t work - the validation happens inside the reconciliation loop (sketch below)
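Sketch of what this approach amounts to (the exact recipe in the thread may differ):

```shell
# Delete the StatefulSet but orphan its pods/PVCs, then let the operator
# recreate it on the next reconcile.
kubectl delete statefulset my-cluster-name-rs0 --cascade=orphan
# The recreated reconcile runs the same size validation and fails again
# before any PVC is created.
```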
3. Hardcode Exact PV Size
   `storage: 161Gi # must match the actual PV size`
   Result: works, but problematic:
   - Only works for a single node
   - Blocks HA scaling with different PV sizes per node
   - Requires querying each PV's size before deployment (see the sketch below)
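The querying step looks roughly like this (a sketch; it assumes the label from Step 1 and a single matching PV per environment):

```shell
# Look up the actual PV capacity...
PV_SIZE=$(kubectl get pv -l app=mongo-rs \
  -o jsonpath='{.items[0].spec.capacity.storage}')   # -> 161Gi

# ...and substitute it into the CR before applying.
sed "s/storage: 10Gi/storage: ${PV_SIZE}/" cr.yaml | kubectl apply -f -
```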
Questions for the Team
1. Is there a parameter to disable this validation?
   - Looking for `unsafeFlags.skipPVCSizeValidation` or similar (sketched below)
   - Checked the CR docs, but couldn’t find anything
2. Is there a workaround without deleting PVs or hardcoding sizes?
   - Standard K8s allows PVC request < PV capacity
   - This blocks legitimate migration scenarios
3. Why enforce this at the operator level?
   - I understand preventing shrinking (data loss)
   - But why block initial creation when request < PV size?
4. Can this be made optional?
   - Enterprise migrations need pre-existing PV reuse
   - Declarative deployments need flexible configurations
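To be explicit about question 1, this is the kind of escape hatch I was hoping to find (hypothetical; as far as I can tell, no such flag exists in the 1.21.1 CRD):

```yaml
spec:
  unsafeFlags:
    skipPVCSizeValidation: true # hypothetical flag, does not exist today
```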
Impact
- Migration friction: cannot reuse pre-existing PVs with original configurations
- HA blocking: cannot scale to 3 nodes when PVs have different sizes
- Helm complexity: must query the PV size dynamically before install (example below)
- Multi-environment: the same chart cannot work across clusters
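For Helm, the same lookup has to be wired into every install (the chart value path below is my assumption; verify it against the psmdb-db chart version in use):

```shell
PV_SIZE=$(kubectl get pv -l app=mongo-rs \
  -o jsonpath='{.items[0].spec.capacity.storage}')
helm upgrade --install my-cluster-name percona/psmdb-db \
  --set "replsets.rs0.volumeSpec.pvc.resources.requests.storage=${PV_SIZE}"
```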
Environment Details

```shell
# Kubernetes version
$ kubectl version
Client Version: v1.28.15+k3s1
Server Version: v1.28.15+k3s1

# Storage class
$ kubectl get storageclass local-volume
NAME           PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
local-volume   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false

# Pre-existing PV
$ kubectl get pv -l app=mongo-rs
NAME                    CAPACITY   STATUS      STORAGECLASS
local-pv-mongo-rs-...   161Gi      Available   local-volume
```
Looking for Guidance
Any help would be appreciated:
- Does a config option exist to bypass this validation?
- Is there a workaround I’m missing?
- Is this intended behavior? If so, what is it protecting against?
- Can this be made optional for migration scenarios?
This differs from standard Kubernetes behavior and creates significant challenges for enterprise migrations.
Thank you for your time and help!
Current Status: Temporarily using hardcoded PV size (161Gi) in CR, but this blocks HA scaling and multi-environment deployments.