I tried this again, this time spinning up a new k3s cluster to see if I could replicate the issue that happened on RKE2.
Installed the latest Everest.
Installed a single-node MySQL with 1 CPU, 2 GB RAM, and a 25 GB disk:
apiVersion: everest.percona.com/v1alpha1
kind: DatabaseCluster
metadata:
  creationTimestamp: '2024-08-17T06:36:52Z'
  finalizers:
    - everest.percona.com/upstream-cluster-cleanup
    - foregroundDeletion
  generation: 4
  labels:
    clusterName: mysql-u3b
....
replicas: 1
resources:
  cpu: '1'
  memory: 2G
storage:
  class: local-path
  size: 25G
type: pxc
userSecretsName: everest-secrets-mysql-u3b
version: 8.0.36-28.1
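A quick unit note (my own arithmetic, not Everest output): the spec uses decimal G (10^9 bytes), so the requested 2G is about 1.86 GiB. This helps when comparing against the quantities the operator writes into the PXC CR below.

print(2e9 / 2**30)  # ~1.86 GiB requested as 2G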
When checking the PerconaXtraDBCluster CR:
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  ...
pxc:
  # skipping config since it is the default set by the operator
  image: percona/percona-xtradb-cluster:8.0.36-28.1
  lifecycle: {}
  livenessProbes:
    timeoutSeconds: 450
  podDisruptionBudget:
    maxUnavailable: 1
  readinessProbes:
    timeoutSeconds: 450
  resources:
    limits:
      cpu: 600m
      memory: 1825361100800m
    requests:
      cpu: 570m
      memory: 1728724336640m
  serviceType: ClusterIP
  sidecarResources: {}
  size: 1
  volumeSpec:
    persistentVolumeClaim:
      resources:
        requests:
          storage: 25G
      storageClassName: local-path
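To make sense of those memory values: the trailing "m" in a Kubernetes quantity means milli, so 1825361100800m is 1,825,361,100.8 bytes. A small sketch (my own helper, not part of Everest or kubectl) to convert them:

GIB = 2**30

def milli_bytes_to_gib(quantity: str) -> float:
    # e.g. "1825361100800m" -> 1825361100800 / 1000 bytes -> GiB
    assert quantity.endswith("m")
    return float(quantity[:-1]) / 1000 / GIB

print(milli_bytes_to_gib("1825361100800m"))  # 1.70 GiB limit, although 2G (~1.86 GiB) was requested
print(milli_bytes_to_gib("1728724336640m"))  # 1.61 GiB request

So a 1 CPU / 2G request ends up as a 600m CPU / ~1.7 GiB memory limit on the pxc container.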
Now uninstall this MySQL instance.
Install a single-node MySQL with 6 CPUs, 16 GB RAM, and a 50 GB disk:
apiVersion: everest.percona.com/v1alpha1
kind: DatabaseCluster
labels:
  clusterName: mysql-37y
...
engine:
  config: ....
  replicas: 1
  resources:
    cpu: '6'
    memory: 16G
  storage:
    class: local-path
    size: 50G
  type: pxc
  userSecretsName: everest-secrets-mysql-37y
  version: 8.0.36-28.1
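Same unit note as before, for this spec:

print(16e9 / 2**30)  # ~14.9 GiB requested as 16G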
In the PerconaXtraDBCluster CR we then have:
resources:
  limits:
    cpu: 3200m
    memory: 7Gi
  requests:
    cpu: 3040m
    memory: 7140383129600m
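Decoding the milli-byte request the same way as above:

# CPU limit is 3200m even though 6 CPUs (6000m) were requested
print(7140383129600 / 1000 / 2**30)  # ~6.65 GiB request, 7Gi limit, although 16G (~14.9 GiB) was asked for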
I even checked the node allocations (kubectl describe node xxx) just to be sure:
Resource           Requests               Limits
--------           --------               ------
cpu                4117m (25%)            5080m (31%)
memory             8803424665600m (13%)   9434Mi (14%)   ## only ~7 GB of this is the MySQL request; the remaining ~2 GB is from other workloads in the cluster
ephemeral-storage  0 (0%)                 0 (0%)
hugepages-1Gi      0 (0%)                 0 (0%)
hugepages-2Mi      0 (0%)                 0 (0%)
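Decoding the node-level memory request the same way (again my own arithmetic):

print(8803424665600 / 1000 / 2**30)  # ~8.2 GiB of requests on the node; ~6.65 GiB of that is the MySQL pod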
Interestingly, when I request 12 CPUs and 52 GB RAM with an 80 GB disk, the resulting resources are:
resources:
  limits:
    cpu: 3200m
    memory: 28Gi   # <-- seems capped at 28Gi, not the 52G requested
  requests:
    cpu: 3040m
    memory: 28561532518400m
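Decoding the request value again:

print(28561532518400 / 1000 / 2**30)  # ~26.6 GiB request, 28Gi limit, although 52G (~48.4 GiB) was requested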
Checked on the node too:
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests                Limits
  --------           --------                ------
  cpu                4403m (27%)             5381m (33%)
  memory             32458208706560m (49%)   33190Mi (52%)
  ephemeral-storage  0 (0%)                  0 (0%)
  hugepages-1Gi      0 (0%)                  0 (0%)
  hugepages-2Mi      0 (0%)                  0 (0%)
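And decoding this node's numbers:

print(32458208706560 / 1000 / 2**30)  # ~30.2 GiB of requests on the node; ~26.6 GiB of that is the MySQL pod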
As I understand it, this is clearly a bug in the logic that assigns resources: the memory limit appears to be capped to certain values in ranges, and the CPU limit is stuck at 3200m whenever more than 4 CPUs are requested.
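To summarize the three runs (numbers taken from the CRs above; the GiB conversions are my own):

# requested (DatabaseCluster spec) -> limit applied by the operator (PXC CR)
cases = [
    ("1 CPU / 2G (~1.86 GiB)",   "600m / ~1.70 GiB"),
    ("6 CPU / 16G (~14.9 GiB)",  "3200m / 7Gi"),
    ("12 CPU / 52G (~48.4 GiB)", "3200m / 28Gi"),
]
for requested, applied in cases:
    print(f"requested {requested:26} -> limit {applied}")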
By the way, I tested on a standard k3s cluster (IPv6 enabled) with Cilium, on a single AWS EC2 m6a.4xlarge instance.
cc @Sergey_Pronin @silent