@vadimtk PXC and the other containers do not need more than 100m CPU to run, but that is without load.
But I don’t understand how the CPU requirements for the containers relate to the error "message: ‘0/6 nodes are available: 6 Insufficient cpu.’".
The Insufficient cpu error indicates that there are not enough resources on the k8s nodes to accommodate the containers. The scheduler makes this calculation based on the containers’ requests.
About 3x requests
@Rub_Av in the PXC pod we have 3 containers:
1) PXC itself. Requests are set in the pxc.resources.requests section. I assume you set cpu: 1500m there. Right?
2 and 3) The logs and logrotate containers. They keep the logs of the PXC container locally on the PVC in case the Pod crashes, which is useful for debugging.
Both containers have equal requests, which are set in the logcollector.resources.requests section. You can disable the log collector by setting logcollector.enabled: false in the CR (see the sketch below).
We also have an init container, but it consumes requests only while the Pod is starting.
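For reference, this is roughly how those sections look in the Custom Resource. A minimal sketch with only the CPU requests discussed in this thread (memory and the other fields are omitted, and the exact layout may differ slightly between operator versions):

```yaml
spec:
  pxc:
    resources:
      requests:
        cpu: 1500m     # request for the PXC container only
  logcollector:
    enabled: true      # set to false to remove the logs and logrotate containers
    resources:
      requests:
        cpu: 500m      # applied to each of the two log containers
```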
We do not set 3x requests for PXC, and you can verify it by describing the Pod.
This is how much the PXC pod requests in total if I set 1500m CPU for PXC and 500m CPU for the log collector:
```
Namespace  Name            CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------  ----            ------------  ----------  ---------------  -------------  ---
pxc        cluster1-pxc-0  2500m (63%)   0 (0%)      1400M (10%)      0 (0%)         13m
```
I also see you have ProxySQL enabled. That pod also has 3 containers, two of which are for monitoring. Only one container consumes requests: proxysql itself.
This is how much the ProxySQL pod requests when I set 1500m in proxysql.resources.requests.cpu:
```
Namespace  Name                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------  ----                 ------------  ----------  ---------------  -------------  ---
pxc        cluster1-proxysql-0  1500m (38%)   0 (0%)      1G (7%)          0 (0%)         15m
```
As you can see, it is exactly 1.5 CPUs.
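The ProxySQL side is controlled the same way; a minimal sketch of just that CR section with the value used above:

```yaml
spec:
  proxysql:
    resources:
      requests:
        cpu: 1500m     # only the proxysql container gets this request
```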
Insufficient CPU
I have 3 x 4-CPU nodes in GKE and tried setting the following requests:
- PXC 1500m
- logcollector 500m
- ProxySQL 1500m
And the pods are not being scheduled. If I look into kubectl describe
of one of the nodes, I see the following:
```
Namespace    Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------    ----                                                        ------------  ----------  ---------------  -------------  ---
kube-system  event-exporter-gke-564fb97f9-4nrvf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14h
kube-system  fluentbit-gke-ddcrs                                         100m (2%)     0 (0%)      200Mi (1%)       500Mi (4%)     47h
kube-system  gke-metrics-agent-t42gc                                     3m (0%)       0 (0%)      50Mi (0%)        50Mi (0%)      47h
kube-system  kube-dns-c598bd956-tmzd7                                    260m (6%)     0 (0%)      110Mi (0%)       210Mi (1%)     47h
kube-system  kube-proxy-gke-sergey-26338-default-pool-d72e912e-dmkn      100m (2%)     0 (0%)      0 (0%)           0 (0%)         47h
kube-system  metrics-server-v0.3.6-7b5cdbcbb8-frgm5                      48m (1%)      143m (3%)   105Mi (0%)       355Mi (2%)     47h
kube-system  pdcsi-node-tff4f                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         47h
kube-system  stackdriver-metadata-agent-cluster-level-67859c6554-z8vmn   98m (2%)      48m (1%)    202Mi (1%)       202Mi (1%)     14h
pxc          cluster1-proxysql-1                                         1500m (38%)   0 (0%)      1G (7%)          0 (0%)         21m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests          Limits
  --------                   --------          ------
  cpu                        2109m (53%)       191m (4%)
  memory                     1699400192 (13%)  1317Mi (10%)
  ephemeral-storage          0 (0%)            0 (0%)
  hugepages-2Mi              0 (0%)            0 (0%)
  attachable-volumes-gce-pd  0                 0
Events:                      <none>
```
- I have 3920m CPU allocatable
- existing pods already request 2109m CPU (proxysql 1500m + the pods that GKE deploys automatically)
- so I have 3920m - 2109m = 1811m free
But I need 2500m CPU for the PXC pod: 1500m for PXC + (500m + 500m) for the log collector containers, and 2500m > 1811m. That is why I get Insufficient cpu on this node for the PXC pod.
I believe you have a similar story.
To debug it, I recommend you run kubectl describe node <node>
and review the request consumption on the nodes one by one.
To reiterate
- we do not set 3x requests for PXC pods, and this can easily be checked by describing the Pod
- the log collector adds 2 containers to the PXC pod, and they can be disabled
- the ProxySQL pod has 3 containers, but only one of them consumes requests