Unable to create XtraDB cluster and HAProxy when a quota is created

We are using a multitenant environment in a Kubernetes cluster, creating many XtraDB clusters in different namespaces, and trying to restrict resource utilization using a quota as mentioned in the link:

Whenever a quota is present, the Percona XtraDB Operator is unable to create the cluster and HAProxy pods, and the pxc resource stays in the initializing state.

Please verify and advise.

Thanks,
Sivapriya.


Please find the below quota details for your reference:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota
      namespace: test-namespace
    spec:
      hard:
        limits.cpu: "7.2"
        limits.memory: 30Gi
        requests.cpu: "2"
        requests.memory: 6Gi
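For reference, the quota can be applied and verified like this (assuming the manifest above is saved as quota.yaml):

    $ kubectl apply -f quota.yaml
    $ kubectl describe quota quota -n test-namespace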


Hello Team,

Any updates on this issue? We are still facing the same problem even after increasing the quota to requests.memory: 10Gi and requests.cpu: 5.

Kindly provide an update.

Thanks
Sivapriya.


Hello @sivapriyas ,

Could you please check the state of the StatefulSet? I assume you would see something like this in kubectl describe sts cluster1-pxc:

Events:
  Type     Reason            Age                 From                    Message
  ----     ------            ----                ----                    -------
  Normal   SuccessfulCreate  89s                 statefulset-controller  create Claim datadir-cluster1-pxc-0 Pod cluster1-pxc-0 in StatefulSet cluster1-pxc success
  Warning  FailedCreate      58s (x24 over 89s)  statefulset-controller  create Pod cluster1-pxc-0 in StatefulSet cluster1-pxc failed error: pods "cluster1-pxc-0" is forbidden: failed quota: quota: must specify limits.cpu,limits.memory

This indicates that you need to specify both requests and limits in your cr.yaml. For example in spec.pxc:

    resources:
      requests:
        memory: 1G
        cpu: 600m
      limits:
        memory: 1G
        cpu: 600m

The same is valid for all other containers and other StatefulSets. If you use HAProxy, make sure that you set sidecarResources properly; it is used by the pxc-monit container.
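For instance, a minimal sketch of the haproxy section with sidecarResources set (the values here are illustrative only, not a recommendation):

    haproxy:
      # ... other haproxy settings ...
      sidecarResources:
        requests:
          memory: 100M   # illustrative values; size for your workload
          cpu: 100m
        limits:
          memory: 100M
          cpu: 100m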


Hi Team,
Thanks for your update!

In my scenario, if a quota.yaml is applied and I then try to apply the cr.yaml, I am seeing the behaviour below:

$ kubectl get pxc
NAME          ENDPOINT   STATUS   PXC   PROXYSQL   HAPROXY   AGE
enhancement                                                  11s

$ kubectl describe sts cluster1-pxc
Error from server (NotFound): statefulsets.apps "cluster1-pxc" not found

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
percona-xtradb-cluster-operator-566848cf48-8x9l9   1/1     Running   0          93s

The cluster itself stays in the initializing state, and no other pods are created no matter how long I wait.

After removing the quota.yaml and applying the cr.yaml again, it creates the pxc and all required pods as expected. My query is whether namespace-level quota creation and allocation is possible, and whether the operator will accept it?

Please confirm.


Could you please run

    kubectl get pxc
    kubectl get sts

in the namespace with the quota?


Here you go:

$ kubectl get pxc
NAME          ENDPOINT                                 STATUS         PXC   PROXYSQL   HAPROXY   AGE
enhancement   enhancement-haproxy.dbaas-enhancement   initializing                              11m

$ kubectl get sts
NAME                  READY   AGE
enhancement-haproxy   0/3     10m
enhancement-pxc       0/3     10m


So you are using a different cluster name and trying to describe a StatefulSet which is not there.

Please do kubectl describe sts enhancement-pxc.


$ kubectl describe sts enhancement-pxc
Name:               enhancement-pxc
Namespace:          dbaas-enhancement
CreationTimestamp:  Wed, 19 Jan 2022 14:18:24 +0200
Selector:           app.kubernetes.io/component=pxc,app.kubernetes.io/instance=enhancement,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
Labels:
Annotations:        percona.com/last-config-hash:
                      eyJyZXBsaWNhcyI6Mywic2VsZWN0b3IiOnsibWF0Y2hMYWJlbHMiOnsiYXBwLmt1YmVybmV0ZXMuaW8vY29tcG9uZW50IjoicHhjIiwiYXBwLmt1YmVybmV0ZXMuaW8vaW5zdGFuY2
Replicas:           3 desired | 0 total
Update Strategy:    OnDelete
Pods Status:        0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app.kubernetes.io/component=pxc
                app.kubernetes.io/instance=enhancement
                app.kubernetes.io/managed-by=percona-xtradb-cluster-operator
                app.kubernetes.io/name=percona-xtradb-cluster
                app.kubernetes.io/part-of=percona-xtradb-cluster
                customer=enhancement
                environment=production
  Annotations:  percona.com/configuration-hash: e6cf517e3f78a4f2d95dd0850b92d26f
                percona.com/ssl-hash: 851b16b61ba24215e3b32e18d5181177
                percona.com/ssl-internal-hash: d3f75fd6f8d6553df7d5e65e0e39a244
  Service Account:  default
  Init Containers:
   pxc-init:
    Image:      percona/percona-xtradb-cluster-operator:1.10.0
    Port:
    Host Port:
    Command:
      /pxc-init-entrypoint.sh
    Limits:
      cpu:     1
      memory:  8G
    Requests:
      cpu:     600m
      memory:  2G
    Environment:
    Mounts:
      /var/lib/mysql from datadir (rw)
  Containers:
   pmm-client:
    Image:       percona/pmm-client:2.23.0
    Ports:       7777/TCP, 30100/TCP, 30101/TCP, 30102/TCP, 30103/TCP, 30104/TCP, 30105/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Requests:
      cpu:     300m
      memory:  150M
    Liveness:  http-get http://:7777/local/Status delay=60s timeout=5s period=10s #success=1 #failure=3
    Environment Variables from:
      enhancement-env-vars-pxc  Secret  Optional: true
    Environment:
      PMM_SERVER:                     pmm-enhancement.dbaas.net:20000
      PMM_USER:                       admin
      PMM_PASSWORD:                   <set to the key 'pmmserver' in secret 'internal-enhancement'>  Optional: false
      CLIENT_PORT_LISTEN:             7777
      CLIENT_PORT_MIN:                30100
      CLIENT_PORT_MAX:                30105
      POD_NAME:                       (v1:metadata.name)
      POD_NAMESPASE:                  (v1:metadata.namespace)
      PMM_AGENT_SERVER_ADDRESS:       pmm-enhancement.dbaas.net:20000
      PMM_AGENT_SERVER_USERNAME:      admin
      PMM_AGENT_SERVER_PASSWORD:      <set to the key 'pmmserver' in secret 'internal-enhancement'>  Optional: false
      PMM_AGENT_LISTEN_PORT:          7777
      PMM_AGENT_PORTS_MIN:            30100
      PMM_AGENT_PORTS_MAX:            30105
      PMM_AGENT_CONFIG_FILE:          /usr/local/percona/pmm2/config/pmm-agent.yaml
      PMM_AGENT_SERVER_INSECURE_TLS:  1
      PMM_AGENT_LISTEN_ADDRESS:       0.0.0.0
      PMM_AGENT_SETUP_NODE_NAME:      $(POD_NAMESPASE)-$(POD_NAME)
      PMM_AGENT_SETUP_METRICS_MODE:   push
      PMM_AGENT_SETUP:                1
      PMM_AGENT_SETUP_FORCE:          1
      PMM_AGENT_SETUP_NODE_TYPE:      container
      DB_TYPE:                        mysql
      DB_USER:                        monitor
      DB_PASSWORD:                    <set to the key 'monitor' in secret 'internal-enhancement'>  Optional: false
      DB_ARGS:                        --query-source=perfschema
      DB_CLUSTER:                     pxc
      DB_HOST:                        localhost
      DB_PORT:                        33062
      CLUSTER_NAME:                   enhancement
      PMM_ADMIN_CUSTOM_PARAMS:
      PMM_AGENT_PRERUN_SCRIPT:        pmm-admin status --wait=10s;
                                      pmm-admin add $(DB_TYPE) $(PMM_ADMIN_CUSTOM_PARAMS) --skip-connection-check --metrics-mode=push --username=$(DB_USER) --password=$(DB_PASSWORD) --cluster=$(CLUSTER_NAME) --service-name=$(PMM_AGENT_SETUP_NODE_NAME) --host=$(POD_NAME) --port=$(DB_PORT) $(DB_ARGS);
                                      pmm-admin annotate --service-name=$(PMM_AGENT_SETUP_NODE_NAME) 'Service restarted'
      PMM_AGENT_SIDECAR:              true
      PMM_AGENT_SIDECAR_SLEEP:        5
    Mounts:
      /var/lib/mysql from datadir (rw)
   logs:
    Image:      percona/percona-xtradb-cluster-operator:1.10.0-logcollector-8.0.25
    Port:
    Host Port:
    Requests:
      cpu:     200m
      memory:  100M
    Environment Variables from:
      my-log-collector-secrets  Secret  Optional: true
    Environment:
      LOG_DATA_DIR:   /var/lib/mysql
      POD_NAMESPASE:  (v1:metadata.namespace)
      POD_NAME:       (v1:metadata.name)
    Mounts:
      /var/lib/mysql from datadir (rw)
   logrotate:
    Image:      percona/percona-xtradb-cluster-operator:1.10.0-logcollector-8.0.25
    Port:
    Host Port:
    Args:
      logrotate
    Requests:
      cpu:     200m
      memory:  100M
    Environment:
      SERVICE_TYPE:      mysql
      MONITOR_PASSWORD:  <set to the key 'monitor' in secret 'internal-enhancement'>  Optional: false
    Mounts:
      /var/lib/mysql from datadir (rw)
   pxc:
    Image:       percona/percona-xtradb-cluster:8.0.25-15.1
    Ports:       3306/TCP, 4444/TCP, 4567/TCP, 4568/TCP, 33062/TCP, 33060/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /var/lib/mysql/pxc-entrypoint.sh
    Args:
      mysqld
    Limits:
      cpu:     1
      memory:  8G
    Requests:
      cpu:      600m
      memory:   2G
    Liveness:   exec [/var/lib/mysql/liveness-check.sh] delay=300s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/var/lib/mysql/readiness-check.sh] delay=15s timeout=15s period=30s #success=1 #failure=5
    Environment Variables from:
      enhancement-env-vars-pxc  Secret  Optional: true
    Environment:
      PXC_SERVICE:              enhancement-pxc-unready
      MONITOR_HOST:             %
      MYSQL_ROOT_PASSWORD:      <set to the key 'root' in secret 'internal-enhancement'>  Optional: false
      XTRABACKUP_PASSWORD:      <set to the key 'xtrabackup' in secret 'internal-enhancement'>  Optional: false
      MONITOR_PASSWORD:         <set to the key 'monitor' in secret 'internal-enhancement'>  Optional: false
      LOG_DATA_DIR:             /var/lib/mysql
      IS_LOGCOLLECTOR:          yes
      CLUSTER_HASH:             2448979
      OPERATOR_ADMIN_PASSWORD:  <set to the key 'operator' in secret 'internal-enhancement'>  Optional: false
      LIVENESS_CHECK_TIMEOUT:   5
      READINESS_CHECK_TIMEOUT:  15
    Mounts:
      /etc/my.cnf.d from auto-config (rw)
      /etc/mysql/mysql-users-secret from mysql-users-secret-file (rw)
      /etc/mysql/ssl from ssl (rw)
      /etc/mysql/ssl-internal from ssl-internal (rw)
      /etc/mysql/vault-keyring-secret from vault-keyring-secret (rw)
      /etc/percona-xtradb-cluster.conf.d from config (rw)
      /tmp from tmp (rw)
      /var/lib/mysql from datadir (rw)
  Volumes:
   tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      enhancement-pxc
    Optional:  true
   ssl-internal:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-ssl-internal
    Optional:    true
   ssl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-ssl
    Optional:    false
   auto-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      auto-enhancement-pxc
    Optional:  true
   vault-keyring-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  keyring-secret-vault
    Optional:    true
   mysql-users-secret-file:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  internal-enhancement
    Optional:    false
Volume Claims:
  Name:          datadir
  StorageClass:  openebs-hostpath
  Labels:
  Annotations:
  Capacity:      60G
  Access Modes:  [ReadWriteOnce]
Events:
  Type     Reason        Age                   From                    Message
  ----     ------        ----                  ----                    -------
  Warning  FailedCreate  27s (x1197 over 60m)  statefulset-controller  create Pod enhancement-pxc-0 in StatefulSet enhancement-pxc failed error: pods "enhancement-pxc-0" is forbidden: failed quota: quota: must specify limits.cpu,limits.memory


$ kubectl get quota
NAME    AGE   REQUEST                                            LIMIT
quota   66m   requests.cpu: 100m/5, requests.memory: 20Mi/20Gi   limits.cpu: 200m/15, limits.memory: 500Mi/60Gi


@sivapriyas ,

so as you can see, not all containers in the StatefulSet have limits set.
Please go back to my first reply: you need to set limits in cr.yaml. By default our cr.yaml comes with requests only.
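If it helps, one way to see which containers are missing limits (a sketch; this jsonpath syntax works with recent kubectl versions):

    $ kubectl get sts enhancement-pxc -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.resources.limits}{"\n"}{end}'

Containers that print nothing after their name have no limits set and will be rejected by the quota.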


Hi @Sergey_Pronin

Please find the below snapshot from the cr.yaml file:

    spec:
      pxc:
        size: 3
        labels:
          environment: production
          customer: enhancement
        resources:
          requests:
            memory: 2G
            cpu: 600m
          limits:
            memory: 8G
            cpu: "1"

There are limits mentioned in the file.


Are they applied?
Have you changed it for the logcollector as well (it is a container in the PXC Pod)?

spec:
  logcollector:
    ...
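For example, a minimal sketch of the logcollector section with limits added (the values just mirror the requests you already use and are illustrative, not a recommendation):

    spec:
      logcollector:
        enabled: true
        resources:
          requests:
            memory: 100M
            cpu: 200m
          limits:
            memory: 100M   # illustrative; mirrors the request
            cpu: 200m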

Thanks Spronin for looking at this. Please find below the resource configuration from the cr.yaml:

    apiVersion: pxc.percona.com/v1-10-0
    kind: PerconaXtraDBCluster
    metadata:
      name: enhancement
      finalizers:
        - delete-pxc-pods-in-order
    spec:
      pxc:
        size: 3
        labels:
          environment: production
          customer: enhancement
        resources:
          requests:
            memory: 2G
            cpu: 600m
          limits:
            memory: 8G
            cpu: "1"
        volumeSpec:
          persistentVolumeClaim:
            storageClassName: openebs-hostpath
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 60G
        gracePeriod: 600
      haproxy:
        enabled: true
        size: 3
        image: percona/percona-xtradb-cluster-operator:1.10.0-haproxy
        labels:
          environment: production
          customer: enhancement
        resources:
          requests:
            memory: 1G
            cpu: 600m
        affinity:
          antiAffinityTopologyKey: "kubernetes.io/hostname"
        podDisruptionBudget:
          maxUnavailable: 1
        gracePeriod: 30
      proxysql:
        affinity:
          antiAffinityTopologyKey: "kubernetes.io/hostname"
        volumeSpec:
          persistentVolumeClaim:
            resources:
              requests:
                storage: 2G
        podDisruptionBudget:
          maxUnavailable: 1
        gracePeriod: 30
      logcollector:
        enabled: true
        image: percona/percona-xtradb-cluster-operator:1.10.0-logcollector
        resources:
          requests:
            memory: 100M
            cpu: 200m
      pmm:
        enabled: true
        resources:
          requests:
            memory: 150M
            cpu: 300m
      backup:
        pitr:
          enabled: true
          storageName: s3-storage
          resources:
            requests:
              memory: 100M
              cpu: 100m
        storages:
          s3-storage:
            type: s3
            s3:
              bucket: dbaas-enhancement-backup
              credentialsSecret: my-cluster-name-backup-s3
            resources:
              requests:
                storage: 6G
Can you please let me know whether limits should be added for haproxy, logcollector, pmm, and pitr as well? If so, can the same limits be used, or are there specific recommendations?

By default in the cr.yaml, we see the limits are commented out, with the following values provided:

    haproxy:
      resources:
        limits:
          memory: 1G
          cpu: 700m

whereas for logcollector, pitr, and the other sections, no limits are provided in the cr.yaml file at all.


Yes, you need to set limits for all containers in a Pod. This includes logcollector and the others. Just add limits under the resources section if they are not there already. The syntax is the same as for regular k8s objects.
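For example, a minimal sketch that adds limits to the remaining sections from your cr.yaml (the limit values simply mirror your existing requests and are illustrative, not a sizing recommendation):

    spec:
      haproxy:
        resources:
          requests:
            memory: 1G
            cpu: 600m
          limits:
            memory: 1G     # illustrative; mirrors the request
            cpu: 600m
      logcollector:
        resources:
          requests:
            memory: 100M
            cpu: 200m
          limits:
            memory: 100M
            cpu: 200m
      pmm:
        resources:
          requests:
            memory: 150M
            cpu: 300m
          limits:
            memory: 150M
            cpu: 300m
      backup:
        pitr:
          resources:
            requests:
              memory: 100M
              cpu: 100m
            limits:
              memory: 100M
              cpu: 100m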
