Kubernetes VPA cannot find PXC pods

I am trying to set up Kubernetes vertical scaling for a PXC cluster, following this blog. After setting everything up, my pods are starting and VPA is running, but after waiting about 10 minutes it does not provide any recommendations. When I run kubectl describe vpa, I receive the following status:

Status:
  Conditions:
    Last Transition Time:  2022-02-03T14:37:23Z
    Message:               The targetRef controller has a parent but it should point to a topmost well-known or scalable controller
    Status:                True
    Type:                  ConfigUnsupported
    Last Transition Time:  2022-02-03T14:37:23Z
    Message:               No pods match this VPA object
    Reason:                NoPodsMatched
    Status:                True
    Type:                  NoPodsMatched
    Last Transition Time:  2022-02-03T14:37:23Z
    Message:               No pods match this VPA object
    Reason:                NoPodsMatched
    Status:                False
    Type:                  RecommendationProvided
  Recommendation:
Events:  <none>
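
For context on the ConfigUnsupported condition: the VPA recommender checks whether the targetRef controller is the topmost one by walking its ownerReferences. The StatefulSet here is created and owned by the PerconaXtraDBCluster custom resource, so VPA refuses it. The fragment below is an illustrative sketch of the operator-managed StatefulSet's metadata (field values assumed, not copied from a live cluster); the controller ownerReference is what triggers the "has a parent" message:

```yaml
# Illustrative sketch (values assumed): metadata of the StatefulSet
# created by the operator. The controller: true ownerReference marks
# the PerconaXtraDBCluster CR as its parent, so VPA does not treat
# the StatefulSet as a topmost controller.
metadata:
  name: mysql-cluster-pxc
  ownerReferences:
    - apiVersion: pxc.percona.com/v1-11-0
      kind: PerconaXtraDBCluster
      name: mysql-cluster
      controller: true
```

You can verify the actual ownerReferences on your cluster with `kubectl get sts mysql-cluster-pxc -o jsonpath='{.metadata.ownerReferences}'`.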

I am using following configuration to define my cluster and VPA:

apiVersion: pxc.percona.com/v1-11-0
kind: PerconaXtraDBCluster
metadata:
  name: mysql-cluster
spec:
  crVersion: 1.11.0
  secretsName: mysql-cluster-secrets
  allowUnsafeConfigurations: true
  upgradeOptions:
    apply: 8.0-recommended
    schedule: '0 4 * * *'
  pxc:
    size: 2
    image: percona/percona-xtradb-cluster:8.0.23-14.1
    autoRecovery: true
    affinity:
      antiAffinityTopologyKey: 'kubernetes.io/hostname'
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
    resources:
      requests:
        memory: 128Mi
        cpu: 200m
      limits:
        memory: 256Mi
        cpu: 400m
    volumeSpec:
      persistentVolumeClaim:
        accessModes: ['ReadWriteOnce']
        resources:
          requests:
            storage: 10G
        storageClassName: hcloud-volumes
  haproxy:
    enabled: true
    size: 2
    image: perconalab/percona-xtradb-cluster-operator:main-haproxy
    affinity:
      antiAffinityTopologyKey: 'kubernetes.io/hostname'
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        memory: 128Mi
        cpu: 200m
      limits:
        memory: 256Mi
        cpu: 400m
    gracePeriod: 30
  logcollector:
    enabled: true
    image: perconalab/percona-xtradb-cluster-operator:main-logcollector
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: mysql-cluster-pxc
  updatePolicy:
    updateMode: 'Off'

I can use the kubectl top command, so metrics-server is working. What could be the reason for this behavior?

@paul_sinke could you please share your VPA configuration (the YAML that you apply)? Does the VPA point to a StatefulSet?

At first I used the basic VPA configuration from the blog post:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: mysql-cluster-pxc
  updatePolicy:
    updateMode: 'Off'

I then checked the autoscaler docs and updated it a bit based on the examples, ending up with this one:

apiVersion: 'autoscaling.k8s.io/v1'
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: 'apps/v1'
    kind: StatefulSet
    name: mysql-cluster-pxc
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        minAllowed:
          cpu: 100m
          memory: 50Mi
        maxAllowed:
          cpu: 1
          memory: 500Mi
        controlledResources: ['cpu', 'memory']
  updatePolicy:
    updateMode: 'Off'

In both cases the targetRef points to the StatefulSet, and I receive the same message: The targetRef controller has a parent but it should point to a topmost well-known or scalable controller

@paul_sinke Interesting. I will try to reproduce it and will get back to you.

Hello @paul_sinke ,

we had a look, and it seems some new checks were introduced in the latest version of the VPA.
We checked version 0.5.0 and it works as expected, whereas newer versions do not.

I have created a Jira issue to track progress on a fix: [K8SPXC-948] Research the possibility to use VPA - Percona JIRA
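
For reference, the check in newer VPA versions expects targetRef to point at the topmost scalable controller, which in this setup would be the PerconaXtraDBCluster object itself. A hypothetical configuration is sketched below, but note that VPA can only use a custom resource as a target if its CRD exposes the /scale subresource, which is part of what needs to be researched in that Jira issue:

```yaml
# Hypothetical sketch: VPA pointing at the topmost controller (the
# custom resource) instead of the operator-owned StatefulSet. This
# only works if the PerconaXtraDBCluster CRD declares a scale
# subresource, which it did not at the time of this thread.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: pxc.percona.com/v1-11-0
    kind: PerconaXtraDBCluster
    name: mysql-cluster
  updatePolicy:
    updateMode: 'Off'
```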

Hi @spronin ,
thank you for the information. I was able to deploy everything using Kubernetes v1.19.16 and autoscaler version 0.5.0. This time I did not get any errors, but recommendations still did not appear. Could you provide some information about the environment and API versions you tested this with?

Hello @paul_sinke ,

I was running GKE 1.20 + whatever VPA version it shipped with. I will try to play with it a bit more this week and look for a solution.
