Autoscaling Percona DBs in Kubernetes

Hello!

We have a Kubernetes setup where we use Percona to manage the database. We would like to increase the number of database instances when the load grows, and reduce it again when the load drops.

Alternatively, we would like the CPU / RAM resources of the database pods to scale with the incoming load.

Has anyone done something like this? If you have experience, please contact me ($ reward included!).

Thanks,

Rolands

Hello @rolandspopovs ,

if it is about the Operator for PXC, please have a look at this blog post: https://www.percona.com/blog/2020/11/13/kubernetes-scaling-capabilities-with-percona-xtradb-cluster/

We do support Vertical Pod Autoscaling (VPA), but do not support Horizontal Pod Autoscaling (HPA). The HPA limitation comes from how our operator monitors the state of the StatefulSet and protects it from being changed.
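For illustration, assuming the VPA components are installed in the cluster, a minimal VPA object pointing at the PXC StatefulSet could look roughly like the sketch below. The StatefulSet name `cluster1-pxc` is an assumption based on a cluster custom resource named `cluster1`; adjust it to your deployment:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: cluster1-pxc   # assumed name; the operator creates one StatefulSet per cluster CR
  updatePolicy:
    updateMode: "Auto"   # "Auto" applies new requests by evicting pods; "Off" only records recommendations
```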

HPA support is on our roadmap, but not at the top of the list.

Do you believe VPA would work for you?

If it is not about our PXC Operator, please let me know how Percona is deployed and I’ll try to suggest something.

Hi @spronin ,

Thanks for the reply.

VPA would work. Do I understand correctly that it generates recommendations and can then update the resource requests based on them? How quickly can it act on those recommendations?

In our case the faster the better.

Thanks,

Rolands

Hello @rolandspopovs ,

yes, you understand it correctly. Read more about VPA here: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler

On “how fast”: keep in mind that VPA relies on historical data from the Metrics Server. In their architecture doc they claim to be able to provide almost real-time recommendations:

Recommender is the main component of the VPA. It is responsible for computing recommended resources. On startup the recommender fetches historical resource utilization of all Pods (regardless of whether they use VPA) together with the history of Pod OOM events from the History Storage. It aggregates this data and keeps it in memory.

During normal operation the recommender consumes real time updates of resource utilization and new events via the Metrics API from the Metrics Server. Additionally it watches all Pods and all VPA objects in the cluster. For every Pod that is matched by some VPA selector the Recommender computes the recommended resources and sets the recommendation on the VPA object.
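For reference, once the Recommender has collected enough samples, the recommendation shows up in the VPA object’s status, roughly like the snippet below. All values here are purely illustrative, and the `pxc` container name is an assumption; check your own object with `kubectl describe vpa pxc-vpa`:

```yaml
status:
  recommendation:
    containerRecommendations:
      - containerName: pxc   # assumed container name inside the PXC pods
        lowerBound:
          cpu: 500m
          memory: 2Gi
        target:              # what VPA will set as the new resource request
          cpu: "1"
          memory: 4Gi
        upperBound:
          cpu: "2"
          memory: 8Gi
```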

  1. But from my experience, relying on VPA for spiky workloads is not a good idea. Keep in mind that applying a new recommendation means restarting the pod, and database pods do not start in a second.

  2. I would strongly suggest using Cluster Autoscaler along with VPA, as VPA can scale resource requests up beyond what the current cluster capacity can schedule. Capping the recommendations with a resourcePolicy also helps (see the sketch below).
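To reduce the risk from point 2, you can bound what VPA is allowed to recommend with a `resourcePolicy`. A sketch, again assuming the `cluster1-pxc` StatefulSet and a container named `pxc`; the limits themselves are only placeholders:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: cluster1-pxc        # assumed name, as in the earlier sketch
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: pxc    # assumed container name inside the PXC pods
        minAllowed:
          cpu: 500m
          memory: 1Gi
        maxAllowed:           # keep recommendations within what your nodes can actually schedule
          cpu: "4"
          memory: 16Gi
```

Cluster Autoscaler then covers the case where even the capped requests no longer fit on the existing nodes.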