What are the limitations of installing the Operator cluster-wide with regard to the number of DB clusters and/or nodes? The Operator listens to all k8s events related to the Percona DB Cluster objects and takes action when any of them change. Has anyone tested how many clusters a single Percona Kubernetes Operator can manage before issues start to appear? We are using EKS right now.
Hello @jmilushev ,
good question!
Please remember that it is not only the number of clusters that the Operator needs to handle, but also backup and restore objects. The more objects there are, the more time the Go routines take to check and process each of them.
In this JIRA ticket one of our users started to see issues at 100 clusters: [K8SPXC-739] Operator doesn't scale for more than one pod - Percona JIRA
In a nutshell, there is no hard limit per Operator, but for scalability I think running more than 50 clusters per Operator is not a good approach.
So the options are:
- One Operator = one database cluster
- Cluster-wide, but with limited namespace coverage (see the WATCH_NAMESPACE environment variable); a sketch follows below.
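For the second option, here is a minimal sketch of how the Operator's Deployment could be scoped to specific namespaces via WATCH_NAMESPACE, assuming a manifest along the lines of the cluster-wide bundle. The namespace names and image tag below are placeholders, not recommendations:

```yaml
# Sketch only: scope a cluster-wide Operator to a few namespaces.
# Adjust names, namespaces, and the image tag to match your deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona-xtradb-cluster-operator
  namespace: pxc-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: percona-xtradb-cluster-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/name: percona-xtradb-cluster-operator
    spec:
      containers:
        - name: percona-xtradb-cluster-operator
          image: percona/percona-xtradb-cluster-operator:1.12.0  # example tag
          env:
            # Comma-separated list of namespaces the Operator should watch.
            # An empty value means it watches all namespaces (fully cluster-wide).
            - name: WATCH_NAMESPACE
              value: "db-team-a,db-team-b"
```

Limiting the watched namespaces keeps the number of custom resources (clusters, backups, restores) each Operator instance has to reconcile within a manageable range, while still letting you run a single cluster-wide installation.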