Why does the Orchestrator have a ServiceAccount for PerconaServerMySQL?

Description:

When you deploy the operator using the bundle, it creates a ServiceAccount for the Orchestrator. This ServiceAccount is bound to a Role that can get the PerconaServerMySQL resource and list/patch Pods, and it is then attached to all Orchestrator Pods through the StatefulSet.

I am confused about why this is needed. Do the Orchestrator Pods perform API calls to Kubernetes at all? I actually tried removing the serviceAccountName from the StatefulSet and the Orchestrator kept working as before.

So, why does the Orchestrator have a ServiceAccount at all?

Steps to Reproduce:

Below are some details from a cluster running in asynchronous replication mode with the Orchestrator enabled.

  1. Get the ServiceAccounts created by the bundle.yaml:

    [user@host]$ kubectl get serviceaccount -n mysql-operator
    NAME                                         SECRETS   AGE
    default                                      0         7d2h
    percona-server-mysql-operator                0         7d2h
    percona-server-mysql-operator-orchestrator   0         7d2h
    
  2. View the Role to which the above ServiceAccount is bound (a RoleBinding check is sketched after this list):

    [user@host]$ kubectl -n mysql-operator get role percona-server-mysql-operator-orchestrator -o yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    [...]
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - list
      - patch
    - apiGroups:
      - ps.percona.com
      resources:
      - perconaservermysqls
      verbs:
      - get
    
  3. View the Orchestrator StatefulSet:

    [user@host]$ kubectl get sts cluster1-orc -o yaml
    apiVersion: apps/v1
    kind: StatefulSet
    [...]
    spec:
      [...]
      serviceName: cluster1-orc
      template:
        spec:
          [...]
          serviceAccount: percona-server-mysql-operator-orchestrator
          serviceAccountName: percona-server-mysql-operator-orchestrator
    [...]
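
For completeness, the RoleBinding that ties the ServiceAccount to the Role above can be inspected as well. The command below is only a sketch; I am assuming the RoleBinding carries the same name as the Role:

    [user@host]$ kubectl -n mysql-operator get rolebinding percona-server-mysql-operator-orchestrator -o yaml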
    

Version:

This is version 0.6.0 of the Percona Operator for MySQL.

Hi @Marios_Cako, thanks for the question. I will recheck it; it might be some leftover.


Hi @Marios_Cako! Yes, the Orchestrator performs Kubernetes API calls to label the primary Pod after a failover (see percona-server-mysql-operator/build/orchestrator.conf.json at main · percona/percona-server-mysql-operator · GitHub), and that ServiceAccount is needed for this.
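
If it helps to verify this, one quick check is to ask Kubernetes whether that ServiceAccount may patch Pods (namespace and ServiceAccount name taken from your output above); while the RoleBinding is in place it should answer yes:

    [user@host]$ kubectl -n mysql-operator auth can-i patch pods \
        --as=system:serviceaccount:mysql-operator:percona-server-mysql-operator-orchestrator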


Thanks @Ege_Gunes for the response. Indeed, I noticed that when I do not provide the ServiceAccount, there are some errors in the mysql-monit container of the Orchestrator Pods.

I understand that the configuration linked above is used by the orc-handler binary in the mysql-monit container of the Orchestrator Pods.

Zooming in on the code a bit, I noticed that orc-handler sets the label mysql.percona.com/primary=true, but beyond that the operator does not use that label [except in some tests], nor have I seen a Service that matches Pods with that label.
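
As a side note, the label itself is easy to observe; after a failover, something like the following (with the namespace where the cluster runs) should show which Pod currently carries it:

    [user@host]$ kubectl get pods -l mysql.percona.com/primary=true --show-labels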

Does the above hold? How is that label used by the system?

You’re right, Marios, it doesn’t seem to be used in Services. Earlier we had a Service called something like cluster1-mysql-primary that matched this label, but it seems we removed it in recent releases. Right now HAProxy is responsible for detecting the primary.
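
For reference, such a Service would look roughly like this. This is only a sketch: the name follows the old pattern, and the instance selector label is an assumption rather than the exact manifest we shipped:

    apiVersion: v1
    kind: Service
    metadata:
      name: cluster1-mysql-primary
    spec:
      ports:
      - name: mysql
        port: 3306
      selector:
        app.kubernetes.io/instance: cluster1
        mysql.percona.com/primary: "true"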

I think there are two options:

  1. Bring back the primary Service
  2. Remove the ServiceAccount from the Orchestrator

Which would be more useful for you?

I think we should focus on what is best for the project. I do think that Service-based discovery of the primary for all cluster types [meaning also group replication, not just asynchronous replication] is a great feature, since one might not want to rely on HAProxy or MySQL Router.

Having said that, I do think that setting the primary-pod label should not be part of the Orchestrator either. It should be the responsibility of the operator, so that it can cover all cluster types with consistent behavior.
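
Just to make the intent concrete: the operator would apply the label itself during reconciliation, which is roughly equivalent to the following (the Pod name here is only an illustration):

    [user@host]$ kubectl label pod cluster1-mysql-0 mysql.percona.com/primary=true --overwrite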

I have already opened a ticket for supporting Service discovery ([K8SPS-323] - Percona JIRA) and have started working on it. I will include this context there as well.