Expose MySQL outside the cluster

Hi,

I installed Percona MySQL using the Percona K8s Operator.

And all seems to work fine :).

I now want to access the database from outside the cluster (for testing).

The services are created as ClusterIP.

How can I define the service (and which service?) to be a LoadBalancer? (I'm working on DigitalOcean, if it matters.)

How can I define a NodePort in case the load balancer option does not work for some reason?

Can you direct me to some documentation? I could find none.

Thanks

OK, found it at last:

Custom Resource options

Hello @Nehemia,

it should be as simple as haproxy.serviceType: LoadBalancer in cr.yaml.
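For reference, a minimal sketch of where that option sits in cr.yaml (this is the pre-1.14 field name; NodePort is also a standard Kubernetes Service type and should work as the value if a cloud load balancer is not available):

spec:
  haproxy:
    enabled: true
    serviceType: LoadBalancer   # or NodePort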

Did it work for you on DigitalOcean?

Hi @Nehemia,

Could you please share the CR that you used to create the service with type NodePort here?

I am a bit stuck on how to access the Crunchy Postgres cluster on my OCP platform from my local terminal.

I am sorry to say I have zero knowledge about this problem 3 years later.
Good luck

Hello folks.

So it seems it is a bit unclear how to expose the cluster.

  1. We have the following document: Exposing the cluster - Percona Operator for MySQL based on Percona XtraDB Cluster

If it does not help, please let me know what should be clarified to make it more useful. I would be glad to tune it.

  2. Here is the exposure, step by step.

To expose the cluster we recommend using the built-in load balancers: ProxySQL or HAProxy.
As HAProxy is the default, I will provide examples for it.

Visually it works in the following way. The Operator automatically:

  1. Creates the database cluster (MySQL)
  2. Creates the HAProxy fleet
  3. Exposes HAProxy through a load balancer

Users connect to HAProxy through a load balancer outside of the k8s cluster.

To expose the cluster we need to change the following in the cr.yaml:

spec:
...
  haproxy:
    exposePrimary:
      enabled: true
      type: LoadBalancer
    exposeReplicas:
      enabled: true
      type: LoadBalancer

This instructs the Operator to create two Services that are going to expose Primary and Replica nodes through load balancers.

Now we can apply this custom resource:

kubectl apply -f cr.yaml
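If you would rather not re-apply the whole file, the same change can be made with a merge patch (a sketch, assuming the default cluster1 cluster name):

kubectl patch pxc cluster1 --type=merge -p '{"spec":{"haproxy":{"exposePrimary":{"enabled":true,"type":"LoadBalancer"},"exposeReplicas":{"enabled":true,"type":"LoadBalancer"}}}}'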

I took the default cr.yaml and altered it to expose HAProxy. You can find it in this gist.

You can apply it like this:

kubectl apply -f https://gist.githubusercontent.com/spron-in/75bfed5942e13d0c6760267dd94425ff/raw/6fb9bc6eb63de1148d3c258934ea31203c518fb9/cr.yaml

Wait till the cluster is ready: check kubectl get pxc. Once it is ready, STATUS will show ready. It usually takes 2-3 minutes; if it takes much longer, something might be wrong.
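For scripting, you can also block until the cluster reports ready (a sketch; it assumes the CR exposes status.state, which is what the STATUS column reads from):

kubectl wait pxc/cluster1 --for=jsonpath='{.status.state}'=ready --timeout=300s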

In kubectl get pxc you will also see an ENDPOINT column. There you should see the address of the load balancer to connect to. This endpoint is for the Primary.

% kubectl get pxc
NAME       ENDPOINT       STATUS   PXC   PROXYSQL   HAPROXY   AGE
cluster1   35.XX.XX.245   ready    3                3         4m36s
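Besides the pxc resource, you can inspect the Services directly (a sketch; the Service names assume the default cluster1 cluster name):

kubectl get svc cluster1-haproxy cluster1-haproxy-replicas
# The EXTERNAL-IP column holds the load balancer addresses for Primary and Replicas.

# To fetch the Primary address programmatically, standard Service jsonpath works:
kubectl get svc cluster1-haproxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}'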

Connect to the Primary endpoint as you would to a usual MySQL server:

mysql -h 35.XX.XX.245 -u root -p

You can get the password from a secret as follows:

kubectl get secret cluster1-secrets -o jsonpath='{.data.root}' | base64 --decode
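Putting the two together, a one-line connect sketch:

mysql -h 35.XX.XX.245 -u root -p"$(kubectl get secret cluster1-secrets -o jsonpath='{.data.root}' | base64 --decode)"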

I hope this helps.
I would encourage you to read the following docs:

  1. Quickstart - it has a "connect to the database" section that might clarify some things
  2. Exposing the cluster - it talks about various other options for exposure.

Using the documentation here I attempted to expose the cluster and ran into an issue. While troubleshooting, I came across this article and decided to give your cr.yaml file a go since mine didn't work, but I get the same error.
A bit of output to demonstrate the issue.

kubectl get pod -n namespace
NAME                                  READY   STATUS    RESTARTS   AGE
my-db-pxc-db-haproxy-0                2/2     Running   0          19m
my-db-pxc-db-haproxy-1                2/2     Running   0          18m
my-db-pxc-db-haproxy-2                2/2     Running   0          17m
my-db-pxc-db-pxc-0                    3/3     Running   0          19m
my-db-pxc-db-pxc-1                    3/3     Running   0          18m
my-db-pxc-db-pxc-2                    3/3     Running   0          17m
my-op-pxc-operator-786c68989b-hw7fp   1/1     Running   0          19m

But if I run the kubectl apply command, I get an error about unknown fields. The error is the same with your gist or my config file:

kubectl apply -f https://gist.githubusercontent.com/spron-in/75bfed5942e13d0c6760267dd94425ff/raw/6fb9bc6eb63de1148d3c258934ea31203c518fb9/cr.yaml
The request is invalid: patch: Invalid value: "{\"apiVersion\":\"pxc.percona.com/v1\",\"kind\":\"PerconaXtraDBCluster\",\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"pxc.percona.com/v1\\\",\\\"kind\\\":\\\"PerconaXtraDBCluster\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"finalizers\\\":[\\\"delete-pxc-pods-in-order\\\"],\\\"name\\\":\\\"cluster1\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"allowUnsafeConfigurations\\\":false,\\\"backup\\\":{\\\"image\\\":\\\"percona/percona-xtradb-cluster-operator:1.14.0-pxc8.0-backup-pxb8.0.35\\\",\\\"pitr\\\":{\\\"enabled\\\":false,\\\"storageName\\\":\\\"STORAGE-NAME-HERE\\\",\\\"timeBetweenUploads\\\":60,\\\"timeoutSeconds\\\":60},\\\"schedule\\\":[{\\\"keep\\\":5,\\\"name\\\":\\\"daily-backup\\\",\\\"schedule\\\":\\\"0 0 * * *\\\",\\\"storageName\\\":\\\"fs-pvc\\\"}],\\\"storages\\\":{\\\"fs-pvc\\\":{\\\"type\\\":\\\"filesystem\\\",\\\"volume\\\":{\\\"persistentVolumeClaim\\\":{\\\"accessModes\\\":[\\\"ReadWriteOnce\\\"],\\\"resources\\\":{\\\"requests\\\":{\\\"storage\\\":\\\"6G\\\"}}}}}}},\\\"crVersion\\\":\\\"1.14.0\\\",\\\"haproxy\\\":{\\\"affinity\\\":{\\\"antiAffinityTopologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"enabled\\\":true,\\\"exposePrimary\\\":{\\\"enabled\\\":true,\\\"type\\\":\\\"LoadBalancer\\\"},\\\"exposeReplicas\\\":{\\\"enabled\\\":true,\\\"type\\\":\\\"LoadBalancer\\\"},\\\"gracePeriod\\\":30,\\\"image\\\":\\\"percona/percona-xtradb-cluster-operator:1.14.0-haproxy\\\",\\\"podDisruptionBudget\\\":{\\\"maxUnavailable\\\":1},\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"600m\\\",\\\"memory\\\":\\\"1G\\\"}},\\\"size\\\":3},\\\"logcollector\\\":{\\\"enabled\\\":true,\\\"image\\\":\\\"percona/percona-xtradb-cluster-operator:1.14.0-logcollector\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"200m\\\",\\\"memory\\\":\\\"100M\\\"}}},\\\"pmm\\\":{\\\"enabled\\\":false,\\\"image\\\":\\\"percona/pmm-client:2.41.1\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"150M\\\"}},\\\"serverHost\\\":\\\"monitoring-service\\\"},\\\"proxysql\\\":{\\\"affinity\\\":{\\\"antiAffinityTopologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"enabled\\\":false,\\\"gracePeriod\\\":30,\\\"image\\\":\\\"percona/percona-xtradb-cluster-operator:1.14.0-proxysql\\\",\\\"podDisruptionBudget\\\":{\\\"maxUnavailable\\\":1},\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"600m\\\",\\\"memory\\\":\\\"1G\\\"}},\\\"size\\\":3,\\\"volumeSpec\\\":{\\\"persistentVolumeClaim\\\":{\\\"resources\\\":{\\\"requests\\\":{\\\"storage\\\":\\\"2G\\\"}}}}},\\\"pxc\\\":{\\\"affinity\\\":{\\\"antiAffinityTopologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"autoRecovery\\\":true,\\\"gracePeriod\\\":600,\\\"image\\\":\\\"percona/percona-xtradb-cluster:8.0.35-27.1\\\",\\\"podDisruptionBudget\\\":{\\\"maxUnavailable\\\":1},\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"600m\\\",\\\"memory\\\":\\\"1G\\\"}},\\\"size\\\":3,\\\"volumeSpec\\\":{\\\"persistentVolumeClaim\\\":{\\\"resources\\\":{\\\"requests\\\":{\\\"storage\\\":\\\"6G\\\"}}}}},\\\"updateStrategy\\\":\\\"SmartUpdate\\\",\\\"upgradeOptions\\\":{\\\"apply\\\":\\\"disabled\\\",\\\"schedule\\\":\\\"0 4 * * 
*\\\",\\\"versionServiceEndpoint\\\":\\\"https://check.percona.com\\\"}}}\\n\"},\"creationTimestamp\":\"2024-05-01T21:57:53Z\",\"finalizers\":[\"delete-pxc-pods-in-order\"],\"generation\":1,\"managedFields\":[{\"apiVersion\":\"pxc.percona.com/v1\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:kubectl.kubernetes.io/last-applied-configuration\":{}},\"f:finalizers\":{\".\":{},\"v:\\\"delete-pxc-pods-in-order\\\"\":{}}},\"f:spec\":{\".\":{},\"f:allowUnsafeConfigurations\":{},\"f:backup\":{\".\":{},\"f:image\":{},\"f:pitr\":{\".\":{},\"f:enabled\":{},\"f:storageName\":{},\"f:timeBetweenUploads\":{}},\"f:schedule\":{},\"f:storages\":{\".\":{},\"f:azure-blob\":{\".\":{},\"f:azure\":{\".\":{},\"f:container\":{},\"f:credentialsSecret\":{}},\"f:type\":{}},\"f:fs-pvc\":{\".\":{},\"f:type\":{},\"f:volume\":{\".\":{},\"f:persistentVolumeClaim\":{\".\":{},\"f:accessModes\":{},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:storage\":{}}}}}},\"f:s3-us-west\":{\".\":{},\"f:s3\":{\".\":{},\"f:bucket\":{},\"f:credentialsSecret\":{},\"f:region\":{}},\"f:type\":{},\"f:verifyTLS\":{}}}},\"f:crVersion\":{},\"f:haproxy\":{\".\":{},\"f:affinity\":{\".\":{},\"f:antiAffinityTopologyKey\":{}},\"f:enabled\":{},\"f:gracePeriod\":{},\"f:image\":{},\"f:podDisruptionBudget\":{\".\":{},\"f:maxUnavailable\":{}},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:cpu\":{},\"f:memory\":{}}},\"f:size\":{}},\"f:logcollector\":{\".\":{},\"f:enabled\":{},\"f:image\":{},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:cpu\":{},\"f:memory\":{}}}},\"f:pmm\":{\".\":{},\"f:enabled\":{},\"f:image\":{},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:cpu\":{},\"f:memory\":{}}},\"f:serverHost\":{}},\"f:proxysql\":{\".\":{},\"f:affinity\":{\".\":{},\"f:antiAffinityTopologyKey\":{}},\"f:enabled\":{},\"f:gracePeriod\":{},\"f:image\":{},\"f:podDisruptionBudget\":{\".\":{},\"f:maxUnavailable\":{}},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:cpu\":{},\"f:memory\":{}}},\"f:size\":{},\"f:volumeSpec\":{\".\":{},\"f:persistentVolumeClaim\":{\".\":{},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:storage\":{}}}}}},\"f:pxc\":{\".\":{},\"f:affinity\":{\".\":{},\"f:antiAffinityTopologyKey\":{}},\"f:autoRecovery\":{},\"f:gracePeriod\":{},\"f:image\":{},\"f:podDisruptionBudget\":{\".\":{},\"f:maxUnavailable\":{}},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:cpu\":{},\"f:memory\":{}}},\"f:size\":{},\"f:volumeSpec\":{\".\":{},\"f:persistentVolumeClaim\":{\".\":{},\"f:resources\":{\".\":{},\"f:requests\":{\".\":{},\"f:storage\":{}}}}}},\"f:updateStrategy\":{},\"f:upgradeOptions\":{\".\":{},\"f:apply\":{},\"f:schedule\":{},\"f:versionServiceEndpoint\":{}}}},\"manager\":\"kubectl-client-side-apply\",\"operation\":\"Update\",\"time\":\"2024-05-01T21:57:53Z\"}],\"name\":\"cluster1\",\"namespace\":\"default\",\"resourceVersion\":\"318373922\",\"uid\":\"a738e159-acaf-4d17-847f-e62d6c6da0b0\"},\"spec\":{\"allowUnsafeConfigurations\":false,\"backup\":{\"image\":\"percona/percona-xtradb-cluster-operator:1.14.0-pxc8.0-backup-pxb8.0.35\",\"pitr\":{\"enabled\":false,\"storageName\":\"STORAGE-NAME-HERE\",\"timeBetweenUploads\":60,\"timeoutSeconds\":60},\"schedule\":[{\"keep\":5,\"name\":\"daily-backup\",\"schedule\":\"0 0 * * 
*\",\"storageName\":\"fs-pvc\"}],\"storages\":{\"fs-pvc\":{\"type\":\"filesystem\",\"volume\":{\"persistentVolumeClaim\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"6G\"}}}}}}},\"crVersion\":\"1.14.0\",\"haproxy\":{\"affinity\":{\"antiAffinityTopologyKey\":\"kubernetes.io/hostname\"},\"enabled\":true,\"exposePrimary\":{\"enabled\":true,\"type\":\"LoadBalancer\"},\"exposeReplicas\":{\"enabled\":true,\"type\":\"LoadBalancer\"},\"gracePeriod\":30,\"image\":\"percona/percona-xtradb-cluster-operator:1.14.0-haproxy\",\"podDisruptionBudget\":{\"maxUnavailable\":1},\"resources\":{\"requests\":{\"cpu\":\"600m\",\"memory\":\"1G\"}},\"size\":3},\"logcollector\":{\"enabled\":true,\"image\":\"percona/percona-xtradb-cluster-operator:1.14.0-logcollector\",\"resources\":{\"requests\":{\"cpu\":\"200m\",\"memory\":\"100M\"}}},\"pmm\":{\"enabled\":false,\"image\":\"percona/pmm-client:2.41.1\",\"resources\":{\"requests\":{\"cpu\":\"300m\",\"memory\":\"150M\"}},\"serverHost\":\"monitoring-service\"},\"proxysql\":{\"affinity\":{\"antiAffinityTopologyKey\":\"kubernetes.io/hostname\"},\"enabled\":false,\"gracePeriod\":30,\"image\":\"percona/percona-xtradb-cluster-operator:1.14.0-proxysql\",\"podDisruptionBudget\":{\"maxUnavailable\":1},\"resources\":{\"requests\":{\"cpu\":\"600m\",\"memory\":\"1G\"}},\"size\":3,\"volumeSpec\":{\"persistentVolumeClaim\":{\"resources\":{\"requests\":{\"storage\":\"2G\"}}}}},\"pxc\":{\"affinity\":{\"antiAffinityTopologyKey\":\"kubernetes.io/hostname\"},\"autoRecovery\":true,\"gracePeriod\":600,\"image\":\"percona/percona-xtradb-cluster:8.0.35-27.1\",\"podDisruptionBudget\":{\"maxUnavailable\":1},\"resources\":{\"requests\":{\"cpu\":\"600m\",\"memory\":\"1G\"}},\"size\":3,\"volumeSpec\":{\"persistentVolumeClaim\":{\"resources\":{\"requests\":{\"storage\":\"6G\"}}}}},\"updateStrategy\":\"SmartUpdate\",\"upgradeOptions\":{\"apply\":\"disabled\",\"schedule\":\"0 4 * * *\",\"versionServiceEndpoint\":\"https://check.percona.com\"}}}": strict decoding error: unknown field "spec.backup.pitr.timeoutSeconds", unknown field "spec.haproxy.exposePrimary", unknown field "spec.haproxy.exposeReplicas"

Did version 1.14.1 change the expected fields in haproxy?

If it helps:

$ helm search repo percona
NAME                  	CHART VERSION	APP VERSION	DESCRIPTION
percona/pg-db         	2.3.15       	2.3.1      	A Helm chart to deploy the PostgreSQL database ...
percona/pg-operator   	2.3.4        	2.3.1      	A Helm chart to deploy the Percona Operator for...
percona/pmm           	1.3.13       	2.41.2     	A Helm chart for Percona Monitoring and Managem...
percona/ps-db         	0.7.0        	0.7.0      	A Helm chart for installing Percona Server Data...
percona/ps-operator   	0.7.0        	0.7.0      	A Helm chart for Deploying the Percona Operator...
percona/psmdb-db      	1.15.3       	1.15.0     	A Helm chart for installing Percona Server Mong...
percona/psmdb-operator	1.15.4       	1.15.0     	A Helm chart for deploying the Percona Operator...
percona/pxc-db        	1.14.3       	1.14.0     	A Helm chart for installing Percona XtraDB Clus...
percona/pxc-operator  	1.14.1       	1.14.0     	A Helm chart for deploying the Percona Operator...
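Worth noting: helm search repo lists the chart versions available in the repository, not what is installed in the cluster. A quick sketch to check the installed releases instead:

helm list -A
# Compare the CHART column of the installed releases with the repo versions above.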

In 1.14 we standardized our exposure section, deprecated some fields, and added new ones.

It seems you are running an older version now (the unknown-field errors suggest the CRD installed in your cluster predates 1.14). The old fields still work:

haproxy.serviceType
haproxy.replicasServiceEnabled
haproxy.replicasServiceType
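For completeness, a sketch of the equivalent pre-1.14 cr.yaml section using those fields:

spec:
  haproxy:
    serviceType: LoadBalancer
    replicasServiceEnabled: true
    replicasServiceType: LoadBalancer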

Read more in 1.13 docs here.