Wsrep SST replication setup fails when using intermediate certificates in a chain

We let cert-manager create custom certificates from our own intermediate CA (itself signed by our Root-CA) via a cert-manager issuer that connects to HashiCorp Vault, where the CAs reside.

The problem comes up when using the intermediate CA: wsrep (SST) complains about an unknown CA certificate, which breaks the cluster deployment. It seems that it does not read the whole chain but only the first certificate (we did not dig deeper).

On the other hand, if the certs are signed directly by the Root-CA, everything runs as expected. So we are able to add CNs for the mysqld service at will, all signed by our Root-CA, but it would be much better to use only the intermediate CA for that.
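Just to illustrate what we mean by "the whole chain": if one could point both mysqld and the SST script at a CA bundle containing the intermediate and the root certificate concatenated, verification should succeed. A purely hypothetical sketch using the chart's pxc.configuration field (the file names and paths are assumptions, and we have not checked whether the operator even allows overriding these settings):

pxc:
  configuration: |
    [mysqld]
    # hypothetical bundle: intermediate CA followed by the Root-CA, concatenated as PEM
    ssl-ca   = /etc/mysql/ssl/ca-bundle.pem
    ssl-cert = /etc/mysql/ssl/tls.crt
    ssl-key  = /etc/mysql/ssl/tls.key

    [sst]
    # point the SST script at the same bundle (it reads its own ssl-* settings)
    ssl-ca   = /etc/mysql/ssl/ca-bundle.pem
    ssl-cert = /etc/mysql/ssl/tls.crt
    ssl-key  = /etc/mysql/ssl/tls.key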

I’ve found an old bug report that may be related to this; it was fixed in MySQL 8.0.30:

https://bugs.mysql.com/bug.php?id=54158

We’re testing with the latest Percona release, 8.0.32-24.

Great! Please let us know your feedback on this issue.

Uhm, maybe my problem description was misleading, sorry for that.

I wanted to know whether you plan to port the fix (or already have), and what we might possibly be doing wrong… :slight_smile:

You said:

Let us know your feedback with your testing, if 8.0.30+ solves the issue as noted in the bug report.

All fixes applied to community/upstream MySQL are automatically part of Percona Server. If something is fixed in MySQL 8.0.30, then it is automatically fixed/included in Percona Server 8.0.30.

It’s not working with the latest version. I’m using Helm chart version 1.12.0 for both the operator and the DB.

On the second pxc-db node, the following error occurs:

{"log":"2023-06-06T15:45:47.193696Z 0 [ERROR] [MY-000000] [WSREP-SST] ******** FATAL ERROR ****************************************** \n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-06-06T15:45:47.193754Z 0 [ERROR] [MY-000000] [WSREP-SST] * The certifcate and CA (certificate authority) do not match.   \n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-06-06T15:45:47.193765Z 0 [ERROR] [MY-000000] [WSREP-SST] * It does not appear that the certificate was issued by the CA. \n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-06-06T15:45:47.193781Z 0 [ERROR] [MY-000000] [WSREP-SST] * Please check your certificate and CA files.                   \n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-06-06T15:45:47.193788Z 0 [ERROR] [MY-000000] [WSREP-SST] * Line 424\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-06-06T15:45:47.193793Z 0 [ERROR] [MY-000000] [WSREP-SST] *************************************************************** \n","file":"/var/lib/mysql/mysqld-error.log"}

The pxc-db chart’s TLS config looks like this:

tls:
  SANs:
    - pxc.$custom_dns_name
  issuerConf:
    name: $custom_issuer
    kind: ClusterIssuer

The $custom_issuer is bound to the intermediate CA.
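For completeness, the $custom_issuer is a Vault-backed cert-manager ClusterIssuer. Roughly (the names, Vault paths, and auth method below are placeholders, not our exact manifest), it looks like this:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: custom-issuer                 # the placeholder $custom_issuer above
spec:
  vault:
    server: https://vault.example.internal
    path: pki_intermediate/sign/pxc   # signing role on the intermediate PKI engine
    auth:
      kubernetes:
        role: cert-manager
        mountPath: /v1/auth/kubernetes
        secretRef:
          name: cert-manager-vault-token
          key: token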

This results in a production-ssl secret in the pxc namespace where ca.crt contains our Root-CA and tls.crt contains the chain consisting of the actual certificate with CN production-proxysql¹ followed by our intermediate CA certificate (see the sketch below).

¹ Which is also a problem, because we have to allow those names explicitly, but that’s another topic.
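Schematically, the resulting secret looks like this (the values are base64-encoded in reality; the placeholders below only illustrate the layout):

apiVersion: v1
kind: Secret
metadata:
  name: production-ssl
  namespace: pxc
type: kubernetes.io/tls
data:
  ca.crt: <Root-CA certificate only, no intermediate>
  tls.crt: <leaf certificate (CN production-proxysql), followed by the intermediate CA certificate>
  tls.key: <private key of the leaf certificate>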

And you verified the Helm chart is downloading/running the Percona Server 8.0.32 image? I did not see this in the log you provided.

Yes, here is a snippet of the ArgoCD-computed manifest:

  pxc:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    annotations: {}
    autoRecovery: true
    configuration: |
 
      [mysqld]
      proxy_protocol_networks = *
      skip-name-resolve
      require_secure_transport=ON
    gracePeriod: 600
    image: 'percona/percona-xtradb-cluster:8.0.32-24.2'
And here is the events log from the first node/pod:

Events:
  Type     Reason                  Age   From                     Message
  ----     ------                  ----  ----                     -------
  Normal   Scheduled               95s   default-scheduler        Successfully assigned pxc/production-pxc-0 to ip-10-11-141-218.eu-central-1.compute.internal
  Normal   SuccessfulAttachVolume  93s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-628c5050-7e35-4992-83cc-e049a0b93dfc"
  Normal   Pulling                 89s   kubelet                  Pulling image "percona/percona-xtradb-cluster-operator:1.12.0"
  Normal   Pulled                  88s   kubelet                  Successfully pulled image "percona/percona-xtradb-cluster-operator:1.12.0" in 741.22306ms
  Normal   Created                 88s   kubelet                  Created container pxc-init
  Normal   Started                 88s   kubelet                  Started container pxc-init
  Normal   Pulling                 87s   kubelet                  Pulling image "percona/percona-xtradb-cluster-operator:1.12.0-logcollector"
  Normal   Pulled                  87s   kubelet                  Successfully pulled image "percona/percona-xtradb-cluster-operator:1.12.0-logcollector" in 736.996273ms
  Normal   Created                 87s   kubelet                  Created container logs
  Normal   Started                 87s   kubelet                  Started container logs
  Normal   Pulling                 87s   kubelet                  Pulling image "percona/percona-xtradb-cluster-operator:1.12.0-logcollector"
  Normal   Pulled                  86s   kubelet                  Successfully pulled image "percona/percona-xtradb-cluster-operator:1.12.0-logcollector" in 730.269016ms
  Normal   Created                 86s   kubelet                  Created container logrotate
  Normal   Started                 86s   kubelet                  Started container logrotate
  Normal   Pulling                 86s   kubelet                  Pulling image "percona/percona-xtradb-cluster:8.0.32-24.2"
  Normal   Pulled                  85s   kubelet                  Successfully pulled image "percona/percona-xtradb-cluster:8.0.32-24.2" in 741.267694ms
  Normal   Created                 85s   kubelet                  Created container pxc
  Normal   Started                 85s   kubelet                  Started container pxc

OK. I’ll see if someone more experienced with our K8s operator has any insight into this aspect.
