Description:
I need to disable TLS on a MySQL instance created by the Operator and have followed the documented procedure, but clients are still connecting with TLS. Since the deployment load-balances through HAProxy, exposed via its Service with an AWS Network Load Balancer, could that be the source of the TLS connections from the clients? (The clients run outside the Kubernetes cluster in a different VPC.)
The affected client cannot be configured to disable TLS, nor can it be replaced, updated, or downgraded; TLS MUST be disabled on the server side for this client to function.
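For reference, the part of the CR (shown in full below) that is supposed to turn TLS off, per my reading of the docs, is just these fields (unsafeFlags.tls is what lets the operator accept tls.enabled: false):

spec:
  tls:
    enabled: false
  unsafeFlags:
    tls: true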
Steps to Reproduce:
Apply the following custom resource:
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  annotations:
    meta.helm.sh/release-name: mysql
    meta.helm.sh/release-namespace: default
  finalizers:
    - percona.com/delete-pxc-pods-in-order
  labels:
    app.kubernetes.io/instance: mysql
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pxc-db
    app.kubernetes.io/version: 1.17.0
    helm.sh/chart: pxc-db-1.17.0
    helm.toolkit.fluxcd.io/name: mysql
    helm.toolkit.fluxcd.io/namespace: default
  name: mysql-pxc-db
  namespace: default
spec:
  backup:
    image: percona/percona-xtradb-cluster-operator:1.17.0-pxc8.0-backup-pxb8.0.35
    pitr:
      enabled: true
      resources:
        limits: {}
        requests: {}
      storageName: binlogs
      timeBetweenUploads: 300
      timeoutSeconds: 60
    schedule:
      - keep: 360
        name: every-six-hours
        schedule: 0 */6 * * *
        storageName: sql
    storages:
      binlogs:
        s3:
          bucket: redacted/sql/voicecore-binlogs/
          credentialsSecret: redacted
          endpointUrl: https://s3.us-west-2.amazonaws.com
          region: us-west-2
        type: s3
      sql:
        s3:
          bucket: redacted/sql/voicecore/
          credentialsSecret: redacted
          endpointUrl: https://s3.us-west-2.amazonaws.com
          region: us-west-2
        type: s3
  crVersion: 1.17.0
  enableCRValidationWebhook: false
  haproxy:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    annotations:
      prometheus.io/port: "8404"
      prometheus.io/scrape: "true"
    enabled: true
    exposePrimary:
      annotations:
        external-dns.alpha.kubernetes.io/hostname: db-vpbx.redacted.local,db-freeswitch.redacted.local,db-cdrs.redacted.local
        external-dns.alpha.kubernetes.io/ttl: "60"
        kubernetes.io/ingress.class: internal-nginx
        service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: creation:Tool=flux,project:Name=redacted,Purpose=SQL
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: tcp
        service.beta.kubernetes.io/aws-load-balancer-name: nv-internal-haproxy-sql
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
        service.beta.kubernetes.io/aws-load-balancer-scheme: internal
        service.beta.kubernetes.io/aws-load-balancer-type: external
      loadBalancerSourceRanges:
        - 10.0.0.0/8
      type: LoadBalancer
    gracePeriod: 30
    image: percona/haproxy:2.8.14
    labels: {}
    livenessDelaySec: 300
    livenessProbes:
      failureThreshold: 4
      initialDelaySeconds: 60
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 5
    nodeSelector: {}
    podDisruptionBudget:
      maxUnavailable: 1
    readinessDelaySec: 15
    readinessProbes:
      failureThreshold: 3
      initialDelaySeconds: 15
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits: {}
      requests:
        cpu: 600m
        memory: 1G
    sidecarPVCs: []
    sidecarResources:
      limits: {}
      requests: {}
    sidecarVolumes: []
    sidecars: []
    size: 3
    tolerations: []
    volumeSpec:
      emptyDir: {}
  logCollectorSecretName: mysql-pxc-db-log-collector
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.17.0-logcollector-fluentbit4.0.0
    resources:
      limits: {}
      requests:
        cpu: 200m
        memory: 100M
  pmm:
    enabled: false
  proxysql:
    enabled: false
  pxc:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    annotations:
      prometheus.io/port: "9104"
      prometheus.io/scrape: "true"
    autoRecovery: true
    configuration: |2
      [mysqld]
      local_infile = 1
      [client]
      local_infile = 1
    gracePeriod: 600
    image: percona/percona-xtradb-cluster:8.0.41-32.1
    labels: {}
    livenessDelaySec: 300
    livenessProbes:
      failureThreshold: 3
      initialDelaySeconds: 300
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    nodeSelector:
      topology.kubernetes.io/zone: us-west-2a
    podDisruptionBudget:
      maxUnavailable: 1
    readinessDelaySec: 15
    readinessProbes:
      failureThreshold: 5
      initialDelaySeconds: 15
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 15
    resources:
      limits:
        cpu: 2000m
        memory: 8G
      requests:
        cpu: 600m
        memory: 4G
    sidecarPVCs: []
    sidecarResources:
      limits: {}
      requests: {}
    sidecarVolumes: []
    sidecars:
      - args:
          - --web.listen-address=0.0.0.0:9104
          - --mysqld.username=monitor
          - --mysqld.address=127.0.0.1:3306
          - --tls.insecure-skip-verify
        env:
          - name: MYSQLD_EXPORTER_PASSWORD
            valueFrom:
              secretKeyRef:
                key: monitor
                name: redacted
        image: prom/mysqld-exporter
        imagePullPolicy: Always
        name: mysqld-exporter
    size: 3
    tolerations: []
    volumeSpec:
      persistentVolumeClaim:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
        storageClassName: csi-gp3-retained
  secretsName: redacted
  tls:
    enabled: false
    issuerConf:
      group: cert-manager.io
      kind: ClusterIssuer
      name: redacted
  unsafeFlags:
    tls: true
  updateStrategy: SmartUpdate
  upgradeOptions:
    apply: disabled
    schedule: 0 4 * * *
    versionServiceEndpoint: https://check.percona.com
  vaultSecretName: mysql-pxc-db-vault
status:
  backup: {}
  haproxy:
    labelSelectorPath: app.kubernetes.io/component=haproxy,app.kubernetes.io/instance=mysql-pxc-db,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    ready: 3
    size: 3
    status: ready
  host: nv-internal-haproxy-sql-30d1b04289e93037.elb.us-west-2.amazonaws.com
  logcollector: {}
  observedGeneration: 31
  pmm: {}
  proxysql: {}
  pxc:
    image: percona/percona-xtradb-cluster:8.0.41-32.1
    labelSelectorPath: app.kubernetes.io/component=pxc,app.kubernetes.io/instance=mysql-pxc-db,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    ready: 3
    size: 3
    status: ready
    version: 8.0.41-32.1
  ready: 6
  size: 6
  state: ready
Connect to the cluster with the mysql client.
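For example, from a host in the other VPC, using one of the external-dns hostnames from the CR above:

mysql -h db-vpbx.redacted.local -u vpbx -p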
Run the following SQL to check whether the clients are connecting over TLS:
select t.THREAD_ID,
t.PROCESSLIST_USER,
t.PROCESSLIST_HOST,
t.CONNECTION_TYPE,
sbt.VARIABLE_VALUE AS cipher
FROM performance_schema.threads t
LEFT JOIN performance_schema.status_by_thread sbt ON (
t.THREAD_ID = sbt.THREAD_ID
AND sbt.VARIABLE_NAME = 'Ssl_cipher'
)
WHERE t.PROCESSLIST_USER IS NOT NULL;
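A quicker spot check for a single session is also possible (standard MySQL; an empty value means the current connection is not using TLS):

SHOW SESSION STATUS LIKE 'Ssl_cipher';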
Version:
- percona/percona-xtradb-cluster-operator:1.17.0-pxc8.0-backup-pxb8.0.35
- percona/percona-xtradb-cluster:8.0.41-32.1
Logs:
Excerpt from the SQL query results showing the state; the vpbx user is the problem:
mysql> select t.THREAD_ID, t.PROCESSLIST_USER, t.PROCESSLIST_HOST, t.CONNECTION_TYPE, sbt.VARIABLE_VALUE AS cipher FROM performance_schema.threads t LEFT JOIN performance_schema.status_by_thread sbt ON (t.THREAD_ID = sbt.THREAD_ID AND sbt.VARIABLE_NAME = 'Ssl_cipher') WHERE t.PROCESSLIST_USER IS NOT NULL;
+-----------+------------------+----------------------------------------------------------------------------+-----------------+-----------------------------+
| THREAD_ID | PROCESSLIST_USER | PROCESSLIST_HOST | CONNECTION_TYPE | cipher |
+-----------+------------------+----------------------------------------------------------------------------+-----------------+-----------------------------+
| 5 | system user | NULL | NULL | NULL |
| 6 | system user | NULL | NULL | NULL |
| 2878 | vpbx | 10-65-0-42.mysql-pxc-db-haproxy-replicas.default.svc.cluster.local | SSL/TLS | ECDHE-RSA-AES128-GCM-SHA256 |
| 2879 | freeswitch | 10-65-0-47.mysql-pxc-db-haproxy-replicas.default.svc.cluster.local | SSL/TLS | ECDHE-RSA-AES128-GCM-SHA256 |
| 1009 | vpbx | 10-65-0-42.mysql-pxc-db-haproxy-replicas.default.svc.cluster.local | SSL/TLS | ECDHE-RSA-AES128-GCM-SHA256 |
| 2238 | vpbx | 10-65-0-195.mysql-pxc-db-haproxy-replicas.default.svc.cluster.local | SSL/TLS | ECDHE-RSA-AES128-GCM-SHA256 |
| 2404 | cdr_reader | 10-65-0-42.mysql-pxc-db-haproxy-replicas.default.svc.cluster.local | TCP/IP | |
| 59 | event_scheduler | localhost | NULL | NULL |
| 61 | system user | NULL | NULL | NULL |
| 2078 | vpbx | 10-65-0-42.mysql-pxc-db-haproxy-replicas.default.svc.cluster.local | SSL/TLS | ECDHE-RSA-AES128-GCM-SHA256 |
| 3205 | vpbx | mysql-pxc-db-haproxy-2.mysql-pxc-db-haproxy.default.svc.cluster.local | SSL/TLS | ECDHE-RSA-AES128-GCM-SHA256 |
| 1564 | vpbx | 10-65-0-47.mysql-pxc-db-haproxy-replicas.default.svc.cluster.local | SSL/TLS | ECDHE-RSA-AES128-GCM-SHA256 |
| 3688 | operator | 10-65-0-219.percona-xtradb-cluster-operator.pxc-operator.svc.cluster.local | SSL/TLS | TLS_AES_128_GCM_SHA256 |
Expected Result:
The vpbx user can connect to the database without TLS.
Actual Result:
The vpbx user connects to the database with TLS even with TLS disabled.
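A server-side cross-check that may help anyone triaging this (standard MySQL, nothing operator-specific): if the server itself still advertises TLS, a client that prefers TLS will negotiate it end-to-end even through a TCP-mode proxy, which is my understanding of how the operator's HAProxy passes connections through.

SHOW GLOBAL VARIABLES LIKE '%ssl%';
SHOW GLOBAL VARIABLES LIKE 'tls_version';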
Additional Information:
I’ve also tried setting tls_version= (an empty value) in the [mysqld] and [client] configuration sections to disable TLS, but that breaks things, since the operator, sidecars, and sync systems all seem to require TLS for the cluster to function.
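Concretely, the override I tried looked like this (an empty tls_version is supposed to disable all TLS protocol versions):

[mysqld]
tls_version=
[client]
tls_version=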
I’ll have to look at the ProxySQL setup to see whether it can solve the problem, but I don’t have any experience with it, so that’s going to be fun to work with.