XtraDB Operator - MySQL multi-master and replica

Good morning
Is the procedure in the link below also valid for a MySQL multi-master installation?

Hello @Giuseppe_Miraglia,
Can you please explain what you mean by ‘Mysql Multimaster’? The docs you linked are for running our Operator for PXC in Kubernetes.

When I follow the guide at this link

I receive the error at line 331 on this page

The documentation you are referencing is for ‘multi-cluster’, not ‘multi-master’. In fact, if you look at the same source code file, just a few lines up, you see we don’t support multi-master.

Can you please explain what your end goal is? A typical PXC has 3 nodes: 1 node is the writer, and the other 2 are readers. It is extremely rare to configure PXC with all 3 nodes as writers.

Additionally, the docs you linked are for configuring a ‘source PXC’ replicating to a ‘replica PXC’. These are not configured in a way that you would write to either cluster. You always write to a single node within a single cluster.
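
At a high level, those docs configure this with a replicationChannels section in each cluster's cr.yaml: the source side sets isSource: true, and the replica side sets isSource: false plus a sourcesList pointing at the source nodes. For example, the source side looks roughly like this (the channel name is a placeholder, not a value from any real setup):

  replicationChannels:
  - name: cluster1_to_cluster2
    isSource: true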

Sorry, I was imprecise. I installed by following the instructions at this link, using HAProxy.

Afterwards, I used the directions for multi-site replication, but I received the error indicated in the previous post.

Please share your full cr.yaml so we can examine it.

Sorry for the delay.

My error:

Operator values:
COMPUTED VALUES:
USER-SUPPLIED VALUES: null

affinity: {}
containerSecurityContext: {}
disableTelemetry: false
extraEnvVars: []
fullnameOverride: ""
image: ""
imagePullPolicy: IfNotPresent
imagePullSecrets: []
logLevel: INFO
logStructured: false
nameOverride: ""
nodeSelector:
  wrk: apigw-db
operatorImageRepository: percona/percona-xtradb-cluster-operator
podAnnotations: {}
rbac:
  create: true
replicaCount: 1
resources:
  limits:
    cpu: 200m
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 20Mi
serviceAccount:
  create: true
tolerations: []
watchAllNamespaces: false

DB values:
USER-SUPPLIED VALUES:
annotations: {}
crVersion: 1.16.1
enableCRValidationWebhook: false
enableVolumeExpansion: false
finalizers:
- percona.com/delete-pxc-pods-in-order
fullnameOverride: ""
haproxy:
  affinity:
    antiAffinityTopologyKey: kubernetes.io/hostname
  annotations: {}
  enabled: true
  gracePeriod: 30
  image: percona/haproxy:2.8.11
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  labels: {}
  livenessDelaySec: 300
  livenessProbes:
    failureThreshold: 4
    initialDelaySeconds: 60
    periodSeconds: 30
    successThreshold: 1
    timeoutSeconds: 5
  nodeSelector:
    wrk: apigw-db
  podDisruptionBudget:
    maxUnavailable: 1
  readinessDelaySec: 15
  readinessProbes:
    failureThreshold: 3
    initialDelaySeconds: 15
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 1
  resources:
    limits: {}
    requests:
      cpu: 600m
      memory: 1G
  sidecarPVCs: []
  sidecarResources:
    limits: {}
    requests: {}
  sidecarVolumes: []
  sidecars: []
  size: 3
  tolerations: []
ignoreAnnotations: []
ignoreLabels: []
logcollector:
  enabled: true
  image: percona/percona-xtradb-cluster-operator:1.16.1-logcollector-fluentbit3.2.2
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  resources:
    limits: {}
    requests:
      cpu: 200m
      memory: 100M
nameOverride: ""
operatorImageRepository: percona/percona-xtradb-cluster-operator
pause: false
pxc:
  affinity:
    antiAffinityTopologyKey: kubernetes.io/hostname
  annotations: {}
  autoRecovery: true
  certManager: false
  gracePeriod: 600
  image:
    repository: percona/percona-xtradb-cluster
    tag: 8.4.3-3.1
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  labels: {}
  livenessDelaySec: 300
  livenessProbes:
    failureThreshold: 3
    initialDelaySeconds: 300
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  replicationChannels:
  - name: na_to_ba
    isSource: true
  expose:
    enabled: true
    type: LoadBalancer
  nodeSelector:
    wrk: apigw-db
  persistence:
    accessMode: ReadWriteOnce
    enabled: true
    size: 2Gi
  podDisruptionBudget:
    maxUnavailable: 1
  readinessDelaySec: 15
  readinessProbes:
    failureThreshold: 5
    initialDelaySeconds: 15
    periodSeconds: 30
    successThreshold: 1
    timeoutSeconds: 15
  resources:
    limits: {}
    requests:
      cpu: 600m
      memory: 1G
  sidecarPVCs: []
  sidecarResources:
    limits: {}
    requests: {}
  sidecarVolumes: []
  sidecars: []
  size: 3
  tolerations: []
secrets:
  tls: {}
tls:
  enabled: true
updateStrategy: SmartUpdate
upgradeOptions:
  apply: disabled
  schedule: 0 4 * * *
  versionServiceEndpoint: https://check.percona.com

I don’t see the replica configuration in your cr.yaml. If you look at this example, Multi-cluster and multi-region deployment - Percona Operator for MySQL, you can see two sections for replicationChannels: one for the source, and one for the replica. Can you add the replica config to your cr.yaml?
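
For your channel, the replica cluster's values would need something like the following (a sketch only; the host is a placeholder for the address where your source PXC nodes are exposed, e.g. via the LoadBalancer you enabled):

  replicationChannels:
  - name: na_to_ba
    isSource: false
    sourcesList:
    - host: <source-pxc-address>
      port: 3306
      weight: 100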

I use Helm.
Do I need to insert both the source section and the replica section in both the source and replica clusters?
I understood that I had to set the replicationChannels section only once per installation: isSource: true on the master and isSource: false on the replica.
I only posted the master's values, because that is the cluster that gives me the error.

I’m looking at the code, percona-xtradb-cluster-operator/pkg/apis/pxc/v1/pxc_types.go at main · percona/percona-xtradb-cluster-operator · GitHub

If len(c.PXC.ReplicationChannels) > 0, which is true for your cr, the code looks at the first entry, ReplicationChannels[0].IsSource, and uses it to set isSrc. In your case, that's true. Then it loops over all of c.PXC.ReplicationChannels, checks whether each .Name has issues, and then checks whether isSrc (set just above) is not equal to the current channel's IsSource, which on the first iteration of the loop is ReplicationChannels[0].IsSource itself.

Seems to me that it should be passing the check, since on the first iteration we are comparing the same value to itself.
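
To make the reasoning concrete, here is a small runnable paraphrase of that check (simplified stand-in types and a hypothetical error message; this is not the actual operator source):

package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the operator's types.
type ReplicationChannel struct {
	Name     string
	IsSource bool
}

type PXCSpec struct {
	ReplicationChannels []ReplicationChannel
}

func checkReplicationChannels(pxc PXCSpec) error {
	if len(pxc.ReplicationChannels) == 0 {
		return nil
	}
	// isSrc is taken from the FIRST channel in the list.
	isSrc := pxc.ReplicationChannels[0].IsSource
	for _, channel := range pxc.ReplicationChannels {
		// (the .Name validation is elided here)
		// On the first iteration, channel is ReplicationChannels[0] itself,
		// so this compares isSrc to itself and should never fail for a
		// single-channel config like the one in the cr.yaml above.
		if isSrc != channel.IsSource {
			return errors.New("all replication channels must have the same isSource value")
		}
	}
	return nil
}

func main() {
	// The single channel from the posted values passes this check.
	spec := PXCSpec{ReplicationChannels: []ReplicationChannel{
		{Name: "na_to_ba", IsSource: true},
	}}
	fmt.Println(checkReplicationChannels(spec)) // prints: <nil>
}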

I would open a bug at https://jira.percona.com/ so our developers can investigate this further. Be sure to supply your ENTIRE cr.yaml (without comments).