Create PXC Cluster: sed: -e expression

Hello,
I have had the following problem (bug?):

  • PXC Operator 1.7.0
  • Cluster with 3 instances

Pod log / error when creating the cluster, first instance:

+ sed '/^\[mysqld\]/a wsrep_provider_options="pc.weight=10"\n' /etc/mysql/node.cnf
+ sed -r 's|^[#]?server_id=.*$|server_id=10|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?coredumper$|coredumper|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_node_address=.*$|wsrep_node_address=x.x.x.x|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_cluster_name=.*$|wsrep_cluster_name=btest-pxc|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_sst_donor=.*$|wsrep_sst_donor=|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_cluster_address=.*$|wsrep_cluster_address=gcomm://|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_node_incoming_address=.*$|wsrep_node_incoming_address=btest-pxc-0.btest-pxc.pxc.svc.cluster.local:3306|' /etc/mysql/node.cnf
sed: -e expression #1, char 72: unterminated `s' command

If I define the variable XTRABACKUP_PASSWORD in pxc-configure-pxc.sh, it continues to run (XTRABACKUP_PASSWORD=backup_password):

....
NODE_IP=$(hostname -I | awk ' { print $1 } ')
CLUSTER_NAME="$(hostname -f | cut -d'.' -f2)"
SERVER_ID=${HOSTNAME/$CLUSTER_NAME-}
NODE_NAME=$(hostname -f)
NODE_PORT=3306
XTRABACKUP_PASSWORD=backup_password
...
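
For clarity, these variables evaluate roughly as follows on the first pod of this cluster (illustrative values only, not taken from the actual pod):

# hypothetical walk-through of the expansions above
HOSTNAME=btest-pxc-0                     # pod hostname
CLUSTER_NAME=btest-pxc                   # result of: hostname -f | cut -d'.' -f2
SERVER_ID=${HOSTNAME/$CLUSTER_NAME-}     # strips "btest-pxc-" and leaves "0"
NODE_NAME=btest-pxc-0.btest-pxc.pxc.svc.cluster.local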

The password is, of course, originally defined in the Secret.

thx
Zoltan




Do you use a custom cr.yaml file?
Can you share it?


Yes, here is my custom cr.yaml:

#
# PXC Test Cluster
# Node          : 3
# SQL Proxy     : NO
# Monitor (pmm) : YES
# Backup        : NO
#
apiVersion: pxc.percona.com/v1-7-0
kind: PerconaXtraDBCluster
metadata:
  name: booqer
  namespace: pxc
  finalizers:
    - delete-pxc-pods-in-order
#    - delete-proxysql-pvc
    - delete-pxc-pvc
#  annotations:
#    percona.com/issue-vault-token: "true"
spec:
  crVersion: 1.7.0
  secretsName: pxc-secrets-booqer
  vaultSecretName: keyring-secret-vault
  sslSecretName: my-cluster-ssl
  sslInternalSecretName: my-cluster-ssl-internal
  logCollectorSecretName: my-log-collector-secrets
  allowUnsafeConfigurations: true
#  pause: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: recommended
    schedule: "11 4 * * 6"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.21-12.1
    autoRecovery: true
    configuration: |
        [client]
        default-character-set=utf8mb4
        [mysql]
        default-character-set=utf8mb4
        [mysqld]
        character-set-client-handshake=false
        character-set-server=utf8mb4
        collation-server="utf8mb4_unicode_ci"
        #innodb_buffer_pool_size=805306368
        max_connections=2048
        sql_mode = "STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE"
        #wsrep_debug=ON
        #wsrep_provider_options="gcache.size=1G; gcache.recover=yes"
    containerSecurityContext:
      privileged: false
    podSecurityContext:
      runAsUser: 1001
      runAsGroup: 1001
      supplementalGroups: [1001]
    resources:
      requests:
        memory: 1G
        cpu: "1"
#        ephemeral-storage: 1Gi
      limits:
        memory: 4G
        cpu: "4"
#        ephemeral-storage: 1Gi
#    nodeSelector:
#      disktype: ssd
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: "cephfs"
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 21Gi
    gracePeriod: 600
  haproxy:
    enabled: true
    size: 2
    image: percona/percona-xtradb-cluster-operator:1.7.0-haproxy
    resources:
      requests:
        memory: 30Mi
        cpu: 50m
      limits:
        memory: 500Mi
        cpu: 500m
#    priorityClassName: high-priority
#    nodeSelector:
#      disktype: ssd
#    sidecarResources:
#      requests:
#        memory: 1G
#        cpu: 500m
#      limits:
#        memory: 2G
#        cpu: 600m
#    serviceAccountName: percona-xtradb-cluster-operator-workload
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    gracePeriod: 30
#   loadBalancerSourceRanges:
#     - 10.0.0.0/8
#   serviceAnnotations:
#     service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  proxysql:
    enabled: false
    size: 1
    image: percona/percona-xtradb-cluster-operator:1.7.0-proxysql
    resources:
      requests:
        memory: 1G
        cpu: 600m
#      limits:
#        memory: 1G
#        cpu: 700m
#    priorityClassName: high-priority
#    nodeSelector:
#      disktype: ssd
#    sidecarResources:
#      requests:
#        memory: 1G
#        cpu: 500m
#      limits:
#        memory: 2G
#        cpu: 600m
#    serviceAccountName: percona-xtradb-cluster-operator-workload
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    volumeSpec:
#      emptyDir: {}
#      hostPath:
#        path: /data
#        type: Directory
      persistentVolumeClaim:
#        storageClassName: standard
#        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    gracePeriod: 30
#   loadBalancerSourceRanges:
#     - 10.0.0.0/8
#   serviceAnnotations:
#     service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  logcollector:
    enabled: false
    image: percona/percona-xtradb-cluster-operator:1.7.0-logcollector
  pmm:
    enabled: true
    image: percona/pmm-client:2.12.0
    serverHost: monitoring-service
    serverUser: pmm
#    pxcParams: "--disable-tablestats-limit=2000"
#    proxysqlParams: "--custom-labels=CUSTOM-LABELS"
    resources:
      requests:
        memory: 200M
        cpu: 500m
  backup:
    enabled: false
    image: percona/percona-xtradb-cluster-operator:1.7.0-pxc8.0-backup
#    serviceAccountName: percona-xtradb-cluster-operator
#    imagePullSecrets:
#      - name: private-registry-credentials
    pitr:
      enabled: false
    #   storageName: STORAGE-NAME-HERE
    #   timeBetweenUploads: 60
    # storages:
    #   ceph-fs:
    #     type: filesystem
    #     volume:
    #       persistentVolumeClaim:
    #         storageClassName: "cephfs"
    #         accessModes:
    #           - ReadWriteOnce
    #         resources:
    #           requests:
    #             storage: 56Mi
    # schedule:
    #   - name: "sat-night-backup"
    #     schedule: "0 0 * * 6"
    #     keep: 3
    #     storageName: s3-us-west
    #   - name: "daily-backup"
    #     schedule: "0 0 * * *"
    #     keep: 5
    #     storageName: fs-pvc

Hello @Zoltan_Morvai ,

I tried to deploy the cr.yaml you shared and it worked for me. I removed the podSecurityContext section completely and changed the storage configuration (I have a different storage class).
Do you have any other customizations?
How did you come to the conclusion that pxc-configure-pxc.sh needs to be changed?


Thank you for the reply.
Yes, I have also changed the Secret (see below).
I saw at which line the script error occurred, and then it was clear to me that the variable XTRABACKUP_PASSWORD was not defined.

apiVersion: v1
kind: Secret
metadata:
  name: pxc-secrets-booqer
type: Opaque
stringData:
  root: yu73Pu8ueZJ8WyRj9947iaoa
  xtrabackup: 4fW9d27i3KT3p4UsQm8
  monitor: monitory
  clustercheck: clustercheckpassword
  proxyadmin: admin_password
  pmmserver: supa|^|pazz
  operator: xTc7Fwpg8N2344G79qg
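
For completeness, a quick way to double-check what the pod will actually receive from this Secret (names taken from above; the command itself is just an example):

kubectl -n pxc get secret pxc-secrets-booqer -o jsonpath='{.data.xtrabackup}' | base64 -d; echo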

At first glance it seems that the use of the '|' character in the password is breaking sed. As can be seen from the first snippet, sed uses the same '|' as its delimiter.
Maybe you can try the same setup and just replace '|' with some other special character first. Please write back if this helps, as it can be useful for others.
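
A minimal sketch of that suspicion (hypothetical commands, not the operator's actual script): the node.cnf rewrite uses '|' as the sed delimiter, so a '|' inside a substituted value would be parsed as an extra delimiter instead of literal text:

PASSWORD='supa|^|pazz'
# fails with a sed parse error: the '|' characters in $PASSWORD are read as delimiters/flags
echo 'xtrabackup_password=old' | sed -r "s|^xtrabackup_password=.*$|xtrabackup_password=${PASSWORD}|"
# works: escape the delimiter inside the value before substituting it
ESCAPED=${PASSWORD//|/\\|}
echo 'xtrabackup_password=old' | sed -r "s|^xtrabackup_password=.*$|xtrabackup_password=${ESCAPED}|"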


It worked for me with these Secrets as well :confused:
The special character is not the problem.


Maybe one more point: I have 2 namespaces where I have installed the operators separately. In the first NS it worked without problems; there is only 1 replica there. I only get the error in the 2nd NS with 3 replicas.


Another interesting thing: I wanted to install 2 different PXC clusters in the same NS. Let's call them PXC-A and PXC-B. So 1 operator with 2 PXC clusters (not operators with overlapping namespaces).
After successfully installing PXC-A, PXC-B could not be installed. What was really interesting: I saw in the PXC-A DB log that it tried to reach the members of PXC-B!
That shouldn't happen, should it?
Then I moved PXC-B to a separate NS, but that didn't work either. :frowning:


Hello spronin,

any news?
Was I right about the issue with the 2 instances (it is reproducible), or is the problem just on my side?

Thank you.


Hello @Zoltan_Morvai ,

sorry for not coming back sooner. I’m running two clusters in one namespace:

cluster1-haproxy-0                                 2/2     Running   0          40m
cluster1-haproxy-1                                 2/2     Running   0          39m
cluster1-haproxy-2                                 2/2     Running   0          38m
cluster1-pxc-0                                     3/3     Running   0          36m
cluster1-pxc-1                                     3/3     Running   0          39m
cluster1-pxc-2                                     3/3     Running   0          38m
cluster2-haproxy-0                                 2/2     Running   1          34m
cluster2-pxc-0                                     3/3     Running   0          4m46s
cluster2-pxc-1                                     3/3     Running   0          29m
cluster2-pxc-2                                     3/3     Running   0          27m

They are both healthy and ready:

$ kubectl get pxc
NAME       ENDPOINT                   STATUS   PXC   PROXYSQL   HAPROXY   AGE
cluster1   cluster1-haproxy.default   ready    3                3         41m
cluster2   cluster2-haproxy.default   ready    3                1         34m

And I don’t see logs overlapping.
I would be curious to learn more about your use case. Are you running the operator in cluster-wide mode or namespace-scoped?
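
If it helps, one way to check the mode (assuming the usual operator deployment name and the WATCH_NAMESPACE variable it is configured with; typically an empty value means cluster-wide):

kubectl -n pxc get deployment percona-xtradb-cluster-operator -o yaml | grep -A1 WATCH_NAMESPACE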


no, I do not use cluster-wide mode


OK, thanks.
Then it must be due to my environment.
One more thing: my test environment (filesystem) is relatively slow. Initialization takes 3-4 minutes or longer; can that cause problems?


@Zoltan_Morvai I don't believe slow storage can cause any of these problems, but I will see if I can reproduce it with chaos engineering.
Maybe we are catching some weird race condition.


We have the same issue with the operator on OKD / OpenShift, after installing the Percona XtraDB Cluster Operator via OperatorHub (latest v1.7.0).
Before "Create PerconaXtraDBClusters", we created the namespace pxc and applied the secrets.yaml (see the "Before You Start" section at operatorhub.io/operator/percona-xtradb-cluster-operator; we even tried with the default secrets from that section).
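
The preparation amounted to roughly these commands (the secrets.yaml path refers to our local copy):

oc create namespace pxc
oc apply -n pxc -f secrets.yaml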

We also tried the default YAML:

apiVersion: pxc.percona.com/v1-7-0
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
  finalizers:
    - delete-pxc-pods-in-order
  namespace: pxc
spec:
  crVersion: 1.7.0
  secretsName: my-cluster-secrets
  vaultSecretName: keyring-secret-vault
  sslSecretName: my-cluster-ssl
  sslInternalSecretName: my-cluster-ssl-internal
  logCollectorSecretName: my-log-collector-secrets
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: 'https://check.percona.com'
    apply: disabled
    schedule: 0 4 * * *
  pxc:
    size: 3
    image: 'percona/percona-xtradb-cluster:8.0.21-12.1'
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    podDisruptionBudget:
      maxUnavailable: 1
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 6G
    gracePeriod: 600
  haproxy:
    enabled: true
    size: 3
    image: 'percona/percona-xtradb-cluster-operator:1.7.0-haproxy'
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  proxysql:
    enabled: false
    size: 3
    image: 'percona/percona-xtradb-cluster-operator:1.7.0-proxysql'
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  logcollector:
    enabled: true
    image: 'percona/percona-xtradb-cluster-operator:1.7.0-logcollector'
  pmm:
    enabled: false
    image: 'percona/pmm-client:2.12.0'
    serverHost: monitoring-service
    serverUser: pmm
  backup:
    image: 'percona/percona-xtradb-cluster-operator:1.7.0-pxc8.0-backup'
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
    storages:
      s3-us-west:
        type: s3
        s3:
          bucket: S3-BACKUP-BUCKET-NAME-HERE
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-2
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 6G
    schedule:
      - name: sat-night-backup
        schedule: 0 0 * * 6
        keep: 3
        storageName: s3-us-west
      - name: daily-backup
        schedule: 0 0 * * *
        keep: 5
        storageName: fs-pvc

After creation, the pod cluster1-pxc-0 always gets stuck in CrashLoopBackOff. The log of the pxc container shows this error:

+ egrep -q '^[#]?log-error' /etc/mysql/node.cnf
+ sed '/^\[mysqld\]/a log-error=/var/lib/mysql/mysqld-error.log\n' /etc/mysql/node.cnf
+ egrep -q '^[#]?wsrep_sst_donor' /etc/mysql/node.cnf
+ sed '/^\[mysqld\]/a wsrep_sst_donor=\n' /etc/mysql/node.cnf
+ egrep -q '^[#]?wsrep_node_incoming_address' /etc/mysql/node.cnf
+ egrep -q '^[#]?wsrep_provider_options' /etc/mysql/node.cnf
+ sed '/^\[mysqld\]/a wsrep_provider_options="pc.weight=10"\n' /etc/mysql/node.cnf
+ sed -r 's|^[#]?server_id=.*$|server_id=10|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?coredumper$|coredumper|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_node_address=.*$|wsrep_node_address=10.200.16.49|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_cluster_name=.*$|wsrep_cluster_name=cluster1-pxc|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_sst_donor=.*$|wsrep_sst_donor=|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_cluster_address=.*$|wsrep_cluster_address=gcomm://|' /etc/mysql/node.cnf
+ sed -r 's|^[#]?wsrep_node_incoming_address=.*$|wsrep_node_incoming_address=cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local:3306|' /etc/mysql/node.cnf
sed: -e expression #1, char 65: unterminated `s' command
, err: exit status 1
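
(The log above is from the pxc container; something like the following retrieves it, including after a crash:)

oc -n pxc logs cluster1-pxc-0 -c pxc --previous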

Hi @mygov ,

We have a task tracking this issue: [K8SPXC-573] Pod cluster1-pxc-0 fails with error: sed: -e expression #1, char 65: unterminated `s' command on OpenShift 4.6.9 - Percona JIRA. It has been fixed, and the fix will be available in the next operator release (1.8.0).


Do you already know when the 1.8.0 release is coming?


If you fix it manually, does the PXC cluster then run cleanly?
I have been getting lock file problems all the time.


We are starting the release process. 1.8.0 will be available in two or three weeks.


@Slava_Sarzhan
When can we expect the 1.8.0 release of the XtraDB Cluster Operator in operatorhub.io?
