Update status error

Hi,

I’m getting this error in the operator log:

{"level":"error","ts":1605766525.9552927,"logger":"controller_perconaxtradbcluster","msg":"Update status","error":"send update: Operation cannot be fulfilled on perconaxtradbclusters.pxc.percona.com \"cluster1\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:189\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:494\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
$ k get pods
NAME                                               READY   STATUS                  RESTARTS   AGE
cluster1-proxysql-0                                3/3     Running                 0          28m
cluster1-proxysql-1                                3/3     Running                 0          27m
cluster1-proxysql-2                                3/3     Running                 0          27m
cluster1-pxc-0                                     0/1     Init:CrashLoopBackOff   2          52s
percona-xtradb-cluster-operator-749b86b678-h8jlp   1/1     Running                 0          13m
$ k describe pod cluster1-pxc-0
Name:         cluster1-pxc-0
Namespace:    pxc
Priority:     0
Node:         node-1.internal/192.168.250.112
Start Time:   Thu, 19 Nov 2020 06:15:51 +0000
Labels:       app.kubernetes.io/component=pxc
              app.kubernetes.io/instance=cluster1
              app.kubernetes.io/managed-by=percona-xtradb-cluster-operator
              app.kubernetes.io/name=percona-xtradb-cluster
              app.kubernetes.io/part-of=percona-xtradb-cluster
              controller-revision-hash=cluster1-pxc-74f85d4868
              statefulset.kubernetes.io/pod-name=cluster1-pxc-0
Annotations:  percona.com/configuration-hash: d41d8cd98f00b204e9800998ecf8427e
              percona.com/ssl-hash: 57ef53b333e4d4eedbb9aad081fd4ef6
              percona.com/ssl-internal-hash: ee70e6b562c7e24292590033ce7c4652
Status:       Pending
IP:           10.233.100.16
IPs:
  IP:           10.233.100.16
Controlled By:  StatefulSet/cluster1-pxc
Init Containers:
  pxc-init:
    Container ID:  docker://b0b4a6440649c341a2843236af75acbe127342d3abbc4e7258d67aee78b598ca
    Image:         percona/percona-xtradb-cluster-operator:1.6.0
    Image ID:      docker-pullable://percona/percona-xtradb-cluster-operator@sha256:9871d6fb960b4ec498430a398a44eca08873591a6b6efb8a35349e79e24f3072
    Port:          <none>
    Host Port:     <none>
    Command:
      /pxc-init-entrypoint.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 19 Nov 2020 06:19:06 +0000
      Finished:     Thu, 19 Nov 2020 06:19:06 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:        2
      memory:     2G
    Environment:  <none>
    Mounts:
      /var/lib/mysql from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l2p5j (ro)
Containers:
  pxc:
    Container ID:
    Image:         percona/percona-xtradb-cluster:8.0.20-11.1
    Image ID:
    Ports:         3306/TCP, 4444/TCP, 4567/TCP, 4568/TCP, 33062/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /var/lib/mysql/pxc-entrypoint.sh
    Args:
      mysqld
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      2
      memory:   2G
    Liveness:   exec [/var/lib/mysql/liveness-check.sh] delay=300s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/var/lib/mysql/readiness-check.sh] delay=15s timeout=15s period=30s #success=1 #failure=5
    Environment:
      PXC_SERVICE:              cluster1-pxc-unready
      MONITOR_HOST:             %
      MYSQL_ROOT_PASSWORD:      <set to the key 'root' in secret 'internal-cluster1'>          Optional: false
      XTRABACKUP_PASSWORD:      <set to the key 'xtrabackup' in secret 'internal-cluster1'>    Optional: false
      MONITOR_PASSWORD:         <set to the key 'monitor' in secret 'internal-cluster1'>       Optional: false
      CLUSTERCHECK_PASSWORD:    <set to the key 'clustercheck' in secret 'internal-cluster1'>  Optional: false
      OPERATOR_ADMIN_PASSWORD:  <set to the key 'operator' in secret 'internal-cluster1'>      Optional: false
    Mounts:
      /etc/my.cnf.d from auto-config (rw)
      /etc/mysql/ssl from ssl (rw)
      /etc/mysql/ssl-internal from ssl-internal (rw)
      /etc/mysql/vault-keyring-secret from vault-keyring-secret (rw)
      /etc/percona-xtradb-cluster.conf.d from config (rw)
      /tmp from tmp (rw)
      /var/lib/mysql from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l2p5j (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  datadir:
    Type:          HostPath (bare host directory volume)
    Path:          /app/test/mysql-operator
    HostPathType:  Directory
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cluster1-pxc
    Optional:  true
  ssl-internal:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-ssl-internal
    Optional:    true
  ssl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-ssl
    Optional:    false
  auto-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      auto-cluster1-pxc
    Optional:  true
  vault-keyring-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  keyring-secret-vault
    Optional:    true
  default-token-l2p5j:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-l2p5j
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/unreachable:NoExecute for 30s
                 node.kubernetes.io/not-ready:NoExecute for 30s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m27s                  default-scheduler  Successfully assigned pxc/cluster1-pxc-0 to eranet-node-1.internal
  Normal   Pulled     4m30s (x4 over 5m23s)  kubelet            Successfully pulled image "percona/percona-xtradb-cluster-operator:1.6.0"
  Normal   Created    4m30s (x4 over 5m23s)  kubelet            Created container pxc-init
  Normal   Started    4m30s (x4 over 5m22s)  kubelet            Started container pxc-init
  Normal   Pulling    3m38s (x5 over 5m25s)  kubelet            Pulling image "percona/percona-xtradb-cluster-operator:1.6.0"
  Warning  BackOff    23s (x23 over 5m18s)   kubelet            Back-off restarting failed container

This is a standard K8s 1.18 cluster built using Kubespray on bare metal.

Using version 1.6.0

Any hint as to what could be wrong? Is it the init container that fails to start via pxc-init-entrypoint.sh? There is no log produced at this phase.

The only version that works okay is 1.4.0. I haven’t had any luck running 1.5.0 or 1.6.0 successfully in several tries.

Many thanks


Hello @Laimis,

Interesting.

  1. Could you please share your cr.yaml?

  2. Are you doing an installation from scratch, or is something else going on? An upgrade?

send update: Operation cannot be fulfilled on perconaxtradbclusters.pxc.percona.com \"cluster1\": the object has been modified; please apply your changes to the latest version and try again

Such a warning is sad and needs better engineering, but it is not an error. Feel free to ignore it.

Reason: during one reconcile loop the operator can change the custom resource twice. Such a situation is detected and reported to the logs, and during the next reconcile loop the operator makes a second attempt to change the custom resource. If you don’t see this message every 5 seconds, there is nothing to worry about.

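If you want to gauge how often it actually happens, something like this counts occurrences in the operator log (a rough sketch; the pod name and namespace are taken from the kubectl output earlier in this thread):

kubectl -n pxc logs percona-xtradb-cluster-operator-749b86b678-h8jlp | grep -c "Operation cannot be fulfilled"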

This is an installation from scratch.

My cr.yaml

apiVersion: pxc.percona.com/v1-6-0
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
  finalizers:
    - delete-pxc-pods-in-order
spec:
  crVersion: 1.6.0
  secretsName: my-cluster-secrets
  vaultSecretName: keyring-secret-vault
  sslSecretName: my-cluster-ssl
  sslInternalSecretName: my-cluster-ssl-internal
  allowUnsafeConfigurations: false
  updateStrategy: Disabled
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: recommended
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.20-11.1
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    tolerations:
    - key: "node.alpha.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 30
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 30
    volumeSpec:
      hostPath:
        path: /app/test/mysql-operator
        type: Directory
    gracePeriod: 600
  haproxy:
    enabled: false
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.6.0-haproxy
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    tolerations:
    - key: "node.alpha.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 30
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 30
    gracePeriod: 30
  proxysql:
    enabled: true
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.6.0-proxysql
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    tolerations:
    - key: "node.alpha.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 30
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 30
    volumeSpec:
      hostPath:
        path: /app/test/mysql-operator
        type: Directory
    gracePeriod: 30
  pmm:
    enabled: false
    image: percona/percona-xtradb-cluster-operator:1.6.0-pmm
    serverHost: monitoring-service
    serverUser: pmm

Many thanks


Sorry @Mykola, I didn’t reply to you sooner. That error is happening every 5 seconds.


Hello @Laimis,

thank you for submitting this. I have a suspicion that this is somehow related to permissions on the hostPath that you use for storage.

Could you please try to fetch the logs from the init container of PXC?


I used this command:

sudo mkdir -p /app/test/mysql-operator && sudo chown 1001:1000 -R /app/test/mysql-operator

I was not able to see logs (or anything useful) using

kubectl logs cluster1-pxc-0

so maybe the right command is kubectl logs cluster1-pxc-0 pxc-init
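
For a crash-looping init container, the --previous flag may also help to pull the output of the last failed attempt (the pxc namespace is taken from the describe output above):

kubectl -n pxc logs cluster1-pxc-0 -c pxc-init --previous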

Let me create a Percona cluster from the same YAML again one of these days. I’m pretty sure I will get the same issue in 1.6.0 or 1.5.0.


I have created a cluster so I can keep reporting back to you. It seems the issue is permission-related.

era@master-1:~$ k logs -n redmine-t redmine-cluster-pxc-0 -c pxc-init
++ id -u
++ id -g
+ install -o 99 -g 99 -m 0755 -D /pxc-entrypoint.sh /var/lib/mysql/pxc-entrypoint.sh
install: cannot create regular file '/var/lib/mysql/pxc-entrypoint.sh': Permission denied
era@master-1:~$

so I executed:

sudo chown 99:99 /app/test/mysql-operator

and it went further, until it reached the next issue:

era@master-1:~$ k logs -n redmine-t redmine-cluster-pxc-0 pxc -f
+ echo 'Initializing database'
+ mysqld --initialize-insecure --skip-ssl --datadir=/var/lib/mysql//sst-xb-tmpdir
Initializing database
mysqld: Can't create directory '/var/lib/mysql//sst-xb-tmpdir/' (OS errno 13 - Permission denied)

I can probably solve it on my own, so many thanks for the tip. I made quite a stupid mistake by not checking the log of the init container.

If you know how to organise the permissions correctly straight away, that would be even better; if there is anything else I could check, please let me know.


…and the magic permissions are:

sudo chown 99:1001 /app/test/mysql-operator
sudo chmod 775 /app/test/mysql-operator
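
For anyone verifying this later, the result can be checked with ls; the expected owner, group, and mode below assume the chown/chmod above:

ls -ld /app/test/mysql-operator   # should now show drwxrwxr-x ... 99 1001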

Thanks for narrowing down the issue


@Laimis great to hear that! I learnt something as well, thank you 🙂

By the way, on chown: the UID and GID are specific to your container and to the users on the OS. They might change.

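One quick way to check which UID and GID a given image actually runs with (a sketch; it assumes Docker is available on the node and uses --entrypoint to bypass the image’s startup script):

docker run --rm --entrypoint id percona/percona-xtradb-cluster:8.0.20-11.1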

Hi! I used the same method, but it doesn’t work.


Can you create files in that directory (the one mapped as hostPath in Kubernetes) using any regular Linux account? You may need to give 775 to each folder in the directory tree, up to the /folder level. Of course, you can narrow it down later for the dedicated user 99. I believe you can experiment with a command similar to this: sudo -u '#99' touch /folder/subfolder/test.sh

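Spelled out for the path used in this thread (note that sudo needs the '#' prefix to treat 99 as a numeric UID rather than a user name):

namei -l /app/test/mysql-operator                     # show the permissions of every path component
sudo -u '#99' touch /app/test/mysql-operator/test.sh  # write test as UID 99
sudo rm -f /app/test/mysql-operator/test.sh           # clean up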

Thanks! I have given 775 to each folder, and it’s working.


Good to hear.

I believe the folder configured in hostPath could have chmod 770, while the other folders higher up the tree are usually configured with 775. 775 is the common standard on Linux systems, but there are ways to narrow it down further if anything besides the DB lives there.

P.S. Having 770 ensures that other users can’t read the DB data, as a security measure.

Please correct me if anyone sees better ways to restrict access.

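Concretely, for the directory layout from this thread, that scheme would look something like this (a sketch; adjust the owner to whatever UID and GID your containers actually use):

sudo chmod 775 /app /app/test                # upper tree stays traversable
sudo chown 99:1001 /app/test/mysql-operator
sudo chmod 770 /app/test/mysql-operator      # the data dir itself: no access for others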

Thanks for the code.


Hello, can you please explain to me how you applied these commands through the cr.yaml? Thank you


It’s not possible to do this via Kubernetes. If you want to avoid touching Linux separately, you could use hostPath-based PVCs. For instance, you could explore the OpenEBS hostPath option.

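As a sketch of that approach: OpenEBS ships a default openebs-hostpath StorageClass backed by its local hostPath provisioner, so the volumeSpec in cr.yaml could point at a persistentVolumeClaim instead of a raw hostPath (the storage size below is just an example):

  pxc:
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: openebs-hostpath
        resources:
          requests:
            storage: 6Gi

The provisioner then creates and owns the backing directory on the node, so no manual chown or chmod is needed.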