
Network partition results in two non-Primary components.

jscaltreto
I have a cluster with 6 nodes plus one garbd (7 members total), split across two datacenters. During network maintenance, the connection between the two datacenters was lost. What I'd expect to happen is for the side with 3 members to go non-Primary and the side with 4 members to remain Primary. However, both sides went non-Primary.
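For reference, my expectation comes from the quorum arithmetic: with default pc.weight=1 on every member (an assumption — I haven't set custom weights), a component should stay Primary when it keeps strictly more than half of the previous Primary component's total weight. A quick sanity check:

```shell
# Quorum check assuming default pc.weight=1 everywhere:
# a component stays Primary when 2 * (its member count) > total members.
total=7
side=4
if [ $((2 * side)) -gt "$total" ]; then
  echo "$side of $total: quorum held (stays Primary)"
else
  echo "$side of $total: quorum lost (goes non-Primary)"
fi
```

By that arithmetic, 4 of 7 should have kept quorum, which is why both sides going non-Primary surprised me.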

From the logs on a node in the 4-member DC:
2018-10-13T02:04:00.174208Z 0 [Note] WSREP: declaring 28a64560 at tcp://10.70.56.64:4567 stable
2018-10-13T02:04:00.174239Z 0 [Note] WSREP: declaring de4135a3 at tcp://10.70.56.30:4567 stable
2018-10-13T02:04:00.174274Z 0 [Note] WSREP: declaring ef2cf2a7 at tcp://10.70.56.31:4567 stable
2018-10-13T02:04:00.175339Z 0 [Note] WSREP: Current view of cluster as seen by this node
view (view_id(NON_PRIM,01084192,116)
memb {
        01084192,1
        28a64560,0
        de4135a3,1
        ef2cf2a7,1
        }
joined {
        }
left {
        }
partitioned {
        118c3740,2
        2118c991,2
        31e159f5,2
        }
)
2018-10-13T02:04:00.175366Z 0 [Warning] WSREP: node uuid: 118c3740 last_prim(type: 3, uuid: 01084192) is inconsistent to restored view(type: V_NON_PRIM, uuid: 01084192
2018-10-13T02:04:00.561619Z 0 [Note] WSREP: gcomm: connected
2018-10-13T02:04:00.561705Z 0 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2018-10-13T02:04:00.561787Z 0 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 4
2018-10-13T02:04:00.561810Z 0 [Note] WSREP: Waiting for SST/IST to complete.
2018-10-13T02:04:00.561817Z 0 [Note] WSREP: Flow-control interval: [500, 500]
2018-10-13T02:04:00.561835Z 0 [Note] WSREP: Trying to continue unpaused monitor
2018-10-13T02:04:00.561839Z 0 [Note] WSREP: Received NON-PRIMARY.
2018-10-13T02:04:00.562088Z 1 [Note] WSREP: New cluster view: global state: dc9de3aa-f248-11e7-b06c-e7837e3639b5:48150346, view# -1: non-Primary, number of nodes: 4, my index: 0, protocol version -1
2018-10-13T02:04:00.562101Z 1 [Note] WSREP: Setting wsrep_ready to false
2018-10-13T02:04:00.562108Z 1 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-13T02:04:03.061881Z 0 [Note] WSREP: (01084192, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://10.60.84.54:4567 timed out, no messages seen in PT3S (gmcast.peer_timeout)

And from a node in the 3-member DC:
2018-10-13T02:03:39.503205Z 0 [Note] WSREP: declaring 118c3740 at tcp://10.60.84.54:4567 stable
2018-10-13T02:03:39.503220Z 0 [Note] WSREP: declaring 31e159f5 at tcp://10.60.84.56:4567 stable
2018-10-13T02:03:39.503207Z 0 [Note] WSREP: Flow-control interval: [500, 500]
2018-10-13T02:03:39.503238Z 0 [Note] WSREP: Trying to continue unpaused monitor
2018-10-13T02:03:39.503242Z 0 [Note] WSREP: Received NON-PRIMARY.
2018-10-13T02:03:39.503246Z 0 [Note] WSREP: Shifting SYNCED -> OPEN (TO: 48150346)
2018-10-13T02:03:39.503259Z 0 [Warning] WSREP: Action message in non-primary configuration from member 0
2018-10-13T02:03:39.503264Z 0 [Warning] WSREP: Action message in non-primary configuration from member 0
2018-10-13T02:03:39.503268Z 0 [Warning] WSREP: Action message in non-primary configuration from member 0
2018-10-13T02:03:39.503271Z 0 [Warning] WSREP: Action message in non-primary configuration from member 0
2018-10-13T02:03:39.503277Z 0 [Warning] WSREP: Action message in non-primary configuration from member 0
2018-10-13T02:03:39.503281Z 0 [Warning] WSREP: Action message in non-primary configuration from member 0
2018-10-13T02:03:39.503284Z 0 [Warning] WSREP: Action message in non-primary configuration from member 0
2018-10-13T02:03:39.503341Z 2 [Note] WSREP: New cluster view: global state: dc9de3aa-f248-11e7-b06c-e7837e3639b5:48150346, view# -1: non-Primary, number of nodes: 3, my index: 1, protocol version 3
2018-10-13T02:03:39.503358Z 2 [Note] WSREP: Setting wsrep_ready to false
2018-10-13T02:03:39.503400Z 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-13T02:03:39.503674Z 0 [Note] WSREP: Current view of cluster as seen by this node
view (view_id(NON_PRIM,118c3740,111)
memb {
        118c3740,2
        2118c991,2
        31e159f5,2
        }
joined {
        }
left {
        }
partitioned {
        01084192,1
        28a64560,0
        de4135a3,1
        ef2cf2a7,1
        }
)
2018-10-13T02:03:39.503737Z 0 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 1, memb_num = 3
2018-10-13T02:03:39.503750Z 0 [Note] WSREP: Flow-control interval: [500, 500]
2018-10-13T02:03:39.503753Z 0 [Note] WSREP: Trying to continue unpaused monitor
2018-10-13T02:03:39.503756Z 0 [Note] WSREP: Received NON-PRIMARY.
2018-10-13T02:03:39.503803Z 2 [Note] WSREP: New cluster view: global state: dc9de3aa-f248-11e7-b06c-e7837e3639b5:48150346, view# -1: non-Primary, number of nodes: 3, my index: 1, protocol version 3
2018-10-13T02:03:39.503819Z 2 [Note] WSREP: Setting wsrep_ready to false
2018-10-13T02:03:39.503823Z 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-10-13T02:03:41.780368Z 0 [Note] WSREP: (2118c991, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://10.70.56.30:4567 timed out, no messages seen in PT3S (gmcast.peer_timeout)

So my first question is: why did both components go non-Primary?

Since I'm in this state, I'd like to re-bootstrap the cluster. According to the docs, I should be able to run:
SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';

However, I'm unable to connect to any of the nodes, either over TCP or via the unix socket:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (111)

Because of this, I also can't connect in order to gracefully shut down a node. Is there any way for me to restart the cluster without SIGKILLing a node and running pxc-bootstrap?
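For context, the recovery sequence I'm considering looks roughly like this (a sketch only — the paths are defaults I'd have to verify, and the `safe_to_bootstrap` flag in `grastate.dat` only exists on newer Galera versions):

```shell
# Gracefully stop the node with SIGTERM instead of SIGKILL (mysqld traps
# SIGTERM and performs a normal shutdown):
#   kill -TERM "$(pgrep -x mysqld)"

# Before bootstrapping, check grastate.dat on each node; newer Galera
# versions mark at most one node with safe_to_bootstrap: 1. Using a
# sample file here that mirrors the state above:
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    dc9de3aa-f248-11e7-b06c-e7837e3639b5
seqno:   -1
safe_to_bootstrap: 0
EOF
awk '/^safe_to_bootstrap:/ {print $2}' /tmp/grastate.dat
```

As I understand it, a node showing `safe_to_bootstrap: 0` would need that flag edited to 1 (or `pc.bootstrap` applied) before it can form a new Primary component — but I'd rather not guess at this.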

Comments

  • przemek, Percona Support Engineer
    The DC with 4 members appears to have suffered some inconsistency issue around this time, hence the "Waiting for SST/IST to complete." message. So apparently the network split between the two DCs either happened at a bad time, or something else happened before it.
    Can you upload the complete logs from these nodes to provide more details?

    Also, a node switching to a non-Primary state is not by itself a reason for it to refuse connections. Were all nodes refusing connections?
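    One quick way to tell the two failure modes apart on each node (a minimal sketch; `pgrep` assumed available): if mysqld is still alive while clients get error 2002 on the socket, that points at a socket-path or networking issue rather than the non-Primary state; if the process is gone, that explains the refusal outright.

```shell
# Check whether mysqld is still alive even though client connections
# fail with error 2002. A non-Primary node should still accept
# connections (and report wsrep_cluster_status = non-Primary).
if pgrep -x mysqld >/dev/null 2>&1; then
  state="running"
else
  state="not running"
fi
echo "mysqld process is $state"
```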