1 of 3 nodes started achieves quorum? Huh?

I am new to XtraDB and bootstrapped a 3-node cluster. I verified the cluster was synced and connected via SHOW STATUS and a simple INSERT followed by a SELECT. All works well! Then I shut down two nodes to simulate an outage and successfully ran an INSERT on the last remaining node. How can this be? Quorum can't be achieved with 1 of 3 nodes connected. SHOW STATUS on the last running node shows a wsrep_cluster_size of 1. Shouldn't this be 3? I'm assuming I have something set up wrong… I am not ignoring quorum or split-brain…
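For reference, this is roughly how I checked the cluster state (a sketch; the status variable names are the standard wsrep ones, and the expected values assume a healthy node):

```shell
# Cluster size as seen by this node (expected 3 with all nodes up)
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"

# Component status: 'Primary' means this node is in the quorate partition
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'"

# Node state: 'Synced' means the node is fully caught up and serving traffic
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
```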

I used the tutorial here for initial configuration:

Any help is appreciated. Thank you!

I figured it out. Properly shutting down a server reduces wsrep_cluster_size, which in turn reduces the number of servers needed to meet quorum. To make it fail, I interrupted the traffic using iptables, and now the lone node rejects queries properly. But say I have 3 nodes and take one down for maintenance, leaving two, and then some network problem occurs: it's possible I could get a split-brain again. Is there any way to set a minimum cluster size for quorum, or a minimum quorum count? Any other way to avoid a possible split-brain in that scenario, other than only writing to one server, or not properly shutting down the maintenance server so wsrep_cluster_size stays at 3? TY
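For anyone trying to reproduce this, the traffic interruption I used looked roughly like this (a sketch; assumes the default Galera group-communication port 4567, run as root on the node being isolated):

```shell
# Drop Galera group communication so the node appears dead to its peers
# without a graceful shutdown (which would shrink the quorum count)
iptables -A INPUT  -p tcp --dport 4567 -j DROP
iptables -A OUTPUT -p tcp --dport 4567 -j DROP

# Undo the partition afterwards
iptables -D INPUT  -p tcp --dport 4567 -j DROP
iptables -D OUTPUT -p tcp --dport 4567 -j DROP
```

After a few seconds the surviving node should report wsrep_cluster_status as 'non-Primary' and refuse queries.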

You are correct: graceful shutdown of a node reduces the cluster size. Cluster membership is determined by which nodes are (or were) in the cluster, not by the my.cnf file (don't be confused about what wsrep_cluster_address does!).

Well, don’t leave only 2 nodes in the cluster without watching them. If you must run with 2 nodes for an extended period, you can weight the quorum (http://www.codership.com/wiki/doku.php?id=weighted_quorum) so one node would always win a split-brain, or you can spin up an arbitrator to replace the node you are working on as a voting member of the cluster (http://www.codership.com/wiki/doku.php?id=galera_arbitrator).
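To sketch both options (hostnames and the cluster name are placeholders; pc.weight is the Galera provider option for quorum weighting, default 1):

```shell
# Option 1: weighted quorum. In my.cnf on the node that should win a
# split-brain, give it a higher vote weight, then restart mysqld:
#   wsrep_provider_options="pc.weight=2"
# With weights 2 + 1, the weight-2 node alone still holds > 50% of 3 votes.

# Option 2: Galera Arbitrator. On a third host, run garbd so it votes
# in quorum calculations without storing any data:
garbd --group=my_cluster \
      --address="gcomm://node1,node2" \
      --daemon
```

The arbitrator is the cleaner fix for the maintenance scenario: it restores a third vote while one real node is down, so a network split between the two remaining data nodes leaves one side quorate.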