Nodes stop syncing after update to a primary key --- autoincrement?

I am not sure whether I need to set auto_increment_increment and auto_increment_offset, or whether I should set slave_exec_mode=IDEMPOTENT instead.

We have a 3-node cluster. Does this mean I need to do the following?

for server 1, this server generates IDs 1, 4, 7, 10, etc.:
auto_increment_increment = 3
auto_increment_offset = 1
for server 2, this server generates IDs 2, 5, 8, 11, etc.:
auto_increment_increment = 3
auto_increment_offset = 2
for server 3, this server generates IDs 3, 6, 9, 12, etc.:
auto_increment_increment = 3
auto_increment_offset = 3
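
If I am reading that right, server 1's my.cnf would carry something like this in its [mysqld] section (a sketch only, with servers 2 and 3 using offsets 2 and 3 respectively):

[mysqld]
# node 1 of 3: generates IDs 1, 4, 7, 10, ...
auto_increment_increment = 3   # step = total number of nodes
auto_increment_offset    = 1   # this node's position, 1-based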

The reason I ask is that we get this error whenever one node updates, deletes, or otherwise changes a primary key:

140523 1:32:00 [ERROR] Slave SQL: Could not execute Update_rows event on table core.unitIdentity; Can't find record in 'unitIdentity', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log FIRST, end_log_pos 977, Error_code: 1032
140523 1:32:00 [Warning] WSREP: RBR event 2 Update_rows apply warning: 120, 25647031
140523 1:32:00 [ERROR] WSREP: Failed to apply trx: source: f34d5522-b9d6-11e3-0800-4357f09cb2c2 version: 2 local: 0 state: APPLYING flags: 1 conn_id: 13636572 trx_id: 520371507 seqnos (l: 4304736, g: 25647031, s: 25647030, d: 25647029, ts: 1400823120103528904)
140523 1:32:00 [ERROR] WSREP: Failed to apply app buffer: seqno: 25647031, status: WSREP_FATAL
at galera/src/replicator_smm.cpp:apply_wscoll():52
at galera/src/replicator_smm.cpp:apply_trx_ws():118
140523 1:32:00 [ERROR] WSREP: Node consistency compromized, aborting...
140523 1:32:00 [Note] WSREP: Closing send monitor...
140523 1:32:00 [Note] WSREP: Closed send monitor.
140523 1:32:00 [Note] WSREP: gcomm: terminating thread
140523 1:32:00 [Note] WSREP: gcomm: joining thread
140523 1:32:00 [Note] WSREP: gcomm: closing backend
140523 1:32:05 [Note] WSREP: evs::proto(0c861e3d-b9d7-11e3-0800-f2e0a3aaac9f, LEAVING, view_id(REG,0c861e3d-b9d7-11e3-0800-f2e0a3aaac9f,16)) suspecting node: 134a6b08-b9d7-11e3-0800-c4344cdf375f
140523 1:32:06 [Note] WSREP: evs::proto(0c861e3d-b9d7-11e3-0800-f2e0a3aaac9f, LEAVING, view_id(REG,0c861e3d-b9d7-11e3-0800-f2e0a3aaac9f,16)) suspecting node: 134a6b08-b9d7-11e3-0800-c4344cdf375f
140523 1:32:06 [Note] WSREP: view(view_id(NON_PRIM,0c861e3d-b9d7-11e3-0800-f2e0a3aaac9f,16) memb {
0c861e3d-b9d7-11e3-0800-f2e0a3aaac9f,
} joined {
} left {
} partitioned {
134a6b08-b9d7-11e3-0800-c4344cdf375f,
f34d5522-b9d6-11e3-0800-4357f09cb2c2,
})
140523 1:32:06 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
140523 1:32:06 [Note] WSREP: view((empty))
140523 1:32:06 [Note] WSREP: gcomm: closed
140523 1:32:06 [Note] WSREP: Flow-control interval: [16, 16]
140523 1:32:06 [Note] WSREP: Received NON-PRIMARY.
140523 1:32:06 [Note] WSREP: Shifting SYNCED -> OPEN (TO: 25647031)
140523 1:32:06 [Note] WSREP: Received self-leave message.
140523 1:32:06 [Note] WSREP: Flow-control interval: [0, 0]
140523 1:32:06 [Note] WSREP: Received SELF-LEAVE. Closing connection.
140523 1:32:06 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 25647031)
140523 1:32:06 [Note] WSREP: RECV thread exiting 0: Success
140523 1:32:06 [Note] WSREP: recv_thread() joined.
140523 1:32:06 [Note] WSREP: Closing slave action queue.
140523 1:32:06 [Note] WSREP: /usr/sbin/mysqld: Terminated.
140523 01:32:06 mysqld_safe Number of processes running now: 0
140523 01:32:06 mysqld_safe WSREP: not restarting wsrep node automatically
140523 01:32:06 mysqld_safe mysqld from pid file /var/lib/mysql/balpercona2.bal.sagedining.com.pid ended

I found some info at this link: http://www.mysqlperformanceblog.com/2011/01/12/conflict-avoidance-with-auto_increment_incremen-and-auto_increment_offset/
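
For reference, the current effective values can be checked on each node with standard SHOW VARIABLES queries:

SHOW GLOBAL VARIABLES LIKE 'auto_increment%';
SHOW GLOBAL VARIABLES LIKE 'slave_exec_mode';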

Hi shockwave,

I'm not sure exactly whether it's a primary-key error, but if you do have auto-increment fields that serve as primary keys, you will need to set per-server offsets so that you do not get primary-key conflicts.

I'm not an expert, but there are two ways of doing this. One is the wsrep_auto_increment_control variable, documented as:

Command Line: Yes
Configuration File: Yes
Scope: Global
Dynamic: Yes
Default Value: ON
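
If you go that route, enabling it is a one-liner in each node's my.cnf (it defaults to ON, so it is usually already active):

[mysqld]
wsrep_auto_increment_control = ON

Since the variable is global and dynamic, SET GLOBAL wsrep_auto_increment_control = ON; also works at runtime, without a restart.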

The other way is setting auto_increment_increment and auto_increment_offset directly in our my.cnf file. We set them statically like this:

for server 1, this server generates IDs 1, 4, 7, 10, etc.:
auto_increment_increment = 3
auto_increment_offset = 1
for server 2, this server generates IDs 2, 5, 8, 11, etc.:
auto_increment_increment = 3
auto_increment_offset = 2
for server 3, this server generates IDs 3, 6, 9, 12, etc.:
auto_increment_increment = 3
auto_increment_offset = 3
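
To illustrate what those settings do, here is a quick demo against a hypothetical throwaway table (not from our schema), run with server 1's settings:

SET SESSION auto_increment_increment = 3;
SET SESSION auto_increment_offset = 1;
CREATE TABLE demo (id INT AUTO_INCREMENT PRIMARY KEY, v CHAR(1));
INSERT INTO demo (v) VALUES ('a'), ('b'), ('c');
SELECT id FROM demo;  -- returns 1, 4, 7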

By setting them statically, we know for sure that our IDs are assigned correctly. What we noticed with wsrep_auto_increment_control is that the increment tracks the number of nodes currently connected, not the number of nodes that are supposed to be in the cluster (i.e. if one node drops out due to network problems, the increment shrinks by 1). We were not sure how that would affect our application.

So you recommend that I add wsrep_auto_increment_control = ON to each node's my.cnf, and it will automatically set the proper increment and offset as the cluster grows and shrinks?

Or do you recommend the static setup that you mentioned?