Third node cannot connect / 2 nodes in AWS / 1 in-house node behind NAT

Hi guys …

I always read the posts here and they normally solve my issues, but this time I'd like to take the time to explain my situation. Please bear with me …

All nodes are running Ubuntu 12.04.
The AWS nodes run Percona Server 5.5.34-31.1, Percona XtraDB Cluster (GPL), Release 31.1, wsrep_25.9.r3928.
Both AWS nodes have Elastic (public) IPs.
The in-house node runs Percona 5.5.41-25.11-853.

The AWS nodes are of course configured with their internal IPs (10.xxx.xxx.xxx), while the in-house node uses 172.21.12.11 behind NAT on a public IP (199.xxx.xxx.xxx).

Both AWS nodes list the in-house server's public IP in their gcomm:// address, and the in-house server lists the two Elastic IPs plus its own internal 172.21.12.11.
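To illustrate what I mean (with placeholder addresses, not my real IPs), the relevant my.cnf settings on the in-house node look roughly like this:

```ini
# Sketch of the in-house node's wsrep settings (ELASTIC_IP_1/2 are placeholders).
[mysqld]
# The two AWS Elastic IPs plus this node's own internal address:
wsrep_cluster_address = gcomm://ELASTIC_IP_1,ELASTIC_IP_2,172.21.12.11
wsrep_cluster_name    = Percona-XtraDB-Cluster

# On each AWS node, the list instead holds the other AWS node's internal
# 10.x address and the in-house server's public NAT IP:
# wsrep_cluster_address = gcomm://OTHER_AWS_INTERNAL_IP,INHOUSE_PUBLIC_IP
```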

And here is the dump of the error log:

tail -f /var/lib/mysql/novo.err

150612 23:25:52 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150612 23:25:52 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.UoqEvJ' --pid-file='/var/lib/mysql/noomedici-recover.pid'
150612 23:25:55 mysqld_safe WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
150612 23:25:55 [Note] WSREP: wsrep_start_position var submitted: '00000000-0000-0000-0000-000000000000:-1'
150612 23:25:55 [Note] WSREP: Read nil XID from storage engines, skipping position init
150612 23:25:55 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/libgalera_smm.so'
150612 23:25:55 [Note] WSREP: wsrep_load(): Galera 2.12(r318911d) by Codership Oy <info@codership.com> loaded successfully.
150612 23:25:55 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
150612 23:25:55 [Note] WSREP: Reusing existing '/var/lib/mysql//galera.cache'.
150612 23:25:55 [Note] WSREP: Passing config to GCS: base_host = 172.21.12.11; base_port = 4567; cert.log_conflicts = no; debug = no; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.npvo = false; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = PT30S; pc.weight = 1; protonet.backend = asio; pr
150612 23:25:55 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
150612 23:25:55 [Note] WSREP: wsrep_sst_grab()
150612 23:25:55 [Note] WSREP: Start replication
150612 23:25:55 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
150612 23:25:55 [Note] WSREP: protonet asio version 0
150612 23:25:55 [Note] WSREP: backend: asio
150612 23:25:55 [Note] WSREP: GMCast version 0
150612 23:25:55 [Note] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
150612 23:25:55 [Note] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
150612 23:25:55 [Note] WSREP: EVS version 0
150612 23:25:55 [Note] WSREP: PC version 0
150612 23:25:55 [Note] WSREP: gcomm: connecting to group 'Percona-XtraDB-Cluster', peer '107.xx.xxx.xx:,54.xxx.xx.xxx:,172.21.12.11:'
150612 23:25:55 [Warning] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') address 'tcp://172.21.12.11:4567' points to own listening address, blacklisting
150612 23:25:55 [Note] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.147.174.188:4567
150612 23:25:55 [Note] WSREP: declaring 919c045d at tcp://10.147.174.188:4567 stable
150612 23:25:55 [Note] WSREP: declaring af980804 at tcp://10.154.176.220:4567 stable
150612 23:25:56 [Note] WSREP: Node 919c045d state prim
150612 23:25:57 [Note] WSREP: gcomm: connected
150612 23:25:57 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
150612 23:25:57 [Note] WSREP: Shifting CLOSED → OPEN (TO: 0)
150612 23:25:57 [Note] WSREP: Opened channel 'Percona-XtraDB-Cluster'
150612 23:25:57 [Note] WSREP: Waiting for SST to complete.
150612 23:25:57 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 3
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: sent state msg: ab12c5d2-118c-11e5-a9f3-cf551b1e5734
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: got state msg: ab12c5d2-118c-11e5-a9f3-cf551b1e5734 from 0 (db2)
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: got state msg: ab12c5d2-118c-11e5-a9f3-cf551b1e5734 from 2 (db1)
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: got state msg: ab12c5d2-118c-11e5-a9f3-cf551b1e5734 from 1 (db3)
150612 23:25:57 [Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 2,
members = 2/3 (joined/total),
act_id = 35909744,
last_appl. = -1,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = d3538475-b069-11e3-92aa-c323b5f792a4
150612 23:25:57 [Note] WSREP: Flow-control interval: [28, 28]
150612 23:25:57 [Note] WSREP: Shifting OPEN → PRIMARY (TO: 35909744)
150612 23:25:57 [Note] WSREP: State transfer required:
Group state: d3538475-b069-11e3-92aa-c323b5f792a4:35909744
Local state: 00000000-0000-0000-0000-000000000000:-1
150612 23:25:57 [Note] WSREP: New cluster view: global state: d3538475-b069-11e3-92aa-c323b5f792a4:35909744, view# 3: Primary, number of nodes: 3, my index: 1, protocol version 2
150612 23:25:57 [Note] WSREP: closing client connections for protocol change 3 → 2
150612 23:25:58 [Note] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') turning message relay requesting off
150612 23:25:59 [Warning] WSREP: Gap in state sequence. Need state transfer.
150612 23:25:59 [Note] WSREP: Running: 'wsrep_sst_xtrabackup --role 'joiner' --address '172.21.12.11' --auth 'sstuser:zASDF@7890@yhn' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '25936''
WSREP_SST: [INFO] Streaming with tar (20150612 23:25:59.618)
WSREP_SST: [INFO] Using socat as streamer (20150612 23:25:59.624)
WSREP_SST: [INFO] Stale sst_in_progress file: /var/lib/mysql//sst_in_progress (20150612 23:25:59.636)
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | tar xfi - --recursive-unlink -h; RC=( ${PIPESTATUS[@]} ) (20150612 23:25:59.664)
150612 23:25:59 [Note] WSREP: Prepared SST request: xtrabackup|172.21.12.11:4444/xtrabackup_sst
150612 23:25:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
150612 23:25:59 [Note] WSREP: Assign initial position for certification: 35909744, protocol version: 2
150612 23:25:59 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (d3538475-b069-11e3-92aa-c323b5f792a4): 1 (Operation not permitted)
at galera/src/replicator_str.cpp:prepare_for_IST():447. IST will be unavailable.
150612 23:25:59 [Note] WSREP: Node 1 (db3) requested state transfer from 'any'. Selected 0 (db2)(SYNCED) as donor.
150612 23:25:59 [Note] WSREP: Shifting PRIMARY → JOINER (TO: 35909744)
150612 23:25:59 [Note] WSREP: Requesting state transfer: success, donor: 0

150612 23:27:04 [Warning] WSREP: 0 (db2): State transfer to 1 (db3) failed: -1 (Operation not permitted)
150612 23:27:04 [ERROR] WSREP: gcs/src/gcs_group.cpp:long int gcs_group_handle_join_msg(gcs_group_t*, const gcs_recv_msg_t*)():717: Will never receive state. Need to abort.
150612 23:27:04 [Note] WSREP: gcomm: terminating thread
150612 23:27:04 [Note] WSREP: gcomm: joining thread
150612 23:27:04 [Note] WSREP: gcomm: closing backend
150612 23:27:04 [Note] WSREP: gcomm: closed
150612 23:27:04 [Note] WSREP: /usr/sbin/mysqld: Terminated.
Aborted (core dumped)
150612 23:27:04 mysqld_safe mysqld from pid file /var/lib/mysql/novomedici.pid ended
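Since the SST script on the joiner runs socat TCP-LISTEN:4444 (see above), I assume the donor must be able to open port 4444 on the in-house server's public NAT address. A quick reachability check that can be run from each AWS node (a sketch; 203.0.113.10 is a placeholder for our real public IP):

```shell
#!/usr/bin/env bash
# Check whether the joiner's SST port is reachable from a donor node.
# 203.0.113.10 stands in for the in-house server's public NAT address.
check_port() {
  local host=$1 port=$2
  # bash's /dev/tcp pseudo-device attempts a TCP connection to host:port
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null
}

if check_port 203.0.113.10 4444; then
  echo "SST port reachable"
else
  echo "SST port blocked or filtered"
fi
```

If this fails, the NAT/firewall would also need to forward 4444 (SST) as well as 4567 (group communication) and 4568 (IST) through to 172.21.12.11.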

FYI, the Galera version used on the AWS servers is …

| wsrep_provider_version | 2.8(r165) |

and on the in-house server, as you can see above, it is Galera 2.12(r318911d).

Could this version mismatch be the problem??
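For reference, the provider version on each node can be compared with the standard wsrep status variables (run on every node; the values should ideally match across the cluster, donor and joiner especially):

```sql
-- Galera provider version on this node
SHOW STATUS LIKE 'wsrep_provider_version';
-- Current node state (Synced, Joiner, Donor, ...)
SHOW STATUS LIKE 'wsrep_local_state_comment';
```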

Thank you everyone, and sorry if I posted this in the wrong place … hope not.

Ed