Setting up a PXC WAN on AWS: Replication issues

I’m trying to follow a tutorial for a WAN setup of PXC 8.0 on AWS. The primary node bootstrapped without issue; however, I run into problems when trying to add a new node to this newly created one-node cluster.
Environment: the setup is in AWS, the OS is CentOS 7, and I’m using PXC 8.0.

 Primary node: 

Joining node:

mysqld.log file:
I’ve spent weeks trying to set up this WAN on AWS and for some reason I just can’t. If someone has done this before on AWS, please advise.

Verify that port 4567 is open between your networks. AWS security groups are VERY restrictive by default. Ensure 3306, 4444, and 4567 are open between your VPCs.
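A quick way to verify those ports from each node is a small connectivity check. This is just a sketch: the peer IP is a placeholder, it assumes nc (netcat) is installed, and note that Galera also uses 4568 for IST, which is easy to forget:

```shell
#!/bin/sh
# Report whether a TCP port on a host accepts connections (3 s timeout).
# Requires nc (netcat) to be installed.
check_port() {
    nc -z -w 3 "$1" "$2" >/dev/null 2>&1 && echo "open" || echo "blocked"
}

# Run from each node against the other node's private IP (placeholder here).
# 3306 = MySQL, 4444 = SST, 4567 = Galera group communication, 4568 = IST.
#
#   for p in 3306 4444 4567 4568; do
#       echo "$p: $(check_port 10.1.0.10 $p)"
#   done
```

If any port reports blocked, that is the security group (or iptables) rule to fix first.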

Unless there’s another way to set this up on AWS, here is the existing setup:
Primary Node:

Here are that node’s iptables rules:

Secondary Node:

Here are that node’s iptables rules:

Please advise if there’s something wrong in these settings.

Your iptables rules are out of order. Rule #5 rejects everything, and it is evaluated BEFORE rules 7, 8, and 9. I would simply flush all iptables rules (iptables -F) and rely only on AWS security groups; using both can cause networking issues because you are blocking traffic in multiple places. Additionally, I would change your security groups to allow ALL traffic from each VPC to the other. Do this just to make sure things work; then you can start being more restrictive.
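As a sketch of the above, run as root on each node (this assumes CentOS 7 with the iptables-services package; adjust for your setup):

```shell
# Inspect the current INPUT chain with its evaluation order, so you can
# see the reject-everything rule sitting above the allow rules.
iptables -L INPUT -n --line-numbers

# Flush ALL iptables rules so filtering is left entirely to AWS
# security groups.
iptables -F

# Persist the empty ruleset across reboots (CentOS 7 with
# iptables-services; assumption, adapt if you use firewalld instead).
service iptables save
```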

I took care of the iptables.
However, I don’t see how to allow traffic from one VPC to the other in AWS. Can you please give an example? There are only three options, and of those I chose “Anywhere”, which translates to 0.0.0.0/0 (IPv4) and ::/0 (IPv6).


IIRC, the security group should let you pick the source/destination as the security group your VMs are currently assigned to. See the example for ‘All traffic’ below.
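For reference, the same “All traffic from the peer security group” rule can be added from the AWS CLI. This is a sketch: both group IDs are placeholders, and to my knowledge referencing a security group that lives in another VPC only works when the two VPCs are peered:

```shell
# Allow ALL traffic (protocol -1 = all) into this node's security group
# from the other node's security group. Group IDs are placeholders;
# cross-VPC SG references require VPC peering.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaaaaaaaaaaaaaa \
    --protocol -1 \
    --source-group sg-0bbbbbbbbbbbbbbb
```

Repeat with the IDs swapped so each side accepts traffic from the other.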

I set the SG as indicated, but the problem is still there. In the mysqld.log of the Primary Node, I see these lines:

In the Joining node, the same errors as before:

I’m afraid I can’t help you any further without access to your AWS account to do some troubleshooting. The issue remains that your nodes cannot talk to each other due to networking problems. I would recommend launching new nodes in the SAME VPC and confirming you can get it working that way before attempting multi-AZ.

I’ve already tried the same VPC and it works, but that’s not what I’m after: I want a geo-distributed app and thus a multi-AZ cluster. My AWS account is on the free tier while I test this, so I have no issue giving you access.

OK. So if it worked just fine with all nodes in the same VPC, but it does not work with the nodes in different VPCs, you almost certainly have either A) security group issues, or B) VPC-to-VPC networking issues. To rule out B, configure a “completely open” security group for both VPCs, then attempt to SSH from a server in VPC1 to a server in VPC2. If that doesn’t work, that’s the root of your issue and you need to fix the VPC-to-VPC connectivity.
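The SSH check above could look like this; the IP, user, and key are placeholders for your own values:

```shell
# From a host in VPC1, verify you can even reach a host in VPC2.
# 10.1.0.10 is a placeholder private IP; adjust the user/key for your AMI.
ssh -o ConnectTimeout=5 ec2-user@10.1.0.10 'echo reachable'
```

If this times out with the open security groups in place, the problem is VPC-to-VPC routing (peering connection, route tables), not PXC.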