I’m following a tutorial for a WAN setup of PXC 8.0 on AWS. The primary node bootstrapped without issue; however, I run into problems when trying to add a new node to this newly created one-node cluster. Environment: AWS, CentOS 7, PXC 8.0.
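For reference, I’m bootstrapping and joining with the standard PXC 8.0 systemd units, along these lines:

    # On the first node: bootstrap the one-node cluster
    systemctl start mysql@bootstrap.service

    # On the new node: normal start, which should trigger an SST from the donor
    systemctl start mysql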
Verify that port 4567 is open between your networks. AWS security groups are VERY restrictive by default. Ensure 3306 (MySQL client), 4444 (SST), and 4567 (Galera group communication) are open between your VPCs.
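A quick way to check this from the prospective joiner is a zero-I/O connect test; the IP below is a placeholder for the donor node’s private address:

    # Probe each PXC port on the donor (replace 10.0.0.10 with its private IP)
    for p in 3306 4444 4567; do nc -zv 10.0.0.10 $p; done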
Your iptables rules are out of order. Line #5 says to reject everything, and line #5 evaluates BEFORE lines 7, 8, and 9. I would simply flush all iptables rules (iptables -F) and use only AWS security groups. Using both can cause networking issues because you are blocking traffic in multiple places. Additionally, I would change your security groups to allow ALL traffic from each VPC to the other. Do this just to make sure things work; then you can start being more restrictive.
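For example:

    # Show the current INPUT rules with their evaluation order
    iptables -L INPUT -n --line-numbers

    # Flush all rules so that only the AWS security groups apply
    iptables -F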
Matthew, I took care of the iptables. However, I don’t see how to allow one VPC to reach the other on AWS. Can you please give an example? There are only three options, and of them I chose “Anywhere”, which translates to 0.0.0.0/0 (IPv4) and ::/0 (IPv6).
IIRC, the security group should let you pick the source/destination as the security group to which your VMs are currently assigned. See the example for ‘All traffic’ below.
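In AWS CLI terms it looks something like this; both security-group IDs here are placeholders:

    # Allow all traffic into sg-11111111 from instances assigned to sg-22222222
    aws ec2 authorize-security-group-ingress \
        --group-id sg-11111111 \
        --protocol=-1 \
        --source-group sg-22222222

Note that referencing a security group in another VPC requires the VPCs to be peered.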
I’m afraid I can’t help you any more than this without access to your AWS account to do some troubleshooting. The issue remains that your nodes cannot talk to each other due to networking problems. I would recommend launching new nodes in the SAME VPC and confirming you can get it working that way before attempting a multi-VPC setup.
Matthew, I’ve already tried the same VPC and it works, but that’s not what I’m after. I want a geo-distributed app and thus a cluster that spans VPCs. My AWS account is on the free tier while I’m testing this, so I have no issue giving you access. Thanks
Ok. So if it worked just fine with all nodes in the same VPC but does not work with the nodes in different VPCs, you absolutely have either A) security group issues, or B) VPC-to-VPC networking issues. To rule out B, configure a “completely open” security group for both VPCs, then attempt to SSH from a server in VPC1 to a server in VPC2. If that doesn’t work, that’s the root of your issues, and you would need to fix the VPC-to-VPC connectivity (e.g., VPC peering and route table entries).
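For instance, from a node in VPC1 (the IP is a placeholder for a VPC2 private address):

    # Verbose SSH attempt to a server in VPC2
    ssh -v centos@10.1.0.10

If this hangs or times out, fix the VPC-to-VPC routing before touching PXC again.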