I am currently running a 5-node XtraDB Cluster in AWS US-EAST, with the nodes spread across availability zones. One of the nodes is set up as a traditional master and replicates to AWS US-WEST as part of our DR plan. This setup is working fine; however, we would like that server to be a full member of the cluster rather than just a slave. Replication currently runs over an SSH tunnel from EAST to WEST.
What would you recommend as the best practice for setting up a cluster node in a different region? Would you use SSL or some other method? This node will not receive reads or writes unless we completely lose everything on the east coast. For it to function properly, I am assuming it will need a gcomm:// cluster address listing all of the east-coast nodes (something like the sketch below), but doing that over SSH tunnels isn't practical. Any advice would be greatly appreciated.
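For reference, this is roughly what I have in mind for the west-coast node's my.cnf; the hostnames below are placeholders, not our real names:

# wsrep settings on the us-west node, listing the five us-east nodes plus itself
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name=prod_cluster
wsrep_cluster_address=gcomm://node1-east.example.com,node2-east.example.com,node3-east.example.com,node4-east.example.com,node5-east.example.com,node1-west.example.com
wsrep_node_name=node1-west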
Thanks,
Nick
Hi Nick,
Adding that distant slave as a member of the cluster will mean a much higher commit time than you’re currently enjoying. Also, you’ll need to manually bootstrap it if your primary region fails because it will be in a quorum minority.
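If you do ever have to fail over, the manual bootstrap on the surviving west-coast node is a single step. A rough sketch, assuming a Galera version that supports pc.bootstrap:

-- on the west-coast node, only after confirming us-east is really gone:
SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';
-- or, if mysqld is down, start it as a new cluster instead:
-- service mysql start --wsrep-new-cluster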
That being said, I don't believe there is a private internal network between EC2 regions, so you'll have to use the public interfaces. I believe the trick on EC2 is to use the public DNS names for the nodes, which resolve to the internal address from inside the region and to the public address from outside (this is from memory, so bear with me). You may also need to tweak the SST and IST receive addresses. I'd need to redo one of these setups to refresh my memory; it's been a while.
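If the receive addresses do need pinning, it would look something like this in the remote node's my.cnf (option names from memory, so check them against the docs; the hostname is a placeholder and the ports are the usual defaults):

# where donors should send SST traffic for this node (4444 is the default SST port)
wsrep_sst_receive_address=node1-west.example.com:4444
# where IST traffic should be received (4568 is the default IST port)
wsrep_provider_options="ist.recv_addr=node1-west.example.com:4568"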
For encryption, you can just use the SSL support Galera already has: [url]http://www.mysqlperformanceblog.com/2013/05/03/percona-xtradb-cluster-for-mysql-and-encrypted-galera-replication/[/url] Note that this covers Galera replication and IST; SSL for SST is not supported until the 5.5.33 release: [url]Percona XtraDB Cluster
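The Galera-side SSL from that post boils down to pointing the provider at a certificate and key pair that you distribute to every node; the paths here are just examples:

# same cert/key pair on every node; encrypts group communication and IST
wsrep_provider_options="socket.ssl_cert=/etc/mysql/galera-cert.pem;socket.ssl_key=/etc/mysql/galera-key.pem"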
Thanks, Jay. I basically had the same idea: assign an EIP to each node instance and set up permanent DNS names. All the cluster communication would need to happen over the public network, but the internal application traffic could stay on private addresses. Looks like we will have to upgrade, as we are currently running…
Server version: 5.5.30-30.2 Percona Server (GPL), Release 30.2, wsrep_23.7.4.r3843
To improve performance and address locking issues across AZs, I added these settings:
wsrep_provider_options="gcs.fc_limit=500; gcs.fc_master_slave=YES; gcs.fc_factor=1.0"
Would you recommend any other options for improving commit times?