HAProxy configuration to prevent writes to multiple nodes with Percona Cluster

As recommended with Percona Cluster, we are trying to write to only one cluster node at a time. The application uses JDBC connection pooling. Whenever there is a flap in the service, we seem to end up with writes going to multiple nodes, followed by cluster deadlocks / local certification errors. We've improved this a little by changing our configuration to 'stick on dst' instead of 'stick on src'.
Below is the configuration. Any suggestions? Should we not be using sticky sessions?
global
    log 127.0.0.1 local0
    maxconn 4960
    #debug
    #quiet
    user haproxy
    group haproxy
    stats socket /var/run/haproxy-stats uid haproxy mode 770
    stats maxconn 10
    noepoll

defaults
    log global
    option dontlognull
    retries 2
    option redispatch
    maxconn 2000
    timeout connect 4s
    timeout client 1800s
    timeout server 1800s

peers hapeers
    peer xxxxxxx yyyyyy:1024
    peer aaaaaa bbbbbb:1024

frontend percona_cluster
    bind 0.0.0.0:3306
    default_backend percona_cluster

backend percona_cluster
    mode tcp
    option tcpka
    option tcplog
    option mysql-check
    stick-table type ip size 1 peers hapeers nopurge
    stick on dst
    server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
    server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
    server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www

# application listeners are configured via JSON

listen ssl_cert
    bind 0.0.0.0:443 ssl crt /etc/haproxy/haproxy.pem no-sslv3
    balance roundrobin
    stick-table type ip size 200k peers hapeers expire 30m
    mode http
    stats enable
    stats scope .
    stats uri /haproxy-hp?stats

Set up another HAProxy frontend and backend on port 3307 and put two of the nodes in "backup" mode, along the lines of the sketch below.
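
A rough sketch of what that write listener could look like (node names, addresses, and the check port are placeholders mirroring the config above):

    # Write listener: all traffic goes to node1; node2 and node3 only
    # take over if node1 fails its health check.
    listen percona_cluster_write
        bind 0.0.0.0:3307
        mode tcp
        option httpchk
        server node1 xxxxxxx:3306 check port 9200 inter 12000 rise 3 fall 3
        server node2 xxxxxxx:3306 check port 9200 inter 12000 rise 3 fall 3 backup
        server node3 xxxxxxx:3306 check port 9200 inter 12000 rise 3 fall 3 backup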

You'll need to set up another data source in whatever ORM you're using, and make sure all the SELECT / non-writing queries in your application go to port 3306, while all the write queries (update, delete, etc.) go to the 3307 data source.

This way you effectively write to only one node; if the first node becomes unavailable, HAProxy starts writing to the backup node instead. Meanwhile the read load is spread evenly over all three nodes (on port 3306).

You should also set up HAProxy to check the status of the cluster with this tool:
https://github.com/olafz/percona-clustercheck
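
For reference, clustercheck is meant to be exposed over HTTP (its README wires it to port 9200 via xinetd) and returns 200 only while the node reports wsrep_local_state = 4 (Synced). That means a backend pairing "check port 9200" with "option mysql-check", as in the config above, will be sending a MySQL-protocol probe at an HTTP responder; the check should be an HTTP check instead. Roughly:

    backend percona_cluster
        mode tcp
        # clustercheck answers on 9200 over HTTP, so use an HTTP check
        # (not mysql-check) together with "check port 9200".
        option httpchk
        server node1 xxxxxxx:3306 check port 9200 inter 12000 rise 3 fall 3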

Currently the team we work with doesn't split reads and writes.

The issue I noticed when using two of the instances tagged as backup is that when a failover happens, the connection pool keeps writing to the backup node even after the primary node is back.
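
One thing that may help here, assuming HAProxy 1.5 or newer: the server option "on-marked-up shutdown-backup-sessions" on the primary server line tells HAProxy to kill the sessions still held open on the backup servers when the primary comes back up, which forces the pooled connections to reconnect and land on the primary again. A sketch with placeholder names:

    listen percona_cluster_write
        bind 0.0.0.0:3307
        mode tcp
        option httpchk
        # When node1 transitions back to UP, drop any sessions still
        # pinned to the backup servers so the pool reconnects to node1.
        server node1 xxxxxxx:3306 check port 9200 inter 12000 rise 3 fall 3 on-marked-up shutdown-backup-sessions
        server node2 xxxxxxx:3306 check port 9200 inter 12000 rise 3 fall 3 backup
        server node3 xxxxxxx:3306 check port 9200 inter 12000 rise 3 fall 3 backup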

We are also using the latest clustercheck script.