An existing connection was forcibly closed by the remote host

Hello. I have a 3-node multi-master cluster across three datacenters: Frankfurt, Amsterdam, and New York. There are also three Photon Server instances, one in each location, and they connect to the database through Entity Framework 6.2. Each server instance sends its queries to the nearest node (NY to NY, FR to FR, AMS to AMS). The Frankfurt and Amsterdam Photon instances have no issues, but every second or third day the New York instance throws this exception:
System.Data.Entity.Infrastructure.CommitFailedException:
An error was reported while committing a database transaction but it could not be determined whether the transaction succeeded or failed on the database server.
See the inner exception and [url] for more information.
---> MySql.Data.MySqlClient.MySqlException: Fatal error encountered during command execution.
---> MySql.Data.MySqlClient.MySqlException: Fatal error encountered attempting to read the resultset.
---> MySql.Data.MySqlClient.MySqlException: Reading from the stream has failed.
---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host.
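
For reference, this is roughly how the EF side can be told to handle such an ambiguous commit: EF 6.1+ lets you register a transaction handler (CommitFailureHandler) that records commits so a CommitFailedException can be resolved, plus an execution strategy that retries transient failures. This is only a sketch, not what I currently run; MyDbConfiguration and MyRetryStrategy are illustrative names, and treating every MySqlException as transient is an assumption.

[CODE]
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using MySql.Data.MySqlClient;

// Illustrative EF6 code-based configuration (picked up automatically when it
// lives in the same assembly as the DbContext).
public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        // Record commits in a tracking table so EF can check whether an
        // interrupted COMMIT actually succeeded instead of surfacing a
        // CommitFailedException straight away.
        SetTransactionHandler("MySql.Data.MySqlClient", () => new CommitFailureHandler());

        // Retry commands that fail with errors we consider transient.
        SetExecutionStrategy("MySql.Data.MySqlClient", () => new MyRetryStrategy());
    }
}

// Minimal retry strategy: up to 5 attempts with at most ~10 s between them.
public class MyRetryStrategy : DbExecutionStrategy
{
    public MyRetryStrategy() : base(5, TimeSpan.FromSeconds(10)) { }

    protected override bool ShouldRetryOn(Exception ex)
    {
        // Assumption: dropped-connection errors surfaced as MySqlException
        // (or timeouts) are worth retrying.
        return ex is MySqlException || ex is TimeoutException;
    }
}
[/CODE]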
There are no differences between the Photon Server instances. Most of the users connect to the New York Photon server, so the NY node is the most loaded node in the cluster. Maybe my node settings are wrong? The average number of concurrent users is around 1000. The general specs of my NY node are:

CPU(s): 2
Model name: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
RAM: 4GB

Percona cluster settings:

wait_timeout=60
interactive_timeout=60
max_connections=2048
wsrep_sst_method=xtrabackup-v2
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_slave_threads=8
wsrep_retry_autocommit=1
wsrep_provider_options=base_dir = /var/lib/mysql/; base_host = <HOST_IP>; base_port =<BASE_PORT>; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.causal_keepalive_period = PT1S; evs.debug_log_mask = 0x1; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.info_log_mask = 0; evs.install_timeout = PT7.5S; evs.join_retrans_period = PT1S; evs.keepalive_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 10; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.use_aggregate = true; evs.user_send_window = 4; evs.version = 0; evs.view_forget_timeout = P1D; gcache.dir = /var/lib/mysql/; gcache.freeze_purge_at_seqno = -1; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = no; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 100; gcs.fc_master_slave = no; gcs.max_packet_size = 6
thread_pool_size=2
query_cache_size=1048576
query_cache_limit=<QUERY_CACHE_LIMIT>
open_files_limit=<OPEN_FILES_LIMIT>

So the question is: could this be a bug in Entity Framework, a network issue, or are my settings wrong for this use case? Thanks.
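
One interaction I cannot rule out (purely an assumption, not something I have verified): with wait_timeout=60 the server drops connections that sit idle for more than a minute, while Connector/NET pooling keeps them open on the client side, so EF may occasionally be handed an already-dead connection. A minimal sketch of capping the pooled connection lifetime below wait_timeout; server name and credentials are placeholders:

[CODE]
using MySql.Data.MySqlClient;

// Sketch only: keep pooled connections younger than the server-side
// wait_timeout (60 s in my config) so a connection already killed by the
// server is not reused. Host and credentials are placeholders.
var csb = new MySqlConnectionStringBuilder
{
    Server = "<NY_NODE_IP>",
    Database = "<DB_NAME>",
    UserID = "<USER>",
    Password = "<PASSWORD>",
    Pooling = true,
    ConnectionLifeTime = 50   // seconds; below wait_timeout=60
};

using (var connection = new MySqlConnection(csb.ConnectionString))
{
    connection.Open();
    // ... hand csb.ConnectionString to the DbContext as usual ...
}
[/CODE]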

Hi, seri52, thanks for your question.

Could you post a few more details:
[LIST]
[*]version of Percona XtraDB Cluster
[*]version of the OS
[*]any error logs, if there are any, please post them
[*]the my.cnf for each node of the cluster
[*]any other information that you think might be relevant
[/LIST] Thanks!

Hello!
Thanks for the reply! I solved this problem a long time ago :smiley:
I just forgot about my question on the forum, sorry!