PXC 5.6 Memory leak?

Hi, we have a 5-node cluster:

Server version: 5.6.34-79.1-56-log Percona XtraDB Cluster (GPL), Release rel79.1, Revision 7c38350, WSREP version 26.19, wsrep_26.19

Memory usage grows on all of the members and never seems to be released, so we have to restart the database from time to time.

In particular (and the easiest case to analyze), we have one node with few or no connections that still keeps consuming memory and never releases it.
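
For reference, this is roughly how we watch the growth on that node (a minimal shell sketch; the pid-file path is taken from the my.cnf below):

#!/bin/bash
# Log mysqld resident memory (RSS) every 5 minutes.
# Assumes pid-file = /var/run/mysql/mysql.pid, as in the my.cnf below.
PIDFILE=/var/run/mysql/mysql.pid
while true; do
    PID=$(cat "$PIDFILE")
    RSS_KB=$(ps -o rss= -p "$PID")   # resident set size in KB
    echo "$(date '+%F %T') rss_kb=${RSS_KB}"
    sleep 300
done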

Server config

Linux: Linux version 2.6.32-642.11.1.el6.x86_64 (mockbuild@c1bm.rdu2.centos.org)
Build: (gcc version 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC) )
Release : 2.6.32-642.11.1.el6.x86_64
Version : #1 SMP Fri Nov 18 19:25:05 UTC 2016
cpuinfo: model name : Intel(R) Xeon(R) CPU E5506 @ 2.13GHz
cpuinfo: vendor_id : GenuineIntel
cpuinfo: microcode : 21
cpuinfo: cpuid level : 11

Memory Stats ──────────────────────────────────────────────
             RAM       High     Low      Swap
Total MB     7856.8    -0.0     -0.0     8000.0
Free MB      6287.8    -0.0     -0.0     7277.9
Free %        80.0%   100.0%   100.0%    91.0%

Cached  =  579.8 MB                          Active     =  883.9 MB
Buffers =  147.5 MB   Swapcached =    6.7 MB Inactive   =  397.9 MB
Dirty   =    2.3 MB   Writeback  =    0.0 MB Mapped     =  211.7 MB
Slab    =   97.2 MB   Commit_AS  = 5285.5 MB PageTables =   24.4 MB

/etc/my.cnf (the node with few or no connections)

[mysqld_safe]
pid-file = /var/run/mysql/mysql.pid

[mysqld]
user = mysql
port = 3306
socket = /var/lib/mysql/mysql.sock
pid-file = /var/run/mysql/mysql.pid
log-error = /var/log/mysql/mysql.err
datadir = /mysql/data
expire_logs_days = 2

gtid_mode = ON
enforce-gtid-consistency = 1
server-id = 5
#thread_handling = pool-of-threads
#thread_pool_idle_timeout = 60
#thread_pool_high_prio_mode = transactions
#thread_pool_max_threads = 3000
#thread_pool_size = 4
#extra_port = 3307
#extra_max_connections = 2

max_connections = 10000
max_connect_errors = 99999
max_allowed_packet = 16M
skip-host-cache
skip-name-resolve

explicit_defaults_for_timestamp = 1
performance_schema = off
#thread_statistics = on
#userstat = on
#query_response_time_stats = on

transaction_isolation = READ-COMMITTED
log-bin = /mysql/binlog/mysql-bin
slow_query_log = 0
log_output = FILE
long_query_time = 5
event_scheduler = ON
innodb_undo_tablespaces = 3
innodb_undo_directory = /mysql/undo

# *** INNODB Specific options ***

innodb_buffer_pool_size = 2G
innodb_data_file_path = ibdata1:256M:autoextend
innodb_thread_concurrency = 16
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_file_per_table

# *** Galera Cluster Settings ***

binlog_format = ROW
default-storage-engine = innodb
innodb_autoinc_lock_mode = 2
log_slave_updates
query_cache_size = 0
query_cache_type = 0

# wsrep Provider Settings

wsrep_provider = /usr/lib64/galera3/libgalera_smm.so
wsrep_provider_options = "gcache.size=8G;gcache.page_size=128M;cert.log_conflicts=ON;gcs.fc_limit=32"
wsrep_cluster_address = "gcomm://server-05,server-04,server-03,server-02,server-01"
wsrep_cluster_name = "server"
wsrep_node_address = "10.1.3.225"
wsrep_node_name = "server-05"
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = "sstuser:despegar#mysql"
wsrep_node_incoming_address = "server-01:33033"
wsrep_slave_threads = 8
wsrep_notify_cmd = galeranotify.py
wsrep_sst_donor = server-05,server-04,server-03,server-02,server-01
wsrep_log_conflicts = ON
[sst]
progress = 1
time = 1
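
To be sure nothing overrides this file, we read back the values actually in effect roughly like this (a sketch; it assumes the socket path from the config above):

# Effective memory-related settings on the running node.
mysql --socket=/var/lib/mysql/mysql.sock -e "
    SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
    SHOW GLOBAL VARIABLES LIKE 'max_connections';
    SHOW GLOBAL VARIABLES WHERE Variable_name IN
        ('sort_buffer_size','join_buffer_size','read_buffer_size',
         'read_rnd_buffer_size','tmp_table_size','max_heap_table_size');"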

We would like to understand why this is happening.
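
If more data would help, we can collect it; for example, these are the kinds of counters we can sample on the mostly idle node (again a sketch over the same local socket):

# Thread, buffer-pool and replication counters.
mysql --socket=/var/lib/mysql/mysql.sock -e "
    SHOW GLOBAL STATUS LIKE 'Threads_%';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_bytes_data';
    SHOW GLOBAL STATUS LIKE 'wsrep_received%';"
# InnoDB's own memory summary:
mysql --socket=/var/lib/mysql/mysql.sock -e 'SHOW ENGINE INNODB STATUS\G' | grep -A8 'BUFFER POOL AND MEMORY'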

Thanks in advance,

Fernando.