XtraDB not cleaning up galera.cache pages

We are seeing a strange issue where XtraDB on one of our 3 cluster nodes simply stopped using the galera.cache file and started creating gcache.page.xxxxx files at a rate of about one file every 5 minutes.
Looking at the wsrep% status variables, wsrep_gcache_pool_size is now 113021942801 (roughly 105 GB), even though it is supposedly capped at 64 GB in my.cnf.
All the other nodes seem to honor the cap. The node in question is synced and all the transactions are replicated to it.
The wsrep_local_cached_downto value is extremely low, the lowest in the entire cluster:

| wsrep_local_cached_downto | 83655 |

One of the other nodes:
| wsrep_local_cached_downto | 62524343 |
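For reference, the values above can be compared across nodes with the standard status query (these are the stock Galera/PXC status variable names):

```sql
-- Run on each node to compare gcache state.
-- wsrep_gcache_pool_size: total bytes held by the gcache, including any
--   spilled gcache.page.* files, which is why it can exceed gcache.size.
-- wsrep_local_cached_downto: lowest global seqno still held in the gcache;
--   a value far lower than the peers' means this node is retaining far
--   more replication history (consistent with page files piling up).
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('wsrep_gcache_pool_size', 'wsrep_local_cached_downto');
```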

Anyone seen anything like that?

All the nodes are:
Server version: 5.6.34-79.1-56-log Percona XtraDB Cluster (GPL), Release rel79.1, Revision 7c38350, WSREP version 26.19, wsrep_26.19

See this bug report: https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1488530
Basically it can happen if you have big transactions affecting a relatively small number of rows, for example writes to [LONG]BLOB/TEXT columns.
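As an illustration (the table and column names are hypothetical), a transaction like this touches only one row but produces a writeset far larger than a typical row change, because the entire blob value is replicated and has to be held in the gcache:

```sql
-- Illustrative only: a single-row update carrying a huge value.
-- The writeset contains the full new LONGBLOB contents; if it does not
-- fit in the gcache ring buffer, Galera spills it into gcache.page.*
-- files on disk.
UPDATE documents
SET body = <very large blob value>
WHERE id = 1;
```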

This blog post may also help you: https://www.percona.com/blog/2016/11/16/all-you-need-to-know-about-gcache-galera-cache/

I have seen this bug report. This is not it. According to Alexander Gubanov, it has been fixed since Server version: 5.6.23-1~u14.04+mos1 (Ubuntu), wsrep_25.10, as per the comments in that report.

I am afraid it wasn’t fixed entirely in that version.
Please test the gcache.keep_pages_count + gcache.keep_pages_size combination, set on all cluster nodes, to see if that helps.
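Something along these lines in my.cnf on every node (the values are illustrative, not recommendations; gcache.keep_pages_count is the PXC-specific parameter referenced in the release notes linked below):

```ini
# Example wsrep_provider_options (set identically on ALL cluster nodes).
# gcache.size             - size of the on-disk ring buffer (galera.cache)
# gcache.keep_pages_size  - total size of gcache.page.* files to retain
# gcache.keep_pages_count - number of gcache.page.* files to retain (PXC)
[mysqld]
wsrep_provider_options = "gcache.size=64G;gcache.keep_pages_size=1G;gcache.keep_pages_count=1"
```

Note that wsrep_provider_options is a single semicolon-separated string, so these need to be merged into any options you already set there.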
Please also read the last update in https://www.percona.com/doc/percona-xtradb-cluster/5.6/release-notes/Percona-XtraDB-Cluster-5.6.35-26.20-3.html