Strange performance numbers

We are moving a database from a single Percona master/slave setup to a dual-data-center PXC setup. I did some benchmarks today that seem strange. They seem to indicate that read performance on a local PXC node is as bad as (or worse than) reading between data centers, which doesn’t make sense to me.

Here’s my methodology:

I took four read queries from the slow-query log on a Percona MySQL server and wrote a script that clears the query cache and runs all four queries 10 times in a loop. So each iteration resets the query cache and runs the four long queries, and the loop repeats 10 times.
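For reference, the loop has roughly this shape (a simplified sketch; the hostname and query-file names are placeholders, not my actual script):

#!/bin/bash
# Simplified sketch of the benchmark loop; run under `time ./benchmark.sh`.
HOST=db.example.com            # placeholder for the target server
for i in $(seq 1 10); do
    # Clear the query cache so every iteration hits the storage engine
    mysql -h "$HOST" -e "RESET QUERY CACHE"
    # Run the four slow queries captured from the slow-query log
    for q in query1.sql query2.sql query3.sql query4.sql; do
        mysql -h "$HOST" < "$q" > /dev/null
    done
done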

First I ran the script in our first data center, talking to a Percona MySQL master in the same data center over the LAN. The times were:

real 0m32.881s
user 0m0.319s
sys 0m0.147s

real 0m31.964s
user 0m0.321s
sys 0m0.148s

real 0m32.101s
user 0m0.318s
sys 0m0.137s

Then I copied the scripts to our other data center and ran them again, still pointing to that same Percona MySQL master in the first data center. The purpose here was to measure the effect of network latency on read operations across the internet. The times were:

real 0m42.846s
user 0m0.816s
sys 0m0.279s

real 0m43.470s
user 0m0.680s
sys 0m0.275s

real 0m43.072s
user 0m0.727s
sys 0m0.283s

So network latency adds about 10-11 seconds to the loop for these read operations: roughly 11 seconds spread across 40 queries, or about 0.25-0.3 seconds of overhead per query.

I have a PXC cluster set up with two nodes in each data center. All four nodes are configured exactly the same as the Percona MySQL server, with the same data set. I ran my script locally against a PXC node in the same data center. This should be comparable to the first test above, since the data set is local to the PXC node, and I would have expected the same results as the first test case, because PXC keeps a full copy of the data locally and these are read-only operations.

Instead I saw:

real 0m43.065s
user 0m0.683s
sys 0m0.237s

real 0m44.907s
user 0m0.775s
sys 0m0.271s

real 0m43.689s
user 0m0.755s
sys 0m0.260s

real 0m43.360s
user 0m0.763s
sys 0m0.276s

The performance was worse than reading between data centers.

Any ideas on why?

Both setups (the PXC cluster nodes and the Percona MySQL master) have /tmp/mysqltmp mounted as a RAM drive to speed up temporary-table operations.
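The tmpdir setup is along these lines (the size and options are illustrative, not the exact values from our /etc/fstab and my.cnf):

# /etc/fstab entry mounting a tmpfs RAM disk at /tmp/mysqltmp
tmpfs  /tmp/mysqltmp  tmpfs  rw,size=2G,uid=mysql,gid=mysql,mode=0750  0 0

# my.cnf pointing MySQL temporary tables at the RAM disk
[mysqld]
tmpdir = /tmp/mysqltmp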

Software on the PXC node was:
Percona-XtraDB-Cluster-client-5.5.29-23.7.2.389.rhel6.x86_64
Percona-XtraDB-Cluster-galera-2.0-1.143.rhel6.x86_64
Percona-XtraDB-Cluster-shared-5.5.29-23.7.2.389.rhel6.x86_64
Percona-XtraDB-Cluster-server-5.5.29-23.7.2.389.rhel6.x86_64
Percona-Server-shared-compat-5.5.31-rel30.3.520.rhel6.x86_64

On the MySQL Master:

Percona-Server-shared-55-5.5.21-rel25.1.234.rhel6.x86_64
Percona-Server-server-55-5.5.21-rel25.1.234.rhel6.x86_64
Percona-Server-shared-compat-5.5.21-rel25.1.234.rhel6.x86_64
Percona-Server-client-55-5.5.21-rel25.1.234.rhel6.x86_64

Any thoughts or insight would be helpful.

sean

A little more strange info: using top, the VIRT memory footprint of mysqld on the Percona MySQL 5.5 server is 11.0G. On the PXC cluster nodes (I have checked two of them now), the VIRT column shows 42.5G and 42.6G respectively. mysqltuner.pl on all machines returns the same:

-------- Performance Metrics -------------------------------------------------
[--] Up for: 77d 0h 58m 7s (399K q [0.060 qps], 185K conn, TX: 625M, RX: 20M)
[--] Reads / Writes: 100% / 0%
[–] Total buffers: 8.9G global + 14.2M per thread (800 max threads)
[!!] Maximum possible memory usage: 20.0G (124% of installed RAM)
[OK] Slow queries: 0% (1/399K)
[OK] Highest usage of available connections: 1% (11/800)
[OK] Key buffer size / total MyISAM indexes: 512.0M/296.4M
[OK] Key buffer hit rate: 100.0% (76K cached / 1 reads)
[!!] Query cache efficiency: 0.0% (0 cached / 249 selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 10 sorts)
[OK] Temporary tables created on disk: 0% (10 on disk / 177K total)
[OK] Thread cache hit rate: 99% (11 created / 185K connections)
[OK] Table cache hit rate: 81% (30 open / 37 opened)
[OK] Open file limit used: 0% (21/21K)
[OK] Table locks acquired immediately: 100% (126 immediate / 126 locks)
[!!] Connections aborted: 28%
[OK] InnoDB data size / buffer pool: 5.3G/8.0G

-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
Reduce your overall MySQL memory footprint for system stability
Your applications are not closing MySQL connections properly
Variables to adjust:
*** MySQL’s maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
query_cache_limit (> 2M, or use smaller result sets)

Any thoughts or ideas? I’m wondering if there is a PXC-specific setting I’m missing.
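One thing I can try, to rule out a plain configuration difference, is to dump and diff the global variables on the two servers, along these lines (hostnames are placeholders):

# Dump global variables from the Percona master and a PXC node, then compare
mysql -h percona-master.example.com -e "SHOW GLOBAL VARIABLES" > master_vars.txt
mysql -h pxc-node1.example.com -e "SHOW GLOBAL VARIABLES" > pxc_vars.txt
diff master_vars.txt pxc_vars.txt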

sean