Percona IO read performance vs. other MySQL builds


I’ve been asked to compare performance between Percona Server, Oracle MySQL Enterprise Edition, and MySQL Community Edition. OLTP tests (sysbench) show similar results.
However, I also tried a simple “IO read” test.
On a server with innodb_buffer_pool_size=128M, I created a 1GB table without an index and executed “SELECT SQL_NO_CACHE COUNT(*)” on the table.
On the same machine, Oracle MySQL Enterprise Edition installed from a tarball executed the select in 5 seconds, and a compiled Community Edition also took 5 seconds, while Percona needed 70 seconds.
I repeated the test on several distinct environments: VM/RedHat 6/Percona tarball, VM/CentOS 7/Percona RPM, VM/RedHat 6/compiled Percona, and a physical machine (the one used by our biggest Oracle databases: 40 cores, 512GB RAM, 4 HBAs at 4Gbps) with RedHat 6 and a Percona tarball. Nothing changed: the result always came back after 70-75 seconds on Percona and 5 seconds on the other MySQL builds. I also tried modifying some parameters, without any positive effect.
Obviously, if I increase innodb_buffer_pool_size, performance is better, but that’s not the point of my test.
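For clarity, the whole test boils down to something like this (t1 and its columns are placeholder names, not my actual schema):

```sql
-- my.cnf: innodb_buffer_pool_size = 128M  (much smaller than the table)

-- A ~1GB InnoDB table with no secondary indexes (placeholder schema):
CREATE TABLE t1 (
  id INT NOT NULL,
  filler VARCHAR(255)
) ENGINE=InnoDB;

-- ... load roughly 1GB of rows ...

-- Force a full table scan that has to read from disk:
SELECT SQL_NO_CACHE COUNT(*) FROM t1;
```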

Does anyone have an explanation for this huge difference between Percona and the other MySQL builds? I don’t have any clue and no workaround. I straced Percona and saw a lot of timeouts and unfinished operations, but I didn’t find the root cause.

Thanks for your help.


edit: I conducted additional tests: I increased innodb_buffer_pool_size to just below the table size. The select was faster, but stayed slower than the same select on the same table on the other MySQL forks (Oracle Enterprise Edition or Standard Edition). As soon as innodb_buffer_pool_size was larger than the table size, performance became similar.
Furthermore, I observed something really strange. On my 1GB table, with innodb_buffer_pool_size set to 512MB, the 1st select after a server restart was faster than the 2nd select (43 sec vs 65 sec!). I reproduced it with several innodb_buffer_pool_size values up to the table size; at that point, the 1st select was slower than the 2nd, as expected…
Note that I was not able to reproduce this on Enterprise Edition.
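For anyone trying to reproduce the 1st-vs-2nd-select anomaly, my procedure was essentially the following (t1 is again a placeholder table name):

```sql
-- Set innodb_buffer_pool_size = 512M in my.cnf (smaller than the ~1GB table)
-- and restart the server; on 5.7+ an online resize should also work:
SET GLOBAL innodb_buffer_pool_size = 512 * 1024 * 1024;

-- Then time two consecutive full scans:
SELECT SQL_NO_CACHE COUNT(*) FROM t1;  -- 1st scan (cold buffer pool)
SELECT SQL_NO_CACHE COUNT(*) FROM t1;  -- 2nd scan (pool partially warm)
```

With the pool at 512MB, the 1st (cold) scan was the faster one; only once the pool exceeded the table size did the expected cold-slower/warm-faster pattern appear.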
Does this mean that innodb_buffer_pool_size always has to be larger than the largest table to avoid extreme slowness on physical IO and/or full table scans?
Does Percona have a problem flushing its cache or dealing with tables larger than the cache? If anyone can explain or reproduce this, any help would be appreciated…