more memory = slower performance?


I found the following link, the contents of which are:

"I blogged several times about some testing I have been doing recently. Well, I have run across something that just doesn't make sense. I checked kernels and I/O schedulers and worked out what seems to do best (the 2.6.22 kernel with the deadline scheduler, for those who are curious; I will post results soon). I had done all this with two identical servers that had 8 GB of RAM each. I then moved the 8 GB of RAM from one server to the other, so one server had 16 GB of RAM. I then re-ran the last test, with the only change being that I raised the InnoDB buffer variable from 4 GB to 8 GB.
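For reference, that change would typically be made in my.cnf. A sketch, assuming the "InnoDB buffer variable" the author mentions is MySQL's standard `innodb_buffer_pool_size` (the post doesn't name it explicitly):

```ini
# my.cnf sketch -- assumes the variable in question is innodb_buffer_pool_size
[mysqld]
# before the RAM move: 4 GB
# innodb_buffer_pool_size = 4G
# after the RAM move: 8 GB
innodb_buffer_pool_size = 8G
```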

I would have expected that, if there was any change, it would be faster. I ran querybench five times and averaged the results, as I had done with the previous tests. It was almost exactly 1,500 qps slower than the previous run.

Anyone have any thoughts? I have run the test several times. I am rerunning the tests while closely examining the output of some of the normal profiling tools, top and iostat. It appears to me that the runs are I/O-bound if anything: the iowait percentage runs between 25% and 40+%. Top shows memory usage isn't excessive (30%, or 4.8 GB), and the CPU percentage never tops 100% (so it doesn't even max out one of the eight cores).
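The iowait figure top reports can also be computed directly from `/proc/stat`, which makes it easy to log alongside a benchmark run. A minimal Python sketch, using the field order documented in the Linux proc(5) man page (user, nice, system, idle, iowait, ...):

```python
def iowait_percent(stat_line: str) -> float:
    """Compute the iowait percentage from a /proc/stat 'cpu' line.

    Field order per proc(5): user nice system idle iowait irq softirq ...
    """
    ticks = [int(v) for v in stat_line.split()[1:]]
    return 100.0 * ticks[4] / sum(ticks)

# Example with a made-up sample line: 100 iowait ticks out of 1000 total.
sample = "cpu 100 0 100 700 100 0 0 0"
print(iowait_percent(sample))  # -> 10.0
```

In practice you would read the first line of `/proc/stat` twice, a few seconds apart, and compute the percentage from the deltas; the single-snapshot version above just shows the arithmetic.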

My hunch is that the server is trying to fill up the buffer and I/O is just bogging down the system. If I had a larger input file (I have 135 MB total of queries that I am feeding into querybench), it might level out after "warming up" more.

My current methodology:

  • I split the input queries into two files to warm the server up with the first set of queries

  • I time both runs and then record the results of the second part (which is always higher than the warm-up run)
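The warm-up/measure split described above can be sketched in a few lines of Python. Here `run_queries` is a hypothetical stand-in for feeding one batch of queries through querybench; only the names and placeholder workload are assumptions:

```python
import time

def split_for_warmup(queries):
    """Split the query list into a warm-up half and a measured half."""
    mid = len(queries) // 2
    return queries[:mid], queries[mid:]

def timed_run(run_queries, queries):
    """Time one batch; run_queries is a hypothetical benchmark driver."""
    start = time.perf_counter()
    run_queries(queries)
    return time.perf_counter() - start

# Usage sketch: warm the buffer pool with the first half,
# then record only the second (post-warm-up) half.
queries = [f"SELECT {i}" for i in range(10)]   # placeholder workload
warmup, measured = split_for_warmup(queries)
timed_run(lambda q: None, warmup)              # warm-up run, result discarded
elapsed = timed_run(lambda q: None, measured)  # recorded run
```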

Results show the second run is consistently higher than the warm-up run; however, I could be really wrong.

Feedback would be very appreciated!!"

I know this is not my data, but I'd be really interested to hear anybody's comments, as the blog this was posted on attracted none.