I’ve been running iostat on one of our two read servers over the past week, hoping to rule out an I/O bottleneck as the cause of some sporadic performance problems during peak traffic.
The output is curious, though. iostat shows a modest number of write requests, probably due to replication, but very few read requests. I’m not sure whether this is an anomaly with iostat or whether it comes down to MySQL caching or internals.
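If caching is the explanation, MySQL’s own counters should show most reads being satisfied from memory rather than from disk. I could check that with something like the following (a sketch; it assumes the mysql client can log in without prompting, e.g. via ~/.my.cnf, and which counters matter depends on whether the tables are InnoDB or MyISAM):

    # Logical read requests vs. reads that actually went to disk.
    # A high request-to-disk ratio would explain the low r/s in iostat.
    mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'"

    # For MyISAM tables, the key cache counters tell the same story.
    mysql -e "SHOW GLOBAL STATUS LIKE 'Key_read%'"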
I would expect r/s to be much higher. The site gets several hundred thousand requests per day, each of which runs multiple database-intensive queries.
Any thoughts?
Here is some sample output:
iostat -d -x 5 8
Device:    rrqm/s   wrqm/s      r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz    await    svctm    %util
sda          0.01    70.15     1.30    30.79    18.43    75.59     2.93     0.12     3.69     3.19    10.24
dm-0         0.00     0.00     1.28   100.32    17.43    70.58     0.87     0.05     0.52     1.00    10.15
dm-1         0.00     0.00     0.03     0.62     1.00     4.99     9.11     0.01    22.37     2.86     0.19
dm-2         0.00     0.00     0.00     0.00     0.00     0.00     8.00     0.00   304.40    14.12     0.00
dm-3         0.00     0.00     0.00     0.00     0.00     0.02     8.00     0.00    11.09     6.74     0.00

Device:    rrqm/s   wrqm/s      r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz    await    svctm    %util
sda          0.00   215.40     0.00    81.40     0.00  2376.00    29.19     5.20    63.90     3.17    25.78
dm-0         0.00     0.00     0.00   292.40     0.00  2339.20     8.00    12.48    42.69     0.86    25.26
dm-1         0.00     0.00     0.00     4.60     0.00    36.80     8.00     0.06    10.70     6.26     2.88
dm-2         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-3         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00

Device:    rrqm/s   wrqm/s      r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz    await    svctm    %util
sda          0.00   160.20     0.40    76.80     3.20  1936.00    25.12     6.12    74.23     3.01    23.26
dm-0         0.00     0.00     0.40   241.20     3.20  1929.60     8.00    10.83    42.92     0.94    22.78
dm-1         0.00     0.00     0.00     0.20     0.00     1.60     8.00     0.03   202.00    13.00     0.26
dm-2         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-3         0.00     0.00     0.00     0.60     0.00     4.80     8.00     0.00     6.67     4.00     0.24

Device:    rrqm/s   wrqm/s      r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz    await    svctm    %util
sda          0.00   128.40     0.60    98.20     4.80  1772.80    17.99     6.51    69.82     2.92    28.84
dm-0         0.00     0.00     0.60   221.60     4.80  1772.80     8.00    10.50    49.24     1.30    28.88
dm-1         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-2         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-3         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00

Device:    rrqm/s   wrqm/s      r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz    await    svctm    %util
sda          0.00   225.00     0.00    70.80     0.00  2364.80    33.40     2.97    42.06     2.90    20.54
dm-0         0.00     0.00     0.00   291.20     0.00  2329.60     8.00    10.41    35.79     0.69    20.16
dm-1         0.00     0.00     0.00     3.80     0.00    30.40     8.00     0.02     4.95     0.47     0.18
dm-2         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-3         0.00     0.00     0.00     0.60     0.00     4.80     8.00     0.00     6.00     3.67     0.22

Device:    rrqm/s   wrqm/s      r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz    await    svctm    %util
sda          0.00   139.80     0.20    99.20     1.60  1913.60    19.27     5.50    55.31     3.23    32.06
dm-0         0.00     0.00     0.20   239.20     1.60  1913.60     8.00     9.90    41.35     1.34    32.08
dm-1         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-2         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-3         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00

Device:    rrqm/s   wrqm/s      r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz    await    svctm    %util
sda          0.00    99.40     0.00    50.20     0.00  1196.80    23.84     1.29    25.79     3.30    16.58
dm-0         0.00     0.00     0.00   144.20     0.00  1153.60     8.00     3.34    23.17     1.14    16.38
dm-1         0.00     0.00     0.00     4.80     0.00    38.40     8.00     0.03     5.92     0.42     0.20
dm-2         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-3         0.00     0.00     0.00     0.60     0.00     4.80     8.00     0.03    44.67    24.33     1.46

Device:    rrqm/s   wrqm/s      r/s      w/s   rsec/s   wsec/s avgrq-sz avgqu-sz    await    svctm    %util
sda          0.00    87.20     0.20    21.20     1.60   867.20    40.60     1.32    61.53     2.19     4.68
dm-0         0.00     0.00     0.20   108.40     1.60   867.20     8.00     4.52    41.66     0.43     4.68
dm-1         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-2         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
dm-3         0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
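For what it’s worth, here is one way I could watch the same picture from MySQL’s side over matching 5-second intervals, to compare against the iostat samples above (again just a sketch; it assumes passwordless login for the mysql tools via ~/.my.cnf):

    # Print relative (-r) status deltas every 5 seconds (-i 5), filtered
    # down to InnoDB's own file I/O counters.
    mysqladmin -r -i 5 extended-status | grep -E 'Innodb_data_(reads|writes|read|written)'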