2 questions about innodb_buffer_pool_size parameter on RDS

Hi All !
I am using MySQL 5.5.40, running on Amazon RDS (instance: m1.large).
I ran the following query:
SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS FROM (SELECT SUM(data_length+index_length) Total_InnoDB_Bytes FROM information_schema.tables WHERE engine='InnoDB') A
That query is supposed to give me the recommended value for this parameter, and I got 85 GB.
The current size is 5.3 GB.
I have 2 questions:

  1. Is this query reliable? Should I upgrade the instance?
  2. If I understand correctly, in this situation most of the queries use disk I/O. Is there any way to check whether a query went to disk?

Thank you!!

Hi haifa;

There is no formula that will give you the proper innodb_buffer_pool_size in all cases.

If the server is dedicated to MySQL (i.e. no other major applications/services running on it like a web server), then it is generally recommended to set your innodb_buffer_pool_size to about 75-80% of the available memory on the server (the extra 20-25% goes to the rest of MySQL and system overhead). The exact percentage you use would depend on your system, so you would just need to set it and watch for memory contention / swapping. I typically start at 75% and then move it up a little if that proves to be stable over time.
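That rule of thumb is easy to sketch out. A minimal example, assuming an m1.large with roughly 7.5 GB of RAM (adjust total_ram_gb for your own instance):

```python
# Rough sketch of the 75-80% rule of thumb for a dedicated MySQL server.
# This is not an official formula - just the starting point described above.

def buffer_pool_size_gb(total_ram_gb, fraction=0.75):
    """Suggested innodb_buffer_pool_size as a fraction of total RAM."""
    return round(total_ram_gb * fraction, 1)

print(buffer_pool_size_gb(7.5))        # 75% starting point -> 5.6 GB
print(buffer_pool_size_gb(7.5, 0.80))  # upper end, once stable -> 6.0 GB
```

Note that 75% of 7.5 GB lands right around the 5.3 GB default RDS gave you, which is consistent with RDS sizing the buffer pool this way for a dedicated instance.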

If the server is not dedicated to MySQL, or you simply want to keep the extra memory free, then using the query you listed will give a reasonable value to start with for your innodb_buffer_pool_size. If you do use the query method, then just keep in mind that you will want to review it over time and make adjustments as your data size grows.

Now to answer the question of “is X innodb_buffer_pool_size large enough for my server”, that is more complex. You could have 1TB of data, but only access 1GB of it regularly, in which case you would only need a couple gigs in the buffer pool. Or you could have 20GB of data, and constantly access all of it, in which case you’d want as large a buffer pool as possible to cover the data + index size.

To see how well your server is using the InnoDB buffer pool, you can check a few different areas. Run “show engine innodb status \G” and check to see if you have any free buffers. If you consistently have a large percentage of free buffers over time compared to the buffer pool size, then the buffer pool might be too large. Then in the same “show engine innodb status \G” output, look for “evicted without access X.XX/s”. If that value is anything but 0, then your innodb_buffer_pool_size is probably too small (basically it means data is getting loaded into the buffer pool but is never accessed again before it gets pushed out to make room for new data - a.k.a. buffer pool churn).
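If you want to track those two indicators over time rather than eyeball them, here is a rough sketch of pulling them out of the status text. The sample string below is just an abbreviated example of the relevant lines from a “show engine innodb status \G” printout:

```python
import re

# Sketch: extract "Free buffers" and the "evicted without access" rate
# from the text returned by SHOW ENGINE INNODB STATUS.
status = """\
Buffer pool size 348224
Free buffers 1
Pages read ahead 0.00/s, evicted without access 1.85/s, Random read ahead 0.00/s
"""

free_buffers = int(re.search(r"Free buffers\s+(\d+)", status).group(1))
evicted_rate = float(
    re.search(r"evicted without access\s+([\d.]+)/s", status).group(1)
)

print(free_buffers)   # persistently large vs. pool size -> pool may be oversized
print(evicted_rate)   # anything above 0 -> churn; pool may be too small
```

The exact wording of those lines can vary a bit between MySQL versions, so treat the regexes as a starting point for your own version's output.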

Aside from that, to get an idea of how often MySQL cannot satisfy a read request from the InnoDB buffer pool and has to go to disk, you can use this calculation: (innodb_buffer_pool_reads / innodb_buffer_pool_read_requests) * 100. These two values come from “show global status”, and will give you a percentage of reads that end up going to disk. So if you have a high percentage here, then that means your InnoDB buffer pool is not doing as much as it could be.
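A quick sketch of that calculation with made-up counter values (substitute the Innodb_buffer_pool_reads and Innodb_buffer_pool_read_requests numbers from your own “show global status” output):

```python
# Hypothetical counter values - replace with your own SHOW GLOBAL STATUS numbers.
innodb_buffer_pool_reads = 1_000_000            # logical reads that had to go to disk
innodb_buffer_pool_read_requests = 500_000_000  # all logical read requests

miss_pct = innodb_buffer_pool_reads / innodb_buffer_pool_read_requests * 100
print(f"{miss_pct:.2f}% of read requests went to disk")  # -> 0.20%
```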

As you can see, this can be (and is) a complex topic. However the basic point is simple: you need (or want) an innodb_buffer_pool_size big enough to fit all of your commonly accessed data plus indexes. If you do not have enough memory for that, then you will have churn. So the more memory you can spare for MySQL, the better. =)


Hi Scott!
Thanks for the detailed explanation !
The server is dedicated to MySQL.


Total memory allocated 5839028224; in additional pool allocated 0
Dictionary memory allocated 93271517
Buffer pool size 348224
Free buffers 1
Database pages 335443
Old database pages 123806
Modified db pages 27601
Pending reads 1
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 1202566857, not young 0
131.71 youngs/s, 0.00 non-youngs/s
Pages read 1086207713, created 85166112, written 328555818
109.87 reads/s, 0.43 creates/s, 25.37 writes/s
Buffer pool hit rate 993 / 1000, young-making rate 8 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 1.85/s, Random read ahead 0.00/s

This is the output that I got.
If I increase innodb_buffer_pool_size to 80-85% (the current value is 75%), is it supposed to improve performance?
Regarding the calculation: (881810248 / 379758330493) * 100 ≈ 0.2%. Does that make sense?

Hi haifa;

Looks like you do have some churn going on in the buffer pool with your “evicted without access” value of 1.85 per second. That is not horrible, but it does mean your buffer pool is full and cycling through data quicker than it can get reused (which lowers efficiency). Your disk read percentage is pretty good though at 0.2%. If your server is stable enough, I would go ahead and try bumping your innodb_buffer_pool_size up to 80% of your system memory (at most - whatever you are comfortable with on your system) and see how that goes. Keep in mind that MySQL can use up a lot of memory in other ways, like with temp tables, so be careful not to over-allocate the buffer pool and end up running out of memory later. That’s why it is best to raise the value in small increments so you can watch stability, and leave yourself a good cushion for when you get an unexpected load or your usage behavior changes all of a sudden (i.e. someone sets up an ETL job that is pumping out temp tables due to huge / poorly written queries).


OK I’ll try that and update.
Thanks again for the quick response!!