It’s not universally true. But for InnoDB tables, MySQL will choose some index to scan (possibly the primary key, possibly something else), and provided that index fits fully into the buffer pool, all of its pages will be in the buffer pool afterwards.
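If you want to see which index that is, a quick check (the table name t here is just a placeholder):

-- The "key" column of the EXPLAIN output shows which index the
-- optimizer chose for the scan (PRIMARY or a secondary index).
EXPLAIN SELECT COUNT(*) FROM t;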
Thank you for the link. I believe that is exactly where I read it before.
Another question I have: is it known how long such an index/table will stay in memory? Is that simply decided by the part of MySQL that manages the InnoDB Buffer Pool, and therefore out of our control?
Is there a way to look into the InnoDB Buffer Pool to see what is there?
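(From what I can tell, on 5.0 the closest thing is the aggregate buffer pool numbers in SHOW ENGINE INNODB STATUS; per-table detail only arrived with the INFORMATION_SCHEMA.INNODB_BUFFER_PAGE table in MySQL 5.6. A sketch, assuming a 5.6+ server:)

-- Requires MySQL 5.6+, so not an option on 5.0.44. Counts the
-- buffer pool pages cached per table; note that querying this
-- table can itself be expensive on a busy server.
SELECT TABLE_NAME,
       COUNT(*) AS cached_pages,
       ROUND(SUM(DATA_SIZE)/1024/1024) AS data_mb
FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
WHERE TABLE_NAME IS NOT NULL
GROUP BY TABLE_NAME
ORDER BY cached_pages DESC;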
I ask because we have a stored procedure that uses two tables. When I run it, it takes up to 1.5 seconds (depending on the input). However, if I first run SELECT COUNT(*) FROM A and SELECT COUNT(*) FROM B and then run it, it takes at most 0.25 seconds. I decided to schedule a cron job to run these SELECTs once an hour. Is that too often? If my database is 20 GB and the InnoDB Buffer Pool Size is 30 GB, can I assume that these tables will always stay in memory?
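For what it’s worth, this is roughly what the warm-up would look like (the schema name mydb and the index hints are placeholders of mine; a bare COUNT(*) may let InnoDB scan a smaller secondary index instead of the table data, hence the FORCE INDEX):

-- Hypothetical hourly warm-up statements; forcing the clustered
-- index (PRIMARY) makes the scan read the actual row data into
-- the buffer pool rather than just a small secondary index.
SELECT COUNT(*) FROM mydb.A FORCE INDEX (PRIMARY);
SELECT COUNT(*) FROM mydb.B FORCE INDEX (PRIMARY);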
What is the most recent InnoDB plugin? Is it part of the latest MySQL release? We are using version 5.0.44, so it should not affect us, right?
I thought the entire table does not go into the buffer pool anyway. What I understood from reading Peter’s article is that we can put either the clustered index or one or more non-clustered indexes into the buffer pool. Did I misunderstand something?
I’m still a bit confused. Regarding point 2: if the table is 1 GB in size, and we have more than 1 GB of InnoDB Buffer Pool, and we do the following:
SELECT COUNT(*)
FROM A
WHERE non_indexed_column = 0
thereby loading the clustered index, and thus the table data itself, into memory, why won’t the entire table load?
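At least EXPLAIN seems to confirm that this is a full scan (sketched against my table A; type ALL means every row, and therefore every page of the clustered index, gets read):

-- With no index on the filtered column, the optimizer has nothing
-- to use, so EXPLAIN should report type: ALL, i.e. a full
-- clustered-index scan.
EXPLAIN SELECT COUNT(*) FROM A WHERE non_indexed_column = 0;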
We recently increased memory on our machines to 35 GB and set the InnoDB Buffer Pool Size to 30 GB. Our current DB size is 20 GB and the largest table is ~4 GB, so everything seems to fit nicely.
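In case it is useful to anyone, this is how I checked the sizes (works on 5.0 via information_schema; the schema name is a placeholder):

-- Data + index size per table, to compare against the 30 GB
-- buffer pool. Replace 'mydb' with the actual schema name.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
FROM information_schema.TABLES
WHERE table_schema = 'mydb'
ORDER BY total_gb DESC;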