I’ve been doing a lot of reading about Amazon’s EBS. I’m interested in running EBS without RAID-10 striping or anything like that. I know this means the disk may see quite a bit of latency at times.
From what I’ve read, if your DB can’t fit into RAM, EBS latency will be pretty bad for heavily used databases, whether on XtraDB or InnoDB. In my case, the database can fit into RAM: it’s approximately 11 GB, and I’m looking at an EC2 machine with 34 GB of RAM. Obviously, if we’re doing a lot of reading from the DB, EBS latency should be a non-issue. However, my DB also does some writing. According to Munin, it does approximately 0.5 deletes, 10 updates, and 5 inserts per second. All of these are relatively simple queries on tables with at most 6–7 million rows.
So, I guess my question is this: how does XtraDB or InnoDB handle writing deletes, updates, and inserts to disk if binary logging is off? (My data isn’t so critical that I can’t afford minor losses of recent entries in the event of a crash, and I don’t run a master-slave setup.) Does it absolutely need fast access to the disk at all times to maintain performance, even when the DB fits entirely into RAM, or can it wait through peaks and valleys to do its writing so that these fluctuations are evened out? My concern is that if every write has to be flushed to the transaction log immediately, any disk latency will really slow down the DB, regardless of how much RAM you have.
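For reference, here’s the sort of my.cnf fragment I have in mind. The values are illustrative, not a tested config — but as I understand it, innodb_flush_log_at_trx_commit is the knob that controls whether InnoDB must fsync the log on every commit (the default, 1) or only about once per second (2), which would seem to match my tolerance for losing a second of recent writes:

```ini
[mysqld]
# Buffer pool sized so the whole ~11 GB dataset stays in RAM
innodb_buffer_pool_size = 16G

# 2 = write the redo log to the OS on each commit, but fsync only
# roughly once per second; a crash can lose up to ~1 second of
# transactions, which is acceptable for my data
innodb_flush_log_at_trx_commit = 2

# Binary logging stays off simply by not specifying log-bin,
# since I don't run replication
```

If that understanding is right, then with setting 2 the per-commit cost wouldn’t depend on EBS latency the way it does with setting 1 — but I’d like confirmation from someone who knows the internals.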
Any thoughts? This level of internal DB functionality is beyond my scope of knowledge, and all the benchmarks I found online seemed to cover DBs that didn’t fit entirely into RAM.