Deploy InnoDB on a ramdisk in a replication setup?


Our core database is small (it should never exceed a few GBs), and we are currently doing InnoDB + replication + a buffer pool big enough to accommodate everything in memory. The main source of random I/O is the periodic flush of dirty pages, which I’d like to avoid.

I also feel that it’s wasteful for every server in the replication setup to flush these dirty pages, when really it’s enough for one of them to do it (the one where I’ll take my backups). That’s why I’m thinking of the following strategy:

We use a ramdisk to hold the tablespaces and transaction logs, and the real disk to hold the binlogs. We take a hot backup on the master every 30 minutes. If the master dies, a slave is promoted to master and the old master gets redeployed as a slave. If a slave dies, we just copy the most recent backup from the master and resume replication from there. Finally, in the unlikely case that all servers die at the same time, we still have a recent backup and the binlog to recover the data.
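The file layout described above might look like the following my.cnf sketch. The paths are assumptions for illustration: /mnt/ramdisk would be a tmpfs mount (e.g. `mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk`), and /var/lib/mysql/binlog a directory on the real disk.

```ini
# Hypothetical my.cnf fragment -- paths are assumed, adjust to your layout.
[mysqld]
# Tablespaces and redo logs on the ramdisk (lost on reboot, by design):
innodb_data_home_dir      = /mnt/ramdisk/innodb
innodb_log_group_home_dir = /mnt/ramdisk/innodb

# Binlogs on the real disk, so they survive a crash or reboot:
log-bin = /var/lib/mysql/binlog/mysql-bin
```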

So as long as we use a strict log-flushing policy (flush the transaction log on commit, sync the binlog, etc.), there shouldn’t be any data loss, right?
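For reference, the strict flushing policy mentioned above usually maps to two stock InnoDB/MySQL settings. Note that with the redo log itself on a ramdisk, the fsync of the redo log no longer buys durability; in this scheme it is the binlog sync on the real disk that matters.

```ini
# Strict durability settings (standard MySQL variable names):
[mysqld]
innodb_flush_log_at_trx_commit = 1   # write + flush the redo log at each commit
sync_binlog = 1                      # fsync the binlog at each commit
```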

On the performance side, the only sources of I/O are now the binlog flushing and the periodic full backup; both are sequential, so it should be fine. The backup part is actually similar to the “sequential checkpointing” described here.

What I’m worried about is that InnoDB doesn’t know the tablespace is now stored in memory; its decisions are based on the assumption of a slow disk. And since in such a setup we won’t be using as large a buffer pool (otherwise memory consumption doubles), I’m not sure the performance will be as good as it should be.

I’m aware of the MEMORY storage engine, but it doesn’t seem to be implemented and maintained very well; for example, it uses table-level locks, which scared me off.

It would be best if I could simply disable dirty-page flushing altogether, but that probably isn’t possible. I’m not sure whether it would be an easy patch to the source, or whether anyone has already done it. As a newbie to MySQL, I certainly can’t do it myself.
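As far as I know there is no switch to disable flushing entirely, but the closest existing knob lets the page cleaner tolerate more dirty pages before it starts flushing. A sketch (the variable is real; the value 90 is just an illustration, and the default varies by MySQL version):

```ini
# Delay background flushing by allowing more dirty pages to accumulate:
[mysqld]
innodb_max_dirty_pages_pct = 90   # flush only when 90% of the pool is dirty
```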

Any thoughts on such an approach? Is it feasible at all in a production setting, will the performance be significantly better, or is there a better way to do it? Thanks.


I think you understand the risks that come with a solution like this, i.e. storing the files on a memory filesystem.

Of course, I bet the main problem is NOT the InnoDB overhead of flushing data to disk, but writing the actual tables.

There is overhead, but that’s what an RDBMS means, I think. Otherwise, say hello to Redis :)

The MEMORY engine is a whole different thing, and you have to know this: every query on a MEMORY table takes a full table lock. Sure it’s fast, but not fast enough if you have hundreds of threads.

I think you have to either accept the overhead, or think in more key-value terms if you don’t actually need an RDBMS.

If you just want to see how far you can tune, try a memory fs. Memory consumption is a fair point, but c’mon, everything has a price. If you have enough memory, store as much in memory as you can. But if you have strong machines, just dedicate two different disks to the binlog and the InnoDB files (everything except the tablespace) :wink:
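The two-disk suggestion above can be sketched as a my.cnf fragment; the mount points are assumptions for illustration:

```ini
# Hypothetical layout with two dedicated disks (paths are assumed):
[mysqld]
log-bin                   = /disk1/binlog/mysql-bin  # binlog on its own disk
innodb_log_group_home_dir = /disk2/innodb-logs       # redo logs on another
```

This keeps the two sequential write streams from seeking against each other, without giving up durable storage.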