Although read/write splitting is used by most large web sites, and many people say it improves performance a lot, I'm still pretty confused by that conclusion:
Whenever a write operation is routed to the master, every slave synchronizes the binlog from the master (at an average latency of about 0.1 s) and then replays those write operations (the SQL write queries) locally. How is this read/write splitting any different from a solution where write operations are routed to all machines (no master/slave at all)? In that case we wouldn't even have the data inconsistency that read/write splitting's replication lag causes.
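For context, here is a minimal sketch of what I mean by application-level read/write splitting; the hostnames and the statement-prefix routing heuristic are just placeholders, not a real driver API:

```python
import random

MASTER_DSN = "mysql://master:3306/app"   # placeholder master host
SLAVE_DSNS = [                           # placeholder replica hosts
    "mysql://slave1:3306/app",
    "mysql://slave2:3306/app",
]

def route(sql: str) -> str:
    """Send writes to the master; spread reads across the slaves."""
    is_write = sql.lstrip().upper().startswith(
        ("INSERT", "UPDATE", "DELETE", "REPLACE")
    )
    return MASTER_DSN if is_write else random.choice(SLAVE_DSNS)

print(route("INSERT INTO users (name) VALUES ('bob')"))  # -> master
print(route("SELECT * FROM users"))                      # -> one of the slaves
```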
It will be much more difficult to keep your data consistent (what if a write fails on one server? what if two threads run insert queries at almost the same time and you use AUTO_INCREMENT?), and your application will take longer because it has to execute every write query against every server. I do see one major advantage of your approach, namely that you no longer have single-threaded replication.
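To make the AUTO_INCREMENT race concrete, here is a toy illustration (plain Python, not real MySQL) of two servers that each hand out ids independently when every server accepts writes:

```python
class Server:
    """Toy stand-in for one database server with its own AUTO_INCREMENT counter."""
    def __init__(self) -> None:
        self.next_id = 1

    def insert_row(self) -> int:
        row_id = self.next_id   # each server assigns ids independently
        self.next_id += 1
        return row_id

a, b = Server(), Server()
# Two "concurrent" inserts land on different servers first:
print(a.insert_row(), b.insert_row())  # 1 1 -> the same "unique" id twice
```

In real MySQL multi-master setups this particular collision is usually worked around with the auto_increment_increment and auto_increment_offset settings, but that only patches one of the consistency problems listed above.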
Yes, it brings problems. However, the claim that read/write splitting improves performance still hasn't been addressed.
So the conclusion should be changed to "read/write splitting helps with high availability and with scaling out reads", instead of "it improves search performance", right?