InnoDB performance and scalability blog

When we talk of InnoDB performance tuning variables (innodb_file_per_table, innodb_buffer_pool_size, innodb_log_file_size, innodb_flush_log_at_trx_commit, innodb_flush_method), it is generally seen that setting optimum values for these with respect to your server configuration fine-tunes InnoDB tables.
But very recently I faced an issue. I configured these variables on my master DB, which had been running fine with default values for about 2 years. I did it because the size of the InnoDB tables had grown drastically and the variables were not well tuned. Below are the values I configured (my server is: OS Red Hat 5.8, 16 GB RAM, 8 CPUs).

innodb_buffer_pool_size = 2048M
innodb_additional_mem_pool_size = 256M
innodb_log_file_size = 512M
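For context, a fuller my.cnf fragment covering all the variables mentioned above might look like this. The values for innodb_flush_log_at_trx_commit and innodb_flush_method are illustrative assumptions on my part, not necessarily what I had set:

```ini
[mysqld]
# One .ibd file per table instead of one shared ibdata file
innodb_file_per_table           = 1

# The values given above
innodb_buffer_pool_size         = 2048M
innodb_additional_mem_pool_size = 256M
innodb_log_file_size            = 512M

# Illustrative settings for the two variables discussed below
innodb_flush_log_at_trx_commit  = 1         # fsync the redo log at every commit
innodb_flush_method             = O_DIRECT  # bypass the OS page cache for data files
```

Note that on older MySQL versions, changing innodb_log_file_size requires a clean shutdown and removing the old ib_logfile* files before restart, otherwise InnoDB will refuse to start.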

But what I saw was that as soon as the hits started pouring in, connections rose to about 700 and stayed there until I restarted MySQL, and within minutes they rose again.
max_connections was set at 2000, but we had never faced a connection pile-up until this time; normally it stayed at about 150-200 per second.
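When connections pile up like that, it helps to look at what those threads are actually doing. A couple of standard MySQL diagnostic statements:

```sql
-- How many threads are connected vs. actively running
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Threads_running';

-- What each connection is doing; many threads stuck in the same
-- state (e.g. "query end" while committing) points at a flush
-- bottleneck rather than a query problem
SHOW FULL PROCESSLIST;
```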

And finally, when I commented out the values for innodb_flush_log_at_trx_commit and innodb_flush_method, performance came back to normal and everything functioned well. So what could be the reason for this undue behaviour?


Set innodb_buffer_pool_size to 10 GB if you are not running any other application on this server
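The 10 GB figure follows the common rule of thumb of giving InnoDB roughly 60-70% of RAM on a dedicated database server. A small sketch of that arithmetic (the exact fraction of 0.625 is my assumption, not a hard rule):

```python
def suggested_buffer_pool_gb(total_ram_gb, fraction=0.625):
    """Rule-of-thumb buffer pool sizing: dedicate ~60-70% of RAM to
    InnoDB on a MySQL-only server, leaving the rest for the OS,
    per-connection buffers, and other MySQL caches."""
    return round(total_ram_gb * fraction)

# 16 GB server from the question
print(suggested_buffer_pool_gb(16))  # -> 10
```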

innodb_flush_method affects performance when the database has heavy read/write activity, so you need to configure it according to your storage type, i.e. SAN or local disk.
Basically, O_DIRECT opens files directly without using the OS cache, which reduces double buffering between the OS page cache and MySQL's own buffer pool. You need to benchmark your database before settling on any flush method for your storage.
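As an illustration of the trade-off being described, these two knobs are often benchmarked in combinations like the following (example values, not recommendations):

```ini
# Safest: fsync the redo log at every commit, bypass the OS page cache
innodb_flush_log_at_trx_commit = 1
innodb_flush_method            = O_DIRECT

# Faster alternative, but can lose up to ~1 second of transactions
# on an OS crash: write at commit, fsync once per second
#innodb_flush_log_at_trx_commit = 2
```

O_DIRECT is often reported to help on local disks (especially with a battery-backed write cache), while on some SAN setups the default (fdatasync) can perform better, which is exactly why benchmarking on your own storage matters.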