When we talk of InnoDB performance tuning variables: innodb_file_per_table, innodb_buffer_pool_size, innodb_log_file_size, innodb_flush_log_at_trx_commit, innodb_flush_method... it is generally seen that setting optimum values for these with respect to your server configuration tunes InnoDB tables well.
But very recently I faced an issue. I configured these variables on my master DB, which had been running fine with the default values for about two years. I did it because the size of the InnoDB tables had grown drastically and the variables were not well tuned. Below are the values I configured (my server is: OS Red Hat 5.8, 16GB RAM, 8 CPUs):
innodb_file_per_table=1
innodb_buffer_pool_size = 2048M
innodb_additional_mem_pool_size = 256M
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT
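For reference, this is how the same settings would look gathered under the [mysqld] section of my.cnf (a sketch of my configuration, not a recommendation). One caveat I am aware of: on MySQL versions before 5.6, changing innodb_log_file_size also requires a clean shutdown and removal of the old ib_logfile* files before restarting, otherwise InnoDB refuses to start or misbehaves.

```ini
[mysqld]
# Store each InnoDB table in its own .ibd file
innodb_file_per_table           = 1
# Buffer pool: main InnoDB cache (2G out of 16G RAM here)
innodb_buffer_pool_size         = 2048M
innodb_additional_mem_pool_size = 256M
# Redo log size; pre-5.6, old ib_logfile* must be removed after a clean shutdown
innodb_log_file_size            = 512M
# 2 = flush redo log to OS each commit, fsync roughly once per second
innodb_flush_log_at_trx_commit  = 2
# Bypass the OS page cache for data files
innodb_flush_method             = O_DIRECT
```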
But what I saw was that as soon as the hits started pouring in, the number of connections rose to about 700 and stayed there until I restarted MySQL, and within minutes it rose again.
max_connections was set to 2000, but we had never faced a connection problem before this; normally we see only about 150-200 connections at a time.
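When connections pile up like this, it usually means queries are executing more slowly and clients are stacking up behind them. These standard diagnostic statements (run from the mysql client while the problem is occurring) would show what the 700 connections were actually doing:

```sql
-- What is each connection doing right now?
SHOW FULL PROCESSLIST;

-- Connection and thread counters
SHOW GLOBAL STATUS LIKE 'Threads_%';

-- Detailed InnoDB state: pending fsyncs, log waits, semaphores
SHOW ENGINE INNODB STATUS\G
```

If most threads are stuck in the same state (for example waiting on commits or flushes), that points at the flush-related settings rather than the buffer pool.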
Finally, when I commented out the values for innodb_flush_log_at_trx_commit and innodb_flush_method, performance came back to normal and everything functioned well. So what could be the reason for this strange behaviour?
Thanks.