
MySQL Performance Degrades after hitting ~100% CPU Load

EZboy
Hi everyone,

I am having some interesting performance issues with my MySQL installation and am looking for suggestions, as I have exhausted every troubleshooting approach I know.

First of all, info:
OS - Fedora 15, kernel 2.6.40.4-5.fc15.x86_64
MySQL - mysql-5.5.27-linux2.6-x86_64
Hardware - HP DL580 G7, 2 x Xeon E7-4820, 64 GB RAM, RAID 10 storage with 1 GB flash buffer

my.cnf:

[mysqld]
port = 3306
socket = /tmp/mysql.sock
default-storage-engine = myisam
tmpdir = /var/tmp/tmpmysql

#Logs
log = /var/log/mysql.log
general_log = 0
slow_query_log = 1
long_query_time = 10

skip-external-locking
concurrent_insert = 2
thread_concurrency = 16
thread_cache_size = 8
key_buffer_size = 28G
max_allowed_packet = 32M
table_open_cache = 2048
sort_buffer_size = 2M
read_buffer_size = 2M
join_buffer_size = 128M
read_rnd_buffer_size = 32M
myisam_sort_buffer_size = 2G
myisam_max_sort_file_size = 8G
bulk_insert_buffer_size = 64M
query_cache_size = 128M
query_cache_limit = 64M
tmp_table_size = 256M
max_tmp_tables = 86
max_heap_table_size = 2G
back_log = 70
max_connections = 500
max_connect_errors = 10

#InnoDB
innodb_thread_concurrency = 16
innodb_write_io_threads = 8
innodb_read_io_threads = 8
innodb_buffer_pool_size = 4G
innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 1G
innodb_log_buffer_size = 8M
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 0
innodb_lock_wait_timeout = 120



So the issue I am having is that MySQL suddenly loses query performance after running under heavy load for a couple of hours. The same queries start running 10-100 times slower. After a restart everything is back to normal.

This behaviour became more noticeable as load increased. At 1K queries per second it never happened; at 1.5K qps it would happen about once a week; at 2K qps it happens every day or more often.
Under this load the server is often at 100% CPU, but that does not explain why performance does not recover after the load is removed.
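To have something concrete to compare, a quick snapshot of what the server is doing could be captured both while it is healthy and while it is degraded; a minimal sketch, using only standard MySQL 5.5 commands (nothing installation-specific):

-- Sketch only: snapshot to compare healthy vs. degraded state
SHOW FULL PROCESSLIST;                         -- what the 50-60 connections are actually doing
SHOW GLOBAL STATUS LIKE 'Threads_running';     -- concurrency inside the server right now
SHOW GLOBAL STATUS LIKE 'Table_locks_waited';  -- MyISAM table-lock contention
SHOW GLOBAL STATUS LIKE 'Slow_queries';        -- whether the slowdown shows up in the slow-query counter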

Now about query types:
60% SELECTs on MyISAM tables
25% INSERTs/UPDATEs on MyISAM tables
15% queries against InnoDB tables

Around 50-60 concurrent connections

The most common query executed:

SELECT ( 4 * HOUR( a.date ) + FLOOR( MINUTE( a.date ) / 15 ) ) AS period,
       SUM( a.mv ) AS count,
       DATE( date ) AS date
FROM   table a
WHERE  a.date > DATE_SUB( '2012-08-08 16:14:55', INTERVAL 10 MINUTE )
  AND  a.date <= '2012-08-08 16:44:58'
  AND  location = 'some_location'
  AND  ...
GROUP BY period
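Since the execution plan itself apparently does not change (see the checks below), a per-stage profile of this query in the healthy vs. degraded state might show where the extra time goes. A minimal sketch, assuming only the standard profiling feature available in 5.5:

-- Sketch only: profile one execution of the query above in each state
SET profiling = 1;
-- ... run the query above once ...
SHOW PROFILES;                 -- get the query id
SHOW PROFILE FOR QUERY 1;      -- per-stage timings (e.g. "Opening tables", "Sending data")
SET profiling = 0;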


It looks like I am hitting a bottleneck somewhere, but I cannot pinpoint the exact spot. Things I have checked:

Proper index - yes
Tmp tables - all in memory
Key cache - always enough
Query execution plan does not change when problem occurs
iostat - storage is loaded 10% at most
CPU - hits ~100% under load, but drops back to 10-20% while MySQL performance is still degraded
RAM - 36 GB out of 64 GB used by MySQL, the rest is left for the OS; the server never swaps
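For the key cache, tmp table, and query cache checks above, the underlying counters could be posted as well; a minimal sketch using standard 5.5 status variables:

-- Sketch only: counters behind the checks above
SHOW GLOBAL STATUS LIKE 'Key_read%';               -- Key_reads vs. Key_read_requests (key cache misses)
SHOW GLOBAL STATUS LIKE 'Key_blocks_not_flushed';  -- dirty key cache blocks waiting to be written
SHOW GLOBAL STATUS LIKE 'Created_tmp%';            -- Created_tmp_disk_tables vs. Created_tmp_tables
SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';    -- query cache evictions under memory pressure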


What else can I check or do? Any suggestions are appreciated!
Thanks.

Comments

  • EZboy
    After some digging on the internet and trying a few things, I found that executing "FLUSH TABLES" seems to temporarily restore performance close to its original level. Any thoughts about that?
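    In case it matters, here is roughly how the table cache side of that could be checked (a sketch only; the 2048 figure just mirrors table_open_cache from my.cnf above):

    -- Sketch only: does FLUSH TABLES coincide with table cache churn?
    SHOW GLOBAL STATUS LIKE 'Open_tables';      -- tables currently held open
    SHOW GLOBAL STATUS LIKE 'Opened_tables';    -- if this grows quickly, the cache of 2048 is cycling
    SHOW GLOBAL VARIABLES LIKE 'table_open_cache';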
  • EZboy
    Just wanted to add a screenshot of "top" to show the database load. The database is at that load level approximately 50% of the time. It shows 32 CPUs, but it is really 2 x 8-core CPUs plus hyper-threading.
    Is it possible that MySQL does not have enough CPU cycles left to clean up some internal queues/caches, and that "FLUSH TABLES" does this explicitly?

    [Screenshot attached: rio_cpu.jpg]
