Why do SQL queries take so long to execute??

Hello Everyone,

I have this nagging problem that comes up again and again, once or twice a month :frowning:

High Server CPU Load!!!

Then I have to reboot the server; I have no other option…

During the course of my troubleshooting I have learnt that the database server shows over 3000 concurrent connections from the web server, whereas there are not that many users in the organization! The average number of concurrent users is generally between 50 and 100.

Also, my free memory gets exhausted completely !!!

I have also run the pt-kill command to check for queries that take more than 30 seconds to execute. I do find them, and I use the pt-kill utility to kill them, but it does no good :frowning:
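For reference, that kind of pt-kill invocation looks roughly like this (a sketch, not the exact command from this thread; the credentials are placeholders):

```shell
# Watch for queries that have been busy for more than 30 seconds,
# kill them, and print what was killed. Credentials are placeholders.
pt-kill --user=user --password=password \
        --busy-time 30 --interval 10 \
        --kill --print
```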

I also tried a query from Peter Zaitsev located at mysqlperformanceblog, but in vain :frowning:

What I would like to know is: why does it only happen sometimes, how can I isolate the cause of the issue, what is the reason behind this crash, and why can my server not keep up?!

Any suggestions would be highly appreciated !!

Thank you all !

Hello again everyone,

I have faced the same issue once again almost at the same time today.

A user also reported seeing a very unfamiliar error on the server, stating: “Package power failure notification”.

I am not too sure what this issue is, but the server goes into a spiral and the number of connections just spirals up.

This has happened on two consecutive days!!!

Someone please suggest something; I am not sure which way to go :frowning:

Well, I have just checked the error logs on the server and found the exact error. It is as follows:

Sep 18 07:15:41 Newdbsrv kernel: CPU19: Package power limit notification (total events = 2)
Sep 18 07:15:41 Newdbsrv kernel: CPU21: Package power limit notification (total events = 2)
Sep 18 07:15:41 Newdbsrv kernel: CPU9: Package power limit notification (total events = 2)
Sep 18 07:15:41 Newdbsrv kernel: CPU23: Package power limit notification (total events = 2)
Sep 18 07:15:41 Newdbsrv kernel: CPU11: Package power limit notification (total events = 2)
Sep 18 07:15:41 Newdbsrv kernel: CPU17: Package power limit notification (total events = 2)
I would appreciate it if someone here could let me know how I can go about fixing this issue.
Thank you all once again !!!

The “Package power limit notification” message appears to be a Linux kernel bug that occurs under high system load, from what I can see through Google searches, though I have no first-hand knowledge of it. Some users reported that a reset did not help, but a hard power-off did (shut it down fully, then start it back up).

As for your connection stampedes causing high system load, my guess would be either that queries are getting backed up in the DB and the application is abandoning the connection and opening a new one, thereby compounding the problem exponentially, or that there is a cache refresh stampede. Does the application use some sort of cache, or an ORM like Hibernate with a built-in cache?

What you still need to do is get the output of “show engine innodb status \G” when this starts to happen. Since you know the connections shoot up, you could create a simple bash script that checks the number of connections, and once it gets over, say, 300 (or whatever number is well above your average connections, to the point where you know it’s going bad, but before it really gets bad), have the bash script start running “show engine innodb status \G” every 5-30 seconds and saving the results. That could help you identify what is actually going on in the database at the time, and hopefully narrow down the source.

Hello Scott,

Firstly, let me tell you: I was sure you would reply to my query, and I am sincerely indebted to you for that :slight_smile:

Yes, I too did some digging on the internet and came up with these two links, and they both point fingers at the kernel…



But the first post does say something about disabling the kernel notification in the BIOS, something like this:

Also another issue on my end :frowning:

My bad, but I will have a hard time finding the exact bash script for this specific issue :frowning:

Can you assist me with this in any way?

My apologies for being so pesky :frowning:

Please !!

Thank you

Hello Scott,

I have tried my best to write a bash script that counts the number of concurrent connections and, if it is higher than 300, executes the said command.

Can you please look into this script and comment ?

Since cron does not support scheduling jobs at intervals of seconds, I have set it to execute this script every minute.

I would appreciate it if you could let me know how I can execute it every 30 seconds.
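(For reference, one common workaround for cron’s one-minute granularity, not specific to this thread, is two crontab entries offset by a sleep; the path below is a placeholder:)

```shell
# crontab -e  --  two entries give a 30-second cadence
* * * * * /path/to/count.sh
* * * * * sleep 30; /path/to/count.sh
```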

Thank you a ton :slight_smile:

Hello Scott,

I think i have finally done it :slight_smile:


Since cron does not support seconds, I used the “sleep” command, and with it I call my script every 30 seconds to check the number of concurrent connections. Once the connections match the condition, “show engine innodb status \G” is executed.

To get my previous script working, I had to write another one called “every-30seconds.sh”

This in turn will call my count.sh.

To get this every-30seconds.sh running, I have used the “nohup” command like this: nohup ./every-30seconds.sh &
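(For reference, a wrapper of that sort is typically just an infinite loop; this is a sketch, and the path to count.sh is a placeholder:)

```shell
#!/bin/bash
# every-30seconds.sh -- call the connection-check script twice a minute, forever.
# The path below is a placeholder.
while true; do
    /path/to/count.sh
    sleep 30
done
```

Started in the background with nohup ./every-30seconds.sh &, it keeps running after the session exits.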

I think I have managed to do it, but I am sure this can still be done in a much more efficient way.

Please let me know if you know of one.

Thank you so very much.

Glad you gave it a whirl to start with; looks like you got something going so that’s good. =)

I would probably start with just having the check run once a minute, which should be adequate. The only time this will not work is if the ramp-up from a healthy server to a broken server takes less than a minute, which is definitely possible. But to start with, I would try something like the below, which you would call from cron once a minute. Note I just put this together, so make sure to test it yourself and make sure you are comfortable with it before using it:


#!/bin/bash
# Capture the MySQL processlist when the number of connections spikes.
# A flag file prevents overlapping runs from stacking up.

THREADS=`mysqladmin --user=user --password=password status | awk '{print $4}'`

if [ "$THREADS" -gt 300 ] && [ ! -e /tmp/mysql_proc_check.flg ]; then
    touch /tmp/mysql_proc_check.flg
    for i in {1..30}; do
        echo "" >> /tmp/processlist.log
        echo `date` >> /tmp/processlist.log
        mysqladmin --user=user --password=password processlist >> /tmp/processlist.log
        echo "" >> /tmp/processlist.log
        sleep 10
    done
    rm /tmp/mysql_proc_check.flg
fi

Theoretically this should check for connections greater than 300, and if that is the case, it then loops once every 10 seconds, 30 times, or about 5 minutes in total. It will output the processlist on each iteration to a file, along with the date/time so you can track it more easily. Note I added in a “flag” file as well, which should prevent this from running a bunch of times at once, as that would make things a lot worse. Which brings up the point that checking the processlist does add load, so do be careful with this.
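(To illustrate how the THREADS value is extracted: “mysqladmin status” prints a single line of space-separated fields, so awk’s $4 is the value right after “Threads:”. A quick check with a made-up status line:)

```shell
# A made-up sample of `mysqladmin status` output:
status_line='Uptime: 3600  Threads: 52  Questions: 1042  Slow queries: 0'

# Field 4 (whitespace-separated) is the thread count:
echo "$status_line" | awk '{print $4}'
# prints 52
```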


Thank you soo very much once again :slight_smile:

Firstly, [COLOR=#252C2F]Thank you very much for your script !!!

Pardon my ignorance, but I did not understand what you meant by “this will not work if the ramp-up from a healthy server to a broken server is less than a minute”.

I’ll have the script run on my server and see how it goes :slight_smile:

Thank you,

Meaning that if you run the script as a cron task every minute, and let’s say the cron task runs at 12:00:00 am and things look fine, but then at 12:00:10 am (10 seconds later) the connections ramp up to 900 and things go bad, then the cron task will not run again for another 50 seconds, which could be too late if the server crashes or becomes unresponsive before the script has a chance to run. However, as long as the script can connect to the server, you should be fine, so you’ll just have to see how it goes. =)

No problem; glad to help!

Thank you so very much for the clarification Scott :slight_smile:

Well, in that case, I have gone ahead and refined my script further, and it will now run every 30 seconds, as I have used “sleep 30” in it.

I have executed this script using “nohup ./count.sh &”, and it keeps running even if I exit my session.

Comments would help me further …

Phew… I guess this is my first script :stuck_out_tongue:

Thank you for your time!!!

The idea is right, so as long as it does what you need, you should be good to go. =)

You may also want to look into “screen” if you have never used it. Screen is a great tool that you won’t be able to live without once you start using it. It allows you to more easily run scripts and such in the background (similar to nohup, but easier).
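(For reference, basic screen usage goes something like this; the session name “dbmon” is just an arbitrary example:)

```shell
screen -S dbmon     # start a new session named "dbmon"
./count.sh          # run the script inside it
# detach with Ctrl-a d; the script keeps running on the server
screen -ls          # list running sessions
screen -r dbmon     # reattach to the session later
```

Detaching leaves the script running, much like nohup, but you can reattach and watch the output at any time.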


wow…you are still awake…

Thank you for the quick reply… Yes, I have heard about screen but never used it. I guess it is time now :slight_smile:

And thank you for sharing the link…

I’ll update this post, if i have any further updates on it ( i hope not ) from the server :slight_smile:

Gnite !!!

Hello Scott,

We finally got to put our script to the test… The server just went down, and I had the script running, so I am trying to attach the output for your reference. Unfortunately, the file size exceeds the limit imposed by the forum admins, so I cannot upload it.

Can you let me know which part of the output you want, so that I can selectively upload it?

Thank you so very much once again for your time and attention.

Basically you want to look at the “show engine innodb status \G” output that you know is capturing the issue. Assuming the number of connections that triggered the data capture is accurate, the first entry in the log will likely show what is causing the backup. So just cut out the entire first “show engine innodb status \G” section from the log file.
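(If the log follows the format of the capture script earlier in the thread, where each capture is headed by a “date” line, one way to pull out just the first capture is a short awk one-liner. The weekday-prefix match below is an assumption based on the default output of “date”:)

```shell
# Print the first capture section from the log: everything from the
# first timestamp header up to, but not including, the second one.
# Assumes each header line starts with a weekday name, as `date` prints by default.
awk '/^(Mon|Tue|Wed|Thu|Fri|Sat|Sun) /{n++} n==2{exit} n>=1' /tmp/processlist.log
```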