I have extreme problems with MySQL on a 2x dual Opteron + 2 GB RAM server, which is more than enough to handle many requests, I believe. I don’t know if my queries are unoptimized (but I think they are OK). When I start MySQL, within 2 minutes the load kills the server and the sites become inaccessible. While Apache is processing about 100-200 requests at the same time, there are hundreds of MySQL processes for unknown reasons. I’m attaching a screenshot of how everything looks. The screenshot also shows which queries are used; maybe the queries are wrong?
Any help is more than welcome, because all my sites are down about 22-23 hours per day due to heavy SQL load and I can’t do anything about it :(
Can you also run a:
SHOW GLOBAL STATUS;
and post the output here.
It looks like MySQL is spending a lot of time trying to open tables.
This means that something is either locking the tables or that you have a very low table_cache setting compared to the number of connections that you have.
Each database is about 100-200 MB in size (about 20,000-25,000 rows per database). This is the output of SHOW GLOBAL STATUS;
mysql> SHOW GLOBAL STATUS;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘STATUS’ at line 1
If I run SHOW GLOBAL STATUS (without the ; at the end), nothing is displayed either.
P.S. 200 Apache requests and 500 active MySQL processes (with SELECT commands) at the moment :(
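That syntax error usually means the server is older than MySQL 5.0.2, where the GLOBAL keyword for SHOW STATUS does not exist yet. A sketch of the fallback (plain SHOW STATUS returns the same counters on those versions):

```sql
-- On MySQL older than 5.0.2, SHOW GLOBAL STATUS is a syntax error;
-- plain SHOW STATUS returns the same counters instead.
SHOW STATUS;

-- Narrow it down to the table-opening and locking counters of interest:
SHOW STATUS LIKE 'Open%';        -- Open_tables, Opened_tables, Open_files, ...
SHOW STATUS LIKE 'Table_locks%'; -- Table_locks_immediate, Table_locks_waited
```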
CREATE TABLE keywords (
  id int(11) NOT NULL auto_increment,
  keyword varchar(255) NOT NULL default '',
  yahoo_body text NOT NULL,
  markov text NOT NULL,
  whois tinyint(4) NOT NULL default '0',
  meta_keywords varchar(255) NOT NULL default '',
  meta_description varchar(255) NOT NULL default '',
  meta_title varchar(255) NOT NULL default '',
  featured tinyint(4) NOT NULL default '0',
  featured_body text NOT NULL,
  published tinyint(4) NOT NULL default '0',
  tb_direct tinyint(4) NOT NULL default '0',
  tb_direct_google tinyint(4) NOT NULL default '0',
  coppermine tinyint(4) NOT NULL default '0',
  social tinyint(4) NOT NULL default '0',
  PRIMARY KEY (id),
  KEY keyword (keyword)
) ENGINE=MyISAM AUTO_INCREMENT=22894 DEFAULT CHARSET=latin1
Looking at your table, you basically don’t have any indexes.
Only two of the columns are indexed: id (the primary key) and keyword.
But if you look at the queries in your processlist a lot of them are of the kind:
SELECT … FROM keywords WHERE social = 0 LIMIT 1;
SELECT … FROM keywords WHERE published = 1 ORDER BY id DESC LIMIT 5;
And:
SELECT … FROM trackback_direct WHERE posted = 0;
So here are some suggestions:
Create some indexes that target the queries above:
ALTER TABLE keywords ADD INDEX kw_ix_published_id (published, id);
ALTER TABLE keywords ADD INDEX kw_ix_social (social);
ALTER TABLE trackback_direct ADD INDEX tb_ix_posted (posted);
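After adding those indexes, it is worth checking with EXPLAIN that each query actually uses them (a sketch; the column lists here are abbreviated, since the real SELECT lists were not shown in full):

```sql
-- Should report key = kw_ix_published_id and no "Using filesort":
EXPLAIN SELECT id, keyword FROM keywords
WHERE published = 1 ORDER BY id DESC LIMIT 5;

-- Should report key = kw_ix_social:
EXPLAIN SELECT id, keyword FROM keywords WHERE social = 0 LIMIT 1;

-- Should report key = tb_ix_posted:
EXPLAIN SELECT * FROM trackback_direct WHERE posted = 0;
```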
Increase sort_buffer_size: 1 MB is very small (it’s actually smaller than the default of 2 MB, which is a bit odd since you seem to have a pretty hefty server). Set it to about 5 MB instead.
Try increasing the query cache variables:
query_cache_size=40M
query_cache_limit=2M
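Together, in my.cnf under the [mysqld] section, that would look something like this (the values are starting suggestions from this thread, not measured optima):

```ini
[mysqld]
# Suggested starting points -- tune further after observing the server
sort_buffer_size  = 5M
query_cache_size  = 40M
query_cache_limit = 2M
```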
Try those to begin with and we shall see what happens.
Thanks, I’ve tweaked my.cnf now and will try to add indexes to these columns.
Actually, I had an index on the “published” column previously, but after running EXPLAIN I decided to remove it. I just can’t remember what was wrong with it :)
OK, I just added an index to the published column and attached an image with the EXPLAIN output (the 1st query is without the index, and the 2nd query is with the published index added). As you may see, the 2nd query displays “Using where; Using filesort”, and I read somewhere that “Using filesort” indicates a slower query?
You didn’t run the ALTER TABLE statement that I gave you. Shame on you!
If you look at it I am creating a combined index with the columns (published, id).
And it is only this combined index that makes it possible to both retrieve the appropriate records and in the right order without a filesort.
Your first EXPLAIN does not have a filesort in it because MySQL chose to do an index scan, which means that it goes through all rows in the primary index trying to find matching rows.
In your second EXPLAIN it finds the matching rows by using the index, but then has to sort them to deliver them in the right order.
You must not fixate on the last part of the EXPLAIN output. All of it is interesting.
For instance, if a query has only 4 rows left before the sorting, it will still be faster than having to perform a table scan.
In your case, the first EXPLAIN reports that MySQL has to examine 14804 rows, while in your second EXPLAIN only 1243 rows are left after using the index.
So you see the index does make a difference.
But to speed it up even more, drop the index “published” that you created and create my combined index instead.
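Swapping the single-column index for the combined one is a one-step change (a sketch; the single-column index is assumed to be named “published” after the column, as MySQL does by default):

```sql
-- Drop the single-column index and replace it with the combined one,
-- so WHERE published = 1 ORDER BY id DESC can be served entirely from
-- the index, with no separate sort pass:
ALTER TABLE keywords DROP INDEX published;
ALTER TABLE keywords ADD INDEX kw_ix_published_id (published, id);
```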
I added all these indexes on all the websites hosted on the server, but still no luck… Tons of the same queries in the active MySQL process list. I have no more ideas what causes it :(
Some more opinions:
1.
Some general important questions:
What is actually your server doing during this high load?
Is it high CPU load or disk load?
How much of the RAM memory is used?
What OS are you running on?
Is the server a strict mysql DB server or are you running Apache or any other software on the same server?
Are the DB files located on a locally connected harddrive?
What kind of disk is it?
I think some of your my.cnf options are very odd. And unless you know why they are set as they are then I have some suggestions:
Start by increasing this value:
key_buffer_size=256M
Your setting of 24M seems very low.
Decrease this:
tmp_table_size=5M
Your 768M looks ridiculously big, and that memory can be used much better.
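As a my.cnf fragment, those two changes would look something like this (hedged suggestions; verify against actual memory use on the box):

```ini
[mysqld]
# key_buffer_size caches MyISAM index blocks -- was 24M, far too small
# for a box with 2 GB of RAM serving index-heavy SELECTs
key_buffer_size = 256M
# tmp_table_size caps in-memory temporary tables -- 768M was oversized
tmp_table_size  = 5M
```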
Then you have this setting:
max_connections=900
that looks awfully large compared to this:
table_cache=768
Since MySQL is multithreaded and each thread that wants to read data from a MyISAM table needs a separate file handle, table_cache should be roughly the number of connections times the number of tables used per query. But the strange part is that your status variables didn’t indicate that you had many opened tables, which contradicts this.
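As a rough sketch of that arithmetic (assuming ~2 tables touched per query): max_connections (900) × 2 ≈ 1800 table descriptors, while table_cache = 768 covers well under half of that. The relevant values can be checked at runtime:

```sql
-- Current settings:
SHOW VARIABLES LIKE 'table_cache';
SHOW VARIABLES LIKE 'max_connections';
-- If Opened_tables climbs quickly while the server runs,
-- table_cache is too small and tables are being re-opened constantly:
SHOW STATUS LIKE 'Opened_tables';
```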
All my settings are configured according to the MySQL Tuning Primer script - I launched it multiple times and the software recommended these values to be used on the server.
It’s high CPU load. Memory usage is normal; about 20-40% of memory is used in total. I’m running CentOS 4.4, and it’s a standard web server hosting many different sites (it’s not a dedicated MySQL server). The server has SATA disk drives, and it just starts spawning tons of MySQL processes every X minutes. But the traffic is the same all the time, so it looks strange to me - the same number of visitors, but sometimes the load goes high and sometimes it doesn’t.
I’ll tweak my.cnf according to your recommendations now.
UPDATE: The server is still going crazy. I watched it for about an hour and it was OK - many Apache requests, normal load. Then it suddenly started spawning hundreds of MySQL processes (like you saw in my previous attachment) and was overloaded, while the number of Apache requests stayed the same.
Tried that already, without success. The log file displays all these queries (shown in the 1st screenshot). Of course, it doesn’t happen all the time; only every X minutes (when the load skyrockets) are all queries marked as slow.
Do you have any cron jobs, or does the application have any re-indexing jobs that it runs regularly?
And when you get it the next time, run the SHOW PROCESSLIST command and read through all the rows. Or better yet, post them here as an attachment too. And please make it a text file, since that is a much easier format to filter.
Because you have something that is taking a lot of CPU, and it isn’t the sheer number of processes - since most of them were waiting to open a table, I think that is just a secondary effect: they can’t open the table they want, so the MySQL threads are delayed, and that is when they start to pile up.
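For capturing that, the FULL variant is worth using, since plain SHOW PROCESSLIST truncates each query to its first 100 characters:

```sql
-- Run this while the load spike is actually happening, so the captured
-- statements are the ones causing the pile-up:
SHOW FULL PROCESSLIST;
```

From the shell, something like `mysql -u root -p -e 'SHOW FULL PROCESSLIST' > processlist.txt` (credentials here are placeholders) writes it straight to a text file for attaching.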
Yes, there are some cron jobs, but they just insert some rows (up to 100 rows per run, usually about 20-50) into the database and exit. I’ll check these cron jobs, but they are really simple - a few lines of code that insert several rows of data.
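One thing worth checking even with simple cron jobs: MyISAM takes a full table lock for every write, so if those INSERTs collide with the busy SELECTs, every reader queues behind the writer. A hedged mitigation sketch (column values here are made up for illustration):

```sql
-- 1. Let the cron inserts yield to readers instead of blocking them:
INSERT LOW_PRIORITY INTO keywords (keyword, published)
VALUES ('example keyword', 0);

-- 2. Batch many rows into one statement so the table lock is taken
--    once per run instead of once per row:
INSERT INTO keywords (keyword, published) VALUES
  ('keyword one', 0),
  ('keyword two', 0),
  ('keyword three', 0);
```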