I have a MySQL server with multiple databases (MyISAM & InnoDB), but one InnoDB database is causing me major issues.
I have 71 tables using about 4.9G in data.
The machine is an Intel P4 single processor with 2G RAM (this used to run fine before I converted to InnoDB…)
I have a few tables that tend to be my problems, one of which has 19 million records, the others 11 and 6 million.
I believe I’ve tuned the system as best I can, but I still see SELECT COUNT(*) FROM history (the 19-million-record table) take 15+ minutes! Now if you consider that there are going to be lots of selects / updates on that table every minute, you can see part of my problem :)
Any and all suggestions welcome!
Also, on some of the other tables with millions of records, SELECT COUNT runs in under a minute. What should I expect for reads/updates/inserts in databases / tables of this size? Why was this fast before I switched to InnoDB?
David
[mysqld]
old_passwords = false # inserted by debconf
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /data/mysql
tmpdir = /tmp
language = /usr/share/mysql/english
#skip-external-locking
Well, for one thing, InnoDB doesn’t keep an exact count of the number of records in a table, so I believe a COUNT(*) results in a full table scan (it’s going to have to count the number of records in the primary key, but since that index is clustered, that’s probably going to be as intensive as a full table scan).
MyISAM does keep an exact count of the number of records in a table, so COUNT(*) should be instantaneous (or near instantaneous, at least).
One thing you might try is a COUNT(x), where x is a column covered by a non-primary-key index and that always has a value (i.e. is NOT NULL).
I have an InnoDB table with over 300 million records on a server with an Athlon and one gig of RAM, and this works fine on it.
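For example, something along these lines (assuming itemid is covered by a secondary index and is defined NOT NULL; substitute whichever indexed column fits your schema):

-- count via a NOT NULL column that has a secondary index;
-- InnoDB can scan that (smaller) index instead of the clustered primary key
select count(itemid) from history;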
Does it use the PRIMARY KEY to execute count(*)? (Check EXPLAIN.)
If your primary key is fragmented, it may be much faster to use another index to compute the count, which can be forced with a FORCE INDEX clause.
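For example (idx_itemid here is just a placeholder name; run SHOW INDEX FROM history to see which indexes actually exist):

-- check which index the optimizer picks for the count
explain select count(*) from history;

-- list the available indexes and their names
show index from history;

-- force a secondary index if the clustered primary key scan is the problem
select count(*) from history force index (idx_itemid);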
query:
select curr.nodeid, curr.tablename, curr.recordid
from node_cksum curr
left join node_cksum prev
  on curr.tablename = prev.tablename
  and curr.recordid = prev.recordid
  and curr.fieldname = prev.fieldname
  and curr.nodeid = prev.nodeid
  and curr.cksumtype <> prev.cksumtype
where prev.cksumid is null
  and curr.cksumtype = 0;
This table has almost 7 million records, and the above query takes about 2300-3000 seconds to complete each time it is called.
I didn’t write this code; in fact it comes from the Zabbix monitor. I’m just trying to figure out how I can tune the system to get reasonable query response times :)
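One thing I’m wondering about is whether a composite index covering the join columns would help, something like the statement below (completely untested on my side, the index name is just made up, and Zabbix may already create an equivalent index):

-- hypothetical composite index matching the self-join columns in the query above
alter table node_cksum
  add index idx_cksum_join (tablename, recordid, fieldname, nodeid);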
Also, I ran the following
select count(itemid) from history;
trying to count on an indexed column, and it’s still running after 15 minutes.