Huge MyISAM table and slow SQL queries

OS: Solaris 10
DB: MySQL 5.0.45
CPU: 2

NB!!! The database is installed inside a Solaris zone (maybe this is part of the problem).

Problem: After parsing NavisRadius logs, the data was placed into a single table; as a result we have 23 million records.

This query is very slow, taking about 5-10 minutes:

SELECT COUNT(*) FROM table1 tb1, table2 tb2
WHERE tb1.session_name = tb2.session_name
AND tb1.session_status = 'Start'
AND tb2.session_status = 'Stop'
AND tb1.access_ip LIKE '';

Table structure:

CREATE TABLE rad_records (
server_name varchar(50) NOT NULL default '',
service_name varchar(50) NOT NULL default '',
session_name varchar(50) NOT NULL default '',
datetime timestamp NOT NULL default '0000-00-00 00:00:00',
session_status varchar(50) NOT NULL default '',
access_ip varchar(50) default NULL,
login_name varchar(255) default NULL,
PRIMARY KEY (server_name,service_name,session_name,datetime,session_status),
KEY sessionname_sessionstatus_accessip (session_name,session_status,access_ip)
);

Database configuration file /etc/my.cnf was created based on support-files/my-huge.cnf.

Current main parameters:

key_buffer_size: 402653184
table_cache: 512


  1. Is there any opportunity to improve this situation?
  2. Is MySQL comfortable with tables of this size?

Thanks in advance.

My first tip is to run EXPLAIN on the query and either try to interpret the result yourself or post it here.
The output of EXPLAIN shows how the DBMS plans to execute the query.
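To see what EXPLAIN tells you about a missing index, here is a minimal sketch using Python's sqlite3 module purely as an illustration (assumption: the table below is a cut-down stand-in for rad_records, and MySQL's EXPLAIN output looks different from SQLite's EXPLAIN QUERY PLAN, but the full-table-scan symptom is the same idea):

```python
import sqlite3

# In-memory database standing in for the real server; trimmed-down
# stand-in for the rad_records table from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE rad_records (
        session_name   TEXT NOT NULL,
        session_status TEXT NOT NULL,
        access_ip      TEXT
    )
""")

# With no useful index, the self-join has to scan the whole table.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT COUNT(*)
    FROM rad_records tb1
    JOIN rad_records tb2 ON tb1.session_name = tb2.session_name
    WHERE tb1.session_status = 'Start' AND tb2.session_status = 'Stop'
""").fetchall()

for row in plan:
    print(row[3])  # plan detail text, e.g. a "SCAN" over tb1
```

A "SCAN" over a 23-million-row table is exactly what makes the query take minutes instead of seconds.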

Now to your query.
Since the table names in your query differ from those in the CREATE TABLE, I'm not entirely sure how you use it.
Are table1 and table2 the same table, i.e. are you doing a self join?

If so then you only need to create this index:

ALTER TABLE rad_records ADD INDEX rad_ix_accip_sesstat_sesname(access_ip, session_status, session_name);

Because in your current setup you don't have an index that finds the matching records in the first table.

Another suggestion is to use the more explicit INNER JOIN syntax. It makes it much easier to read the query.

SELECT COUNT(*)
FROM table1 tb1
INNER JOIN table2 tb2
  ON (tb1.session_name = tb2.session_name AND tb2.session_status = 'Stop')
WHERE tb1.session_status = 'Start'
  AND tb1.access_ip LIKE '';

Sorry guys, my mistake,

table1 and table2 are the same.

What is the difference between


KEY sessionname_sessionstatus_accessip (session_name,session_status,access_ip)

and your INDEX:

INDEX rad_ix_accip_sesstat_sesname(access_ip, session_status, session_name)

The difference is that the order of the columns is important.

Your index works well on the second table in the join, where you already know the session_name from the first table, AND'ed together with session_status.

BUT you didn't have any index where session_status or access_ip, or a combination of these, formed the leftmost columns.
So your query always had to perform a full table scan on the first table in the join and could only use an index for the second table.

But you also want to have an index making it possible for the DB to find the matching records in the first table in the join.
So you have two separate indexes involved in the join, one index for the first table and one index for the second table.

And then I threw in session_name as the last column in my index, so the DBMS can read that value directly from the index instead of having to jump to the row in the table, saving extra seek time.
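That last point is what's usually called a covering index: every column the query touches is in the index itself. A hedged sketch with sqlite3 again (SQLite conveniently labels this in its plan output; MySQL's EXPLAIN shows "Using index" instead, and I've used `=` rather than `LIKE` since LIKE can prevent index use):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE rad_records (
        session_name   TEXT NOT NULL,
        session_status TEXT NOT NULL,
        access_ip      TEXT
    )
""")

# The index suggested above: filter columns first, then session_name
# appended so the query can be answered from the index alone.
conn.execute("""
    CREATE INDEX rad_ix_accip_sesstat_sesname
    ON rad_records (access_ip, session_status, session_name)
""")

# All referenced columns live in the index, so no table row is touched.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT session_name FROM rad_records
    WHERE access_ip = '' AND session_status = 'Start'
""").fetchall()
print(plan[0][3])
```

SQLite reports "USING COVERING INDEX" in the plan detail, which is the index-only read described above.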

Thank you so much.

I will try it right away, though creating an index over 23 million records takes a long time.

What if I try to minimize the row length? I mean, converting VARCHAR columns to INT where possible, and moving repeated values such as server_name into separate lookup tables.

Will it improve my situation?

It depends on how long the average server_name is and how unique it is.

If server_name values are long and highly repetitive, you win.
But if they are mostly unique, the overhead of creating a new table and joining it with the original gets too big to gain anything.

And if you have a lot of inserts on this table, those inserts will also slow down, since you must first check whether the server name is already in the server_name table and then look up its id to use in the main table. That can slow things down a lot, especially on a log table, which tends to get many writes.
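To make that insert overhead concrete, here is a small sketch of the lookup-table approach (table and column names are my own illustration, not from the thread; sqlite3 again stands in for MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE server_names (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    );
    CREATE TABLE rad_records (
        server_id    INTEGER NOT NULL REFERENCES server_names(id),
        session_name TEXT NOT NULL
    );
""")

def insert_record(conn, server_name, session_name):
    """Every log insert now costs an extra SELECT (and sometimes an
    extra INSERT) on server_names -- the overhead discussed above."""
    row = conn.execute(
        "SELECT id FROM server_names WHERE name = ?", (server_name,)
    ).fetchone()
    if row is None:
        cur = conn.execute(
            "INSERT INTO server_names (name) VALUES (?)", (server_name,)
        )
        server_id = cur.lastrowid
    else:
        server_id = row[0]
    conn.execute(
        "INSERT INTO rad_records (server_id, session_name) VALUES (?, ?)",
        (server_id, session_name),
    )

insert_record(conn, "radius01", "sess-1")
insert_record(conn, "radius01", "sess-2")
```

Two record inserts produce only one lookup-table row, which is where the space saving comes from when names repeat a lot.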

Thanks sterin,
Your recommendations helped me. The speed improved significantly, from 10 minutes to 1 second.

[B]vpestovnikov wrote on Sun, 09 December 2007 05:03[/B]
Thanks sterin, Your recommendations helped me. The speed improved significantly, from 10 minutes to 1 second.
Good news! :)