Speed up a SELECT query on a large SQL table.

Hi!

I have a very large SQL table (50 million rows). A simple SELECT query runs against it repeatedly and has been identified as a bottleneck.

Here is the detailed info:

  1. The table

CREATE TABLE userinfo (
uid int(11) NOT NULL auto_increment,
login varchar(45) NOT NULL,
domain_id int(11) NOT NULL,
fname varchar(40) default NULL,
lname varchar(20) default NULL,
address varchar(40) default NULL,
city varchar(20) default NULL,
state varchar(20) default NULL,
zip varchar(10) default NULL,
country varchar(10) default NULL,
sex char(1) default NULL,
phone varchar(20) default NULL,
PRIMARY KEY (uid),
UNIQUE KEY email (login,domain_id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

  2. The SELECT statement
    mysql> explain select uid from userinfo where login = 'name' and domain_id = 1;
    +----+-------------+----------+------+---------------+-------+---------+-------------+------+--------------------------+
    | id | select_type | table    | type | possible_keys | key   | key_len | ref         | rows | Extra                    |
    +----+-------------+----------+------+---------------+-------+---------+-------------+------+--------------------------+
    |  1 | SIMPLE      | userinfo | ref  | email         | email | 51      | const,const |    1 | Using where; Using index |
    +----+-------------+----------+------+---------------+-------+---------+-------------+------+--------------------------+
    1 row in set (0.00 sec)

We are inserting data into the table, and this SELECT is used to look up the uid for a given login + domain_id.
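
Spelled out, the per-row pattern looks roughly like this (a sketch; the literal 'name' / 1 values stand in for the real data):

SELECT uid FROM userinfo WHERE login = 'name' AND domain_id = 1;
-- if no row comes back, insert and fetch the new id:
INSERT INTO userinfo (login, domain_id) VALUES ('name', 1);
SELECT LAST_INSERT_ID();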

Increasing system parameters like innodb_buffer_pool_size doesn't help much; the cache hit rate is extremely low.

Is there any way to speed up the query, e.g. by optimizing the table or fine-tuning MySQL?

Thanks
-ll

If you just need the uid, perhaps try splitting the table:

CREATE TABLE userinfo (
uid int(11) NOT NULL auto_increment,
login varchar(45) NOT NULL,
domain_id int(11) NOT NULL,
PRIMARY KEY (uid),
UNIQUE KEY email (login,domain_id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE userdata (
uid int(11) NOT NULL,
fname varchar(40) default NULL,
lname varchar(20) default NULL,
address varchar(40) default NULL,
city varchar(20) default NULL,
state varchar(20) default NULL,
zip varchar(10) default NULL,
country varchar(10) default NULL,
sex char(1) default NULL,
phone varchar(20) default NULL,
PRIMARY KEY (uid)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
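
With that split, the hot lookup touches only the narrow userinfo table, and the full profile can still be fetched on demand with a join (a sketch using the column names above):

SELECT u.uid, d.fname, d.lname, d.city
FROM userinfo u
JOIN userdata d ON d.uid = u.uid
WHERE u.login = 'name' AND u.domain_id = 1;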

Second: check what the highest value of domain_id is; perhaps you can use a SMALLINT or even TINYINT instead of INT. Also make uid and domain_id UNSIGNED if you don't have any negative values.
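
A sketch of what that could look like (verify the actual range first, and note the ALTER rebuilds the table, which takes a while at 50 million rows):

SELECT MAX(domain_id) FROM userinfo;
-- if the maximum fits in 16 bits:
ALTER TABLE userinfo
MODIFY uid INT UNSIGNED NOT NULL AUTO_INCREMENT,
MODIFY domain_id SMALLINT UNSIGNED NOT NULL;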

Depending on how many domains you have, you could try splitting the data by domain_id using MERGE tables…

One table per domain, merged into one big MERGE table for “global” overall access… if needed.
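
A minimal sketch of that layout, assuming two hypothetical per-domain tables userinfo_dom1 and userinfo_dom2 (note the MERGE engine works only over MyISAM tables, so this means leaving InnoDB):

CREATE TABLE userinfo_dom1 (
uid int(11) NOT NULL,
login varchar(45) NOT NULL,
domain_id int(11) NOT NULL,
UNIQUE KEY email (login,domain_id)
) ENGINE=MyISAM;

-- one table like this per domain, then a MERGE table over all of them;
-- uniqueness is enforced only within each underlying table, not across the merge
CREATE TABLE userinfo_all (
uid int(11) NOT NULL,
login varchar(45) NOT NULL,
domain_id int(11) NOT NULL,
UNIQUE KEY email (login,domain_id)
) ENGINE=MERGE UNION=(userinfo_dom1,userinfo_dom2) INSERT_METHOD=LAST;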

You could also normalize usernames (limit special characters, store them in uppercase…) and use a binary collation for comparison…
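
For example (a sketch; latin1_bin matches the table's latin1 charset, and lookups then become byte-for-byte comparisons):

ALTER TABLE userinfo
MODIFY login VARCHAR(45) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL;
-- then always store and look up the normalized form:
INSERT INTO userinfo (login, domain_id) VALUES (UPPER('SomeName'), 1);
SELECT uid FROM userinfo WHERE login = UPPER('SomeName') AND domain_id = 1;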

That scares me, because I’m writing a social media application that will have a practically identical query.