Hi,
I am in the process of optimizing the schema for my guestbook table.
Here is the situation:
CREATE TABLE `guestbook00` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `userId` int(10) unsigned NOT NULL default '0',
  `timestamp` int(10) unsigned default NULL,
  `text` text NOT NULL,
  PRIMARY KEY (`userId`, `id`),
  KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
- ~400,000,000 guestbook entries
- ~3,000,000 inserts/day
- 1,000 guestbook tables
- primary key: (userId, id) → FAST: SELECT * FROM guestbook00 WHERE userId = x ORDER BY id DESC
And now my questions:
If I'm not mistaken, every new insert causes extra I/O load because the clustered primary key starts with userId, so new rows are scattered across the index instead of simply being appended at the end. Is that correct?
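For illustration, a typical insert looks roughly like this (the userId and text values are made up); since the clustered primary key begins with userId, the row is placed into that user's section of the index rather than at the end of the table:

  -- hypothetical values, only to illustrate the insert pattern
  INSERT INTO `guestbook00` (`userId`, `timestamp`, `text`)
  VALUES (4711, UNIX_TIMESTAMP(), 'example guestbook entry');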
We have users with more than 100,000 entries. When they view their guestbook at a high offset (e.g. LIMIT 100000,20), the query is very slow. Any suggestions?
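To make the problem concrete, the slow query looks roughly like this (the userId value is made up):

  -- high-offset pagination query; the offset/userId are example values
  SELECT *
  FROM guestbook00
  WHERE userId = 4711
  ORDER BY id DESC
  LIMIT 100000, 20;

With an offset like this, MySQL still has to walk past the first 100,000 index entries before it can return the 20 requested rows, which is where the time goes.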
Does it actually make sense to split all the data into 1,000 tables?
Do you have any ideas for a better index layout?
Thanks in advance,
chris