MySQL Performance Concerns

I run a semi-high traffic website that we hope to grow significantly in the future. I use a MySQL database for two main purposes:

1. Store data about real estate listings from several remote databases.
2. Record each hit to each page, along with some relevant information about it.

I am not too concerned about the listing tables. Each data source we pull from has its own MyISAM table, and a MERGE table spans them all; the PHP scripts query that MERGE table when searching for listings and/or building search results. Currently there are only about 1,600 rows in the MERGE table, but that could be as high as 7,000 in the coming months as we add new data sources.
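For anyone unfamiliar with that setup, it looks roughly like this (the table and column names here are made up for illustration, not our actual schema):

CREATE TABLE listings_source_a (
    listing_id INT NOT NULL,
    address VARCHAR(255),
    price DECIMAL(10,2),
    KEY (listing_id)
) ENGINE=MyISAM;

-- one identically-defined MyISAM table per data source
CREATE TABLE listings_source_b LIKE listings_source_a;

-- the MERGE table the PHP search scripts actually query;
-- all underlying tables must have identical definitions
CREATE TABLE listings_all (
    listing_id INT NOT NULL,
    address VARCHAR(255),
    price DECIMAL(10,2),
    KEY (listing_id)
) ENGINE=MERGE UNION=(listings_source_a, listings_source_b);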

My concern is with the hit counter table. The site went live March 1st of this year, and we are already at about 12,000 rows. The table has only 6 columns plus an AUTO_INCREMENTing primary key and index. I am worried that the hit counter table will eventually become too large and cause problems. I have Googled around looking for some guide to MySQL's safe limits and guidelines, but I have found no definitive answer for my situation. Each time a page is loaded, a new row is inserted into the hit counter table; that is the only interaction a typical user has with it. However, I am also working on a PHP-based statistics program that will scan the entire table and report statistics to me and a few other people involved in the operation of this site.
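To give a concrete picture, the table is along these lines (the column names here are illustrative, not the exact schema):

CREATE TABLE hit_counter (
    hit_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    page VARCHAR(255),
    ip VARCHAR(15),
    referer VARCHAR(255),
    user_agent VARCHAR(255),
    session_id VARCHAR(32),
    hit_time DATETIME,
    PRIMARY KEY (hit_id)
) ENGINE=MyISAM;

-- each page load does a single insert, e.g.:
INSERT INTO hit_counter (page, ip, referer, user_agent, session_id, hit_time)
VALUES ('/listings/123', '10.0.0.1', 'http://example.com/', 'Mozilla/4.0', 'abc123', NOW());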

I guess my primary questions/worries are:

1. Will there be a point where there is just too much data for MySQL/PHP to handle?
2. Will there be a point when simply inserting a row into this ever-growing table on each page load causes long processing lags or strain on system resources?
3. Am I just being paranoid and underestimating how powerful MySQL really is? Before this site, the largest MySQL table I had ever seen was 800 rows.

Thanks in advance to anyone who can offer input. Please feel free to respond via this forum, private message, or email to andrew{at}aregner{dot}com.

You certainly seem to be frightened about nothing, IMO. :)

12,000 rows is a tiny table relative to what MySQL is capable of.

There is a point where the MAX_ROWS limit could be reached; to future-proof your tables, run:

ALTER TABLE table_name MAX_ROWS = 99999999999999999999999999999;

This will automatically adjust down to the limit on your machine (so the same statement works regardless of architecture).

On 32-bit: (2^32) - 1 = 4,294,967,295 rows (4+ billion!)

On 64-bit: (2^64) - 1 = 18,446,744,073,709,551,615 rows (a ridiculous number)
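If you want to verify what limit actually took effect after the ALTER, SHOW TABLE STATUS reports it; the Max_data_length column shows the size ceiling MySQL settled on, and Rows shows the current count (the table name here is just an example):

SHOW TABLE STATUS LIKE 'hit_counter';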

Posting your table schema will give us a better idea of your setup.

Thanks for your input… I ran SHOW CREATE TABLE […] for the 3 main tables and attached the results to this message.

So, on my 32-bit system, the current limit is 4+ billion rows? And once I execute ALTER TABLE […] MAX_ROWS = n, the max will be n, no matter the CPU? There has to be a point somewhere in there where the 1.7 GHz Intel Celeron and 512 MB of RAM can't handle it all, doesn't there?

Point of useless trivia: 18,446,744,073,709,551,615 is 18 quintillion, 446 quadrillion, etc. Past a thousand, the names follow the common Latin prefixes with the -illion suffix: million, billion, trillion, quadrillion, quintillion, and so on…

The address space on a 32-bit CPU is 2^32, so it is limited to 4-odd billion.

The table schemas look OK; your “hit_counter” table should perform well on inserts, as it only has 1 index.

The typical solution for this problem is to store each day's hits in its own table (you can then use a MERGE table to query across all the data), as sketched below.
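As a rough sketch of that approach (the dates and table names are just for illustration):

-- one MyISAM table per day
CREATE TABLE hits_2004_06_01 (
    hit_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    page VARCHAR(255),
    hit_time DATETIME,
    PRIMARY KEY (hit_id)
) ENGINE=MyISAM;

CREATE TABLE hits_2004_06_02 LIKE hits_2004_06_01;

-- a MERGE table spanning the daily tables; note the plain KEY,
-- since a MERGE table cannot enforce uniqueness across its members
CREATE TABLE hits_all (
    hit_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    page VARCHAR(255),
    hit_time DATETIME,
    KEY (hit_id)
) ENGINE=MERGE UNION=(hits_2004_06_01, hits_2004_06_02) INSERT_METHOD=LAST;

-- when a new day starts, create its table and re-point the UNION
CREATE TABLE hits_2004_06_03 LIKE hits_2004_06_01;
ALTER TABLE hits_all UNION=(hits_2004_06_01, hits_2004_06_02, hits_2004_06_03);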

But your sizes are far from something you should be concerned about.