Designing a very large database (15 TB)

Hello All,
I am in the process of building a 15 terabyte database on MySQL 5.1.
This is neither a true OLTP workload nor a data warehouse; we only need to run about 20 queries an hour.

The way I am planning to design this is:

  1. We’ll have 10 nodes, and each node will host 5-6 databases.
  2. The engine will be MyISAM, and we plan to use partitioning; tables will be partitioned on field A, which is the main searchable field.
  3. This will be a semi-normalized database with 10 tables.
  4. Data loading will be done with LOAD DATA INFILE. We have a complete .txt file about 3 GB in size, and we are writing a C parser that will split it into 10 files (one for each table).
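As a rough sketch of points 2 and 4, a partitioned MyISAM table in MySQL 5.1 plus the daily bulk load might look like this (the table name, columns, and file path below are made up for illustration; PARTITION BY KEY is used because, in 5.1, PARTITION BY HASH requires an integer expression):

```sql
-- Hypothetical table: names and layout are placeholders, not from the post.
CREATE TABLE events (
    field_a   VARCHAR(64) NOT NULL,   -- "field A", the main searchable field
    payload   TEXT,
    loaded_on DATE        NOT NULL,
    KEY idx_field_a (field_a)         -- index to support searches on field A
) ENGINE=MyISAM
PARTITION BY KEY (field_a)
PARTITIONS 16;

-- Daily bulk load from one of the 10 parser-generated files.
LOAD DATA INFILE '/data/feeds/events.txt'
INTO TABLE events
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(field_a, payload, loaded_on);
```

With KEY partitioning the server hashes field A itself, so a WHERE clause on that column can be pruned to a single partition.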

What considerations do we need to take into account to run such a huge database?

  • Search is the major criterion in this application.
  • Data loading: is this the right approach? (Data loading will be a daily affair.)
  • What changes are important in my.ini from a cache and memory point of view? All nodes have a shared SAN drive, and each node has at least 2 GB of memory.
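On the my.ini question: with MyISAM the key buffer is the main server-side cache to size. A starting point for a 2 GB node might look like the fragment below; the values are guesses to be tuned against your workload, not recommendations:

```ini
[mysqld]
# MyISAM index cache -- the most important cache for this engine.
key_buffer_size         = 512M
# Per-connection buffers for scans and sorts; kept modest because
# they are allocated per thread, not shared.
read_buffer_size        = 2M
sort_buffer_size        = 4M
# Speed up LOAD DATA INFILE into MyISAM tables.
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
```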

Any help would really make my life easier.


I have done optimization for reading, i.e. for SELECT queries.
The read buffer should be set to a reasonably high value so that your reads happen faster.
Regarding the query cache: if the same queries are fired repeatedly, set that value reasonably high as well.
Also make sure your system has enough RAM.
I am a junior myself, but read times can be significantly improved this way.
Take care, though, if you have other critical applications running on the same system, as increasing the buffer sizes also eats into the RAM.
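Assuming MySQL 5.1, the buffers mentioned above can be adjusted at runtime without a restart; the sizes here are purely illustrative:

```sql
-- Per-connection buffer used for sequential table scans.
SET GLOBAL read_buffer_size = 2 * 1024 * 1024;     -- 2 MB
-- Enable and size the query cache for repeated identical queries.
SET GLOBAL query_cache_type = 1;
SET GLOBAL query_cache_size = 128 * 1024 * 1024;   -- 128 MB
```

Note that the query cache only helps when the exact same query text is reissued and the underlying tables have not changed since it was cached.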

If you’re going to load data once and aren’t planning to modify it via MySQL, you should consider compressing/packing the MyISAM tables.
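Packing is done offline with the myisampack tool, followed by an index rebuild; a sketch, assuming a data directory path and table name that are made up here:

```
# Make sure the table is not being written to, then pack the data file.
myisampack /var/lib/mysql/mydb/events

# Rebuild the indexes so the packed (read-only) table is usable.
myisamchk -rq --sort-index /var/lib/mysql/mydb/events

# Tell the server to reopen its table handles.
mysql -e "FLUSH TABLES;"
```

The resulting table is read-only, which fits a load-once, query-many workload.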

Have you completed the implementation yet? How is the performance?

We are planning to implement something similar and would like to know MySQL performance for large databases.