Large table

Hi,

I have a script that creates between 60 and 2000 records a minute in a table with only 4 columns. Over time this table will get very large…

Is there any way I can build the database so this runs fast even with large amounts of data?

Richard

Partitioning and indexes. What’s the read load expected to be vs. the write load?
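For the write pattern you describe, range-partitioning by time is the usual approach: inserts always land in the newest partition, queries over a date range only touch the matching partitions, and old data can be dropped cheaply by dropping a partition. A minimal sketch, assuming MySQL/InnoDB; the table and column names (readings, sensor_id, value, recorded_at) are made up since the thread doesn't give the schema:

-- Hypothetical 4-column table, range-partitioned by month on the timestamp.
-- MySQL requires every unique key (including the PK) to contain the
-- partitioning column, hence the composite primary key.
CREATE TABLE readings (
    id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    sensor_id   INT UNSIGNED    NOT NULL,
    value       DECIMAL(10,2)   NOT NULL,
    recorded_at DATETIME        NOT NULL,
    PRIMARY KEY (id, recorded_at),
    KEY idx_sensor_time (sensor_id, recorded_at)  -- supports the read queries
)
PARTITION BY RANGE (TO_DAYS(recorded_at)) (
    PARTITION p2024_01 VALUES LESS THAN (TO_DAYS('2024-02-01')),
    PARTITION p2024_02 VALUES LESS THAN (TO_DAYS('2024-03-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);

You'd add a new partition ahead of each month (splitting pmax with REORGANIZE PARTITION) rather than letting everything pile into pmax.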

I'd imagine the read load will be a lot higher…

There will be only one script writing a batch of data once every minute… but multiple clients will be generating graphs and stats from that data throughout the day… See the sketch below for the kind of read query that layout helps with.
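That workload (one batch writer, many readers) fits the partitioned layout well: with a date range in the WHERE clause, MySQL prunes to the matching partitions, and the composite index covers the per-sensor filter. An illustrative stats query, again using the hypothetical schema above:

-- Daily average for one sensor over January; the recorded_at range
-- limits the scan to the p2024_01 partition, and idx_sensor_time
-- narrows it to the one sensor.
SELECT DATE(recorded_at) AS day,
       AVG(value)        AS avg_value
FROM readings
WHERE sensor_id = 42
  AND recorded_at >= '2024-01-01'
  AND recorded_at <  '2024-02-01'
GROUP BY DATE(recorded_at)
ORDER BY day;

If the graphs always aggregate the same way, pre-rolling the stats into a small summary table once a minute alongside the inserts would take most of the read load off the big table entirely.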