Archiving methodology

Hi,

Query response time is getting slower over time because of exponential data growth every day.

I would like to implement an archiving methodology that keeps 30 days of data in the main table and moves the rest to an archive.

If anybody has ideas, URLs, or docs on implementing archiving, please share.

Are you sure there is no other way of increasing performance?

We have already increased performance by adding RAM, a quad-core CPU, and RAID 5 with 15k disks; optimizing queries (slow query log analysis); partitioning; adding more slaves and sending SELECT queries to them; removing key constraints; and adding proper indexes — following everything we found at Percona, in the MySQL docs, and from our own experience.

For your information, our DB size is 500 GB with 10K QPS on the master and 5 QPS on the slaves.
We also have selective replication for specific tables.

Do you have any other performance tips?

Data growth alone cannot be a reason for slowness. If you have good indices such that old data pages are never read, archiving won't noticeably increase performance. If such good indices do not exist because the queries do not allow them, denormalization might help. I really see archiving as a last resort because it increases complexity.
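
To illustrate what "good indices" means here (the table and column names are hypothetical): if every hot query filters on an identifier plus a recent time range, a composite index that leads with those columns means only recent index and data pages are ever touched:

    -- Hypothetical schema: suppose most queries look like
    --   SELECT ... FROM events WHERE user_id = ? AND created_at > NOW() - INTERVAL 30 DAY;
    -- With this index, range scans stay on recent pages and old rows are never read:
    ALTER TABLE events ADD INDEX idx_user_created (user_id, created_at);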

The slow query log might miss some bad queries that you can find with Maatkit:
http://www.mysqlperformanceblog.com/2009/07/01/gathering-queries-from-a-server-with-maatkit-and-tcpdump/
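
For reference, the capture-and-digest pipeline from that article looks roughly like this (the interface name and packet count are assumptions; adjust for your server):

    # Capture MySQL traffic to a file, then digest it offline with Maatkit.
    tcpdump -i eth0 port 3306 -s 65535 -x -n -q -tttt -c 100000 > mysql.tcp.txt
    mk-query-digest --type tcpdump mysql.tcp.txt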

And you got a PM.

This is a very good article. The dump file grows exponentially in size, though, and running mk-query-digest on a big dump takes a lot of time.

Can I add more filters while dumping?

It should grow only linearly over time. Just keep the capture time low.

If you are not interested in response size, decrease the snaplen so that it is just sufficient to capture the queries in the incoming packets.
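
A minimal sketch of that (the snaplen of 384 bytes is an assumption; pick a value just large enough to hold your longest typical query):

    # -s sets the snaplen: each packet is truncated to this many bytes,
    # so large result sets no longer bloat the dump file.
    tcpdump -i eth0 port 3306 -s 384 -x -n -q -tttt -c 100000 > mysql.tcp.txt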

No reply to my PM?

Data archiving and purging is in my top 5 ways to keep an app working well over time. See mk-archiver for a tool that can help you with it.
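
As a sketch (the host, database, table, and column names are assumptions; check the mk-archiver documentation for your version's exact options), a nightly job to move rows older than 30 days into an archive table could look like:

    # Copy rows older than 30 days to the archive host, deleting them from
    # the source as it goes, in small committed batches to limit locking.
    mk-archiver \
      --source h=master1,D=mydb,t=events \
      --dest h=archive1,D=mydb_archive,t=events \
      --where "created_at < NOW() - INTERVAL 30 DAY" \
      --limit 1000 --commit-each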