MySQL partitioning can handle that automatically for you, but it is quite inflexible: there is a hard limit on partitions per table (1024 in older versions, 8192 as of MySQL 5.6.7), so it is not recommended if you plan to add one partition every day indefinitely without also dropping old ones. I don't know if you were thinking about the MERGE engine, but if you were, don't — it only works with MyISAM and is effectively obsolete.
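As a sketch of what daily RANGE partitioning looks like (table and column names here are illustrative, not from your schema), including the daily maintenance you would have to script yourself:

```sql
-- Hypothetical log table partitioned by day.
-- Note: the partitioning column must be part of every unique key.
CREATE TABLE events (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    created_at DATETIME NOT NULL,
    payload    VARCHAR(255),
    PRIMARY KEY (id, created_at)
)
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p20150101 VALUES LESS THAN (TO_DAYS('2015-01-02')),
    PARTITION p20150102 VALUES LESS THAN (TO_DAYS('2015-01-03')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- Daily maintenance: carve tomorrow's partition out of pmax,
-- and drop the oldest day (dropping a partition is nearly instant).
ALTER TABLE events REORGANIZE PARTITION pmax INTO (
    PARTITION p20150103 VALUES LESS THAN (TO_DAYS('2015-01-04')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
);
ALTER TABLE events DROP PARTITION p20150101;
```

`DROP PARTITION` is the big win here — purging a day is a metadata operation instead of a huge `DELETE` — but the add/drop rotation above is exactly the part you have to automate and monitor yourself.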
pt-archiver, from Percona Toolkit, is a small utility for safely archiving rows (in your case, to other tables), but all the data-routing logic still has to live in your application code.
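A typical invocation looks something like this (host, database, and table names are placeholders; the retention window is an assumption):

```bash
# Move rows older than 30 days into an archive table, 1000 at a time,
# committing after each chunk to keep transactions small.
# pt-archiver deletes the source rows as it copies them.
pt-archiver \
  --source h=localhost,D=mydb,t=events \
  --dest   h=localhost,D=mydb_archive,t=events \
  --where  "created_at < NOW() - INTERVAL 30 DAY" \
  --limit 1000 --commit-each --statistics
```

The chunked copy-and-delete keeps locking and replication lag low, which is the main reason to use it instead of a hand-rolled `INSERT ... SELECT` plus `DELETE`.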
There are solutions for transparent multi-server querying (sharding) that can be useful even on a single server: Shard-Query and the SPIDER storage engine. Those are my suggestions. Sometimes, implementing your own specific way of archiving and querying multiple tables is perfectly fine. In any case, optimize your data types and keep your indexes as small as possible to keep things manageable.
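The do-it-yourself route usually means one table per day plus a view over the days you still query frequently (a minimal sketch; all names are illustrative):

```sql
-- One table per day, all cloned from a common template table.
CREATE TABLE events_20150101 LIKE events_template;
CREATE TABLE events_20150102 LIKE events_template;

-- A view unioning the "hot" days; archiving a day simply means
-- recreating the view without it. UNION ALL avoids the duplicate
-- elimination cost of plain UNION.
CREATE OR REPLACE VIEW events_recent AS
          SELECT * FROM events_20150102
UNION ALL SELECT * FROM events_20150101;
```

The trade-off is that you own the rotation logic (creating tomorrow's table, rebuilding the view), but you get full control over which tables exist and which are queried.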
If your data grows beyond what MySQL's standard engines can handle efficiently (2 GB/day doesn't seem to be there yet), you can complement them with engines designed for big data, such as the TokuDB engine, or with external tools for indexing and analytics.