Best way to handle compressed (read-only) tables in replication?

I currently have a database server with just over 100 databases on it, and the number grows weekly. Except for one control database, all of the databases contain only compressed (read-only) MyISAM tables. Those databases are updated at most weekly, and most are updated only quarterly.

My understanding is that, with replication, the compressed tables will end up as uncompressed MyISAM tables on the slaves. That isn’t the worst thing in the world, but keeping them compressed gives me much better performance, and it lets me keep the varchar() columns in the table definitions (varying from database to database) without a performance hit. I suppose I could convert all of the tables to fixed-length rows instead, but compression would likely still produce smaller tables.

I can imagine replicating only the single control database, but since I only want writes going to the master copy of it, that replication would serve only reporting and backup, which is a relatively small load compared to the writes it takes.
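If I went that route, my understanding is that limiting replication to the control database is just a configuration setting; a minimal sketch, assuming the control database is literally named control:

    # In each slave's my.cnf (replicate-do-db on the slave, or
    # binlog-do-db on the master, restricts what is replicated):
    [mysqld]
    replicate-do-db = control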

So I’ve been thinking about just using scp to copy the database directories from the master to the slaves, and using MySQL’s replication only for the control database, for the backup benefits.

Any other ideas?

Why do you assume that you can’t store them as ARCHIVE tables on the slave?

Without having tested it myself, I would still say that you should have no problem with it, because replication and the storage engine operate on totally different levels.

Replication in MySQL basically works like this: the master keeps a binary log of the INSERT/UPDATE/DELETE statements being performed, and each slave stores the last position in that log that it has read.
So in the event of a replication failure, once the slave has re-established its connection with the master, it sends that position to the master, and the master sends all changes that have been performed after that position.
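A minimal sketch of those moving parts, using the standard statements (the host, credentials, and log coordinates are placeholders):

    # On the master: show the current binary log file and position.
    mysql -e 'SHOW MASTER STATUS;'

    # On the slave: point it at those coordinates and start reading.
    mysql -e "CHANGE MASTER TO
                MASTER_HOST='master.example.com',
                MASTER_USER='repl',
                MASTER_PASSWORD='secret',
                MASTER_LOG_FILE='mysql-bin.000042',
                MASTER_LOG_POS=12345;
              START SLAVE;"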

So test it and you will probably find that it works just fine (with the normal limitations of the ARCHIVE table type, of course).

The problem is that changes to the databases arrive by dropping and recreating the tables on the master (via a mysqldump file generated off a test server). Then the tables are compressed via myisampack, also on the master. So I’d assume that replication would handle the table dropping, creating, and inserting, but it wouldn’t compress the tables on the slaves, right?
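To make that concrete, here is roughly what the load step on the master looks like (the database name and paths are placeholders for what my script actually uses):

    # Replaying the dump issues DROP/CREATE/INSERT statements,
    # which replication can forward to the slaves.
    mysql mydb < /tmp/mydb_dump.sql

    # But myisampack operates directly on the table files, outside
    # the server, so this part never appears in the binary log.
    mysqladmin flush-tables
    myisampack /var/lib/mysql/mydb/*.MYI
    myisamchk -rq /var/lib/mysql/mydb/*.MYI
    mysqladmin flush-tables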

In addition to running replication, I’d have to have some script running on all of the slaves to compress the tables that arrived uncompressed, right? But the only way I’d know that they were complete and ready for compression would be to keep a checksum somewhere (probably in a table on the master?)…
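A minimal sketch of what I’m imagining on each slave (the control.pack_status table and its columns are made up for illustration):

    # Cron job: pack any database the master has flagged as fully
    # loaded in the (replicated) control database.
    for db in $(mysql -N -e \
        "SELECT db_name FROM control.pack_status WHERE loaded = 1"); do
        mysqladmin flush-tables
        myisampack "/var/lib/mysql/$db"/*.MYI
        myisamchk -rq "/var/lib/mysql/$db"/*.MYI
        mysqladmin flush-tables
    done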

So all of that makes me think it would be faster to scp the binary MyISAM files over after they’re loaded and packed (I can build it into the script that loads from test to production)… Right?
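Something like this, built into the end of the load script (the host name and paths are placeholders):

    DB=mydb
    # Close and flush the table files on the master; the tables are
    # read-only, so nothing should reopen them during the copy.
    mysqladmin flush-tables
    # MyISAM tables are just .frm/.MYD/.MYI files, so the packed
    # database directory can be copied wholesale.
    scp -r "/var/lib/mysql/$DB" slave1:/var/lib/mysql/
    # Make the slave's mysqld reopen the new table files.
    ssh slave1 mysqladmin flush-tables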