What are the best ways to compare the structure and content of two huge database dump files, each almost 10 GB in size?
Options already considered:
1) I could load both databases, take the metadata, and compare the tables row by row, but that would take over a month. Not acceptable.
2) A normal file diff doesn’t work.
3) I may have to do this multiple times in a typical cycle. Checksums work well for small databases, but not for huge ones.
Appreciate your suggestions.
Why didn’t a normal file diff work?
Or did you use a GUI diff tool that tries to load the entire files into memory?
Otherwise, to compare the structure I would dump only the schema definition:
mysqldump … --no-data
Then use a normal diff.
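Concretely, the schema-only comparison could look like the sketch below. The database names and credentials are placeholders; the mysqldump calls need a live server, so they are shown as comments and the diff step is demonstrated on two tiny sample files so the sketch runs standalone:

```shell
# Schema-only dumps (placeholder names; requires a live MySQL server):
#   mysqldump --no-data --skip-comments --skip-dump-date -u user -p db1 > db1_schema.sql
#   mysqldump --no-data --skip-comments --skip-dump-date -u user -p db2 > db2_schema.sql
# --skip-comments and --skip-dump-date suppress timestamps and comment
# headers that would otherwise show up as spurious diff lines.

# The comparison itself is a plain diff; demonstrated on sample files:
cat > db1_schema.sql <<'EOF'
CREATE TABLE t1 (
  id INT NOT NULL,
  name VARCHAR(50)
);
EOF
cat > db2_schema.sql <<'EOF'
CREATE TABLE t1 (
  id BIGINT NOT NULL,
  name VARCHAR(50)
);
EOF
diff -u db1_schema.sql db2_schema.sql || true   # diff exits 1 when files differ
```

Schema dumps are tiny compared to the full 10 GB files, so any diff tool handles them easily.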
Once that is out of the way, if a single diff over the full dumps is still not feasible, I would break the problem down and diff the data for one table at a time, or something along those lines.
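To get per-table files you can either re-dump each table separately (`mysqldump db table`) or split an existing dump on mysqldump's `-- Table structure for table` marker comments. A sketch of the splitting approach, shown on a small synthetic dump so it runs standalone (table names are made up):

```shell
# Small synthetic dump standing in for the real 10 GB file:
cat > full_dump.sql <<'EOF'
-- Table structure for table `users`
CREATE TABLE `users` (id INT);
INSERT INTO `users` VALUES (1);
-- Table structure for table `orders`
CREATE TABLE `orders` (id INT);
INSERT INTO `orders` VALUES (7);
EOF

mkdir -p split
# Route each section of the dump into split/<table>.sql, keyed on the
# "-- Table structure for table `name`" marker that mysqldump emits.
awk '/^-- Table structure for table `/ {
       name = $0
       sub(/^.*table `/, "", name)   # drop everything up to the opening backtick
       sub(/`.*$/, "", name)         # drop the closing backtick and the rest
       out = "split/" name ".sql"
     }
     out { print > out }' full_dump.sql

ls split   # one .sql file per table; diff the matching pairs from the two dumps
```

Streaming with awk (or Perl) keeps memory flat regardless of dump size, and each per-table file is small enough for an ordinary diff.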
Either way, my solution would probably involve some Perl scripts or one-liners, since that is my tool of choice for quickly writing something that processes large amounts of text and/or data.
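As an example of the kind of one-liner that helps here: one common source of spurious diffs in schema dumps is the volatile `AUTO_INCREMENT=N` counter embedded in the CREATE TABLE options. A quick normalization pass (sed shown here; a Perl one-liner works the same way) strips it before diffing:

```shell
# Sample schema fragment with a volatile AUTO_INCREMENT table option:
cat > schema.sql <<'EOF'
CREATE TABLE `t` (
  `id` INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4711 DEFAULT CHARSET=utf8mb4;
EOF

# Strip the "AUTO_INCREMENT=N" table option (the column attribute, which
# has no "=N", is left untouched).
sed -E 's/ AUTO_INCREMENT=[0-9]+//' schema.sql > schema_normalized.sql
```

Run both schema dumps through the same normalization, then diff the normalized files.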