Percona crash at ha_innobase_add_index

I have not been able to replicate this particular behavior with non-production load testing against the server. However, when adding the server into the slave pool, it consistently crashes with the same stack trace:

~]$ sudo /usr/bin/resolve_stack_dump -s ./symbols -n stack | c++filt
0x7a42a5 my_print_stacktrace + 53
0x67fd14 handle_fatal_signal + 1188
0x7f9617e244a0 _end + 383423648
0x82e570 ha_innobase_add_index::~ha_innobase_add_index() + 174368
0x7f9617e1c7f1 _end + 383391729
0x7f96170a2ccd _end + 369261773

This server has 1.2TB of data across 2000 databases, so dumping and restoring isn’t really a practical option to clean out any corruption that may have crept in. However, if that’s the only route, I’ll have to figure out a way to make that work.

I don’t have a support contract, but if someone could point me in the correct direction, I would appreciate it greatly.

Quick addition: I’m not actively adding indexes to the server. A number of indexes were added before the server was rolled out, so it’s possible one of the ADD INDEX statements did not complete along the way.

Also, when the server runs as a pure replication slave with no traffic aside from the replication thread, I see no crashes; they occur only when multiple clients (50+ at any one time) are reading from the server.

I’d like to be able to narrow down which DBs/tables are being accessed at the time of the crash, but all I can think of is enabling the general log and seeing where it cuts out, or having a daemon continually dump the processlist to a flat file. Any suggestions?
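The processlist-dumping daemon could be as simple as the sketch below: poll SHOW FULL PROCESSLIST on an interval and append timestamped rows to a flat file, so the last snapshot before a crash shows what was running. This assumes any Python DB-API driver (MySQLdb shown); the interval, output path, and connection details are placeholders:

```python
import time

def format_row(ts, row):
    """Flatten one SHOW FULL PROCESSLIST row into a tab-separated log line."""
    return "%s\t%s" % (ts, "\t".join("" if col is None else str(col) for col in row))

def poll_processlist(conn, out_path, interval=1.0):
    """Append a timestamped processlist snapshot to out_path every `interval` seconds."""
    while True:
        cur = conn.cursor()
        cur.execute("SHOW FULL PROCESSLIST")
        ts = time.strftime("%Y-%m-%d %H:%M:%S")
        with open(out_path, "a") as out:
            for row in cur.fetchall():
                out.write(format_row(ts, row) + "\n")
        cur.close()
        time.sleep(interval)

if __name__ == "__main__":
    import MySQLdb  # assumed driver; any DB-API connection works
    conn = MySQLdb.connect(host="localhost", user="monitor", passwd="...")
    poll_processlist(conn, "/var/log/mysql-processlist.log")
```

At a one-second interval the file grows quickly with 50+ clients, so logrotate (or a short retention window) would be worth adding.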