We’re interested in identifying indexes that are (relatively) unused and are candidates for being dropped from a MySQL schema with 100+ tables, in order to improve performance. The project has 1M+ LoC. We’d like advice on which tools and approaches would help with this.

Our current thinking: turn on the general query log for a few days on a few suitable production instances that are resourced to absorb the extra I/O. From the logs, we would then work out which indexes are being used in the query plans, collate that by index in some way, and identify indexes in the schema that are absent from the collated results.

Is there a tool that can do this for us directly? What about producing query plans for the queries in the log? Perhaps a script that feeds each query into EXPLAIN, formats the output to highlight the indexes used, and stores the result in a table that could be queried to determine frequency of use. The information schema for the database could then provide a list of all the indexes in the database, which we could use to identify the ones missing from the generated table of actually-used indexes.

A different approach would be to go through the schema manually, identify indexes we suspect are candidates for being dropped, and restrict the work above to concentrate just on them.

Suggestions and feedback would be appreciated.
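To make the EXPLAIN-collation idea concrete, here is a minimal sketch of the kind of script we have in mind. It assumes each logged query has already been run through `EXPLAIN FORMAT=JSON` and the JSON plan captured; the sample plan fragment and the `used_indexes` helper are illustrative, not an existing tool:

```python
from collections import Counter

def used_indexes(plan):
    """Recursively walk an EXPLAIN FORMAT=JSON plan and collect the
    (table, index) pairs the optimizer actually chose (the 'key' fields)."""
    found = []

    def walk(node):
        if isinstance(node, dict):
            if node.get("key"):
                found.append((node.get("table_name", "?"), node["key"]))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(plan)
    return found

# Hypothetical captured plan fragment for one query from the log:
plan = {
    "query_block": {
        "table": {
            "table_name": "orders",
            "access_type": "ref",
            "key": "idx_customer_id",
        }
    }
}

# Tally index usage across all logged queries (one shown here).
usage = Counter()
for table, index in used_indexes(plan):
    usage[(table, index)] += 1
print(usage)  # Counter({('orders', 'idx_customer_id'): 1})
```

In practice the tallies would be inserted into a scratch table rather than kept in memory, so they can be queried for frequency of use later.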
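The final diff step we have in mind would look roughly like this. The full index list would come from the information schema (e.g. `SELECT DISTINCT table_name, index_name FROM information_schema.statistics WHERE table_schema = 'our_db'`); the rows below are made-up placeholders, and `idx_legacy_status` is a hypothetical suspect index:

```python
# All indexes in the schema, as pulled from information_schema.statistics
# (sample rows only — these would be fetched from the database).
all_indexes = {
    ("orders", "PRIMARY"),
    ("orders", "idx_customer_id"),
    ("orders", "idx_legacy_status"),  # hypothetical drop candidate
}

# Indexes observed at least once in the collated EXPLAIN output.
used = {
    ("orders", "PRIMARY"),
    ("orders", "idx_customer_id"),
}

# Indexes never seen in any plan during the logging window.
candidates = sorted(all_indexes - used)
print(candidates)  # [('orders', 'idx_legacy_status')]
```

One caveat we are aware of: an index absent from a few days of logs is not necessarily unused (month-end jobs, rare admin queries), so the output would be a candidate list for review, not a drop list.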