
Precise slow-query.log for use with playback

giomanda Contributor Inactive User Role Beginner
Hi everyone

I am quite new to databases, and so far I really like working on this "technology", so excuse me in advance if I ask something obvious.

What I wish to achieve is to replay the traffic load from one database server (live data) on another (dev). So far, what I have found out by researching is to:
1) Get a full backup
2) Capture the traffic with the slow-query option (long_query_time set to 0)
3) Replay the traffic with percona-playback
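Roughly sketched, those three steps look like this (host names, paths, and credentials below are placeholders, and the exact percona-playback flags may differ between versions, so check your manual):

```shell
# 1) Full backup of the live server (paths/credentials are placeholders)
xtrabackup --backup --target-dir=/backups/full --user=backup --password=secret

# 2) Capture all traffic: long_query_time=0 makes the slow log record every query
mysql -h live-host -e "SET GLOBAL long_query_time = 0; SET GLOBAL slow_query_log = ON;"
# ... let it run for the capture window, then:
mysql -h live-host -e "SET GLOBAL slow_query_log = OFF;"

# 3) Replay the captured log against the dev server
percona-playback --mysql-host=dev-host --mysql-username=root \
                 --query-log-file=/var/log/mysql/slow-query.log
```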

I have done the above and it all worked flawlessly, including the playback.

My problem is that, due to the database schema, a lot of queries fail or just hang, probably because some preceding queries are missing: there is a gap between the point the full backup covers and the point the slow-query logging starts.

Is there a way to take a full backup using xtrabackup and then produce a slow-query log whose first query is the exact query executed in real life right after the full backup finished?

For example, let's assume a full backup is taken where the state of the database includes the modifications made by the queries:
query-1-Insert
query-2-Insert
query-3-update

and the slow-query log should contain:
query-4-delete
query-5-update


I assume there is some sort of timestamp ordering of the queries executed in the database, right?

Thanks

Comments

  • scott.nemes MySQL Sage Current User Role Patron
    Hi giomanda;

    You could set up the dev MySQL instance as a slave of the production MySQL instance. Then run the slow query log on prod, and as soon as you stop the slow query log, stop replication on the dev slave server. You could create a short bash script to do both of those things at the same time to reduce any gap (there would potentially still be a gap depending on how busy your prod server is, but it would likely be minimal). Then you would have your dev server with the most recent data and a slow query log that corresponds to the same time frame.
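    A minimal sketch of such a script (host names are placeholders; this assumes the machine running it can reach both servers):

```shell
#!/bin/sh
# Fire both commands at (nearly) the same moment to minimize the gap:
# stop capturing on prod, stop applying replication events on dev.
mysql -h prod-host -e "SET GLOBAL slow_query_log = OFF;" &
mysql -h dev-host  -e "STOP SLAVE SQL_THREAD;" &
wait
```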

    Keep in mind that depending on your dev setup, running the same queries on dev might take a lot longer if the specs are not up to par with prod.

    -Scott
  • giomanda Contributor Inactive User Role Beginner

    Hi Scott

    Thanks for your reply

    I guess that would be the best approach in order to have precise timing between the backup and the slow-query log.

    Is there a way to find out, from a full backup (taken with xtrabackup), the last SQL statement included in that full backup?

    Thanks
  • scott.nemes MySQL Sage Current User Role Patron
    Hi giomanda;

    You could look at the xtrabackup_binlog_info file, which shows you the binlog file / position used for setting up slaves. You could then look in the binary logs themselves to find out what the statement is at that position, or you could replay newer binary logs from your master against your slave starting after that position, to manually catch it up to a more recent point from the master.
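    For illustration, xtrabackup_binlog_info is a one-line, tab-separated file; a minimal sketch of reading it (the sample contents below are made up, since a real backup writes this file itself):

```shell
# Sample contents for illustration only; xtrabackup writes the real file.
printf 'mysql-bin.000042\t1234\n' > xtrabackup_binlog_info

# The line is "<binlog file> <position> [<GTID set>]":
read -r BINLOG_FILE BINLOG_POS < xtrabackup_binlog_info
echo "backup is consistent up to $BINLOG_FILE position $BINLOG_POS"

# From there you can inspect the statement at that offset, or replay onward:
#   mysqlbinlog --start-position="$BINLOG_POS" "$BINLOG_FILE" | less
#   mysqlbinlog --start-position="$BINLOG_POS" mysql-bin.0000* | mysql -h dev-host
```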

    -Scott
  • giomanda Contributor Inactive User Role Beginner

    Scott, you are my hero! :-D
    Thanks a lot for your help. I am almost there; I do have one last question.

    When I play back the slow-query log, some queries take too much time, so I use a script to kill every query that takes more than 5 seconds to complete. However, when this happens, playback returns an error:

    query interrupted number of tries 0
    query interrupted number of tries 1
    query interrupted number of tries 2
    .
    .
    query interrupted number of tries 9

    This affects my traffic per second, so I was curious whether there is any parameter in percona-playback that can skip such retries. I looked through the manual but didn't find one.
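    For reference, the 5-second kill step above can also be done with Percona Toolkit's pt-kill instead of a custom script (flag names as in Percona Toolkit 2.x; the host is a placeholder):

```shell
# Every second, kill (the query of) any connection whose current
# query has been running for more than 5 seconds:
pt-kill --host dev-host --busy-time 5 --interval 1 --kill-query --print
```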

    If I don't use the kill script, I run into some queries that have, let's say, duplicate keys, where I get similar errors:

    Error during query: Duplicate entry 'xx1' for key 'PRIMARY', number of tries 0
    Error during query: Duplicate entry 'xx2' for key 'PRIMARY', number of tries 1
    Error during query: Duplicate entry 'xx3' for key 'PRIMARY', number of tries 2
    .
    .number of tries 8

    Is that the same thing? Could these retries be skipped?

    Once again, thanks for your valuable comments.
  • scott.nemes MySQL Sage Current User Role Patron
    Hi giomanda;

    Unfortunately I do not see any relevant options in the manual either, and do not use the tool myself, so I cannot say much about your final problem. Hopefully someone from the Percona dev team who works on these tools will stop by to give you an answer with more details.

    What you could do in the meantime is look at "SHOW ENGINE INNODB STATUS" while the slow queries are sitting there and see what they are waiting on. If you can identify the specific queries that are getting hung up, you could possibly remove them from your source file for the playback, but that would likely be fairly labor-intensive.
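    A minimal sketch of that check (the host name is a placeholder; the second query assumes MySQL 5.5+ with InnoDB):

```shell
# Dump the InnoDB monitor output while the queries are hung,
# then look at the TRANSACTIONS section for lock waits:
mysql -h dev-host -e "SHOW ENGINE INNODB STATUS\G" > innodb-status.txt

# The lock-wait tables are often easier to read than the monitor output:
mysql -h dev-host -e "SELECT * FROM information_schema.innodb_lock_waits\G"
```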

    -Scott
  • giomanda Contributor Inactive User Role Beginner
    Hey Scott


    With all your comments I managed to craft a slow-query.log file which can feed the database with traffic for about 30 minutes.

    Thanks once again.
