PITR branching without losing backups

Hey guys!

Facing a bit of a dilemma with MySQL backups.

To lose as little data as possible, the idea is that you take regular full + incremental/differential backups, and separately also back up the binlog with mysqlbinlog. Then when restoring, you restore the full backup and replay the binlogs on top.
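As a sketch, the replay step looks something like this (the file names, the stop time, and the connection details are all hypothetical):

```python
# Sketch: build the mysqlbinlog command that replays the binlogs taken
# after the full backup, stopping at the desired point in time.
# mysqlbinlog prints SQL up to (but not including) the first event at or
# after --stop-datetime; piping that into `mysql` applies it.

def replay_command(binlogs, stop_datetime):
    """Return the mysqlbinlog invocation for a point-in-time replay."""
    return ["mysqlbinlog", f"--stop-datetime={stop_datetime}"] + binlogs

cmd = replay_command(["binlog.000002", "binlog.000003"],
                     "2024-01-01 12:00:00")
print(" ".join(cmd), "| mysql -u root -p")
```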

Now picture this - I have the following backups in S3:

Time X - binlog.01
Time X + 1 - Full backup
Time X + 2 - binlog.02
Time X + 3 - binlog.03
But I don’t want to restore all the way to the end of the binlogs - say I want to restore to time X + 1.9.

This will work fine. However, when MySQL starts up, it’s going to start writing to the binlog again and create files with the same names - binlog.02 and binlog.03 - which now have different data from the ones stored in S3. If you say “don’t back it up because it exists in S3”, you lose data on the current branch. If you say “overwrite what’s in S3”, you lose the data of your old branch. If you decide to hash the file and add the hash to the filename, things get complicated - which binlogs should be used to restore which backup?
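For what it’s worth, the hash idea can be sketched in a few lines - the naming itself is easy, it’s the bookkeeping that gets complicated (this is a hypothetical scheme, not an existing tool):

```python
import hashlib

def branch_name(filename: str, content: bytes) -> str:
    """Append a short content hash so two branches' binlog.000002
    files get distinct S3 keys instead of overwriting each other."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    return f"{filename}.{digest}"

# Two "branches" that both produced a binlog.000002 with different events:
old = branch_name("binlog.000002", b"events from the original branch")
new = branch_name("binlog.000002", b"events written after the restore")
assert old != new  # neither upload clobbers the other...
# ...but nothing in the names says which chain of binlogs belongs to
# which full backup - you still need a separate manifest for that.
```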

PostgreSQL solves this issue really elegantly with the concept of “timelines”. Its WAL segment names embed a timeline ID - conceptually something like wal.{{timeline}}.{{number}} - and the timeline gets incremented every time the database is recovered to an earlier point. So in the scenario above, once you restore the data, the logs it generates would instead be called “binlog.02.01”, “binlog.02.02”… which you can back up to S3 and be assured you haven’t lost any data on any branch.
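The same bookkeeping could be mimicked on the MySQL side by a backup script that tracks a timeline number itself - a hypothetical sketch, not something MySQL does natively:

```python
# Hypothetical timeline-aware naming for binlog backups in S3.
# The server still writes binlog.000002; the *backup script* prefixes
# the S3 key with a timeline number that it bumps on every restore.

def backup_key(timeline: int, binlog: str) -> str:
    """S3 key for a binlog on a given timeline, e.g. timeline-02/binlog.000002."""
    return f"timeline-{timeline:02d}/{binlog}"

timeline = 1
keys = [backup_key(timeline, "binlog.000002")]
timeline += 1                      # a restore happened: new branch
keys.append(backup_key(timeline, "binlog.000002"))
print(keys)   # both branches' binlog.000002 now coexist in S3
```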

How do people solve this issue in MySQL? I’ve read a lot of resources on MySQL PITR and I honestly haven’t found anyone mentioning this problem.


P.S. Sorry if “Percona XtraBackup” isn’t the right category, I know it isn’t strictly to do with just XtraBackup but it seemed the most relevant category for backup related things.

Hi @dima.

The best practice when you have multiple servers is to give each instance its own identifier. This way each server’s binlog names will be unique.
You can do it by adjusting your my.cnf:
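Something like this (the prefix value below is just an example):

```ini
[mysqld]
# With no value, the binlog base name defaults to <hostname>-bin;
# any custom string can be given instead (this one is hypothetical).
log-bin = my-instance-01
```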


If you specify only log-bin, the prefix will default to your hostname. But this parameter accepts any string that you want to use as a prefix for your binlogs. More details at https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#option_mysqld_log-bin

Just to give an example:

log-bin = joker-abc1

This will produce the binary log files joker-abc1.000001, joker-abc1.000002, joker-abc1.000003, etc. As Marcelo says, leave it as just log-bin and it will use the hostname as the prefix.

Thanks for your responses and for your suggestions!

My issue isn’t with multiple servers though - the scenario above was with a single server in mind. If you restore the backup onto that server, then it will generate the same binlog names and overwrite what you have in S3 (or, if you skip the upload, you haven’t backed up the data for this branch).

This issue is described very succinctly in the PostgreSQL documentation for continuous archiving under the “Timelines” heading.

I could of course try to remember to update the log-bin variable in the MySQL config every time a restore happens, but then separate issues arise: which binlogs to use the next time a restore needs to happen, where to replay a particular binlog until, and which binlogs can be cleaned up.
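Concretely, the bookkeeping I’d have to maintain by hand looks something like this hypothetical manifest (none of this exists as MySQL tooling as far as I know):

```python
# Hypothetical manifest tracking binlog "branches" created by restores.
# Each entry records which log-bin prefix a branch writes under, which
# branch it forked from, and up to where the parent's logs were replayed.
manifest = {
    "branch-1": {"prefix": "binlog",          "parent": None,
                 "replayed_parent_until": None},
    "branch-2": {"prefix": "binlog-restore1", "parent": "branch-1",
                 "replayed_parent_until": "2024-01-01 12:00:00"},
}

def restore_chain(branch: str) -> list:
    """Walk parents back to the root: the order in which the full backup
    and each branch's binlogs must be replayed to rebuild this branch."""
    chain = []
    while branch is not None:
        chain.append(branch)
        branch = manifest[branch]["parent"]
    return list(reversed(chain))

print(restore_chain("branch-2"))
```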

Because PostgreSQL has timelines, tooling around backups (e.g. pgBackRest) can backup all branches/timelines and restore based on a timeline ID and it’s all self-contained inside that tool. I was hoping for similar options or tooling for MySQL.

Yes, that’s correct. I don’t see an issue here, so perhaps I’m not understanding. Once you restore the backup and begin writing to this server, all other binlogs you have backed up are now void (and therefore can be deleted), as their data no longer matches what is on the server.

But why do you still need this older branch of binlogs with older, now-out-of-date data?

:slight_smile: Which I think is why I’m confused here, because I too have never heard of anyone with an issue like this.