How to purge binlogs on replica nodes?

Hello, I have purged binlogs on the pxc-0 node by running PURGE BINARY LOGS TO 'binlog.000180'; since they were growing beyond 100 GB. It was successful on the first node, but the replicated binlogs remain on the other 2 Kubernetes nodes, and it seems they won’t be removed automatically there either. How can I safely purge those as well?

Hello @Slavisa_Milojkovic ,

PURGE BINARY LOGS is not written to the binary log itself, and hence its effect is not replicated to the other nodes.
To purge binary logs automatically, consider the binlog_expire_logs_seconds configuration option (or expire_logs_days, which is deprecated).
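As a sketch of the manual cleanup, assuming the replica pods follow the usual pxc-1/pxc-2 naming (the thread does not state their names), you would connect to each remaining node and run the same statement there:

```sql
-- Run on each remaining node (e.g. pxc-1 and pxc-2) individually;
-- PURGE BINARY LOGS only affects the node it is executed on.
PURGE BINARY LOGS TO 'binlog.000180';

-- Check which binlog files are left afterwards:
SHOW BINARY LOGS;
```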



Thanks, I’ll test with:

set global binlog_expire_logs_seconds = 360;

Be careful with that. 360 s is only 6 minutes! If your replicas are not caught up, or are stuck on replication errors, you will end up missing binary logs and will need to restore.
I have usually seen 7 days of retention as a common practice, but of course it depends on the disk size available.
Also, you can compress your binlogs to save disk space.
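For the 7-day retention mentioned above, a minimal sketch (assuming MySQL 8.0 syntax; run it on every node):

```sql
-- Keep roughly 7 days of binlogs: 7 * 24 * 3600 = 604800 seconds.
SET GLOBAL binlog_expire_logs_seconds = 604800;

-- SET PERSIST (MySQL 8.0+) also saves the value to mysqld-auto.cnf,
-- so it survives a server restart:
SET PERSIST binlog_expire_logs_seconds = 604800;
```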



I meant 3600 sec, an hour, to test it, but I made a typo. This is not a production setup and we already have backup dumps, but for some reason the binlogs are filling up pretty quickly, within a couple of days. I will of course change the period to a week after testing. But it seems it again cleared only the pxc-0 node; the replicas are not removing old binlogs. After what time will the replicas start purging the logs?


Hi Slavisa,

This configuration applies only to that particular MySQL instance. You need to configure it on each individual PXC node.



Thank you for your answer and clarification.

Hi Slavisa_Milojkovic,

One last comment.

Binlogs won’t be removed automatically the moment they reach binlog_expire_logs_seconds; instead, whenever the binary log is rotated, binlogs older than binlog_expire_logs_seconds are removed.

Binary logs are flushed (and a new one generated) under three conditions:

  • a server restart
  • a FLUSH LOGS (or FLUSH BINARY LOGS) statement being executed
  • the current binary log reaching max_binlog_size and being rotated
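So, to apply the new expiry value right away without restarting the server, you can force a rotation manually; a sketch:

```sql
-- Force a binlog rotation; on rotation, files older than
-- binlog_expire_logs_seconds are purged.
FLUSH BINARY LOGS;

-- Verify which binlog files remain:
SHOW BINARY LOGS;
```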