When I tested the PITR service, I found that it becomes very slow when the binlogs are large.
I had set timeBetweenUploads to 1800s, and I can see there are two binlog files (1 GB and 800 MB).
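For reference, this is roughly where the setting lives in my custom resource (a sketch based on the Percona XtraDB Cluster Operator's cr.yaml; the storage name is a placeholder):

```yaml
backup:
  pitr:
    enabled: true
    storageName: s3-us-west      # placeholder; matches a storage defined under backup.storages
    timeBetweenUploads: 1800     # seconds between binlog uploads to the storage
```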
The log is here:
2022/05/27 02:45:43 run binlog collector
2022/05/27 02:45:45 Reading binlogs from pxc with hostname= mysql8-pxc-test-pxc-d-pxc-0.mysql8-pxc-test-pxc-d-pxc.rds.svc.cluster.local
2022/05/27 02:58:18 Starting to process binlog with name binlog.000004
2022/05/27 02:58:34 Successfully written binlog file binlog.000004 to s3 with name binlog_1653617848_7d6cff4163b46a2ed0d451a6398a6e71
2022/05/27 02:58:34 Starting to process binlog with name binlog.000005
2022/05/27 02:58:46 Successfully written binlog file binlog.000005 to s3 with name binlog_1653618192_32b772b956f7edd3023316e7a34aeb12
I also found that while the PITR service is syncing binlogs, my sysbench read/write test fails.
In theory, reducing timeBetweenUploads would help, but that would produce too many binlog files. How should I weigh this trade-off?
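To make the trade-off concrete, here is a back-of-envelope calculation I used (the write rate is a hypothetical number, roughly matching ~1 GB accumulating over an 1800 s interval): a shorter interval shrinks each file but multiplies the number of uploads per day.

```python
def pitr_tradeoff(write_rate_mb_s: float, interval_s: int) -> tuple[float, int]:
    """Approximate per-file binlog size (MB) and uploads per day
    for a given sustained write rate and upload interval."""
    file_size_mb = write_rate_mb_s * interval_s
    files_per_day = 86400 // interval_s  # seconds in a day / interval
    return file_size_mb, files_per_day

# ~0.6 MB/s sustained binlog writes (hypothetical, inferred from my workload)
for interval in (1800, 300, 60):
    size, count = pitr_tradeoff(0.6, interval)
    print(f"interval={interval:4d}s -> ~{size:6.0f} MB/file, {count:5d} files/day")
```

So at this write rate, dropping from 1800 s to 300 s cuts files to ~180 MB each at the cost of ~6x more uploads; the question is where the sweet spot is for the collector and for S3.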