Xtrabackup crashes when the redo log size exceeds 128G

Using xtrabackup version 8.0.33 with MySQL 8.0.32. When backing up a MySQL instance with a very large dataset, the redo log generated during the backup can be extremely large, exceeding 128GB. In that case, running the xtrabackup prepare step may crash.

We analyzed the cause as follows: during the prepare step, xtrabackup initializes the log using the size of the existing log files (plus srv_redo_log_capacity) as m_target_physical_capacity. When it later calculates the size of each file, it divides m_target_physical_capacity by LOG_N_FILES and asserts ut_a(file_size <= LOG_FILE_MAX_SIZE). Therefore, if the log files of the backup exceed 32 * 4G = 128G in total, the assertion fails and prepare crashes.

The crash log is as follows:

134069 2025-08-29T17:02:52.687529+08:00 0 [Note] [MY-012550] [InnoDB] Doing recovery: scanned up to log sequence number 1271987545088
134070 2025-08-29T17:02:52.953033+08:00 0 [Note] [MY-012550] [InnoDB] Doing recovery: scanned up to log sequence number 1271991439639
134071 existing_files_size: 158771838976 m_target_physical_capacity: 159845580800 srv_redo_log_capacity 1073741824physical_capacity: 159845580800 file_size:4995170304 LOG_FILE_MAX_SIZE: 4294967296
134072 InnoDB: Assertion failure: log0files_capacity.cc:560:file_size <= LOG_FILE_MAX_SIZE
134073 InnoDB: thread 139931550133120InnoDB: We intentionally generate a memory trap.
134074 InnoDB: Submit a detailed bug report to https://jira.percona.com/projects/PXB.
134075 InnoDB: If you get repeated assertion failures or crashes, even
134076 InnoDB: immediately after the mysqld startup, there may be
134077 InnoDB: corruption in the InnoDB tablespace. Please refer to
134078 InnoDB: http://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html
134079 InnoDB: about forcing recovery.
134080 2025-08-29T09:02:52Z UTC - mysqld got signal 6 ;
134081 Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
134082 BuildID[sha1]=
134083 Thread pointer: 0x562fda0
134084 Attempting backtrace. You can use the following information to find out

The relevant code is as follows:

Log_files_capacity::initialize
  ...
#ifndef XTRABACKUP
  while (min_t / 1024 * 1024UL < max_t / 1024 * 1024UL) {
    m_target_physical_capacity =
        ut_uint64_align_down((min_t + max_t) / 2, 1024 * 1024UL);

    if (is_target_reached_for_resizing_down(files, current_logical_size)) {
      max_t = m_target_physical_capacity;
    } else {
      min_t = m_target_physical_capacity + 1024 * 1024UL;
    }
  }
#else
  if (recv_recovery_is_on()) {
    /** PXB: the current file size is the current capacity, and we add extra
    srv_redo_log_capacity (1G by default) for redo generated by dynamic
    metadata. This redo will not fit into the already full ib_redo0.
    Hence we need additional capacity to write redo before the checkpoint
    is issued. */
    uint64_t existing_files_size = log_files_size_of_existing_files(files);
    m_target_physical_capacity = m_current_physical_capacity =
        existing_files_size + srv_redo_log_capacity;
  } else {
    m_target_physical_capacity = m_current_physical_capacity =
        srv_redo_log_capacity;
  }
#endif /* !XTRABACKUP */


os_offset_t Log_files_capacity::next_file_size(os_offset_t physical_capacity) {
  const auto file_size =
      ut_uint64_align_down(physical_capacity / LOG_N_FILES, UNIV_PAGE_SIZE);
  ut_a(LOG_FILE_MIN_SIZE <= file_size);
  ut_a(file_size <= LOG_FILE_MAX_SIZE);
  ut_a(file_size % UNIV_PAGE_SIZE == 0);
  return file_size;
}

already fixed: