MySQL 8.4 - Disabling InnoDB Redo Log During Restore - Safe Practice?

I’m planning to restore a database in MySQL 8.4 using mysqldump or util.loadDump(). I’ve heard that disabling the InnoDB redo log with ALTER INSTANCE DISABLE INNODB REDO_LOG can significantly speed up the restore process. However, I’m unsure about the potential risks involved.

Can I disable it only during the restore?

This should only be done on a brand-new install of MySQL. You should not disable the redo log on a production machine or on any machine taking active traffic.

The potential risk is a failed import due to table or instance corruption if the server stops unexpectedly while redo logging is disabled, forcing you to start the restore over.
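For reference, the sequence I have in mind looks roughly like this (the statements require the INNODB_REDO_LOG_ENABLE privilege and exist since MySQL 8.0.21; treat it as a sketch and check the manual for your version):

-- disable redo logging for the duration of the import
mysql> ALTER INSTANCE DISABLE INNODB REDO_LOG;
-- confirm the current state
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_redo_log_enabled';
-- ... run the restore ...
-- re-enable redo logging before putting any real traffic on the instance
mysql> ALTER INSTANCE ENABLE INNODB REDO_LOG;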

I would recommend against mysqldump, as it is single threaded. If you really want to improve import speeds, use mydumper/myloader as both of these are multi-threaded and will be far superior to mysqldump in terms of speed.

Even better would be to use Percona XtraBackup (PXB), which is a physical backup tool. Physical backups are always faster than logical ones. You can back up an 8.0 instance with PXB, restore it to the new server, install MySQL 8.4, and start it on the 8.0 datadir; the data directory will auto-upgrade to 8.4.
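A rough sketch of that flow, with placeholder paths (adjust directories and options to your environment):

# take and prepare a physical backup on the 8.0 source
xtrabackup --backup --target-dir=/backups/full
xtrabackup --prepare --target-dir=/backups/full
# restore it into an empty datadir on the new server
xtrabackup --copy-back --target-dir=/backups/full
# install MySQL 8.4, point it at the restored datadir, and start it;
# the data dictionary is upgraded in place on first startup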


Hi Mateus,

I am using mysqlshell with util.loadDump()

It is a new instance on a modest server just to retrieve data that has already been purged. Without disabling the redo log, a restore with 24 threads took 23 hours; with the redo log disabled, it took 9 hours.

It will not be done in production. I would just like to know if I can disable it for this restore and then enable it again afterwards?

When I used MySQL 5.7, the same restore always took 9 hours, but in 8.4 it was much slower without disabling the redo log.

I can’t use XtraBackup because it is a Windows server.

The database size is 240 GB.

That took 23 hours / 9 hours?!!! That’s incredibly slow, even with the redo disabled.

240 GB = 245,760 MB
9 hrs = 32,400 seconds
245,760 MB / 32,400 s ≈ 7.58 MB/sec

Even the cheapest HDD should be doing, at minimum, 30MB/sec. You have something else very wrong with this system with such low write speeds.

I am using mysqlshell with util.loadDump()

Yes, I saw that. I’m recommending something better, faster, more feature-rich, and more modern than mysqldump: mydumper/myloader.

Disk: Seagate Exos X16, 16 TB, 3.5", SATA III 6 Gb/s, 256 MB cache, 7200 RPM

Normal file copying reaches 170 MB/s, but restores into MySQL on this HDD have always been slow, even with everything disabled.

The database has 356 files, since it has 6 tables partitioned per month.

Five partitions of one table are 16 GB each.

I tested again with 16 threads:

Loading DDL and Data from ‘C:/temp/data’ using 16 threads.

Opening dump…
Opening dump - done
Target is MySQL 8.4.3. Dump was produced from MySQL 8.4.3
Scanning metadata…
Scanning metadata - done
Checking for pre-existing objects…
Checking for pre-existing objects - done
Executing common preamble SQL…
Executing common preamble SQL - done
Executing DDL…
Executing DDL - done
Executing view DDL…
Executing view DDL - done
Loading data…
Starting data load
Building indexes…
Executing common postamble SQL…
Loading data - done
Building indexes - done
Executing common postamble SQL - done
334 chunks (783.34M rows, 127.10 GB) for 215 tables in 1 schemas were loaded in 21 hours 26 sec (avg throughput 1.68 MB/s, 10.36K rows/s)
224 DDL files were executed in 24 sec.
Data load duration: 21 hours 26 sec
2 indexes were built in 12 hours 26 min 47 sec.
Total duration: 21 hours 51 sec
0 warnings were reported during the load.

What is the size of your innodb buffer pool? You should be importing near as fast as your disks can go.
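If you’re not sure, you can check it quickly with something like:

mysql> SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;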

avg throughput 1.68 MB/s

My $35 raspberry pi goes faster than that.

Normal file copying reaches 170 MB/s, but restores into MySQL on this HDD have always been slow, even with everything disabled.

I fear you have other configuration issues with your MySQL that are causing this poor performance.

innodb_buffer_pool_size should be at minimum 80% of the system RAM. innodb_redo_log_capacity should be at minimum 1GB.
innodb_flush_log_at_trx_commit should be 0 during imports, and unless you require absolute ACID compliance, should be 0 during normal operations.
sync_binlog should be 500 or 1000, and not 1 (the default)
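Those flush/binlog settings are dynamic, so as a sketch you could switch them just for the import instead of editing my.cnf (innodb_redo_log_capacity is also dynamic as of 8.0.30; the values below are only examples):

mysql> SET GLOBAL innodb_flush_log_at_trx_commit = 0;
mysql> SET GLOBAL sync_binlog = 1000;
mysql> SET GLOBAL innodb_redo_log_capacity = 4294967296;  -- 4 GiB, example value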

I can’t say that this is a “Windows issue”, but you’re the first MySQL-on-Windows user I’ve come across in the past 6-8 months. The previous one was in the process of converting to Linux because they got better baseline performance just by switching OS.

The production system is Linux.

Only historical data that has already been purged from production is used by this client, on an Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10 GHz, Windows 10 Home, with 32 GB of RAM.

Here is my configuration:

default-storage-engine=INNODB
sync_binlog=0

sql-mode=""

skip-log-bin
lower_case_table_names=1
max_connections=151
table_open_cache=10000
temptable_max_ram=1G
tmp_table_size=152M
internal_tmp_mem_storage_engine=TempTable
myisam_max_sort_file_size=2146435072
myisam_sort_buffer_size=293M
key_buffer_size=24M
read_buffer_size=4M
read_rnd_buffer_size=4M

transaction_isolation=READ-COMMITTED

innodb_flush_log_at_trx_commit=0
innodb_log_buffer_size=64M
innodb_buffer_pool_size=24000M
innodb_redo_log_capacity=2000M
innodb_thread_concurrency=0
innodb_autoextend_increment=64
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=8000
innodb_old_blocks_time=1000
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
innodb_read_io_threads=32
innodb_write_io_threads=32
innodb_io_capacity=10000
innodb_io_capacity_max=20000
innodb_doublewrite=OFF

flush_time=0
join_buffer_size=4M
max_allowed_packet=1024M
max_connect_errors=100
open_files_limit=40000
sort_buffer_size=4M
binlog_row_event_max_size=8K
sync_source_info=10000
sync_relay_log=10000

These four parameters are wildly oversized for a single HDD.

innodb_read_io_threads=32
innodb_write_io_threads=32
innodb_io_capacity=10000
innodb_io_capacity_max=20000
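For a single 7200 RPM SATA drive, something much closer to the traditional defaults would be more realistic; illustrative values only:

innodb_read_io_threads=4
innodb_write_io_threads=4
innodb_io_capacity=200
innodb_io_capacity_max=400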

The rest of your params all follow best practices, except for innodb_doublewrite=OFF. This tells me again that you’ve got some other, more systemic issue at hand if you’re only writing at 1.68 MB/sec.

I would really look into mydumper/myloader. It’s quite possible the issue is with mysqlsh utility.
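For the restore side, a minimal myloader invocation would look roughly like this (hypothetical user/path; check myloader --help for your version’s exact options):

myloader -u backup_user -a -d /backup -t 8 -o -v 2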


I ran a test of mydumper on a production replica running Linux, and the backup was very slow.

I compared it with xtrabackup and mysqlsh; see the results:

xtrabackup:

Options: --parallel=8 --compress --compress-threads=6 --compress-zstd-level=6 --slave-info --safe-slave-backup

full backup: 59 minutes
incremental: 47 minutes

mysqlsh:

Options: util.dumpInstance('/backup', { consistent: false, threads: 8, chunking: false, compatibility: ['strip_definers'] })

Full: 177 minutes

mydumper:

Options: -u -a -o /backup -t 8 --format LOAD_DATA -v 2 -c --no-locks --trx-consistency-only --source-data 1

More than 5 hours; I ended up interrupting the backup.

Ubuntu 24.04, 64 GB
MySQL 8.4.3

Database:
453 schemas
158,771 tables
1.5 TB

Are you watching diskIO/CPU while these backups are going? Is everything saturated?

xtrabackup
full backup: 59 minutes
incremental: 47 minutes

Why did your incremental take nearly the same amount of time as a full? Incremental should be fractional in duration, compared to full since it is only copying the deltas. Did you modify 1TB of data between full and incremental?

mysqlsh

Why do you have chunking disabled in your mysqlsh dump? The manual says that if you disable chunking, the load can only use one thread per table. You should get a good speed boost by enabling it, assuming your disks are not already saturated.
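For example, something like this keeps chunking on (it defaults to enabled) and lets the loader parallelize within large tables; bytesPerChunk here is just an illustrative value:

util.dumpInstance('/backup', { consistent: false, threads: 8, bytesPerChunk: "128M", compatibility: ["strip_definers"] })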

mydumper Options: -u -a -o /backup -t 8 --format LOAD_DATA
Why did you use the LOAD_DATA option?

Try the following:
mydumper -u -a -o /backup -t 8 -v 2 -c -r 0:1000:10000 --trx-consistency-only

I used that very command to backup a remote MySQL with 335GB of data in under 30 minutes. (4 vCPU/48GB). Extrapolating for your DB size, it should have taken about the same time as mysqlsh.

-- db names "sanitized"
mysql> SELECT LEFT(MD5(table_schema), 10) AS schemadb, SUM(ROUND((data_length)/1024/1024)) AS MBsize FROM information_schema.TABLES   WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys') GROUP BY table_schema WITH ROLLUP;
+------------+--------+
| schemadb   | MBsize |
+------------+--------+
| 2078b09cd2 |      0 |
| 1f089cb42c |  11238 |
| b322be2a72 |      0 |
| 687d1b7ea4 |    263 |
| 0eb537d5fe |      0 |
| 34b4b952e5 |   5881 |
| bc645b472a |      0 |
| 284512f981 |      0 |
| 45e4d1e2bd | 304444 |
| 49384eb8ba |      0 |
| d4d6ffbd7d |    813 |
| b14e5b84c2 |      0 |
| 3c4eff94f0 |      0 |
| a6e45e3a4b |      2 |
| 5c556b4eab |    215 |
| 2af72f100c |      0 |
| NULL       | 322856 |
+------------+--------+
17 rows in set (0.00 sec)

$ grep 2024 db1-history-database1/metadata
# Started dump at: 2024-08-22 16:46:20
# Finished dump at: 2024-08-22 17:13:07

Rounding to 30m, that’s 322856MB/1800s=180MB/sec, which seems on par with your mysqlsh. Your xtrabackup was doing 437MB/sec?

Are you watching diskIO/CPU while these backups are going? Is everything saturated?

CPU usage during the first 15 minutes of the backup reaches 100%.

Why did your incremental take nearly the same amount of time as a full? Incremental should be fractional in duration, compared to full since it is only copying the deltas. Did you modify 1TB of data between full and incremental?

I believe it is due to the sheer number of files, and because some old partitions get modified by application date errors, which forces those partitions to be copied again.

Rounding to 30m, that’s 322856MB/1800s=180MB/sec, which seems on par with your mysqlsh. Your xtrabackup was doing 437MB/sec?

534.99 MB/s

If your PXB is so vastly superior, why are you messing around with mysqlsh?

I need to restore in Windows, as I mentioned before.

With mysqlsh I can restore a specific schema or table, filter data, etc.

An error or bug may occur in PXB, so I always like to take backups with more than one tool.

Sorry, I forgot about that part.

Same with mydumper.

Your Linux-based MySQL is clearly able to handle the speed. What’s up with the disks on that Windows box? The CPU looks good enough; can the disks not handle that much write capacity? For the most part it should be sequential writes, since you’re inserting in primary-key order.

Try the following:
mydumper -u -a -o /backup -t 8 -v 2 -c -r 0:1000:10000 --trx-consistency-only

real 232m55.752s
user 540m16.915s
sys 71m24.583s

Your Linux-based MySQL is clearly able to handle the speed. What’s up with the disks on that Windows box? The CPU looks good enough; can the disks not handle that much write capacity? For the most part it should be sequential writes, since you’re inserting in primary-key order.

I couldn’t figure out why MySQL is slower on Windows. Restores started to slow down beginning with 5.7.11; something changed that I couldn’t identify.

Even starting the database on Windows takes 30 minutes vs. 50s on Linux with a 1.5TB DB with 15k tables.

I opened a ticket with MySQL and they said it wasn’t a bug.

They told me to use innodb_validate_tablespace_paths starting with version 8, and even following that it takes a long time, even on a dedicated SSD. I’ve tested it on storage with 80,000 IOPS, on a server with 24 cores and 128 GB of RAM, and it still takes 30 minutes to start.
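If I understood their suggestion correctly, the setting goes in my.cnf and disables the per-tablespace path validation at startup:

[mysqld]
innodb_validate_tablespace_paths = OFF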

MySQL should have a dump mechanism like Postgres’s COPY (which pg_dump uses). LOAD DATA INFILE doesn’t have the same performance.

The backup of the same database with 1.5 TB in Postgres is done in 30 min with restore in 3 hours.

MySQL’s “Sending data” phase is also very slow.

Oh, that’s ridiculous. I don’t really know anything about Windows, but it looks like there are some big performance regressions on that OS. I wonder how to do the equivalent of a Linux stack trace to see where Windows is struggling.

Hi @matthewb,

Is it possible to speed up the preparation process?

time: 237 minutes

xtrabackup --parallel=48 --use-memory=20G --prepare --apply-log-only --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01
xtrabackup --parallel=48 --use-memory=20G --prepare --apply-log-only --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01 --incremental-dir=/backup-mysql/bkp-xtrabackup/incr/2024-12-14_08-05-01/2024-12-14_16-05-02
xtrabackup --parallel=48 --use-memory=20G --prepare --apply-log-only --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01 --incremental-dir=/backup-mysql/bkp-xtrabackup/incr/2024-12-14_08-05-01/2024-12-15_00-05-01
xtrabackup --parallel=48 --use-memory=20G --prepare --apply-log-only --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01 --incremental-dir=/backup-mysql/bkp-xtrabackup/incr/2024-12-14_08-05-01/2024-12-15_08-05-02
xtrabackup --parallel=48 --use-memory=20G --prepare --apply-log-only --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01 --incremental-dir=/backup-mysql/bkp-xtrabackup/incr/2024-12-14_08-05-01/2024-12-15_16-05-01
xtrabackup --parallel=48 --use-memory=20G --prepare --apply-log-only --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01 --incremental-dir=/backup-mysql/bkp-xtrabackup/incr/2024-12-14_08-05-01/2024-12-16_00-05-02
xtrabackup --parallel=48 --use-memory=20G --prepare --apply-log-only --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01 --incremental-dir=/backup-mysql/bkp-xtrabackup/incr/2024-12-14_08-05-01/2024-12-16_08-05-01
xtrabackup --parallel=48 --use-memory=20G --prepare --apply-log-only --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01 --incremental-dir=/backup-mysql/bkp-xtrabackup/incr/2024-12-14_08-05-01/2024-12-16_16-05-01
xtrabackup --parallel=48 --use-memory=20G --prepare --target-dir=/backup-mysql/bkp-xtrabackup/base/2024-12-14_08-05-01 --incremental-dir=/backup-mysql/bkp-xtrabackup/incr/2024-12-14_08-05-01/2024-12-17_08-05-01

Not really. You’ve already got plenty of threads. If you have more spare RAM, increasing use-memory helps the most as that is the size of the buffer pool that PXB creates to manage page merges. You can try out the new smart memory feature of PXB, to see if that helps.
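If I remember the option names right (please verify against your PXB version’s docs, as I may be off here), the smart memory estimation flow looks something like this:

# during the backup, estimate how much memory prepare will need
xtrabackup --backup --estimate-memory=ON --target-dir=/backup/full
# during prepare, let PXB size its buffer from a percentage of free RAM
xtrabackup --prepare --use-free-memory-pct=50 --target-dir=/backup/full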

Shouldn’t there be an improvement here, where applying the incremental deltas and the other operations run in parallel, leaving only the final log application single-threaded?

I noticed that even with 48 threads configured, it didn’t use them during the prepare.

The decompression, on the other hand, finished in 30 minutes.

Hmm. I can’t find anything in the docs saying the --prepare phase uses threads; they only call out the backup, copy-back, and (de)compress phases.

However, I know from experience that increasing the number of threads does indeed speed up the prepare phase. But not everything can be parallelized.