Out of memory doing import

I have a MySQL instance (an InnoDB Cluster server, switched to standalone mode, so that should not really matter). I have a couple of servers I need to do this on, so I would like to get to the bottom of it.
Secondly, I'm aware that mysql and mysqldump might not be the best tools for the job, but for now this is what we have/use :smiley:

The server currently has 16G of memory, and I'm trying to import a compressed dump file of ~75G. I expect the imported result to be something like ~500G.

Currently it's dead simple:
gunzip -c dump.sql.gz |mysql -uuser -p

After some time (about 1h), all the memory on the server is consumed and the import is killed by the OOM killer. innodb_buffer_pool_size is set to ~75% of the server's total memory, i.e. 12G.
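For reference, the arithmetic behind that setting, as a quick shell sketch (the 16 GiB total is from the post above; 75% is the stated rule of thumb):

```shell
# Compute 75% of a 16 GiB machine in bytes, the unit mysqld expects
# for innodb_buffer_pool_size.
total_bytes=$((16 * 1024 * 1024 * 1024))    # 16 GiB
bp_bytes=$((total_bytes * 75 / 100))        # 75% of RAM
echo "innodb_buffer_pool_size = $bp_bytes"  # 12884901888 bytes = 12 GiB
```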

Now I've restarted it, but with innodb_buffer_pool_size set to 1G, because that worked on another import (with only 4G of total memory there, though).

Any suggestions for improvements?

Hello @lbmdkap ,

When you say "InnoDB Cluster put in standalone", did you also disable Group Replication while you load the data?
Typically a larger buffer pool should help with the data load, yet you are seeing OOMs; is there anything else running on the system?

A few improvements…
innodb_doublewrite=OFF
innodb_flush_log_at_trx_commit=2
sync_binlog=0
innodb_flush_method=O_DIRECT
Increase innodb_redo_log_capacity (or innodb_log_file_size on older versions) to e.g. 40G
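Collected into a my.cnf fragment, the settings above might look like this (a sketch; these trade crash safety for load throughput, so consider reverting them after the import; innodb_redo_log_capacity replaced innodb_log_file_size in MySQL 8.0.30):

```ini
[mysqld]
# Import-only settings: relax durability for load speed.
innodb_doublewrite             = OFF
innodb_flush_log_at_trx_commit = 2
sync_binlog                    = 0
innodb_flush_method            = O_DIRECT
innodb_redo_log_capacity       = 40G   # innodb_log_file_size on pre-8.0.30
```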

If you don't want the data load to be replicated and binlogged, set sql_log_bin=0. Maybe something like this (the preamble and the dump have to go down the same pipe into mysql, hence the grouping):
( echo "SET SESSION sql_log_bin=0; SET SESSION foreign_key_checks=0; SET SESSION unique_checks=0;"; gunzip -c dump.sql.gz ) | mysql -uuser -p
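A runnable sketch of feeding a session preamble plus the decompressed dump down one stdin stream (a tiny generated dump is used, and `cat` stands in for the mysql client so the sketch runs anywhere):

```shell
# Build a one-line stand-in dump so the sketch is self-contained.
printf 'CREATE TABLE t (id INT);\n' | gzip > /tmp/dump.sql.gz

# Group the session preamble and the decompressed dump into one stream.
# In real use, replace `cat` with: mysql -uuser -p
out=$( { echo "SET SESSION sql_log_bin=0; SET SESSION foreign_key_checks=0; SET SESSION unique_checks=0;"
         gunzip -c /tmp/dump.sql.gz; } | cat )
echo "$out"
```

The first line the client sees is the preamble, so the relaxed settings apply to every statement in the dump that follows.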

Related tools FYI:

Hi! As my colleague is suggesting, relaxing/disabling some parameters while doing the restore can alleviate memory pressure and improve restore speed. You can get the complete list of parameters here: https://www.percona.com/blog/restore-mysql-logical-backup-maximum-speed/

Also keep in mind that no other high-memory-consumption processes should be running on the server while the decompress+restore is ongoing.

Consider that the buffer pool's actual memory usage will be ~10% higher than the configured value, plus memory for the OS and the import process. 12 GB for the BP might be a lot, but you could consider a value larger than 1 GB.

Since the BP can be changed dynamically, you can start the import and monitor the memory usage. Then, when memory usage stabilizes, consider increasing the BP size in small steps (1-2 GB at a time).
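To make the stepping concrete, one could generate the resize statements ahead of time (a sketch; sizes are in bytes, and the server may round to its buffer pool chunk size):

```shell
# Emit SET GLOBAL statements that walk the buffer pool from 2 GiB up to
# 12 GiB in 2 GiB steps; feed one to mysql each time memory usage stabilizes.
gib=$((1024 * 1024 * 1024))
out=$(for g in 2 4 6 8 10 12; do
        echo "SET GLOBAL innodb_buffer_pool_size = $((g * gib));"
      done)
echo "$out"
```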

Regards

Hi @lbmdkap,
Looks like you got 3 Perconians today! :slight_smile: Which process is actually receiving the OOM? I suspect gunzip is eating up a lot of memory, trying to decompress/pipe 75G into memory. I might suggest gunzip'ing the file first, before importing.

Secondly, try using mydumper/myloader instead of mysqldump. They are multi-threaded, so your process will go roughly 10x faster. Each table is dumped to its own file, so the risk of gunzip OOMing on a single huge file is removed.
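For reference, the invocations might look roughly like this (a sketch with an assumed database name, directory, and thread count; the commands are printed rather than executed here, since the tools may not be installed where this runs):

```shell
DB=mydb            # assumed database name
DIR=/backups/mydb  # assumed dump directory
THREADS=4          # a common starting point: one thread per vCPU

# Print the command lines instead of running them.
cmds=$(cat <<EOF
mydumper --database $DB --outputdir $DIR --threads $THREADS --compress
myloader --directory $DIR --threads $THREADS --overwrite-tables
EOF
)
echo "$cmds"
```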

Hi all,

Thank you so much for the replies here. Let me try to sum up.

I've tried some of the relaxed parameters; I'm not sure how much they helped, but for now it's at least a good way forward.

I've tried gunzipping the file first, and yes, the problem persists.

If I set the pool like so on the server, the server's memory consumption for mysql stays between 1-2G, which is expected.

SET GLOBAL innodb_buffer_pool_size = 1073741824;

For now it's fixed, so to speak, meaning workable. Meanwhile I'm testing out mydumper/myloader on some of the larger databases, which was the intention anyway. :slight_smile:

The mydumper dump was much faster, between 4-10 times actually. But the import is insanely slow.

Any suggestions for optimizations? It's only about 30-40% done after 2 days. It seems to struggle mainly with disk I/O?

But maybe it would be enough to increase the vCPUs on the VM?

top - 09:16:19 up 43 days,  3:05,  8 users,  load average: 4.07, 4.16, 4.23
Tasks: 261 total,   1 running, 258 sleeping,   0 stopped,   2 zombie
%Cpu0  : 29.8 us, 10.1 sy,  0.0 ni, 34.3 id, 24.7 wa,  0.0 hi,  1.1 si,  0.0 st
%Cpu1  :  8.2 us, 19.8 sy,  0.0 ni, 43.4 id, 26.9 wa,  0.0 hi,  1.6 si,  0.0 st
%Cpu2  : 11.8 us, 23.7 sy,  0.0 ni, 33.9 id, 29.0 wa,  0.0 hi,  1.6 si,  0.0 st
%Cpu3  :  7.9 us, 21.3 sy,  0.0 ni, 44.9 id, 24.7 wa,  0.0 hi,  1.1 si,  0.0 st
MiB Mem :  32093.2 total,    540.0 free,  30099.3 used,   1454.0 buff/cache
MiB Swap:   2048.0 total,   2048.0 free,      0.0 used.   1529.2 avail Mem

– edit –

Kinda silly, but I looked in the mysql.log and saw this:

2026-01-24T10:17:09.755774+01:00 0 [Warning] [MY-014084] [InnoDB] Threads are unable to reserve space in redo log which can't be reclaimed due to the 'log_checkpointer' consumer still lagging behind at LSN = 1063483855943. Consider increasing innodb_redo_log_capacity.

So I fixed it with:

SET GLOBAL innodb_redo_log_capacity = 8589934592;
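Sanity-checking that value (8589934592 bytes is exactly 8 GiB):

```shell
# 8 GiB expressed in bytes, the unit innodb_redo_log_capacity expects.
redo_bytes=$((8 * 1024 * 1024 * 1024))
echo "SET GLOBAL innodb_redo_log_capacity = $redo_bytes;"  # 8589934592
```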

Yes, your disks are heavily overloaded; almost 1/3 of your CPU time is spent waiting on disk (the wa column in top). The reason you need such a huge redo log is that your disks cannot keep up. Make sure you applied the changes suggested by @kedarpercona above, as those will help reduce IO pressure.

Thanks a lot. You guys are awesome!

By using mydumper/myloader, the dump and import speed increased by a factor of 6-10. Pretty massive! And without OOM issues.