High InnoDB Log File Usage

I installed an InnoDB database yesterday on a new database server I am working on, using the latest Percona Server 5.7 release from the apt repo.

The database contains 2 tables of highly transient data. One table has a high update rate and the other has a high insert & delete rate.

I installed PMM and was shocked at the attached graph!

The thing is, I can actually sacrifice some durability, as the data is almost immediately invalidated by the upstream sensors that feed it in, and I don’t need to keep a record of the changes made. What I obviously don’t want to sacrifice is being able to open the tables after a crash; I don’t mind losing some data, though!

My config file is as follows…

#
# The Percona Server 5.7 configuration file.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
explicit_defaults_for_timestamp

local-infile = 0

log-error = /var/log/mysql/error.log

# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_ALL_TABLES

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0


# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048

# INNODB #
innodb-log-files-in-group = 2
innodb-log-file-size = 256M
innodb-flush-log-at-trx-commit = 2
innodb-file-per-table = 1
innodb-buffer-pool-size = 8G

Hi Leeus,

The redo log is used to keep track of all DML statements (inserts, updates, deletes) before the changes are applied to the data files on disk.
When using the InnoDB storage engine (MySQL :: MySQL 5.6 Reference Manual :: 14.1 Introduction to InnoDB), it’s not possible to bypass or avoid the redo log, as that would defeat its purpose.

Having a large redo log helps minimize disk write activity: if many rows on the same page are modified before that page is written to disk, then a single flush persists all of those changes at once. If the redo log is “too small”, checkpoints are forced more often and you get write amplification, in the sense that each modified row can potentially cost one IOP (I/O operation), so multiple rows on the same page end up causing multiple IOPs instead of just one.

It’s suggested to have a redo log big enough to hold around one hour’s worth of writes, to minimize disk write activity when flushing pages. This is explained in this blog post: How to calculate a good InnoDB log file size - Percona Database Performance Blog, and as per your graphs, you would need around 18 GB worth of redo log (or 2x 9 GB files).
The tradeoff is that with a larger redo log, crash recovery will take longer in the event of a crash, but you potentially save a lot of write activity.
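If you want to double-check that sizing against a live measurement, the usual approach (the one the blog post above walks through) is to sample the log sequence number twice, 60 seconds apart, from the mysql client; the pager command just filters the output down to the relevant lines:

mysql> pager grep sequence
mysql> SHOW ENGINE INNODB STATUS\G SELECT SLEEP(60); SHOW ENGINE INNODB STATUS\G
mysql> nopager

The two “Log sequence number” values are byte counters, so their difference is the amount of redo written in those 60 seconds; multiply by 60 for a rough hourly rate, and size innodb_log_file_size x innodb_log_files_in_group to cover it. Assuming the ~18 GB estimate holds for your workload, the redo-related part of your my.cnf would look something like this (innodb_log_file_size is not dynamic, so it needs a clean shutdown and restart to take effect; since 5.6, InnoDB recreates the log files with the new size automatically on startup):

# INNODB #
innodb-log-files-in-group = 2
innodb-log-file-size = 9G   # was 256M; 2 x 9G = ~18G of redo capacity
innodb-flush-log-at-trx-commit = 2
innodb-file-per-table = 1
innodb-buffer-pool-size = 8G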

After a DML statement executes, the changes are recorded in the redo log, but the rows on disk are not necessarily modified yet, as the data file writes are asynchronous. You can read more about write tuning in the following blog post: Tuning MySQL/InnoDB Flushing for a Write-Intensive Workload - Percona Database Performance Blog
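If you want to watch that asynchronous flushing from the SQL side, two standard status counters give a quick feel for it: redo bytes written versus pages still waiting to be flushed (both variable names below exist in stock MySQL 5.7 and Percona Server 5.7):

-- Total redo bytes written so far
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- Buffer pool pages modified in memory but not yet written to the data files
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';

Sampling these every few seconds (e.g. with mysqladmin ext -ri5) shows how far the on-disk data lags behind the redo log during your insert/delete bursts.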

Another possibility is going with the MyISAM storage engine, which does not use a redo log, but at the cost of not allowing transactions, using table-level locking (instead of row-level locking) and other disadvantages that you can check in this blog post Should you move from MyISAM to Innodb ? - Percona Database Performance Blog or the official docs MySQL :: MySQL 5.7 Reference Manual :: 15.2 The MyISAM Storage Engine

Do note that since MyISAM has no redo log or other safeguarding mechanisms, in the event of a crash or corruption some data loss will likely occur.
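If you do decide those tradeoffs are acceptable for your two transient tables, the conversion itself is a single statement per table (the table name below is just a placeholder for your own):

-- Rewrites the whole table and blocks writes to it while running,
-- so do it during a quiet window; ENGINE=InnoDB converts it back.
ALTER TABLE sensor_readings ENGINE=MyISAM;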

Let us know if you have any further questions and we will gladly assist.
