InnoDB inserts/updates per second are too low


Inserts and updates to InnoDB tables are too slow: inserts/sec and updates/sec are around 50.

I am using Percona Server version 5.5.32-31.0.

I increased innodb_buffer_pool_size to 4GB. The server has 7.5GB of RAM.

innodb_flush_log_at_trx_commit is 1.

The CPU is almost idle. It is a dual-core processor.
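For reference, the relevant part of my.cnf with the settings described above would look roughly like this (only the values mentioned in this post; everything else left at defaults):

```ini
[mysqld]
innodb_buffer_pool_size       = 4G   # ~half of the 7.5GB RAM
innodb_flush_log_at_trx_commit = 1   # flush redo log on every commit
```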

Can anyone please suggest ways to troubleshoot this issue?


Insert/update performance depends not only on the hardware configuration of the server, but also on the size of the table, the number of rows it has, the number of indexes, and the primary key and foreign key constraints.
Could you provide those details?

Hi Yogesh,

I am using the LOAD DATA INFILE utility to load data from .dat files into a few tables. After the data is loaded, I run a few tests. My tests truncate the data in these tables,
so I load it again for further tests. These insertions were going through fine without any performance issues.

But after the 6th or 7th iteration, I am facing this issue even with empty tables.

Each table has a primary key and a foreign key reference to other table.

Any idea why this could be slow?



How many records are there in the load file?
Are you using innodb_file_per_table=1 (a separate .ibd file for each table) or a single ibdata1 for all databases and tables? If you have a single ibdata file for all databases/tables, then repeated truncate-and-load cycles increase the size of that file, which might cause the problem.
You can try this: drop and recreate the table before every LOAD DATA operation and see if it helps.
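A sketch of the drop-and-recreate approach (the table name, columns, and file path here are hypothetical; substitute your own schema):

```sql
-- Recreate the table from scratch before each load, so InnoDB
-- starts with a fresh .ibd file instead of reusing a grown one.
DROP TABLE IF EXISTS staging_data;
CREATE TABLE staging_data (
  id      INT NOT NULL PRIMARY KEY,
  payload VARCHAR(255)
) ENGINE=InnoDB;

LOAD DATA INFILE '/tmp/data.dat' INTO TABLE staging_data;
```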


I tried with 100k records, then reduced the number to 10k and even 1k. In all cases, the insertions are too slow.
Yes, the innodb_file_per_table option is enabled.

What is the size of the InnoDB transaction logs on this server? Also, you are flushing on every transaction; do you have a battery-backed write cache?

I’m having the same issue on Percona 5.6 (Ubuntu 13.04) running on an AWS m1.xlarge instance.
XFS filesystem built with default values on software RAID (mdadm).
Server configuration based on values provided by
The replication catch-up rate is EXTREMELY slow (1-5 records per second, where I expect 100-200 per second)!
I have attached SHOW ENGINE INNODB STATUS output. As you can see, the server is almost idling.
Can you help me identify the bottlenecks?

Update: AWS CloudWatch shows constant high EBS IO (1500-2000 IOPS), but the average write size is 5 KB/op, which seems very low.



The server does not have a battery-backed write cache.

Vijay, so you have innodb_flush_log_at_trx_commit = 1 but there is no write cache. This would not be such a problem if you had SSD disks, but otherwise each flush to disk is very slow. I would recommend using innodb_flush_log_at_trx_commit = 2 - you can lose up to 1 second of transactions in case of an OS crash, but the IO pressure will be a lot smaller.
Also, the InnoDB logs are too small for any production purpose. Think about 512M or similar. But check first how to change them:
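For what it's worth, in 5.5 the redo log files cannot just be resized in place. The usual procedure (a sketch; the size shown is an example, and the datadir paths depend on your install) is: shut the server down cleanly, move the old ib_logfile0/ib_logfile1 out of the datadir, change the size in my.cnf, then restart so InnoDB recreates the files at the new size:

```ini
# my.cnf - apply only after a clean shutdown and after moving the
# old ib_logfile0/ib_logfile1 out of the datadir:
innodb_log_file_size = 256M   # two files => 512M of total redo log
```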

Serge, I don’t think your problem is similar. Indeed, InnoDB looks idle. So your replication is not keeping up? Is the IO thread not able to pull binary logs from its master in time, or is the SQL thread slowly processing updates? In the first case, check the network between master and slave. In the second case, check whether your master is using ROW-based replication and whether all the tables being updated have primary keys.

This is not in production. I am doing this test on a non-prod node.

The only point is that with the same MySQL configuration, inserts are pretty fast on MySQL 5.1. This problem occurs only with 5.5.

I am going to try a few things.

I will have the following settings in my.cnf


Currently, innodb_log_buffer_size=10M. I will check this number too.

OK, so a 5.5 slave, with exactly the same settings and on exactly the same hardware, is slower than a 5.1 slave?

Why are you going to disable autocommit? How would this help? If you have large insert sessions, you can wrap them in a transaction without changing this variable.
Also, turning off autocommit on the slave won’t change anything regarding replication, as it’s the master that wraps each insert into BEGIN … COMMIT sections. So you can either use, per session on the master:
set autocommit=0;
insert …
insert …

set autocommit=1;
or (IMHO better) wrap the inserts between BEGIN and COMMIT. This way you may achieve better speed for some group inserts, updates, etc.
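The BEGIN/COMMIT variant would look like this (the table and values are hypothetical; the point is that the whole group gets a single log flush at commit instead of one per statement):

```sql
BEGIN;
INSERT INTO t VALUES (1, 'a');
INSERT INTO t VALUES (2, 'b');
-- ... more inserts in the same transaction ...
COMMIT;  -- one flush of the redo log for the whole group
```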

innodb_log_buffer_size at 10M should be big enough, unless you have really big transactions.

I think you mistook my comments for Serge’s comments. :)

I don’t have a replication setup. My data is in a .dat file, and I am using the MySQL LOAD DATA INFILE utility for bulk data loading.

Vijay, indeed I got a bit confused with Serge’s comment, but at least part of the autocommit comment was for you :)
So, regarding 5.1 vs 5.5 performance of LOAD DATA queries, there may be a very simple explanation: in MySQL 5.1 the default engine for tables is MyISAM, while in MySQL 5.5+ it is InnoDB. And of course LOAD DATA can be faster for MyISAM, as it’s a much simpler process (no transactions, no doublewrite, no redo logs, etc.).
Please confirm whether the table was InnoDB in 5.5 and MyISAM in 5.1.

I am using InnoDB in both 5.1 and 5.5.

OK, let me just double check then: you are using the same settings in 5.5 as in 5.1? For example, is the transaction log size the same? Is this exactly the same hardware?

Hi Vijay,

If you are running the “LOAD DATA INFILE” script from a non-cloud instance, it will take time, and sometimes the inserts/updates are very slow depending on your AWS region, instance type, and the network bandwidth between your machine and the AWS instance.

Suggestions: a) Move the file onto the cloud instance (instead of running from a non-cloud instance) and run the LOAD DATA INFILE command from localhost, or from a remote host in the same region.
b) Disable the indexes and FK checks -> import the data -> re-enable the indexes.
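A sketch of suggestion (b) at the session level (the file path and table name are hypothetical; remember to turn the checks back on afterwards):

```sql
-- Disable constraint checking for this session only, before the bulk load.
SET foreign_key_checks = 0;
SET unique_checks = 0;

LOAD DATA INFILE '/path/on/server/data.dat' INTO TABLE my_table;

-- Re-enable the checks once the import finishes.
SET unique_checks = 1;
SET foreign_key_checks = 1;
```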

For more detail on AWS instances: