Counter as a bottleneck

Hi!

I have the following problem: an FMS instance reports some information about its state every 10 seconds. This data goes into a database table that also holds other information.

Sometimes at peak times there are a lot of reports, and occasionally I get the following error: “SQL query failed (1213): Deadlock found when trying to get lock; try restarting transaction”.

The query is a simple “UPDATE table SET field = field + 1”, and the table’s engine is InnoDB.

So it seems to me that the bottleneck is that a single row has to be updated very frequently. My idea is to move this kind of data into a separate table, where each record in the original table would own, say, 50 rows. Each UPDATE would pick one of these rows at random, so the write throughput should be much higher.

The sum of these rows is needed only occasionally.
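To make the idea concrete, here is a minimal sketch of what I mean; the table and column names (counter_slots, counter_id, slot, value) are only placeholders:

    -- One logical counter striped across 50 slot rows.
    CREATE TABLE counter_slots (
        counter_id INT UNSIGNED NOT NULL,
        slot       TINYINT UNSIGNED NOT NULL,  -- 0..49
        value      BIGINT UNSIGNED NOT NULL DEFAULT 0,
        PRIMARY KEY (counter_id, slot)
    ) ENGINE=InnoDB;

    -- Each report increments a random slot, so concurrent writers
    -- rarely contend for the same row lock.
    UPDATE counter_slots
       SET value = value + 1
     WHERE counter_id = 1
       AND slot = FLOOR(RAND() * 50);

    -- The occasional read just sums the slots.
    SELECT SUM(value) FROM counter_slots WHERE counter_id = 1;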

A colleague of mine has another idea: instead of updates, we would insert one row into this counter table every time a report comes in. When we need the total, we would count the rows and then delete them.
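His version would look roughly like this (again, the names are only illustrative; remembering the highest id before deleting is my addition, so that rows inserted during the count are not lost):

    -- One row per report; inserts never block each other on row locks.
    CREATE TABLE counter_events (
        id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        counter_id INT UNSIGNED NOT NULL,
        KEY (counter_id)
    ) ENGINE=InnoDB;

    -- Every report is a plain INSERT.
    INSERT INTO counter_events (counter_id) VALUES (1);

    -- When the total is needed: fix the highest id first, then count
    -- and delete only up to that id.
    SELECT MAX(id) INTO @max_id FROM counter_events WHERE counter_id = 1;
    SELECT COUNT(*) FROM counter_events WHERE counter_id = 1 AND id <= @max_id;
    DELETE FROM counter_events WHERE counter_id = 1 AND id <= @max_id;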

I would be interested in your opinion on these two possible solutions. My guess is that the second one is much more I/O intensive (for example, if the same 50 rows are updated frequently, their pages should stay in memory and MySQL can merge many changes before writing them to disk), but I am not sure, so I am curious what others think.

TIA!

Best Regards,
Felhő
/ustream.tv/

Unfortunately, that’s one of the pitfalls of InnoDB (or any other transactional database). Read more here: http://www.mysqlperformanceblog.com/2007/02/25/pitfalls-of-converting-to-innodb/

It’s not a bottleneck, but an unavoidable issue with concurrency.

Hi!

I am clear about InnoDB’s characteristics, and I didn’t state that this is InnoDB’s fault, just that this issue is a bottleneck in our system.

Best Regards,
Felhő