Multiple inserts at the same time

Hi, I’m new to this forum. I have a pretty good grasp of the basics of MySQL and databases, but the things I usually use MySQL and the different engines (InnoDB / MyISAM) for aren’t very hardcore, and there usually aren’t many transactions occurring at the same time.

However, I just talked to my boss. We run a couple of deadline submissions for the public, two a year, and everyone hits submit at the very last minute, causing a strain on the server around twice a year. Currently we take submissions via email just to avoid this issue, but some colleagues of his have similar setups, and their systems crash around the submission deadline.

Now, we get approximately 100 submissions at each deadline, so it’s not really a huge load, and it would really only be INSERTs into one table with, I’m guessing, 10-15 fields. No UPDATEs will be done, nor any SELECTs (only one or two after the actual deadline, and those are done by us).

Anyway, am I correct to assume that with InnoDB and row-level locking no data would be lost, and that, assuming we set the allowed number of connections high enough, there won’t be any queues and it will pretty much all be done “at once”?
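To convince myself that concurrent inserts into one table don’t lose rows, I put together a small stand-in demo. It uses SQLite with Python threads instead of MySQL/InnoDB (an assumption for portability; the table name and field names are made up), but the idea is the same: each “user” gets its own connection and fires one INSERT, and at the end every row has landed:

```python
import os
import sqlite3
import tempfile
import threading

# Create the submissions table once, up front.
db_path = os.path.join(tempfile.mkdtemp(), "submissions.db")
conn = sqlite3.connect(db_path)
conn.execute(
    "CREATE TABLE submissions (id INTEGER PRIMARY KEY, name TEXT, payload TEXT)"
)
conn.commit()
conn.close()

def submit(i):
    # Each "user" opens its own connection, like separate web requests.
    # timeout=30 makes a writer wait for the lock instead of failing.
    c = sqlite3.connect(db_path, timeout=30)
    c.execute(
        "INSERT INTO submissions (name, payload) VALUES (?, ?)",
        (f"user{i}", "answers..."),
    )
    c.commit()
    c.close()

# Simulate 100 people hitting submit at (roughly) the same time.
threads = [threading.Thread(target=submit, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

check = sqlite3.connect(db_path)
count = check.execute("SELECT COUNT(*) FROM submissions").fetchone()[0]
check.close()
print(count)  # 100 -- every concurrent insert landed
```

Note that SQLite locks the whole database for a write, so the writers do queue briefly behind each other; with InnoDB’s row-level locking the contention on plain inserts should be even smaller.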
Now for our own submissions I don’t think we will have any real issues.

However, it got me thinking, since his colleagues have a lot more submissions and their servers crash at every deadline because everyone waits until the last minute. Let’s assume the crash is caused by a flawed database setup and not by a bottleneck in network or machine load.
This is just speculation on my part, but I assume they have a similar table with 10-15 fields, and the crash happens when a couple of thousand people try to submit at the same time (each submission resulting in an INSERT). Would it be enough to configure the number of connections and use InnoDB with row locking, and if the crashes continue, is it then safe to say that the server just isn’t up to the spike in workload? =)
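For reference, the knobs I have in mind are set in my.cnf. This is only an illustrative excerpt with example values, not a recommendation for any particular box:

```ini
[mysqld]
default-storage-engine  = InnoDB
# Enough headroom so a deadline spike doesn't hit "Too many connections".
max_connections         = 200
# Main InnoDB cache; size it to the machine's available RAM.
innodb_buffer_pool_size = 256M
```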

Now, I usually suck at explaining things, so ask if there’s anything I missed pointing out.

w z l

Please have a look at the MySQL docs about InnoDB next-key locking: ng.html
So when you do an insert, it first locks the ability to add rows to the table, then adds the data, then releases the lock.

Did you find out from your colleague whether:

  1. it was the DBMS specifically that had problems?
  2. the server ran out of memory due to excessive memory use by the DBMS?
  3. the server ran out of memory because the webserver/application ate it all up?

Because I would say that most server crashes are due to out-of-memory errors caused by an application that is not tuned to throttle high loads.
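Throttling can be done in the application tier before the load ever reaches the database. A minimal sketch, assuming a threaded web app (the limit of 10 and the handler are made-up example values): a semaphore caps how many requests touch the database at once, so a spike queues in the app instead of exhausting server memory.

```python
import threading

# Allow at most 10 submissions to hit the database concurrently;
# the value is an arbitrary example, tune it to your server.
MAX_CONCURRENT = 10
db_slots = threading.Semaphore(MAX_CONCURRENT)
results = []

def handle_submission(i):
    with db_slots:          # blocks while 10 workers are already inside
        results.append(i)   # stand-in for the actual INSERT

# Simulate a spike of 50 simultaneous requests.
threads = [threading.Thread(target=handle_submission, args=(i,)) for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 50 -- all handled, but never more than 10 at a time
```

The point is that requests beyond the cap wait cheaply in the application instead of each allocating database buffers at the same time.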

I work on MySQL servers that sometimes handle as much as 8,500 queries per second at peak, where about 40% of the queries are modifications, and we haven’t experienced any problems at all.

But on the other hand, I have also seen MySQL configurations that could easily crash the server if more than 10 people ran the same heavy report at the same time.
In those cases it’s all about how much memory you allow MySQL to use per query for sorting etc.
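The relevant settings are the per-connection buffers in my.cnf. Since they are allocated per connection as needed, their cost multiplies with the number of concurrent queries; the values below are only illustrative:

```ini
[mysqld]
# Allocated per connection when a query sorts or joins without an index,
# so the worst case is roughly buffer size * concurrent heavy queries.
sort_buffer_size = 256K
join_buffer_size = 256K
# Per-query cap on in-memory temporary tables before spilling to disk.
tmp_table_size   = 16M
```

With, say, a 256M sort buffer and a few dozen concurrent reports, memory use explodes, which matches the “10 people run the same heavy report” crash scenario above.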