Maximum number of insertions

I need MySQL to support an average of 200 insertions per second, and for that figure to be scalable. Is this possible? What hardware and configuration would be necessary? Thanks.

The maximum throughput depends on hardware, configuration, database size and schema. But yes, it is definitely possible - 200 qps is a low threshold. We have had clients doing thousands of inserts per second on pretty standard MySQL installations. Everything depends on the approach and the configuration.

As a small example, I get up to about 8,000 inserts/second with an InnoDB table on my three-year-old laptop, as long as it doesn't have to write to disk too much, since a laptop disk is very slow in situations like this.
Example figures:

50000 records in: 6.27902913093567 seconds
50000 records in: 6.30907201766968 seconds
50000 records in: 9.94429898262024 seconds
50000 records in: 7.40064191818237 seconds
50000 records in: 6.3791720867157 seconds
50000 records in: 7.81123185157776 seconds

(The dip in performance in the middle is due to other applications also running on my laptop.)
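
For what it's worth, figures in that range usually depend on batching: many rows per INSERT statement and many statements per transaction, so that thousands of rows share a single log flush to disk. A minimal sketch of that kind of test, using a hypothetical table t that is not from the original post:

-- Hypothetical InnoDB test table
CREATE TABLE t (
    id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    val VARCHAR(64) NOT NULL
) ENGINE=InnoDB;

-- Group the inserts so the whole batch shares one commit/disk sync
START TRANSACTION;
INSERT INTO t (val) VALUES ('a'), ('b'), ('c'), ('d');
INSERT INTO t (val) VALUES ('e'), ('f'), ('g'), ('h');
-- ... repeat for the rest of the batch ...
COMMIT;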

So buy a real server with plenty of RAM and a fast CPU, and there is no doubt it can handle far more than 200 inserts per second, even if your database is large.
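
On the configuration side, the usual my.cnf settings to look at for insert-heavy InnoDB work are the buffer pool size and the log flushing behaviour. The values below are only illustrative assumptions, not recommendations from this thread:

[mysqld]
# Let InnoDB cache as much data and index as the server's RAM allows
innodb_buffer_pool_size = 4G
# Bigger redo logs mean fewer checkpoint stalls under heavy writes
innodb_log_file_size = 256M
# Flush the log once per second instead of at every commit; faster,
# but up to about a second of commits can be lost on a crash
innodb_flush_log_at_trx_commit = 2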

The question now is how heavy the SELECTs are that you will be running against this machine at the same time.

Right. The application is actually a mail server embedded in an existing application. It first has to do 8 inserts while parsing data that arrives in the form of mail, then 6 queries to validate the operation, and then it sends the parsed data on to another application and registers that event in the database with 8 more inserts. This whole routine has to run at least 200 times per second. I tried it on a 4-CPU server running Red Hat 4 and the results were not optimal, so I would like to know what configuration changes should be made on the database side to make this as fast as possible; the C code on its side is already optimized. Thanks
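
One thing worth checking in the C code is whether each of those 8 + 8 inserts is committed individually. Wrapping one mail's whole routine in a single transaction lets all 16 inserts share one commit. A rough sketch, with hypothetical table names that are not from the original application:

-- One transaction per incoming mail
START TRANSACTION;
-- 8 inserts for the parsed mail data
INSERT INTO mail_parts (msg_id, field_name, field_value)
    VALUES (42, 'From', 'user@example.com');
-- ... 7 more inserts like the above ...
-- 6 validation queries
SELECT COUNT(*) FROM mail_parts WHERE msg_id = 42;
-- ... 5 more validation queries ...
-- 8 inserts to register the hand-off to the other application
INSERT INTO mail_events (msg_id, event) VALUES (42, 'forwarded');
-- ... 7 more inserts like the above ...
COMMIT;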

OK, looks fine and dandy.

How big are the tables in rows and MB?

Do you have a lot of selects running against these tables also?
And, more importantly, is the application deleting records during normal working routines?

Please also post the output from:

SHOW STATUS LIKE 'Table_locks%';
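
For context, that query returns the Table_locks_immediate and Table_locks_waited counters; a high waited count points at lock contention. If the tables turn out to be InnoDB and the server is 5.0 or newer (an assumption on my part, not something stated above), the row-lock counters are worth checking as well:

SHOW STATUS LIKE 'Innodb_row_lock%';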