We are evaluating MySQL Cluster (CGE 5.1.23-ndb-6.3.7) to see if it is a suitable technology for keeping an online cache copy of our large Sybase database.
We are trying configurations with 2 and with 4 storage nodes.
So far we cannot make inserts into a single table fast enough: we are getting 300-1000 inserts per second (using LOAD DATA LOCAL INFILE; we will try Connector/J later on). When we run multiple client processes inserting into the same table, it gets even worse, around 100 records per second in total.
What is interesting is that running 10 processes simultaneously, each inserting into its own table, gives the same performance on every table, so adding parallel loaders on other tables does not affect the (poor) single-table insert performance.
We have tried tuning all kinds of cluster parameters; so far nothing has helped.
We also don't see much CPU usage or network traffic, so it seems like some parameter still needs to be tuned.
Any help would be greatly appreciated.
Thank you for your time.
It sounds like you might be using MyISAM as a storage engine.
Have you tried InnoDB?
Insert performance is also affected by the number and nature of the indexes on your tables. Try to make sure the clustered index is incremental, such as an integer identity field.
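A minimal JDBC sketch of that idea (the table and column names here are made up for illustration): an AUTO_INCREMENT integer primary key gives InnoDB an incremental clustered index, so new rows are appended at the end of the index instead of splitting pages.

[code]
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ClusteredKeySketch {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://dbhost:3306/test", "user", "pass");
        Statement st = con.createStatement();
        try {
            // AUTO_INCREMENT primary key = incremental clustered index:
            // inserts append to the end rather than splitting index pages.
            st.executeUpdate(
                "CREATE TABLE cache_rows (" +
                "  id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY," +
                "  payload VARCHAR(255)" +
                ") ENGINE=InnoDB");
        } finally {
            st.close();
            con.close();
        }
    }
}
[/code]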
We need very high availability (we will spread the data nodes across 2 data centers, with the management node in a 3rd data center) plus the high performance promised on paper, so the NDB storage engine is the only option here.
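For the record, the layout we have in mind looks roughly like the following config.ini sketch (host names are placeholders, not our real ones):

[code]
[ndbd default]
NoOfReplicas=2          # two copies of every table fragment

[ndb_mgmd]
HostName=mgm.dc3        # management node in the 3rd data center

[ndbd]
HostName=ndb1.dc1       # data nodes split across the two data centers
[ndbd]
HostName=ndb2.dc2
[ndbd]
HostName=ndb3.dc1
[ndbd]
HostName=ndb4.dc2

[mysqld]                # SQL nodes the loaders connect to
[mysqld]
[/code]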
Thanks
[B]carpii wrote on Mon, 04 February 2008 20:46[/B]
It sounds like you might be using MyISAM as a storage engine. Have you tried InnoDB?
Only for inserts?
Why should InnoDB be faster than MyISAM?
With Data_free == 0 you don't even need locking (MyISAM can append new rows concurrently as long as the table has no holes).
I would say locking and network latency are the problem here.
When it comes to insert-intensive applications, there is nothing faster than a single server. The reason is that a write needs an exclusive lock on the row or table (depending on the lock level), so writes do not scale by adding servers.
And within the NDB cluster, the nodes have to agree that the specific lock is set.
Taking into account that normal LAN latency is often about 0.5 to 1 ms, about 1000 records per second is pretty reasonable: one synchronous round trip per insert at ~1 ms caps you at roughly 1000 inserts per second.
When you insert into several tables at the same time, the locks for each table are independent of each other, hence you have no problem running those loads in parallel.
I don't really know how LOAD DATA INFILE does its inserts, but the usual suggestion for similar problems is to reduce the number of lock acquisitions (and round trips) by either:
- using the extended INSERT syntax (several rows per INSERT statement), or
- grouping a lot of inserts within one transaction (for example 100); see the sketch after this list.
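A minimal Connector/J sketch of both suggestions (host, table, and column names are made up): rewriteBatchedStatements=true asks the driver to rewrite the batch into extended multi-row INSERTs, and the explicit commit every 100 rows groups them into one transaction.

[code]
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
            "jdbc:mysql://dbhost:3306/test?rewriteBatchedStatements=true",
            "user", "pass");
        con.setAutoCommit(false); // commit in groups, not per row
        PreparedStatement ps = con.prepareStatement(
            "INSERT INTO cache_rows (payload) VALUES (?)");
        try {
            for (int i = 0; i < 10000; i++) {
                ps.setString(1, "row-" + i);
                ps.addBatch();
                if (i % 100 == 99) {   // one transaction per 100 rows
                    ps.executeBatch();
                    con.commit();
                }
            }
            ps.executeBatch();         // flush any remainder
            con.commit();
        } finally {
            ps.close();
            con.close();
        }
    }
}
[/code]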
We tried Connector/J.
From a single multi-threaded program we were able to reach almost 20000 inserts per second into a single table, using 50 threads and a load-balancing URL spread over multiple mysqld nodes, which is much better. Batching rows together and using transactions gave a slight performance improvement of about 20%; in this case the best batch size was around 10 records. Using a single mysqld node for the inserts gave us ~12K inserts per second (we still need to play with the NDB connection pool option in mysqld).
We also hit an ugly bug in Connector/J with the load-balancing URL and multiple mysqld nodes: some of the commits went missing, even though the con.commit() calls for them returned without any exceptions.
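Roughly what our loader does, sketched below (host names, table name, and row counts are placeholders; the thread and batch counts mirror what we actually used):

[code]
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ParallelLoadSketch {
    // Connector/J load-balancing URL spreads connections across mysqld nodes
    static final String URL = "jdbc:mysql:loadbalance://sql1:3306,sql2:3306/test";

    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        for (int t = 0; t < 50; t++) {          // 50 loader threads
            final int threadId = t;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        Connection con = DriverManager.getConnection(URL, "user", "pass");
                        con.setAutoCommit(false);
                        PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO cache_rows (payload) VALUES (?)");
                        for (int i = 0; i < 100000; i++) {
                            ps.setString(1, threadId + "-" + i);
                            ps.addBatch();
                            if (i % 10 == 9) {  // batch size ~10 worked best for us
                                ps.executeBatch();
                                con.commit();
                            }
                        }
                        ps.executeBatch();
                        con.commit();
                        ps.close();
                        con.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
    }
}
[/code]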
We haven't played with all the Connector/J options yet, nor tried simultaneous runs against multiple tables.
We also tried the mysqlimport --use-threads=X utility from our CGE build, with very poor results.
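For reference, the invocation looked roughly like this (database name, file name, and thread count are placeholders):

[code]
mysqlimport --local --use-threads=4 --fields-terminated-by=',' test /tmp/cache_rows.txt
[/code]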
Forgot to mention: the link between the datacenters is dark fiber, with no repeaters.