RAID: many smaller disks vs. fewer larger ones

I am considering the options for a new server running MySQL.
I would like to have about 4 TB of storage and would like to know which is better for performance:
RAID 6 with 8 x 750 GB drives (SATA II)
or
RAID 6 with 16 x 300 GB drives (SATA II)

The queries are mostly SELECTs, but I could have a lot of simultaneous ones.

So what do you think, which is better: many small drives or fewer larger ones? And how would the stripe size influence performance?

RAID 5/6 over 16 drives is going to have absolutely horrible write performance; even over 8 drives it's bad. You should look at RAID 10 with 12 x 750 GB drives. With RAID 10, more spindles (drives) will certainly result in better performance. And unless your database is pretty much read-only, you'll have small random writes all over the place, so smaller stripe sizes will be better (less data needs to be rewritten on each write).
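To put rough numbers on that, here is a back-of-the-envelope sketch of the classic RAID write penalties, assuming a hypothetical ~100 random IOPS per SATA spindle (my figure, not one from the question):

```python
# Back-of-the-envelope random-write IOPS for the proposed arrays.
# Assumption (hypothetical): a 7200 rpm SATA II drive sustains roughly
# 100 random IOPS; adjust for the actual hardware.

IOPS_PER_DRIVE = 100

# Classic RAID write penalties: one logical random write costs this many
# physical I/Os (RAID 5: read data + read parity + write data + write
# parity = 4; RAID 6: 6 with two parity blocks; RAID 10: 2 mirror writes).
WRITE_PENALTY = {"RAID 5": 4, "RAID 6": 6, "RAID 10": 2}

def random_write_iops(drives, level):
    """Aggregate random-write IOPS the array can deliver."""
    return drives * IOPS_PER_DRIVE / WRITE_PENALTY[level]

for drives, level in [(8, "RAID 6"), (16, "RAID 6"), (12, "RAID 10")]:
    print(f"{level} with {drives} drives: "
          f"~{random_write_iops(drives, level):.0f} write IOPS")
```

The absolute numbers are only illustrative; the point is the gap between a 2x and a 6x penalty per write.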

While I don’t know your application, judging by the size of your data set, I’d suggest maxing out the RAM in your machine, too.

RAID 5/6 is bad for write performance because, to write a block, the corresponding blocks from the other disks have to be read, the recovery (parity) record has to be recomputed from them and the new data, and then the data block plus the new recovery record(s) have to be written: one recovery record for RAID 5 (two disk writes) and two for RAID 6 (three disk writes).
(Instead of reading all the other blocks, the controller may read just the old data and the old parity block(s) to compute the new ones.)
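To illustrate that read-modify-write shortcut, here is a minimal sketch of the RAID 5 parity update, with byte strings standing in for disk blocks (a toy model, not a real controller):

```python
import os

# Parity in RAID 5 is the XOR of all data blocks, so the new parity can
# be computed from just the old data block and the old parity -- the
# extra reads described above.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

BLOCK = 16  # toy block size in bytes
d0, d1, d2 = (os.urandom(BLOCK) for _ in range(3))  # three data disks
parity = xor(xor(d0, d1), d2)                       # parity disk

# Overwrite d1: read old data + old parity, XOR the old data out and the
# new data in, then write data and parity (2 reads + 2 writes total).
new_d1 = os.urandom(BLOCK)
new_parity = xor(xor(parity, d1), new_d1)

# The shortcut matches recomputing parity from scratch.
assert new_parity == xor(xor(d0, new_d1), d2)
print("parity update via read-modify-write verified")
```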

So for good write performance (which you will need for transactions (InnoDB in general) and replication), and if you can afford the price, use RAID 10!
The data is also written twice, but no corresponding blocks have to be read first and no recovery record has to be calculated… The data is written directly to disk, and the writes are distributed across all disks!

For reads it does not make as much difference, but RAID 10 will also be faster…

If you want to use RAID 10 with InnoDB, I would suggest a chunk size of 16 KB (the InnoDB default data-block size). The reason: there is no point in making it bigger, as you don't gain much read/write performance. And if you make it smaller, a normal InnoDB data-block read/write (of 16 KB) has to be split across the RAID 0 disks. That means seeking on more than one disk, which results in a bigger average seek time, because the resulting seek time is the maximum of the seek times of the individual disks (not the average)!
For writing small data chunks, the seek time is the limiting factor, not the write time.
Example disk:
write performance: 80 MB/s
avg. seek time: 8 ms
=> 0.64 MB could have been written in the time one head repositioning takes…
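The arithmetic behind that example, and what it means for random 16 KB InnoDB page writes (same assumed disk figures):

```python
# While the head spends one average seek repositioning, how much data
# could it have streamed instead?

SEQ_RATE = 80e6   # sequential write rate, bytes/s (example disk)
SEEK = 0.008      # average seek time, s (example disk)

print(f"data writable during one seek: {SEQ_RATE * SEEK / 1e6:.2f} MB")

# Effective throughput for random 16 KB page writes: each write pays a
# full seek plus a tiny transfer, so the seek dominates.
PAGE = 16 * 1024
per_write = SEEK + PAGE / SEQ_RATE
print(f"16 KB random writes: {PAGE / per_write / 1e6:.2f} MB/s "
      f"({1 / per_write:.0f} writes/s)")
```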

Bigger sequential reads/writes (where transfer rate matters more than seek time) will be distributed across the RAID 0 disks, increasing the speed.

Random data-block reads/writes will also be distributed across the RAID 0 disks, parallelizing the seeks of the random reads/writes.
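The flip side, and why a single 16 KB page should not span multiple chunks: when one page is split across disks, it completes only when the slowest disk has finished seeking. A small Monte Carlo sketch, assuming (hypothetically) seek times uniformly distributed between 2 ms and 14 ms, i.e. the 8 ms average of the example disk:

```python
import random

# Effective seek time of one I/O striped over N disks: the maximum of N
# independent seeks, not their average. Distribution is an assumption
# made for illustration only.

def avg_effective_seek(disks, trials=100_000):
    total = 0.0
    for _ in range(trials):
        total += max(random.uniform(0.002, 0.014) for _ in range(disks))
    return total / trials

for disks in (1, 2, 4, 8):
    print(f"page striped over {disks} disk(s): "
          f"effective seek ~{avg_effective_seek(disks) * 1000:.1f} ms")
```

With this distribution the effective seek climbs from ~8 ms on one disk toward ~13 ms over eight, while tying up every spindle it touches.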

If you really want to use RAID 5/6, use a chunk size of 128 KB - 512 KB.
The reason is that modern disks are much faster at reading sequential data than at seeking between blocks.
As I said, to modify a block, the corresponding blocks on the other disks have to be read. On random access, reading big blocks does not take much more time than reading small ones, because the seek time is the limiting factor. And on sequential writes, the data for the second write does not have to be read again, because it is still cached from the read before…
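Using the same example-disk figures as above, a quick comparison of the random-read cost per chunk size shows how little the larger chunks add:

```python
# Cost of one random read = seek + transfer; with the seek dominating,
# a big chunk is almost as cheap as a small one. Example-disk figures.

SEEK = 0.008   # average seek time, s
RATE = 80e6    # sequential transfer rate, bytes/s

for chunk_kb in (16, 128, 512):
    t = SEEK + chunk_kb * 1024 / RATE
    print(f"random read of {chunk_kb:>3} KB: {t * 1000:.1f} ms")
```

A 512 KB read moves 32x the data of a 16 KB read for well under 2x the time, which is why large chunks suit the RAID 5/6 read-modify-write pattern.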