I think in this test we've shown that the 3ware's hardware RAID5 is totally inferior to Linux 2.6.x's software RAID5. The only advantage to using it is that when you're dual booting, it will work the same in Windows and Linux. For a server it's not even an option, as you can see from the dips in the concurrent numbers. It might be a fairer fight if the 3ware could use chunk sizes larger than 64KB.
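For comparison, software RAID built with mdadm lets you pick the chunk size at creation time. A minimal sketch; the device names and the 256KB chunk size are hypothetical choices, not values from the tests above:

```shell
# Create a 4-disk software RAID5 array with a 256KB chunk size
# (device names are placeholders; --chunk takes kibibytes)
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=256 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Confirm the chunk size the array was built with
mdadm --detail /dev/md0 | grep -i chunk
```

The chunk size is fixed at creation, so benchmarking different values means rebuilding the array each time.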
Well, I think we see here that each of these has their niche.
EXT3 is extremely robust and mature. Its simplicity makes it easy to recover from failures and metadata problems. However, it got totally blown away in almost every test! The notable exception is directory indexing for metadata management.
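The directory-index feature can be switched on for an existing EXT3 filesystem with tune2fs. A sketch, assuming an unmounted filesystem and a hypothetical device name:

```shell
# Enable hashed b-tree directory indexes on an existing EXT3 filesystem
# (/dev/sdb1 is a placeholder; run this on an unmounted filesystem)
tune2fs -O dir_index /dev/sdb1

# Rebuild indexes for directories created before the feature was enabled
e2fsck -fD /dev/sdb1
```

New filesystems can get the same feature at mkfs time instead, which skips the e2fsck pass.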
XFS takes the performance numbers in almost every test. The big exception here is metadata creation. As one can see in the creation graph, ReiserFS and EXT3 with directory indexes blow it out of the water. If you dig into the data a bit, you'll see that XFS *really* gets trounced on delete speed. SGI has been promising to fix this for a while. I hope they, or somebody in the community, do, as it makes one think twice about using XFS for something like Maildir storage.
JFS is relatively new, and I threw it in here because I like the concept. It performs well in all of the tests, though XFS still edges it out in most of them. They definitely did something to improve its performance between 2.6.1 and 2.6.3. Here's hoping that JFS continues to improve, and proves as stable as XFS.
ReiserFS is the red-headed stepchild of the Linux filesystems. It is totally stable, but some people are still sore from early problems with the recovery tools and just plain weirdness. I can't say I've ever had problems, and for certain applications there is *nothing* better. For anything where metadata speed is as important as, or more important than, data throughput, Reiser wins hands down. Note though, that under high concurrent load, it definitely had problems writing data. Maybe this is just a buffer problem?
I wanted to try out RAID6, because in certain instances I think it would be nice. For instance, remote servers where it might take a day or two to get replacement disks in. Anyway, the second parity block costs you about a third of the usable space RAID5 would give you on these four disks. How about performance? Well you see the XFS and JFS numbers. They don't lie: this puppy is slower. It makes sense on the writes, but why the reads? I guess you need more than four disks to spread out the actual data reads and make them faster.
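The space cost is simple arithmetic: RAID5 reserves one disk's worth of parity and RAID6 reserves two, so with N disks you keep N-1 or N-2 disks of usable capacity. A quick sketch with four hypothetical 250GB disks:

```shell
# Usable capacity: RAID5 keeps (N-1) disks, RAID6 keeps (N-2)
# Four 250GB disks (the size is a made-up example)
echo $(( (4 - 1) * 250 ))   # RAID5 usable GB: 750
echo $(( (4 - 2) * 250 ))   # RAID6 usable GB: 500
```

The relative overhead shrinks as you add disks, which is part of why RAID6 makes more sense on bigger arrays.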
Well at first I was going to throw LVM out the window. HALF the read performance? But then I did some checking and realized the RAID5 was actually going really slow on the 2.6.3 kernel. So I upgraded to 2.6.4 (shown as K4 in the test names). WOW! This made everything faster! LVM performed a lot better. Still a bit disappointing on raw read performance, but much better than before. And the concurrent speed is actually better than just one process. Maybe this is LVM's enterprise roots showing.
Well, there is definitely reason to read those changelogs when a new kernel comes out. Somewhere before 2.6.3, they changed some things in the SCSI layer that seemed to have allowed the 3ware to do a lot of unaligned reads (that is purely a hypothesis based on reading the changelogs, not a fact). 2.6.4 re-introduced the 512-byte alignment. Hence the huge jump in read performance.