Some FAStT600 Turbo benchmarks & some ServeRAID 6i benchmarks


Hardware Configuration
--------------------------------------------
- IBM x345, 2 * Xeon @ 3GHz
- SCSI RAID controller: ServeRAID 6i (Ultra320 SCSI, PCI-X), disks inside the box
- QLogic QLA2340 PCI to Fibre Channel (FC) host adapter (connects to the IBM FAStT600 Turbo)
- all disks are ~137GB IBM 10K rpm, SCSI and FC

Software
---------
- Linux kernel: 2.6.7
- bonnie++ 1.03
 cmd: bonnie++ -u nobody:nogroup -b -d /a/tmp -s 5g -m category -n 128:20000:16:512
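
 For reference, the options above break down roughly as follows (per the
 bonnie++ 1.03 manual; the /a/tmp path and the "category" label are simply
 this setup's choices):

   -u nobody:nogroup     run the tests as this user:group (needed when started as root)
   -b                    no write buffering: fsync() after every write
   -d /a/tmp             directory on the filesystem under test
   -s 5g                 size of the file(s) for the sequential I/O tests (5GB)
   -m category           machine/label name printed in the report
   -n 128:20000:16:512   file creation test: 128*1024 files of 16..20000 bytes,
                         spread over 512 directories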

What was tested
--------------------------------------------
Multiple RAID configurations and filesystems were tested.
Filesystems tested include xfs (Silicon Graphics), jfs (IBM), ext3 (Red Hat)
and reiserfs (Namesys).
RAID configurations tested were RAID0 and RAID5, on both the SCSI and
Fibre Channel host controllers, with various filesystem combinations.
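
The filesystems were presumably created with their stock mkfs tools; the
commands below are only an illustrative sketch with default options, and
/dev/sdb1 is a placeholder for the logical drive exported by the RAID
controller, not necessarily the device name used in these tests.

  mkfs.xfs /dev/sdb1        # xfs (xfsprogs)
  mkfs.jfs -q /dev/sdb1     # jfs (jfsutils), -q skips the confirmation prompt
  mkfs.ext3 /dev/sdb1       # ext3 (e2fsprogs), same as mke2fs -j
  mkfs.reiserfs /dev/sdb1   # reiserfs (reiserfsprogs)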

The 2-process tests were performed by running two bonnie++ instances at the
same time, so the numbers should be treated with some caution, but they are
useful nonetheless (a sketch of such a run is shown below).
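
A minimal sketch of a 2-process run, assuming two separate subdirectories on
the same filesystem (the exact invocation used here is not recorded):

  # start two bonnie++ instances in parallel and wait for both to finish
  mkdir -p /a/tmp/p1 /a/tmp/p2
  bonnie++ -u nobody:nogroup -b -d /a/tmp/p1 -s 5g -m cat-p1 -n 128:20000:16:512 &
  bonnie++ -u nobody:nogroup -b -d /a/tmp/p2 -s 5g -m cat-p2 -n 128:20000:16:512 &
  wait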

fc refers to the FAStT600 Turbo Fibre Channel array, connected through the QLogic card.
scsi refers to the internal RAID (ServeRAID 6i).

Various filesystem block sizes were tested where applicable, with no
significant performance improvement. The results refer to the default
filesystem block sizes.
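
For reference, a non-default block size is selected at mkfs time; the examples
below are illustrative only (values and device name are placeholders):

  mke2fs -j -b 1024 /dev/sdb1        # ext3 with 1kB blocks
  mkfs.xfs -b size=1024 /dev/sdb1    # xfs with 1kB blocks (default is 4kB)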

For example:
--------------
fc-raid5-xfs-4disks-8kbSegment
means: FAStT600, 4 disks in a RAID5 configuration, 8kB segment size in the RAID
controller settings, formatted with the xfs filesystem.


BENCHMARK RESULTS:
-------------------
bonnie++ output (HTML formatted)
bonnie++ output + some additional tests (dd)
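
The dd tests referred to above are simple sequential-throughput checks; a test
of that kind typically looks like the following (file size, path and the idea
of remounting to clear the page cache are illustrative, not necessarily the
exact procedure used here):

  # sequential write of a 5GB file, timed including the final flush to disk
  time sh -c 'dd if=/dev/zero of=/a/tmp/ddtest bs=1M count=5000 && sync'
  # sequential read of the same file, after clearing the page cache
  # (e.g. by unmounting and remounting the filesystem)
  time dd if=/a/tmp/ddtest of=/dev/null bs=1M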

Conclusions
--------------
The overall impression from these results is that the FAStT600 Turbo
Fibre Channel array performs very poorly despite its price of ~$30k
(with the "Turbo" option). The relatively cheap ServeRAID 6i SCSI
configuration easily outperforms the Fibre Channel setup, even in a
RAID5 configuration(!). Seeing the results I really felt I must have made
some mistake in the tests, but unfortunately that does not seem to be the case.
Please contact me if you get significantly different results.

--------------------------------------------------------------------------

FYI: From IBM's I/O connectivity FAQ (for zSeries):

Question: What is the actual throughput I can expect from 2Gbps link speeds?

Answer: The 2Gbps line speed is the theoretical maximum unidirectional 
bandwidth capability of the link. Fibre channels (be they FCP or FICON) are 
full duplex, meaning that each direction of data flow (100% reads and 100% 
writes) can have a maximum of 2Gbps (200MBps) of bandwidth. This is why you 
hear FICON can achieve 120MBps of mixed reads and writes on a 100MBps link.

The actual throughput of the 2Gbps link (whether it is measured in I/O 
operations per second, or MBps) will depend on the type of workload, fiber 
infrastructure, and storage devices in place. For maximum benefit, the 
end-to-end connection should be 2Gbps capable. For example, with a zSeries 
FICON Express channel through a 2Gbps capable FICON director to two host 
adapters on the latest ESS Model 800, about 150MBps can be achieved with 100% 
reads and about 170 MBps with a mixture of reads and writes (using highly 
sequential large block sizes). 



Spiros Ioannou