Fibre Channel Storage?

Dave Robertson asks: "Fibre Channel storage has been filtering down from the rarefied heights of big business and is now beginning to be a sensible option for smaller enterprises and institutions. An illuminating example of this is Apple's Xserve RAID, which has set a new low price point for this type of storage - with some compromises, naturally. Fibre Channel switches and host bus adapters have also fallen in price, but generally, storage arrays such as those from Infortrend or EMC are still aimed at the medium- to high-end enterprise market and are priced accordingly. These units are expensive in part because they aim to have very high availability and are therefore well-engineered and provide dual redundant everything." This brings us to the question: is it possible to build your own Fibre Channel storage array?
"In some alternative markets - education for example - I see a need for server storage systems with very high transaction rates (I/Os per second) and the flexibility of FC, but without the need for very high availability and without the ability to pay enterprise prices. The Xserve Raid comes close to meeting the need but its major design compromise is to use ATA drives, thus losing the high I/O rate of FC drives.

I'm considering building my own experimental fibre channel storage unit. Disks are available from Seagate, and SCA to FC T-card adapters are also available. A hardware raid controller would also be nice.

Before launching into the project, I'd like to cast the net out and solicit the experiences and advice of anyone who has tried this. It should be relatively easy to create a single-drive unit similar to the Apcon TestDrive or a JBOD, but a RAID array may be more difficult. The design goals are to achieve a high I/O rate (we'll use postmark to measure this) in a Fibre Channel environment at the lowest possible price. We're prepared to compromise on availability and 'enterprise management features'. We'd like to use off-the-shelf components as far as possible.

Seagate has a good Fibre Channel primer, if you need to refresh your memory."
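As background on the postmark methodology mentioned above: postmark exercises a pool of small files with a mix of creates, reads, appends, and deletes, and reports transactions per second. Below is a minimal, hypothetical Python stand-in in the same spirit - not the real tool - useful as a sanity check when postmark itself isn't handy. The file counts and sizes are arbitrary assumptions.

```python
#!/usr/bin/env python3
"""Sketch: a postmark-style small-file transaction benchmark.

Not the real postmark tool - just a rough stand-in that builds a pool
of small files, then times a mix of read/append/create/delete
transactions against it. All counts and sizes are assumptions.
"""
import os
import random
import time

POOL_DIR = "postmark_pool"   # point this at the filesystem under test
NUM_FILES = 1000             # initial file pool
NUM_TRANSACTIONS = 5000      # mixed operations to time
FILE_SIZE = 4096             # bytes per file

os.makedirs(POOL_DIR, exist_ok=True)
files = []

# Build the initial pool of small files.
for i in range(NUM_FILES):
    path = os.path.join(POOL_DIR, f"f{i:06d}")
    with open(path, "wb") as f:
        f.write(os.urandom(FILE_SIZE))
    files.append(path)

next_id = NUM_FILES  # monotonically increasing names avoid collisions

start = time.time()
for _ in range(NUM_TRANSACTIONS):
    op = random.choice(("read", "append", "create", "delete"))
    if op == "read":
        with open(random.choice(files), "rb") as f:
            f.read()
    elif op == "append":
        with open(random.choice(files), "ab") as f:
            f.write(os.urandom(512))
    elif op == "create":
        path = os.path.join(POOL_DIR, f"f{next_id:06d}")
        next_id += 1
        with open(path, "wb") as f:
            f.write(os.urandom(FILE_SIZE))
        files.append(path)
    elif op == "delete" and len(files) > 1:
        os.remove(files.pop(random.randrange(len(files))))
elapsed = time.time() - start

print(f"{NUM_TRANSACTIONS / elapsed:.0f} transactions/sec")
```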
  • by postbigbang ( 761081 ) on Sunday January 29, 2006 @09:34PM (#14595739)
    The answer has a lot to do with the I/O goals you mentioned. Let me try to answer this as a taxonomy. It rambles a bit, but bear with me.

    Case One

    Let's say this array is to be used for a single application that needs lots of pull and is populated initially from other sources, with a low delta of updates - in other words, largely reads versus writes. Caching may help; if so, you can tune the app (and the OS) to get fairly good performance from SATA RAID or from FC JBODs in a RAID 0 or 5 configuration. (Strictly speaking, there is no "RAID 0"; it's just a striped array without redundancy/availability, and the name is therefore a misnomer.) A minimal striping sketch follows this comment.

    Case Two

    Maybe you need a more generalized SAN, as it will be hit by a number of machines running a number of apps. You'll need better controller logic. You'll likely initially need a SAN that presents a single SCSI LUN, where you can log on to the SAN via IP for external control of the can that holds the drives (and controls the RAID level, and so on). This is how the early Xserve RAID worked, and how many small SAN subsystems work. Here, the I/O problems show up in different places -- mostly at the LUN, when the array is being hit by multiple requests from different apps connected via (hopefully) a non-blocking FC switch (think an old eBay-purchased Brocade SilkWorm, etc.). SCSI won't necessarily help you much... and a SATA array has the same choke point at the LUN. Contention is the problem here; delivery is a secondary issue unless you're looking for superlative performance with calculated streams.

    Case Three

    Maybe you're streaming or rendering and need concurrent paths in an isochronous arrangement with low latency but fairly low data rates -- just many of them concurrently. Studio editing, rendering farms, etc. Here's where a fat server connecting a resilient array works well. Consider a server that uses a fast, cached, PCI-X controller connected to a fat line of JBOD arrays. The server costs a few bucks, as does the controller, but the JBOD cans and drives are fairly inexpensive and can be high-duration/streaming devices. Make sure the server's PCI-X bus isn't trampled by a slow, non-PCI-X GbE controller, as non-PCI-X devices will slow down the bus. You also get the flexibility of hanging additional items off the FC buses, then adding FC switches as the need arises. At some point the server's cache stops helping and the server becomes its own bottleneck -- but by then you'll have proven your point and will have what now amounts to a real SAN with real switches and real devices.

    The SATA vs. SCSI argument is somewhat moot. Unless you cache the SATA drives, they're simply two-thirds the possible speed (at best) of a high-RPM SCSI/FC drive. It's that simple. uSATA will come one day, then uSATA/hi-RPM... and they'll catch up until the 30Krpm SCSI drives appear... with higher-density platters... and the cost will shrink some more.

    I've been doing this since a 5MB hard drive was a big deal. SCSI drives will continue to lead SATA for a while, but SATA will eventually catch up. In the meantime, watch the specs and don't be afraid of configuring your own JBOD. And if you want someone to yell at, the Xserve RAID is as good as the next one... except that it has the Apple sex appeal that seems a bit much on a device that I want to hide in a rack in another building.
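The Case One advice boils down to: stripe for throughput and let the OS cache do the rest. As one hedged illustration on Linux, here is a sketch that assembles a software striped array with mdadm and raises its readahead for a read-mostly workload. The member disk names, chunk size, and readahead value are assumptions to adapt, not recommendations, and creating the array destroys any data on the member disks.

```python
#!/usr/bin/env python3
"""Sketch: build a striped (RAID 0) md array and raise its readahead.

Assumptions (adjust for your hardware): member disks /dev/sdb../dev/sde,
target array /dev/md0, and mdadm/blockdev on the PATH. Run as root;
this DESTROYS data on the member disks, and mdadm may prompt for
confirmation if it sees existing filesystems.
"""
import subprocess

MEMBERS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders
ARRAY = "/dev/md0"

def run(cmd):
    """Echo and execute a command, raising on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the striped array; a larger chunk suits big sequential reads.
run(["mdadm", "--create", ARRAY,
     "--level=0", "--chunk=256",
     f"--raid-devices={len(MEMBERS)}"] + MEMBERS)

# Raise readahead on the array device to favor a read-mostly workload.
run(["blockdev", "--setra", "8192", ARRAY])

# Show what the kernel thinks of the array.
run(["mdadm", "--detail", ARRAY])
```

Swapping --level=0 for --level=5 adds parity protection at the cost of write performance, matching the RAID 0 vs. 5 trade-off described above.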
  • Try AoE instead (Score:3, Insightful)

    by color of static ( 16129 ) <smasters&ieee,org> on Sunday January 29, 2006 @09:41PM (#14595767) Homepage Journal
    Fibre Channel just seems to have too high a cost of entry these days (or maybe it always has :-). It's not as bad today with SATA being used on the storage arrays, but it is still hard to compete with the other emerging standards. I've been using AoE for a little while now and have been impressed with the bang for the buck.
        A GigE switch is cheap, and a GigE port is easy to add, or you can use the existing one on a system. AoE sits below the IP stack, so there is little communication overhead, and it looks like a SATA drive in most ways. The primary vendor's appliance (www.coraid.com) will take a rack full of SATA drives and make them look like one drive via various RAID configs.
        Yeah, FC is faster, but how many drives are going to be talking at once? Are you really going to fill the GigE link and need FC to alleviate the bottleneck? If you are, then FC is probably not the right solution for you anyway.
        Your mileage may vary, but I expect anyone will get comparable results for the price, and many will get excellent results overall.
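For anyone tempted by the AoE route, here is a minimal sketch of attaching an AoE target from a Linux initiator, assuming the aoe kernel module and the aoetools package are installed. The shelf/slot address (e0.0), filesystem, and mount point are all assumptions; a Coraid shelf exporting a RAID set would appear the same way.

```python
#!/usr/bin/env python3
"""Sketch: attach an AoE (ATA over Ethernet) target on a Linux initiator.

Assumes the aoe kernel module and aoetools are installed, and that a
target is exported as shelf 0, slot 0 (device e0.0) - the address,
filesystem, and mount point below are assumptions. Run as root.
"""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["modprobe", "aoe"])          # load the AoE initiator driver
run(["aoe-discover"])             # broadcast for targets on the LAN
run(["aoe-stat"])                 # list discovered AoE devices

DEV = "/dev/etherd/e0.0"          # shelf 0, slot 0 (assumed address)
MNT = "/mnt/aoe"

run(["mkfs.ext3", DEV])           # destroys data on the target!
run(["mkdir", "-p", MNT])
run(["mount", DEV, MNT])
```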
  • How about iSCSI? (Score:3, Insightful)

    by MikeDawg ( 721537 ) on Sunday January 29, 2006 @09:58PM (#14595821) Homepage Journal
    Depending on your company's needs, an iSCSI [wikipedia.org] solution could be more viable. There are some very good units out there with loads of different RAID setups. There are some trade-offs vs. Fibre Channel, such as speed vs. cost. I've seen quite a bit of data handled to/from iSCSI arrays quite nicely. However, the companies I worked for had no true need for the blinding speed - or the extremely high cost - of FC arrays.
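In the same spirit as the AoE sketch above, here is a minimal sketch of attaching an iSCSI LUN with the open-iscsi initiator's iscsiadm tool. The portal address and target IQN are placeholders for your own array's values.

```python
#!/usr/bin/env python3
"""Sketch: discover and log in to an iSCSI target with open-iscsi.

Assumes the open-iscsi initiator (iscsiadm) is installed and an iSCSI
array is listening at the portal address below. The portal IP and the
target IQN are placeholders - substitute your own. Run as root.
"""
import subprocess

PORTAL = "192.168.1.50:3260"                  # assumed array address
TARGET = "iqn.2006-01.com.example:array0"     # placeholder IQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Ask the portal which targets it exports (SendTargets discovery).
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Log in; the LUN then appears as an ordinary SCSI disk (e.g. /dev/sdX).
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

# List active sessions to confirm the login.
run(["iscsiadm", "-m", "session"])
```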
