
 



Data Storage Hardware Hacking

Fibre Channel Storage? 119

Dave Robertson asks: "Fibre channel storage has been filtering down from the rarefied heights of big business and is now beginning to be a sensible option for smaller enterprises and institutions. An illuminating example of this is Apple's Xserve Raid which has set a new low price point for this type of storage - with some compromises, naturally. Fibre channel switches and host bus adapters have also fallen in price but generally, storage arrays such as those from Infortrend or EMC are still aimed at the medium to high-end enterprise market and are priced accordingly. These units are expensive in part because they aim to have very high availability and are therefore well-engineered and provide dual redundant everything." This brings us to the question: Is it possible to build your own Fibre Channel storage array?
"In some alternative markets - education for example - I see a need for server storage systems with very high transaction rates (I/Os per second) and the flexibility of FC, but without the need for very high availability and without the ability to pay enterprise prices. The Xserve Raid comes close to meeting the need but its major design compromise is to use ATA drives, thus losing the high I/O rate of FC drives.

I'm considering building my own experimental fibre channel storage unit. Disks are available from Seagate, and SCA to FC T-card adapters are also available. A hardware raid controller would also be nice.

Before launching into the project, I'd like to cast the net out and solicit the experiences and advice of anyone who has tried this. It should be relatively easy to create a single-drive unit similar to the Apcon TestDrive or a JBOD, but a RAID array may be more difficult. The design goals are to achieve a high I/O rate (we'll use postmark to measure this) in a fibre channel environment at the lowest possible price. We're prepared to compromise on availability and 'enterprise management features'. We'd like to use off the shelf components as far as possible.

Seagate has a good fibre channel primer, if you need to refresh your memory."
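For reference, the PostMark benchmark mentioned above is driven by a small command script read from stdin; a minimal run might look like the sketch below (the mount point and workload sizes are placeholders, not from the original post):

```shell
# PostMark reads commands from stdin; point it at the FC-backed
# filesystem under test and size the workload to exceed the cache.
postmark <<'EOF'
set location /mnt/fctest
set number 10000
set transactions 20000
run
quit
EOF
```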
This discussion has been archived. No new comments can be posted.


  • by heliocentric ( 74613 ) * on Sunday January 29, 2006 @07:11PM (#14595190) Homepage Journal
    Sun's A5200s are cheap on eBay, and you can pick up something like a 420r or a 250 to drive the thing. Put a qfe card in with Sun Trunking (now free for Solaris 10) and it'll serve up your files super speedy, all for a very reasonable price.

    My friend, recursive green, has three A5200s in his basement right now, one stores his *ahem* photo collection and is web accessible.

    I think new(er) fibre gear is getting cheaper; what was often high-end, data-center-only, big-$$ equipment a few years ago hits the "at home" price point now.
  • eSATA (Score:1, Interesting)

    by Anonymous Coward on Sunday January 29, 2006 @07:15PM (#14595208)
    Would the new eSATA external SATA interface be fast enough for your purposes?
  • Try areca (Score:3, Interesting)

    by Anonymous Coward on Sunday January 29, 2006 @07:23PM (#14595236)
    You can get a SATA-II to FC adapter from Areca; these are pretty expensive, but the nice thing is that you don't need a motherboard in your case. Combine it with a Chenbro 3U 16-bay case and you have a relatively affordable setup.

    http://www.areca.com.tw/products/html/fibre-sata.htm [areca.com.tw]
  • Re:Try areca (Score:2, Interesting)

    by Loualbano2 ( 98133 ) on Sunday January 29, 2006 @07:36PM (#14595290)
    Where can you go to order these? I did a Froogle search and got nothing, and the "where to buy" section doesn't seem to have any websites to look at prices.

    -Fran
  • by Big Jason ( 1556 ) on Sunday January 29, 2006 @07:57PM (#14595390)
    Plus you get a free Veritas license, at least on Solaris (sparc). Don't know if it works on x86.
  • by Bert64 ( 520050 ) <bert AT slashdot DOT firenzee DOT com> on Sunday January 29, 2006 @08:11PM (#14595451) Homepage
    Aren't the A5200 arrays JBODs? Or do they do hardware RAID like the A1000 does...

    I need something that will do RAID5 in hardware, and show up to the OS as a single device, just like the A1000 does... I considered an A5200 but I was told I'd need to use software RAID on it.
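For what it's worth, presenting a JBOD to the OS as a single RAID5 device in software is a short job on Linux with mdadm; a minimal sketch, with placeholder device names:

```shell
# Build a software RAID5 set over four FC disks; the OS then sees
# a single block device, /dev/md0, much like a hardware controller.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat   # watch the initial parity resync
```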
  • by adam872 ( 652411 ) on Sunday January 29, 2006 @11:21PM (#14596094)
    Or, you can get OpenSolaris and use ZFS on that same array. It's a filesystem and a volume manager in the same piece of software. Best of all, it's free, and I think ZFS is open source too.
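A minimal sketch of that ZFS setup, assuming four disks with placeholder Solaris device names; raidz gives RAID5-style parity inside ZFS itself:

```shell
# One command creates the pool (RAID layout) and a mounted filesystem;
# no separate volume manager or mkfs step is needed.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool status tank   # verify the raidz vdev assembled
```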
  • Re:eSATA (Score:4, Interesting)

    by TinyManCan ( 580322 ) on Sunday January 29, 2006 @11:23PM (#14596098) Homepage
    eSATA is getting closer, but I believe the real long term answer is going to be iSCSI.

    I used to be really against iSCSI, as the native stacks on various OSes just did not deal with it well. By that I mean that a 50 MB/s file transfer would consume almost 100% of a 3 GHz CPU. Also, the hard limit on gig-e transfers of 85 MB/s (TCP/IP overhead + iSCSI overhead) was just too low.

    Now, that has all changed. You can get TCP/IP Offload Engines for just about every OS (I don't work with Windows, so I don't know what the status of that is), and 10 gigabit Ethernet has become financially reasonable.

    For instance, the T210-cx [chelsio.com] is around $800, and will deliver a sustained 600 MB/s (not peak or any other crap). Also, the latency on a 1500 MTU 10 Gb/s Ethernet fabric is something to behold.

    I think by the end of this year, we will see iSCSI devices on 10GbE that out-perform traditional SAN equipment in the 2 Gb/s environment, in every respect (including price), by a large margin. 4 Gb/s SAN could come close, but I still think hardware-accelerated iSCSI has a _ton_ of potential.

    If I were starting a storage company today, I would be focusing exclusively on the 10 Gb/s iSCSI market. It is going to explode this year.
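A back-of-the-envelope check on that 85 MB/s figure, assuming standard header sizes and a 1500-byte MTU with no jumbo frames: the wire-level TCP payload ceiling on gigabit Ethernet works out to about 119 MB/s, so iSCSI PDU headers and host-side processing account for the remaining gap.

```python
# Theoretical TCP payload ceiling for iSCSI over gigabit Ethernet.
# Per-frame wire overhead: preamble+SFD (8) + Ethernet header (14)
# + FCS (4) + inter-frame gap (12) = 38 bytes.
LINE_RATE_BPS = 1_000_000_000
MTU, ETH_OVERHEAD = 1500, 38
IP_HDR, TCP_HDR = 20, 20

tcp_payload = MTU - IP_HDR - TCP_HDR     # 1460 payload bytes per frame
wire_frame = MTU + ETH_OVERHEAD          # 1538 bytes on the wire
efficiency = tcp_payload / wire_frame    # ~0.949

ceiling_mb_s = LINE_RATE_BPS / 8 * efficiency / 1e6
print(f"TCP payload ceiling: {ceiling_mb_s:.0f} MB/s")  # ~119 MB/s
```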

  • by LordMyren ( 15499 ) on Monday January 30, 2006 @12:15AM (#14596231) Homepage
    You're right and you're wrong. I myself started with T-cards and 36 GB Cheetahs. It was amazing after a life of cheap, low-performance IDE (I was a college student at the time). But shit kept breaking, the hacks kept getting worse and worse, the duct-tape bill started getting too big, and I just got tired of it. Drives would go offline and there was no hotswap support... kiss your uptime goodbye.

    So I did exactly that: went on eBay and bought a pair of Photons. Only 5100s, but 28 drives was pretty nice.

    I was pretty underwhelmed. They were a steal when I got them (well, a "good" price when you factor in shipping), but the performance was never there, even with really good 10K.6 Cheetahs. RAID never helped, no matter how it was configured. It just didn't seem that useful.

    Plus the A5200's weigh 125lbs and hauling them between dorm rooms proved less than fun.

    And even locked in my basement closet, I could hear the roar of the two A5100s. I'd been "meaning" to get rid of them for a while, but now that I'm changing states... it was finally time. I sold 'em on craigslist for $280 for both. Same as I bought 'em for, and that includes shipping.

    I dunno. If I were anyone with a brain, I'd wait another year for SAS to go ape-shit on everyone. The enclosure/host-controller split is a smart breakdown that'll really help beat away the single-vendor solution... the reason everyone can charge so much for hardware now is that everything is one unit: the enclosures, the controller, it's a big package with a nice margin. When XYZ company can come along and sell you a 24-drive enclosure for pennies that you can plug into a retail SAS controller... it's a game changer. Just watch the ridiculous margins drop.

    If you need something now, just get SATA RAID. Intel's new I/O processor is amazing; it'll give you really nice performance. But otherwise, I'd say wait for SAS. I suppose it's still more expensive than a pair of A5100s, but I'd wager the performance will be better.

    As a side note, I sometimes wonder whether the fibre cabling I bought was bad. I really couldn't sustain more than 40 MB/s even doing XFS linear copies, even with 14 drives dedicated to the task. I'm not sure if bad cabling would've given me some kind of overt error, or might have just quietly degraded my performance.

    Myren
  • by nuxx ( 10153 ) on Monday January 30, 2006 @01:46AM (#14596519) Homepage
    I have done this using a Venus-brand 4-drive enclosure, some surplus Seagate FC drives from eBay, a custom-made backplane, a Mylex eXtremeRAID 3000 controller, and a 30m HSSDC-DB9 cable from eBay.

    I located the array in the basement, and the computer was in my office. I had wonderful performance and no disk noise, which was quite nice...

    If you want photos, take a look here [nuxx.net].

    Also, while I sold off the rest of the kit, I've got the HSSDC DB9 cables left over. While they tend to go for quite a bit new (they are custom AMP cables) I'd be apt to sell them for cheap if another Slashdotter wants to do the same thing.
