Data Storage

Best eSATA JBOD?

redlandmover writes "I already have an HP Media Server (upgraded processor and memory) whose internal storage has been upgraded to 3.5TB. I'm sure everyone has their favorite backup solution (RAID, WHS, a billion external hard drives, etc.). My question is: what is the best JBOD (Just a Bunch of Drives), eSATA-connected, external hard drive enclosure? (Preferably, at least 4 drives.)"
  • by Facegarden ( 967477 ) on Monday June 22, 2009 @05:50PM (#28429667)

    This isn't quite what you want, but I have a $30 6-drive caddy (with 4 drives atm) and a $70 4-port internal SATA card. I just run long SATA cables to it, but it was cheaper than any single-cable solution I found, so that may not be a bad way to go.

    One thing I noticed though was that I actually have enough room for all 9 of my hard drives inside my case! I may migrate them in.

    And yes, before you say it, that is certainly quite a bit of porn!
    -Taylor

    • Re: (Score:3, Funny)

      by Stickerboy ( 61554 )

      >And yes, before you say it, that is certainly quite a bit of porn!

      I need some quantification here. Put it in layman's terms - how many Libraries of Congress of porn is that, exactly?

  • Duct tape (Score:3, Funny)

    by sakdoctor ( 1087155 ) on Monday June 22, 2009 @05:51PM (#28429683) Homepage

    Duct tape the drives together, then use software RAID JBOD.
    That's what MacGyver would have done.

    • Duct tape the drives together, then use software RAID JBOD. That's what MacGyver would have done.

      Duct tape? Oh heavens no! No, here's what I did: I went down to the local thrift store and bought a few big shelf speakers for ten dollars. Then I took them apart and got the really powerful magnets out. Using these, you can attach the drives to the outside of your case. There's one gotcha though--some cases are aluminum which means you have to attach the magnets and drives to your CRT if you have one. This usually just means a longer cable though.

      The smart thing about this is that the drives are on the outside of the case so they remain cooler than they would in any enclosure.

      If you think a RAID is a backup, you'll be overjoyed with the results of my advice!

      • I never laugh out loud at work, therefore what I did in reaction to your post was simply an uncontrolled spasm of my diaphragm

        Seriously though, that was some frickin funny stuff.

      • You're overthinking the whole thing. The drives themselves have even more powerful magnets already inside, so there is no need to bother with the speakers. Just crack open each drive, and remove one (only one!) of the two neodymium magnets. The drive only needs one ... the other is there for redundancy. Since you are doing RAID somethingorother, you will already have enough redundancy in your storage, and the extra magnets can be used elsewhere.

        [/brilliant advice]
    • Popsicle sticks between the drives, for airflow.

  • Please note that RAID and such are not "backup solutions"! If your FS gets screwed, you lose info.

    Think of a backup solution as independent from the media where the info is kept. Then you decide if you want to use RAID, tapes, etc.

    My backup solution: incremental backups every half-hour, and a full backup once a month.

    Now for the media I use to store the backups: RAID mirroring for the incrementals, and hard drives kept in a safe at the bank, with rotation, for the full backups (no RAID used for fulls).
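
    (A minimal sketch of that schedule, in case it helps to see it concretely: Python driving rsync, with every path and flag here being an assumption rather than the poster's actual setup. Run the first function from cron every half-hour and the second once a month.)

    ```python
    import subprocess
    from datetime import datetime
    from pathlib import Path

    SOURCE = "/home/"                     # hypothetical data to protect
    INCR_ROOT = Path("/mnt/mirror/incr")  # RAID-1 pair for the half-hourly runs
    FULL_DEST = "/mnt/rotation/full/"     # bare drive rotated to the bank safe

    def incremental_backup():
        """Half-hourly: hardlink unchanged files against the previous
        snapshot, so each run only costs the changed bytes."""
        previous = sorted(INCR_ROOT.glob("snap-*"))
        target = INCR_ROOT / datetime.now().strftime("snap-%Y%m%d-%H%M")
        cmd = ["rsync", "-a", "--delete"]
        if previous:
            cmd += ["--link-dest", str(previous[-1])]
        subprocess.run(cmd + [SOURCE, str(target)], check=True)

    def full_backup():
        """Monthly: one complete, self-contained copy on the rotation drive."""
        subprocess.run(["rsync", "-a", "--delete", SOURCE, FULL_DEST], check=True)
    ```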

  • by rei_slashdot ( 558039 ) on Monday June 22, 2009 @06:00PM (#28429813)
    I have the Mediasonic box below, and I can't get the manufacturer to acknowledge or confirm that there is a problem when copying between hard drives in the same enclosure. Windows hangs, and eventually the Event Logs show "device is not connected" or some similar issue. Copying between a drive in the enclosure and the motherboard's SATA drives works fine, but copying between drives inside it always hangs, times out, or becomes inaccessible after a random amount of transfer. Oddly, transfers between drives inside it work without a hitch over slower USB. It's surprisingly well put together without looking tacky, and well priced, but this drive-to-drive copying issue is a pain. It also syncs with your PC's power: if the PC goes off, the box goes to sleep and wakes up when PC power is restored. http://mediasonicinc.com/store/product_info.php?products_id=150 [mediasonicinc.com]
  • Why do you need an enclosure that does JBOD?
    In my opinion you need an enclosure that does 2 things.

    Encloses your drives.
    Provides power (since current eSATA doesn't, LOL).

    Let your system handle the JBOD. Everything supports JBOD. Or, you know, just have them as 4 separate drives and be organized, so you can deal with them as raw drives if need be, and so if one goes dead, it'll be a lot easier to get your shit from the others.

    I have yet to see a multi-drive enclosure that DOESN'T force its shitty controll

    • by ls671 ( 1122017 ) *

      Thanks for the tip. I have never used external enclosures with "shitty controllers", but I have been tempted by them. I've only used file/backup servers that I set up myself on computers running Linux.

      Have you actually tried any of these external enclosures with "shitty controllers" ?

      Details on problems would be fun to hear about...

      • Basically, I don't trust any enclosure to be a hard drive controller as well. Even higher-end boxes. If I want a controller, I'll use my motherboard, or a dedicated storage PC/server. Yes, I have tried many.

        Even if they were trustworthy, I would want to avoid them. It's just one more thing to fail, have to update firmware for, track down drivers for, worry about, etc.

        This is for your backups, right? Take a minimalistic approach. Just get shit that works. Hard drives work. You're just putting them in

    • Can't agree with the masking tape. If you don't peel it off pretty quick, you get an awful residue. Duct tape residue will clean off with a little rubbing alcohol.

      • Have never had that issue.
        And old duct tape residue (several years) will require significant work to remove.

        • I'm basing it on experience years ago when my car windows were broken out. Used some masking tape and some duct tape to cover the windows with plastic until I could get it fixed.

          The duct tape residue came off with a little light solvent. The masking tape put up a real fight.

          Of course, getting wet probably had an impact.

  • Why? (Score:2, Informative)

    by Doug Neal ( 195160 )

    I think Linux and Windows can both do this quite easily in software... but why bother? JBOD is the worst of both worlds when it comes to storage arrays. You have all the risk of losing everything if one drive dies, without gaining the performance benefits that RAID 0's striping gives you. Hard disks are cheap enough for a 2TB RAID 10 array to be affordable.

    Yes, this was quite a predictable comment, but someone had to say it...

    • Re:Why? (Score:4, Informative)

      by caseih ( 160668 ) on Monday June 22, 2009 @06:34PM (#28430349)

      No, that's not correct. JBOD is just that: just a bunch of disks. It has nothing to do with redundancy (or lack of redundancy). What you do with them is completely up to you. You can implement a RAID-Z with them on Solaris (which is actually faster on my enterprise-class disk array than the built-in hardware RAID-6!), Linux RAID-5, RAID-10, or whatever. Except for issues of battery-backed caching, I have come to the opinion that for most low- to middle-end storage needs, a large JBOD and software RAID is the way to go.
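
      (For anyone who hasn't tried it, the Linux version of that recipe is small. A sketch, shelling out from Python; the device names are placeholders, and mdadm --create will destroy whatever is on the disks you point it at, so treat this as illustrative only.)

      ```python
      import subprocess

      # Hypothetical: four bare disks exposed individually by a JBOD enclosure.
      DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      # md computes the parity in software; the enclosure stays a dumb box of disks.
      run(["mdadm", "--create", "/dev/md0", "--level=5",
           f"--raid-devices={len(DISKS)}"] + DISKS)
      run(["mkfs.ext4", "/dev/md0"])
      run(["mount", "/dev/md0", "/mnt/storage"])
      ```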

      • I see that as a good method for tier 2 storage and definitely for backup tier storage.

        I wouldn't want primary storage to use that method for obvious reasons but I haven't really played enough with alternatives to Raid 6 so you've given me something to try! Thanks

      • by adolf ( 21054 )

        large JBOD and software RAID

        Isn't that redundant? What does the above text specify which could not be concisely written with just the words "software RAID"?

        • by caseih ( 160668 )

          No, it's not redundant. JBOD has nothing to do with RAID. JBOD is just raw LUNs (disks) over a bus. You can put them together however you want. Software RAID is the most common thing to use with JBOD-exported LUNs.

          I'm not surprised that you haven't made the distinction, though. JBOD is an enterprise term and tends to be used when working with large external (Fibre Channel) disk arrays, either as a mode of operation, or meaning a chassis of disks without a hardware RAID backplane, over a SCSI bus or Fib

          • by adolf ( 21054 )

            Well, yeah.

            But make no mistake: I know the terms. And it's got nothing to do with the bus used, or whether the disks are multiplexed with LUNs.

            The words "software RAID" make it damn near implicit that there's Just a Bunch Of Disks attached. Therefore, I continue to suggest that software RAID+JBOD is redundant terminology, and that just saying "software RAID" is perfectly descriptive. (What else would you be software RAIDing[1], after all?)

            To use a car analogy: It's like saying "I have a car, with tires

      • Re: (Score:3, Informative)

        by atamido ( 1020905 )

        No that's not correct. JBOD is just that. Just a bunch of disks. Has nothing to do with redundancy (or lack of redundancy).

        This is incorrect. JBOD is similar to RAID 0 without striping, allowing one to use disks of dissimilar size. Some RAID controllers incorrectly use "JBOD" to refer to presenting physical drives directly to the host; however, most RAID implementations will correctly present a JBOD as a single logical volume.

        Please refer to the Wikipedia article on RAID [wikipedia.org].

    • The main reason people choose JBOD is because they have a bunch of differently sized drives, which are not well suited for redundancy or striping.

      In my surviving collection of misc drives, I've got a 40 gig (8 years old), a 200 gig (5 years old), and a 500 gig (9 months old)

      There isn't any conceivably useful redundancy method using these, but I can treat the entire lot as a 740GB backup drive.

      If it's for a home media server, backups and redundancy probably aren't a serious issue.. and performance defina
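
      (A quick check of the arithmetic above, and of why neither striping nor mirroring makes sense for mismatched drives:)

      ```python
      # The parent's surviving drives, in GB.
      drives = [40, 200, 500]

      jbod = sum(drives)                 # concatenation uses every byte: 740 GB
      raid0 = min(drives) * len(drives)  # striping truncates each disk to the smallest: 120 GB
      raid1 = min(drives)                # mirroring all three: 40 GB usable

      print(f"JBOD {jbod} GB, RAID 0 {raid0} GB, RAID 1 {raid1} GB")
      ```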
    • In my case, it's because I don't care if I lose the data. They're rips of DVDs/CDs that I own, so 1 DVD represents 5 minutes of time. In a lot of RAID setups, if you lose a disk, you lose the entire RAID. In others, if you lose the card/motherboard, you lose the entire RAID.

      In that situation, the frustration represented by losing the entire array when a disk (or card) bites the dust is a lot higher than the performance benefits, or the supposed reliability benefit.

      Remember, in a consumer environme

  • by Robotbeat ( 461248 ) on Monday June 22, 2009 @06:05PM (#28429895) Journal

    You can get an external SAS RAID card (4-port, but it acts like one big 1.2 GB/s pipe) for less than $500 that will let you build multiple RAID sets of up to 32 disks per set using true hardware RAID 5, 6, 10, etc. You can even get a battery backup unit for the RAID card cache for $100 (priceless on critical DB systems).

    An external SAS card allows you to connect over a hundred drives through one connection using SAS expanders (some cards support up to 256 devices). Some external SAS RAID/JBOD cards have two SFF-8088 connections, for eight SAS lanes total. That's 2.4 Gigabytes/sec raw. At that rate, it's your PCI-e bus that's usually the bottleneck.

    A lot of SAS expanders are expensive, but Chenbro has some for $300 that spread one x4 SAS cable into 24 or 32 ports, and they can be daisy-chained for more storage. Then buy a nice 24-slot Supermicro 4U chassis with dual-redundant power; that's a little less than $1000. All you need in the chassis is the Chenbro expander; no need for a motherboard.

    If you're really cheap, you can use a $150 external SAS JBOD-only card, but hardware raid really is a must if you have a lot of storage. Plus, a hardware raid can use write-back cache, since the battery backup unit gives it effectively non-volatile RAM. And no, a UPS is NOT a replacement for NVRAM: has your system ever crashed or hung for any reason? I've never had a RAID card hang or crash.

    So, basically, besides the external SAS card, you have:

    24-slot chassis with redundant power: $1000
    chenbro SAS expander: $300
    cables: depends

    That's about $60/slot, plus you have redundant power (and an upgrade route to dual-redundant controllers). You can scale this to hundreds of terabytes, too. Over a petabyte if you have multiple controllers (with raid array rebuilding on one card not affecting rebuilding on another).
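
    (The figures check out; for anyone pricing this out, the arithmetic behind the $60/slot and 2.4 GB/s numbers:)

    ```python
    chassis = 1000   # 24-slot Supermicro 4U with redundant power
    expander = 300   # Chenbro SAS expander
    slots = 24
    print(f"${(chassis + expander) / slots:.0f}/slot")  # $54, "about $60" with cables

    # Raw bandwidth of a dual SFF-8088 card: 8 lanes of 3 Gb/s SAS, and
    # 8b/10b encoding leaves 300 MB/s of payload per lane.
    lanes, mb_per_lane = 8, 300
    print(f"{lanes * mb_per_lane / 1000:.1f} GB/s")     # 2.4 GB/s, as claimed
    ```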

    • BTW, you can use SATA disks with this SAS setup. Also, this is hot-swap.

    • hardware raid really is a must if you have a lot of storage

      No, hardware RAID is a bad idea. You're locked to a proprietary controller and a proprietary on-disk format. ZFS is a much better idea.

      • ZFS won't give you good performance for a large array because your random read speed is basically limited to the equivalent of one drive per raid set. That is unacceptable if you need performance:

        http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSRaidzReadPerformance [utoronto.ca]
        "...adding more disks to a ZFS raidz pool does not increase how many random reads you can do per second."

        • by pyite ( 140350 )

          ZFS won't give you good performance for a large array because your random read speed is basically limited to the equivalent of one drive per raid set. That is unacceptable if you need performance

          Cheap, reliable, fast: Choose two.

          Cheap + Reliable: RAID-Z and cheap drives
          Cheap + Fast: Stripe on a non-ZFS filesystem
          Reliable + Fast: ZFS mirrors on good drives. Go a step further, add L2ARC on SSD (readzillas).

          You can't have your cake and eat it too. That said, RAID-Z and RAID-Z2 perform quite well for most peopl

        • Note that this only applies to RAID-Z and RAID-Z2 pools. ZFS also supports mirrored pools and striped pools (equivalent to RAID 1 and RAID 0, respectively). If you care more about performance than data integrity (e.g. for /tmp, possibly for /var), then a striped pool might be a better storage model than RAID-Z.
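
          (For reference, the layouts being contrasted are one-line zpool commands; the pool and device names below are placeholders:)

          ```python
          import subprocess

          # Placeholder Solaris device names.
          DISKS = ["c1t0d0", "c1t1d0", "c1t2d0", "c1t3d0"]

          LAYOUTS = {
              # capacity-efficient; random reads ~ one disk per vdev
              "raidz": ["raidz"] + DISKS,
              # two mirrored vdevs: half the capacity, better random I/O
              "mirror": ["mirror"] + DISKS[:2] + ["mirror"] + DISKS[2:],
              # plain stripe, no redundancy: /tmp-class data only
              "stripe": DISKS,
          }

          def create_pool(name, layout):
              subprocess.run(["zpool", "create", name] + LAYOUTS[layout],
                             check=True)

          create_pool("tank", "mirror")
          ```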
  • The Rosewill RSV-S8 (Score:5, Interesting)

    by UserChrisCanter4 ( 464072 ) * on Monday June 22, 2009 @06:30PM (#28430269)
    The Rosewill RSV-S8 [rosewill.com] is pretty much exactly what you've described. It's an eSATA enclosure with 8 drive caddies, a power supply, and a fan. It presents the drives to the system as JBOD or one of the various common versions of RAID (implemented in software, I assume). Ignore the comically inflated MSRP; it's $300 on Newegg. It ships with its own eSATA card for compatibility purposes, but I assume it would work with any eSATA adapter that followed the proper specifications. There's also a five drive version available for about $100 less, give or take. I can't speak to the reliability or ease of use, but this sounds like it will fit your requirements.
    • by Zerth ( 26112 ) on Monday June 22, 2009 @07:46PM (#28431617)

      This [geeks.com] is a 5-disk eSATA enclosure for $180. It appears to be similar (Silicon Image bits, single cable w/port multiplier, etc.).

      • by lanner ( 107308 )

        Unless I am mistaken, this device is a re-branded Venus T5 storage enclosure, made by AMS. With Linux, it works well. I've read that FreeBSD doesn't currently support SATA port multipliers (or whatever they're called), though that may be old info. The AMS device comes with Windows software, but I've been told it sucks.

  • The old AT cases had a power supply with a mechanical power switch, rather than a soft-switch like ATX power supplies. Old AT cases and power supplies should be just about free, just strip out the old motherboard and you have a decent, inexpensive solution. Like someone else said, just get long SATA cables and run them directly to the drives. You can bundle them together with zip ties periodically down the length, or use wire loom if you want something a bit neater. You may need molex-to-SATA power adapters, but you will have a very reliable, well-cooled, very cheap solution.
    • Old AT cases and power supplies should be just about free, [...] you will have a very reliable, well-cooled, very cheap solution.

      Two years of constant running could mean that a standard enclosure, consuming less power, is actually cheaper.

  • At $WORK we just got a nice 8-bay rackmount eSATA chassis from them - dual/redundant power supply, two quad-port SAS connectors, about $895 ($679 for the single power supply version). We bought it with 8x 1TB SATA HDs and an Areca RAID card with cables for just over $2200. (It is available as a chassis without cables, cards, or drives.)

  • Well, this is fun (Score:4, Interesting)

    by Master of Transhuman ( 597628 ) on Monday June 22, 2009 @10:26PM (#28433521) Homepage

    OP asks a question about external eSATA enclosures; the entire first page of responses is an argument over whether RAID is backup...

    Here's an ON-TOPIC RESPONSE! Horrors! Take away my EXCELLENT KARMA for this breach of /. protocol!

    I have a client who needed backup for a lot of big video files. We bought an enclosure from PC Pitstop, eight bays each holding 750GB SATA hard drives (1TB wasn't really around last year when we got it) attached to two eSATA cards in the PC controlling the enclosure. We spent a month futzing around trying to get the enclosures to be seen. I forget who made the eSATA controller cards but they sucked - or the enclosure chips sucked.

    So we turned to Burley, the guys who make enclosures mostly for Macs, though they work with PCs too. These guys know their stuff. They told us not to use OEM hard drives in enclosures, because some OEM drives are dumped on the market and don't QUITE work with enclosures; use retail hard drives only. They also sell very good controller cards. The enclosure we got from them worked fine for a year and a half, until last week when one of the drives went dead - no surprise. They aren't cheap, but they are well made, and support (by both email and phone) is very good.

    We also in the last couple of months bought two MicroNet 4-drive eSATA enclosures with 1TB drives from Newegg for use on a Mac Pro. That was a huge mistake: the drives simply weren't seen by the Mac at all. Apparently MicroNet didn't bother to test their drivers when Mac OS X 10.5 came out, and couldn't be bothered to provide support for that. So we attached the enclosures to a Windows PC and they work OK, although occasionally one or more of the drives will disappear and generate "drive not ready for access" messages in the Windows event logs.

    Later, we decided to use those enclosures for iSCSI storage served up to the video lab. So I took one of the video lab PCs that were being replaced by iMacs and installed OpenFiler, the open-source storage server that runs on Linux. The latest rPath Linux kernel saw the drives and the enclosure, no problem. I configured the iSCSI setup and everything seems to be working fine. Interestingly, none of the drives have gone offline like they did with Windows - which means it was Windows' fault, not the drives'. So now I can install an iSCSI client on the two iMacs and serve up 1.8TB of iSCSI storage to each - except Apple doesn't HAVE a Mac OS X iSCSI client, once again demonstrating how Apple isn't ready for the enterprise, since Linux has had one for years. Fortunately, there's a free Mac iSCSI client from another company.

    So my advice is: choose your enclosures and the drives in them and the controller cards carefully. Take notice of what Silicon Image chipsets are involved, since SI pretty much dominates the market for those things and they're not the smartest tech company in the world. Make sure you get retail disks for use in the enclosures. Make sure you can return what you bought for refund or replacement because this stuff is not yet "set and forget".
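
    (Since the parent brings up serving iSCSI from OpenFiler: on a Linux client, the attach side is just two open-iscsi commands; the portal address here is made up. The free Mac client mentioned above does the same two steps through a GUI.)

    ```python
    import subprocess

    PORTAL = "192.168.1.50"  # hypothetical address of the OpenFiler box

    # Ask the target what it exports, then log in; each LUN then shows
    # up on the client as an ordinary /dev/sdX block device.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", PORTAL], check=True)
    subprocess.run(["iscsiadm", "-m", "node", "--login"], check=True)
    ```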

  • Eventually I just bought myself an Antec 1200 and a MIST PSU with modular cables, and loaded it up with a SATA-rich mobo plus a small SATA card for 12x SATA data/power. After an earlier RAID accident caused by a poor warning setup (two disks failed some time apart, but I didn't notice the first one), I do JBOD and manual copies, but you could just as easily do software RAID - the "hardware" RAID on these boards isn't worth it anyway. That way I have a full Linux server I can use for whatever else, too.

    Honestly, if I

  • I picked up one of these guys [newegg.com] for my backup purposes. I filled it with five 1TB drives and set it up in a Linux software RAID5 config. It backs up all of my media that resides on an LVM volume. It's been working out quite nicely so far :). The port multiplier feature is very nice: I only have to run a single eSATA cable for the 5 disks.

"If it ain't broke, don't fix it." - Bert Lantz

Working...