Data Storage Hardware

Terabyte Storage Solutions? 574

DeMechman asks: "As many on Slashdot may know, storage is one thing you can never have enough of. Given the current situation with CD/DVD rot (personally, I can attest to a 10% attrition rate), hard drives in a RAID configuration seem to be a better and more economical solution. If you own more than fifty CDs/DVDs, it can be a daunting task to find a file. I am wondering if anyone has found a hardware solution that can inexpensively be set up to handle 10 or more 250GB HDDs in a RAID configuration. Primarily, has any case manufacturer tackled this niche market yet?"
This discussion has been archived. No new comments can be posted.

  • by daveschroeder ( 516195 ) * on Thursday July 29, 2004 @05:47PM (#9837322)
    I'd say that $2.82/GB, for a well-built, well-designed 14-drive 3U RAID (0, 1, 3, 5, 0+1, 10, 30, 50) hardware cabinet with dual 2Gb/s Fibre Channel connectivity, dual 100Mbit Ethernet and serial for monitoring and management, excellent Java setup, management, and monitoring software, redundant hot-swappable power supplies and fans, and that works and is qualified for use with Windows, Linux, and Mac OS X, qualifies as "inexpensively". But that's just me.

    http://www.apple.com/xserve/raid/ [apple.com]

    Academic prices for:

    1.00TB - $5399
    1.75TB - $6749
    3.50TB - $9899
  • by oostevo ( 736441 ) on Thursday July 29, 2004 @05:47PM (#9837324) Homepage
    It's not RAID, but you could buy a 1-terabyte drive [lacie.com] from LaCie.
  • by Anonymous Coward on Thursday July 29, 2004 @05:47PM (#9837326)
    Why buy a specialized solution when the easiest solution is usually in your basement (or under your desk, or stacked up against a wall somewhere)? Grab a few PII/PIII boxes and load them up with drives.
  • Many have (Score:3, Informative)

    by Guspaz ( 556486 ) on Thursday July 29, 2004 @05:48PM (#9837335)
    Apple is one of the cheapest, at $6000 (with drives).

    See page here. [apple.com]
  • Intel SC5200 5U (Score:2, Informative)

    by mtwalkup ( 745000 ) on Thursday July 29, 2004 @05:50PM (#9837367)
    http://www.intel.com/support/motherboards/server/chassis/sc5200/index.htm Just bought one myself. You can get em at: http://www.bellcomputer.com Let em know G Force Hosting sent ya!
  • Easy these days. (Score:5, Informative)

    by ron_ivi ( 607351 ) <sdotno@cheapcomp ... m ['ces' in gap]> on Thursday July 29, 2004 @05:51PM (#9837370)
    With 250GB hard drives for $179 [frys-electronics-ads.com] these days, a terabyte is easily split between two computers.

    I have a TB here, and rather than raid, I decided to do a nightly "rsync" mirror to a "yesterday" partition.

    The two advantages of the nightly rsync over RAID are

    1. It protects against user error, too. If I make a bad edit, I can always 'diff' against /yesterday/home/me/...
    2. It makes upgrades of both hardware and software easy. Since my live backups are exactly that (live, and tested every day), one machine can be fully upgraded while the other acts as the primary for a while.
    Important data also gets backed up to another large HD in my car and DVDs in a safe occasionally, to protect against a fire or burglars.
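
    A minimal sketch of the nightly rsync mirror described above, run from cron (paths and the /yesterday mount point are illustrative assumptions, not necessarily the poster's exact setup):

        #!/bin/sh
        # /etc/cron.daily/mirror -- nightly mirror of the live data
        # -a preserves permissions/ownership/times; --delete keeps /yesterday an exact mirror
        rsync -a --delete /home/ /yesterday/home/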
  • by compwizrd ( 166184 ) on Thursday July 29, 2004 @05:51PM (#9837373)
    you can "cheaply" buy 3U rack mount cases that hold 15 drives in hotswappable SATA or SCSI cages up front. Combined with a 3ware 9500-12, and leave 3 cages empty(or spare drives just not cabled up), this will give you 2.75 TB in each unit of raid5 storage. If you were really hard up for space, you could use a pair of 9500-8's and this would give you 3.25 TB per unit. Some 4U units hold 16 drives, which gives you the full 3.5TB in 2 x raid5 arrays.
  • Terabyte Storage (Score:5, Informative)

    by Steffan ( 126616 ) on Thursday July 29, 2004 @05:51PM (#9837380)
    I have 8 x 160GB Maxtor drives in a RAID5 array. It's fast and relatively inexpensive [Fry's Electronics was recently selling the 160s for $69/ea.].

    The 160GB drives used to come with a Maxtor [Promise] ATA-133 card. Two of those will support eight drives. It's not the optimal arrangement, because each channel has two drives sharing the bus, but it doesn't seem to affect performance too much since the data is striped across all of the drives. I'm assuming it stripes in order, so you'd want to stagger the drives such that 1 & 2, 3 & 4 are not on the same controller.

    Output of df -h:
    /dev/md2   1.0T   521G   522G   50%   /ext
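
    For reference, a hedged sketch of how a comparable 8-drive Linux software RAID 5 (md) array could be put together; the device names and ext3 filesystem are illustrative assumptions, not necessarily this exact setup:

        #!/bin/sh
        # build an 8-disk RAID 5 md array from one partition on each drive
        mdadm --create /dev/md2 --level=5 --raid-devices=8 \
            /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 \
            /dev/hdi1 /dev/hdj1 /dev/hdk1 /dev/hdl1
        # put a filesystem on the array and mount it where df shows it above
        mkfs.ext3 /dev/md2
        mount /dev/md2 /ext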

    The cost to assemble something like this?

    ~ $600.00

    8 x $70 for the 160GB drives
    2 x $20 ATA-133 controllers

    The biggest issue is that there is no easy way to back up the array. You could use RAID 6 and have two drives' worth of parity info, but it still leaves you vulnerable to a catastrophic hardware (or building) failure.

    Anyone have any ideas on how to back up 1TB in a home environment? i.e., not $3000 tape drives & $200 tapes

  • by rjstanford ( 69735 ) on Thursday July 29, 2004 @05:51PM (#9837382) Homepage Journal
    It gets even cheaper if you want more than one (or other Apple equipment). If you're a development shop, sign up for ADC. The first fully loaded RAID array is discounted about the same amount as the ADC membership fee. The second through nth are considerably cheaper.
  • by Anonymous Coward on Thursday July 29, 2004 @05:52PM (#9837388)
    What a rip off!!!

    Go buy a Lian Li case, 8 x 200GB Maxtor hard drives, and a 3ware RAID controller.

    Controller $500
    Drives $150 each
    Case $150

    Total for 1.4TB = $1850

    With 400gb drives maybe $3000 for 2.8TB
  • by DaGoodBoy ( 8080 ) on Thursday July 29, 2004 @05:52PM (#9837393) Homepage
    Good IDE hardware RAID controllers with open source drivers. The array appears as a single SCSI drive to Linux. We swear by them.
  • by lukewarmfusion ( 726141 ) on Thursday July 29, 2004 @05:55PM (#9837439) Homepage Journal
    He doesn't want his 50 CDs to "rot." For giggles, let's do some math:

    50 CDs * 700 MB = 35 GB
    50 DVDs * 4.7 GB = 235 GB

    It would take 250 DVDs (all FULL!) to get you to that terabyte. But you want to put ten 250GB drives together... so you want 4 drives (for the space) and six drives for redundancy.

    Expect to put down $5,000+. Or buy a 250GB drive and just store them on there. Buy two, and use the second one as a backup of the first. Total cost? $400.

    If you're a home user - don't go overboard. If you're a corporate user that's just trying to cut corners (and therefore cost) then don't shortchange yourself (or your company).
  • by psych-major ( 767984 ) on Thursday July 29, 2004 @05:56PM (#9837449)
    www.raidweb.com Bought one of these at my previous employer and we really liked it.
  • by tjasond ( 680156 ) on Thursday July 29, 2004 @05:57PM (#9837464)
    I use a Hard Drive Enclosure [newegg.com] for backing up files. With IDE HDDs getting less and less expensive, picking one of these versatile enclosures up for less than $50 is a good value. I own a DVD burner but rarely use it for data storage since the enclosure is way more convenient. Now as far as 10 250GB drives in a RAID configuration, how redundant do you need your data to be? Or is it that you're just overly cautious after having your backup DVDs fail? Just curious.
  • by mr. methane ( 593577 ) on Thursday July 29, 2004 @05:58PM (#9837478) Journal
    Yes, I've used an HP/Compaq DLT auto-changer that will do the job.. Don't remember the price offhand, but I remember it was in the over-$100k range.
  • Just built one... (Score:5, Informative)

    by SlashChick ( 544252 ) <erica@eriGINSBERGca.biz minus poet> on Thursday July 29, 2004 @05:59PM (#9837489) Homepage Journal
    I can answer your question, as I've just built one as a giant backup solution for our hosting company. [simpli.biz]

    I went with Serial ATA for a few reasons:
    1) It's cheaper and has more capacity than SCSI;
    2) Cabling is not a mess as it is with regular IDE (if you've never seen serial ATA cables, the first thing you will notice is that they are small!);
    3) It can hotswap, unlike regular IDE;
    4) It's not that much more expensive than regular IDE.

    I custom-built a 3U server from InterProMicro. [interpromicro.com] They are a small (local if you are in the Bay Area) SuperMicro reseller that does great work. (If you need something, call and ask for Andy. Tell him Erica from Simpli sent you!)

    The machine I specced out was as follows:
    * 3U case with 8 hot-swap SATA drive bays;
    * 8-port 3Ware 8506-8 SATA RAID controller;
    * 5x250GB SATA drives in a RAID-5 array;
    * Dual Xeon processors.

    The 5 drives give you 1TB of storage, and expanding up to 8 gives you 1.75TB. I would also recommend a separate mirrored SATA 10KRPM array for the OS if you want really fast speeds. :)

    This whole solution (Xeons; 5 drives; 3U case) cost just over $3000... which is pretty reasonable for 1TB of network-accessible storage. Interpro has solutions that go up to 24 SATA drives [interpromicro.com], which at 250GB each gives you an ungodly amount of space (5.75TB, if my calculations are correct.)

    My suggestion is to go with a niche server builder like InterproMicro over Dell or Compaq or any of those guys. You can get the same high quality from a custom manufacturer without paying the steep brand name price from a larger manufacturer. As for the drives, any time the goal is "as much space as possible", SATA should be your first choice.

    Good luck!
  • Re:Many have (Score:4, Informative)

    by Anonymous Coward on Thursday July 29, 2004 @06:01PM (#9837516)
    Cheapest?!?!?

    Let's see: 5 x 200GB drives at $120 = $600, plus another $600 for the case, motherboard, proc, etc... $1200 for a terabyte server.

    I haven't looked (you can do that) but I bet there are plenty of stand alone raid units of that size for maybe twice the DIY price and that is still HALF the price of Apple.

    Now THIS is informative!
  • by Zergwyn ( 514693 ) on Thursday July 29, 2004 @06:04PM (#9837547)
    I have just been grappling with this very issue. What kind of solution you can find depends on a few factors:

    -What RAID level you want (5 usually requires better hardware)
    -Whether you want hardware RAID (I strongly recommend this) or soft RAID
    -How much redundancy you need (Battery backup cache? Redundant controllers? Hardware environmental controls?)

    If you are looking for good PCI cards, I would strongly suggest a card from 3ware [3ware.com], and drives from a place such as Seagate [seagate.com]. Getting a super-duper cheap card when terabytes of data are on the line is just fundamentally stupid. You can save some bucks now, but be ready with your next Ask Slashdot: "How do I recover data from my dead RAID?" Seagate now has a nice 5-year warranty, which pairs well with good quality and reasonably cheap drives. Look at some of the SATA drives like the Barracuda. However, any decent-quality drive maker will work. If you have even more money, you can look at some of the things offered by places like StorCase [storcase.com]. A larger initial investment can become cheaper as you scale up the cheap hard drive count, and it can be a good thing in the long run. Obviously, the more time you are willing to invest doing things yourself, the cheaper you can get to some extent versus premade items. However, you get no support either.


    Do read up on some of the fundamentals of RAID: everything you need to know (and lots you don't) is probably at least mentioned in the PC Guide [pcguide.com] on RAID. Look through that. Things like hot swap and hot spares are important to understand. Finally, you should remember to check compatibility. Unfortunately, I, for instance, have not been able to find much of anything in the way of controller cards that is compatible with OS X (except the obvious, the Xserve RAID). So I have something set up on a BSD box in my server closet that I then link to, more like a storage appliance. Happily, the 3ware cards and many others are now compatible with a wide variety of *nix and BSD flavors along with Windows, but do check to make sure.


    Last but not least, remember this: RAID is *not* a backup solution, but a highly redundant on-site storage system. Have another form of backup, even if it is just a RAID 1 off site, or DVD-Rs, or something. If a disaster happens (thieves, fire, nuclear destruction, John Ashcroft), on-site storage won't save you.

  • by egarland ( 120202 ) on Thursday July 29, 2004 @06:05PM (#9837554)
    Promise has a nice off-the-shelf solution [promise.com] and you can get it [hypermicro.com] for around $3600.

    If I were going to do it I'd build my own by combining a nice case [rackmountpro.com] and a 12-port 3Ware controller [rackmountpro.com] with whatever server configuration and SATA drives I wanted.
  • by Forge ( 2456 ) <kevinforge@@@gmail...com> on Thursday July 29, 2004 @06:20PM (#9837698) Homepage Journal
    The missing links.

    12 x 3.5 bay" Tower [lian-li.com]


    3 Ware 12 drive RAID card [3ware.com]


    Drives can be found at PriceWatch [pricewatch.com]

    I haven't calculated the per-MB cost of all the large sizes; someone with more time, please do this.


    What would make this perfect is removable drive kits (they require an external 5.25" bay for each 3.5" drive; some even have little activity LEDs) and a server case with 12 external 5.25" bays.

  • by linuxbaby ( 124641 ) * on Thursday July 29, 2004 @06:21PM (#9837713)
    For CD Baby [cdbaby.com] we have about 50 TB of audio stored here, and we built the boxes ourselves, damn cheap. Goes like this:
    • Find any tall beige-box case. ($150)
    • Find 9 good 250GB Serial ATA drives. ($100 each = $900)
    • Get an 8-port serial ATA hardware RAID controller like these [3ware.com] ($300)
    • Get a good 400-500W power supply ($200)
    • Any motherboard and CPU will do ($200)
    • Spend a few extra bucks on gigabit ethernet ($50)
    Put 8 of the hard drives into a RAID-5 array (the 9th is for your OS/system use). That makes about 1.4 TB for only $1800 total. The 3Ware IDE RAID thing works great with FreeBSD [freebsd.org], which is what we use for everything.

    Rip all your CDs as FLAC [sourceforge.net] so that (1) you never have to rip them again (it's lossless), and (2) it's half the size of saving WAV files.

    At least that's what we've done with the 68,000 [cdbaby.com] CDs we have here.
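
    A minimal sketch of one way to do that rip on a *nix box (assuming cdparanoia and the flac encoder are installed; this is an illustration, not necessarily CD Baby's actual pipeline):

        #!/bin/sh
        # rip every track on the disc to track##.cdda.wav files in the current directory
        cdparanoia -B
        # losslessly compress the rips at maximum compression, then drop the WAV originals
        flac --best *.cdda.wav && rm -f *.cdda.wav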

  • Re:Terabyte Storage (Score:5, Informative)

    by drasfr ( 219085 ) <revedemoi&gmail,com> on Thursday July 29, 2004 @06:22PM (#9837716)
    A way of doing it (which I did):

    8-drive FireWire enclosure (I have the 4-drive version): $600
    http://www.cooldrives.com/fi80013oc5fi.html
    8 x $170 250GB ATA drives = $1360
    Hardware for a Linux machine to act as a proper file server = $700
    Total = $2930 for 2TB of raw space: 1.5TB of RAID 5 with a hot spare, or 1.75TB of RAID 5 with no hot spare.

    You've got yourself a nice file server for home use... install MythTV on it and you're set for hours of video.
  • Re:Easy these days. (Score:5, Informative)

    by hoggoth ( 414195 ) on Thursday July 29, 2004 @06:28PM (#9837778) Journal
    > do a nightly "rsync" mirror to a "yesterday" partition
    > advantages of the nightly rsync over RAID are

    Instead of only keeping a "yesterday" partition, use rsync to keep EVERY daily backup.

    Rsync has lots of great options to create hard links for files that haven't changed and only copy the files that did. That allows you to make daily full backups that only use the space of daily incrementals. Do that to a backup partition, then RAID-1 the whole drive over to a mirror.
    That gets you full protection from hardware failure on a drive and user failure on your files.

    Google for more details [google.com]
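
    A hedged sketch of that hard-link approach using rsync's --link-dest option (GNU date and the /backup layout are illustrative assumptions):

        #!/bin/sh
        # make today's "full" backup, hard-linking unchanged files to yesterday's copy
        today=$(date +%F)
        yesterday=$(date -d yesterday +%F)
        rsync -a --delete --link-dest=/backup/$yesterday/ /home/ /backup/$today/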

  • by Psyrg ( 730923 ) on Thursday July 29, 2004 @06:34PM (#9837838)
    Some people have had a surprising level of success using the software RAID potential of Linux to do this for some time, getting prices as low as $0.60US per GB.

    Some slashdot articles on some previous attempts:
    Bulk Data Storage For The Common Man? [slashdot.org]
    Home-brewing a 1.2TB IDE to Firewire Monster [slashdot.org]

    Books on it:
    Managing RAID on Linux [slashdot.org]

    Even applicable controller hardware:
    LSI MegaRAID 150-6 [lsilogic.com]
    3Ware 9000 series [3ware.com]

    And soon to be applicable storage hardware:
    Hitachi Announces 400GB Hard Drive [slashdot.org]
  • by zuzulo ( 136299 ) on Thursday July 29, 2004 @06:41PM (#9837939) Homepage
    One key thing to add: when building a mass storage system, *always* buy drives from different lots. Drives from the same lot will often fail very close to the same time, so spreading out your expected drive failures by buying from different lots is a very good idea. Buy drives from multiple vendors and even manufacturers if at all possible.
  • Note about RAID (Score:2, Informative)

    by john_smith_45678 ( 607592 ) on Thursday July 29, 2004 @06:47PM (#9837992) Journal
    I never knew this, and apparently many others didn't either, but if you use hardware RAID the disks are tied to that card.

    More info here, plus the ever-acidic jwz calling people dumbasses, dipshits, and more fun!

    http://jwz.livejournal.com/368307.html [livejournal.com]
    http://www.dnalounge.com/backstage/log/2004/07.html#28 [dnalounge.com]
  • by Brandonski ( 605979 ) on Thursday July 29, 2004 @06:52PM (#9838041)
    I've always been fond of the "SCSI to IDE" or "SCSI to SATA" solutions. They are reasonably inexpensive and they scale (you can chain a whole lot of them together). Here are a couple [pc-pitstop.com] of good [interpromicro.com] ones.
  • Re:Terabyte Storage (Score:4, Informative)

    by RandomCoil ( 88441 ) on Thursday July 29, 2004 @06:57PM (#9838080)
    With 6 HDDs and all the other devices, what wattage power supply do you have?


    Can anyone give me a rough formula of wattage/# of devices?

    According to Western Digital's site, a 250GB SATA drive pulls 12.8 watts when reading/writing and 9.5 watts on standby. For 6 drives that's roughly 77 watts active, so I figure about 100 watts of a _good_ power supply's rating.
  • Here's our solution. (Score:2, Informative)

    by Insomnia ( 11375 ) on Thursday July 29, 2004 @06:58PM (#9838090) Homepage
    We (the Binghamton University Computer Science Department) run 2 Debian RAID servers. They use a 3ware ATA 12-port card and 3ware's hot-swap enclosures (whoever said hot-swapping with ATA is not possible is incorrect; we do it).

    Each uses a case with 9 external 5.25" bays (Enlight) and an Antec 550W power supply to handle the 12 drives (plus a Seagate system drive in the internal 3.5" bay). This has worked very well.

    We use Maxtor 300GB drives in one machine (RAID 5) and have lost 5 of the 20 drives we purchased in 6 months. The other uses Western Digital 200GB drives (RAID 5), and we've lost 1 of 12 in a year. Manufacturer DOES matter. WD replaced our drive in days; Maxtor makes you jump through hoops and tries to deny the problem for a while, only to finally decide to replace the drive, then take 5-7 more days to get it to you.

    All in all, these machines cost us under 7K each and perform very well. However, if I bought one today, I'd get 3ware's SATA card and Seagate's new 400GB SATA drives instead. Whoever said ATA cables are a pain was NOT wrong, and these drives would give much better performance.
  • My usual solution... (Score:3, Informative)

    by cayce ( 189143 ) on Thursday July 29, 2004 @07:05PM (#9838156)
    And I've installed quite a few of these:
    * SuperMicro motherboard (any of the newer ones, depending on your choice of architecture). Be sure to get one with 64-bit/133MHz PCI and gigabit onboard.
    * 3Ware RAID board(s).
    * Chenbro rackmount cases (they have a very nice one with 16 SATA hotplug slots with backplane and all)
    * Don't go cheap on the power supply. You'll need at least 600W. I always go for redundant ones.
    * 16 SATA disks of your choice (250, 120 or 80GB)
    * Linux!!! (Be careful with Fedora Core 2: it doesn't support the 3Ware cards natively, so you'll need to compile your own driver.)

    Of course you could save about $1000 by using a cheap motherboard, chassis and PS. But it really pays off using the good brands on those.

    By the way, you should always get an extra hard drive (or two). They will fail (sooner or later) and you don't want to be left hanging.
  • Re:What I did... (Score:5, Informative)

    by ivan256 ( 17499 ) * on Thursday July 29, 2004 @07:09PM (#9838189)
    My server (with a far smaller RAID) used to be a dual Athlon too. I got tired of paying for the electricity, so I switched it to an Athlon-M 2500+ and set up all the power-saving stuff. (It took ages to find a desktop board with a PowerNow-capable BIOS and voltage regulator...) Kernel compiles are a little slower, but 90% of the time (even streaming data at 100Mbit) the processor stays in its low-power mode. What once took 350 watts now takes 70. Highly recommended.
  • by dongkiru ( 157748 ) on Thursday July 29, 2004 @07:14PM (#9838224)
    Since the article is asking about a generic storage/backup solution, 3ware may suffice. But you'll never see me buying another 3ware setup again. I don't know about Windows, but write performance on a 3ware RAID-5 setup is horrible in Linux. Even with the new 9xxx series, we're getting about 35MB/s, as opposed to the 100-150MB/s that other solutions like RAIDServe will deliver.
  • Re:Maybe LVM (Score:1, Informative)

    by tntguy ( 516721 ) * on Thursday July 29, 2004 @07:15PM (#9838236)
    LVM and RAID are not mutually exclusive. They complement each other nicely. I'm not sure how (Linux's) LVM could be easier to set up than RAID (hardware), though. Most hardware RAID has some form of a "use these disks as a RAID[0,1,5,whatever]" interface. My only experience with Linux's LVM was my last Gentoo install. However, I have quite a bit of experience with Veritas Volume Mangler^WManager.
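
    A hedged sketch of layering LVM on top of a Linux software RAID device, just to show the two working together (device, group, and volume names are illustrative):

        #!/bin/sh
        # turn an existing md array into an LVM physical volume and pool it
        pvcreate /dev/md0
        vgcreate data /dev/md0
        # carve out a logical volume, then filesystem and mount as usual
        lvcreate -L 500G -n archive data
        mkfs.ext3 /dev/data/archive
        mount /dev/data/archive /mnt/archive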
  • by brsmith4 ( 567390 ) <.brsmith4. .at. .gmail.com.> on Thursday July 29, 2004 @07:28PM (#9838365)
    I can attest to this:

    Our 48-node Beowulf cluster has a /home volume on a 3ware-controlled array. Sometimes we get those users who decide they need to write out their incremental data sets across the NFS mount... from 48 nodes. Sure, a parallel file system would be great, but from what we've seen, only GFS was close to production quality (and they only recently GPL'd it).

    Anyway, that kind of load brought the head node (dual-proc Athlon MP 1700+) to its knees until we decided to rebuild it. Moving from the hardware-controlled RAID to Linux's software RAID completely resolved that problem.
  • by CMonk ( 20789 ) on Thursday July 29, 2004 @07:29PM (#9838372)
    If you use that terabyte in any sort of useful RAID configuration, your total usable capacity quickly disappears. You'll need your parity disks for basic RAID levels 3 and 5, or a whole bunch more drives if you want to be safer and mirror. If you're talking about a 14-drive configuration like many 3U arrays, you are going to want at least 2 hot spares. You'll probably want a whole bunch of those drives dedicated to filesystem snapshots for backups. Getting the picture? When talking serious RAID arrays, you'll be lucky if you end up with 40% of total space available for use.
  • by still cynical ( 17020 ) on Thursday July 29, 2004 @07:48PM (#9838531) Homepage
    If you want to spend the extra money and have a warranty and fancier case, look at Nexsan [nexsan.com], or EMC's AX100 [emc.com]. Scary that EMC is selling something cheaper than the competition, but they are. Sorta disturbs the natural order of the universe. Still, either will set you back several thousand. The AX100 looks pretty impressive on paper. Options for dual controllers, and up to 3 TB in a 2U space. Haven't tried one myself yet.

    Disclaimer: I work for a storage integrator, both are brands we sell.
  • by kfhickel ( 449052 ) on Thursday July 29, 2004 @08:42PM (#9838967)
    Unfortunately, the earlier 3ware cards won't allow you to build an array unless all the drive IDs match EXACTLY, meaning that this is not possible.

    Hopefully, they've changed this for the newer 7 series cards, but the 5 series are 'broken' this way.
  • by Anonymous Coward on Thursday July 29, 2004 @09:02PM (#9839110)
    Remember that some manufacturers offer better warranties than others (e.g., Seagate's is now 5 years).

    Also, remember that some drives nowadays aren't rated for a 24x7 duty cycle. Given that the SMART diagnostics in the drive can tell quite a bit to the person examining your warranty return, don't try to 'cheap' your way through and then claim on warranty.
  • by tzanger ( 1575 ) on Thursday July 29, 2004 @09:24PM (#9839274) Homepage

    Oh, bullshit.

    Linux software RAID1 is just as fast as several of the hardware RAID1 setups I've tested using Bonnie++ -- These are fucking fileservers, not renderfarms. The processor's sitting there doing jack shit anyway, and you're more than likely putting a P4 in there since you can't buy anything else with decent reliability. Throw in a decent GigE network card and your processor is STILL at 0% utilization. Make that a RAID5 with hot-standby drive and I would be very surprised if you noticed any difference in the apparent "feel" of the server compared to a hardware RAID solution.

    Hardware RAID's okay, but now you've got a proprietary-format array with a SPOF (the RAID card(s)). Sure, you can keep spare RAID cards around, but honestly, unless you need every last bps on your network transfer and you've got your server so overloaded that SW RAID is impacting your performance, you're just incurring extra expense. I am very happy that I can take any RAID array I have and throw it in another system should a motherboard or controller fail and I need the system up immediately. I'm very happy that LVM Just Works and works happily on top of software RAID. There are no issues and no extra question marks like there are with any hardware RAID "solution".

    Want beeping? Write a script. Want email/phone/paging when something goes wrong? Write a script. Or use any of the monitoring and alerting systems you can find on Freshmeat (mon, nagios, etc.). Jesus H Christ, give your head a shake.
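
    For the email/paging part, a minimal sketch using mdadm's built-in monitor rather than a hand-rolled script (the address is a placeholder; a MAILADDR line in /etc/mdadm.conf works too):

        #!/bin/sh
        # watch all md arrays in the background and send mail on failure/degraded events
        mdadm --monitor --scan --daemonise --mail=admin@example.com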

    Oh wait, you're trying to build a performance system using an OS built for pushing pixels. Perhaps that is your biggest problem. Windows has its place, but high performance data transfer just isn't one of them. I guess if you've decided to spend a couple hundred on an OS license that gets you nothing you may as well blow another couple hundred to get hardware to go with it.

  • by bencvt ( 686040 ) on Thursday July 29, 2004 @09:29PM (#9839309)
    If you own more than fifty CD/DVDs, it can be a daunting task to find a file.

    Um... ever consider the mind-bogglingly simple solution of:

    ls -R > ~/dvd.index/<disc_label>   (run once for each DVD)

    grep "<whatever_youre_looking_for>" ~/dvd.index/*

  • by keithosu ( 223527 ) on Thursday July 29, 2004 @09:58PM (#9839532)
    Apple has claimed that they do pick their drives from different lots. At least, that is what I've heard from insiders.
  • by Anonymous Coward on Friday July 30, 2004 @12:19AM (#9840496)
    RAID 3 is useless; it can be used with only three drives: two for data, one for parity.

    But RAID 5 storage efficiency follows (Number of Drives - 1) / (Number of Drives). With an 8-drive RAID, that's 87.5% efficiency, and that's pretty dern good for a relatively decent fault-tolerant rig. That means that out of 1 terabyte you lose the equivalent of 125GB, which isn't so bad for all of the benefits that RAID 5 brings, and it's a FAR cry from the 40% usable you claim. Hell, even RAID 1 (the most space-inefficient of all RAID configurations) is never more or less than 50% efficient.

    Besides, you only need to snapshot the really important stuff (things that can't be easily obtained from backup and can't be easily recreated); it's not like you need to take 12 rotating snapshots of your whole warez/porno/MP3 collection per day. This is all about a relatively cheap PERSONAL server.
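
    Worked out for a few array sizes, using that (n-1)/n formula:

        4 drives -> 75%,  8 drives -> 87.5%,  12 drives -> ~91.7%,  14 drives -> ~92.9%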
  • by slaker ( 53818 ) on Friday July 30, 2004 @12:36AM (#9840593)
    Instead of looking at a semi-commodity 1TB solution, which is a PITA because it needs an industrial-strength case, power supply, drive controller card, and HVAC, you should look at the other end:

    Two or three fairly normal PCs with STANDARD drive controllers, PSUs and HVAC.

    Look, we're talking file servers here. 128MB of RAM is gobs if you aren't running any other services on 'em. Pick an OS, any OS: 2000 gets you DFS, *nix gives you NFS. Both give you a homogeneous networked file system.
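
    A minimal sketch of the NFS side on one of those boxes (export path, hostname, and network range are illustrative):

        # /etc/exports -- share the storage volume read/write with the local network
        /storage  192.168.1.0/24(rw,sync,no_subtree_check)

        # on the server, after editing /etc/exports:
        exportfs -ra
        # on a client:
        mount -t nfs fileserver1:/storage /mnt/storage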

    So...
    Standard case/PSU/cheapo CPU (Athlon mobile, VIA, or P3, for lower power consumption)/RAM: that's $250, maybe. Add another $20 for a gigabit NIC or two per machine.
    4 x 200GB drives @ $110 apiece (Pricewatch shows $96 as the low price, but I'll go $110 for a little wiggle room).

    So... something around $700 gets you 0.8TB.
    Buy three machines. $2100 gets you tons of storage and scads of redundancy no matter how you look at it.

    This is the philosophy I use in setting up my file servers (now serving 6.5TB!). Over time I've added 3ware cards, upgraded PSUs and added gobs of RAM, but my basic starting point is a very modestly-appointed system.
  • by threephaseboy ( 215589 ) on Friday July 30, 2004 @04:36PM (#9847776) Homepage
    I'm actually researching replacing a 7-disk x 9GB SCSI RAID 5 (LVM) with a 4 x 160GB SATA RAID 5, which would be about $500 (approx $1/GB total), so it's on the level with the big disk, but this would be in a rackmount case hooked up to a server with PCI instead of FW800.
    It's about 1/5th the cost of the xserve raid but not nearly as flexible:
    • No expandability beyond the 4 ports on the card
    • Not abstracted from the host machine
    • No redundant PSU (you could get redundant ATX PSUs for $200+)
    • No redundant controllers
    • No support for the package as a whole from any one source
    • Etc...

    It's a different solution for different people. If you need reliability and performance and uptime, you get an xserve raid. If you need "good enough", you build one yourself. Same thing as getting a cheap dsl/cable router for $20 that "does the job", rather than getting a $$$ name brand router like a cisco or something, and a support contract, etc.
    One will do the job most of the time, but when you absolutely gotta have the performance and reliability, if your job depends on it like the video editors you mentioned in your original post, the extra $4k for the xserve raid starts looking pretty good.
