
Building a Massive Single Volume Storage Solution?

An anonymous reader asks: "I've been asked to build a massive storage solution that scales from an initial threshold of 25TB to 1PB, primarily on commodity hardware and software. Based on my past experience and research, the commercial offerings for such a solution become cost-prohibitive, and the budget for the solution is fairly small. Some of the technologies I've been scoping out are iSCSI, AoE, and plain clustered/grid computers with JBOD (just a bunch of disks). Personally I'm more inclined toward a grid cluster with gigabit interfaces, where each node has about 1-2TB of disk space and is based on a low-power-consumption architecture. The next issue to tackle is finding a file system that can span all the nodes and yet appear as a single volume to the application servers. At this point data redundancy is not a priority; however, it will have to be addressed. My research has not yielded any viable open source alternative (unless Google releases GoogleFS), and I've looked into Lustre, xFS and PVFS. There are some interesting commercial products, such as the File Director from NeoPath Networks and a few others; however, the cost is astronomical. I would like to know if any Slashdot readers have experience building out such a solution. Any help/ideas would be greatly appreciated!"
  • gmail (Score:4, Funny)

    by Adult film producer ( 866485 ) <> on Tuesday October 25, 2005 @03:24PM (#13874188)
    Register a few thousand Gmail accounts and write an interface that makes writing data to Gmail inboxes invisible to the app.
    • Er... be careful (Score:2, Informative)

      by LeonGeeste ( 917243 ) *
      That violates their terms of use pretty severely. I don't know what they would do (Google's not the "suing-for-the-hell-of-it" type), but that wouldn't last very long when they found out. And they would find out. +5 Interesting? Well, curiosity killed the cat.
    • Re:gmail (Score:2, Interesting)

      by Anonymous Coward
      Gmail? Why bother when you can just use a few hundred million Tinydisks [] instead?

      I wonder if tinyurl can handle 25TB...
    • Re:gmail (Score:5, Funny)

      by Stuart Gibson ( 544632 ) on Tuesday October 25, 2005 @03:44PM (#13874433) Homepage
      That would have been my second answer.

      The first, and presumably the reason this was posted to /. is simple...

      Imagine a Beowulf cluster...

    • We at Vap-o-tech 2003 Inc. (not associated with Vap-o-tech 2001 Inc. which has closed its doors due to allegations of investor fraud) have developed ToastFS 2003. Using patented CRUMB technology and high capacity BUTTER read/write caching, we are able to turn your average loaf of Wunderbread into a 200gb storage media. Simply buy a loaf of our own specially tested Wunderbread ($250 USD) along with a USB-to-Popup Toaster interface (don't worry, USB 2.0 is more than capable of handling 120amp wall sockets w
  • GFS? (Score:5, Informative)

    by fifirebel ( 137361 ) on Tuesday October 25, 2005 @03:24PM (#13874189)
    Have you checked out GFS [] from RedHat (formerly Sistina)?
    • The Oracle Cluster filesystem [] is also available under the GPL. Dunno if that fits the bill; the description here is sort of vague. It sounds like a seriously ambitious project to approach for someone who doesn't even know what can be done, let alone what's within his budget.
      • Er, sorry, version 2 [] is what I meant.
      • Re:Oracle, also (Score:3, Insightful)

        by Spudley ( 171066 )
        It sounds like a seriously ambitious project to approach...

        I second that.

        Starting at 25TB and scaling to 1PB? And you want it cheap? If it were cheap to do that sort of thing, we'd all be lining up to get one of our own(*).

        Seriously, though, you don't really specify how cheap you are expecting to get it for. What are your expectations, and just how far over-budget are the options you've looked at already? Do you really need 25TB/1PB in one volume, or could it be achieved by splitting it into smaller chunks and wor
    • Re:GFS? (Score:3, Informative)

      by N1ck0 ( 803359 )
      GFS over an FC SAN with some EMC CLARiiON CX700s as the hosts is the solution that I'm going to be looking at deploying next year, although there are still some thoughts on using iSCSI instead of FC. It all really depends on what your usage patterns and performance requirements are. I don't believe GFS supports ATAoE systems, but since there is Linux support I doubt it would be too far of a stretch.
    • Re:GFS? (Score:3, Informative)

      by LnxAddct ( 679316 )
      I second the parent post. GFS is exactly what he wants. Although I've never used it in the 1PB range, I can vouch for it working excellently in the TB range.
    • Controllers! (Score:3, Informative)

      by man_of_mr_e ( 217855 )
      You could get a bunch of Broadcom 8-port SATA controllers, which with 500GB drives equals about 4TB per controller. 4 or 5 controllers = 16-20TB per box; then you can run the cables into an outside drive bay enclosure, and one box can control 40 500GB hard drives.

      If you're not doing any processing on this, a good CPU should be able to handle the load.
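The capacity arithmetic in this comment can be sketched quickly (the port count and drive size are the commenter's figures, not verified controller specs):

```python
# Rough capacity math for the controller layout described above.
# The 8-port count and 500GB drive size come from the comment itself;
# treat them as illustrative assumptions, not verified Broadcom specs.
ports_per_controller = 8
drive_gb = 500
controllers = 5

per_controller_tb = ports_per_controller * drive_gb / 1000  # 4.0 TB
box_tb = controllers * per_controller_tb                    # 20.0 TB
total_drives = controllers * ports_per_controller           # 40 drives

print(per_controller_tb, box_tb, total_drives)
```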
  • Apple Xserve? (Score:3, Informative)

    by mozumder ( 178398 ) on Tuesday October 25, 2005 @03:24PM (#13874198)
    Can't you hook up 4x 7TB Xserve RAIDs to a PowerMac and use that?
    • Re:Apple Xserve? (Score:4, Informative)

      by Jeff DeMaagd ( 2015 ) on Tuesday October 25, 2005 @03:29PM (#13874248) Homepage Journal
      Apple Xserve may be the cheapest of that kind of storage, but it's probably not fitting the original idea of commodity hardware.

      Scaling to petabytes means spanning storage across multiple systems.
      • Re:Apple Xserve? (Score:4, Informative)

        by stang7423 ( 601640 ) on Tuesday October 25, 2005 @03:55PM (#13874541)
        Apple has a solution for this. Xsan [] is a distributed filesystem that is based on ADIC's StorNext filesystem. Apple states on that page that it will scale into the range of petabytes.
    • Re:Apple Xserve? (Score:5, Interesting)

      by medazinol ( 540033 ) on Tuesday October 25, 2005 @03:30PM (#13874269)
      My first thought as well. However, he is asking for a single volume solution. So XSAN from Apple would have to be implemented. Good thing that it's compatible with ADIC's solution for cross-platform support.
      Probably would be the least expensive option overall and the simplest to implement. Don't take my word for it, go look for yourself.
      • I want to say it is 16 Tbyte offhand, but I'm not sure on that.

        Short research indicates this was a limitation in 10.3, but I haven't found anything confirming or denying that 10.4 still has it.

        Not that we've been looking into large amounts of Xsan storage here, but our requirements are a bit different. You can't hook >600 nodes up to the storage via fibre. Our problem is scaling out the NFS servers to be able to push all this data around.
    • Re:Apple Xserve? (Score:4, Informative)

      by TRRosen ( 720617 ) on Tuesday October 25, 2005 @04:19PM (#13874845)
      To do this would cost around $50,000 with Xserve RAIDs and Xsan... $2,000/TB is probably the best price you're going to get. You could do this with generic hardware, but the cost of assembly, the extra room, extra power consumption, and the maintenance and engineering costs will certainly wipe out what you might save. The Xserve RAID solution could be up in a day and fit in one (actually 1/2) rack.

      I do remember some college building a nearline backup storage system using 1U servers with 2 or 3 RAID cards each, connected to something like 12 drives per machine in homemade brackets. It was hardly ideal, but it did work. Anybody remember where that was?

    • How about a PetaBox? (Score:5, Interesting)

      by McSpew ( 316871 ) on Tuesday October 25, 2005 @04:44PM (#13875101)

      The folks at the Internet Archive [] have already done the hard work of figuring out how to create a petabyte storage system [] using commodity hardware. The system works so well they started a company to sell PetaBoxes [] to others. Why reinvent the wheel?

      • by yppiz ( 574466 )
        You beat me to this link.

        I will add that the Archive has particular design and performance goals, namely:

        - keep the cost / GB as low as possible
        - keep cooling and power requirements low
        - use the filesystem and bundle objects into large chunks (~100MB ARC files, last I checked)
        - assume streaming writes affecting an edge of the system -- previously written data isn't modified
        - assume random reads
        - read latency is less important than cost / GB

        I worked on the Archive ~5 years ago, and these are based on my unde
  • Andrew File System (Score:5, Informative)

    by mroch ( 715318 ) * on Tuesday October 25, 2005 @03:25PM (#13874211)
    Check out AFS [].
    • Agreed. AFS is exceptionally nice. However, I think it still has a max file size of 2GB.
    • I was about to recommend this but when I googled afs and found a faq it said:

      Subject: 1.02 Who supplies AFS?

      Transarc Corporation
      The Gulf Tower
      707 Grant Street
      phone: +1 (412) 338-4400
      fax: +1 (412) 338-4404
    • by sirket ( 60694 ) on Tuesday October 25, 2005 @05:01PM (#13875307)
      Stop what you are doing right now. If your architecture requires you to have one huge volume, then you have architected things wrong. Imagine trying to fsck this damned thing! And what about file system corruption? What the hell are you going to do when you lose a petabyte of data to some file system corruption? Small, sensible, easily managed partitions are the way to go. Use a database to organize where given files are stored. Do something that makes sense. I have a client now who just lost a bunch of data because they used a system like this.

      Having said all this- If you are still intent on finding a good file system then use AFS. It's probably your best free solution. If you want to sleep at night call EMC.

  • PetaBox (Score:4, Informative)

    by Anonymous Coward on Tuesday October 25, 2005 @03:26PM (#13874221)
    How about the PetaBox [], used by the Internet Archive []?
  • by mikeee ( 137160 ) on Tuesday October 25, 2005 @03:27PM (#13874224)
    Livejournal developed their own distributed filesystem: []

    It's scalable and has nice reliability features, but is all userspace and doesn't have all the features/operations of a true POSIX filesystem, so it may not suit your needs.
  • by Evil W1zard ( 832703 ) on Tuesday October 25, 2005 @03:27PM (#13874228) Journal
    I know of a certain recently discovered zombie network that collectively had quite a few PBs of storage... Of course I wouldn't recommend going down that road, as it leads to, you know... jail.
  • Petabox (Score:2, Insightful)

 made a petabox []

    There is now a company that seems to make the same design: []

    I don't know what FS they use, but apparently it is redundant.

    • Re:Petabox (Score:5, Insightful)

      by afidel ( 530433 ) on Tuesday October 25, 2005 @03:56PM (#13874563)
      This guy is worried about budget, yet even with the "low power" usage of the petabox it would still use 50kW for one petabyte of storage! When you combine the cooling for that with the cost of electricity you are talking some serious money. If you have trouble getting the capital funds for something like this how are you ever going to pay the operating costs?
      • Re:Petabox (Score:3, Interesting)

        by rpresser ( 610529 )
        Depending on latency requirements, perhaps most of the cluster can stay in sleep mode until it is needed.
      • Re:Petabox (Score:4, Insightful)

        by Databass ( 254179 ) on Tuesday October 25, 2005 @06:22PM (#13876196)
        This guy is worried about budget, yet even with the "low power" usage of the petabox it would still use 50kW for one petabyte of storage!

        Interesting to think about. My brain probably holds about a petabyte of memories and it uses 20-60 watts. Mostly from sugar.
  • GPFS from IBM (Score:5, Interesting)

    by LuckyStarr ( 12445 ) on Tuesday October 25, 2005 @03:29PM (#13874246)
    May or may not be what you're searching for. Quite expensive, but an impressive feature list. []
    • Re:GPFS from IBM (Score:3, Interesting)

      by Zombie ( 8332 )
      My wife's building a 4 petabyte array (starting with 600 terabyte by the end of this year) for real-time multiple-access high-speed video streaming on GPFS. All GNU/Linux and commodity hardware. The switch fabric of the network is the hard bit. It's a bitch on fibre channel, but iSCSI should deliver higher performance at less than half the price. That's when you can get the hardware, and if you have the right Ethernet switch fabric again...
    • Re:GPFS from IBM (Score:3, Insightful)

      by Obasan ( 28761 )
      Having implemented GPFS, I feel qualified to say it kicks butt. As the poster mentions, it's not cheap, but if you want reliability and support it may be well worth it. That's where you need to decide the level of risk you are willing to expose your data to. One limitation of GPFS is that it does (or did, last I looked) only run on IBM hardware, either pSeries or xSeries with FAStT fibre channel at the back end.

      From what I've heard, definitely give GFS a thorough shakedown before you decide to implement it, I'
  • Why? (Score:2, Insightful)

    by Anonymous Coward
    What are you doing on a limited budget trying to build a 1PB solution? And why are you on a budget?

    Just because you are starting at 25TB doesn't mean you aren't building a 1PB solution.

    You also need to figure out what kind of bandwidth you need. It's very seldom that people have 1PB of data that is accessed by one person occasionally. If some sort of USB or 1394 connection will work, you are much better off than if you require InfiniBand.

    Like many "ask Slashdot" questions this is the last place you should be l
  • My research has not yielded any viable open source alternative (unless Google releases GoogleFS)

    Since when has Google released any open source software?
  • Scale (Score:3, Interesting)

    by LLuthor ( 909583 ) <> on Tuesday October 25, 2005 @03:33PM (#13874298)
    If you know the scale of the problem, you should consult with a company like EMC to provide the support for this thing - you WILL need it.

    Clustering the disks with iSCSI or ATAoE is trivial - you can do that very easily, but the filesystem to run on top of it is where you will have problems.

    PVFS - has no redundancy - lose one node, lose them all.
    GFS - does not scale well to those sizes or a large number of nodes - lots of hassle with the DLM.
    GoogleFS - essentially write-once - no small (50GB) files - little or no locking.
    xFS - way too easy to lose your data.

    It seems that you only have one option:
    Lustre - VERY expensive - lots of hassle with metadata servers and lock servers.

    Go with a company to take care of all this hassle - you do not have the resources of Google to deal with this kind of thing yourself.
  • Wow (Score:5, Funny)

    by DingerX ( 847589 ) on Tuesday October 25, 2005 @03:33PM (#13874302) Journal
    I never thought I'd see the day when sites were boasting a petabyte of porn.
    That's over 3 million hours of .avis -- if you sat down and watched them end-to-end, you'd have 348 years of "backdoor sliders", "dribblers to short", "pop flies", and "long balls". We live in an enlightened age.
    • by Surt ( 22457 )
      You're not thinking far enough ahead. The porn industry is always on the leading edge of technology, so of course they're going to be storing high definition porn on those petabytes, so that brings you down to a few paltry years worth of porn. And of course you have to factor in fast forwarding through whatever parts don't interest you.
    • Re:Wow (Score:5, Funny)

      by spuke4000 ( 587845 ) on Tuesday October 25, 2005 @03:57PM (#13874569)
      I'm not really sure I need 348 years of porn. I usually find porn really interesting for the first 3 minutes or so, then for some reason it's not so interesting anymore. But maybe that's just me.
  • by cheesedog ( 603990 ) on Tuesday October 25, 2005 @03:34PM (#13874314)
    One thing to think about when building such a system from a large number of hard disks is that disks will fail, all the time. The argument is fairly convincing:

    Suppose each disk has an MTBF (mean time between failures) of 500,000 hours. That means the average disk is expected to fail about once every 57 years. Sounds good, right? Now, suppose you have 1000 disks. How long before the first one fails? Chances are, not 57 years. If you assume that the failures are spread out evenly across time, a 1000-disk system will have a failure every 500 hours, or about every 3 weeks!

    Now, of course, the failures won't be spread out evenly, which makes this even trickier. Some of your disks will be dead on arrival or will fail within the first few hundred hours; many will go for a long time without failure. The failures, in fact, will likely arrive in bursts -- you'll have long periods with few or no failures, and then a bunch of failures will occur in a short period of time, seemingly all at once.

    You absolutely must plan on using some redundancy or erasure coding to store data on such a system; some of the filesystems you mentioned do this. It allows the system to keep working under X number of failures. Redundancy/coding lets you plan on scheduled maintenance, where you simply go in and swap out drives that have gone bad after the fact, rather than running around like a chicken with its head cut off every time a drive goes belly up.
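The failure-interval argument above is easy to check numerically (this assumes independent failures at a constant rate, which, as the comment notes, is optimistic):

```python
# Expected time between failures across a large disk array.
# With n independent disks, the array-wide failure rate is n times the
# per-disk rate, so the array's MTBF is the per-disk MTBF divided by n.
disk_mtbf_hours = 500_000
disks = 1000

array_mtbf_hours = disk_mtbf_hours / disks  # 500 hours between failures
weeks = array_mtbf_hours / (24 * 7)         # roughly 3 weeks

print(array_mtbf_hours, round(weeks, 1))
```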

    • by OrangeSpyderMan ( 589635 ) on Tuesday October 25, 2005 @03:42PM (#13874407)
      Agreed. We have around 50TB of data in one of our datacenters and it's great, but the number of disks that fail when you have to restart the systems (SAN fabric firmware install) is just scary. Even on the system disks of the Wintel servers (around 400), which are DAS, around 10% fail on datacenter powerdowns. That's where you pray that statistics are kind and you have no more failures on any one box than you have hot spares + tolerance :-) Last time, one server didn't make it back up because of this... though strictly speaking it was actually the PSUs that let go, it would appear.
    • If you assume that the failures are spread out evenly across time, a 1000-disk system will have a failure every 500 hours, or about every 3 weeks!

      For the sake of your argument I suppose that assumption could be considered fair. If one were to do a somewhat more sophisticated analysis, a better model for hard drive failures is the Bathtub curve []. It represents the result of a combination of three types of failures: infant mortality (flaws in the manufacturing), random failures and wear-out failures.

      The fa

    • by fm6 ( 162816 )

      If you assume that the failures are spread out evenly across time, a 1000-disk system will have a failure every 500 hours, or about every 3 weeks!

      Not a sound assumption. Things don't fail uniformly over time. Suppose 70 babies are born with a life expectancy of 70 years. Is one of them guaranteed to die every year for the next 70 years? Obviously not. If they avoid some joint disaster (like they all take a trip on the Titanic), most of them will die within a decade or so of the 70-year mark.

      Same with di

    • by beldraen ( 94534 ) <> on Tuesday October 25, 2005 @05:54PM (#13875942)
      Just a comment about MTBF. It's often misunderstood, and it's one of my little pet peeves with tech producers because they don't try to correct it. MTBF is a reliability rating that only holds within a drive's service (warranty) period.

      Say you have a drive rated at 500,000 hours MTBF. Suppose you bought a drive and ran it at its rated duty cycle (drives are normally rated to run 100% of the time, though many other devices have a shorter duty period), then threw the perfectly working drive out the window and replaced it whenever its warranty was up. If you keep up this pattern, then on average you should see about one drive fail before its warranty period is up per 500,000 hours of operation. This is why it is important to look not only at the MTBF but also at the warranty period.

      As a side note: in theory, you should be throwing drives out on a periodic basis. One way around this is to not buy all the same drive type and manufacturer; by having a pool of drive types, you distribute, and thus minimize, the risk of correlated drive failures. Additionally, you may want to have a standard period of time for drive replacement so as to schedule your downtime, as opposed to it all being unexpected.
  • I was going to suggest Reiser4 on LVM over a bunch of 4-disk RAID-5 arrays, but it seems that his definition of massive is more massive than mine.

    NFS on Reiser4 on RAID-5 on AoE (multipath) on LVM on RAID-5?

    What kind of availability do you need? Does all data need to be up all the time (like a bank/telco), or most of the data need to be up all the time (like google), or all the data need to be up most of the time (like a movie studio)?
  • by jcdick1 ( 254644 ) on Tuesday October 25, 2005 @03:34PM (#13874319)
    ...what your management was thinking. I mean, I can't imagine a storage requirement that large that you could build in a distributed model that would beat an EMC or Hitachi or IBM or whomever SAN solution on price per GB. The administration and DR costs alone for something like this would be astronomical. There just isn't really a way to do something this big on the cheap. I mean, this is what SANs were developed for in the first place: it's cheaper per GB than distributed local storage could ever be.
    • by temojen ( 678985 ) on Tuesday October 25, 2005 @03:42PM (#13874415) Journal
      With a project this large, they may be able to do it in-house and still take advantage of economies of scale. They can buy HDDs, motherboards, rackmount cases, etc. by the pallet or container load and temporarily up-hire some of their part-timers to do the assembly.

      With a network-bootable BIOS, the nodes could just be plugged in, install an image off a server, and then customize themselves based on their MAC.
    • by Kadin2048 ( 468275 ) <> on Tuesday October 25, 2005 @03:48PM (#13874473) Homepage Journal
      Exactly. This seems like somebody is trying to figure out a way to do something in-house which really ought to be left to either an outside contractor, or at least set up as a turnkey solution by a consultant. Given that he knows little enough about it that he's asking for help on Slashdot, I think this is yet another problem best solved using the telephone and a fat checkbook, and enough negotiating skills to convince management to pony up the cash up front instead of piddling it out over time on an in-house solution that's going to be a hole into which money and time are poured.

      I know people get tired of hearing "call IBM" as a solution to these questions, but in general, if you have some massive IT infrastructure development task and are so lost on it that you're asking the /. crowd for help, calling in professionals to take over probably isn't a bad idea.

      It's not even a question of whether you could do it in-house; given enough resources, you probably could. It comes down to why you would want to do something like this yourselves instead of finding people who do it all the time, week after week, for a living, telling them what you want, getting a price quote, and getting it done. Sure seems like a better way to go to me.
    • but I'm just a Linux hobbyist and programmer, so take any advice I give with a grain of salt. Here's what I did for my setup at home. To start, you're looking at a little over $1000 per TB, and that's about as cheap as it gets with redundancy. I have 8 drives in one machine in a RAID 5 config, and I have a hot spare. However, if I were doing this for a mission-critical application, I would use a RAID 6 configuration with a hot spare, and buy a hot-swap cage, which would further add to t
  • 15-zeros-is-a-lot-of-bytes

    15 zeros is no bytes at all... :)

  • by gstoddart ( 321705 ) on Tuesday October 25, 2005 @03:38PM (#13874360) Homepage
    I've been asked to build a massive storage solution to scale from an initial threshold of 25TB to 1PB ... Based on my past experience and research, the commercial offerings for such a solution becomes cost prohibitive, and the budget for the solution is fairly small.

    Unfortunately, I should think needing a solution which can scale up to a Petabyte (!) of disk-space and a "fairly small" budget are at odds with one another.

    Maybe you need to make a stronger case to someone that if such a mammoth storage system is required, it needs to be a higher priority item with better funding?

    Heck, the loss of such a large volume of data would be devastating to any organization (I assume it's not your pr0n collection). Building it on the cheap with no backup(*)/redundancy systems would be just waiting to lose the whole thing.

    (*) I truly have no idea how one backs up a petabyte
  • For the most part (Score:5, Insightful)

    by retinaburn ( 218226 ) on Tuesday October 25, 2005 @03:39PM (#13874376)
    the reason you can't find a cheap way to do this is because it just isn't cheap.

    I would look at some lessons learned from Google. If you decide to go with some sort of homebrew solution based on a bunch of standard consumer disks, you will run into other problems besides money. The more disks you have running, the more failures you will encounter. So any system you set up has to be able to have drives fail all day and not require human intervention to stay up and running (unless you can get humans for cheap too).

    • Re:For the most part (Score:3, Informative)

      by epiphani ( 254981 )
      It won't be cheap, but how about this idea. You'll get plenty of data redundancy out of it; however, you may need to spend some extra bucks on stability and maintainability.

      3ware 12x SATA RAID5 card
      12x 300GB in RAID5
      Linux machine
      iSCSI target software - share out 1 LUN.

      Duplicate this machine until you have enough storage.

      One big box with a number of trunked/bonded GigE ports
      iSCSI initiator software - mount all the LUNs.
      Software RAID them together - striping if you aren't too worried, RAID5 if you are.

      tada - big stora
      • Re:For the most part (Score:3, Informative)

        by fool ( 8051 )
        well, since all of the (high-end) PCs we were looking at for snort boxen had severe problems pushing even 5Gbit/s (not GByte) of traffic in/out over the PCI busses simultaneously, you hit a bottleneck pretty quickly there, even before you get to 25TB with your disk sizes. at 500GB disks you get pretty close, but you're at the ceiling already. while a decent (not even cutting-edge) machine could push a Gbit to the server pretty easily, the server, no matter how beefy, needs a ton of internal bandwidth to
  • Do It Right (Score:5, Insightful)

    by moehoward ( 668736 ) on Tuesday October 25, 2005 @03:41PM (#13874402)

    Look. Everyone wants a Lamborghini for the price of a Chevy. Cute. Yawn. Half of the Ask Slashdot questions are from people who didn't find what they wanted at Walmart. Despite the amazing Slashdot advice, Ask Slashdot answers have somehow failed to put EMC, IBM, HP, etc. out of business. There is no free lunch.

    Just call EMC, get a rep out, and give the paperwork to your boss. Do it today instead of 5 months from now and you will have a much better holiday season.

    Note to moderators and other finger pointers: I did not say to BUY from EMC, I just said to show his boss how and why to do things the right way. It does not hurt to get quotes from the big vendors, mainly because the quote also comes with good, solid info that you can share with the PHBs. Despite what you think about "evil" tech sales persons and sales engineers, you actually can learn from them.
    • by Genady ( 27988 ) <gary@rogers.mac@com> on Tuesday October 25, 2005 @04:08PM (#13874713)
      As a VERY satisfied customer, I say, just buy the damned thing from EMC. There's few enough warm fuzzy feelings that SysAdmins have in this day and age, like your CE calling at 7:00am saying: "Hey, you had a few hard SCSI errors on Disk 3 Enclosure 0 Tray 0 last night, that's your production LUNs isn't it? There should be a courier there with a disk by 10, and I'll stop by to make sure things are hotsparing back properly after you replace the disk okay?" And *THIS* is just because my CE knows I can handle replacing a disk. Normally he'd come out and do that, and sit around while it re-built the Raid Group.

      Yeah, EMC costs. THIS is why. The support, when needed, is top top top notch. Which would you rather have in a DR situation?
  • IBRIX (Score:4, Informative)

    by Wells2k ( 107114 ) on Tuesday October 25, 2005 @03:42PM (#13874414)
    You may want to take a look at IBRIX [] systems. They do a pretty robust parallel file system that has redundancy and failover.
    • Re:IBRIX (Score:3, Informative)

      by Wells2k ( 107114 )
      Something else I forgot about is the actual hardware... you may want to take a look at the nStor [] products. Their hardware RAID systems are relatively economical, and you can go to fibrechannel drives with fibre connected boxes quite easily with their equipment.
  • ...can probably solve this problem for you. Whether or not they can do so on the sort of budget you're willing to spend is a totally different story, however...
  • Coda [] works even when nodes disconnect, for instance with network outages or mobile computing. Plus, there is a Windows client, if that's the way your shop swings.
  • I don't know what the limits of JFS are, but it sounds like a nice set up.

    This article in Linux Journal ([]) talks about doing just that. The hardware costs add up and don't scale as you get into your capacity range unless you can get a deal buying bulk HDDs - something like $10K per 7.5 terabytes.
  • by tomhudson ( 43916 ) <> on Tuesday October 25, 2005 @03:52PM (#13874513) Journal

    Hard disk space is doubling every 6 months - wait 5 years and you'll be able to buy a 25TB disk for $125.00.

    A single RAID50 of them will then give you your petabyte of storage for around $6,000.
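For what it's worth, here is the arithmetic behind that projection, taking the six-month doubling rate at face value (the ~500GB starting point is an assumed 2005-era consumer drive; real density growth was considerably slower than a doubling every 6 months):

```python
import math

# How long until a ~500GB (2005-era) drive reaches 25TB, if capacity
# doubled every 6 months? The doubling rate is the commenter's
# assumption, not an observed trend.
start_tb = 0.5
target_tb = 25

doublings = math.ceil(math.log2(target_tb / start_tb))  # 6 doublings
years = doublings * 0.5                                 # 3 years

# A petabyte from 25TB disks needs 1000/25 = 40 of them
# (more once you add RAID50 parity overhead).
disks_for_pb = 1000 // 25

print(doublings, years, disks_for_pb)
```

Under the stated assumption the 25TB drive arrives in about 3 years, so "wait 5 years" is, if anything, conservative by the comment's own math.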

  • No Redundancy? (Score:5, Insightful)

    by Giggles Of Doom ( 267141 ) <(ten.gninthgilder) (ta) (leahcim)> on Tuesday October 25, 2005 @03:57PM (#13874567) Homepage
    A PETABYTE without redundancy? I can't imagine having that much data I didn't care about.
  • iSCSI storage / san (Score:3, Informative)

    by pasikarkkainen ( 925729 ) on Tuesday October 25, 2005 @04:13PM (#13874776)
    There seems to be lots of SATA-RAID based iSCSI SAN devices available nowadays.. Some links to products I have seen: [] They make nice SATA-raid based iSCSI SAN devices with all the features you could expect (volumes, snapshots, array/volume-expansion, hotswap, redundant controllers, redundant fans, etc). m []
    14 250G sata disks, 3U, 3.5 TB of raw storage. m []
    14 500G sata disks, 3U, 7 TB of raw storage. tm []
    56+ TB

    Looks good. I have not yet used them myself :)

    Another iSCSI SATA SAN possibility: uct_detail/dataframe_420.html []
    16 sata disks, review: _53700.html?view=1&curNodeId=0 []

    This company also has SATA iSCSI SAN devices: []

    iSCSI SAN comparison: []

    There are also software iSCSI target solutions for use with your own/custom hardware. See [] for building a Linux-based iSCSI target/SAN.

    If you are familiar with iSCSI targets / iSCSI SAN devices please post your comments!
  • by bernz ( 181095 ) on Tuesday October 25, 2005 @04:21PM (#13874861) Homepage
    We've scaled this to 30TB so far. I'm not sure about 1PB, though. For us, redundancy and storage size are key, performance less so.

    Storage nodes: 7 x 2.8TB 2U RAID5+1 boxen with Serial ATA. The 2.8TB is logical, not physical. The OS for each of those machines is RAMDISK-based (something we concocted based on what I read about the DNA Lounge a while back), which sidesteps OS-disk failures on the storage nodes themselves. Data-disk failures are handled by RAID5. Of course that doesn't protect against multiple simultaneous disk failures, but read on for more. Each of the storage nodes is exported via NBD.

    Then we have a head unit, a 64-bit machine. This machine does a software RAID5 across the storage nodes using an NBD client. Essentially each storage node is a "disk" and the head unit binds and manages the software RAID5. So if a whole storage node goes down (for whatever reason), all the data is still intact. RAID5 rebuild time over the gigabit network is about 18 hours, which is acceptable. We even have another storage box as a hot spare.
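    For anyone wanting to replicate this, the head-unit assembly might look roughly like the sketch below. The hostnames, port number and device names are my guesses for illustration, not the parent's actual config:

    ```shell
    # On each storage node, export the local RAID5 array over NBD
    # (run there, not on the head unit):
    #   nbd-server 2000 /dev/md0

    # On the 64-bit head unit: attach each node's export as a local
    # network block device...
    nbd-client node1 2000 /dev/nbd0
    nbd-client node2 2000 /dev/nbd1
    # ...one /dev/nbdN per storage node...
    nbd-client node7 2000 /dev/nbd6

    # ...then build the software RAID5 across the network block devices,
    # treating each whole storage node as one "disk".
    mdadm --create /dev/md0 --level=5 --raid-devices=7 \
        /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6

    # The parent uses XFS on top of the array.
    mkfs.xfs /dev/md0
    mount /dev/md0 /storage
    ```

    Losing any single node then looks to md like a single failed disk, which is what makes the 18-hour network rebuild possible.
    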

    On top of that, we have the whole cluster mirrored to another identical cluster via DRBD in a different geographic location, linked by gigabit WAN. So if we have a massive disaster and lose the entire primary cluster, we have a secondary cluster ready to go. We needed to purchase the Enterprise version of DRBD ($2k US) but that's worth it because they're neato guys.

    We use XFS as the filesystem. This system gives us 14TB of redundant "RAID-55 with a Mirror" space. Both clusters together? $85k.

    When the cluster starts running out of space (about 70% or so), we add ANOTHER cluster of similar stats to the initial one and use LVM to join the two units together.
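    The cluster-joining step described above maps onto stock LVM commands, roughly like this (the volume group, logical volume and device names are invented for illustration):

    ```shell
    # The new cluster shows up on the head unit as another big block device
    # (here /dev/md1).  Fold it into the existing volume group, grow the
    # logical volume over the new space, then grow XFS online.
    pvcreate /dev/md1
    vgextend storage_vg /dev/md1
    lvextend -l +100%FREE /dev/storage_vg/storage_lv
    xfs_growfs /storage        # XFS grows while mounted; no downtime
    ```
    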

    This has scaled us to 30TB and we're pretty happy with it. The read speed is very good (hdparm says Timing buffered disk reads: 200 MB in 3.01 seconds = 66.49 MB/sec) and the write speed is about 32 MB/sec. For what our application is doing, that's a fine speed.

    • I'll put this out as a side point since I'm the OP: If we had to do more than 50TB, I think we'd go to a "real" solution like EMC or something like that. This has been very good for us, but given the need for that amount of storage, we also now have the money to spend on a superduper storage machine. Homebrew has been wonderful to get to this point, but unless we get the kind of employees necessary to really write our own FS a-la GoogleFS, I can't see us taking this solution that much further past where it is now.
  • by jlarocco ( 851450 ) on Tuesday October 25, 2005 @04:22PM (#13874868) Homepage
    Dear Slashdot,
    I have been tasked with (insert very difficult, very important job). This is very important to my company. I have (insert number much lower than it should be) dollars to do this. I do not want to use (insert company name specializing in this exact thing) because management thinks they are too expensive. I think I can do this (insert better/faster/cheaper/...) than said company, even though they have vastly more experience and have invested much more time and research than I have. My continued and future employment probably rests on this project. Please advise.
  • by Ironsides ( 739422 ) on Tuesday October 25, 2005 @04:30PM (#13874958) Homepage Journal
    Nexsan has a box called ATA Beast []
    RAID, Fibre Channel, 42 ATA drives per 7 RU chassis. Throw in 500GB drives and 1 parity drive for every 6 data drives and you have ~21 TB raw (~18 TB usable) per chassis.
  • by @madeus ( 24818 ) <> on Tuesday October 25, 2005 @04:33PM (#13874979)
    I appreciate this might not seem like helpful advice, but...

    If you've been asked to do something like this by a company that can afford to buy one of the commercial off-the-shelf high-volume storage solutions, then I honestly can't imagine that any solution they try to knock up themselves will actually work (as I'm not aware of any free software solution that's currently up to the task).

    If your company doesn't have / can't raise the capital to buy a commercial system for a project of this scale, I can't possibly see how they could afford to screw up on this and go with an untested idea that could very well end up being a huge money sink they wouldn't be able to dig themselves out of - one that could doom the entire company and all its investors given the cost it could run to.

    And of course, for such a big project, they should hire people who would already know how to do something like this (which is not a dig, it's just crazy to skimp on staff when you have an ambitious project which requires large amounts of capital investment).

    That said...

    If I were going to do large-scale storage on the cheap, depending on the design of the software and the specific requirements (particularly if I was also developing the software we were going to use, or was able to set feature requirements and/or make the modifications myself), I would build the largest standard file shares I could with SATA disks (using commodity hardware, hot-swappable, running Linux, with front-loading drive bays).

    The specifics of handling the load balancing (via multiple front ends, multiple mount points, pre-determined hashing to balance things out, proxies/caches, hooks in the file system calls, hooks in the application to talk to a controller, etc.) depend entirely on the sort of application, however.

    It's definitely likely to be far easier (and more cost-effective) to have the software take care of knowing where the data is stored, rather than trying to build a single really large file share. I know of at least one very well-known large company that has gone down this route (with essentially elaborately hacked-up versions of common OS software).

    The downside is you have to support whatever hack you come up with to do this, but that shouldn't be an enormous amount of work (and you can probably afford to hire someone to support it full time for significantly less than the cost of a support contract for a commercial solution).
  • Why one volume? (Score:3, Informative)

    by photon317 ( 208409 ) on Tuesday October 25, 2005 @04:34PM (#13874995)

    What's making your question hard is the "make it like one volume" restriction. The problem is trivial otherwise. If I were you, I'd be asking whoever tasked you with this to *really* justify on a technical level why they need it to appear as a single volume, since that makes all the possible solutions slower, more costly, and more difficult to maintain.

    Chances are extremely high that what they really want is a "/bigfatfs" directory visible everywhere in which they will store many discrete items in subdirectories by project or by dataset or by user. You should convince them to let you build it from commodity machines serving a few TB each, mounted as separate filesystems underneath that umbrella directory. Then your only challenge is coherent management of the namespace of mountpoints for consistency across the environment (there are longstanding tools for this, like autofs + LDAP, NIS, NIS+, whatever), plus administration and assignment of new space requests within your cluster. That could be scripted to allocate automatically from the least-used volume which can satisfy the request (where "least used" could mean space, or activity hotness based on the metrics you're logging).
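    The allocation script suggested above could start as small as this sketch. The /bigfatfs paths and the "most free space wins" policy are assumptions for illustration:

    ```shell
    # Probe free space (in KB) on the filesystem backing a mount point.
    free_kb() {
        df -Pk "$1" | awk 'NR==2 {print $4}'
    }

    # pick_volume NEEDED_KB VOL...
    # Print the volume with the most free space that can still satisfy the
    # request; print nothing (and return nonzero) if no volume fits.
    pick_volume() {
        needed_kb=$1; shift
        best=""; best_free=0
        for vol in "$@"; do
            free=$(free_kb "$vol")
            if [ "$free" -ge "$needed_kb" ] && [ "$free" -gt "$best_free" ]; then
                best=$vol
                best_free=$free
            fi
        done
        [ -n "$best" ] && echo "$best"
    }
    ```

    Swapping the `free_kb` probe for one driven by logged activity metrics would give the "hotness"-based variant of the same policy.
    
    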
  • by painehope ( 580569 ) on Tuesday October 25, 2005 @05:02PM (#13875313)
    GPFS []
    Take it from someone who's messed with nearly every storage product on the market: if you want something that works fairly simply, performs at close to spindle speed (meaning the file system is not the bottleneck - if you have 10 GB/sec of storage bandwidth, expect to see near that with proper tuning), is very stable (compared to most storage solutions on the market - bear in mind that most storage products are aimed at large-block sequential I/O, and fall down - either performance-wise or stability-wise - when you throw other I/O patterns or combinations of patterns at them), and is portable across nearly any Linux distribution (with varying amounts of difficulty; I have had to hack their kernel patches before when using an unsupported kernel), GPFS is the one. Of course, the problem there is that I believe it's pretty expensive to run on non-IBM hardware. But if you have IBM hardware (even if it's not the hardware you're running the FS on) or some sort of in with IBM, they'll let you have it for a song and a dance.

    Having said that, Lustre [] is getting there. I'd say it's the equal of GPFS in performance (as a parallel filesystem - I believe it is even more flexible as a distributed filesystem), probably scales roughly the same (I haven't played with it in a large installation, so I can't tell you beyond looking at the architecture), and is going to be the biggest player on the market in the future. It's also free (IIRC Cluster File Systems sells support, but the code is freely available) and not tied to IBM and whatnot, like GPFS is. Of course, HP has a big connection with Lustre, but not ownership thereof.

    Those are really the only two that I would consider for a serious high-performance storage project. If you don't need great performance, that's when you can start looking at things like GFS, ADIC's StorNext, Ibrix, etc.

    Oh, Gautham Sastri ( of former Maximum Throughput fame ) has a newer company called Terrascale, I recall them putting on a presentation at the 2003 or 2004 ( can't remember ) Supercomputing conference ( SC2005 is coming up in a few weeks, yeah!!! ) which showed pretty good performance ( relative to the small system they were using ), not sure how they're coming along...

    Anyways, good luck...and don't forget to use Iozone [] to benchmark the damn thing!
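    A plausible Iozone invocation for a volume like this might look as follows. The file and record sizes are illustrative; the test file should dwarf the head unit's RAM so the page cache doesn't flatter the numbers:

    ```shell
    # Sequential write/rewrite (-i 0) and read/reread (-i 1) with 64 KB
    # records on a 16 GB test file placed on the clustered volume.
    iozone -i 0 -i 1 -r 64k -s 16g -f /storage/iozone.tmp
    ```
    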
  • by Mars Ultor ( 322458 ) on Tuesday October 25, 2005 @05:03PM (#13875331) Homepage
    Why not store the data randomly in a dilithium matrix with asynchronous data transfer and AJAX? Maybe some Ruby on Rails too - I hear that's hot right now. Of course, you'd have to make use of a couple of Heisenberg compensators configured in parallel to account for any memory addressing issues, but no need to state the obvious there.
  • by buss_error ( 142273 ) on Tuesday October 25, 2005 @05:48PM (#13875867) Homepage Journal
    Sounds like the PHBs have been at this. First, *why* does it have to be a single file system? With Oracle, MySQL, and MS-SQL you can do partitioning, if your need is databases. If your need is really a monolithic file, then I'll bet that the single file size won't be multi-hundreds of gigs.

    In short, your stated objective smells. Not enough data.

    WHAT is going to be done (database, file storage?)

    HOW will it be accessed? (One large file, many smaller files)

    WHEN will it be accessed? (During business hours, distributed over the day?)

    AVERAGE TRANSFERS - will the whole schmear come over, selected parts?

    SECURITY a concern? (Sensitive data, protected network)

    BACKUP - a petabyte of tape storage is expensive, and takes quite a while to do.

    POWER - do you have enough?

    COOLING - ditto

    SPACE - ditto - my $DAYJOB computer room is about 3000 sq ft... and we're going to be using all of it within 12 months.

    That said, if you go with big drives over a lot of systems, use lots-o-NICs to keep the NIC from being the bottleneck. A single gigabit connection sounds fine, but wait until you have hundreds of people going for files at once. It'll get swamped. And swear off V-SAN from Cisco. Not worth it at all.

  • PetaBox? (Score:3, Informative)

    by mr_zorg ( 259994 ) on Tuesday October 25, 2005 @06:46PM (#13876370)
    The PetaBox, as previously discussed on Slashdot [] sounds like just what you want...
  • by anon mouse-cow-aard ( 443646 ) on Wednesday October 26, 2005 @06:26AM (#13879237) Journal [] Run a client on linux boxes with user-mode drivers that provide a logical abstraction for a whole network of backend linux boxes over any networking transport you want.
