Building a Massive Single Volume Storage Solution?
An anonymous reader asks: "I've been asked to build a massive storage solution that will scale from an initial threshold of 25TB to 1PB, primarily on commodity hardware and software. Based on my past experience and research, the commercial offerings for such a solution become cost prohibitive, and the budget for the solution is fairly small. Some of the technologies I've been scoping out are iSCSI, AoE and plain clustered/grid computers with JBOD (just a bunch of disks). Personally I'm more inclined toward a grid cluster with a 1Gb interface, where each node will have about 1-2TB of disk space and each node is based on a low-power-consumption architecture. The next issue to tackle is finding a file system that can span all the nodes and yet appear as a single volume to the application servers. At this point data redundancy is not a priority; however, it will have to be addressed. My research has not yielded any viable open source alternative (unless Google releases GoogleFS), and I've looked into Lustre, xFS and PVFS. There are some interesting commercial products, such as the File Director from NeoPath Networks and a few others; however, the cost is astronomical.
I would like to know if any Slashdot readers have experience building out such a solution. Any help/ideas would be greatly appreciated!"
Petabox (Score:2, Insightful)
http://www.archive.org/web/petabox.php [archive.org]
There is now a company that seems to make the same design:
http://www.capricorn-tech.com/products.html [capricorn-tech.com]
I don't know what FS they use, but apparently it is redundant.
Why? (Score:2, Insightful)
Just because you are starting at 25TB doesn't mean you aren't building a 1PB solution.
You also need to figure out what kind of bandwidth you need. It's very seldom that people have 1PB of data that is accessed by one person occasionally. If some sort of USB or 1394 connection will work, you are much better off than requiring InfiniBand.
Like many "ask Slashdot" questions this is the last place you should be looking for help...
Stress the importance .... (Score:4, Insightful)
Unfortunately, I should think needing a solution which can scale up to a Petabyte (!) of disk-space and a "fairly small" budget are at odds with one another.
Maybe you need to make a stronger case to someone that if such a mammoth storage system is required, it needs to be a higher priority item with better funding?
Heck, the loss of such large volumes of data would be devastating (I assume it's not your pr0n collection) to any organization. Building it on the cheap and having no backup (*)/redundancy systems would be just waiting to lose the whole thing.
(*) I truly have no idea how one backs up a petabyte
IBRIX (Score:1, Insightful)
For the most part (Score:5, Insightful)
I would look at some lessons learned from Google. If you decide to go with some sort of homebrew solution based on a bunch of standard consumer disks, you will run into other problems besides money. The more disks you have running, the more failures you will encounter. So any system you set up has to be able to have drives fail all day and not require human intervention to stay up and running (unless you can get humans for cheap too).
Do It Right (Score:5, Insightful)
Look. Everyone wants a Lamborghini for the price of a Chevy. Cute. Yawn. Half of the Ask Slashdot questions are people who didn't find what they want at Walmart. Despite the amazing Slashdot advice, Ask Slashdot answers have somehow failed to put EMC, IBM, HP, etc. out of business. There is no free lunch.
Just call EMC, get a rep out, and give the paperwork to your boss. Do it today instead of 5 months from now and you will have a much better holiday season.
Note to moderators and other finger pointers: I did not say to BUY from EMC, I just said to show his boss how and why to do things the right way. It does not hurt to get quotes from the big vendors, mainly because the quote also comes with good, solid info that you can share with the PHBs. Despite what you think about "evil" tech sales persons and sales engineers, you actually can learn from them.
Re:Data redundancy REQUIRED (Score:4, Insightful)
Yup, time to pick up the phone. (Score:5, Insightful)
I know people get tired of hearing "call IBM" as a solution to these questions, but in general if you have some massive IT infrastructure development task and are so lost on it that you're asking the
It's not even a question of whether you could do it in-house or not; given enough resources, you probably could. It comes down to why you would want to do something like this yourselves instead of finding people who do it all the time, week after week, for a living, telling them what you want, getting a price quote, and getting it done. Sure seems like a better way to go to me.
Re:Why? (Score:2, Insightful)
Re:Petabox (Score:5, Insightful)
No Redundancy? (Score:5, Insightful)
Another, There Is. (Score:2, Insightful)
given 2PB = 1 Human Brain, non interlaced
1024 TB == 1 PB
1 TB == 1 PC Computer with 1200GB H/D, 2Gig RAM, Networking
If designing for cost, NOT speed:
1 DVD = 4.5GB
1 PB = 1024 TB = 1,048,576 GB
1 PC Computer, with a DVD drive like the one mentioned above.
1 Robotic CNC Arm, with DVD Gripper(tm)
1 Very Huge Wire Cage to hold DVD's like a Juke Box.
(This has been done before, but with Tapes)
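A quick back-of-the-envelope check of the jukebox idea, using the figures above:

```python
# Back-of-the-envelope DVD jukebox sizing, based on the numbers above.
DVD_GB = 4.5                # capacity of one single-layer DVD
PETABYTE_GB = 1024 * 1024   # 1 PB = 1024 TB = 1,048,576 GB

dvds_needed = PETABYTE_GB / DVD_GB
print(f"DVDs for 1 PB: {dvds_needed:,.0f}")   # roughly 233,000 discs
```

That's a lot of discs for the Robotic CNC Arm to shuffle, which is why this design trades speed for cost.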
I built a 1.7 TB for about $2000 (Score:3, Insightful)
Where all this could get terribly expensive is in power requirements: it takes less power to run a cage of hard drives than it does to run a network of PCs. I'd imagine that any money you save on hardware, you would spend on your power bill. Either way, you're looking at, bare minimum, about $30K to start for 25TB, and I would add another $10K of padding just to be safe, to pay for things like a UPS (which you want), a high-end switch (which you'll also need), cabling, etc. In other words, it's not cheap, and as my parent just said, it will probably be cheaper in the long run to have someone like IBM do it for you. Do you really want to be responsible for 25-1000 TB of data?
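To put rough numbers on the power argument (the wattage and price figures here are illustrative assumptions, not measurements):

```python
# Rough annual power-cost comparison: a network of PCs vs. a cage of drives.
# All wattage and price figures below are assumptions for illustration.
NODES = 25                 # ~25 nodes at 1 TB each for the initial 25 TB
WATTS_PER_PC = 150         # one whole low-power PC node (assumed)
WATTS_PER_DRIVE = 10       # one bare hard drive in a cage (assumed)
PRICE_PER_KWH = 0.10       # assumed electricity price, USD

HOURS_PER_YEAR = 24 * 365

def annual_cost(total_watts: float) -> float:
    """Yearly electricity cost for a constant load, in dollars."""
    return total_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

pc_cost = annual_cost(NODES * WATTS_PER_PC)
cage_cost = annual_cost(NODES * WATTS_PER_DRIVE)
print(f"PC cluster: ${pc_cost:,.0f}/yr, drive cage: ${cage_cost:,.0f}/yr")
```

Even with these rough numbers, the PC-per-node approach costs an order of magnitude more to power than a cage of bare drives.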
Re:Scale (Score:4, Insightful)
Re:Do It Right (Score:2, Insightful)
Can the company funding this really afford this? (Score:3, Insightful)
If you've been asked to do something like this by a company that can afford to buy one of the commercial off-the-shelf high-volume storage solutions, then I honestly can't imagine any solution they try to knock up themselves will actually work (as I'm not aware of any free software solution that's currently up to the task).
If your company doesn't have / can't raise the capital to buy a commercial system for a project of this scale, I can't possibly see how they could afford to screw up by going with an untested idea that could very well end up being a huge money sink they wouldn't be able to dig themselves out of - one that could doom the entire company and all its investors, given the cost it could run to.
And of course, for such a big project, they should hire people who would already know how to do something like this (which is not a dig, it's just crazy to skimp on staff when you have an ambitious project which requires large amounts of capital investment).
That said...
If I were going to do large-scale storage on the cheap, then depending on the design of the software and the specific requirements (particularly if I was also developing the software we were going to use, or was able to set feature requirements and/or make the modifications myself), I would build the largest standard file shares I could with SATA disks (using commodity hardware, hot-swappable, running Linux, with front-loading drive bays).
The specifics of handling the load balancing (via multiple front ends, multiple mount points, pre-determined hashing to balance things out, proxies/caches, hooks in the file system calls, hooks in the application to talk to a controller, etc.) depend entirely on the sort of application, however.
It's definitely likely to be far easier (and more cost-effective) to have the software take care of knowing where the data is stored, rather than trying to build a single really large file share. I know at least one very well-known large company that went down this route (with essentially elaborately hacked-up versions of common OS software).
The downside is you have to support whatever hack you come up with to do this, but that shouldn't be an enormous amount of work (and you can probably afford to hire someone to support it full time for significantly less than the cost of a support contract for a commercial solution).
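One way to sketch the "pre-determined hashing" approach mentioned above, assuming a fixed list of file-share hosts (the hostnames are made-up placeholders):

```python
import hashlib

# Map a file path to one of N storage shares with a stable hash.
# Hostnames below are hypothetical placeholders, not real servers.
SHARES = ["store01", "store02", "store03", "store04"]

def share_for(path: str) -> str:
    """Return the share that should hold this path."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return SHARES[int(digest, 16) % len(SHARES)]

# The same path always maps to the same share, so the application
# layer knows where data lives without asking a central controller.
print(share_for("/data/2005/logs.tar.gz"))
```

Note that this naive modulo scheme reshuffles most paths whenever a share is added or removed, which is one reason real deployments move to consistent hashing or a lookup service as they grow.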
Good point, bad data (Score:3, Insightful)
Same with disk drives — most failures will be clustered around the 5-7 year mark. Not that your attitude towards redundancy is wrong. Just as people sometimes die in infancy, some disk drives break down quickly. So there's a chance that you'll lose some drives from your thousand-disk system in the first year.
How big a chance? To answer that question, you need more statistics about drive failure — and a much better grasp of probability theory.
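As a sketch of the kind of calculation the parent means (the annual failure rate here is an assumed figure, not vendor data):

```python
# Chance of losing at least one drive in a year across a big array.
# The AFR below is an illustrative assumption, not a measured figure.
AFR = 0.03      # assumed 3% annual failure rate per drive
DRIVES = 1000

p_no_failures = (1 - AFR) ** DRIVES      # every drive survives the year
p_at_least_one = 1 - p_no_failures
expected_failures = AFR * DRIVES

print(f"P(at least one failure): {p_at_least_one:.6f}")  # effectively 1.0
print(f"Expected failures/year:  {expected_failures:.0f}")
```

With a thousand drives, at least one failure per year is a near-certainty, and you should plan for dozens — which is why redundancy can't stay a "later" item.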
Re:call EMC. i am sure their clarion line will han (Score:3, Insightful)
and rebooting.
EMC is obsolete. Their customers just haven't discovered it yet.
AFS Rocks- Now stop (Score:5, Insightful)
Having said all this- If you are still intent on finding a good file system then use AFS. It's probably your best free solution. If you want to sleep at night call EMC.
-sirket
Re:GPFS from IBM (Score:3, Insightful)
Definitely give GFS a thorough shakedown before you decide to implement it; I've heard some horror stories.
Get out now!! (Score:2, Insightful)
There's a reason why Terabyte storage arrays for commercial applications cost a lot of money, and why consulting services from IBM, EMC, Hitachi, etc. have the huge per-hour cost. If you/your management can't see that, you really have no business being there. Sure, anyone can throw a JBOD RAID together for a thousand bucks, but I wouldn't trust anything more important than MP3s to it.
Who let the PHBs out? (Score:3, Insightful)
In short, your stated objective smells. Not enough data.
WHAT is going to be done (database, file storage?)
HOW will it be accessed? (One large file, many smaller files)
WHEN will it be accessed? (During business hours, distributed over the day?)
AVERAGE TRANSFERS - will the whole schmear come over, or just selected parts?
SECURITY a concern? (Sensitive data, protected network)
BACKUP - a petabyte of tape storage is expensive, and takes quite a while to do.
POWER - do you have enough?
COOLING - ditto
SPACE - ditto - my $DAYJOB computer room is about 3000 sq ft... and we're going to be using all of it within 12 months.
That said, if you go with big drives over a lot of systems, use lots-o-nics to keep the NIC from being the bottleneck. A single gig connection sounds fine, but wait until you have hundreds of people going for files at once. It'll get swamped. And swear off V-SAN from Cisco. Not worth it at all.
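To get a rough feel for how quickly a single gigabit NIC saturates (the per-client rate is an assumed figure):

```python
# How many concurrent readers saturate one gigabit link?
# Per-client rate is an illustrative assumption.
LINK_MBPS = 1000        # single gigabit NIC, ignoring protocol overhead
PER_CLIENT_MBPS = 40    # assumed sustained per-client transfer rate

clients_at_saturation = LINK_MBPS // PER_CLIENT_MBPS
print(f"Clients before the NIC saturates: {clients_at_saturation}")  # 25
```

A few dozen active clients is all it takes, which is the case for bonding multiple NICs per node.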
Re:Oracle, also (Score:3, Insightful)
I second that.
Starting at 25TB and scaling to 1PB? And you want it cheap? If it was cheap to do that sort of thing, we'd all be lining up to get one of our own(*).
Seriously, though, you don't really specify how cheap you are expecting to get it for. What are your expectations, and just how far over-budget are the options you've looked at already? Do you really need 25TB/1PB in one volume, or could it be achieved by splitting it into smaller chunks and working out some sort of load-sharing system?
And in any case, what on Earth kind of data do they anticipate will take a petabyte of contiguous storage????
[(*) Yes, I'm aware that in X years, someone's going to be looking back at this in the
Re:Petabox (Score:4, Insightful)
Interesting to think about. My brain probably holds about a petabyte of memories and it uses 20-60 watts. Mostly from sugar.
Re:Oracle, also (Score:3, Insightful)
If you don't want to participate, don't. Stop stuffing the threads with posts about how lame everyone's questions, knowledge and motivations are.
I'm actually interested in what people have thought about this very topic, AND I'm not a petabyte database expert. So it's news to me. And probably is to you as well.
Re:Apple Xserve? (Score:3, Insightful)
Only on fucking Slashdot.
Re:Controllers! (Score:4, Insightful)
Re:That's not MTBF, this is.. (Score:1, Insightful)
Secondly, you generally can't mix drive types, as they tend not to be exactly the same size. This will really mess up any attempts to rebuild a failed drive, or redundancy in general. Additionally, most "hot-swap" array solutions require drives of a specific mounting type and form-factor, which is going to throw that idea out the window.