Open Source Highly Available Storage Solutions?
Gunfighter asks: "I run a small data center for one of my customers, but they're constantly filling up different hard drives on different servers and then shuffling the data back and forth. At their current level of business, they can't afford to invest in a Storage Area Network of any sort, so they want to spread the load of their data storage needs across their existing servers, like Google does. The only software packages I've found that do this seamlessly are Lustre and NFS. The problem with Lustre is that it has a single metadata server unless you configure fail-over, and NFS isn't redundant at all and can be a nightmare to manage. The only thing I've found that even comes close is Starfish. While it looks promising, I'm wondering if anyone else has found a reliable solution that is as easy to set up and manage? Eventually, they would like to be able to scale from their current storage usage levels (~2TB) to several hundred terabytes once the operation goes into full production."
On normal hardware you can (Score:4, Informative)
Export the disks from each server as iSCSI targets; on the client nodes you can then use OCFS2 or Red Hat's GFS for shared access to those iSCSI devices.
You should also configure proper fencing and locking methods (read the OCFS2 or GFS manuals for details). A rough sketch of the client-side steps is below.
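For illustration only, here's a minimal sketch of what the client side might look like, assuming open-iscsi and ocfs2-tools are installed and the O2CB cluster (including fencing) is already configured; the portal address, target IQN, device name, and mountpoint are made-up placeholders:

    #!/usr/bin/env python3
    # Rough sketch: attach an iSCSI-exported disk and mount it with
    # OCFS2 on a client node. All names below are hypothetical, and
    # the O2CB cluster setup (cluster.conf, fencing) is assumed done.
    import subprocess

    PORTAL = "192.168.1.10"                        # server exporting the disk
    IQN = "iqn.2007-01.com.example:storage.disk1"  # hypothetical target
    DEVICE = "/dev/sdb"                            # device the LUN appears as
    MOUNTPOINT = "/srv/shared"

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Discover and log in to the iSCSI target (open-iscsi tools).
    run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)
    run("iscsiadm", "-m", "node", "-T", IQN, "-p", PORTAL, "--login")

    # Mount the shared-disk filesystem. Every node mounting it must be
    # a member of the same O2CB cluster, or you risk corruption.
    run("mount", "-t", "ocfs2", DEVICE, MOUNTPOINT)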
Then configure failover ... (Score:3, Informative)
OpenFiler and Apple's Xserve RAID (Score:3, Informative)
Apple's Xserve RAID and OpenFiler (openfiler.org). The Xserve RAID is basically an LSI Logic Engenio RAID at a very low price, and you can't beat OpenFiler for free. At 10.5 TB, the Xserve RAID works out to about $1.31 a GB.
I know several people who back up their NetApps to this setup, or just use it for storage where they don't require what NetApp offers and don't want to spend $25k+.
GlusterFS (Score:5, Informative)
Its design is simple and smart. Every feature is a translator that stacks on other translators, so you can organize your filesystem the way *you* want it.
Let me give you an example: they have two translators, 'unify', which aggregates hard drives into one volume, and 'afr', which does automatic file replication. Depending on the order you stack them, you get two completely different setups: two clusters replicating each other, or a cluster built from pairs of replicating servers. A sketch of the replicating case is below.
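Here's a rough sketch of the replicating case, assuming GlusterFS 1.x-style volfiles; the hostnames and brick names are hypothetical, so check the docs for your version's exact syntax:

    #!/usr/bin/env python3
    # Illustrative sketch only: writes a GlusterFS 1.x-style client
    # volfile that mirrors two server bricks with the 'afr' translator.
    # Hostnames and brick names are hypothetical placeholders.

    VOLFILE = """\
    volume remote1
      type protocol/client
      option transport-type tcp/client
      option remote-host server1.example.com
      option remote-subvolume brick
    end-volume

    volume remote2
      type protocol/client
      option transport-type tcp/client
      option remote-host server2.example.com
      option remote-subvolume brick
    end-volume

    # afr on top of the two remotes = every file lives on both servers.
    # Stacking unify and afr in the other order gives the other layout
    # described above.
    volume mirror
      type cluster/afr
      subvolumes remote1 remote2
    end-volume
    """

    with open("/etc/glusterfs/client.vol", "w") as handle:
        handle.write(VOLFILE)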
Besides its features and design, its development team is *very* friendly. Yesterday a user asked for a feature on the devel list and got an answer saying: good idea, I'll do it.
Very good software.
Take a look: http://www.gluster.org/glusterfs.php [gluster.org]
Just buy bigger servers (Score:4, Informative)
But with 2 TB currently and scaling to perhaps a few hundred TB in the future, the obvious simple solution is to just buy bigger servers. With modern gear you can connect a frightening amount of storage to a single server at modest cost: say a rackmount box with space for 12 drives, plus SAS card(s) with external connector(s) so you can chain together multiple enclosures. Taking Dell as an example (just what I quickly found with Google: http://www.dell.com/downloads/global/power/ps3q0 ...), some rough numbers are sketched below.
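To put hypothetical numbers on that (drive size, bay count, and enclosure fan-out are all assumptions, adjust them to whatever your vendor actually sells):

    #!/usr/bin/env python3
    # Rough capacity arithmetic for the "one big server" approach.
    # Every figure here is an assumption, not a vendor quote.

    DRIVE_TB = 0.75            # 750 GB SATA drives
    BAYS_PER_ENCLOSURE = 12
    ENCLOSURES_PER_CHAIN = 4   # enclosures daisy-chained per SAS card
    SAS_CARDS = 2

    per_enclosure = DRIVE_TB * BAYS_PER_ENCLOSURE
    total = per_enclosure * ENCLOSURES_PER_CHAIN * SAS_CARDS
    print(f"{per_enclosure:.1f} TB per enclosure, ~{total:.0f} TB raw on one server")
    # prints: 9.0 TB per enclosure, ~72 TB raw on one server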
When needs grow beyond one server, clever use of automount maps lets you manage the namespace for multiple servers more easily than doing it all by hand; see the sketch below.
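A minimal sketch of that, assuming autofs is installed; the hostnames, export paths, and mount options are hypothetical:

    #!/usr/bin/env python3
    # Illustrative sketch: generate autofs maps that stitch NFS exports
    # from several servers into a single /data namespace. Hostnames and
    # export paths are hypothetical.

    entries = {
        "projects": "fileserv1:/export/projects",
        "scratch":  "fileserv2:/export/scratch",
        "archive":  "fileserv3:/export/archive",
    }

    # Point /data at the indirect map (appended so existing maps survive).
    with open("/etc/auto.master", "a") as master:
        master.write("/data  /etc/auto.data  --timeout=300\n")

    # Each key becomes /data/<key>, mounted on demand from its server.
    with open("/etc/auto.data", "w") as datamap:
        for key, target in sorted(entries.items()):
            datamap.write(f"{key}  -rw,hard,intr  {target}\n")

    print("now run: /etc/init.d/autofs reload")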
As for Lustre, it's really a specialized solution for HPC, made for multiple compute nodes striping to the storage nodes at full speed using a collective IO API like MPI-IO.
Try out MogileFS (Score:4, Informative)
We've been using MogileFS [danga.com] on commodity Linux servers for a few months now and it's been working great. The MogileFS community/mailing list is very active, so it's actually been fun to implement.
Right now we have 22.8 TB spread across six 2U servers using a mix of 400 and 500 GB SATA drives. The great thing is that we can lose an entire file server (or two) with no downtime or loss of data.
Another reason to like MogileFS is that it removes the need to maintain RAID arrays. A RAID-5 array made of 750 GB disks is very risky: even a high-end controller will take many hours to rebuild a degraded array, during which time you could lose another disk and be largely screwed. (This actually happened to us very early on, and we still lost 0.02% of our data after restoring from backup, which hurt.)
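If you want a feel for how it works underneath, here's a minimal sketch that talks to a tracker directly, assuming the classic line-based tracker protocol on TCP port 7001; the tracker hostname, domain, and key are hypothetical, and the real client libraries wrap all of this for you:

    #!/usr/bin/env python3
    # Minimal sketch: ask a MogileFS tracker for the HTTP paths of a
    # stored key. The protocol details are to the best of my memory;
    # hostname, domain, and key below are placeholders.
    import socket
    from urllib.parse import urlencode, parse_qs

    TRACKER = ("tracker1.example.com", 7001)

    def tracker_request(cmd, **args):
        with socket.create_connection(TRACKER) as sock:
            sock.sendall(f"{cmd} {urlencode(args)}\r\n".encode())
            reply = sock.makefile().readline().strip()
        status, _, payload = reply.partition(" ")
        if status != "OK":
            raise RuntimeError(f"tracker said: {reply}")
        return parse_qs(payload)

    # Files are replicated across storage nodes, so we normally get
    # several paths back; any one of them can serve the file over HTTP.
    reply = tracker_request("get_paths", domain="testdomain", key="some/photo.jpg")
    for i in range(1, int(reply["paths"][0]) + 1):
        print(reply[f"path{i}"][0])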
Take another look at NFS (Score:4, Informative)
I would not suggest cluster file systems such as Lustre for a small installation; they're generally designed to scale up to hundreds or thousands of servers, but not to scale down to a handful.
I don't think your question has enough detail (Score:5, Informative)
"High Availability" can mean a lot of things. The most important part of it, though, is "how highly available do you need?". Do you want to survive the loss of a server? Of a room? An office? A city?
Basically, you've got two options.
1. Homebuilt, possibly based around either Solaris (ZFS looks interesting) or a specialised Linux distribution. OpenFiler [openfiler.com] looks interesting but doesn't appear to get a lot of attention, so community support may be lacking. Unless you've already got the hardware, however, you'll need at least two reasonably large servers.
Depending on how crucial all this is to your employer (I'm assuming it's fairly crucial or you wouldn't be looking at HA systems in the first place), the level of support you have available to fall back on with this may or may not be acceptable.
In any case, if you're going to have to spend the amount of money involved in buying two large servers and paying for support on a Linux distro anyway, you may as well look at option 2.
2. An entry-level SAN.
Yes, I know you said you can't afford it. But I don't think the problem you're discussing can be easily tackled for zero cost, and if there's cost involved you'd be remiss in your duties not to cover every possible base.
I was faced with the same problem myself a few months ago. Eventually I concluded that there simply wasn't the business justification for highly-available storage - we could make do with servers with redundant power supplies and disks, and regular backups. However, I was surprised to find that an entry-level SAN from Dell (actually rebranded EMC units) isn't that much dearer than "buy two dirty great servers and run OpenFiler", and has the benefit that if you do need support, you don't run the risk of hardware and software support folks pointing the finger at each other, saying "it's not our problem, it's theirs".
Plus any half-decent SAN vendor will provide a clear upgrade path - if you roll your own, you'll have to figure out how you upgrade on your own when the time comes.
Finally, think of it like this.
Any business which relies on its backend systems to be solid and reliable should take any reasonable suggestion to maintain that reliability seriously. And by definition, this implies that storage must be reliable.
If it's that important to the business that your systems continue to operate in the face of extreme adversity, and you decided to save £1000 by taking the homebrew route, you're going to have a lot of justifying to do if the worst happens and your supposedly-HA system falls over. Particularly if your answer to "what are you doing about it?" is "I've posted a message to a forum and I'm awaiting a reply".
Realistically, the only way it can work is if you're competent enough to be able to fix even the worst outage yourself with little or no recourse to asking on forums (though reading documentation is OK). Even then, you should keep the system simple enough that it doesn't take several months of familiarising yourself with it before anyone else has a chance of fixing it; otherwise all you've done is moved the point of failure from the hardware to yourself.
The alternative answer "I've placed an emergency support call with our suppliers and they should be ringing me back within the hour" carries a heck of a lot more weight.
Re:Free is not necessarily as in free beer (Score:1, Informative)
You may just be out of date - Lustre development hasn't used the Ghostscript-style "old versions are open source" model for ages now.
But if you want HA Lustre, you still need HA-grade and doubled-up-for-failover hardware with shared and raided block devices. Lustre scales much better than virtually anything else, but it's not particularly cheaper hardware-wise if you're using it for HA.
If you want commercial software, HP will sell you quality-assured Lustre and decent hardware in HA configurations, relabelled "HP SFS".
Re:Entry level SAN? (Score:4, Informative)
Full Disclosure: I'm one of the authors of the Starfish Filesystem.
Simply not true anymore, lukas84. High-availability solutions don't have to cost "big money". Starfish is the perfect example of such a system; in fact, it is THE reason we wrote Starfish: to provide an inexpensive, fault-tolerant, highly available clustered storage platform that works from the smallest website to the largest storage network. We've based the technology on the assumption that throwing expensive hardware and software at the problem is the wrong way to solve it.
Full HA environments do not need to be incredibly complex. If your HA solution is incredibly complex, you've done something wrong. Take a look at how easy it is to set up a Starfish file system:
Starfish QuickStart Tutorial [digitalbazaar.com]
That solution doesn't cost "big money", nor is it "incredibly complex".
Re:Just buy bigger servers (Score:2, Informative)
Full Disclosure: I'm one of the authors of the Starfish Filesystem.
High-availability solutions don't have to be complicated and expensive. Starfish is the perfect example of such a simple and low-cost system; in fact, it is THE reason we wrote Starfish: to provide an inexpensive, fault-tolerant, highly available clustered storage platform that works from the smallest website to the largest storage network. We've based the technology on the assumption that throwing expensive hardware and software at the problem is the wrong way to solve it.
Buying bigger servers and attaching massive storage systems to them is not a very good idea when it comes to reducing single points of failure in your HA network. You must assume hardware failure; it is going to happen. Once you have enough pieces of spinning metal you will hit the point at which you are losing a hard drive every day, and you will start losing whole machines at least once a month. Or worse: what happens when you lose one of your four "big servers" and 155 TB goes offline in an instant? Buying bigger and more expensive hardware is a "throw money at the problem and maybe it'll disappear" solution - wishful thinking at best. The system you describe is a nightmare scenario when it comes to HA, and I would strongly advise against solving a storage problem that way.
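To put rough numbers on the failure argument (the 3% annual replacement rate is an assumption; real-world rates vary widely, so plug in your own):

    #!/usr/bin/env python3
    # Back-of-envelope drive failure math. The annual replacement rate
    # is an assumed figure, not a measurement of any particular fleet.

    ANNUAL_FAILURE_RATE = 0.03   # assume 3% of drives replaced per year

    def days_between_failures(n_drives):
        failures_per_year = n_drives * ANNUAL_FAILURE_RATE
        return 365.0 / failures_per_year

    for n in (12, 100, 500, 12000):
        print(f"{n:6d} drives -> one failed drive every "
              f"{days_between_failures(n):6.1f} days")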
Not really. We've used it for years on several of our web clusters. It does a very good job of providing great I/O throughput, yes - but it is applicable to many more problems than that. It is a good filesystem back-end for any website that has to deal with a large amount of data. It might not be right for what you want to do with it, but that doesn't mean it should be pigeonholed as a "specialized solution for HPC".
Re:Just buy bigger servers (Score:2, Informative)
Hmmm... you seem to be concerned with a completely different class of problem than the one Starfish addresses. HA systems assume that your single server will fail eventually (which it will). There are many single points of failure in the scenario you describe (RAM, motherboard, a glitch in the redundant power supply). What happens when you need to take the machine down for maintenance? What happens when the power strip or the UPS the machine is plugged into fails? Your proposed solution also doesn't scale very well: connect 10 clients to a filesystem exported by your single redundant server and you have created a fantastic bottleneck in your system architecture. Some rough numbers are below.
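A back-of-envelope illustration of the bottleneck (the usable link speed and client count are hypothetical):

    #!/usr/bin/env python3
    # Back-of-envelope bandwidth math for the bottleneck argument.
    # The usable NIC throughput and client count are assumptions.

    GIGE_MB_S = 120   # roughly the usable throughput of one gigabit NIC
    CLIENTS = 10

    # One server: all clients share a single network pipe.
    print(f" 1 node  -> ~{GIGE_MB_S / CLIENTS:.0f} MB/s per client")

    # Spread the files over several storage nodes, as a clustered
    # filesystem does, and aggregate bandwidth grows with node count.
    for nodes in (2, 5, 10):
        print(f"{nodes:2d} nodes -> ~{GIGE_MB_S * nodes / CLIENTS:.0f} MB/s per client")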
I'm glad you said this - you are quite right. Most people do not account for what it costs in system administrator time to get this right.
Just because something has only recently been released to the public doesn't mean it is not stable and mature. You are drawing a false parallel between "time the software has been available to the public" and "stability".
We postulated that most web server clusters out there right now do not need more than 1 TB of back-end storage. We use Starfish internally for our own storage needs. The system is free under the previously mentioned conditions, and the source code is available. We are attempting to provide a solution to exactly the problem you state at the end of your post.