Data Storage

Does ZFS Obsolete Expensive NAS/SANs? 578

hoggoth writes "As a common everyman who needs big, fast, reliable storage without a big budget, I have been following a number of emerging technologies and I think they have finally become usable in combination. Specifically, it appears to me that I can put together the little brother of a $50,000 NAS/SAN solution for under $3,000. Storage experts: please tell me why this is or isn't feasible." Read on for the details of this cheap storage solution.


Get a CoolerMaster Stacker enclosure like this one (just the hardware, not the software) that can hold up to 12 SATA drives. Install OpenSolaris and create ZFS pools with RAID-Z for redundancy. Export some pools with Samba for use as a NAS. Export some pools with iSCSI for use as a SAN. Run it over Gigabit Ethernet. Fast, secure, reliable, easy to administer, and cheap. Usable from Windows, Mac, and Linux. As a bonus, ZFS lets me create daily or hourly snapshots at almost no cost in disk space or time.
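As a rough sketch of how those pieces fit together (the pool name, disk names, and sizes here are hypothetical, and the shareiscsi property assumes a recent OpenSolaris/Solaris Express build):

  # Pool of six disks with single-parity RAID-Z
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

  # A filesystem for NAS-style file sharing; point a Samba [share] at /tank/files
  zfs create tank/files

  # A 500GB volume exported as an iSCSI target for SAN-style use
  zfs create -V 500g tank/vol0
  zfs set shareiscsi=on tank/vol0

  # Hourly or daily snapshots are one command and nearly free under copy-on-write
  zfs snapshot tank/files@$(date +%Y%m%d-%H%M)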

Total cost: 1.4 Terabytes: $2,000. 7.7 Terabytes: $4,200 (Just the cost of the enclosure and the drives). That's an order of magnitude less expensive than other solutions.

Add redundant power supplies, NIC cards, SATA cards, etc as your needs require.
This discussion has been archived. No new comments can be posted.

  • ZFS (Score:5, Informative)

    by Anonymous Coward on Wednesday May 30, 2007 @08:05AM (#19319859)
    It should also be noted that FreeBSD has added ZFS support to CURRENT (v7). It's built on top of GEOM, too, so if you know what that is you can leverage it underneath ZFS.
  • by BigBuckHunter ( 722855 ) on Wednesday May 30, 2007 @08:08AM (#19319881)
    For quite a while now, it has been less expensive to build a DIY file server than to purchase NAS equipment. I personally build gateway/NAS products using Via C7/8 boards as they are low power, have hardware encryption, and are easy to work with under Linux. Accessory companies even make backplane drive cages for this purpose that fit nicely into commodity PCs.

    BBH
  • by alen ( 225700 ) on Wednesday May 30, 2007 @08:09AM (#19319885)
    The place where I work looked at one of these things from another company. We did the math and it's too slow, even over gigabit, for database and Exchange servers. OK for regular file storage, but not for heavy I/O needs.
  • Current issues (Score:5, Informative)

    by packetmon ( 977047 ) on Wednesday May 30, 2007 @08:12AM (#19319903) Homepage
    I've snipped out the worst issues, as per the Wikipedia entry:

    • A file "fsync" will commit to disk all pending modifications on the filesystem. That is, an "fsync" on a file will flush out all deferred (cached) operations to the filesystem (not the pool) in which the file is located. This can make fsync() calls slow when running alongside a workload that writes a lot of data to the filesystem cache.
    • ZFS encourages creation of many filesystems inside the pool (for example, for quota control), but importing a pool with thousands of filesystems is a slow operation (can take minutes).
    • ZFS filesystem on-the-fly compression/decompression is single-threaded. So, only one CPU per zpool is used.
    • ZFS eats a lot of CPU when doing small writes (for example, a single byte). There are two root causes, currently being solved: a) Translating from znode to dnode is slower than necessary because ZFS doesn't use translation information it already has, and b) Current partial-block update code is very inefficient.
    • ZFS Copy-on-Write operation can degrade on-disk file layout (file fragmentation) when files are modified, decreasing performance.
    • ZFS blocksize (recordsize) is configurable per filesystem, currently 128KB by default. If your workload reads/writes data in fixed-size blocks, for example a database, you should (manually) configure the ZFS blocksize to equal the application blocksize, for better performance and to conserve cache memory and disk bandwidth (see the example after this list).
    • ZFS only offlines a faulty hard disk if it can't be opened. Read/write errors or slow/timed-out operations are not currently used in the faulty/spare logic.
    • When listing ZFS space usage, the "used" column only shows non-shared usage. So if some of your data is shared (for example, between snapshots), you don't know how much is there. You don't know, for example, which snapshot deletion would give you more free space.
    • Current ZFS compression/decompression code is very fast, but the compression ratio is not comparable to gzip or similar algorithms.
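    For the blocksize item above, the tuning amounts to one property; a minimal sketch, assuming a hypothetical pool named tank and a database with 8KB pages:

      # Set the recordsize before loading data, so new files use the smaller block size
      zfs create tank/db
      zfs set recordsize=8k tank/db
      zfs get recordsize tank/db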
  • Real SANs do more (Score:5, Informative)

    by PIPBoy3000 ( 619296 ) on Wednesday May 30, 2007 @08:13AM (#19319909)
    For starters, our SAN uses extremely fast connectivity. It sounds like you're moving your disk I/O over the network, which is a fairly significant bottleneck (even Gb). We also have the flexibility of multiple tiers - 1st tier being expensive, fast disks, and 2nd tier being cheaper IDE drives. I imagine you can fake that a variety of ways, but it's built in. Finally, there's the enclosure itself, with redundant power and such.

    Still, I bet you could do what you want on the cheap. Being in health care, response time and availability really are life-and-death, but many other industries don't need to spend the extra. Best of luck.
  • by Tester ( 591 ) <olivier.crete@oc ... .ca minus author> on Wednesday May 30, 2007 @08:14AM (#19319913) Homepage
    A good $20k RAID array does much more. First, it doesn't use cheap SATA drives, but Fibre Channel or even SAS drives, which are tested to a higher level of quality (each disk costs $500 or more). And those cheap SATA drives also react much more poorly to non-sequential access (like when you have multiple users). They are unusable for serious file serving. You can never compare RAID arrays that use SATA/IDE to ones that use enterprise drives like FC/SCSI/etc, because the drives are quite different.

    Then you have the other features like dual redundant everything: controllers, power supplies, etc. Then there are the thermal capabilities of rack-mount enclosures, which are in a different class from a desktop SATA case, etc, etc.
  • No (Score:5, Informative)

    by iamredjazz ( 672892 ) on Wednesday May 30, 2007 @08:19AM (#19319955)
    Speaking from personal experience - this file system is far from ready. It can kernel panic and reboot after minor IO errors; we were hosed by it, and probably won't ever revisit it. This phenomenon can be repeated with a USB device; you might want to try it before you hype it. Try a Google search on it and see what you think... there is no fsck or repair, once it's hosed, it's hosed, and the recovery is to go to tape. http://www.google.com/search?hl=en&q=zfs+io+error+kernel+panic&btnG=Google+Search [google.com]
  • Reliable? (Score:5, Informative)

    by Jjeff1 ( 636051 ) on Wednesday May 30, 2007 @08:21AM (#19319979)
    Businesses buy SANs to consolidate storage, placing all their eggs in one basket. They need redundant everything, which this doesn't have. Additionally, SATA drives are not as reliable long term as SCSI. Compare the data sheets for Seagate drives; they don't even mention MTBF on the SATA sheet [seagate.com].
    Businesses also want service and support. They want the system to phone home when a drive starts getting errors, so a tech shows up at their door with a new drive before they even notice there are problems. They want to have highly trained tech support available 24/7 and parts available within 4 hours for as long as they own the SAN.
    Finally, the performance of this solution almost certainly pales compared to a real SAN. These are all things that a home-grown solution doesn't offer. Saving $47K on a SAN is great, unless it breaks 3 years from now and your company is out of business for 3 days waiting for a replacement motherboard off eBay.
    That being said, everything has a cost associated with it. If management is ok with saving actual money in the short term by giving up long term reliability and performance, then go for it. But by all means, get a rep from EMC or HP in so the decision makers completely understand what they're buying.
  • No but... (Score:4, Informative)

    by Junta ( 36770 ) on Wednesday May 30, 2007 @08:22AM (#19319989)
    ZFS does not obsolete NAS/SAN. However, in many, many instances a DIY file server has been more appropriate than a SAN or NAS since long before ZFS came along, and ZFS has done little to change that situation (though administering ZFS is more straightforward and in some ways more efficient than the traditional, disparate strategies for achieving the same thing).

    I haven't gotten the point of standalone NAS boxes. They were never fundamentally different from a traditional server, just with a premium price attached. I may not have seen the high-end stuff, however.

    SAN is an entirely different situation altogether. You could have ZFS implemented on top of a SAN-backed block device (though I don't know if ZFS has any provisions to make this desirable). SAN is about solid performance to a number of nodes with extreme availability in mind. Most of the time in a SAN, every hard drive is a member of a RAID, with each drive having two paths to power and to two RAID controllers in the chassis, each RAID controller having two uplinks to either two hosts or two FC switches, and each host having two uplinks to the two different controllers or to two FC switches. Obviously, this gets pricey for good reason, which may or may not be applicable to your purposes (frequently not), but the point of most SAN situations is no single point of failure. For simple operation of multiple nodes on a common block device, HA is used to decide which single node owns/mounts any FS at a given time. Other times, a SAN filesystem like GPFS is used to mount the block device concurrently among many nodes, for active-active behavior.

    For the common case of 'decently' available storage, a robust server with RAID arrays has for a long time been more appropriate for the majority of uses.
  • by ZorinLynx ( 31751 ) on Wednesday May 30, 2007 @08:25AM (#19320017) Homepage
    These overpriced drives aren't all that much different from SATA drives. They're a bit faster, but a HELL of a lot more expensive, and not worth paying more than double per gig.

    We have a Sun X4500 which uses 48 500GB SATA drives and ZFS to produce about 20TB of redundant storage. The performance we have seen from this machine is amazing. We're talking hundreds of gigabytes per second and no noticeable stalling on concurrent accesses.

    Google has found that SATA drives don't fail noticeably more often than SAS/SCSI drives, but even if they did, having several hot spares means it doesn't matter that much.

    SATA is a great disk standard. You get a lot more bang for your buck overall.
  • by tgatliff ( 311583 ) on Wednesday May 30, 2007 @08:32AM (#19320073)
    It is not my intention to offend, but I always love it when I hear the dreaded marketing phrase of hardware "tested to a higher level of quality".

    I work in the world of hardware manufacturing, and I can tell you that this "magical" extra testing process simply does not exist. Hardware failures are always expensive, and we do anything we can to prevent them. To do this, we build burn-in procedures based on what most call the 90% rule, but you really cannot guarantee more reliability beyond that. Beyond that point, better device design is what determines reliability. Any person who says differently either does not completely understand individual test harness processes or does not understand how burn-in procedures work.

    In short, more money is not necessarily better. Higher-volume designs typically are, though...
  • Re:Everyman? (Score:5, Informative)

    by Max von H. ( 19283 ) on Wednesday May 30, 2007 @08:35AM (#19320095)
    I'm a photographer and my RAW image files are 15MB each. From every shoot, I come back with 1 to 8GB worth of data to be processed. My workflow involves working on 16-bit TIFFs that weigh in excess of 40MB per file, and I'm not even counting the Photoshop work files. 40GB would last less than a week here.

    Not being rich, I have a couple of external HDs totalling a little less than 1TB, and it's nearly full. The rest is archived on DVD or transferred to HD for storage (cheaper, faster and more reliable than DVD).

    So yeah, I can easily imagine why any organisation dealing with huge media files would be interested. Heck, I'd be a client for a safe, multi-TB storage system if I could afford it... Not everybody only deals with text files for a living :P
  • ZFS is great, but... (Score:4, Informative)

    by Etherized ( 1038092 ) on Wednesday May 30, 2007 @08:42AM (#19320147)
    It's no NetApp - yet. One thing to realize is that the iSCSI target isn't even in Solaris proper yet - you have to run Solaris Express or OpenSolaris for the functionality. That may be fine for some people, but it's a deal-breaker for most companies - are you really going to place all those TB of data on a system that's basically unsupported? I'm sure Sun would lend you a hand for enough money, but running essentially a pre-release version of Solaris is a non-starter where real business is concerned. Even when the iSCSI target makes it into Solaris 10 - which should be in the next release - are you really comfortable running critical services off of essentially the first release of the technology?

    Furthermore, while ZFS is amazingly simple to manage in comparison to any other UNIX filesystem/volume manager, it still requires you to know how to properly administer a Solaris box in order to use it. Even GUI-centric sysadmins are generally able to muddle through the interface on a Filer, but ZFS comes with a full-fledged OS that requires proper maintenance. Your Windows admins may be fine with a NetApp - especially with all that marvelous support you get from them - but ask them to maintain a Solaris box and you're asking for trouble. Not to mention, since it's a real, general-purpose server OS, you'll have to maintain patches just like you do on the rest of your servers - and the supported method for patching Solaris is *still* to drop to single-user mode and reboot afterwards (yes, I know that's not necessarily *required*).

    Also, "zfs send" is no real replacement for SnapMirror. And while ZFS snapshots are functionally equivalent to NetApp snapshots, there is no method for automatic creation and management of them - it's up to the admin to create whatever snapshotting scheme they want to implement.

    Don't get me wrong - I love ZFS and I use it wherever it makes sense to do so. It may even be acceptable as a "poor man's Filer" right now, assuming you don't need iSCSI or any of the more advanced features of a NetApp. In fact, it's a really great solution for home or small office fileservers, where you just need a bunch of network storage on the cheap - assuming, of course, that you already have a Solaris sysadmin at your home or small office. Just don't fool yourself, Filer it ain't - at least not yet.
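    For what it's worth, a crude stand-in for that missing snapshot scheduler is a cron job; a minimal sketch, assuming a hypothetical tank/home filesystem:

      #!/bin/sh
      # Run hourly from cron; the timestamp keeps snapshot names unique
      /usr/sbin/zfs snapshot tank/home@$(date +%Y%m%d-%H00)
      # Pruning old snapshots is left to a companion job, e.g.:
      #   /usr/sbin/zfs destroy tank/home@20070423-0300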
  • by Wdomburg ( 141264 ) on Wednesday May 30, 2007 @08:46AM (#19320187)
    This doesn't strike me as having much to do with ZFS at all. You've been able to do a home grown NAS / SAN box for years on the cheap using commodity equipment. Take ZFS out of the picture and you just need to use a hardware raid controller or a block level RAID (like dmraid on Linux or geom on FreeBSD). There are even canned solutions for this, like OpenFiler [openfiler.com].

    That being said, this sort of solution may or may not be appropriate, depending on site needs. Sometimes support is worth it.

    You're also grossly overestimating the cost of an entry-level iSCSI SAN solution. Even going with EMC, hardly the cheapest of vendors, you can pick up a 6TB solution for about $15k, not $50k. Go with a second tier vendor and you can cut that number in half.
  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Wednesday May 30, 2007 @08:56AM (#19320273) Journal
    Some of these issues looked familiar, so I thought I'd do a basic comparison:

    Reiser4 had the same problems with fsync -- basically, fsync called sync. This was because their sync is actually a damned good idea -- wait till you have to (memory pressure, sync call, whatever), then shove the entire tree that you're about to write as far left as it can go before writing. This meant awesome small-file performance -- as long as you have enough RAM, it's like working off a ramdisk, and when you flush, it packs them just about as tightly as you can with a filesystem. It also meant far less fragmentation -- allocate-on-flush, like XFS, but given a gig or two of RAM, a flush wasn't often.

    The downside: Packing files that tightly is going to fragment more in the long run. This is why it's common practice for defragmenters to insert "air holes". Also, the complexity of the sync process is probably why fsync sucked so much. (I wouldn't mind so much if it was smarter -- maybe sync a single file, but add any small files to make sure you fill up a block -- but syncing EVERYTHING was a mistake, or just plain lazy.) Worse, it causes reliability problems -- unless you sync (or fsync), you have no idea if your data will be written now, or two hours from now, or never (given enough RAM).

    (ZFS probably isn't as bad, given it's probably much easier to slice your storage up into smaller filesystems, one per task. But it's a valid gotcha -- without knowing that, I'd have just thrown most things into the same huge filesystem.)

    There's another problem with reliability: Basically, every fast journalling filesystem nowadays is going to do out-of-order write operations. Entirely too many hacks depend on ordered writes (ext3 default, I think) for reliability, because they use a simple scheme for file updating: Write to a new temporary file, then rename it on top of the old file. The problem is, with out-of-order writes, it could do the rename before writing the data, giving you a corrupt temporary file in place of the "real" one, and no way to go back, even if the rename is atomic. The only way to get around this with traditional UNIX semantics is to stick to ordered writes, or do an fsync before each rename, killing performance.

    I think the POSIX filesystem API is too simplistic and low-level to do this properly. On ordered filesystems, tempfile-then-rename does the Right Thing -- either everything gets written to disk properly, or not enough to hurt anything. Renames are generally atomic on journalled filesystems, so either you have the new file there after a crash, or you simply delete the tempfile. And there's no need to sync, especially if you're doing hundreds or thousands of these at once, as part of some larger operation. Often, it's not like this is crucial data that you need to be flushed out to disk RIGHT NOW, you just need to make sure that when it does get flushed, it's in the right order. You can do a sync call after the last of them is done.
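    In shell form the pattern looks roughly like this (the filenames and the generate_config command are placeholders; plain shell has no per-file fsync, so the blunt sync(1) stands in for it):

      tmp=$(mktemp config.XXXXXX)   # write the new contents beside the target, never in place
      generate_config > "$tmp"      # placeholder for whatever produces the new data
      sync                          # crude: flushes everything; a C program would fsync() just the temp file
      mv -f "$tmp" config           # rename() is atomic, so readers see the old or the new file, never half of one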

    Problem is, there are tons of other write operations for which it makes a lot of sense to reorder things. In fact, some disks do that on a hardware level, intentionally -- it's called native command queuing. Using "ordered mode" is just another hack, and its drawback is slowing down absolutely every operation just so the critical ones will work. But so many are critical, when you think about it -- doesn't vim use the same trick?

    What's needed is a transaction API -- yet another good idea that was planned for someday, maybe, in Reiser4. After all, single filesystem-metadata-level operations are generally guaranteed atomic, so I would guess most filesystems are able to handle complex transactions -- we just need a way for the program to specify it.

    The fragmentation issue I see as a simple tradeoff: Packing stuff tightly saves you space and gives you performance, but increases fragmentation. Running a defragger (or "repacker") every once in awhile would have been nice. Problem is, they never got one written. Common UNIX (and Mac) philosoph
  • by pyite ( 140350 ) on Wednesday May 30, 2007 @08:57AM (#19320285)
    I guess this setup could replace some people's need for a turnkey NAS solution. But your thinking it could replace SAN solutions shows you haven't looked into SAN too much. To start, there's a reason Fibre Channel is way more popular than iSCSI. The financial services company I work for has about 3 petabytes of SAN storage, and not a drop of it is iSCSI. Storage Area Networks are special built for a purpose. They typically have multiple fabrics for redundancy, special purpose hardware (we use Cisco Andiamo, i.e., the 9500 series), and a special purpose layer 2 protocol (Fibre Channel).

    iSCSI adds the overhead of TCP/IP. TCP does a really nice job of making sure you don't drop packets, i.e. layer 3 chunks of data, but at the expense of possibly dropping frames, i.e. layer 2 data. The nature of TCP just does this, as it basically ramps up data sending until it breaks, then slows down, rinse and repeat. This also has the effect of increasing latency. Sometimes this is okay; people use FCIP (Fibre Channel over IP), for example. But sometimes it's not. Fibre Channel does not drop frames.

    In addition, Fibre Channel supports cool things like SRDF [wikipedia.org] which can provide atomic writes in two physically separate arrays. (We have arrays 100 km away from each other that get written basically simultaneously, and the host doesn't think its write is good until both arrays have written it.) So, like I said, this might be good for some uses, but not for any sort of significant SAN deployment.

  • by Britz ( 170620 ) on Wednesday May 30, 2007 @09:10AM (#19320399)
    Linux has had more performance testing on x86 than OpenSolaris (so you are not as likely to run into a bad bottleneck). On Linux you can create a RAID-1, -4, -5 or -6 under Multiple Device driver support in the kernel. You can then use mkraid to include all the drives you want. This code is not new at all. It was stable in 2.4, maybe even in 2.2.

    After that you just create a filesystem on top of the RAID. If you don't like ext3 or don't trust it, there is always xfs. I've had some rough times with reiserfs, xfs, and ext3, and from all that experience I would go with xfs for long-running server environments (now flame away for this little bit; use ext3 all you want).

    The advantage is that you use very well tested code.

    The problem comes with hot-swapping. I don't know if the drivers are up to that yet. But I also highly doubt that OpenSolaris SATA drivers for some low-price chip in a low-price storage box can deal with hot-swapping. So Linux might be ahead on that one.

    That is a setup I would compare to a plug'n'play SAN solution. And it totally depends on the environment. If the Linux box goes down for some reason for a couple of hours or days, how much will that cost you? If it is more than twice the cost of the SAN solution, you might just buy the SAN, and if it fails, just pull the disks and put them in the new one. I dunno if that would work on Linux.
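    For reference, with the newer mdadm tool (rather than the older raidtools' mkraid), a rough sketch of that stack with four hypothetical drives sdb through sde:

      # Four-drive RAID-5 array under the md driver
      mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # Filesystem on top of the array
      mkfs.xfs /dev/md0
      mkdir -p /export/storage
      mount /dev/md0 /export/storage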
  • by msporny ( 653636 ) * <msporny@digitalbazaar.com> on Wednesday May 30, 2007 @09:10AM (#19320409) Homepage

    Ever heard of Starfish? It's a new distributed clustered file system:

    Starfish Distributed Filesystem [digitalbazaar.com]

    From the website:

    Starfish is a highly-available, fully decentralized, clustered storage file system. It provides a distributed POSIX-compliant storage device that can be mounted like any other drive under Linux or Mac OS X. The resulting fault-tolerant storage network can store files and directories, like a normal file system - but unlike a normal file system, it can handle multiple catastrophic disk and machine failures.

    And you can build clusters at relatively low cost:

    For a 2-way redundant, RAID-1 protected, 1.0 Terabyte cluster: $2,000 (Jan 2007 prices). Per server, that breaks down into around $400 for an AMD 2.6GHz CPU, 1GB of memory, and a motherboard with integrated 100 megabit LAN connection, SATA support, 350 watt power supply and a commodity server enclosure. Four 500GB SATA hard drives will run you around $600. The cluster would ensure proper file system operation even in the catastrophic failure of a single machine. Hard drive failure rates could even approach 50% without affecting the Starfish file system.

    (warning: I work for the company that created Starfish)

    -- manu
  • by TheSunborn ( 68004 ) <mtilstedNO@SPAMgmail.com> on Wednesday May 30, 2007 @09:30AM (#19320579)
    But the Google File System is not available for purchase, which is a shame.

    And hiring a team to develop something similar to the Google File System is not cheap. Even high-end SANs will be cheaper.
  • by NSIM ( 953498 ) on Wednesday May 30, 2007 @09:35AM (#19320621)

    We're talking hundreds of gigabytes per second and no noticeable stalling on concurrent accesses.


    In which case you're talking complete rubbish. "Hundreds of gigabytes per second"? Just one GB/sec would need 4x2Gbit FC links all exceeding their peak theoretical throughput :-) Hundreds of MB/sec I can believe (just about, assuming the right access patterns).

  • by pla ( 258480 ) on Wednesday May 30, 2007 @09:36AM (#19320641) Journal
    If you need more than 1-3 TB, you can't use generic components

    Why not?

    Sure, a 16-channel SATA controller with RAID 0/1/5 will cost you $400. But that will handle, using 750GB drives that have recently entered the "affordable" range, a total of 12TB (or more practically, a 10.5TB RAID5 with one hot spare). Find an OEM that can set you up with that for under $5000 total.

    Now, that uses a PC chassis and wouldn't look "nice" in a rack. So what? If you need 10TB and don't want to blow $50k on it, you don't have a lot of choices... So if you insist on all racked equipment, buy a rack shelf kit and lay it on its side (and hide it with blanks if you care that much) ;-)
  • What amazes me is all the talk of iSCSI, but almost no mention of AoE (ATA over Ethernet).

    What you have is a box that exports block devices out over layer 2. Another device loads it as a block device and can then treat it in whatever fashion it could deal with any other block device. For example, I have 2 "shelves" of Serial ATA drives going. I have a third box on which I could either load Linux and use md to create RAID sets, or - what I've actually done - use the hardware on each of the two shelves, create a RAID-5 set on each, then use md to create a RAID-1 set out of the two RAID-5s. I then take my spankin' new md0 device, which is huge for my needs (7.5TB), use LVM to create a volume group (called 'office' for me), and that creates /dev/office. Then I create several LVs (logical volumes) of arbitrary size beneath *that*. So I have /dev/office/home, /dev/office/mp3, /dev/office/blah, etc.

    Now you can format those LVs like any other partition/slice. I've used xfs on all of mine, but you could use ext2/3 if you really wanted.
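    As a rough sketch of that layering (assuming the aoe driver exposes the two shelves as /dev/etherd/e0.0 and /dev/etherd/e1.0; names and sizes are illustrative):

      # Mirror the two shelf-level RAID-5 exports
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/etherd/e0.0 /dev/etherd/e1.0

      # Carve the mirror up with LVM
      pvcreate /dev/md0
      vgcreate office /dev/md0
      lvcreate -L 500G -n home office
      lvcreate -L 200G -n mp3 office

      # Format and mount one logical volume
      mkfs.xfs /dev/office/home
      mkdir -p /srv/home
      mount /dev/office/home /srv/home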
  • by StarfishOne ( 756076 ) on Wednesday May 30, 2007 @10:09AM (#19321053)
    Direct linkage for those who are interested:

    Google Releases Paper on Disk Reliability:
    http://hardware.slashdot.org/article.pl?sid=07/02/18/0420247 [slashdot.org]
  • Re:Reliable? (Score:5, Informative)

    by Znork ( 31774 ) on Wednesday May 30, 2007 @10:14AM (#19321095)
    "Additionally, SATA drives are not as reliable long term as SCSI."

    The CMU study by Bianca Schroeder and Garth A. Gibson would suggest otherwise. In fact, there was no significant difference in replacement rates between SCSI, FC, or SATA drives.

    "They want the system to phone home when a drive starts getting errors"

    Of course, the other recent study by Google showed that predictive health checks may be of limited value as an indicator of impending catastrophic disk failure.

    Basically, empirical research has shown that the SAN storage vendors are screwing their customers every day of the week.

    "Saving 47K on a SAN is great, unless it breaks 3 years from now"

    Of course, saving 47K on a SAN means you can easily triple your redundancy, still save money, and when it breaks, you have two more copies.

    At the same time, the guy spending the extra 47k on an 'Enterprise Class Ultra Reliable SAN' will get the same breakage 3 years from now, he won't have been able to afford all those redundant copies, and as he examines his contract with the SAN vendor, he notes that they actually don't promise anything.

    "But by all means, get a rep from EMC or HP in so the decision makers completely understand what they're buying."

    Premium grade bovine manure with (fake) gold flakes?

    Really, handing the decision makers several scientific papers and a few google search strings would leave them much better equipped to make a rational decision.

    Having several years' experience with the kind of systems you're talking about, I can just say that I've experienced several situations where, if we didn't have system-level redundancy, we would have suffered not only system downtime but actual data loss on expensive 'enterprise grade' SANs. That experience, as well as the research, has left me somewhat sceptical towards the claims of SAN vendors.
  • Re:Specifics please. (Score:4, Informative)

    by sixoh1 ( 996418 ) on Wednesday May 30, 2007 @10:22AM (#19321205) Homepage
    Designs are expensive, but components are not. My PCB designs can support several different bill-of-materials loads during manufacturing, and when the boards are destined for industrial or military use we can use 'screened' parts which have been pre-selected and tested at high temperature to ensure correct operation. Marginal parts at higher temps may be fine for consumer boxes (i.e. the ones on your desktop), but in a server box that has to work all the time, 24-7-365, they may not be a good idea. I've been frustrated with the exact scenario quizzed in the original topic: using Maxtor SATA II 500GB disks as a drop zone for my DLT backup machine, I've had the HDDs for less than a month and already 3 of 4 have failed with bad sectors, because they all sit in a PC case. I'm going to have to rig out the box with extra fans, and the hassle of pulling and replacing the disks is driving me crazy too, so now I'm adding removable disk bays. Not as easy or as cheap as I had anticipated (labor costs, mostly).
  • by Anonymous Coward on Wednesday May 30, 2007 @10:26AM (#19321271)
    "patching Solaris is *still* to drop to single user mode and reboot afterwards (yes, I know that's not necessarily *required*)"

    Now seriously, you must be a windows "admin"... In the rare case a reboot is necessary, you should know that you can patch an offline boot partition (via lu) and boot into it afterwards (whenever). The downtime is measured in seconds and you can boot back to the original un-patched partition if anything goes wrong.
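    A sketch of that Live Upgrade flow, with made-up boot environment, slice, and patch names (the exact lucreate options depend on how the disks are laid out):

      # Clone the running boot environment onto a spare root slice
      lucreate -n patched-be -m /:/dev/dsk/c0t0d0s4:ufs

      # Apply patches to the inactive clone while the system keeps running
      luupgrade -t -n patched-be -s /var/tmp/patches 123456-01

      # Activate the patched environment and reboot into it (downtime measured in seconds)
      luactivate patched-be
      init 6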

    That's unix administration for you.
  • Re:Specifics please. (Score:5, Informative)

    by Ngarrang ( 1023425 ) on Wednesday May 30, 2007 @10:28AM (#19321311) Journal
    Toleraen wrote, "So "Mission Critical" is just a myth too, right?"

    No system can compensate for bad management by people, but I digress.

    All data is critical. But, to say that your data is less safe with a system that cost $4700 than a system that cost $50,000 is fallacious without some heavy proof behind it. For now, I am going to ignore that a functional backup is part of "mission critical" and just address the online storage portion of the argument.

    Let's start with a server white box. Something with redundant power supplies, ethernet, etc. Put a mirrored boot drive in it. Install Linux. So far, the cost is fairly low. Add an external disk array, at least 15 slots, the ones with hot-swap, hot-spare, RAID 5, redundant power supplies and fill it with inexpensive (but large) SATA drives. Promise sells one, as do others. Attach to server, voila, a cheaper solution than EMC for serving up large amounts of disk space.

    What if a drive fails? The system recreates the data (it is RAID 5, after all) onto a hot spare. You remove the bad drive, insert a new one, and run the administration tool. The users won't even notice their MP3s and Elf Bowling game were ever in danger.

    For the people who believe strongly in really expensive storage solutions, please explain why. I would like to know if you also hold the same theory for your desktop PCs, because surely, a more expensive PC has to be better. Right?
  • Re:Specifics please. (Score:4, Informative)

    by hackstraw ( 262471 ) on Wednesday May 30, 2007 @10:32AM (#19321377)
    Hardware designs are expensive, so rarely are there multiple designs. Sales guys are selling you additional support, but the hardware is rarely different.

    True, but there is a difference. The difference is in QA.

    The "consumer-grade" and "business-grade" are the same off the shelf stuff, but if you are getting business-grade stuff from a reputable vendor they QA the consumer-grade parts, throw out the bad ones, and stamp "business-grade" on the ones that survive. This is why the business-grade level of products often are a generation or so behind the consumer-grade level.

    Yes, you can get lucky and get consumer-grade stuff that works great. But if it doesn't, then you are the QA guy, and the downtime is on you. If the time for you to do the QA and the associated downtime is cheaper than the cost of business-grade, then by all means do it. Otherwise, you have to pay the extra bucks.

    Now, regarding NASes, I think these things are overpriced, especially the maintenance on them. The maintenance goes through the roof once the equipment is beyond the MTBF of the drives, which is exactly where a high-dollar NAS should shine, right? Any piece of crap RAID box will work when all the drives are new and functioning well. What you are paying for is the redundancy and availability, which is money you hardly need to spend while all of the equipment is new.

  • Re:Everyman? (Score:5, Informative)

    by hoggoth ( 414195 ) on Wednesday May 30, 2007 @10:35AM (#19321417) Journal
    I am the original poster, and I am not actually a typical user.
    I routinely work with files that are 100 GB - 300 GB each.
    Just copying one file from drive to drive takes hours.
    I have about 4 Terabytes in use, with another 4 Terabytes for backup.

    My usage is the exact opposite of database usage (which most storage is optimized for).
    I need to copy huge sequential files. I rarely need many small reads or writes.

    Because of the long times it takes to move these files around, I think NFS or CIFS would be too slow. That's why I am interested in the ability of ZFS to easily export iSCSI targets. Some tests I read showed that ZFS exporting iSCSI is about 4 times faster than ZFS exporting NFS or CIFS.

    I am comparing to drives directly attached via eSATA, so it's got to be fast to come anywhere close to what I get with eSATA.

  • Re:No (Score:3, Informative)

    by darrylo ( 97569 ) on Wednesday May 30, 2007 @10:45AM (#19321561)

    It's generally not about the 64- vs 128- vs whatever.

    It's about the additional reliability (current bugs aside), and the ease of filesystem/pool management. For example, a Sun developer was developing on a workstation with bad hardware, which occasionally caused incorrect data to be written to disk. After setting up raidz, ZFS automatically detected and corrected the error: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta [sun.com]

    Scary, yes. Doing that definitely isn't something I'd recommend, but it does show one of the powerful features of ZFS.

  • Re:Everyman? (Score:2, Informative)

    by mulvane ( 692631 ) on Wednesday May 30, 2007 @10:47AM (#19321595)
    That's why I am doing an upgrade to larger disks soon. Hopefully I can get a deal on 1TB drives. The data has all been offloaded to a myriad of machines 3 times so I could upgrade the arrays and stay consistent with disk sizes.

    Latency isn't as noticeable as one would think. The array is mounted as read-only except for scheduled uploads of new content (usually only 2-3 times a month). Once the reads start, they play without any problem (I've never played more than 2 HD and 4 SD streams at once), and writes are slow (read that as EXTREMELY slow), but that's not a problem as I only sync, like I said, 2-3 times a month.

    I'm looking to do a single RAID-6 using 2 PCI-E 16x cards and up to 12 drives. My initial storage requirements have been met and I only add a few movies and episodes a month now, so a gain of 4TB over my current capacity would keep me going for a couple of years, considering I am only using a little over 6TB right now. The reason I have the first RAID-5 is that the SATA II port multipliers I am using support JBOD, RAID 0, 1 and 5.

    Backup is not CRITICAL, as I have legit copies of all the movies, and most of the TV shows have been bought as season bundles. Redundancy of data is important, though, so I can suffer a pretty massive crash of a number of drives in this setup. It seems like a reasonable trade-off to use RAID as I did to ensure I could recover without having to rip everything all over again. At the time I built this (200GB drives were new at its inception), RAID-6 was not an option, so please don't call me stupid for building a setup on a much smaller scale and growing with it until a large enough disk capacity became available at a price point that made it worth building a system from scratch. It's served my needs, and now with the 750s at a good price point and 1TB drives coming out, the next few months will see my capacity grow and complexity diminish.
  • Re:Specifics please. (Score:4, Informative)

    by lymond01 ( 314120 ) on Wednesday May 30, 2007 @11:00AM (#19321833)
    If you're buying from EMC or another large storage company, you do pay a premium. Generally, it's for simple configuration of the NAS or SAN using their proprietary software. You're also paying for warranty and support, something you don't get through NewEgg (you get it, but it's limited). If you're either a large company not wanting to pay a yearly salary for 3 or 4 admins to run your storage system, or a smaller company that doesn't have the technical know-how to do it yourself reliably (not everyone reads "Ask Slashdot" regularly), then the premium vendors are a good way to go (if you have the money).

    It's the same reason we buy Dell. We could buy white boxes or parts from Newegg for all of our systems, but talk about a hassle when it comes to them needing hardware maintenance or just assembly. With the support Dell offers, we get a complete box that's been tested, we just need to reformat and install our own stuff. Something breaks, we make a 10 minute phone call and get a replacement the next day, with or without assisted installation. But we pay probably 30% more per box for that.
  • by darrylo ( 97569 ) on Wednesday May 30, 2007 @11:13AM (#19322005)

    OK:

    And, for more than you wanted to know about ZFS: http://en.wikipedia.org/wiki/ZFS [wikipedia.org]
  • Re:ZFS (Score:4, Informative)

    by darrylo ( 97569 ) on Wednesday May 30, 2007 @11:23AM (#19322175)

    Increased reliability (all data is checksummed, even in non-RAID configurations), near-brainless management (e.g., newfs is not needed, RAID configurations are trivial to set up, etc.), built-in optional compression (even for swap, if you're feeling masochistic), etc. Encryption is in development.
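    For instance, the optional compression is a single property flip; a minimal sketch with a hypothetical tank/home filesystem:

      zfs set compression=on tank/home
      zfs get compressratio tank/home   # reports the ratio actually being achieved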

    See my other posts here for links.

  • by wurp ( 51446 ) on Wednesday May 30, 2007 @11:28AM (#19322263) Homepage
    And what happens when the RAID controller fails and corrupts all of your drives?

    Because I've seen that happen more than once.

    I'm not saying the more expensive solution is better. I'm just saying that in my personal experience I've seen *more* data destroyed from RAID controller failure than from hard drive failure. I would love to find out the solution to that one.

    I do not claim to be a hardware expert or system administrator, so there may be a well known solution (don't buy 'brand X' RAID controllers). I just don't happen to know it.
  • Re:ZFS and Sun boxes (Score:3, Informative)

    by tricorn ( 199664 ) <sep@shout.net> on Wednesday May 30, 2007 @11:33AM (#19322337) Journal

    You can pick up those 750GB Seagate SATA drives for about $200 each now...

  • Re:ZFS (Score:2, Informative)

    by perbu ( 624267 ) on Wednesday May 30, 2007 @11:44AM (#19322541)
    See for yourself [opensolaris.org].
  • Re:Specifics please. (Score:3, Informative)

    by TopSpin ( 753 ) * on Wednesday May 30, 2007 @11:44AM (#19322543) Journal

    Not enough specifics here. I am going to say do your thing. If it works, you're a hero and saved 47k.
    Not really. The assertion that a 12 spindle NAS box with iSCSI costs 50k is the issue. That level of NAS/iSCSI hardware does not cost 50k. It may have, years ago, from Netapp or someone, but not today. Today such a box will cost around 10k with equivalent storage.

    A Netapp S500 with 12 disks and NAS/iSCSI features is a good example. Roughly 10k and you get Netapp's SMB/CIFS implementation (considered excellent), NFS, iSCSI, snapshots, etc. Slightly lower price points can be had through Adaptec Snap Servers. They have a nice SAS JBOD expansion unit for their systems. HP just released new "storage servers" based on Microsoft's storage server OS; heck of a lot of value in those systems.

    The delta between 3-4k and 10k isn't trivial, and if your budget is tight perhaps you should roll your own. But 10k for supported NAS/iSCSI that functions a few minutes after you get it in the rack isn't a ripoff either. Not by a long shot.

  • Re:ZFS and Sun boxes (Score:2, Informative)

    by Anonymous Coward on Wednesday May 30, 2007 @12:18PM (#19323095)
    I'm familiar with SATA and IDE...but, the FC ones are new to me..

    Just a brief summary:
    -- SATA refers to the new Serial ATA.
    -- ATA or PATA refers to the older "Parallel" ATA. (ATA dates back to IBM PC AT and refers to that machine's AT Attachment interface.)
    -- IDE refers to any drive (ATA, SCSI...) with integrated drive electronics, that is, everything that has come after the ancient dumb drives that required a model-specific controller on the motherboard. In other words, not a very useful term anymore.
    -- SCSI refers to Small Computer System Interface; funny how it's the one used in the bigger iron. Beats the pants out of ATA when handling multiple daisy-chained drives; SATA is catching up in handling multiple drives. SCSI also has parallel interface and cabling.
    -- SAS refers to Serially Attached SCSI (some inspiration from SATA perhaps?).
    -- FC refers to Fibre Channel, a SCSI-like very fast interconnect type and interface protocol; often (but not always) uses optical cabling.
    -- iSCSI refers to SCSI over TCP/IP, typically carried on Ethernet (thus it could be "SCSIoIP"...).

    But I never understood the difference between a SAN and a NAS when the configuration gains any complexity beyond a textbook example. You can have a SAN with many NAS boxes, or you can have NAS with multiple SANs, sooo... ;-)
  • Re:ZFS (Score:4, Informative)

    by Kymermosst ( 33885 ) on Wednesday May 30, 2007 @12:57PM (#19323685) Journal
    Why use ZFS? As far as I can see, I find no reason not to use GEOM and UFS2 or something like that...

    Simple administration and data integrity. This is all it takes to make a 6-disk RAID at home:

    zpool create sun711 raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t0d0

    That gets you a ZFS storage pool mounted at /sun711. 'raidz' specifies a ZFS RAID that is similar to RAID 5 but always does full-stripe writes. Every block is checksummed. All operations are copy-on-write, so no journal or fsck is needed.

    Of course, once you have a storage pool, you can then create additional file systems from there. Here's how you create some NFS storage:

    zfs create sun711/storage
    zfs set sharenfs=rw sun711/storage

    When I had a disk start getting flaky (it started reporting high raw read error rates - that's what I get for buying drives on ebay...), I simply did the following:

    zpool offline sun711 c0t5d0

    zpool replace sun711 c0t5d0

    There you have it... it can't get much simpler than that.
  • Re:ZFS (Score:5, Informative)

    by beezly ( 197427 ) on Wednesday May 30, 2007 @01:09PM (#19323879)

    A correction:

    First off it is "copy on write"

    Copy-on-write is quite a misnomer here (even if Sun uses that term). It is a transactional filesystem. Blocks are not copied upon write; they are only written and then the transaction log is updated. It's far more clever than old-fashioned COW schemes. It can be compared with NetApp's WAFL filesystem.

  • Re:ZFS and Sun boxes (Score:3, Informative)

    by Anonymous Coward on Wednesday May 30, 2007 @01:21PM (#19324051)
    A SAN is not a host. It presents itself to a host machine as native storage in the form of RAID groups/LUNs, and/or raw storage. Access controls related to end users are done by the host OS, not the SAN. The SAN has no concept of file locking either; that is accomplished at the OS level on the host as well, although the SAN does provide access controls for which hosts can connect to it. A NAS is the storage plus some type of OS supplying network shares to a host. There are many tools that can make a NAS appear to a host as a native file system as well, which kind of blurs the lines.

    In really really simple terms, a SAN provides configurable disk space to a host, a NAS supplies file space and file serving to a host(s). Many storage solutions offer various functionality and can provide both NAS and SAN functionality at some level.
  • Re:ZFS (Score:3, Informative)

    by bl8n8r ( 649187 ) on Wednesday May 30, 2007 @01:26PM (#19324167)
  • Re:Specifics please. (Score:1, Informative)

    by Anonymous Coward on Wednesday May 30, 2007 @01:34PM (#19324273)
    There is no single point of failure on a $100K+ SAN. You have multiple controllers on each shelf, multiple power supplies on each shelf, multiple paths to each server. Take something like a SAN from EMC: it is basically two completely separate SANs running in parallel with each other. The only things in common between the two are the disks and parts of the enclosure they are installed in (each enclosure can hold 14 drives and there are multiple enclosures), which can be in any RAID configuration that you want, with multiple spares across multiple enclosures. So basically, everything is at least doubly redundant and data paths can automatically switch between any available path. I can have a SAN storage processor fail, a fibre switch with 14 servers attached fail, and two hard drives fail all at one time and not lose a single bit of data, and keep running along fine like nothing happened. I know the name SAN makes it sound like a single thing that can fail, and you might be concerned, but you really need to look at what they can do and what redundancy they offer before forming your opinion.
  • Re:Specifics please. (Score:3, Informative)

    by prgrmr ( 568806 ) on Wednesday May 30, 2007 @01:40PM (#19324351) Journal
    I've steered clear of the SAN/NAS solution for one simple reason: it's a single point of failure.

    If you honestly think a SAN, particularly an EMC SAN, is a single point of failure, then you don't understand SANs. Take a look at the high-end Clariion and Symmetrix models for redundancy and scalability. Then take a look at who some of EMC's bigger customers are.

    No, I don't work for EMC, but my employer has several mid-level EMC storage systems. They work. Reliably, quietly, and yes, they scale nicely too.
  • Re:ZFS (Score:3, Informative)

    by slashthedot ( 991354 ) on Wednesday May 30, 2007 @01:45PM (#19324421) Homepage
    From the ZFS page on opensolaris.org:

    "ZFS is a new kind of filesystem that provides simple administration, transactional semantics, end-to-end data integrity, and immense scalability. ZFS is not an incremental improvement to existing technology; it is a fundamentally new approach to data management. We've blown away 20 years of obsolete assumptions, eliminated complexity at the source, and created a storage system that's actually a pleasure to use.

    ZFS presents a pooled storage model that completely eliminates the concept of volumes and the associated problems of partitions, provisioning, wasted bandwidth and stranded storage. Thousands of filesystems can draw from a common storage pool, each one consuming only as much space as it actually needs. The combined I/O bandwidth of all devices in the pool is available to all filesystems at all times.

    All operations are copy-on-write transactions, so the on-disk state is always valid. There is no need to fsck(1M) a ZFS filesystem, ever. Every block is checksummed to prevent silent data corruption, and the data is self-healing in replicated (mirrored or RAID) configurations. If one copy is damaged, ZFS will detect it and use another copy to repair it."
  • Re:ZFS and Sun boxes (Score:3, Informative)

    by mollog ( 841386 ) on Wednesday May 30, 2007 @02:40PM (#19325271)
    Anonymous Coward writes: "IDE refers to any drive (ATA, SCSI...) with integrated drive electronics, that is, everything that has come after the ancient dumb drives that required a model-specific controller on the motherboard. In other words, not a very useful term anymore."

    Well, actually, IDE's history is a bit different than that. IDE requires a host bus interface, but, yes, they do have their disk controllers built into the PCA attached to the disk mechanism.

    Before Compaq and others developed the first IDE systems, hard drives usually had external controller boards that used low-level commands. IDE standardized the host interface to disk storage at the driver level, and standardized the host bus/drive command set at the bus level.

    And, it's not just disk drives that use the IDE stack. Other devices can be attached to the IDE bus, too.

    SCSI drives require a SCSI host bus adapter with a dedicated processor, and that adapter does the heavy lifting for disk access. IDE requires the host CPU to do a lot of processing, whereas with SCSI the adapter does the majority of the work. This model was used for the FC technology. It, too, offloads the processing from the CPU.

    SCSI/FC are preferred in the 'big iron' type of installations. IDE/ATA/SATA are fine on a dedicated NAS system. In effect, the CPU of the NAS motherboard is doing the work that is done on the host buss adapters in SCSI/FC.

    At the drive mech level, FC is a copper interface. The design of the connector on the disk mech allows it to be plugged. This provides the ability to quickly replace failed drives. The drive mechs are aggregated into some type of array to provide protection from data loss. This array of drives is then attached to systems via fiber optic cabling.

    You can simulate some, but not all, of the benefits of a FC/SCSI array using SATA technology. I don't know if the IDE drivers are being rewritten to use the multi-core processors yet, but that would help reduce some of the latency.

    Short answer, if what the OP was aiming for is to get into a large disk array for cheap, trading some reliability and performance for low cost, the idea is a good one. I would be looking for a multi-core cpu in the motherboard and an OS that has parallel processing drivers for the IDE channel. Be sure that all the drives have plenty of cooling. Have a backup solution. Some day, this lash-up will give you heartache, but till then, you've saved money.

    Can you tell I used to work in the disk storage business?

    Good luck.
  • Re:ZFS and Sun boxes (Score:3, Informative)

    by fluffy99 ( 870997 ) on Wednesday May 30, 2007 @02:43PM (#19325331)
    I would avoid that card. It's limited to striping or mirroring, for starters. It's also not true hardware raid and depends on the drivers to do all the raid work. You really do get what you pay for here. You also get very little notice when one drive starts going bad. You just start getting random system hangs.
  • by msporny ( 653636 ) * <msporny@digitalbazaar.com> on Wednesday May 30, 2007 @02:48PM (#19325417) Homepage

    Here are my attempts (the text is the Gnuplot script, which produces the graphics), what do your company's experts say?

    The first problem with your gnuplot script is that you're assuming a Poisson distribution for HDD failures (which is incorrect). Statistical failure distribution follows a Weibull distribution with k roughly equivalent to 7.5. Unfortunately, because you build your argument off of a Poisson distribution approximation, the rest of the analysis doesn't make much sense.

    If you are interested in HDD failure rates and failure prediction, there is a fantastic paper done by Bianca Schroeder and Garth Gibson of CMU. I think this is the link to their main research website [cmu.edu].

    Even if my calculations are wrong, I suspect the failure of another disk while the RAID is recovering from an earlier disk failure is so improbable (even if the RAID spans dozens of drives) that no efforts to reduce that already minuscule risk can possibly be justified.

    I think you miss the point of systems such as Starfish and other distributed clustered file systems. You have many other points of failure in a system: memory, CPU, power supply, power outage, motherboard, network switch, OS kernel, router, network cable, and the all important "oops, I tripped over the power cord". There are also times that you want to take down nodes in a highly-available cluster for maintenance without affecting your applications - to do this, you need a file system that assumes and can work around node-level failure.

    There is much more to highly-available clustering than just making sure your disk sub-systems are bulletproof.

    -- manu
  • enter GPL3 (Score:4, Informative)

    by bill_mcgonigle ( 4333 ) * on Wednesday May 30, 2007 @03:14PM (#19325773) Homepage Journal
    Where's the uncertainty? Sun fears Linux, and their programmers have already admitted this is why they deliberately made a GPL-incompatible license. Using their patent minefield to prohibit GPL implementations would be incredibly foolish if widespread use of ZFS were actually their goal.

    That's nice, except Jonathan Schwartz has indicated that OpenSolaris will go GPLv3, assuming the final version of the license is OK.
  • Re:ZFS (Score:1, Informative)

    by Anonymous Coward on Wednesday May 30, 2007 @04:19PM (#19326811)
    *shrug*, I have Centos in a Solaris Zone. It's living in a ZFS volume. I zfs snapshot it and the ZFS home directory within it just like anything else. Of course I do that from OpenSolaris [technically Solaris Express Community Edition] (I could do it from Nexenta if I wanted, but I'm not).
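    Snapshotting and rolling back such a zone is the usual one-liner pair; a sketch with a hypothetical dataset name:

      zfs snapshot tank/zones/centos@pre-update
      zfs rollback tank/zones/centos@pre-update   # if an update inside the zone goes wrong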
  • Re:No (Score:3, Informative)

    by KonoWatakushi ( 910213 ) on Wednesday May 30, 2007 @06:33PM (#19329181)
    Yes, it is easy to panic a system with ZFS, but such situations are also easily avoided. Furthermore, they will not lead to data corruption. If you are aware of the causes, it is no more than a minor inconvenience.

    For example, ZFS will panic when you lose enough data or devices that the pool is no longer functional. If you take care and use a replicated pool though, this is unlikely to ever happen. Even if it does, all it requires is that you reattach the devices and reboot. If the disks truly are dead, then you are going to backups anyway. You do have backups, right?

    ZFS has some rough edges yet, but to call it "far from ready" is mere FUD. Most of the problems with it are a matter of convenience, and nearly all that have been mentioned in the comments are actively being worked on. With a little bit of care, and proper backups, ZFS is rock solid. Meanwhile, it is improving every day, and if you choose not to revisit it, it is your loss.

  • by Anonymous Coward on Wednesday May 30, 2007 @06:55PM (#19329511)
    A double disk failure may be very unlikely, but a disk failure combined with a read-error during rebuild isn't...

    Nice little blog about that (from a manufacturer):
    http://blogs.netapp.com/dave/TechTalk/2006/03/21/Expect-Double-Disk-Failures-With-ATA-Drives.html [netapp.com]
  • by kraut ( 2788 ) on Wednesday May 30, 2007 @07:35PM (#19330039)
    Software RAID. I've been running it for years for exactly that reason.

    Maybe not suitable for high performance situations, but I've not found it slow.
  • by FoolishBluntman ( 880780 ) on Wednesday May 30, 2007 @10:06PM (#19331393)
    Hello. I work at a 3-letter company whose name starts with "E"; a friend of mine works at another 3-letter company starting with "I" as a field service supervisor. Not a month goes by without seeing a double disk failure in a RAID-5 system at a customer site for either of these companies, and that's on SCSI, SAS and Fibre Channel drives. ATA and SATA drive MTBF values given by drive manufacturers can be off by as much as a factor of 1000 depending on the lot.

    Most consumer-level drives are listed as "Nonrecoverable Read Errors per Bits Read: 1 per 10^14". A 1TB drive contains 8*10^12 bits, and 10^14 / (8*10^12) = 100/8 = 12.5, so the manufacturer says that if you read the entire drive 12.5 times you will get 1 nonrecoverable read error. And if you think the manufacturer is off by a factor of 100, then every 0.125 reads of the entire drive you get a nonrecoverable read error. Do you still feel your data is safe?
  • Re:ZFS (Score:4, Informative)

    by nuzak ( 959558 ) on Wednesday May 30, 2007 @11:05PM (#19331957) Journal
    To say nothing of the energy requirements of populating that drive. Quoth Jeff Bonwick:

    Although we'd all like Moore's Law to continue forever, quantum mechanics imposes some fundamental limits on the computation rate and information capacity of any physical device. In particular, it has been shown that 1 kilogramme of matter confined to 1 litre of space can perform at most 10^51 operations per second on at most 10^31 bits of information [see Seth Lloyd, "Ultimate physical limits to computation." Nature 406, 1047-1054 (2000)]. A fully populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.

    To operate at the 10^31 bits/kg limit, however, the entire mass of the computer must be in the form of pure energy. By E=mc^2, the rest energy of 136 billion kg is 1.2x10^28 J. The mass of the oceans is about 1.4x10^21 kg. It takes about 4,000 J to raise the temperature of 1 kg of water by 1 degree Celsius, and thus about 400,000 J to heat 1 kg of water from freezing to boiling. The latent heat of vaporization adds another 2 million J/kg. Thus the energy required to boil the oceans is about 2.4x10^6 J/kg * 1.4x10^21 kg = 3.4x10^27 J. Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.


  • Re:ZFS (Score:3, Informative)

    by Anarke_Incarnate ( 733529 ) on Thursday May 31, 2007 @11:13AM (#19337911)
    Depending on the needs and configuration, however, I have found UFS to be up to 5x faster than ZFS for reading small blocks of data. One more thing: ZFS will try to use almost all memory as a cache. This is bad for databases or performance applications. ZFS will release memory when an application, which should have a higher priority, needs it, but there is a bug in some versions of ZFS that, due to the bad memory accounting, releases too many pages of memory, and too fast, causing thrashing and a performance hit. For a file server, ZFS is great, but it still has a ways to go for some applications.

    Also, if you are using a controller (as on a SAN) that has a battery-backed cache which preserves read/write ordering, you will want to disable the ZIL in the /etc/system file. There is also a nice script that can be used to limit the amount of memory ZFS uses, though, again, due to the poor memory accounting, it doesn't work that great. I have limited ZFS to 512MB of RAM and it grabs 3GB. Solaris 10 u4 is supposed to address some of this. Because of the ZIL (the intent log that handles synchronous writes), performance for multiple synchronous writes suffers, and that can make using it as an NFS server less than ideal. It has a LOT of potential, and with enough spindles it can overcome some of these issues temporarily. It just has not matured to the level at which they are selling ZFS.
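    The /etc/system tunables being referred to look roughly like this (the values are illustrative: a 512MB ARC cap, and the ZIL disabled only because the array's battery-backed cache preserves ordering):

      * Cap the ZFS ARC at 512MB (value in bytes)
      set zfs:zfs_arc_max = 0x20000000
      * Disable the ZIL -- only sane when the storage controller has battery-backed cache
      set zfs:zil_disable = 1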
