
Long-Term Storage of Moderately Large Datasets?

Posted by timothy
from the rodents-of-medium-size dept.
hawkeyeMI writes "I have a small scientific services company, and we end up generating fairly large datasets (2-3 TB) for each customer. We don't have to ship all of that, but we do need to keep some compressed archives. The best I can come up with right now is to buy some large hard drives, use software RAID in linux to make a RAID5 set out of them, and store them in a safe deposit box. I feel like there must be a better way for a small business, but despite some research into Blu-ray, I've not been able to find a good, cost-effective alternative. A tape library would be impractical at the present time. What do you recommend?"
This discussion has been archived. No new comments can be posted.

  • by rwa2 (4391) * on Wednesday March 03, 2010 @06:19PM (#31351342) Homepage Journal

    I don't think you can beat a bunch of conventional hard disks in a RAID5 for cost-per-TB, backup/restore performance, or medium-term data integrity. You might make hooking up the drives more convenient with an eSATA multi-bay enclosure, but those are kinda expensive. Then again, I bet your backup box already has some sort of hot-swap dock, like: http://www.amazon.com/Thermaltake-BlacX-eSATA-Docking-Station/dp/B001A4HAFS [amazon.com]

    I assume you already compress your data, since scientific datasets tend to compress well. You might consider compressing to squashfs, since it will let you do transparent decompression later on so you can skip the restore step if you just need a handful of files.
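A minimal sketch of that squashfs workflow (assuming squashfs-tools is installed; paths and filenames here are made up for illustration):

```shell
# Pack the dataset into a compressed, read-only squashfs image.
mksquashfs /data/customer42 /archive/customer42.sqsh

# Years later, skip the restore step entirely: loop-mount the image
# (needs root) and read files back through transparent decompression.
mount -o loop,ro /archive/customer42.sqsh /mnt/archive

# Or pull a handful of files out without mounting anything:
unsquashfs -d /tmp/restored /archive/customer42.sqsh results/run1.dat
```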

    • Except for the fact that if a fire, flood, or some other disaster hits your site, there is a good chance your data is gone; it can also be physically stolen from you.

    • by forgottenusername (1495209) on Wednesday March 03, 2010 @06:34PM (#31351504)

      I don't think it's a great solution. You're storing relatively fragile hard drives in a raid5 configuration in a lock box? It's not like you can tell if one of the drives goes bad and needs to be replaced when it's sitting in a box. You'd have to regularly pull the data sets out, fire them up and make sure everything is still functional.

      I'd at least want to do 2 complete sets of mirrored drives.

      Tape storage does store better.

      Depending on how important the data is, I might do something like a local mirrored drive set in storage and an online copy at something like rsync.net - stay away from s3, it's not designed to protect data, despite what AWS fans may say.

      • Re: (Score:3, Informative)

        by mabhatter654 (561290)

        LTO4 tapes are reasonably priced, but the DRIVES push $5k. For long-term use even the drives go obsolete too quickly, then become MORE expensive in 5-10 years when you really need them.

        The best thing is probably what Google does: simply keep 3 "live" copies of the data. Then the data is always on current hardware. The data sits on "production" hardware along with other stuff, so it is properly monitored by the OS and database for integrity, and the hardware is maintained with support. Drive arrays are cheap enough...

      • by MoonBuggy (611105)

        Tape storage does store better.

        Admittedly the submitter said tape would be impractical, but my nerdly curiosity has been piqued: how reliable are relatively cheap tape systems?

        The price crossover point seems fairly reasonable even for a small-ish operation, if you're looking at a few TB per customer. A quick look on Google puts drives at about £700 and 800GB tapes at ~£20, compared to ~£55 for 1TB hard drives.

        Going on £0.055/GB for hard drives and £0.025/GB for tapes, my quick back-of-the-envelope calculation...
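Completing that envelope calculation with the figures quoted above (a sketch; the prices are rough 2010 numbers, so adjust to taste):

```shell
# Crossover point: the drive's up-front cost divided by the per-GB
# saving of tape media over hard disks.
awk 'BEGIN {
    drive = 700          # tape drive, pounds
    tape  = 20 / 800     # ~0.025 pounds/GB (800GB tape at ~20)
    disk  = 55 / 1000    # ~0.055 pounds/GB (1TB drive at ~55)
    gb = drive / (disk - tape)
    printf "tape wins beyond ~%.0f GB (~%.0f TB archived)\n", gb, gb / 1000
}'
```

So at 2-3 TB per customer, the drive pays for itself somewhere around the tenth customer archived.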

        • by hawkeyeMI (412577)
          I didn't consider just a drive and tapes -- it's been mentioned now by several commenters and it's a good idea. It's a tape *library* machine that would be impractical for cost and space reasons.
          • by lgw (121541) on Wednesday March 03, 2010 @08:35PM (#31352844) Journal

            Tape is really best for archiving, to this day. A single LTO drive won't break the bank for a small business, and it will be reliable.

            3 Things to remember about tape backup:

            Encrypt your backups. This is becoming available in the tape drives themselves, but many backup applications will also do it for you in software. It limits the embarrassment if a tape goes missing.

            Occasionally test restores. This is incredibly important - almost every unreadable tape in existence was unreadable when created. Any reasonable backup software will give you the ability to do this automatically (as part of the backup job). If practical, create a job that does a backup of everything, but verifies only some small volume. If you can read anything, chances are high that the whole tape is fine.

            Get those tapes offsite. A safe deposit box works for a tiny company, but someone like Iron Mountain works better and is less hassle. Store a copy of your encryption key in the same facility (but don't transport the tape and key together).
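The first two points can be sketched in a few lines of shell (openssl, the passphrase, and the paths here are illustrative stand-ins; real backup software would normally handle both the encryption and the verify pass, and the key should not live on the command line):

```shell
set -e
mkdir -p /tmp/demo/dataset
echo "sample result" > /tmp/demo/dataset/run1.dat

# Encrypt the archive stream on its way out (a plain file stands in
# for the tape device in this demo).
tar -C /tmp/demo -cf - dataset \
  | openssl enc -aes-256-cbc -pbkdf2 -pass pass:example-key \
  > /tmp/demo/backup.tar.enc

# Test the restore path before the tape leaves the building: decrypt
# and list the archive contents.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-key \
  < /tmp/demo/backup.tar.enc | tar -tf -
```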

      • Depending on how important the data is...

        That's the key question the author needs to address. Is it important enough to throw a few thousand dollars per dataset into archiving? A few tens of thousands? The best suggestion so far seems to be multiple copies on multiple non-RAID hard drives, stored at different physical locations, with periodic integrity checks and regularly scheduled drive replacements.

      • Tape storage does store better.

        CITATION MISSING

      • Re: (Score:3, Interesting)

        stay away from s3, it's not designed to protect data, despite what AWS fans may say.

        Just curious... S3 stores all of your data at multiple, geographically separate data centers. How exactly does that not protect your data? What else would you want it to do in terms of protection? It even gives you md5 sums of your files if you want to verify them (check the ETag attribute of each object).

        So, honest question: what do you think they're missing to make S3 really protect data?

    • Re: (Score:3, Informative)

      by TheMeld (13880)

      The other thing to do, if you want longish-term reliability, is to add redundancy to whatever you're storing with a tool like par2. http://www.par2.net/ [par2.net] and http://www.quickpar.org.uk/ [quickpar.org.uk] are your friends.

      RAID5 will help you if you lose a whole drive (e.g. one seizes up from sitting still for a long time); the par2 data will let you verify that the data hasn't been corrupted and, if it has (e.g. a couple of sectors go bad), will let you recover it.
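For the curious, the par2 round trip looks roughly like this (assuming the par2cmdline tool; the filenames are examples):

```shell
# Create recovery blocks with 10% redundancy: up to ~10% of the
# archive can be damaged and still be reconstructed.
par2 create -r10 customer42.par2 customer42.tar.gz

# Before trusting a stored copy, check it against the recovery data:
par2 verify customer42.par2

# If verification reports damage, rebuild the original from what's left:
par2 repair customer42.par2
```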

    • by toastar (573882)

      since scientific datasets tend to compress well

      Really? The datasets I deal with are fairly Gaussian in nature; I've yet to find a good compression algorithm that works on SEG-Y.

  • bzip2 (Score:5, Funny)

    by Colin Smith (2679) on Wednesday March 03, 2010 @06:19PM (#31351344)

    And optar:

    http://ronja.twibright.com/optar/ [twibright.com]

    You know it makes sense.

    • by mrmeval (662166)

      It would take approximately 5242.88 pages to store 1 gigabyte. This comes up from time to time. Laser-printed pages will not store well over time: the toner degrades, and if the pages are stacked together it can glue them to each other, so you lose all the sheets. Some inkjets have ink that will not glue the pages together, but some ink will migrate, and some is a nutrient source for bacteria.

      One of the better printed codes I've seen uses this: http://microglyphs.com/english/html/dataglyphs.shtml [microglyphs.com] As an added bonus, this coding can be printed with...
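The page count above follows directly from optar's claimed payload of roughly 200 kB per printed A4 page:

```shell
awk 'BEGIN {
    kb_per_page = 200            # optar'"'"'s approximate payload per page
    gb_in_kb    = 1024 * 1024    # one binary gigabyte, in kB
    printf "%.2f pages per GB\n", gb_in_kb / kb_per_page
}'
```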

  • GMail Drive (Score:3, Funny)

    by sopssa (1498795) * <sopssa@email.com> on Wednesday March 03, 2010 @06:20PM (#31351348) Journal

    Unlimited space with several accounts.

  • Amazon AWS? (Score:5, Interesting)

    by TSHTF (953742) on Wednesday March 03, 2010 @06:23PM (#31351386) Homepage
    It might not be the cheapest option, but with Amazon's AWS [amazon.com], you can snail-mail them a copy of the drive with the data and they'll store it in S3 storage buckets.
    • Exactly. Let someone else do it. I don't know if Amazon is the right place, but the answer is still the same: Let someone else do it.

      Why do we see questions like this so often? Why aren't people going to existing services with guaranteed availability that let you store a generic blob? Pass the buck -- they're probably going to do it better anyway.

      • Re:Exactly. (Score:5, Informative)

        by TooMuchToDo (882796) on Wednesday March 03, 2010 @06:34PM (#31351516)
        Because Amazon can be *expensive* compared to doing it yourself ($$$ for data in, $$$ for data out, $$$ for monthly storage). But hey, what do I know? I just manage the storage for one of the LHC detectors (5PB spinning disk, 17PB tape). Amazon is good when you've got VC money or no IT folks.
        • Re:Exactly. (Score:5, Insightful)

          by Anonymous Coward on Wednesday March 03, 2010 @06:43PM (#31351640)

          Ok, yes, we see you know a lot about this.

          So what's your recommendation?

          • Re: (Score:3, Funny)

            by snikulin (889460)

            Huh, don't you see he has Too Much To Do?

          • Re: (Score:2, Funny)

            by Anonymous Coward

            I think his/her recommendation was: 17PB Tape.

          • Re:Exactly. (Score:5, Insightful)

            by TooMuchToDo (882796) on Wednesday March 03, 2010 @09:46PM (#31353348)
            Either MogileFS, Lustre, or possibly Hadoop (depending on the type and size of the data). Any sort of distributed file system where multiple chunks, replicas, etc. (3 is a good number; more is better if you have cheap disk and deduping at the filesystem level) are constantly available.

            Feel free to ask more questions.

            • Re:Exactly. (Score:4, Interesting)

              by TooMuchToDo (882796) on Wednesday March 03, 2010 @11:21PM (#31354078)
              Almost forgot to add. Never pay for expensive disk systems. Put the intelligence into your application instead. It'll scale faster and much cheaper. You also aren't locked into a technology (and instead, can enjoy the falling costs of storage, both spinning and SSD).
              • Re: (Score:3, Interesting)

                by raddan (519638) *
                Why distributed FS and not something like live mirroring/shadow copy? I wonder also... what do you consider an "expensive disk system"?

                We played around with DIY JBOD a bit (i.e., moving the complexity up into software) because it seemed a lot cheaper, but we have yet to get the thing to operate as reliably and simply as our fibre channel RAID units. The main problem we're running into is that for SATA to be practical, you need to multiplex several SATA disks onto single SATA ports, but that software...
                • Re: (Score:3, Interesting)

                  by TooMuchToDo (882796)

                  Why distributed FS and not something like live mirroring/shadow copy? I wonder also... what do you consider an "expensive disk system"?

                  Distributed file systems are great because your limitations are (almost) always going to be hardware. Want 1000 boxes serving up your content? Get 1000 commodity boxes with disk. Need 10000? Also not a problem. A box filled with raw disk is WAY cheaper than an EMC, Nexsan, etc. (i.e. an expensive disk system).

                  Serving over Ethernet should be fine, as you can always bond network connections together to increase throughput from your storage boxes to whatever boxes are processing the data (or even process the data...

        • Re: (Score:3, Informative)

          by hawkeyeMI (412577)
          I already use S3 for some things. Unfortunately it would be about $500/month per customer dataset on S3 right now, so that's out. I could buy a whole computer with hard drives every month for that.
    • by vrmlguy (120854)

      According to http://aws.amazon.com/s3/#pricing [amazon.com], S3 will cost you about $150/month per TB. OTOH, it appears that all data transfers into S3 are free until June 30th, 2010, after which transfer fees will be about $100/TB. So if you want to do it, do it now. Be prepared to spend to get your data back out, if you ever need it.

      For comparison, this week I bought a 1TB USB 2.0 external HD for under $100, so a DIY RAID should save you money in the long run.

      I do have to ask one question: exactly how is a tape library...

      • Hard drives do not store well; that's not how they are designed or warranted by the manufacturers. Physical tape media is designed to sit in a safe deposit box for a long time. I wouldn't believe the 50+ year claims (and who would still have a working drive from even 10 years ago?), but tape is probably the best.

        The problem with tape is that it's pushing $15K to get the proper server with the proper capacity set up... that's a lot of months of paying Amazon... and then in 5 years your hardware warranty runs out and...

  • by idiot900 (166952) * on Wednesday March 03, 2010 @06:25PM (#31351410)

    Hard drives are ridiculously cheap these days, especially for how much data you are storing. You may wish to consider buying drives from different manufacturers but of the same size to put in a single mirrored set. This way if there is a problem with a particular batch of drives it won't ruin everything.

  • Tape is your friend (Score:5, Informative)

    by chill (34294) on Wednesday March 03, 2010 @06:25PM (#31351414) Journal

    LTO tape, properly stored, will outlast burned optical media and hard drives. Great stuff and designed specifically for what you're talking about.

    http://en.wikipedia.org/wiki/Linear_Tape-Open [wikipedia.org]

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Go Betamax!!!

    • by cruff (171569) on Wednesday March 03, 2010 @06:35PM (#31351524)

      I agree, when the tapes are stored in proper environmental conditions. You don't need a library, just use some stand alone tape drives. Also look at the claimed media lifetime and recovered bit error rate figures to see if you are choosing the right tape drive/media.

    • by Saint Aardvark (159009) on Wednesday March 03, 2010 @06:38PM (#31351562) Homepage Journal

      Couldn't agree more. A tape library (as in autochanger) might be out of your budget, but a simple tape drive wouldn't be too much -- say $5000 for an LTO4. Media is $50-$100 or so depending on where you shop. Seriously, you're not going to find a reasonable way of storing that much data anywhere else.

      BTW, if you're not a member of LOPSA [lopsa.org], you may want to seriously consider it. Even if you're not a sysadmin, this is definitely a sysadmin-type question, and their mailing lists are second to none. It's an excellent resource.

      • But SOMEBODY has to tend those tapes. It's not a REAL backup unless you can prove it works. The big problem with tapes is that "proper environment" condition. If you're only worried about 2-3 TB, then you're not going to be putting these in "guaranteed" conditions 100% of the time... hence you can't guarantee their useful shelf life. So you need several more TB to periodically restore and re-backup the data every 6 months or so, so that you have multiple "known-good" copies.

        Then the company has to pay some...

        • If the setup is raw files on tape (the stuff I've read in from reels recorded in the 1980s) or tar (stuff from the 1990s), it's not a big deal at all. It's only the two-bit, closed-source, single-developer efforts on MS-DOS that are difficult for anyone to read, even if they have the hardware.
          With recent backup systems like AMANDA you can just dump a file with dd or similar and the instructions on how to deal with the data are there in the header in ASCII! It couldn't possibly be easier.
          Also the "proper environment" is...
      • by hawkeyeMI (412577)
        This may be my best option. As you mention, a tape changer (the only place I've ever seen/dealt with LTO drives) is out, but a drive and tapes sound like a good option.
    • Re: (Score:2, Funny)

      by Icegryphon (715550)
      I laugh at your table on the wiki.
      3.2 TB? What kind of weakling only has 3.2 TB?
      That is like throwing Zip drives at the problem.
    • by mengel (13619) <`ten.egrofecruos.sresu' `ta' `legnem'> on Wednesday March 03, 2010 @06:43PM (#31351634) Homepage Journal

      There's some code I did a while back lurking in the Amanda backup package for "RAIT" (RAID with tape instead of disk), which makes a stripe set of tapes, with redundancy, if you need several tapes' worth of data in one set.

      On the other hand, while LTO4 tapes are about half the price ($40) of cheap 1TB disk drives ($80), the tape drives are about $2k apiece, so depending on how many data sets you want to keep, and for how long, the disk drives may really be cheaper...

      • by afidel (530433)
        It only takes 50 TB for the tape drive to reach parity, and every TB past that is half the cost, according to your numbers; if each dataset is 3 TB, that's only 17 jobs. Add to that the vastly superior shelf life and reliability of tape over HDDs and it seems like a no-brainer.
      • $2000 is one not-particularly-brilliant workstation. If he's running a business which is heavily computation-oriented (which multi-TB datasets implies that it is) then $2000 is not a large one-time outlay.

    • by toastar (573882)

      This. Depending on how long you want to store it, tape lasts longer, and once the upfront cost of the drive is paid off the per-unit cost is cheaper too. Also, dealing with offsite storage places (Iron Mountain) is easier with tape than with HDDs.

      Lastly, I've been told you have to spin up the HDDs every so often or their lifetime is even less than what they are rated for. Although I'm not sure I believe that part.

      • Lastly, I've been told you have to spin up the HDDs every so often or their lifetime is even less than what they are rated for. Although I'm not sure I believe that part.

        It's by no means unbelievable. Lubricant, rubber and plastic have this annoying tendency to degrade over time even if they're just sitting there. Metal actually does too, but perhaps not quite so quickly. And newer plastics aren't nearly as bad as they used to be, but I still don't trust their longevity that much just yet...

        And...

    • by klubar (591384) on Wednesday March 03, 2010 @06:58PM (#31351838) Homepage

      Tape is probably your best option. You can buy a DAT-5 (or even a DAT-4) tape drive for not very much. The tapes cost about $10 to $30 each (depending on what tape option you choose). Make 3 copies of the data set: store one onsite, store another offsite in a secure/climate-controlled facility, and send the 3rd to the client. Buy a spare tape drive and use both to make writing across tapes easier. There is a wide variety of software to write to the tape; we use the aging Retrospect.

      The disk option is just way too complex; if anything, skip the RAID option and just store 2 copies. Putting the RAID sets back together and finding the RAID software will be nearly impossible in a couple of years. Use some standard formatting on the drives (FAT, NTFS, etc.) and you'll be good to go for the next 15 years.

      • Re: (Score:3, Interesting)

        by iluvcapra (782887)
        As someone who did a lot of backing up (and maybe restoring, if I was lucky) to DDS DATs in the earlier part of the century, I can assure you there's a very good reason the drives are so cheap now :) The reliability was atrocious, and at $10 a cart, DVD-R is quite competitive.
  • Amazon S3 (Score:4, Informative)

    by friedo (112163) on Wednesday March 03, 2010 @06:26PM (#31351420) Homepage

    It can get a little pricey for huge datasets, but Amazon S3 now has an option where you can ship your data [amazon.com] on a big set of disks directly to them, they will import everything into S3, and it will live there forever. The nice thing about S3 is unlike physical disks, it can grow essentially forever, and comes with retention and redundancy guarantees. And once your stuff is in S3, you can recycle the same disks to mail them more data.

    • by Joce640k (829181)

      Mod up.

      Online storage in a properly managed data center is the way to go for long term safety. Keep a local copy exactly as you are doing and send a second copy to a data center (eg. Amazon).

      PS: You don't say if the data is compressed or not. Does it compress?

      • Re: (Score:2, Informative)

        by Andraax (87926)

        I hope so. A 3TB dataset on Amazon S3 would run $450 / month for storage.
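The arithmetic behind that figure, next to the DIY option mentioned upthread (~$150/TB/month for S3 and ~$100 per 1TB external drive; both are rough numbers from this thread):

```shell
awk 'BEGIN {
    tb = 3
    printf "S3:  $%d/month, $%d over a year\n", tb * 150, tb * 150 * 12
    printf "DIY: about $%d one-off for %d external 1TB drives\n", tb * 100, tb
}'
```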

  • Go with Blu-ray (Score:3, Interesting)

    by sabreofsd (663075) on Wednesday March 03, 2010 @06:29PM (#31351446) Homepage
    With the advent of 2TB drives, you could easily combine 3 of these with software RAID 5 as you suggested. Depending on how long you need to keep the data, recording it to dual-layer Blu-ray discs might be a better solution. Yeah, it's a lot of discs (you can buy 100GB discs now), but they'll last longer, and you don't have to worry so much about mechanical failure or needing a certain OS when you want to restore them.
  • Drobo fan and user (Score:3, Interesting)

    by Lvdata (1214190) on Wednesday March 03, 2010 @06:33PM (#31351502)
    You might look at www.drobo.com; they make 4-, 5-, and 8-drive enclosures. 1TB disks give you 3TB of usable space with a 2-drive failure tolerance. I have the older 4-bay Drobo (2 for myself, and 2 at separate clients' offices). It is much simpler to use, will scale to your 2-3TB use, and allows mismatched drives that normal RAID will not. Get one enclosure to start with and then, financing permitting, get a second for Drobo redundancy. Not the fastest or cheapest, but reasonably good on both counts, and simple to use.
  • Have you already ruled out blu-ray? 25GB per disc, make two copies per customer. Much cheaper than RAID5.
    • Re: (Score:3, Insightful)

      by snowraver1 (1052510)
      Cheaper how, exactly? Even if you could get BR discs at $2 each, it would cost $80/TB, and I haven't seen BR discs even close to that cheap. That doesn't include the writer, which I believe is still $400. For the cost of the writer alone, you could purchase 5TB of HDD.
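That $80/TB figure is easy to check (napkin math on the numbers in the comment):

```shell
awk 'BEGIN {
    gb_per_disc = 25; usd_per_disc = 2
    discs = 1000 / gb_per_disc
    printf "%d discs per TB, $%.0f/TB in media alone\n", discs, discs * usd_per_disc
}'
```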
  • by Rivalz (1431453) on Wednesday March 03, 2010 @06:38PM (#31351566)
    Label it something like "Complete American Idol Blu-ray Collection" and upload it via P2P to The Pirate Bay. Every couple of years, rename it to some other horribly popular TV series. It will be a self-sustaining form of storage with an infinite number of redundant hosts.
    • I was having a terrible day at work -- our tape drives for 1TB backups are failing, funny coincidence. It's a licensing issue, though, not a technical one.

      Anyways, this made my day. I'm going to tell it to all my friends, I hope you don't mind.

  • by jbridges (70118) on Wednesday March 03, 2010 @06:39PM (#31351578)

    I would use RAID6, not RAID5, since 2 drive failures mean data loss with RAID5, while it takes 3 drive failures to lose data on RAID6.

    Linux MDADM has supported RAID6 for years; it's stable.

    I would mix and match drives, not buying all the same model from one maker: one Samsung, one WD, one Hitachi, one Seagate.

    That gets you 4TB in 4 drives, and unlike a RAID1, any 2 drives can fail with no data loss.

    You can further guard against data loss by making a second copy using different-brand drives for each clone.

    Eight 2TB drives run around $1500. Not bad for a very safe 4TB backup.
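A sketch of that mdadm setup (device names are hypothetical, the commands need root, and you should check them against your distribution's documentation before trusting real data to them):

```shell
# Four mixed-brand 2TB drives -> ~4TB usable, any two can fail.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

mkfs.ext4 /dev/md0

# Record the array layout so it can be reassembled on another machine
# years from now:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```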

    • by Vancorps (746090)
      Or just get 4 LTO4 tapes or 2 LTO5 tapes at ~$70 a pop to achieve the same capacity, with no software setup and no drives that fail when they are not spun up regularly. Common hard drives are not good long-term storage. They are great for online or near-line storage, but at some point, bite the bullet and just get a tape drive. Given the datasets are only 3TB, a single tape drive is sufficient; at that range you could have two drives backing up identical data and store the tapes in separate locations. This is...
  • There are going to be quite a few storage service names thrown out as well as compression schemes.

    1. Storage vendors: you run a real risk of having the data go away. There's a huge liability balancing act going this route.
    2. Compression schemes: as someone who has lost data to compression errors, I can tell you the consequences of 'just' compressing a file can be huge. http://www.linuxquestions.org/questions/linux-software-2/recovering-files-from-corrupt-tar-archive...-326716/ [linuxquestions.org] (not my post, but a similar story)

    I would suggest...

  • use a tape drive (Score:4, Informative)

    by Lehk228 (705449) on Wednesday March 03, 2010 @06:40PM (#31351604) Journal
    You make the assertion that a tape archive would be impractical, but really it is the most practical solution. The drive will set you back a couple thousand, but 800-gig tapes are only around 40 bucks each, and they are engineered for data storage, unlike hard drives. This will only cost $160 per 3TB dataset, or $200 if you use par2 files and an extra tape to make it recoverable in case a tape does fail.
  • The only answer here is LTO tape stored at a contracted record archival facility. Optical media degrades and is easily damaged, hard drives fail ALL THE TIME and will have obsolete interfaces in a few years. Tape has very long shelf life when stored properly -- it is time tested and trusted. It is not that expensive to get one tape drive and a few carts for each customer.

  • I've never had good experience with tape, from DC6150 SCSI linear tape at home all the way through an Exabyte library with stacks and stacks of 8mm tapes. Two decades of tape has been two decades of heartache and frustration for me and the companies I've worked with. These days I'm no longer in tech or IT (thank god) but for my personal needs I use RAID-1 for live and DVD-RAM (as cumbersome, slow, and small as it is) for offline.

    Tapes just bleed data at an alarming rate, and they are about as reliable as a...

  • You don't need a tape library. Just get a single tape drive, and you will be able to store everything on 3-6 tapes. Yes, you will have to swap tapes by hand, but it is a lot cheaper.

    LTO-4 stores 800 gig per tape, uncompressed. If you let the tape drive do the compression, you might even be able to get away with one or two tapes. Tapes are inexpensive, and are designed for long term storage.

  • by Chalex (71702)

    With easily compressible data (e.g. genomics data), I've gotten as much as 5TB onto a single LTO-4 tape using the regular drive compression.

    An LTO-4 tape costs me ~$50. It's smaller than a 3.5" SATA drive and easier to handle. It can probably even survive a drop to the floor from chest height.

    You'll need to spend some money on a drive or tape library. So it depends on how many datasets like this you need to write.

  • by vlm (69642)

    You have to keep rotating onto newer media, and newer media technologies. This sounds horrible, "oh no! I'm generating ten full drives per year". But realize in a couple years, all those drives will fit on a USB 4.0 stick, or on a card in your cellphone.

    If you haven't read it (and recopied it) in a couple of years, it's probably gone.

  • by mewsenews (251487)

    A tape library would be impractical at the present time.

    Why?

    I've worked in Visual Effects production and every time a new project came along we'd have to clear the servers of terabytes and terabytes of data. We used tapes. How are they impractical exactly? Inexperience?

  • Depending on the openness of the data, you could ask Google whether their Palimpsest project is still operational. Basically they wheel large storage systems around for scientific research. But I believe they want to keep a copy for themselves, Alexandria-style.

  • by brennz (715237)

    Buy a NetApp. Yay, RAID-DP.

    That was hard!

    * wipes brow *

  • The problem with storing things is that they tend to degrade over time, and you never know when they'll fail.

    Without being ridiculous, four sets in two locations is the best bet. Two sets are online, and a regular parity check should be made between the two, with full data verification on a longer time scale. One backup set gets made from each online set (an external drive that is synced once a week/month is likely good enough) and stored unpowered. This prevents a local disaster from destroying your data...

  • http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/ [backblaze.com]

    I'm sure the price has come down some since this article was published...

    For those too lazy or paranoid to read the link: it describes how Backblaze builds "cheap" 67TB storage boxes for use in their online backup service. All the hardware specs are open-sourced and freely available. They also talk a little bit about the software for managing all of the space they have, but not in any real detail...

  • As far as I know, the 2TB RAID problem hasn't been fixed: http://blogs.zdnet.com/storage/?p=162 [zdnet.com] If anyone knows differently, please let me know.
    I've been using a drive docking station and splitting my backups for large databases.

  • I have a bunch of old data backups on CDR that were great for years but they've started to degrade. I'd be willing to bet that any magnetic disk would be even more vulnerable to data corruption over time. I don't think your RAID 5 storage technique is a good long-term option.

    This could be a ridiculous suggestion, but have you considered something like cloud storage for this? You could encrypt the data and store it in somebody's cloud and let them worry about backing everything up.

  • by strangeattraction (1058568) on Wednesday March 03, 2010 @07:17PM (#31352074)
    Repeat: never use DVDs as long-term storage. I have seen them go unreadable anywhere from 2-5 years in. I have fired up disk drives 10 years later with no problems; they are cheap, reliable, and fast. Don't try to get fancy: just compress and store the data sets over multiple volumes. Don't use RAID.
  • by adosch (1397357) on Wednesday March 03, 2010 @08:18PM (#31352698)

    I work as a contractor for the USGS [usgs.gov], and the projects I've been involved with host, archive, and provide means for customers to access all our different satellite data products. We've got a long-term archive method for tons of data products (digital and tangible), and I can honestly tell you the first thing that always comes up is: how often will the data need to be accessed?

    For the longest time (almost a decade) we used 3 big STK tape silos for data archive and retrieval for custom orders. The problem with that design is that we used the archive in completely the wrong manner: we tried to use it as both an archive and a quasi-online retrieval system feeding a caching filesystem. We had tape mount counts in the hundreds and thousands, constant mechanical tape issues because of the excessive use, etc. We eventually decided to move it all to online storage on enterprise RAID (EMC CLARiiON) and switched to a small LTO-4 tape unit for almost-permanent, maybe-once-in-a-great-while storage; the rest we leave completely on spinning disk and control access to it via application-layer network protocols as needed.

    IMHO, I really think it's going to depend on the access frequency of your data. If that customer needs their data once, and maybe never again unless they lose it, put it on tape. If it's a requirement that they can get the data from you any time they want, and you've got the hardware and administrative resources, power, and bandwidth, put it on some RAID.

  • by bruciferofbrm (717584) on Thursday March 04, 2010 @12:26AM (#31354430) Homepage

    A problem I have here is the definition of 'long term'. To each of us it means something different.

    In my job I have to archive 1.6 terabytes of data per day and keep it around for 45 days (which, BTW, is not my definition of LONG TERM). For this task I use Data Domain storage, which relies on data deduplication for massive compression.

    What you find is that at the block level your data may in fact be incredibly deduplicatable. In my case it very much is. I am currently storing 86 terabytes of rolling archives within 2.5 terabytes of physical disk space.
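For scale, that works out to roughly a 34:1 effective ratio (simple arithmetic on the figures above):

```shell
awk 'BEGIN {
    logical = 86; physical = 2.5    # terabytes, as quoted
    printf "effective dedup ratio: %.1f:1\n", logical / physical
}'
```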

    The problem with any technology you use for 'long term' storage is the ability to read those archives later. Assuming the media doesn't degrade within the time frame you call 'long term', you must still have the tools to read that media again. If you use Blu-ray, then you must store a compatible drive with it. (Nothing says Sony won't change the standard in two years and make all current drives obsolete, so that no one makes them any more.) Tape is worse, in that within two major model revisions, drives won't be able to read your media because its density is too low for the new drive-head technology. Hardware disk RAID has the issue that the controller the RAID was built with needs to stay with that RAID: another controller from the same manufacturer, with the same model number but a different firmware revision, may not be able to figure out the RAID, and will declare the drives empty. Software RAID is a little easier to deal with, as long as you keep a copy of the OS you used to create it in the same box. But then, during your defined 'long term' period, will you still have access to a system you can even plug these drives into, or run the OS on?

    What you end up dealing with in reality is that as an archivist, you either ignore these facts, or you invest in a constant media / technology refresh and spend large amounts of time keeping your archives on the latest storage available.

    Of course, all this falls apart if your definition of 'long term' isn't as long as some will project. In my case, my archives roll over every 45 days. I could easily keep that data alive for years on a live piece of hardware with a service contract. If I do not trust that hardware enough, I can buy two and replicate between them. (which, actually I am, for disaster recovery purposes)

    With deduplication, my (acknowledged) high initial investment is quickly outweighed by the savings over single-purpose drives holding one copy each and wasting unused space. My purchase cost was less than $60k, but if I had to store all of that data in its raw form, my costs would be in the millions. If the data is not deduplicatable, then of course this is all moot.

    Each answer has its flaws. You decide which risks are acceptable, plan as best you can for obsolescence, and pin down your definition of 'long term'. You also have to be ready to change your solution when the one you chose today fails to be the right solution for your needs in 5 years.
