What Software Do You Use for Unix Backups?

jregel asks: "Linus has stated that dump should not be considered a reliable backup program, and both tar and cpio have their limitations. So what are Slashdot readers doing for backing up Linux servers and workstations? (You do back up, right?)" Given this bit of news, have you used anything other than the standard Unix staples to back up your Linux boxes? If you were forced off of tar, cpio and dump, what would you use as a replacement?
  • Easy. (Score:4, Funny)

    by torpor ( 458 ) <ibisum AT gmail DOT com> on Tuesday March 18, 2003 @06:33AM (#5535433) Homepage Journal
    If you were forced off of tar, cpio and dump, what would you use as a replacement?

    I'd use dd of course...
    • I was actually thinking, what happened to pax? Supports both cpio and tar formats, worked great last time I tried it.

  • 9 out of 10 tape monkeys prefer taper [e-survey.net.au]!

    9 out of 10 network admins smack their tape monkeys when they forget about modprobe zftape after reboots.

  • dump on solaris... (Score:5, Informative)

    by Polo ( 30659 ) on Tuesday March 18, 2003 @06:49AM (#5535461) Homepage
    You know, I was thinking about the same thing since I had problems with a recent restore from a compressed dump archive. I was missing some files probably because I ran the dump from an active file system.

    I found out that solaris has a very interesting command: fssnap

    It creates a read-only snapshot of your filesystem intended for backup operations.

    You create a snapshot, dump the snapshot, then delete the snapshot and the dump is consistent.

    I wonder if there's something like this for linux...
    • by root 66 ( 72128 )
      FreeBSD 5.x has fs snapshot capabilities. See http://www.freebsd.org/releases/5.0R/relnotes-i386 .html#AEN1150 for more details.
    • by AlexA ( 97006 )
      Yes there is. It's called LVM [sistina.com]. I've used its snapshot capabilities before on my Linux server, it's very nice.
      • by Polo ( 30659 )
        Hey, you're right, and the wonderful LVM documentation even has a Recipe [tldp.org] for performing the backup. I assume that since the snapshot is read-only, dump should work fine without the issues Linus mentioned.

        The snapshot partition just has to contain enough space to hold the changes made to the original volume while the snapshot exists.
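
        A minimal sketch of that recipe in shell, assuming an LVM volume group "vg0" with a logical volume "home" (names, sizes and mount points are made up; check the flags against your LVM version before trusting it):

          # create the snapshot, back it up, throw it away
          lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home
          mount -o ro /dev/vg0/home-snap /mnt/snap
          tar cf /backup/home.tar -C /mnt/snap .   # or point dump at the snapshot device
          umount /mnt/snap
          lvremove -f /dev/vg0/home-snap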
    • by sql*kitten ( 1359 )
      I found out that solaris has a very interesting command: fssnap

      If you are using EMC Symmetrix storage, you can use the TimeFinder [emc.com] product to create a "Business Continuance Volume", or BCV. It deltas against your last backup (at the track level, not files or blocks), applies the changes to a copy of the last backup to create a consistent image, and then you can dump that to tape.

      I wonder if there's something like this for linux...

      So long as you have one host (Solaris, NT, whatever) to run the TimeFinder client
      • So do Netapp filers & IBM Sharks. It's not a unique feature (despite what the vendors would like you to believe), but it is damned useful.

        One thing to beware: a lot of vendors ship the snapshot capabilities as an added cost option; if you intend to use it, make sure of your costings.

      • At our dotCom company, we bought EMC boxes and I was REALLY excited about the TimeFinder concept. But then I found out that it doesn't really find time, it just makes backups.

        I had thought we had found the answer to getting a six-month project done in 3 months - use "TimeFinder" by EMC. :)

        -Peter
    • Sounds remarkably similar to FreeBSD 5's snapshot feature. I've used it with dump, with good results.

      I started wondering if there's something like this for Linux, but then realized that there's just way too many different filesystems to add the feature in any meaningful, practical way.

  • Roll Your Own (Score:4, Interesting)

    by JimR ( 101182 ) on Tuesday March 18, 2003 @06:51AM (#5535467) Homepage

    I wrote my own Perl script that copies all my "important" files (basically stuff in my home directory that can't be reconstructed by other means, plus all the system config files) to a new directory tree (using cpio); it then burns the copied tree to CD-RW and verifies the CD against the copied tree.

    I operate a 4 disc system, so I always have the last four backups on CD and I keep the copied trees around (uncompressed) for as long as I have disk space. So far I've not needed the CDs (I store 2 of them offsite in case of disaster) but the copied filesystem trees have come in useful a couple of times.

    The only drawback of this is that it's not appropriate for backing up huge quantities of data (like lots of audio or video files), as the CD media is quite limited in size - but when rewritable holographic storage comes along I'll be able to just change my function that decides which files are "important".

    • Re:Roll Your Own (Score:3, Informative)

      I wrote my own Perl script that copies all my "important" files (basically stuff in my home directory that can't be reconstructed by other means, plus all the system config files) to a new directory tree (using cpio); it then burns the copied tree to CD-RW and verifies the CD against the copied tree.

      That's what I used to do (wrappering tar), but the maintenance of the script became a pain and I needed to add support for incremental backups and exclusion lists.

      After some web searching, I on google, fre

    • I do something similar, but use rsync.
      I rsync my entire drive to another drive.
  • No good answer (Score:2, Insightful)

    by Yonder Way ( 603108 )
    Amanda comes up a lot. It can't span tapes.

    Veritas also comes up a lot. Aside from cost, did you know Veritas can't back up single files larger than 2GB in size on Linux clients?

    On paper, BRU looks pretty darned good. I haven't yet put that theory into practice.
    • That's not true, there *is* a good answer.

      If you really, truly need your data, no matter what, go with Tivoli Storage Manager

      http://www-3.ibm.com/software/tivoli/solutions/storage/

      Sure, you have to pay for it, but it's really no more expensive than Veritas NetBackup, and certainly a better product!

      Cross-platform (everything from Wintendo to OS/390, Solaris, Mac OS X, Linux ...)

      TSM is more of a hierarchical storage manager than more "traditional" backup programs.... but with things like Portable Backu
  • BackupEDGE vs. Taper (Score:4, Informative)

    by mindslip ( 16677 ) on Tuesday March 18, 2003 @06:58AM (#5535478)
    I think the two above are both excellent: Taper for the less demanding environment, BackupEDGE for a system with multiple drives.

    I'm actually doing a 100GB backup as we speak... so good timing on the Ask Slashdot.

    My only beef with Taper (and I'd use it otherwise, on my home system) is that when you do an "e"xclude or "i"nclude of a directory, it scans the entire subtree, which can take *forever*, (like when excluding /var/squid) instead of just simply skipping that directory.

    mindslip
    • Taper has a limitation in that the archive can not be greater than 4GB. If larger, it will appear to write OK, but it'll segfault when you go to try and read the archive.
  • by Isomer ( 48061 )
    We use rsync for backups over ssh onto another machine at a remote location. This works really well, especially if you do a cp with hardlinks each night. rsync will download just the changes since yesterday, and files that are the same just end up being hard links to the same data. This also makes restores reasonably trivial: just ssh onto the backup machine, cd into the directory for the date you want, and grab the file -- done.
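
    A rough sketch of that hard-link rotation (host names, paths and dates are hypothetical; rsync writes changed files to a temporary name and renames them, so yesterday's hard-linked copies stay intact):

      # on the backup machine: hard-link copy of yesterday's snapshot, costs almost no space
      ssh backuphost 'cp -al /backups/host1/2003-03-17 /backups/host1/2003-03-18'
      # from the client: push only the changes into today's snapshot
      rsync -a --delete -e ssh /home/ backuphost:/backups/host1/2003-03-18/home/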
    • Re:rsync (Score:4, Informative)

      by Colitis ( 8283 ) <jj...walker@@@outlook...co...nz> on Tuesday March 18, 2003 @07:17AM (#5535513)
      I use rsync over ssh too; I back it up to a machine at work (which I can reach from home). It basically does my whole home directory except for a few excludes for stuff that's a bit sensitive (ssh keys, keychain, ICQ history), which I manually back up to CD now and then. The machine at work is then backed up with TSM.

      The rsync over ssh style of backup is so easy it's addictive!
    • I second the use of rsync & ssh, at least for particular directories. (It also makes a handy replacement for FTP or CVS in a pinch... I used to develop in BBEdit on my Mac OS X box and rsync the files to our RedHat server...)
    • Re:rsync (Score:2, Informative)

      by GigsVT ( 208848 )
      I hear rdiff-backup is good, but I still mainly use rsync with the incremental rsync type scripts that use hardlinks for versioning. We use it here to backup over 2TB of data over a 512kbit link. Since you never need to do a "full" backup, the bandwidth is plenty.
    • I've been happy with daily rsync backups to a second local hard drive with the same partition setup (one rsync command per partition, with the -a -x --delete flags).

      This makes rebuilding the system fairly trivial if the first hard drive fails.

      It also means I can recover from mistakes if I notice them the same day.

  • by Dahan ( 130247 ) <khym@azeotrope.org> on Tuesday March 18, 2003 @07:09AM (#5535497)
    Dump has been the standard Unix backup program for decades... I don't use Linux, but if I did, I'd consider it a bug that dump didn't work properly.

    Seems to me that Linus (or another kernel hacker) should fix the ext2 race condition reported in that thread, rather than blithely dismiss the problem with, "dump was a stupid program in the first place."

    • It seems to me that somebody who actually wants to use the dump program on Linux should fix it.

      On the other hand, is anyone who wants to take a dump on Linux likely to contribute good code?
  • rsync (Score:3, Informative)

    by heikkile ( 111814 ) on Tuesday March 18, 2003 @07:20AM (#5535517)
    We have a dedicated backup machine, into which we rsync all the important stuff. We are a smallish shop, so it only has a couple of 120G disks.

    This backup machine keeps seven generations of daily backups on one disk (cp -al, so no duplicating of static data), and a few weekly ones on the other disk. Every night it rsyncs things off-site (to my home). That rsync has turned out to be unreliable (probably my adsl), so I have a script that does it in small bits and pieces. Takes a few hours in the early morning.

    • Re:rsync (Score:3, Informative)

      by dubl-u ( 51156 )
      Here's a howto for rsync snapshot backups [mikerubel.org]. I keep daily backups for two weeks, weekly backups for two months, and monthly backups forever. I rolled my own wrappers for this stuff in a few hours.

      It is about eight zillion times better than tapes. I have hot, random access to all versions of all my files. Thanks to the hard linking, space used is moderate. Since it backs up to a remote computer, backups are instantly off site. And if I want to verify my backups, I don't have to feed in eight million tapes; I
    • If you use rsync, have a look at rdiff. It works like rsync, but can produce incremental "backups", i.e. it can have a master version from 2 weeks ago and 2 weeks' worth of diffs. You can restore any version from the diffs, which makes it very nice compared to rsync. rsync is good for basic disaster recovery and stupid users that "accidentally" delete some folders, but rdiff can protect you from more subtle changes (do we still have that version from last week? ...and no, media designers don't use cvs ;)
    • Yes, rsync is the dog's.

      On the Mac, I use RsyncX [macosxlabs.org], which knows about resource forks, even when transferring them to systems which don't have them.

      And on Windows, I use rsync [unimelb.edu.au] again.

      I've tried every damn sync program for the Mac. I've tried tar and dump on UNIX. I've tried fancy network backup tools. I've not found anything that compares with rsync.

      I hate the complexity of the command-line syntax, but it has the required functionality:

      1. Automatically incremental.
      2. Works locally from disk to disk or acro
  • cdrtools (Score:3, Informative)

    by Masa ( 74401 ) on Tuesday March 18, 2003 @07:30AM (#5535535) Journal
    I use "mkisofs /etc /root /home -R -T -o backup.iso && cdrecord dev=0,0,0 speed=4 blank=fast -data backup.iso" to create an ISO image, which will be burned to the CDRW disk. That's all I need to backup my workstation. And restoring the data doesn't require any special tools.
    • cdbkup (Score:3, Interesting)

      by bLanark ( 123342 )
      cdbkup [sourceforge.net] is a little more sophisticated - multiple levels, multiple disks.

      "CDBKUP is a professional-grade open-source package for backing up filesystems onto CD-Rs or CD-RWs."

    • That isn't reliable. ISO9660 directories can only have ~1024 entries; any more are dropped on the floor. There are also limitations on the length of a filename with Rock Ridge extensions, possibly 32 characters.

      Then there are other things that don't translate well. Do you dereference symbolic links? What about fifos and special devices?

      If you want to be safe, you need to either check the directory tree first or put everything into a container without these restrictions. I've been developing some tool
  • by martin ( 1336 )
    www.amanda.org

    nice - it can use tar or dump as the back-end system. Works on *nix / Mac OS X / Windows via Samba or Cygwin.

  • Just use your favorite volume management system to create a snapshot of whatever volume you would like to back up, and use dump, tar or whatever to write that snapshot to tape. This way you'll have a consistent backup of the filesystem.

    I'm using LVM snapshots in a wrapper around Amanda, creating and removing snapshots of every filesystem I'd like to back up.
  • Amanda! (Score:5, Informative)

    by nathanh ( 1214 ) on Tuesday March 18, 2003 @07:53AM (#5535564) Homepage

    I have been extremely happy with Amanda [amanda.org]. Single centralised backup server running amanda-server. Multiple workstations running the amanda-client. Amanda automagically schedules backups based on sensible heuristics. I just tell Amanda how many tapes I have, how many workstations I have, and Amanda does all the hard work of working out how much tape capacity is required and how often it should schedule incrementals/fulls.

    The server/client protocol has been designed to avoid reliance on dangerous security holes like rsh. The server sends the client a "send me your dump" message. The client then connects back to the server and delivers it the output from dump or tar. You can configure exclusion lists on the client if you're worried about sending certain files or filesystems. You can also encrypt the data stream and/or use Kerberos for authentication.

    If I forget to load a blank tape then Amanda plays it safe. It doesn't overwrite last night's backup: instead it stores incrementals into the "holding disk". Amanda will then flush the held backups to the next blank tape.

    Amanda emails me reports after every backup with a neat summary of what went right/wrong. It also gives you several hours advance warning if you forget to load a blank tape or if any of the workstations are offline.

    The only downside of Amanda is that it is fiddly to set up. The documentation is poor and the configuration files are cryptic. But if you're willing to invest some time and effort then you can't do much better (for free) than Amanda.
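
    For anyone put off by the setup, a heavily trimmed sketch of what the configuration looks like (directive names from memory and incomplete; treat the example amanda.conf shipped with the package as the real reference):

      # /etc/amanda/daily/amanda.conf (fragment)
      org "example"                 # name used in the nightly report
      mailto "root@localhost"
      dumpcycle 1 week              # full dump of everything at least weekly
      tapecycle 10 tapes            # tapes in the rotation
      tapedev "/dev/nst0"
      holdingdisk hd1 {
          directory "/var/amanda/holding"
      }
      define dumptype comp-user-tar {
          program "GNUTAR"
          compress client fast
      }

      # /etc/amanda/daily/disklist: host, filesystem, dumptype
      workstation1  /home  comp-user-tar
      server1       /etc   comp-user-tar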

    • Re:Amanda! (Score:3, Interesting)

      by larien ( 5608 )
      Other downsides:
      • No support for spanning tapes with a dump; i.e. if you need to dump 4GB onto a 2GB tape and you can't compress it down, you're stuffed.
      • Restoring files is fiddly

      Yup, Amanda is great for small setups (I use it myself at home) but it lacks certain features to make it really usable. For example, I had to restore some files in Legato Networker; I was able to open up a GUI, navigate to the file and set the restore path (i.e. where it will restore to). With that done, it worked out which tapes the

      • Re:Amanda! (Score:2, Informative)

        by riffraff ( 894 )
        Yes, and with amanda I was able to open up the command line client, navigate to the file and set the restore path. With that done, it worked out which tapes the file was on and restored.

        Amanda does the same thing, it's no problem. Yes, spanning tapes is a problem, but people might be working on it now. You can get around it by just backing up files, or directories, under the filesystem, in increments that are less than the tape size. I use it at a couple of different work locations, and it has worked r
  • afbackup (Score:4, Informative)

    by Vairon ( 17314 ) on Tuesday March 18, 2003 @08:30AM (#5535617)
    Website URL: http://sourceforge.net/projects/afbackup/ [sourceforge.net]
    Features:
    • Server & Client programs
    • Supports multiple clients streaming backups at the same time
    • Webmin module for easy configuration
    • Support for many tape drives and autoloaders
    • SSL and DES encryption support
    • Remote or local start of backups
    • Compatible with most *NIX systems (personally used it with Linux, Solaris & FreeBSD)
    • Non-root users can restore their own files
    • Unlike AMANDA: afbackup can actually append to tapes

    For those who don't know: AMANDA cannot append to tapes.
    Every time you backup with AMANDA it must start from the beginning of the tape.
    So, if you want backups every day, you must have a tape for every day.
    (http://amanda.sourceforge.net/fom-serve/cache/29.html [sourceforge.net])
    • Re:afbackup (Score:5, Informative)

      by martin ( 1336 ) <maxsec.gmail@com> on Tuesday March 18, 2003 @08:57AM (#5535666) Journal
      Amanda doesn't append to tapes, so there is no possibility of blowing away that tape. This is a problem I've experienced with other commercial software that appends to a tape each run: a tape write error and it marks the entire tape bad, which means you have to scrap the entire tape and start again.

      Another risk of appending is loss of the tape or drive due to environmental factors - fire/flood (plane being driven into the data centre).
      • Re:afbackup (Score:2, Informative)

        by Vairon ( 17314 )
        It would seem like this itself would cause more wear on the tape. It's my understanding that the hardest thing on tapes is rewinding them. Every time it runs into the beginning or the end of the tape it "pulls" at the tape, which is why smart tape backup units slow down the speed of the drive as they near the beginning or the end during a rewind. If your backup program causes a rewind every single day, that would seem (IMO) to cause more wear.

        In addition, unless you own an autoloader/robot unit, using a bac
        • Re:afbackup (Score:2, Insightful)

          by AlexA ( 97006 )
          If you overwrite the previous day's backup, what happens if the server crashes and loses all its data while the backup is in progress? It seems to me that you'd lose up to a week's worth of data if you switch tapes once a week, unless you actually append data to the end of the tape instead of overwriting it. But, as mentioned in the link you posted, appending data to the end of the tape isn't all so great either, plus you increase the chances of running out of tape, in which case you have to switch tapes
  • by Khopesh ( 112447 ) on Tuesday March 18, 2003 @08:33AM (#5535626) Homepage Journal
    Arkeia [arkeia.com] is a powerful one, but not free software. There are two versions: a free one for small offices and a more powerful, costly one. ...a quick browse of the site does not reveal the free version; I don't think it exists anymore for 5.x (maybe I am not recalling correctly).

    Anyway, Arkeia can back up Windows, Linux, Unix, and Mac OS X.
    • I second the Arkeia vote!

      It's proprietary software, but has proven exceedingly reliable for backing up my entire network onto a tape library. Basically, it's cheap for what it does, and depending on how you use it -- it may be available at no cost.

      ~GoRK
    • You can get a restricted demo license key for the full version of Arkeia if you email their sales people.

      I evaluated this product and really liked it. I would be using it now except that I found out our Windows guy was using Veritas and had already paid for it, so I installed the free Veritas Linux clients and pawned the backup job off on him ;)

  • I don't use tapes because I hate them. Granted, in order not to use them you need lots of spare hard drives, which I have.

    First, I use mysqlhotcopy to get all sql data. Then, my backup server uses samba to tar and gzip up all data from various servers, Windows and Linux, into one place. Then it uses scp to send it all across the WAN to another backup server which keeps a business week rotation, and one month rotation. The other site does the same, and so far no problems at all.

    This way you don't have
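
    The rotation part of a scheme like that can be as small as this (hosts, paths and retention are made up; the data is assumed to have already been gathered into /srv/backup via samba and mysqlhotcopy):

      STAMP=$(date +%Y%m%d)
      tar czf /backups/site-$STAMP.tar.gz /srv/backup
      scp /backups/site-$STAMP.tar.gz remotebackup:/backups/
      # keep a business week locally, roughly a month at the far end
      find /backups -name 'site-*.tar.gz' -mtime +5 -exec rm -f {} \;
      ssh remotebackup "find /backups -name 'site-*.tar.gz' -mtime +31 -exec rm -f {} \;"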
  • Veritas Bare Metal Restore 4.5

    Works for Windows and Unix (AIX, Solaris, HP-UX). I don't see support for Linux, but I would guess that if you can get it to work on the above, there is a tweak to get it working.

    Just another option; I know it is not going to be the flavor of the month because it is not free or OSS.

    Enjoy,

  • Backup2L (Score:3, Informative)

    by JLester ( 9518 ) on Tuesday March 18, 2003 @09:48AM (#5535890)
    We use the backup2l [sourceforge.net] script from Sourceforge to backup about a dozen servers each night to a remote NAS server. It keeps multiple generations (not sure how many, but we can restore files from several months or even years later) and has worked great for us. It is tar based, but that hasn't caused any problems and we're backing up about 150 gigs with it.


    Jason

  • On topic, but adrift at sea a bit:

    what tools will restore a backup done with Windows 2000/XP under Linux?

    Under win95/win98, you can smbtar the entire remote drive into a compressed tarball. To restore, fdisk a new drive, format it, and tar -xjpf tarball.tar.bz2, and possibly sys C: it once it's back in the windows machine. Windows takes care of anything else that needs to be done.

    Under Win2000/XP, obviously this won't work, so you need to use Windows's backup or other tools. But if you want to restore
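
    For reference, the smbtar approach mentioned above looks roughly like this (server, share and credentials are made up; it only captures files, not the registry or NTFS ACLs, which is exactly the Win2000/XP problem being asked about):

      # pull the whole C$ share into a tarball
      smbtar -s winbox -x 'C$' -u administrator -p secret -t winbox-c.tar
      # push it back later (-r = restore mode)
      smbtar -r -s winbox -x 'C$' -u administrator -p secret -t winbox-c.tar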
    • I don't see why you can't do the same thing with Win2k/XP.

      You can connect to Win2k/XP with SMB just as easily as Win95/98.

      What is the problem?
  • by mnmn ( 145599 ) on Tuesday March 18, 2003 @10:22AM (#5536090) Homepage

    They say tar has its limitations. I really don't understand.

    I've worked with different Unixen and Linux distros, so I just don't want to be dependent on something that isn't installed by default everywhere. tar already has a VERY well known format and execution parameters.

    I've lost my fair share of data to buggy hard drives and dumb mistakes like pulling off the IDE cable while the system is running. So cron does daily backups using tar cfj with a file that has a list of other files to be backed up. This way I don't have to back up the whole partition. To restore a certain file, just tar xvfj backup2.tar.bz2 /pathtofile --root=/

    The cron setup removes backup2.bz2 and renames backup.bz2 to backup2.bz2, so I have the data for the past two days. Besides incremental backup, which I don't need due to this setup, what else could I need? And by the way, the backup.bz2 is copied off onto an NFS share elsewhere in case my whole RAID setup crashes or the XFS filesystem bombs out. This setup can be replicated onto FreeBSD, Solaris, and many others.
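
    A sketch of that cron job (paths are hypothetical; /etc/backup.list stands in for the file containing the list of files to back up):

      #!/bin/sh
      # /etc/cron.daily/backup
      cd /backups || exit 1
      rm -f backup2.tar.bz2
      mv -f backup.tar.bz2 backup2.tar.bz2
      tar cjf backup.tar.bz2 --files-from=/etc/backup.list
      cp -f backup.tar.bz2 /mnt/nfs/backups/   # off-box copy in case the RAID or XFS dies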
    • I believe that there are two downfalls to tar. One was long file name support, and the other was backing up devices in the /dev directory. I don't know if either of these is currently an issue with recent versions.

      Also, if tar encounters a bad spot on a tape, it usually bombs out. cpio can be told to skip over the bad sector.

      There are advantages and disadvantages to all the backup programs. I don't think that there is one program that is "perfect" for ever
    • by mcelrath ( 8027 ) on Tuesday March 18, 2003 @12:54PM (#5537289) Homepage
      The horrible problem with Linux right now, though, is that because the memory management is so braindead, that backup will swap out everything in memory in favor of caching your multi-gigabyte backup file. Thus your method brings the machine to a standstill while the backup is occurring (which can take hours to days depending on the size of your filesystem).

      Not a criticism of your method (in fact, I use this), just a rant that the Linux MM system NEEDS TO BE FIXED. I'm sick of watching as some trivial process that will only read or write once gets the whole filesystem cached for it while programs I'm using interactively get swapped to disk. Video recording and playing programs (mplayer, ogle) have the same problem.

      Let's hope 2.6 is better than 2.4. Can any kernel hackers comment on this? In 2.5, will tar cvjf /mnt/backup/home.tar.bz2 /home bring my system to its knees?

      -- Bob

      • as a temporary and ugly solution - why not just nice the jobs you don't want hogging the system?

        • To demonstrate his complete stupidity, Hubert_Shrump writes:

          as a temporary and ugly solution - why not just nice the jobs you don't want hogging the system?

          Perhaps because the nice level doesn't impact anything but CPU timeshare? So a nice -n 19 tar -czvf /tmp/totape.tgz /home will still thrash the hell out of your system.

          The semantics are fairly trivial: This process is generating a lot of disk cache that's only being hit once, so let's bound how much memory it uses.

          The reality is much trickier. It'

        • It is not CPU usage that hogs the system, it is disk I/O. tar (for a large file) forces all running programs onto disk, so that all memory is being used as a cache for this huge file. Then whenever you try to do something with an interactive program it must swap the entire thing back into memory. It then stays in memory for about 5s until tar provides some more memory pressure and puts it back on disk.

          It is the constant swapping-in and swapping-out that make the system unusable. Nice has absolutely no

        • It seems reasonable, but AFAIK nice only helps when the issue is CPU contention. When your problem is related to IO (e.g., seeking, caching, or transfer) then nice doesn't help.
          • Thanks for clearing that up.

            Thinking like a bonehead, indeed. I got all focussed on keeping your interactivity up... and probably never finished the manpage to nice in the first place.

            Hey, good thing I phrased that as a question, so that I'd get an intelligent answer as to why, rather than get my ass flamed off.

            Off to blithely trash my system because I'm too retarded to boot it without pouring hot coffee in the power supply vents.

      • Umm, I use 2.5 myself and I don't know if the 50MB file brings it down. It's a Pentium 200 with 64MB RAM, 256MB swap, and all backups occur at 4am. I remember testing it some time ago; I think it finished the job clean while on the same lousy system I was running X and reading Hotmail email using Opera and twm. Maybe because it's 2.5...
    • The problem is tar always archives the entire space which makes it difficult to backup, say gigabytes of data, daily.

      A decent backup tool (as opposed to an archival tool) must absolutely have incremental backup support.

      • by dissy ( 172727 ) on Tuesday March 18, 2003 @02:57PM (#5538342)
        > The problem is tar always archives the entire space which makes it difficult to
        > backup, say gigabytes of data, daily.
        >
        > A decent backup tool (as opposed to an archival tool) must absolutely have
        > incremental backup support.

        Er?

        tar --help
        [snip]
        Operation modifiers:
        -G, --incremental handle old GNU-format incremental backup
        -g, --listed-incremental handle new GNU-format incremental backup
        [snip]
        Local file selection:
        -N, --newer=DATE only store files newer than DATE
        --newer-mtime compare date and time when data changed only
        [snip]

        This is in tar (GNU tar) 1.12
        (Which is really really old actually.. slackware 3.2 dist)

        There are also tons of options to exclude directories and files, to force it to span disks, and pretty much match in any way you need.
        I've been making incremental backups (and even restored a few) for a while now.
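
        For anyone who hasn't tried it, the listed-incremental usage is roughly (GNU tar; paths are hypothetical):

          # full (level 0) run - tar records what it saw in the .snar snapshot file
          tar --create --gzip --listed-incremental=/var/backups/home.snar \
              --file=/backups/home-full.tar.gz /home
          # later runs with the same .snar file store only new or changed files
          tar --create --gzip --listed-incremental=/var/backups/home.snar \
              --file=/backups/home-incr.tar.gz /home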

        • Heh, you're right, GNU tar does do that...

          But, seriously. If you back up Gb of data and millions of files with tar periodically, I bow to you. Don't get me wrong, I like this tool (I happen to use it every day), it's just that the incremental backup support you mentioned is fairly primitive (it almost always needs custom helper scripts) and not at all adequate at that scale.

          tar lacks things related to data management, which I kind of expect when it comes to periodic backup software. An example is file a

  • I use mkisofs and cdrecord to copy all my data to a CD-RW. I have two of those; the oldest data gets overwritten. At work I use cp to copy all data to an extra hard disk that is removed after the copy is made. And I keep my backup for home at work and vice versa. Very important in case your building burns down.
    • The ISO9660 FS has some pretty strict limits on the number of files in a directory (~1024) and the length of filenames under Rock Ridge extensions (~30 characters, I think). If you exceed this, you'll be unable to retrieve those "extra" files - I know, after being burned by it in the past.

      (Obviously I don't like working in directories with thousands of entries, but some tools will produce them; it's easy to accidentally hit numbers like that with mail or news spools, etc.)

      As for the RW media, you do realize that they have a
  • Just because it's broken on Linux doesn't mean that:

    * it's not better on other platforms
    * the other tools aren't worse

    Elizabeth Zwicky's classic Torture-testing Backup and Archive Programs [linuxfromscratch.org] will give a whole list of reasons why you should be suspicious of tar or cpio.

    Heck, the FreeBSD Handbook [freebsd.org] answers the question "Which backup program is best?" by saying "dump(8). Period."
    • by coyote-san ( 38515 ) on Tuesday March 18, 2003 @06:15PM (#5539926)
      Have you even read Linus's comments?

      Dump works by reading the raw data partition. That works great with an unmounted partition, or if you have a very limited OS that does not perform any caching.

      But Linux is different - it's now using the cached pages as the primary content, usually flushing them to disk only as the pages are dropped. This is the approach used by most mature OSes, but Linux doesn't yet have an interface for "dump" programs to query the OS for updated but unwritten sectors.

      So dump is the worst of all possible things now. Not only will you get incomplete live files, you can get incomplete files even if the users have all terminated but the pages haven't been flushed to disk yet. That's non-deterministic, and there's simply no way for you to perform reliable dumps.

      On the practical side, dump is specific to the filesystem. When everyone ran ext2, that wasn't a problem. But now people may have a mixture of ext2, ext3, reiserfs, xfs, jfs, and probably even other formats. Each requires its own dump and restore, and that takes a lot more effort.
  • Very simple. Big stuff gets backed up with cp -ax across NFS to other disks. I have never liked tape -- restores are dicey.

    Small, important or irretrievable stuff gets mkisofs [even -J!] to CD-R.

  • by Corporate Gadfly ( 227676 ) on Tuesday March 18, 2003 @10:36AM (#5536187)
    Some people have already mentioned Amanda [amanda.org].

    In addition to amanda, I have good luck with star [fokus.gmd.de] coded by Jörg Schilling [fokus.gmd.de]. star is very feature-rich, fast, standards compliant and has been around since 1985. Give it a try!

    The star-users mailing list is here [berlios.de]. You can also look at the man page [fokus.gmd.de] and finally download it [berlios.de].
  • Hotswap IDE (Score:2, Interesting)

    by N8w8 ( 557943 )
    For backing up my FreeBSD home server I use a second (identical) HDD in a swappable IDE bracket on a standard plain ole onboard IDE controller (the 2nd channel to be precise). Though hotswapping isn't really supported on these controllers, it does seem to work :)

    Making a backup is easy. I just plug in the bracket and start a homebrew script which:
    - enables and inits the hotswap IDE channel
    - mounts the partitions on the hotswap HDD
    - removes system immutable flags on files on the hotswap HDD (so that they ca
  • BackupPC (Score:4, Informative)

    by dissy ( 172727 ) on Tuesday March 18, 2003 @11:35AM (#5536594)
    http://backuppc.sourceforge.net/

    Automated backups to an online disk server, open source, and a really nice web interface as well as command line interface.

    It uses samba and ssh to backup and restore to windows and unix machines.
    You can have it restore any files/folders in a backup you select, using the same methods (samba or ssh), or it can send the restored files to your browser in a tar or zip file.

    I recently replaced a machine using amanda and a DLT drive with a fileserver using a raid 5 array and backuppc. Best switch ever.

  • Part of my job is maintaining game servers as well as servers for web hosts and web clients. We wanted to keep 'mirrored' servers that reflected twice a day any changes that might occur in the live servers. We tried a number of commercial products and found that all of them lacked - mainly, they were hogs and would drive system load up to the point where I felt uncomfortable. So we buckled down and designed our own system which we call "MakeItSo". A daemon runs on the server to be backed up to, and a cli
  • We use LoneTar [lonetar.com] at a couple of different clients. Not much to dislike except slow file restore seeks on tape but apparently this has been fixed within the last year.
  • TSM (Score:4, Interesting)

    by duffbeer703 ( 177751 ) on Tuesday March 18, 2003 @11:50AM (#5536710)
    Tivoli Storage Manager is the only "backup solution" that I have ever seen that truly works well without a lot of tweaking and twiddling.

    I've worked at places using Legato and Amanda, where restoring from backup was an unreliable and error-prone process more likely to be a waste of time than anything else.

    TSM is not cheap, but is worth every penny. We have one full time and one part time employee handle the backup/restore jobs for about 2000 servers. Try that with Legato or Amanda.
  • Dump isn't the problem. It is the fact that you are backing up a live system. Short of running everything on a RAID 1, cutting the mirrors, backing up the mirror, and reconnecting, UNIX file systems are not meant to be backed up while in use.

    Companies with money can get a NetApp box for critical data. There you can absolutely use dump, tar or cpio. They create a "snapshot" of a file before backing it up.

    Unfortunately we are talking a minimum of $40k for this type of solution.

    If the snapshot concept could be w

  • Although star is a nice utility, as mentioned by previous posters, you can get pax directly [openbsd.org] from the OpenBSD [openbsd.org] people. Debian [debian.org] also packages pax [debian.org], if you run Linux.
  • We use Backup Exec coupled with a StorageTek L80 (tape robot). This is responsible for doing nightly backups of between 3 and 6 TB of data on Novell and Linux boxes.

    For linux we create a SMB share with samba that the backup server has access to. All files are either tar-gzip'd or just copied over to the directory. Everything in the directory is backed-up.

  • Problems... (Score:3, Insightful)

    by hafree ( 307412 ) on Tuesday March 18, 2003 @02:30PM (#5538094) Homepage
    The problem with most suggestions here is that it seems the average Slashdot reader is a Linux hobbyist or works as the IT manager for a small office that happens to run Linux. What happens when you need to back up 6TB/night and don't want to pay someone to sit around swapping tapes all night? Sometimes it just isn't practical to purchase another SAN solution to facilitate an rsync. Or what if you have a collection of high-capacity LTO tape drives at your disposal, but don't have the budget for something larger and automated, or smaller with an autoloader? I think automation and efficiency are almost as important as reliability and cost. Not everyone can afford a Storagetek Powderhorn Silo [storagetek.com], or needs the versatility of expensive products such as Veritas Netbackup [veritas.com]. Then again, sometimes tar or rsync just doesn't cut it in an enterprise environment where data is mission critical.
  • First, I have a Perl file list maker. I make a file, say "stuff.conf":

    #!/usr/bin/perl -w
    use Backup;

    add_path("/home/vadim");
    del_path("/ho me/vadim/.kde");

    There are other functions to filter the files to add, and it can also include other files. If running as root it will switch to the user that owns the included file, and not allow including any files not owned by the file's owner. I use this to let people with an account on my system configure how their stuff gets backed up.

    This simply generates a file

  • I do rsync to a dedicated backup server at a different location. As a fallback, I back up portions regularly to CD-ROM.
  • A lot of the problems backing up live systems are because of poor coding practices. (The other problem is people attempting to back up things that shouldn't be backed up at the filesystem level. A classic example of this is relational databases - they should usually be dumped and restored with their own tools.)

    Specifically, how many programmers routinely get advisory write locks on files they plan to update? How many home-brewed or ad-hoc backup solutions bother to get advisory read locks?

    I've written
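
    As an illustration of the advisory-lock idea, using flock(1) from util-linux (the file names and the "rewrite-accounts" command are made up, and advisory locks only help if both sides agree to use them):

      # writer: hold an exclusive lock while rewriting the file
      flock -x /var/data/accounts.db -c 'rewrite-accounts /var/data/accounts.db'
      # backup job: take a shared lock so it never copies a half-written file
      flock -s /var/data/accounts.db -c 'cp /var/data/accounts.db /backups/'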
  • Prayerware(TM)2.0 http://www.prayerware.com/
  • by Moderation abuser ( 184013 ) on Tuesday March 18, 2003 @08:08PM (#5540686)
    The best open source backup system I've come across is Amanda, but it's got a bunch of limitations. I do use it at home. It's slow[1], makes inefficient use of tape, and has tape/partition size issues, but it just about gets the job done.

    At work, we use Veritas Netbackup. Having used both it and Tivoli Storage Manager, TSM is easily the better of the two.

    [1] Estimates? Estimates? Just run the bloody backup.
  • In my IT Department we use BakBone Software's NetVault software. It has a client for most commercial Unixes (we use AIX and SCO), the BSDs, Linux, and Windows. It works well with all the autoloaders I use, and it backs up to NAS as well.

    It's way cheaper than BackupExec and kicks butt! Highly recommended!

    LK
  • I use a mix of /bin/hope and /usr/ccs/bin/prayer. Why? What do you use??
  • Before I start looking for new backup tools, I would look for the one responsible for removing my tools in the first place.

    A possible (read: theoretical) form of backup would be to use the various online search engines as distributed backup mediums. I.e., convert your data into various web pages which are encoded. Since webcrawlers will crawl a site and attempt to store/cache the data (Google, the Wayback Machine, etc.), your data is, in theory, cached on those crawler databases.

    The only problem with this idea
