Complete Filesystem Checkpointing?
polymath69 asks: "Living on the edge of Debian unstable means that updates sometimes break stuff, occasionally to an extent that is difficult to recover from. This got me thinking about treating the entire set of mounted filesystems as a transactional database. Mark state, try something which might be dangerous, test, and approve (commit) or panic (rollback). Obviously some filesystem support would be required, but with ext3 and reiserfs available, maybe the potential is already there. And such a system would need lots of disk space, but these days that's a demand easily granted.

There's lots out there on process-level checkpointing, and even some stuff about system-level checkpointing, but all I've found on that was in the context of saving and restoring processes for a system freeze and restore. But I couldn't find anything on Google or SourceForge about doing this sort of temporary branching in the filesystem. Is this idea feasible? Is anyone working on it?"
Hmmm. (Score:1)
Now maybe if you could have a delayed-type RAID mirror array, so the mirror gets updated once every 24 hours instead of every time the disk is accessed, then that might be something cool.
Re:Hmmm. (Score:2)
Personally I think the answer for Debian is to stop using such a fragile packaging system, *shrug*
(no, I don't mean switch to RPM, I mean things like not storing the entire package database in one huge file, and using that same file to keep track of what's installed. That's just asking to be nuked)
Re:Hmmm. (Score:1)
Regardless, this is not what the article submitter was talking about at all. He was talking about specific software packages in Debian unstable (bleeding-edge) that might have bugs that significantly affect the rest of the system. In such a case, a rollback process would be nice. This has nothing to do with the fact that the package database is stored in one big file.
Re:Hmmm. (Score:3, Insightful)
> (ASCII!) file is advantageous. By being separate
> from the individual packages, the dependencies
> are kept nicely so they can be easily referenced
This is in no way an advantage of the implementation format being one huge file. It could just as easily be one file per package, or one directory per package with multiple files for things like the packing list and uninstall commands, plus a static, throwaway index file. A split layout makes updates faster and a lot less disaster-prone.
> As an ASCII file, any sort of little error
> which might cause a problem is easily
> correctable, as opposed to other packaging
> systems.
That's great until your filesystem decides to truncate it to 0 bytes, and it's lots of fun looking for parse errors 40,000 lines into a file and guessing what might have been in the garbage that you find.
> Regardless, this is not what the article
> submitter was talking about at all.
It is one of the dangers of Debian; upgrade to the latest unstable apt -> nuked package db, upgrade to the latest kernel and get fs corruption -> nuked package db. Find your ATA driver/chipset is buggy on handling large files -> nuked package db. Install a borked package -> nuked package db.
Re:Hmmm. (Score:2)
+1, Insightful.
That's only a minor issue. Although I *have* had package DB corruption, it's usually been due to a daemon restart causing a sudden system freeze. And using dselect to rebuild the package database takes care of it.
More serious is when an update breaks, say, Ethernet or Modem access and I can no longer connect to the Net to fetch repaired packages. That's when I'd love to be able to roll back to a known working configuration.
Re:Hmmm. (Score:2)
> More serious is when an update breaks, say,
> Ethernet or Modem access and I can no longer
> connect to the Net to fetch repaired packages.
> That's when I'd love to be able to roll back to a
> known working configuration.
portupgrade on FreeBSD creates a package of every port it upgrades before replacing it, so the update can be rolled back if the install breaks. Tarring up everything on the packing list and leaving a backup package would be a lot simpler (and probably faster) to implement than a transactional fs.
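A minimal sketch of that tar-up-before-replacing idea in plain shell. Everything here is hypothetical (on a real Debian box the file list would come from `dpkg -L packagename`); `/tmp/ckpt-demo` stands in for the root filesystem so the sketch is self-contained:

```shell
#!/bin/sh
# Checkpoint a "package's" files before a risky upgrade, then roll back.
ROOT=/tmp/ckpt-demo            # stand-in for /; purely illustrative
rm -rf "$ROOT"; mkdir -p "$ROOT/etc"
echo "working config" > "$ROOT/etc/app.conf"

# Mark: archive the package's files (in real life: the packing list)
tar -C "$ROOT" -czf "$ROOT/mypkg-backup.tar.gz" etc/app.conf

# The upgrade goes wrong and clobbers the file
echo "broken config" > "$ROOT/etc/app.conf"

# Rollback: unpack the saved copy over the breakage
tar -C "$ROOT" -xzf "$ROOT/mypkg-backup.tar.gz"
cat "$ROOT/etc/app.conf"       # prints "working config"
```

No transactional filesystem involved: the "transaction" is just a tarball plus the discipline to make it before touching anything.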
Re:Hmmm. (Score:2)
> packaging system is the least of your problems
This is what happens when you mount filesystems async on drives that do write behind caching and you get powercuts or crashes.
Using one big fragile file for core system information is the same sort of braindamaged concept as a registry, or having
Re:Hmmm. (Score:2)
Interestingly enough, XP is very selective about what information it includes in rollback information. It will revert to old drivers and some (not all) old registry keys, but it will not uninstall software. During my rollback attempts, XP broke my Nero installation.
Transactional filesystems would be fantastic; I bet there are tons of little quirks to get around.
Chuq is working on it. (Score:4, Informative)
You can also find interesting filesystem info here [nondot.org]
There's also work being done on TRAM [redwoodsoft.com] (Transactional RAM).
Re:Chuq is working on it. (Score:3, Insightful)
A cleaner solution to this person's problem would be more robust 'uninstall' functionality in Debian's package manager, to make it clean up its own mess. To 'rollback' a hard drive would not only be time-consuming, but it would require exclusive access to the hard drive for the duration of the rollback. Might as well just make a disk image once everything is running smoothly, and rebuild it whenever there are significant changes.
Re:Chuq is working on it. (Score:3, Informative)
Of course, Veritas have their own FlashSnap [veritas.com] product that does this for VxFS filesystems, and have just released it for Linux. It's a relatively pricey option, but it works well, and if you need this sort of functionality, the price is negligible.
Restore (Score:1)
The problem is not so much backing up your good data, but restoring it when your system is hosed.
If your filesystem is badly corrupted - is it possible to restore the last backup?
The poster is asking for rollbacks, or restoration, at the file system level.
Windows System Restore does not do this - it assumes the system is still bootable in "safe mode".
Re:Restore (Score:2)
True, but I'm looking for a filesystem solution here. If the filesystem is hosed, the solution is hosed too. It's inadvertently bad file system contents that I'd like to be able to roll back if they turn out to be Not So Hot.
This idea would also be very handy for other admin tasks. Say you have X working but want to try out a new and possibly better driver. There may be files all over the place to tweak to try to get that up and running, and if you fail, you may lose graphics altogether. So you have to keep careful track of what you do, so that you can undo it. But with this sort of system, you could place a mark and play, knowing that you could revert to a working set of config files if you're ready to give up. And the system would be doing the tedious recordkeeping for you. Doesn't that sound great, in theory?
My preferred embodiment for this scheme would be one that you could "rollback" even after a few reboots. But let's see what's out there.
Re:Restore (Score:1)
I can see what changed with diff BEFORE AFTER. I backup files as I change them, and backup to DAT as well.
VMWare (Score:1)
Re:VMWare (Score:3, Informative)
Lately I wanted to experiment with the various kernel-level security packages like LOMAC, LIDS, and SELinux. It was great to be able to build a default linux install on a virtual disk and then copy it three or four times to install the weird security stuff.
It's even better for non-Unix OSes. A friend wanted help installing his Java web app on NT. I built a variety of virtual machines for testing, all using the VMWare "Undoable disk" choice. So when some weird registry key got screwed up by an Oracle installer, I just picked "Undo" and tried again!
If you have to use crappy OS or packages that are inclined to break things and put crap everywhere, VMWare is a delight!
(Yep, I'm just a happy customer.)
plan9's n/dump is this (Score:3, Informative)
One can roll back the filesystem on a PER PROCESS basis with the yesterday command.
In this way you can narrow down what's broken by, for instance, using yesterday's C library, or last week's, or last year's!
Also take a look at Venti
From: Sean Quinlan To: 9fans Mailing list
For those of you interested in the direction we are heading
with respect to plan 9's file system, you might want to
checkout our paper on Venti that will appear in the
USENIX fast conference.
http://www.cs.bell-labs.com/~seanq/pub.html#venti
Venti is a block level storage server that replaces the optical
juke box for a plan 9 file system. Some of the benefits include:
coalescing of duplicate blocks
compression
no block fragmentation
Also, we have switched from optical to magnetic disks as the storage
technology. I know many of you already use magnetic disks to
"fake" a worm, but for those of us using an optical juke box,
the performance improvement is rather substantial!!
seanq
Seems simplistic but... (Score:4, Insightful)
After it finishes, I test software on the existing system. If it breaks I restore, if it doesn't I play-test it for a while and if it keeps behaving I commit it to the next full "state preservation".
The biggest drawback, of course, is that this scheme requires close to 1.5 hours of downtime for the backup, and more if you need to restore. But for noncritical systems it works great.
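The state-preservation cycle described above can be sketched with dd. Real use would image an actual partition (e.g. /dev/hda1, with the filesystem unmounted or quiescent); here ordinary files stand in for the disk so the sketch runs unprivileged:

```shell
#!/bin/sh
# Whole-"disk" checkpointing with dd; image files stand in for partitions.
WORK=/tmp/dd-demo              # illustrative scratch directory
rm -rf "$WORK"; mkdir -p "$WORK"
dd if=/dev/zero of="$WORK/disk.img" bs=1024 count=16 2>/dev/null
echo "good state" | dd of="$WORK/disk.img" conv=notrunc 2>/dev/null

# Mark: preserve the current state (the 1.5-hour step on a real disk)
dd if="$WORK/disk.img" of="$WORK/checkpoint.img" 2>/dev/null

# Something goes wrong on the live "disk"
echo "bad state " | dd of="$WORK/disk.img" conv=notrunc 2>/dev/null

# Rollback: copy the checkpoint back over the live image
dd if="$WORK/checkpoint.img" of="$WORK/disk.img" 2>/dev/null
head -c 10 "$WORK/disk.img"    # prints "good state"
```

The downtime cost falls entirely on the dd passes, which is why the scheme suits noncritical machines with scheduled windows.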
TSM (Score:2)
Tivoli Storage Manager
Command Line Backup Client Interface - Version 4, Release 1, Level 2.0
(C) Copyright IBM Corporation, 1990, 2000, All Rights Reserved.
tsm> q server
Node Name: FOOBAR
Session established with server WAYBACK: Solaris 2.6/7
Server Version 3, Release 7, Level 4.10
Server date/time: 02/10/02 16:13:02 Last access: 01/30/02 00:00:56
tsm> restore -subdir=yes -replace=yes /
Consider it done.
Windows XP supports this? (Score:2)
Sounds like basically the same thing, no?
- Steve
Backing up (Score:2, Interesting)
So I have both a snapshot of the current status of the server with 2 hours accuracy and the ability to roll back the server to any point with 24 hours accuracy.
The best part is the company is paying for my Cable connection at home to do this.
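A rough sketch of that kind of rotating remote backup (the host name, paths, and schedule are invented; the hard-link trick needs GNU cp):

```
$ # every 2 hours, from cron: refresh the mirror, sending only deltas
$ rsync -az --delete server:/ /backup/current/
$ # once a day: keep a dated, hard-linked copy for 24-hour rollback
$ cp -al /backup/current /backup/2002-02-10
```

The hard-linked daily copies only cost disk for files that actually changed, which is what keeps the bandwidth and storage bills tolerable over a home cable line.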
Re:Backing up (Score:2)
My Apache logs have me uploading ~500MB/day. *That* may be hoggish. I wonder how much I download.
Re:Backing up (Score:1)
I am now down to about 400 to 500 MB a day and I now have the fun task of processing all that data.
LVM (Score:2)
This is great for backups too. You don't have to worry about files being open and it's a great way to get a point in time shot of your data in case of short backup windows.
Re:LVM (Score:2)
That's just about the idea I was looking for; or, at least, half of it. (Thanks, Morzel, for the link to the white paper.) LVM seems to handle the "commit" half of the problem cleanly and efficiently (by just deleting the snapshot), but the "rollback" problem seems to remain. If I can't redesignate my "snapshot" as the new "mainline," then I don't have a painless way to do a "rollback."
Others have mentioned using rdist or tar, which are not unreasonable ideas. But this is my home server, with more space than anything else I've got; so those solutions just require me to have another server, both bigger and more stable, and to which the same questions would ultimately apply. So, my ideal solution would be the all-in-one-box answer, if it can be done.
snapshotting.. (Score:2, Informative)
FreeBSD [freebsd.org] 5.0-CURRENT includes preliminary snapshot [mckusick.com] support for ffs.
The Linux options aren't quite as good. The most promising new filesystem that could provide this functionality is tux2 [linux.org], where data is structured in a way that would make implementing this functionality fairly easy. There was a post explaining how it would work in the mail archives, but they seem to have disappeared.
There is a commercial option: MVD Snap [mountainviewdata.com]. Their fileserver is Linux-based, and the code for their snapfs filesystem was once available during beta testing.
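For the FreeBSD side, the preliminary ffs snapshot support looks roughly like this (syntax as described in the McKusick snapshot paper; paths and the md unit number are examples, and details may differ between -CURRENT builds):

```
$ # create a snapshot of /var as a special file inside the filesystem
$ mount -u -o snapshot /var/.snap/today /var
$ # attach the snapshot file to a memory disk and mount it read-only
$ mdconfig -a -t vnode -f /var/.snap/today -u 4
$ mount -r /dev/md4 /mnt
```

The snapshot is copy-on-write, so it only consumes space as the live filesystem diverges from it.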
Re:snapshotting.. (Score:3, Informative)
In fact, we've had that since we first shipped our machines. There's a paper on our Web site that discusses how this works, File System Design for an NFS File Server Appliance [netapp.com].
However, although snapshot directories let you dredge up copies of files from snapshots in case you (or a program) screws up and trashes them, that's not a convenient way to roll back the state of the entire file system.
We did implement that later (atop the same mechanism); see SnapMirror and SnapRestore: Advances in Snapshot Technology [netapp.com] - SnapRestore(TM)(R)(LSMFT) is the "roll back an entire file system to a snapshot" feature. (At times, all this SnapStuff makes me want to SnapTheNeckOfMarketing, but so it goes....) That paper doesn't discuss technical details to the extent that the other paper does, but it should be possible from the earlier paper to figure out at least some of how you'd do it.
Just use Linux LVM (Score:3, Interesting)
LVM provides the ability to create snapshots of your volumes, so you can easily roll back if anything icky happens. Mind you, write performance goes down when using the snapshot feature: instead of one write operation, every write becomes a read/write/write operation, slowing things down. And this happens for every active snapshot, so you really can't have too many active snapshots.
Then again, if it's just for checkpointing (create snapshot), installing experimental stuff and then committing (delete snapshot) or rollback (restore from snapshot, delete snapshot), it should do the trick wonderfully.
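The checkpoint/commit/rollback cycle sketched above looks roughly like this with LVM (volume group and names are examples; note the rollback half is a manual copy-back, since LVM offers no native "merge snapshot into origin"):

```
$ # mark: take a copy-on-write snapshot of the root volume
$ lvcreate -L 2G -s -n rootsnap /dev/vg0/root
$ # ...install experimental stuff, test it...
$ # commit: happy with the result, so just drop the snapshot
$ lvremove /dev/vg0/rootsnap
$ # rollback instead: mount the snapshot and copy its contents back
$ mount -r /dev/vg0/rootsnap /mnt
$ cp -a /mnt/. /target
```

The -L size only has to cover the changes made while the snapshot is active, not the whole volume.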
AFS (Score:4, Informative)
(CodaFS should be able to do this too. I haven't played with CodaFS enough to know if it offers any other way to accomplish checkpointing.)
Method 1: backup volumes
$ cd
$ kinit me/admin
Password for me/admin@MYCELL:
$ aklog
$ vos backup some.path.avol
$ kinit me
Password for me@MYCELL:
$ aklog
$ cd avol
do stuff with the filesystem...
Oops! I need files that I modified or deleted!
$ cd
$ fs mkm avol.backup some.path.avol.backup
$ cp avol.backup/little-lost-file avol/
$ fs rmm avol.backup
Many sites run 'vos backupsys' (generally before 'vos dump'ing volumes) every night to automatically back up all their volumes, and leave users' backup home volumes mounted under their home volumes, to provide easy access to yesterday's files without an administrator's help.
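The nightly job is typically just a cron entry along these lines (the schedule and volume prefix are examples):

```
$ # from cron at 02:00 on an AFS admin host: recreate the backup
$ # volume for every volume whose name starts with "user."
$ vos backupsys -prefix user. -localauth
```

Each run replaces yesterday's backup volumes with fresh copy-on-write clones, so the .backup mount points under users' home directories always show the previous night's state.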
Method 2: for replicated volumes
$ cd
do stuff - uh-oh, I need a file back that I changed!
$ cp
ok, finished with the changes. Commit them!
$ kinit me/admin
Password for me/admin@MYCELL:
$ aklog
$ vos release some.volume
Released volume some.volume successfully
$ kinit me
Password for me@MYCELL:
$ aklog
Volume (for volume, read filesystem) backups work by saving the state of a volume at the time the backup command was issued. When changes are made to the volume, the original state is copied to the backup volume. The backup volume only takes as much space as the changes made since the last backup.

Replication works by making read-only copies of a volume in one or more locations, as specified by 'vos addsite' commands. The copies are only updated when changes are 'released' from the read-write copy to the read-only copies. By convention, cell root volumes are mounted read-only on
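Setting up the replication side described above looks roughly like this (the fileserver and partition names are invented):

```
$ # declare read-only sites for the volume on two fileservers
$ vos addsite fs1 /vicepa some.volume
$ vos addsite fs2 /vicepa some.volume
$ # push the current read-write contents out to the read-only copies
$ vos release some.volume
```

Until the next 'vos release', clients reading the read-only copies keep seeing the last released state, which is exactly the mark-and-play behavior the original poster wants.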
I think that newer versions of Solaris will do checkpointing on UFS. I haven't adminned Solaris since 2.3 (the slooow SS20 with 2.8 under my bed doesn't count until I play with it some more), so I'm not familiar with the details.
A year ago someone did this .... (Score:2)
SnapFS looks like the answer... but where is it? (Score:2)
Thanks to everyone for all the help!
Re:SnapFS (Score:2)
The Martin Pool/linuxcare.com.au project seems to be unrelated to the SnapFS in the original announcement, now owned by Mountain View Data. There's a README on the Pool project here [ultraviolet.org]. It does seem to be dead, which is OK since it didn't offer the full filesystem rollback capability. In addition, it was a true filesystem, unlike the other, which was designed to sit in another layer above Ext3.
If anyone has an old tarball of the now-MountainView SnapFS, please let me know...