
Linux Equivalents for Novell's "Filer"?

Posted by Cliff
from the deceptively-easy dept.
Josh Berkus asks: "One of my clients would love to convert their entire office to Linux. But one thing keeps them tied to a NetWare file server: a little utility called 'Filer'. Filer allows the sysadmins to retrieve deleted and overwritten files, up to a week after the event. With 70 secretaries using that server, that ability is crucial. I've looked around the Internet, but cannot find anything quite equivalent in the Linux world, except maybe hourly backups, and that's a pain. What the client really needs is a utility/mod for the filesystem or network layer that archives files instead of overwriting or deleting them. I had one kernel hacker offer to write me something like that, but my client does not want to be a test case. Note that we have nothing against NetWare; it's just that this client has historically not been able to get good Novell support. Anybody know anything that's already in production like this? Is Reiser working on this for ReiserFS?" This sounds deceptively easy. If this were a personal machine, it would be easy, but the showstopper is that it has to work as a share, meaning that this "trashcan"-like functionality needs to be implemented at the filesystem level. While I can understand the submitter's desire not to be a test case, if there doesn't exist a ready-made solution to this particular problem, how difficult would it be to add this to ext2/ext3, ReiserFS, or some other suitable open source filesystem and test it for reliability?


  • Perhaps... (Score:3, Informative)

    by crazney (194622) on Sunday September 29, 2002 @10:19AM (#4353503) Homepage Journal
    Perhaps you could look at cvsfs? Don't know if it actually works, but meh..
    http://sourceforge.net/projects/cvsfs/ [sourceforge.net]
  • by NaveWeiss (567082) on Sunday September 29, 2002 @10:22AM (#4353509) Homepage Journal
    Here's a project [uprm.edu] that supposedly does that. They use a daemon that physically deletes the files after a while. It's not as cute as in Novell, where erased files were physically overwritten only when the system really needed the space.
  • $ su
    # rm /bin/rm
    # mkdir /trash
    # cat > /bin/rm
    #!/bin/sh
    mv "$@" /trash
    ^D
    # chmod a+x /bin/rm
    • that's a brilliant solution when you consider the heavy use of system("rm ..."); in packages like samba, etc.
    • Yes, too trivial. It's a fileshare. You know, NFS, SAMBA, netatalk, something like that. They're not using rm to delete files. Besides, deleting the rm binary would be a really bad idea.

      I have heard of people using RCS for similar functionality. I think it might work because, as I understand it, it creates archives containing the original file and all modifications to it over time. However, I haven't used it myself, so I really don't have any idea how well it would work with a fileshare. Good luck!
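For anyone who wants to kick the tires on the RCS suggestion, here is a minimal sketch; the file name and log messages are invented, and it assumes the standard rcs package (the ci/co tools) is installed, bailing out politely if not:

```shell
# Keep every revision of a file with RCS (skips if rcs isn't installed).
command -v ci >/dev/null 2>&1 || { echo "rcs not installed"; exit 0; }

mkdir -p RCS                          # RCS keeps its ,v archives here
echo "draft 1" > letter.txt
ci -l -t-"letter to customer" -m"first draft" letter.txt   # check in, keep lock
echo "draft 2" > letter.txt
ci -l -m"second draft" letter.txt                          # revision 1.2
co -p -r1.1 letter.txt                # prints the original "draft 1"
```

The catch for a fileshare is exactly what the poster suspects: someone (or some daemon) has to run ci after every save, since the clients certainly won't.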
  • unlink() (Score:3, Informative)

    by Saint Nobody (21391) on Sunday September 29, 2002 @11:05AM (#4353655) Homepage Journal

    I know I saw a library a while ago on freshmeat that supplies a new unlink() function. This version moves the file into a trashcan area rather than just deleting it. It could be worth investigating. After a quick search, it's apparently called libtrash [freshmeat.net].

    I don't really know the software, so I can't vouch for it, but it seems to be taking a sensible approach. Of course, the whole idea breaks if you use the kernel nfsd, so be careful with that.

    • All the links so far are about accidental deletion prevention. The real problem, I think, is most likely accidental overwriting. The power to get your file from 15 minutes ago on Novell is awesome. Unless you have a utility that backs up files as they are overwritten and then allows them to be retrieved, you are not going to get the functionality needed. You need a trashcan far more advanced than either Linux or Windows has given you.
  • This would also have the advantage that if someone deleted even part of a file, they could get that part back later.
    Make nightly (incremental) backups on a 14-day rotation, putting them on a large file storage system.

    Then create some perl scripts to retrieve the files when someone wants them back.

    $>getback letter_to_customer*

    (0) letter_to_customer_20020927
    (1) letter_to_customer_20020928
    (2) letter_to_customer_20020930

    Which one (enter for all)?

    Of course a GUI would be more office friendly.

    All the parts would be off the shelf, so you can say it is not a test case. And you would have a backup system with day-to-day value.
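The retrieval script described above might look something like this in plain shell (the backup location, naming scheme, and function name are all invented for the example; the perl version is left to taste):

```shell
# Sketch of a "getback" retriever. Assumes dated copies pile up flat in
# one backup directory, e.g. letter_to_customer_20020927.
BACKUPS=${BACKUPS:-/backup/files}

getback() {
    pattern=$1
    set -- "$BACKUPS"/$pattern          # let the shell expand the glob
    [ -e "$1" ] || { echo "no matches"; return 1; }
    i=0
    for f in "$@"; do                   # numbered menu, oldest first
        echo "($i) $(basename "$f")"
        i=$((i + 1))
    done
    printf 'Which one (enter for all)? '
    read -r choice
    if [ -z "$choice" ]; then
        cp "$@" .                       # restore every matching copy
    else
        shift "$choice"                 # jump to the chosen index
        cp "$1" .
    fi
}
```

A GUI wrapper for the secretaries would sit on top of exactly this logic.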
    • The time we found ourselves using salvage the most at my old company was when we needed to get back to a particular revision of a document. Sometimes, the revisions were casual, just faxing something off to a client, and waiting for comments. You continue working with the document, possibly changing direction and making the previous issue irrelevant.

      Although proper revision control procedures would save you, when a lot of things are happening at once it doesn't always happen.

      Salvage is no substitute for proper backups, but it does bridge the gap when you need to get back a particular file from a particular time and day! (Not to mention that it is immediately available from the user's machine, without requiring any administrative effort, changing tapes, etc.!)
    • The 'libtrash' library already mentioned is a reasonable approach. Another approach, however, might be not a full back-up made with 'cp' or 'tar' or 'dump', but a periodic 'ln' of files into a special directory somewhere on the filesystem. (I mean a hard link, not a soft link.) A 'find' script could very easily accomplish this and could build time-oriented versioning into it. Of course this approach will keep blocks used for a considerable amount of time, unless you have another script that deletes them periodically.

      You will, however, lose the contents of the file if the file is truncated (open(2) is called with 'O_TRUNC'). I do not know whether the file server software (be it Samba, Netatalk, etc.) can guarantee that truncation won't happen. I would hope that 'libtrash' includes a wrapper for open(2) that also handles truncation.
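A rough cron-able sketch of that find+ln generation scheme, as a shell function (the paths in the usage comment are placeholders, and the truncation caveat above still applies):

```shell
# Hard-link every regular file in a share into a dated "vault" directory.
# The data blocks stay allocated even after the share's copy is
# unlink()ed, because the vault's link still references them.
snapshot_share() {
    share=$1
    vault=$2/$(date +%Y%m%d-%H%M%S)
    mkdir -p "$vault"
    ( cd "$share" && find . -type f ) | while IFS= read -r f; do
        mkdir -p "$vault/$(dirname "$f")"
        ln "$share/$f" "$vault/$f"
    done
}

# e.g. hourly from cron:  snapshot_share /srv/share /srv/.versions
```

Note this only protects whole files; an O_TRUNC rewrite in place clobbers the linked copy too, as the comment above points out.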

    • Here's one method for the backups: somebody has taken the time to put together a nice page on how to do snapshots with rsync [mikerubel.org].

  • by sclatter (65697) on Sunday September 29, 2002 @11:27AM (#4353726) Homepage
    I'm guessing this may be out of your budget range, but you should know about the snapshot capability of Network Appliance filers. The filers create online "backups" by saving state information about the filesystem. You could keep snapshots of the filesystem's state from every day for a week, plus several from earlier in the day. I think they support something like a dozen snapshots at a time.

    All this does take up space on the filer, but only the changed blocks have to be saved. As long as your churn is pretty low it's not that much extra space.

    If you lose or munge a file, all you have to do is cd to a special .snapshot directory, choose the latest image that has your file intact, and copy it back to the live filesystem. It's perfect for those belly churning "Ooops" situations.

    Snapshots are also used for backups, so that you are always backing up a totally static image of the filesystem. No files changing midway through.

    Yeah, filers are spendy, but when it comes to fast, reliable, easy to administer file service I really believe they are the best.

    Other than that, I think Veritas added a similar capability to VxFS, but I don't recall it being quite as elegant.

    Sarah
    • They are called snapshots. Many NAS boxes do it and they are VERY helpful. They keep me from doing restores from tape when someone deletes a file. I just grab it from last night's snapshot. I do a nightly snapshot as well as a weekly so that I keep up to a month's worth for each volume (free-space willing...). That way I can go back to a file 3 weeks old without calling tapes back from off-site storage.
  • by fingal (49160) on Sunday September 29, 2002 @11:28AM (#4353731) Homepage
    As chance would have it, I had to perform this task for one of my clients at the weekend as they had just deleted their entire active database (doh!). After a load of research, I found that the best resources are to be found in the reiserfs mailing list [theaimsgroup.com].

    The key post was this one [theaimsgroup.com] which pointed me at using:-

    reiserfsck --rebuild-tree -S -l rebuild.log /dev/yourdevice

    Very scary. Had to boot into a rescue system off CD first, mount the HDD read-only, then get enough tools to be able to back up the HDD to a remote box. Unmount the disk and then run reiserfsck. Pray for a bit. Got all the db files back (and because they were in a directory that was deleted, they all had the correct names as well, once I'd found the directory).

    There was some minor file-system corruption for files that had been written frequently, but this was taken care of by restoring the previous backup and checking everything else against the RPM db. So, all in all, not an experience you would want to repeat on a daily basis, but definitely worth it to restore 2 months of lost work (why oh why don't people use backups?).

    As far as kernel hooks for undeleting data, the mailing list link above contains several discussions about this, but the general feeling seems to be that it is possible but Linus doesn't want it (see here [theaimsgroup.com]).

    • The last referenced link:
      Linus has insisted that there is no reason to support undelete at the filesystem level, or anywhere in the kernel for that matter. Don't delete important files and you'll never need the feature :-).

      If you really need undelete, write a shared library that overloads the unlink() system call and LD_PRELOAD it. This is the right place to do it.

      I think it is really a shame that Linux doesn't support this out of the box, but what would the above actually require to implement all of the Novell-style functionality?

      Namely:

      Files only overwritten when space is required.

      Every revision of a file is kept!

      Is completely independent of a proper backup solution; the purpose isn't always to recover a file that was accidentally deleted, but often to get back to a previous revision.

      It doesn't seem like it can really function as a layer above the filesystem, or the tape backups would include all your revisions as well.

      Companies that use this functionality love it... those that don't suffer blindly with the lost productivity.

      • From personal experience, Novell's Salvage function has saved my bacon a number of times. (Including accidentally deleting a fair amount of the sys volume...)

        Linus's suggestion of using LD_PRELOAD requires that every single program have that LD_PRELOAD environment variable set - quite unreliable, IMHO.

        I suspect one difference is that NetWare is mainly a file server OS, so the salvage function might have been integrated into the file sharing code.

        I can't imagine it's that hard to change the unlink call to relink the file into a new target directory or something - but the hard part is that it might take a user-level daemon to manage the cleanup and recovery, and to track all the increased metadata. (Precisely which version do you want to recover?)

        Also, if the disk gets full and you can't easily boot to launch that daemon - well, someone would have to think about this.
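For the record, the LD_PRELOAD approach Linus suggests fits in a few lines. This is a toy sketch, not libtrash: the file and variable names are made up, rename(2) only works within one filesystem, and many modern tools call unlinkat() rather than unlink(), which is one reason a finished library wraps far more calls than this:

```shell
# Build a toy interposer that turns unlink() into "move to trash".
command -v gcc >/dev/null 2>&1 || { echo "gcc required for this demo"; exit 0; }

cat > trash_unlink.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libgen.h>

/* Interpose unlink(): rename the file into $TRASH_DIR instead of
 * deleting it. rename(2) only works within a single filesystem. */
int unlink(const char *path)
{
    const char *trash = getenv("TRASH_DIR");
    char *copy, dest[4096];
    if (!trash)
        trash = "/tmp/trash";
    copy = strdup(path);                 /* basename() may modify its arg */
    snprintf(dest, sizeof dest, "%s/%s", trash, basename(copy));
    free(copy);
    return rename(path, dest);
}
EOF
gcc -shared -fPIC -o trash_unlink.so trash_unlink.c

# Any dynamically linked program started like this now "trashes" files:
#   mkdir -p /tmp/trash
#   LD_PRELOAD=$PWD/trash_unlink.so some-program
```

As the comment above notes, getting that environment variable set for every program (and coping with static binaries and the in-kernel nfsd) is exactly where this scheme gets unreliable.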
  • Ahhh. Salvage. (Score:5, Interesting)

    by FreeLinux (555387) on Sunday September 29, 2002 @11:37AM (#4353759)
    Filer is in fact a DOS-based frontend utility for managing files on Novell NetWare servers. Being a DOS utility, filer is obviously as old as the hills, but it is still very useful. Filer's functionality has been included in the GUI tools that Novell has developed over the years, including the outstanding NWAdmin utility and the less appreciated ConsoleOne tool. Today, the Salvage utility is even included in the NetWare client and is accessed from a context-sensitive menu on the client machine's desktop.

    Salvage, the feature's proper name, is a tremendously powerful feature of NetWare. Basically, when files are deleted from NetWare volumes, they are not truly deleted. They are unlinked or renamed and become invisible to the clients, but the deleted files remain on the NetWare volumes. When the Salvage utility is invoked, these files can be displayed, selected, and instantly restored.

    Deleted files remain on the volume until the volume runs out of space, at which time the oldest deleted files are automatically purged as necessary to make room for new files. It is important to note that the space consumed by deleted files is not reflected in any of the client utilities and does not count against disk quotas. This means that from the client's perspective the volume may show 30 gigs free despite the fact that the volume is actually full of deleted files.

    Another interesting note is that deleted files retain their rights attributes. Files invisible to a certain user cannot be seen or restored by that user if they are deleted. Only users that had the appropriate rights before the file was deleted can manipulate these deleted files.

    The deleted files can also be purged manually, from individual directories or the entire volume. Purging can also be configured to happen immediately on delete, which is the recommended configuration for temp directories.

    This beloved feature of Netware has always been admired. The user community has always requested it in other OSes but, as yet, the only thing to even come close is an NT/2000 add-on called Network Undelete, from the folks at Executive Software, the same people that brought us Diskeeper. Unfortunately, it's still not quite the same.

    Several posts have stated that this should be a simple thing to implement. I cannot speak to the ease or difficulty of implementing this feature. However, one does have to wonder how easy it would really be. Considering that Salvage is such an old feature of NetWare, that there have been so many requests for it in other OSes, and that NetWare is still the only OS to offer it, one must conclude that it is not really so easy to implement.

    If you have found a developer who is willing to try to implement such a feature, I strongly encourage you to get them going, regardless of whether you want to be a test case or not. The community would love and appreciate this feature in Linux and any other OS.
    • This beloved feature of Netware has always been admired. The user community has always requested it in other OSes but, as yet, the only thing to even come close is an NT/2000 add-on called Network Undelete, from the folks at Executive Software, the same people that brought us Diskeeper. Unfortunately, it's still not quite the same.

      Several posts have stated that this should be a simple thing to implement. I cannot speak to the ease or difficulty of implementing this feature. However, one does have to wonder how easy it would really be. Considering that Salvage is such an old feature of NetWare, that there have been so many requests for it in other OSes and yet NetWare is still the only OS to offer it, one must conclude that it is not really so easy to implement.

      Ahh - but it is a violation of the Slashdot Code of Posting to mention Novell technology at all, and never in a complimentary way!

      More seriously, most Linux-ians and Slashdotters have never worked with a properly designed, engineered, and installed NetWare network, so they don't know what that system is (was?) capable of.

      sPh

  • Since I learned Unix on a big shared machine, I always thought undelete was built into the OS from the beginning. I was almost disappointed when I installed Linux and found out it was something they had added.

    I don't remember all the particulars, but they had a "Charon" daemon that handled the deleted files. You had a pretty good chance up to a couple of days of recovering files, then they were permanently gone.

    Someone is working on this for FreeBSD [iu.edu], but I haven't seen a Linux equivalent.
  • Although there appear to be some partial solutions around for Linux, I do think it would be worth giving this the full treatment in one or more of the most common FS modules. It certainly isn't new; I remember liking how DEC's TOPS-10 OS handled this, with a settable number of generations to keep while you are logged on, and a (usually smaller) number for when you log off. Files are given "generation numbers" that increment each time a file is written and closed, and you can add a generation number to any file reference if you don't want the most recent.

    This needs to be very user oriented and transparent so that it doesn't fail when you need it most.

  • by Kz (4332)
    Set up a daemon watching for new files (use fam, of course) and create a hard link in a mirror directory as soon as they're created. When a file is unlink()ed, it won't be deleted, since there's still another link to it. Add a cron job that scans the mirror directory and moves any file with a link count of 1 to the 'just deleted' folder. The rest is routine: hourly/daily/weekly rotations, old ones really deleted, etc.
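The cron half of that scheme can be sketched with coreutils alone; a link count of 1 on the mirror copy means the share's copy was unlink()ed (the directory names in the usage comment are placeholders):

```shell
# Sweep a hard-link mirror: anything whose link count has dropped to 1
# no longer exists in the share, so it has effectively been deleted.
sweep_mirror() {
    mirror=$1
    deleted=$2
    mkdir -p "$deleted"
    find "$mirror" -type f -links 1 -exec mv {} "$deleted"/ \;
}

# e.g. hourly from cron:  sweep_mirror /srv/.mirror /srv/.just-deleted
```

The fam-driven daemon still has to create the mirror links as files appear; this only handles classifying and rotating them afterward.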
  • Second opinion... (Score:4, Insightful)

    by zulux (112259) on Sunday September 29, 2002 @01:08PM (#4354100) Homepage Journal
    Not to be a complete twit - but I wouldn't replace the Novell server. If it's stable and does what you want, then I'd just let it be. Put your energies into adding value rather than replacing something that works with something else. Perhaps a Linux-based firewall is needed? Or a Linux-based VPN so people can get into the office network from home?

    Office types love VPN from home - though I'd suggest OpenBSD for the job over Linux; both would work fine and make people *very* happy if done correctly.

    Cheers and good luck.

  • I don't have the details yet, but: I am evaluating NAS solutions. The two Linux systems I'm looking at that have journaled filesystems both use XFS, and both have snapshot capabilities. I don't know if it's something they've added to their systems, or if it's inherent in XFS. Can somebody who's already playing with XFS fill me in on this?
    • It does have a snapshot command (haven't used it myself), but it needs to 'freeze' the filesystem first; during that time, all writes go to the journal and only get committed when it's 'unfrozen'.

      But it wouldn't do a file rollback; it's mostly for directory backups, block dumps and media management (via LVM).
  • Doesn't the Linux Logical Volume Management software do snapshotting? Ok, not perfect, but it ought to work.

    http://www.tldp.org/HOWTO/LVM-HOWTO/x136.html

    http://www.tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html

    Not really efficient for hourly snapshots, but perhaps it could be made to work. Also look at rsync or some of the options for cp.
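For completeness, the LVM route from those HOWTO pages looks roughly like this. The volume group and names (vg0/share) are hypothetical, and it must run as root on a box actually using LVM, so the sketch bails out otherwise:

```shell
#!/bin/sh
# Hypothetical volume names (vg0/share) -- adjust to your setup. This
# needs root and a real LVM volume, so it exits politely otherwise.
[ "$(id -u)" -eq 0 ] && [ -e /dev/vg0/share ] || {
    echo "demo only: no /dev/vg0/share on this box"; exit 0; }

# Carve out a snapshot big enough to absorb writes made while it exists.
lvcreate --snapshot --size 1G --name share-snap /dev/vg0/share

# Mount it read-only and fish the lost file back out.
mkdir -p /mnt/share-snap
mount -o ro /dev/vg0/share-snap /mnt/share-snap
# cp /mnt/share-snap/path/to/lost-file /srv/share/path/to/lost-file

# Snapshots cost space as the origin changes, so drop it when done.
umount /mnt/share-snap
lvremove -f /dev/vg0/share-snap
```

Rotating a few of these from cron gets you NetApp-style point-in-time copies, at the price of reserving space for each snapshot's changed blocks.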
  • libtrash is everything you are looking for:

    keeps copies of all deleted, renamed, truncated or overwritten files

    works with all applications on the system

    file system independent

    versioning (!): all versions of a file are kept in the trash directory

    very configurable

  • But what about only using symlinks? Then write a little scripty that checks to see if the symlink has been deleted. If it has, put a counter on the file and do what you must to the file saved safely on the server...
  • I like having an extra hard drive in my systems, and I just run nightly backups to that drive. I guess you could do this more often, say every hour...

    In my work this is very useful; I too remember the convenience of FILER and SALVAGE. But Windows servers don't offer a way to save deletes from a network share, and any third-party apps seem to cause the blue screen of death. So I'll stick with an extra hard drive. I use Linux, but find it too cumbersome for daily use for simple file sharing and print serving. I'll be using it at home until I have more time to spend with my server at work, which may be a long time coming... :(

    Cheers.
  • And it has worked since the first version of the OS. Now, admittedly it's not a network file server, but it does perform the same function for the user - they think they deleted the file (thrown it out), but they can still rescue it until they run out of disk space and 'empty the trash can.' But I would suggest that in this case what you really need is an addition/modification to the Samba codebase which would allow your Windows SMB shares (what it sounds like you really care about) to not delete files when the appropriate network APIs are called, but rather move the files to a temporary directory, which could operate as a FIFO queue or an 'empty manually when it gets full' system.
