
Do-It-Yourself Internet Archiving?

A moron asks: "Web pages change and disappear all the time. For legal and historical purposes, I need to have accessible archives of the websites I maintain. I'm basically looking for a do-it-yourself version of the Internet Archive's Wayback Machine: something that provides a simple versioning system and accessibility through a web interface. Is there already software that does this? If not, what ideas does Slashdot have to make such a system possible? How should it work? What existing tools can be used together to make a workable system?"

"There are all sorts of tools out there that will archive web pages, and each have other necessary features such as making links relative. I don't always have filesystem access to pages, so tools that rely on such access won't work. There are some obvious tools that do part of the job such as:

But grabbing pages is only part of my needs, and I suspect many other people's. The other pieces include intelligently archiving the pages and making them accessible. If a page or a page element hasn't changed, there is no need to store multiple copies. The archives need to be easy for end users to navigate, search, and link."
  • by Mr. Darl McBride ( 704524 ) on Wednesday December 31, 2003 @01:27PM (#7846260)
    For a small site with complete backups, make a script:

    ARCDIR=`date +%y%m%d`
    cd /var/www/archives
    mkdir "$ARCDIR"
    cd "$ARCDIR"
    wget -r http://mysite.com

    Add error-checking and season to taste.

    If you want to be more efficient, as the poster wanted, you could easily have it always fetch to the same directory and just use cvs to check it in (a sketch follows below). This eliminates duplicate storage. There are many free web-based CVS browsers out there with date searching and similar features. It might not be quite as nice as the Wayback Machine, but it definitely does the job for free.

    A lot of folks are doing a simple version of the above to maintain SCO mirrors, so there can be no erasing of history before the trial. God bless you all -- it will make the case that much stronger for us.
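
    A rough sketch of the single-directory-plus-CVS variant, assuming a CVS working copy has already been checked out at /var/www/archives/mysite (the path and wget options are just one guess):

    cd /var/www/archives/mysite
    # -m turns on mirroring (recursion plus time-stamping); -nH/-np keep the tree tidy
    wget -m -nH -np http://mysite.com
    # new files still need a `cvs add` (by hand or via a wrapper) before the commit picks them up
    cvs commit -m "snapshot `date +%Y%m%d`"

    Untested, but that's the general shape: wget keeps one working tree current, and CVS stores only the deltas.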

    • by Mr. Darl McBride ( 704524 ) on Wednesday December 31, 2003 @01:31PM (#7846306)
      A quick caveat:

      If archiving SCO or other such pr0n sites, or if you have no-robots policies set on your own site that you're archiving, you'll need to tell wget to be a little rude. He needs to go where robots aren't meant to go. I figure if you were going to visit every page yourself anyway, it's not so impolite. And besides, robots.txt is for other people. You know... the ones we make ride the back of the internet.

      To accomplish this: echo "robots = off" >> ~/.wgetrc

    • For this to work well, you'd need --mirror, not -r. --mirror includes -r (recursion) but also turns on time-stamping and sets the recursion level to infinite, from the default of 5. Time-stamping is crucial, as it's what lets wget know what has changed and what doesn't need to be retrieved, assuming you're doing the cvs checkin thing.
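
      Concretely (URL made up):

      wget --mirror http://mysite.com

      Adding -k (--convert-links) makes the archived copy browsable offline, at the cost of rewriting links inside the downloaded files, so what you check in is no longer byte-for-byte what the server sent.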
  • Use - (Score:2, Informative)

    by noselasd ( 594905 )
    this [tigris.org]
  • CPAN is your friend (Score:2, Informative)

    by B1LL_GAT3Z ( 253695 )
    I highly recommend that you check out w3mir [cpan.org], which turned up after a quick search on CPAN [cpan.org] (the Comprehensive Perl Archive Network). I particularly like w3mir for its ability to compare against an existing copy of your local mirror, which is more of what you're looking for. Using it in conjunction with a simple shell script (to tar and mv files as desired, hooked to a cron job) will create your very own automated Internet Archive.
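
    A bare-bones wrapper along those lines (paths are invented, and the w3mir invocation is a guess -- pull the real recursion and output options from its documentation):

    #!/bin/sh
    STAMP=`date +%Y%m%d`
    # mirror the site into a working directory (exact w3mir options per its docs)
    cd /var/www/mirror && w3mir -r http://mysite.com/
    # roll the result into a dated tarball alongside the other archives
    tar czf /var/www/archives/mysite-$STAMP.tar.gz -C /var/www/mirror .

    Run it nightly from cron and you get dated snapshots, at the cost of storing a full copy each time.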
  • I think wget is the way to go, perhaps with the "-m -k" flags, and then check the whole directory tree into CVS using `date +%Y%m%d` as your version number.
    • by Anonymous Coward
      CVS with the -D option will do your date-oriented functions for you without needing special version numbering, I think.

      But personally I don't think wget and CVS are very helpful in this case. I think it would be better to use something like Perl or Ruby to write a custom spider, and then use cp -lR to make iterative snapshot copies of your working archive tree (you use cp -l because then your copies don't take up extra space). This way you can write hooks to test whether content has changed before writing it out.
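
      The cp -l snapshot step looks roughly like this (directory names invented; GNU cp assumed):

      DATE=`date +%Y%m%d`
      cd /var/www/archives
      # hard-link snapshot: unchanged files share an inode with the working tree, so they cost no extra space
      cp -lR current snap-$DATE
      # caveat: the spider must replace changed files (write new, then rename over the old)
      # rather than rewrite them in place, or the shared inodes make old snapshots change too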
  • I have wanted for a while now to be able to run a command "every time a file is created or updated" in a tree. I know how to do this on a per-directory basis, but I would love to run a command on the changed file itself whenever an update occurs. As far as I know, there is no way to cause a "trigger" of this sort. Due to this limitation, I've instead been running the same command on every file on the server each morning. Yes, it's a cron-controlled script and I don't need to touch it, but having it scan and check every file each day is wasteful.
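
    A poor man's substitute, using only a marker file and find -newer (paths and the per-file command are invented):

    #!/bin/sh
    MARK=/var/run/last-archive-pass
    [ -f $MARK ] || touch -t 197001010000 $MARK   # first run: treat everything as changed
    touch $MARK.new
    # run the per-file command only on files changed since the previous pass
    find /var/www/site -type f -newer $MARK -exec /usr/local/bin/archive-one-file {} \;
    mv $MARK.new $MARK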
  • CVS? (Score:3, Informative)

    by Phleg ( 523632 ) <stephen@touset.org> on Wednesday December 31, 2003 @02:25PM (#7846850)
    If you have console access to the machines (or can at least make a script), CVS could be a viable solution. Just maintain a central CVS server and have the websites do CVS commits when timestamps on files change. On the other hand, this might not really work if you have dynamic content.
    • Or he could use wget to download the latest copy of the page and then use CVS (or another version control system) to record the latest changes.

      There's no real need for console access, unless it's a dynamic site, in which case you need to store the source for your scripts as well as maintain versions of the database!

      At this point it's nothing more than keeping multi-versioned backups of your website and database files. Check out rdiff-backup [stanford.edu]
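
      rdiff-backup keeps the newest copy as a plain mirror and stores reverse deltas for history; basic usage looks like this (paths invented):

      # take or refresh a backup; only the differences since the last run are stored
      rdiff-backup /var/www/mysite /backups/mysite
      # list the increments that exist
      rdiff-backup --list-increments /backups/mysite
      # pull back a file as it looked ten days ago
      rdiff-backup --restore-as-of 10D /backups/mysite/index.html /tmp/index.html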

      Best of Luck.
  • If you're solely maintaining static sites, just keep copies of the site as published.
  • sourceforge (Score:3, Interesting)

    by nocomment ( 239368 ) on Wednesday December 31, 2003 @03:18PM (#7847430) Homepage Journal
    Sourceforge is open source, why not go d/l that [sourceforge.net]? You can use CVS as an easy way to switch around and do upgrades. You can develop a site, then upgrade via cvs, and if something unexpected breaks, downgrade via cvs. Once you get the infrastructure in place, things like that would be a breeze.
    • Re:sourceforge (Score:4, Informative)

      by tf23 ( 27474 ) <tf23@lottadot.com> on Wednesday December 31, 2003 @07:18PM (#7849426) Homepage Journal
      Why not recommend gforge [gforge.org] rather than SourceForge? SourceForge's code has been untouched for a few years now, right? GForge, meanwhile, is open source and under active development.
    • Actually, SourceForge is *not* Open Source any longer.

      Check the URL you referenced, and you'll notice that the last release was made on 2001-11-04. And the code released there is actually even older than that, as the release date got updated when they moved it from the original Alexandria project.

      SourceForge intentionally killed off public development of the SourceForge code, and then did an excellent job of convincing people that it was still an Open Source project. They kept promising and promising th
  • Linkrot (Score:3, Informative)

    by sakusha ( 441986 ) on Wednesday December 31, 2003 @03:35PM (#7847567)
    Bloggers are acutely aware of this problem: they link to pages that change or get moved into paid archives, and they call it "linkrot." I've started to provide a .pdf capture of linked articles on my blog, as well as the original link (which I usually take down if I notice it's disappeared).
    I like Adobe Acrobat for this job: you just point it at a URL, tell it how many levels you want to archive, and go. You can even archive externally linked pages if you uncheck "stay on same server," or you can select other options like "Archive Whole Site."
  • For dynamic sites... (Score:2, Interesting)

    by dcocos ( 128532 )
    You may want to consider something that caches the pages as they are displayed. This will add overhead and doesn't scale, but it would allow you to keep a copy of the pages as they were displayed. You could at least use it for a subset. For example, you use JSPs to serve up pages from a DB, but the resulting page is different depending on the params passed to the page. wget isn't going to capture all of this, so when the page is generated you write it out with a timestamp (with some intelligence built in so the page only gets written out when it actually changes).
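
    If the interesting parameter combinations are known ahead of time, even a dumb loop over a URL list gets you timestamped captures of the rendered pages (URLs and paths are made up):

    #!/bin/sh
    STAMP=`date +%Y%m%d%H%M`
    while read URL; do
        # flatten the URL into a filename and save a dated copy of the rendered page
        NAME=`echo "$URL" | sed 's|[/?&=:]|_|g'`
        wget -q -O "/var/www/archives/dynamic/$NAME.$STAMP.html" "$URL"
    done < /etc/archive-urls.txt

    Capturing at render time inside the application, as suggested above, covers the combinations you can't enumerate, but it needs hooks in the code.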
  • I use Teleport Pro.
  • How about using Heritrix [archive.org], the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler?
    • From the FAQ [archive.org]:

      I need to crawl/archive a set of websites, can I use Heritrix?

      Eventually. For now, the crawler is still in early development, and only if you are comfortable grabbing code directly from CVS, wrestling with incomplete documentation, and running into undocumented limitations, would you want to use the current software.
  • You could always extract the site with Adobe Acrobat and have a 'distilled', intact, usable copy of the site in a single file.
