
Archiving Web Pages - Legal or Illegal? 102

Posted by Cliff
from the if-google-and-wayback-can-do-it... dept.
Dyer asks: "I used to run several high-traffic anonymous surfing sites. If I wasn't being emailed by a lawyer telling me to block someone's site from being accessed, I was being woken up at 2am by a telephone call from a crazy person yelling, sometimes swearing, under the impression that my site had copied theirs and that the copy resided on my server, when in actuality it was being accessed by my server at that instant and relayed to the user. Which brings me to my point: how do services like Archive.org and Google's cache get away with what they're doing? You can call their services whatever you like, but it doesn't change the fact that they are copying people's websites and saving them onto their servers for everyone to access."
This discussion has been archived. No new comments can be posted.

  • It SHOULD be legal (Score:4, Interesting)

    by Anonymous Coward on Monday June 30, 2003 @04:00PM (#6333451)
    Well, it should be legal/allowed. If you don't want it read and archived, don't put it on the Web.

    Everything should go, except for things like malicious alteration and theft (taking stuff and claiming it is yours)
    • by lightspawn (155347) on Monday June 30, 2003 @04:39PM (#6333757) Homepage
      Well, it should be legal/allowed. If you don't want it read and archived, don't put it on the Web.

      You know, I've been wondering about Java/Shockwave games. Certainly most kids would love a CD full of those games, and many companies have many different games online which mostly disappear a few months later.

      Is anybody archiving these? Do we need to start?

      Would the companies object?

You can play The Hitchhiker's Guide to the Galaxy [douglasadams.com] on Douglas Adams' web site. As it happens, if you know what you're doing you can also download the .z5 file and play it offline on any Z-machine interpreter. Would the copyright owners object to that? I own that Infocom 33-game collection and all 5 books; the reason the game wasn't included in the collection is copyright hassles. Am I "entitled" to play it offline?

      This ties in to today's "is ROM collecting wrong" story, except in this case you're actually offered the games, under mostly unclear terms.

  • RTFF (Score:5, Informative)

    by kalidasa (577403) on Monday June 30, 2003 @04:04PM (#6333481) Journal

Archive.org FAQ [archive.org]

    How can I remove my site's pages from the Wayback Machine?
    The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection. By placing a simple robots.txt file on your Web server, you can exclude your site from being crawled as well as exclude any historical pages from the Wayback Machine.
    See our exclusion policy.
    You can find exclusion directions at exclude.php. If you cannot place the robots.txt file, opt not to, or have further questions, email wayback2@archive.org.

In other words, by NOT including a robots.txt file, you are implicitly granting them permission to cache your content. Also, the content is cached as it was published, complete with the appropriate markings, and is only publicly accessible content, so you'd be hard pressed to argue there is any economic harm from the caching, which means there would likely be no damages from a successful copyright suit, which means a copyright suit would be pretty damned unlikely.

    IANAL.
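For reference, the exclusion mechanism the FAQ describes was keyed to the Robots Exclusion Protocol; a minimal robots.txt along these lines would do it (ia_archiver is the crawler name the Archive's exclusion directions referred to at the time, so treat that string as an assumption here):

```
# robots.txt at the site root. Blocks the Internet Archive's crawler and,
# per the FAQ quoted above, also removes historical pages from the
# Wayback Machine.
User-agent: ia_archiver
Disallow: /
```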

    • Re:RTFF (Score:3, Insightful)

      by sir_cello (634395)
      You don't properly understand the legal process.

In a copyright case, the courts first establish whether infringement has taken place, and this is determined irrespective of economic issues. It is determined purely on issues of subsistence, ownership, duration, etc. - in terms of the statutory provisions and the existing case law. Only then are exceptions (such as fair use, and specific exemptions - say, for public archives and libraries) considered.

      Then, finally, when remedies are considered (e.
Read more carefully. The implications of my posting: The cachers are providing a mechanism to have your work excluded at your request, providing you with a non-court means to remedy the caching if you choose. Since it is all publicly available information anyway, the potential economic damage is minimal. There are usually two remedies provided to a plaintiff after a lawsuit over copyright: the violator is ordered to stop violating, and the violator is ordered to provide monetary compensation. In this case,
        • Re:RTFF (Score:3, Interesting)

          by ScuzzMonkey (208981)
          In this case, the first remedy is provided by the potential violator...

          Yes, but it places the burden in the wrong place and so is not likely to be considered an adequate remedy by the courts. More properly, the violator should be seeking permission prior to re-distributing the content, rather than essentially saying to the copyright holder "Stop me before I copy again!"

          I'm not sure I think that caching sites should be subject to traditional copyright law--it has some nasty implications for anyone who cu
    • In other words, by your NOT including a robots.txt file, you are implicitly granting them permission to cache your content.

      Riiiiight. See you in court.

      As I've just posted elsewhere, it is quite feasible that a site owner could be damaged if caches maintain information after the original site has been changed or taken down. For example, if updated information is placed on the original, this leaves the "cached" versions out of date and misleading anyone who reads them thinking they're seeing a perfect c

      • As I've just posted elsewhere, it is quite feasible that a site owner could be damaged if caches maintain information after the original site has been changed or taken down.

        Damaged in what way? Aren't there archives of newspapers, journals, and magazines? And if time-sensitive information is present on a website, does the public have a right to see what was previously there? Websites can get away with a lot of instant censorship that way - you can check out this site [thememoryhole.org] for an archive designed in response

        • Damaged in what way? Aren't there archives of newspapers, journals, and magazines? And if time-sensitive information is present on a website, does the public have a right to see what was previously there?

          If I put up information on a web site, for free, as a volunteer, then the public has no rights whatsoever, either legally or morally. Why the hell should they? They didn't do anything to earn them.

          If you have a specific example related to this problem, I would love to hear it.

          I'll give you a coupl

          • If I put up information on a web site, for free, as a volunteer, then the public has no rights whatsoever, either legally or morally. Why the hell should they? They didn't do anything to earn them.

            The fact that the public has a right to anything you produce is the reason that the public domain exists. Copyright is instituted by governments to keep creative people in a position to keep creating - but when you're dead, the information should go somewhere to enhance the public good. If the human race is to

    • I think you're sort-of-right. The mere fact that a search engine gives you this facility to opt out does not create an implicit licence to use content by itself: there is an old principle of law that silence does not mean consent. If this were not the case, I could, e.g., write to you offering an opportunity to engage in a Nigerian money-laundering scam with the rider that "if you don't reply to this I will take it to mean you have accepted my offer" and then enforce that through contract law if you didn't

    • > In other words, by your NOT including a robots.txt file, you are implicitly granting them permission to cache your content.

      Bullshit.

      Your argument is like saying "If you leave your front door unlocked you are giving your neighbors implicit permission to loot your house."

Many website creators don't even know about archive.org, so how will they know to go read the document? You cannot assume permission from the absence of a robots.txt file.

      Now, if archive.org only copied your site if you DID
  • My 9/11 Archive (Score:5, Interesting)

    by limekiller4 (451497) on Monday June 30, 2003 @04:05PM (#6333487) Homepage
    On the day of 9/11, I began to think that maybe a lot of things would be online that would disappear on the next update, forever. We tend to think of 1880 newspaper clippings as being perishable, not online media, but the opposite is true. So all day on 9/11 I archived news sites and about two hundred blogs using "wget -p".

    Over the next week I archived some 4,600 blogs. They've kind of been sitting around waiting for me to weed through and organize. I've also been wgetting 30 or so large news sites' front page every 15 minutes or so on the hunch that I'll grab something emerging even if I'm AFK. Well ...what can I do with this data?

    The answer(s) to this question will definitely be of use to me. Thanks for asking it. Slash, thanks for posting it.
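As a sketch of the snapshot routine described above (a Python rough equivalent of the repeated wget runs; the URLs and paths are placeholders, and unlike "wget -p" this saves only the HTML, not images or stylesheets):

```python
# Hypothetical sketch of the "grab the front pages every 15 minutes" idea.
# Intended to be run from cron, e.g.:  */15 * * * * python3 snapshot.py
import os
import time
import urllib.request

# Placeholder site list; the .invalid TLD is reserved and will never resolve.
SITES = ["https://example-news-1.invalid/", "https://example-news-2.invalid/"]

def snapshot(sites, root="snapshots"):
    """Save each site's front page into a per-timestamp folder."""
    folder = os.path.join(root, time.strftime("%Y%m%d-%H%M"))
    os.makedirs(folder, exist_ok=True)
    for i, url in enumerate(sites):
        try:
            html = urllib.request.urlopen(url, timeout=30).read()
        except OSError:
            continue  # skip sites that are down rather than abort the run
        with open(os.path.join(folder, "site%d.html" % i), "wb") as f:
            f.write(html)
    return folder
```

One design note: failing sites are skipped rather than retried, on the theory that the next cron run is only 15 minutes away anyway.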
    • Here are a couple of ideas:

      1) Burn it onto DVD. But I don't know which format is likely to survive the longest!

      2) Hand it over in whatever form you can to your nearest major University and let them work out how to archive it. If they can find a way to do so reliably, it will be very valuable to their Faculty of History in a hundred years or so!

If you can do both, then great - you could distribute it to several Universities. Be sure to include a few European Unis that have already been around for at
Give the Smithsonian Institution a call. They are working on an extremely extensive media and 9/11 project. I went to their current offerings. Very impressive.
    • Print it out.

      Paper will last far, far, far longer than any electronic media. I can still read the masters thesis my dad wrote in the 70s, but the box of punch cards to go along with it is utterly useless.
      • CDs will last at least as long as the average paper archive, and will still be readable in 50 years. Presumably the equipment to do so won't be widespread, but it'll be there.

        • CDs will last at least as long as the average paper archive

Paper can easily last a hundred years (I have a number of books from the late 1800's & early 1900's); IIRC the typical MTTF (mean time to failure) for CDs is on the order of 20 years, and can be as low as 5 [fcw.com].

          -- MarkusQ

          • Paper can potentially last a long time (the US Constitution is still intact, for example). However, the average paper archive the size of a CD (which would physically be quite substantial) would require enough upkeep to make the cost of storing and maintaining it much greater than the cost of burning a new copy of the CD every ten or twenty years.
    • I made a mistake by not doing that and I was wondering if you could send me some of this info?

Email me with details, and use my public key!
I say try and set up a server for all this. You personally may not have the money, but I'm betting that your local university would be willing to help. If they aren't, you could get people to donate money to help you set up a server for all that stuff. I'd love to see some of it, since it's got to be an interesting cross-section of post-9/11 America and such. As others have said, the Smithsonian may be interested too, but giving everyone access to your archives would be a great public service. I know I'm
  • An idea (Score:5, Insightful)

    by revmoo (652952) <slashdot@mee p . ws> on Monday June 30, 2003 @04:06PM (#6333495) Homepage Journal
Here's a thought, a rather complicated one, but I think it just might do the trick...

    DON'T POST THINGS YOU DON'T WANT PEOPLE TO SEE ON A PUBLIC NETWORK.

    It's quite simple really.
  • It might be useful to note that the archive servers are located outside the US, and that they act on requests to have information and websites removed from their archive. (IIRC). I would state that the Archive serves a compelling public interest, both in the sense of free speech, and in the basic idea of keeping a history or record of the internet. The archive is a museum of sorts.

Google, on the other hand, is gathering data for its search engine, and, of necessity, must have what essentially amounts to a copy of each web page in its stores in order to provide this service. If one does not want to have their data in Google, they simply use robots.txt, and Google does not spider, cache, or store any data from that site if robots.txt is filled out. However, the site owner also denies themselves the ability to be listed, for 'free', in Google's search pages. This could be thought of as the cost of being listed.

    So I don't think either of those two situations have any problems defending themselves. An anonymizer could also be seen as providing a useful, protected service. An anonymizer is nothing more than a proxy service, and many ISPs use proxies now, not to mention caches and many other tools that store website information or meta information without notifying or requesting explicit permission to do so - they request implicit permission by sending a GET command.

    -Adam
    • Actually, the Internet Archive's main Wayback Machine [archive.org] servers are located in a co-location center in San Francisco, so it's not correct to say they're located outside the US. There is a mirror [bibalex.org] of the Archive's web content at the Library of Alexandria in Egypt, however - maybe that's what you're thinking of?

      In any case, the Archive's work with the Library Of Congress and, increasingly, national libraries who want to archive the Web content of their countries, proves that the establishment also thinks Web a
Please note that robots.txt affects whether Google crawls various parts of your website at all. To prevent your pages from being stored in the Google cache (even if they are searchable using Google), you need to specify the META tag <META NAME="GOOGLEBOT" CONTENT="NOARCHIVE"> on each and every one of your pages.
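As a sketch of the distinction (the meta tag is the one the comment quotes; the robots.txt rule is the standard Robots Exclusion Protocol form):

```
# robots.txt at the site root: blocks crawling entirely, so the page is
# neither listed in results nor cached.
User-agent: Googlebot
Disallow: /
```

```html
<!-- In each page's <head>: the page may still be crawled and listed,
     but the crawler is asked not to serve a cached copy of it. -->
<META NAME="GOOGLEBOT" CONTENT="NOARCHIVE">
```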
  • Email? (Score:2, Funny)

    by Anonymous Coward
    We do not accept email from lawyers as a legitimate form of communication.

    Email from lawyers is /dev/null'd.
    As for the waking up in the middle of the night...
Um, turn off the ringer? Stop sleeping in the NOC? Maybe invest in a second phone line for your business instead of using mom's POTS line.
  • Be Happy (Score:3, Insightful)

    by Apreche (239272) on Monday June 30, 2003 @04:19PM (#6333584) Homepage Journal
I'd be damn happy if someone made backups and mirrors of a site I made. People can visit my site without using bandwidth I pay for. Also, if disaster strikes I can get my site back because someone else was kind enough to back me up. The more the merrier.
  • Honestly... (Score:2, Informative)

    by lptport1 (640159)
This sounds sort of cynical to me, but it strikes me that the people who might be concerned about this don't understand the word "cache" and therefore never click on that link in the search results, and thus never discover that their site has been archived somewhere else. That, and Google has a rather chunky disclaimer-type-deal at the top--I'm sure it's in response to just that behaviour.
  • *copy* right (Score:5, Interesting)

    by ccady (569355) on Monday June 30, 2003 @05:06PM (#6333970) Journal

    (FWIW, IANAL) Web site content is copyrighted. Therefore, you have a right to make your own personal copy, and backup copies, but it is not legal to redistribute those copies without the site owner's permission. I cannot imagine that the Wayback machine or the Google cache is legal. They are blatantly disregarding the site owners' copyright.

    That said, I think the law should be changed or at least clarified, because it is patently (pun intended) obvious that those services are doing a vast social good, and should be encouraged.

    • Re:*copy* right (Score:3, Interesting)

      by stanwirth (621074)

      Web site content is copyrighted. Therefore, you have a right to make your own personal copy, and backup copies, but it is not legal to redistribute those copies without the site owner's permission. I cannot imagine that the Wayback machine or the Google cache is legal. They are blatantly disregarding the site owners' copyright.

      That would imply that every ISP running a public squid cache is breaking the law, and Akamai's entire business model is based on illegal content-smuggling. I really don't th

      • Akamai's entire business model is based on illegal content-smuggling
        Can you clarify that? Last time I checked, Akamai only distributes for those who pay them to do so, so I'm pretty sure they have permission.
      • Re:*copy* right (Score:3, Informative)

        by limekiller4 (451497)
        stanwirth writes:
        "...and Akamai's entire business model is based on illegal content-smuggling. I really don't think so!"

        Akamai caches sites of people who pay them to cache them, so that would be one hell of a lawsuit. I know this because I worked for them for a few years.
    • Re:*copy* right (Score:2, Informative)

      (FWIW, IANAL)

      Obviously [cornell.edu].

      • Re:*copy* right (Score:5, Informative)

        by SeanAhern (25764) on Monday June 30, 2003 @06:03PM (#6334426) Journal
        Mod parent up! This link to the US Code is very useful in this context.

        Heck, it's so useful that I'm going to quote some of it here:

TITLE 17 > CHAPTER 5 > Sec. 512.

        Sec. 512. - Limitations on liability relating to material online

        (a) Transitory Digital Network Communications. -

        A service provider shall not be liable for monetary relief, or, except as provided in subsection (j), for injunctive or other equitable relief, for infringement of copyright by reason of the provider's transmitting, routing, or providing connections for, material through a system or network controlled or operated by or for the service provider, or by reason of the intermediate and transient storage of that material in the course of such transmitting, routing, or providing connections, if -

        (1)

        the transmission of the material was initiated by or at the direction of a person other than the service provider;

        (2)

        the transmission, routing, provision of connections, or storage is carried out through an automatic technical process without selection of the material by the service provider;

        (3)

        the service provider does not select the recipients of the material except as an automatic response to the request of another person;

        (4)

        no copy of the material made by the service provider in the course of such intermediate or transient storage is maintained on the system or network in a manner ordinarily accessible to anyone other than anticipated recipients, and no such copy is maintained on the system or network in a manner ordinarily accessible to such anticipated recipients for a longer period than is reasonably necessary for the transmission, routing, or provision of connections; and

        (5)

        the material is transmitted through the system or network without modification of its content.

        (b) System Caching. -

        (1) Limitation on liability. -

        A service provider shall not be liable for monetary relief, or, except as provided in subsection (j), for injunctive or other equitable relief, for infringement of copyright by reason of the intermediate and temporary storage of material on a system or network controlled or operated by or for the service provider in a case in which -

        (A)

        the material is made available online by a person other than the service provider;

        (B)

        the material is transmitted from the person described in subparagraph (A) through the system or network to a person other than the person described in subparagraph (A) at the direction of that other person; and

        (C)

        the storage is carried out through an automatic technical process for the purpose of making the material available to users of the system or network who, after the material is transmitted as described in subparagraph (B), request access to the material from the person described in subparagraph (A),

        if the conditions set forth in paragraph (2) are met.

        (2) Conditions. -

        The conditions referred to in paragraph (1) are that -

        (A)

        the material described in paragraph (1) is transmitted to the subsequent users described in paragraph (1)(C) without modification to its content from the manner in which the material was transmitted from the person described in paragraph (1)(A);

        (B)

        the service provider described in paragraph (1) complies with rules concerning the refreshing, reloading, or other updating of the material when specified by the person making the material available online in accordance with a generally accepted industry standard data communications protocol for the system or network through which that person makes the material available, except that this subparagraph applies only if those rules are not used by the person described in paragraph (1)(A) to prevent or unreasonably impair the intermediate storage to which this subsection applies;

        • Interestingly, the law cited makes explicit provision for several of the concerns I expressed in earlier posts in this thread, notably the issues of keeping the data up-to-date and of the information provider getting information from those visiting their site directly.

          The normal Internet convention is that when I update my site, changes are immediately visible to everyone. (NB: browser caching is not equivalent to web caching here for several reasons.) Also, visitors to my site normally leave information

          • All in all, if that is the exemption I was referred to earlier in this thread, it looks as though the web caches are skating on very thin ice. If they did something like cloning material on a web site that was later removed in order to publish it in a book, I imagine they could wind up having a serious dispute with the publisher, or perhaps the author himself, either of whom might have a strong case that they suffered financially because of the actions of the caching site.

            I'm not sure you're talking abou

            • I was referring to the post where someone said there was an exemption under copyright law for web caches. I assumed the parts of the DMCA that were cited here were that exemption. In that case the validity of the original claim appears to be less clear than was suggested.

    • (FWIW, IANAL) Web site content is copyrighted. Therefore, you have a right to make your own personal copy, and backup copies, but it is not legal to redistribute those copies without the site owner's permission. I cannot imagine that the Wayback machine or the Google cache is legal. They are blatantly disregarding the site owners' copyright.

      This confuses fair use on purchased items that you own with what you are allowed to do with temporary copies for viewing. By the same logic, you could legally tak

I really think archiving is important, and it is one strength of the internet: having your data archived without paying for it, or even asking for it. I mean, there must be a lot of companies or organizations (I'm thinking of NASA, etc.) who probably have hundreds of terabytes of data and don't want to spend money or time making backups. Add that most archival media won't last more than a couple of decades, and you'll understand that archiving is great because everyone can back up a little something, and all those w
  • That's right, the DMCA contains provisions protecting companies like google from copyright infringement. Read it some time.
  • Any easy way for me to save pages I'm looking at? Perhaps a little button in Mozilla that automatically saves the page with graphics, and places everything neatly into a timestamped folder?
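A minimal sketch of the "save this page, graphics and all, into a timestamped folder" idea described above, using only the Python standard library (the function and class names, and the URL you'd pass in, are all hypothetical; this grabs only <img> requisites, nothing fancier):

```python
# Hypothetical page-saver sketch: fetch a page, write it to a timestamped
# folder, then download every image the page references.
import os
import time
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImageCollector(HTMLParser):
    """Collects the src attribute of every <img> tag in the document."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.images.append(value)

def save_page(url, root="saved_pages"):
    # One folder per snapshot, named by timestamp.
    folder = os.path.join(root, time.strftime("%Y%m%d-%H%M%S"))
    os.makedirs(folder, exist_ok=True)
    html = urllib.request.urlopen(url, timeout=30).read()
    with open(os.path.join(folder, "index.html"), "wb") as f:
        f.write(html)
    # Parse out image references and fetch each one next to the HTML.
    parser = ImageCollector()
    parser.feed(html.decode("utf-8", errors="replace"))
    for src in parser.images:
        target = os.path.join(folder, os.path.basename(src) or "image")
        try:
            urllib.request.urlretrieve(urljoin(url, src), target)
        except OSError:
            pass  # skip images that fail to download
    return folder
```

A browser-button version of this would of course hook the rendered DOM instead of re-fetching, but the folder-per-timestamp layout is the part the question asks about.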
  • legality (Score:3, Informative)

    by sir_cello (634395) on Monday June 30, 2003 @06:18PM (#6334547)
There are limited provisions in copyright law (at least in the UK, and, I expect, elsewhere in the world) for public libraries and archives. But these are indeed limited provisions and do not apply to a random commercial organisation that decides to provide such a service.

    Firstly, in the general case of search engines providing indexing of content, this is legal and there are legal cases to back it up (in the UK: antiquesportfolio) so long as the indexes are not copies.

Secondly, in the case of USENET groups and mailing lists: in the process of submitting a message to the mailing list or group, you have given an implicit license for the message to be reproduced within the nature of the particular technology at hand. This means that if at a later date you object to a message you wrote in the past, you don't really have the ability to retract it. In all cases, anyone deciding to use the material in another way (e.g. creating a commercial CDROM of USENET material at a marked-up price) would be violating your (and others') copyright. However, if they were providing that CDROM as a distribution service for USENET itself (e.g. "get your monthly USENET CDROM") then this is probably within the bounds of legality, as it is still transfer via the USENET system, and the cost is likely to reflect media/distribution costs rather than some specific aim to make a commercial product out of your material.

Finally, in the specific case of copies of websites: yes, this is a violation of copyright - but as far as I know this has not been tested in a court of law. The use of the Robots Exclusion Protocol and the NOARCHIVE, NOINDEX and NOFOLLOW elements allows a weasel argument that it is inherent in the WWW itself (as a new form of media / technology) that search engine indexing and archiving / caching is legal unless you specifically disallow it with this mechanism. It may also be the case that if this archiving / caching was carried out for profit, or at a price greater than fair for distribution/media, then a party is making an economic gain out of your material, and this suggests an inequitable violation of your economic rights.

    Another point to remember is that in WTO treaties that resulted in DMCA provisions, as enacted in the UK and EU, there are specific fair use allowances for intermediate copies of a copyright work as necessary for the telecommunications medium itself (this would seem to allow things like store-and-forward systems, and caching).
Some people are arguing that robots.txt is the determiner; however, remember the court case that a company *lost* because it copied the data of a competitor's site and set its prices lower.

This is equivalent to Safeway hiring a few clerks to go into a Kroger each day and record the prices of various items on their Wi-Fi-equipped phones/handhelds so that Safeway can undercut those prices.

What, you didn't read the fine print on the Kroger door that says no price comparisons or making up price lists? Or what...were they supposed to
