Silicon Graphics

XFS on a Web Server?

WWYD asks: "I am going to be setting up a fresh web server for the company I work for and am looking for some advice. It will be a Red Hat 7.3 / Apache / PHP standard everyday setup that will be hosting 50+ radio station sites. My question is about SGI's XFS file system. I've been running it at home and love the recovery time after the system dies. (I experiment a lot.) Would XFS be a good filesystem for a web server?"
This discussion has been archived. No new comments can be posted.


  • by Khazunga ( 176423 ) on Friday July 19, 2002 @08:09AM (#3915625)
    I have ReiserFS on the server farm for a free webmail provider I manage (150k accounts). It has proven very reliable, and fast enough that disks are not a bottleneck.

    On my laptop, after reading the scary warnings in the Gentoo install docs, I opted for XFS (not really, it was purely an excuse). I had a buggy Intel e100 Pro Ethernet driver, and on many of the crashes I lost all the data in some of the open files. Filesystem integrity was OK after every crash, but I never saw the same data-loss behaviour with ReiserFS.

    My advice is: benchmark the filesystems you consider stable, against your usage pattern. It's the only real data you need, apart from reliability info. However, if you're going to serve mostly static HTML, I'd say your bottleneck will be bandwidth or RAM, not your disks.

    I was not very impressed with XFS, but that's my opinion only. It's credited as a Very Good Filesystem (tm).

    • For a butt load of small files I think you'll find ReiserFS faster, but for total throughput of data XFS wins hands down. I've done some benchmarks on a spare disk where I formatted it with XFS and ran bonnie++, then formatted it with ReiserFS and ran bonnie++ again. With XFS I got just shy of 20 megs/sec on the bulk I/O tests, and with ReiserFS I got 17 megs/sec. For lots of small files ReiserFS was faster, but I don't remember by how much.

      If you will have larger-than-usual files for a web site then I suggest XFS. If you don't, then I'm less opinionated on the topic.
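      A comparison run like the one above might look like this (a sketch only: it assumes bonnie++ is installed and a spare partition you can safely wipe; the device name /dev/sdb1 is hypothetical):

      ```shell
      # WARNING: mkfs destroys everything on the target partition.
      mkfs.xfs -f /dev/sdb1
      mount /dev/sdb1 /mnt/test
      bonnie++ -d /mnt/test -u nobody   # run the suite as an unprivileged user
      umount /mnt/test

      mkreiserfs /dev/sdb1
      mount /dev/sdb1 /mnt/test
      bonnie++ -d /mnt/test -u nobody
      umount /mnt/test
      ```

      The point of reformatting the same physical disk for each run is that both filesystems see identical hardware, so the numbers are comparable.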

  • by Leknor ( 224175 ) on Friday July 19, 2002 @08:11AM (#3915638)

    I've been using XFS on all my systems since the beginning of this year and I've personally had zero problems with it. I have heard some complaints from other people, but when I asked them what they didn't like, the complaint was that there were null bytes in files after a crash. Unfortunately for them, this is the intended behavior [sgi.com] of XFS in some situations.

    I think the most common kernel with XFS is 2.4.18 which is known to have some swapping problems [zork.net].

    So as long as the RedHat 7.3 kernel doesn't have that swapping problem I'd say go for it. Be sure to install the xfsdump package if it isn't already and run the xfs_fsr [sgi.com] command weekly from a cron job to keep performance high.
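    A weekly xfs_fsr run like the one described above can live in cron; a minimal sketch (the path is typical for the SGI packages, but check where your distribution installs it):

    ```shell
    #!/bin/sh
    # /etc/cron.weekly/xfs_fsr.sh -- reorganize (defragment) XFS filesystems.
    # With no arguments, xfs_fsr walks the mounted XFS filesystems listed in /etc/mtab.
    /usr/sbin/xfs_fsr
    ```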

  • Would recommend it (Score:3, Interesting)

    by primetyme ( 22415 ) <djc_slash AT djc DOT f2o DOT org> on Friday July 19, 2002 @09:26AM (#3915990) Homepage
    My setup is five dual AMD webservers each with a RAID1 and RAID5 partition, 1GB of RAM, and XFS.

    The 1.1 release against 2.4.18 is really stable, and I haven't had any problems myself, or heard too many on the XFS mailing list in the past few months. If you have the means, I'd recommend patching against a vanilla kernel, although the ISO option can be nice too.

    XFS is well known for its good performance when handling large files, which in a streaming situation like yours is a good thing. 'Allocation groups' in XFS are also very good at handling parallel I/O, another good thing in a streaming environment. Of course, most of this can be found on the XFS site, http://oss.sgi.com/projects/xfs/

    As a happy user of XFS in a production environment for almost two years, I'd highly recommend it for your type of situation. Good luck! :)
    • if you buy a Sun Cobalt RaQ then the FS they use is XFS

      hey, if it's good for Cobalt/Sun then it's good for me

      regards

      john jones

      p.s. cobalt used to ship MIPS as well, but because they had so much hassle they switched to x86
  • Why not use ext3? (Score:4, Insightful)

    by g4dget ( 579145 ) on Friday July 19, 2002 @09:28AM (#3916008)
    Ext3 is the default on several distributions and it's tremendously easy to set up. You can convert ext2 partitions to ext3 on the fly by just creating a journal. If you want to turn off journalling for some reason, you can do that too, since an ext3 partition remains backwards compatible with ext2. And you actually get a choice between three different kinds of consistency.

    ReiserFS would be my second choice: it isn't compatible with anything, but it brings some nice, new functionality to the table.

    XFS is unlikely to be as well tested or tuned as either of these others on Linux.
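    For reference, the on-the-fly conversion mentioned above is a one-liner with tune2fs (the device name and mount point here are purely illustrative):

    ```shell
    # Add a journal to an existing ext2 partition; existing data is preserved.
    tune2fs -j /dev/hda2
    # Mount it as ext3 from now on; it can still be mounted as plain ext2 later.
    mount -t ext3 /dev/hda2 /home
    ```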

    • Re:Why not use ext3? (Score:3, Interesting)

      by j-turkey ( 187775 )
      I don't mean to contradict you -- and while ext3 is a big step ahead of ext2, I can't say that I'd recommend it for this setup.

      Ext3 is great for systems that are either using an existing ext2 filesystem and require journaling, or systems like Red Hat that have no way of installing with XFS or Reiser on the root partition. Its backward compatibility and ability to upgrade an existing ext2 partition are excellent.

      However, after running a sandbox with bad memory that would consistently crash (it took me a while to get around to diagnosing it, and even longer to actually fix it), I found that ext3's recovery was not as good as XFS or BSD's FFS with softdep turned on. (FFS with softupdates is my favorite solution thus far -- although it is not a true JFS, it does asynchronously write your metadata, is faster than ext2, and has the same effect as any JFS... but I digress -- BSD is not Linux.) After a hard crash and an unclean reboot with ext3, I would consistently lose data in open files, and my journal was at times (seemingly) corrupt, so I would have to boot into single-user mode and manually fsck the disk, which took forever.

      I do not have sufficient experience with ReiserFS, but I hear that it's excellent. If you're using Linux, and are starting from a clean slate, check out XFS from a freshly-patched kernel (i.e., not stock Red Hat). Again, I've heard great things about ReiserFS, but since I don't have experience with it, I can't recommend it.
      -Turkey
      • Re:Why not use ext3? (Score:5, Informative)

        by g4dget ( 579145 ) on Friday July 19, 2002 @10:43AM (#3916531)
        After a hard crash and an unclean reboot with ext3, I would consistently lose data on open files,

        As you should--that's the way it's supposed to work by default. If you want it to work differently, you need to configure it differently. You get three choices for what is recoverable [linuxplanet.com]. That's two more than most other journalling file systems give you.

        and my journal was at times (seemingly) corrupt, so I would have to boot into single user mode and manually fsck the disk, which took forever.

        It can happen, I suppose, but I haven't noticed it, and I crash machines with ext3 a lot.

        • Cool -- thanks for that article. I'll have to play around a little more with ext3 and try some of that stuff. I didn't mean to disparage it, like I said -- it's a great extension to ext2, I've just had more success with its alternatives.
          -Turkey
        • Hey, so ext3 does have 3 options.
          It has journal, ordered and writeback.
          Now, I don't want to lose data, so I'd use journal, which is also the slowest option.
          In the benchmarks posted here last week between ReiserFS and ext3, it was ext3 that was faster. But it was only benchmarked with ordered and writeback. So using journal will be slower, which would make it comparable with ReiserFS (with notail).
          Interesting.

          Now, I'm not interested in XFS, sorry, but if XFS has options like this, using the "don't-lose-data" option will make it "just as slow" then.
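          For reference, the three modes are selected with the data= mount option, e.g. in /etc/fstab (the device and mount point below are illustrative, not from the original post):

          ```shell
          # ext3 journalling modes, one per mount:
          /dev/hda3  /var/www  ext3  data=journal    0 2  # data journalled too: safest, slowest
          #/dev/hda3 /var/www  ext3  data=ordered    0 2  # the default: data flushed before metadata commits
          #/dev/hda3 /var/www  ext3  data=writeback  0 2  # metadata only: fastest, stale data possible after a crash
          ```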
          • Transactional guarantees for file contents are generally useless because programs don't expect them and don't have any means of expressing complex transactions involving file contents.

            For example, if the machine dies while AbiWord is in the middle of writing a file, transactional guarantees do you no good because you will still have a truncated AbiWord file. The fact that it is truncated in a consistent way doesn't really help you. Now, you might suggest that files should revert to their original content if the machine crashes before they are closed, but that has huge performance problems and leads to other inconsistencies.

            Transactioning file operations only makes sense if you give people an API to express their transaction boundaries. But those APIs already exist, and they work with any file system.

            In a nutshell, that's why it doesn't make much sense to journal file contents and why pretty much all journalling file systems only do structure.

            One thing that might be useful is if journalling file systems mark any file that was being written or read at the time of a crash as "potentially bad".
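            The existing API the parent refers to is basically fsync() plus an atomic rename: write to a temporary file, force it to disk, then rename over the original. A minimal sketch in Python (the function name atomic_save is my own, not a library call):

```python
import os
import tempfile

def atomic_save(path, data):
    """Write bytes to path so a reader (or a crash) sees either the old
    contents or the new contents, never a truncated mix."""
    dirname = os.path.dirname(path) or "."
    # Write the new contents to a temporary file in the same directory,
    # so the final rename stays within one filesystem.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push the data to disk before renaming
        os.rename(tmp, path)      # atomic replacement on POSIX filesystems
    except Exception:
        os.unlink(tmp)
        raise
```

            This works the same on ext2, ext3, ReiserFS, or XFS, which is the parent's point: the transaction boundary lives in the application, not in the filesystem journal.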

            • OK, so saving a file while crashing won't do much good.
              But on ext2 I had kernel crashes, and fscks where lots of files were thrown away, like /usr/lib/libgtk+.so or similar.
              Actually those files were only being read by a user process, and I didn't even have write access to them. Still they got thrown away.
              I'm not sure how ext3 handles this with the different options, but if it works that way I don't really like it.
              • But on ext2 I had kernel crashes, and fscks where lots of files were thrown away, like /usr/lib/libgtk+.so or similar. Actually those files were only being read by a user process, and I didn't even have write access to them.

                Reading a file causes its "last access" time to be written. That means that the inode is being written and may get lost under ext2.

                I'm not sure how ext3 handles this with the different options, but if it works that way I don't really like it.

                I believe that should not happen under ext3 because it tries to keep your file system structure (which includes inodes) consistent.
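                Incidentally, those access-time writes can be avoided altogether with the noatime mount option, e.g. in /etc/fstab (the entry below is illustrative):

                ```shell
                # noatime stops the kernel from writing an inode update on every read
                /dev/hda1  /  ext3  defaults,noatime  0 1
                ```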

      • I'm kind of surprised. Ext3 has the reputation for being the most "recoverable" of the three big stable Linux journalling FSes. Of course, performance with massive numbers of files is not up to Reiser...
        • Keep in mind that my opinion on this is based on pretty subjective observations. I'd be interested in seeing some proper test results to confirm those observations.
          -Turkey
  • by Bravo_Two_Zero ( 516479 ) on Friday July 19, 2002 @09:30AM (#3916017)
    I use XFS on an intranet web server and Samba server with positive results. It's an older kernel (I needed the ACL kernel patches so the NT domain ACLs would work with Samba), and I haven't had to recover from any crashes. But performance-wise, I have no complaints. Granted, I have a fraction of the traffic that your site would have.

    I also followed Daniel Robbins' advice on XFS (which you've no doubt read already, but just in case: http://www-106.ibm.com/developerworks/linux/library/l-fs10.html)

  • I tried XFS for a while, but I found it to be a bit too un-integrated for me. I like to put together my own kernels (grabbing various things from -ac and random other patches I like), and I've found that XFS (either at the source-patch level or at the runtime-stability level) is always a bit incompatible with other intrusive patches. It was just too much of a pain to keep a solid XFS patchset in my kernel.

    OTOH, ext3 works quite well and is well integrated into the mainline kernel. If you're going for journalling to avoid fscks, and for an overall saner and more stable filesystem, I would go with ext3.
  • by elbles ( 516589 )
    Many people here have commented suggesting that XFS may not be as stable as Ext3 or ReiserFS, but in fact it is just as stable, if not more so. Before SGI does a release, an unbelievable amount of testing is done to ensure that the code works without failure, and in my experience, it does. I personally run it on my servers, with one at home that handles all of my large files (MP3s, Photoshop BMPs, lots of other stuff), and it's noticeably faster than Reiser, with no loss in stability. I'd recommend it, based upon experience, rather than listen to some of those people out there who seem to "deface" (for lack of a better word in my mind) XFS without even using it, save for the 1st post I saw. Hope that helps...
  • I've been using Linux for several years on small x86 servers (no more than 20 clients, mixed Win/Mac), but I never had a good opportunity to test any JFS, only ext2 (it's rock solid, of course).

    Some months ago, I installed Yellow Dog Linux on my iMac. After a couple of weeks of trying things out, I converted the system to ext3. No trouble at all, but I didn't feel any difference. Replacing KDE2 with 3 clearly made it far slower, and the hard disk was taking heavy punishment.

    Later, I decided to erase it and replace it with Gentoo Linux (it's nicer for programming and trying different things). As a first choice, I used ext3. And I kept crashing during the long compile sessions! Again and again it would crash, around the same point. I'm not experienced at reading kernel panic messages, but it certainly felt like a FS problem. I tried ext2; it was a little different, but still unstable.

    Clean it, and begin again, this time using XFS. Solid. Not a single crash. Ever. Months running, compiling, streaming, getting low on RAM, never a single hiccup.

    And KDE3 with Liquid feels far smoother than ever. I don't know if it's because of XFS vs. ext3 or Gentoo vs. Yellow Dog, but I'm really, really happy with it!

    It seems to me that ext2/3 need a little more tuning for heavy processes on non-Intel systems. No doubt it's 'unbreakable' on x86, but it's still not tested enough cross-platform.

    XFS, on the other hand, was developed precisely for heavy-duty servers, and it clearly delivers.
  • by corky6921 ( 240602 ) on Friday July 19, 2002 @02:47PM (#3918324) Homepage
    Sun Cobalt is using XFS on their new RaQ550 web servers.

    More information. [sun.com]

    I've seen the demo and it looks cool. Not sure why they chose XFS, but I'm sure you can ask the developers. [cobalt.com]
  • I ran XFS for a few months and never had any problems with it. It's a very solid filesystem and is also backed by a very responsible development team. The only reason I don't use it anymore is that it's too much work to merge other patches like -ac or rmap. Let's hope it goes into 2.5 soon... :(
  • (I experiment a lot)

    Are you going to "experiment a lot" on your web server too? I can't imagine a web server that is expected to crash often (outside of the Windows realm)... Stick to a standard fs...

  • I've run XFS on my colo server (a highly patched RH 7.1) for over a year. Ever since the second week I've had zero problems with it. (The first week, there was a nasty bug with the early 2.4 kernel and XFS + my VIA chipset... caused a bit of data loss, but after upgrading the kernel I haven't had trouble since.)

    However, if I was just setting up a new colo server today, I think I'd stick with ext3, if for no other reason than peace of mind. If something by chance *did* go wrong with my filesystem, I'd pay my colo provider out the arse to fix it.

    Plus it would be nice to stick with Red Hat kernels -- they do a LOT of quality control on those things. It's now somewhat unlikely that XFS will appear in 2.6 (but I sure hope it does). I just would rather not keep patching kernels for ages.

    But nearly all the anecdotal evidence indicates that it is as stable and robust as anything and works VERY well. I'd be more inclined to use XFS on a new server if it were located in the same state as me. :) But using it in a colo is a little nerve-wracking.
  • I've used XFS for a year and a half, and also ext3 (and I have tested ReiserFS), and I find all three good journalled filesystems.
    I found ext3 a bit slower, but it depends mainly on configuration (best to use ordered mode).

    On performance, I don't find any real difference (need some giant database test? ;)

    My only problem is that all these journalled filesystems (including JFS) are only available on Linux, not on other free OSes (like *BSD or Darwin), and we have to patch the kernel (except for ext3 and ReiserFS, but they are all in 2.5, so...).

    • My only problem is that all these journalled filesystems (including JFS) are only available on Linux, not on other free OSes (like *BSD or Darwin)....

      Forgive me if I'm wrong, but isn't SoftUpdates FreeBSD's journaling mechanism? I understand that these major journaling filesystems aren't available on FreeBSD, but that certainly doesn't mean that all journaled filesystems aren't.
      • Yes, SoftUpdates is a "kind" of journaling FS, but it is not the same (I've read part of the softdep doc and I use it on OpenBSD): it's more about grouping actions to increase speed than journaling, and there have even been big discussions about softdep vs. journaling.

        What's more, when there is a crash, at least on OpenBSD, there is almost always a fsck...
      • SoftUpdates is not a journaling system. It is just some speed and stability improvements on FFS. You still have to fsck after a crash, but the data integrity softupdates provides is good. The problem (as I understand it) with the Linux journaling filesystems (ext3, xfs, reiserfs, jfs) is that the code is GPL, which cannot be included in BSD kernels (their code is all BSD-style licensed... duh). From what I've seen and heard, softupdates is very nice, but that doesn't stop people from wanting genuine journaling filesystems, for the allure or for purely technical reasons (although I believe the technical gap between journaling filesystems and softupdates is growing smaller all the time). I certainly must say that the prospect of running SGI's legendary XFS on BSD is very attractive (I've heard that the plug can be pulled in the middle of a cp, then the machine rebooted, and it will continue without even noticing).

        Technically, BSD doesn't need a journaling fs because of softupdates, which is a great suite of improvements on FFS.
