Linux LVM - Is It Ready for Prime Time? 62

Deagol asks: "I'd like to replace our aging IBM server with a commodity solution (Linux, 3Ware cards, and lots of IDE drives). The main reason is price: the cost of 5 36GB SCSI disks for this sucker -- one of which died today -- could pay for the replacement server with 2TB of usable space after RAID-5. Being a huge fan of AIX's LVM, I've recently been playing with the Linux version of LVM. It's got all the right features (and even the ability to shrink logical volumes, a feature which AIX 4.3.3 doesn't have!), though the commands aren't as polished as the AIX counterparts. The big question for me is: will it stand up and be stable under heavy load, like the IBM does? Is anyone running Linux LVM on a 1TB+, 24/7 production machine?"
This discussion has been archived. No new comments can be posted.

  • LVM on AIX rocks. (Score:4, Informative)

    by Fished ( 574624 ) * <.moc.liamg. .ta. .yrogihpma.> on Thursday June 12, 2003 @02:50PM (#6184288)
    For those who haven't used it, the LVM support on AIX is probably the best in the industry. It allows you to do just about anything, in a very clean, structured (if somewhat hard to figure out at first) way. I personally have not used it on Linux, so can't comment on the poster's main question, but the kernel-raid stuff is not up to par with IBM's LVM.
  • by Glonoinha ( 587375 ) on Thursday June 12, 2003 @02:56PM (#6184342) Journal
    -The main reason is price (the cost of 5 36GB SCSI disks for this sucker -- one of which died today --

    Odds are the drives are OEM versions from a very popular drive vendor. Perhaps pop one out, figure out what kind of drive it is, buy a new one that is an exact match (or better yet, buy five new ones of exactly the same type), replace them yourself for +/- $2,000 total, and restore from your backup. Maybe this is a little oversimplified, but if it is an RS/6000 box, odds are it uses regular ol' SCSI drives.
    • by crow ( 16139 ) on Thursday June 12, 2003 @04:40PM (#6185254) Homepage Journal
      Don't be surprised if that doesn't work. I'm not sure about IBM, but in some cases the storage vendor installs custom firmware on the drives. If you install a different drive in the system, it might not behave correctly.
      • I'm not sure about IBM, but in some cases the storage vendor installs custom firmware on the drives. If you install a different drive in the system, it might not behave correctly.

        As far as I can tell, Sun doesn't do this, at least. I installed a Seagate U320 SCSI drive into a Sun Ultra workstation--it works beautifully (only at 40MB/sec, now). Most of the Sun-branded drives are really just certified model numbers of Seagate, IBM, Fujitsu, or, historically, Conner drives.
        • EMC changes the firmware in their drives which look a lot like Seagate ones. But honestly, would you even think about putting an off-the-shelf-drive into an expensive EMC box?

          Sun has firmware updates for their disks in T3 storage arrays. I would not expect any problem, but I don't want to find out that the T3 completely crashed after a firmware update of the drives, just because one of the drives had a small bug in the firmware which prevented it from being updated correctly.

          Quite often, while it's techni

    • (cough) Have you ever priced an IBM SSA drive? SCSI it's not. We're running many, many terabytes of Shark storage here, most of which is attached to either the mainframes or the SP complexes, and the cost of a 36GB or 72GB SSA drive is STAGGERING.
    • The drives are encased in special mounting hardware (providing status lights, etc.), which are then mounted into 2104-DL1 enclosures. We have 5 of these enclosures, fully populated (10 drives each), and the max size HD they support is 36GB. These HDs list at about $1800 from IBM, and I can get 3rd party refurb units for about half that.

      In any case, I wouldn't risk replacing the actual disk within the mounting hardware, even if it were fairly simple. The attached hardware is just too expensive to risk f

    • A case of ID ten T?

      Using disks from the same batch, from the same manufacturer (which is what comes to my mind when you say an "exact match") for a RAID setup, pretty much goes against every decent thing I've read about and learned through the years.
      The general reason being that once a disk in a batch goes down, so shall the other disks in the same batch go down pretty soon too, thus increasing the risk of having more than one disk down (thus potentially losing data).

      Use disks from different batches. Hell
      • I meant model number, not same batch. My theory was that by going with a new drive of the same model you have reduced the amount of reconfiguration necessary during the swap out ...

        However, if the drives he has are 3+ years old and have been running for three years straight, only now starting to see failures - that sounds like a pretty good batch to me.

        Unless they are on the far-side of the bathtub shaped curve (drive mortality is very high at the very beginning, and at the end of the expected life cycle
  • LVM and Redhat 7.3 (Score:5, Informative)

    by cvande ( 150483 ) <craig.vandeputte ... m ['il.' in gap]> on Thursday June 12, 2003 @03:12PM (#6184468)
    I built LVM into a RH 7.3 kernel and used it for a DB2 database box. Worked great with the Dell 2650 and a 1TB PowerVault 220S (RAID-5 ...aaarrrg...) with the PERC3 RAID controller. It survived our rather aggressive load testing cycle, ~7 days constant load w/ a variety of different tests (broken queries, massive table joins, etc. -- stuff you WOULDN'T want to see in production) and passed with flying colors. It remained in production for a little under a year until we migrated to an Oracle on Linux solution w/o LVM (RedHat AS2.1). I'd do it again in a minute if the implementation called for it.....

    just my 2 cents.......
    • That is the same setup I have on my workbench now, except the drives are 146GB instead of 73GB. It will be a nice upgrade from the old multipacks the department is using now.

      About a year and a half ago we set up a 6450 with Fibre Channel and LVM. It is still running well.

    • While load testing, did you use any of the dynamic LVM features? It's good to know that it'll stand up during a heavy load. But what about growing a filesystem under load? What filesystem did you test with? Reiser or ext3?
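
The online-growth question above boils down to a short command sequence. This is only a sketch, assuming a hypothetical volume group vg0 and logical volume home; it needs root and a real block device, so treat it as illustration rather than a tested recipe:

```shell
# Extend the logical volume by 10GB (names are hypothetical):
lvextend -L +10G /dev/vg0/home

# Grow the ReiserFS filesystem into the new space; resize_reiserfs
# can do this while the filesystem is still mounted:
resize_reiserfs /dev/vg0/home
```

Whether this is safe *under heavy write load* is exactly the open question in the thread; the commands themselves don't require an unmount for ReiserFS.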
  • I've used LVM on my home server for a year or two now without any issues whatsoever. I've been using it about as long on my development server at work as well...same story. Rock solid, no problems.

    YMMV, however, as neither one of these boxes is heavily loaded and I've never required the functionality on any of my production servers..
    • --I'm using Knoppix/Debian's LVM on both IDE and SCSI drives on both of my home servers (different logical volumes tho.)

      --As far as LVM being "ready for prime time" - well, it works - altho I haven't had to deal with a drive failure yet. It doesn't have a decent free frontend (ncurses) interface unless you use something like SuSE's LiveCD (yast) to set things up.

      --BTW, I recommend using Reiserfs over LVM. Just my experience.
  • Yeah, sort of. (Score:3, Interesting)

    by Lukey Boy ( 16717 ) on Thursday June 12, 2003 @03:13PM (#6184476) Homepage
    I'm running a huge ReiserFS slab of space over an LVM IDE cluster using cheap drives and Promise cards, and it's perfect - totally stable and the box has lots of traffic with uptime approaching 6 months.

    But it's looking as if the LVM code isn't actually included in the 2.5/2.6 series of kernels (I could be wrong). If you plan on upgrading to this eventually, stay away from LVM. If you don't care, just dive in.

    • That's another thing I wanted to ask. Which file system do folks recommend? Ext3 seems solid ( as it's based on the tried-and-true ext2), Reiserfs, on the other hand, is a fairly new player (though it looks pretty slick on paper).

      The server will be serving home filesystems, so there will be no "norm" for usage patterns. I like that Reiser lets you grow filesystems on the fly. But is it as solid as ext2/ext3?

      • I quite like XFS. Nice and fast. SGI provides patches against all major kernel versions as well as official releases that are heavily tested. In the official releases, there are kernels derived from both the stock Linus kernel as well as RedHat's.

        There are four gotchas with XFS, though:

        First, while you can grow an XFS filesystem, you can't shrink it. You have to backup and mkfs.

        Second, if you lose power between writing metadata and data, you can end up with a partially empty file. Since I know you'll hav
        • I've had a bad experience with LVM+XFS. The first drive on my LVM volume died and took the rest of them with it. Not that it has anyting in particular to do with XFS though, but I couldn't find a way to copy the meta data files so I had redundant versions of those. It would have been nice if I could have restored the data on the other drives in the volume.

          Well, you live and you learn, I guess RAID really is the way to go for storage. (Even at home.)
      • I've had some problems with ext2/3 on RAID systems and under load. The ext* filesystems seem to like buffering up as much as they can, and then suddenly deciding to flush everything to disk, blocking I/O processes across the whole system while they do. Reiser has been, and continues to be (for me at least), much better behaved about scheduling its writes so that it keeps the I/O load on the disks matched to the I/O load from user space processes, instead of batching it up and then having to wait while it fl
        • Case in point: I was doing some high bitrate multimedia capture a while back. In my system at that time, I had both 10krpm u2w scsi disks, and some 5400rpm ide disks. In order to not get frame drops due to long flushes to disk with ext2/3, I had to jump through all sorts of hoops and basically have a process calling sync() every couple seconds, even when writing to the scsi disk.

          I had similar problems with IDE disks and ext2/3. You need to alter the bdflush settings in /proc/sys/vm.

          echo "5 150 0 50
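
The echo line above is cut off, so rather than guess the poster's values, here is just the general mechanism on 2.4 kernels, with placeholder values that are illustrative only (writing requires root):

```shell
# Show the current bdflush parameters (2.4.x kernels expose a tuple
# of nine values, the first being the dirty-buffer percentage that
# triggers a flush):
cat /proc/sys/vm/bdflush

# Writing a tuple back changes how aggressively dirty buffers are
# flushed. Values are workload-dependent; consult your kernel's
# Documentation/sysctl/vm.txt before changing them:
# echo "<nine space-separated values>" > /proc/sys/vm/bdflush
```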

  • Well, according to SCO, the LVM support in Linux was added by IBM, so it's probably pretty good.

    (/me ducks)
    • Nope. LVM was done by Sistina [], I've been running it on my lightly loaded home server for ~3 years with no major problems. Minor problem with my mp3 collection which is on Reiserfs over LVM which has been extended a couple of times (as you do). Some of the files are corrupted, starts off as Thin Lizzy then jumps to a Cure track and then Dire Straits. I don't know if it was the resize_reiserfs that did it or the lvextend (or my PC objecting to Dire Straits). I have extended another logical volume which is ext
  • If you like the aix system, and want something made by IBM that's even to a certain degree "compatible" with the system AIX uses, then have a look at evms [].
  • by linkages ( 131028 ) on Thursday June 12, 2003 @04:04PM (#6184898) Homepage
    I have been using LVM on my workstation and just about every machine that I have built in the past 2 years. I also use AIX every day and agree that its LVM is very robust, but I prefer the simplicity of the Linux LVM. I recently built a box with software RAID 0 and made the whole thing one physical volume and have had zero problems. I have also used the LVM to migrate data from one disk to another without any problems. Oh, and did I mention that the LVM HOWTO is really, really easy to understand even if you have no previous LVM experience?
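
The disk-to-disk migration mentioned above is normally done with pvmove. A rough sketch, with hypothetical device names and assuming an existing volume group vg0 (root and real disks required):

```shell
pvcreate /dev/sdc1           # initialize the new disk as an LVM physical volume
vgextend vg0 /dev/sdc1       # add it to the existing volume group
pvmove /dev/sdb1 /dev/sdc1   # migrate all extents off the old disk, online
vgreduce vg0 /dev/sdb1       # finally remove the old disk from the group
```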
  • by devphil ( 51341 ) on Thursday June 12, 2003 @04:31PM (#6185178) Homepage

    is that they keep replacing it and reimplementing it in the kernel.

    The one linked to in the article (Sistina's) is in 2.4. I'm using it at home, and I like it. We're considering using it at work, but I hear rumours that 2.6 will contain Something Completely Different (Again), which annoys me.

  • I'm sorry. What is LVM?
    • Logical Volume Manager: it's a layer between the filesystem and the disk (or RAID device), and can be used to expand or contract the size of a volume (add new disks, etc.)

      Also able to do snapshots, etc.
    • It's a virtual block device that, in turn, calls other block devices, sorta like how software RAID works. The extra level of indirection between your filesystem and block devices can be used for assorted Neat Tricks.
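
Concretely, the usual bring-up looks like this. Partition names and sizes here are hypothetical, and the commands need root and real disks; it's a sketch of the standard PV → VG → LV layering, not a recipe for any particular box:

```shell
pvcreate /dev/hda3 /dev/hdb1      # mark partitions as LVM physical volumes
vgcreate vg0 /dev/hda3 /dev/hdb1  # pool them into a volume group
lvcreate -L 50G -n data vg0       # carve out a 50GB logical volume
mkfs -t reiserfs /dev/vg0/data    # then put a filesystem on it as usual
```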
  • We use EVMS on a 2.6TB file server and it works fine.
  • by Spacelord ( 27899 ) on Thursday June 12, 2003 @07:06PM (#6186613)
    As far as stability and reliability goes, I haven't experienced any problems yet.

    There are some features though that are still missing from Linux LVM compared to AIX LVM. One of them is mirroring at the logical volume level (no mklvcopy command). You can sort of get around this by creating a software RAID device, and then making it an LVM physical volume. Or even better: just go for a hardware RAID(1/5) solution.
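
The workaround described above, sketched with hypothetical device names (I'm using mdadm here, though raidtools was the other common choice in this era; root and real disks required):

```shell
# Build a RAID-1 md device from two disks...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# ...then hand the mirror to LVM as a single physical volume,
# so every logical volume carved from it is implicitly mirrored:
pvcreate /dev/md0
vgcreate vg_mirrored /dev/md0
lvcreate -L 20G -n data vg_mirrored
```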

    Another thing to keep in mind is that, unlike in AIX, you can't put *all* filesystems in LVM. Either the root filesystem or /boot has to be a non-LVM partition. I'd recommend making a root filesystem of a few hundred megs, outside of LVM. It's less of a hassle than making a separate /boot filesystem outside of LVM.
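
One possible layout along those lines (sizes and device names illustrative only): a small non-LVM root partition holding the kernel and bootloader, swap, and the rest of the disk handed to LVM.

```shell
# /dev/sda1  256MB  ext3   /      (outside LVM; kernel + bootloader live here)
# /dev/sda2  1GB    swap
# /dev/sda3  rest   LVM physical volume
pvcreate /dev/sda3
vgcreate vg0 /dev/sda3
```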

    Also if you want to be able to resize live filesystems, you have to be careful about your choice of filesystem. Reiserfs for example supports online resizing, while ext3 doesn't (yet?)

    All things considered Linux LVM is a great addition to Linux, but it's not as nicely integrated yet as AIX's LVM.

    One final thing to note is that the Linux LVM commands seem to be modeled after HP-UX LVM rather than AIX LVM. (e.g. lvcreate instead of mklv, vgdisplay instead of lsvg ... etc.) but if you're used to AIX LVM, you'll be up to speed with this in no time.
  • The mail spool for a 15k-user ISP in southwestern Ontario is running on Slack9 + LVM (Reiserfs). It exports the spool via NFS and the edge servers (SMTP+IMAP4+POP3, virus+spamscan) mount the spool directly over ipsec. No issues. I can grow the filesystem, take snapshots and it all just works. The PostgreSQL database is also on an LVM volume, but I haven't had to do much with it related to LVM yet, as pg_dump works live.

    I have a number of other mail spools for businesses around the area (probably a hal
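
The snapshot workflow referred to above would look roughly like this. Names (vg0, spool, mount points) are hypothetical, and it needs root; the snapshot is copy-on-write, so the 2G size only has to cover writes that happen during the backup:

```shell
lvcreate -s -L 2G -n spool_snap /dev/vg0/spool   # take the snapshot
mount -o ro /dev/vg0/spool_snap /mnt/snap        # mount it read-only
tar czf /backup/spool-$(date +%Y%m%d).tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/spool_snap                  # drop the snapshot
```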

  • Hot Swapping? (Score:3, Insightful)

    by oh ( 68589 ) on Thursday June 12, 2003 @09:53PM (#6187474) Journal
    I'm not going to buy into the IBM LVM vs Linux Software RAID debate, but no one has mentioned something that's just as important. One of the big advantages of a good SCSI enclosure is the ability to pull and pop drives in and out without powering down.

    With good hardware, you can walk up to a running machine and replace the failed drive then and there. Hopefully your 144Gb raid-5 array has been fully rebuilt by the time you come back from lunch. If you don't have hot-swap hardware, you have to schedule downtime, come back later that night, shut it down, pull the drive and pop in a new one. And hope everything powers up OK, cos if the power supply stuffs up at that time of night and you don't have a (good) support contract you are going to have a lot of fun getting everything going again before the rest of the office shows up for work.

    I know you can get hot-swap IDE hardware these days, but I've never used it. I suspect hot-swap IDE drives are not that much cheaper than SCSI, but I could be wrong.

    One last little bit of advice: try including a hot spare in your array. It's nice to come in in the morning and read an email saying that a hard drive failed last night, and the array was automatically rebuilt using the spare before start of business. If you are going to go with non hot-swap hardware, I'd say this is a must. Running RAID-5 in degraded mode is no fun.
    • I think hot swap has more to do with the RAID controller than the drives themselves. Of course, the physics of it have something to do with the enclosure as well, but push comes to shove, one can open up a running machine and unscrew a hard drive; it's just not as much fun as releasing a latch and pulling on a lever.

      I have one of the (lower end) 3ware ide-raid cards, and they claim to support hot swap. You have to use their admin tool to tell the controller to deactivate the drive, but supposedly you can
      • I think hot swap has more to do with the RAID controller than the drives themselves. Of course, the physics of it have something to do with the enclosure as well, but push comes to shove, one can open up a running machine and unscrew a hard drive; it's just not as much fun as releasing a latch and pulling on a lever.

        Sure, opening the case and using a screwdriver isn't as much fun, but it's also risky. With a proper enclosure there is much less chance you are going to stuff something up. That's why server

  • One thing you can do under AIX/IRIX/etc. that you can't do under Linux is grow a filesystem on the fly. You have to umount first, which is rather silly in a production environment. Of course, if you're running XFS on your Linux box [], this wouldn't be an issue.
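
With XFS the grow really is an online operation. A hedged sketch with hypothetical names (vg0, export mounted at /export; needs root):

```shell
# Extend the logical volume first:
lvextend -L +50G /dev/vg0/export

# xfs_growfs operates on the *mounted* mount point, not the device,
# and grows the filesystem while it is in use:
xfs_growfs /export
```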
  • I've got 8 servers managing about 20TB total with all of it being managed by LVM.

    The 30 RH desktops I've installed have /boot with 100M and the rest of the drive is LVM.
  • On a Compaq ProLiant server with a 500GB RAID 3 configuration. The firmware did all the mirroring. The performance was not bad at all; in fact I didn't notice any downsides. One more thing I tested is adding physical volumes to the array, and as posted above, it was transparent to the users (they didn't even feel the difference), which is good.
  • Well I was using LVM for about an hour or so, but I decided to go for RAID-0 instead ;)

    Mmmm... tasty striping.
