Hardware IDE/SCSI RAID for Windows 2000 Servers? 70

reezle asks: "Mostly I was wondering what other sysadmins have been doing for mirroring or RAID-5 in their Win2000 servers. I really don't like the M$ 'Enhanced' disks that allow for RAID, since I've actually lost a volume during the conversion from 'basic' to 'enhanced', and I also worry that I will get locked out of the volume if the OS goes belly-up on me. There is also the idea that software RAID is much slower, but it's cheap, and so are some of my customers. What kinds of solutions are being used successfully? What kind of recovery nightmares have people run into? Is IDE RAID ready for the real-world server market yet?"
This discussion has been archived. No new comments can be posted.

  • Adaptec! (Score:2, Insightful)

    by qurob ( 543434 )


    Seriously, an Ask Slashdot question for this??

    Tom's Hardware IDE RAID review [tomshardware.com]

    IDE RAID without hardware [tomshardware.com]

    Exercise left to the reader: finding SCSI RAID reviews
  • Veritas (Score:4, Informative)

    by Jeremiah Cornelius ( 137 ) on Monday July 22, 2002 @04:03PM (#3932664) Homepage Journal
    Veritas Foundation Suite is available for Windows 2K. I know it is expensive.

    What would it cost for your company to reconstruct the lost data?

    Risk Analysis argument over!

    Seriously, the Win2K volume management and "enhanced disk format" you worry about are a subset of the Veritas VM, licensed by MS. It's crippled without many of the data-recovery features, and doesn't include the file-system enhancements.
    When you convert a Windows volume from "Basic", you are essentially performing the same operation as "Encapsulating" a native volume with Veritas on Solaris or HP/UX.

    • Thank you..

      I, for one, am happy with the MS Enhanced disks. And the author shouldn't worry, as he can import disks from a different machine with a simple click of a mouse.
  • Well, I've never had the problem you had when going from Basic to Dynamic, but when I was doing software mirroring, it was on an HP LPr rackserver with a pretty simple 2-disk setup. All done through Windows; I never noticed any before/after performance loss. In my testing, all I had to do was move the original disk out of the slot, put the mirror copy in its place, and it booted right up. Then if I broke the mirror it was ready to start all over again.

    Right now (moved on to a different - less technical - company), I have a Dell PowerEdge server that I inherited using a PERC 2/Si RAID 5 config. Windows 2000 (just upgraded from NT4) is none the wiser as to how it's set up, and lists the disk as Basic. Downside is, the Dell RAID setup is pricey, and if your customers are cheap, well...

    If worse comes to worst, I'm sure booting off of the Win2k CD and going into the Recovery Console [microsoft.com] will save you if the drive does act flaky, but I dunno the specifics on that.

  • Seriously - don't bother with anything else if you're gonna do IDE RAID. Drivers for lots of OSes besides Windows, too (including Linux). I just wish they did OS X!
    • I just want to second this.

      3Ware [3ware.com] makes several RAID cards: 2-way, 4-way, and 8-way. I have installed two 2-ways and one 4-way in the recent servers I have built.

      They are very easy to configure and look just like SCSI hard drives on Linux. Yeah, I know this is a Windows question.

      The 4- and 8-way cards are also standard-sized PCI cards, unlike ATI. Watch out for cards that require a server case because they are extra long.

      They are also cheaper than most other cards.
    • Re:IDE RAID: 3ware (Score:3, Informative)

      by drdink ( 77 )
      3ware is definitely a nice product. Their support people are also easy to correspond with and will answer any questions you have. You can find 3Ware [3ware.com] products at NewEgg [newegg.com]. You can do hot-swappable IDE with them too, using these [circotech.com].

      There are Windows drivers, Linux drivers, and the FreeBSD kernel has a driver (twe) for it as well. You can also find the management software for FreeBSD, though not through 3ware.

    • I totally agree, 3ware is great. I have a Debian/woody box with the 8-port controller and WD 120GB drives and it's worked flawlessly for 6 months now. I think 3ware has a new 8-port version with a 2MB buffer now; not much compared to the SCSI controllers out there, but great for cheap mass storage and local mirrors.
  • Keep in mind that unlike the SCSI RAID you can get from Dell and the like, IDE RAID is NOT HOT-SWAPPABLE.

    This means that if a drive fails, and you want to replace it, you must take down the system if you use IDE RAID. With SCSI RAID, and the appropriate controller, you can literally pull the dead drive and replace it with a new one, and the machine doesn't even skip a beat (well, it slows down a little bit as it rebuilds the array, but...).
    • Re:No Hot Swap (Score:2, Interesting)

      by questionlp ( 58365 )
      No ATA RAID hot-swap? Are you really sure about that? I did find information stating that ATA RAID is capable of hot-swapping... it just needs a decent ATA RAID controller (you can knock the low-end, aka cheap, Promise and Highpoint controllers off the list) and a drive cage that supports hot swapping.

      The following pages provide information about ATA RAID and hot swapping:

      • Adaptec 2400A - FAQ [adaptec.com]

        It supports online capacity expansion, hot-spare and hot-swap (chassis required), and all major operating systems.

      • 3Ware 7500-series controller - Datasheet [3ware.com]
      • 3Ware ATA Drive Cage - Product Specs [3ware.com]
      • Promise SuperTrack SX6000 - Datasheet [promise.com]
      There are other products out there that do support ATA RAID and do provide hot swap facilities and capabilities.
    • BEFORE you generalize this guy's statement to read that you can hot-swap any SCSI drive, please know this: hot-swap is an electrical matter more than anything. Both ATA and SCSI support hot-swap, with the proper equipment. You will need to use 80-pin-connector SCA SCSI drives (which combine power and data into one connector). These are intended to be mounted in hot-swap chassis and plugged into a backplane, although adapters are also available to plug them into normal power and data cables.

      There are also ATA drive chassis available that have some onboard electronics that allow the drive, mounted in the chassis, to be hot-swapped into the appropriate receptacle, although I am not as familiar with these as I am with the SCSI drives (I have 3 of them in the desktop machine that I built, and I learned a great deal about SCSI in the process of getting everything working).

      UNDER NO CIRCUMSTANCES SHOULD ANY DRIVE, ATA OR SCSI, BE "HOT SWAPPED" IF IT PLUGS DIRECTLY INTO THE BUS AND POWER SUPPLY. THIS WILL RESULT IN DAMAGE TO YOUR DRIVE AND QUITE POSSIBLY YOUR DRIVE CONTROLLER.

      • Um, not quite true. I once worked with a guy with a somewhat cavalier attitude to this.

        I was running a backup on the system and the backup failed. The bastard had borrowed my tape drive while it was in operation. The backup was useless, but the other SCSI-attached disks didn't even whinge. What is interesting is that this system had a tape drive and one disk on an external SCSI chain. These were not on SCA connectors.

        The thing is that it preferred termination on the bus but didn't insist on it. Without the terminator, it would happily continue, retrying every so often, but still working.

        You are right, for true hot swap you need controllers and drives that support it. Most high-end SCSI controllers with builtin RAID support this. However, it is interesting to see what one can get away with.

  • by zulux ( 112259 ) on Monday July 22, 2002 @04:27PM (#3932844) Homepage Journal
    First - 3ware makes an excellent line of IDE hardware RAID cards if you're too cheap for SCSI.

    Secondly - Windows software RAID will blow up in your face - especially Microsoft's version. I've 'lost' three RAID arrays to Promise, and four to Microsoft, before getting a clue and forever forsaking crappy software RAID. Windows software RAID sucks so hard that even if they fix it now, its suckiness will carry over for years.

    So you really have two choices for Windows RAID - SCSI or 3Ware.

    Aside: Too bad Microsoft and Promise are too stupid to review NetBSD's RAIDframe - this is software RAID done right. Totally abusable - you can pull out an IDE cable and it just keeps chugging along. Easy to set up as well - no guessing if it's going to work; it just does.

    • I can't mod the parent up, but I want to second that. 3ware is fantastic. We have used it under NT, Linux and W2K. But I have to add one thing: We have never used RAID5 and always used RAID 10, simply because with IDE you can afford that ;-)

      160 GB capacity with 3ware RAID 10 and 4 IDE disks should be cheaper than a SCSI RAID controller and RAID 5 setup that gives you 160 GB capacity. And I seriously doubt that the SCSI RAID 5 will be faster.

      Bye egghat.
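As a sanity check on that comparison, here is a minimal Python sketch of usable capacity per RAID level. The 80GB drive size is an assumption chosen so that the poster's four-disk RAID 10 figure of 160GB works out; real arrays also lose a little capacity to metadata, which is ignored here.

```python
def usable_gb(level, disk_gb, n_disks):
    """Rough usable capacity in GB for a given RAID level (toy model)."""
    if level == 0:
        return disk_gb * n_disks          # striping, no redundancy
    if level == 1:
        return disk_gb * n_disks // 2     # mirrored pairs
    if level == 5:
        return disk_gb * (n_disks - 1)    # one disk's worth of parity
    if level == 10:
        return disk_gb * n_disks // 2     # striped mirrors
    raise ValueError("unsupported RAID level")

# Four assumed 80GB IDE disks: RAID 10 halves raw capacity,
# RAID 5 gives you (n-1) disks' worth.
print(usable_gb(10, 80, 4))   # 160
print(usable_gb(5, 80, 4))    # 240
```

The point of the parent comment survives the arithmetic: IDE disks are cheap enough that you can burn half your raw capacity on RAID 10 and still come out ahead of a SCSI RAID 5 setup of the same usable size.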
    • Agreed... 3ware is good stuff -- their controllers are the only IDE RAID systems worth using.

      Oh, and it has to be said: It runs Linux! They officially support Red Hat and SuSE, but the driver is a kernel module for Linux 2.2 and better.

      Check out the faq [3ware.com].

  • Buy a RAID card (Score:4, Informative)

    by Will Sargent ( 2751 ) on Monday July 22, 2002 @04:31PM (#3932867) Homepage
    $300 will buy you a decent 3ware Escalade 7410 card, which comes with both Windows and Linux support.

    Promise IDE RAID is a lot cheaper, but unreliable; I would get kernel trap exceptions all the time and it wasn't worth the trouble. Aside from a problem setting it up, where the onboard motherboard ATA-100 driver was conflicting with the 3ware card, I haven't had so much as a hiccup. There's an erroneous report that says they only work on 64-bit PCI, but they work fine on 32-bit as well.

    With CPU speeds being what they are, IO is really the bottleneck in your average computer. I've seen a dramatic improvement since the card went in -- I'd guess compilation time has halved.

    If you're starting fresh, see if you can get a Tyan motherboard with 64-bit PCI and you should have no problems for the foreseeable future.
  • Software RAID-1 is actually quite fast. In my benchmarks it is as fast or faster than a hardware RAID-1 solution on Linux. I'd expect that MS's implementation performs similarly. It is very cheap to implement :)

    If you're doing RAID 5 (or 10) you could benefit from more horsepower. You have a few options, in rough order from cheapest to best performance and reliability:

    - Buy faster CPUs to make up for the overhead of software RAID-5 or RAID-10. They will still not be as fast as a hardware solution, and it might be a real pain to deal with in a disaster situation. Make sure you have lots of backups.

    - Use the 3ware 7850 card to get you cheap IDE RAID-5. Obviously the benefit of this is that you can save a ton of money on disks. In my experience the card performs reasonably well and is stable, but I have to admit I've only been using it for non-critical fileservers over the past 6 months. It may not be a mature solution for all uses.

    - Buy a classic SCSI hardware RAID card (like a Mylex AcceleRAID 320) with a large battery-backed RAM cache. This type of card will give you the highest performance, and you can safely enable write caching as well, which will tremendously improve your RAID-5 write performance if that is the RAID level you want to use. It's a rock-solid but expensive solution when you count the cost of the SCSI drives.

    Some pitfalls:

    Don't use IDE (hardware or software) RAID with Promise controllers. I don't really have any proof, just lots of anecdotal first- and second-hand reports of craptapular performance and instability.
    • I purchased an ECS K7VTA3 3.1 motherboard for our mail server. I wanted to use Linux software raid 1 for the mail spool / queue since I wasn't keen on doing backups for 1.6G of mail.

      The motherboard has a Promise PDC20265R. I didn't care about its RAID abilities, just the fact that it had a pair of ATA100 ports.

      The goddamned FastTrack Lite BIOS refuses to leave the drives alone! It sets up the RAID array and then Linux sees only /dev/hde (the primary master). I really don't want to use the Promise RAID driver. Does anyone know a way to either tell the driver to back off and let me use hde/hdg in a (stable, standard) software RAID array or how to get the damned FastTrack BIOS to leave the drives alone altogether?

      • It's usually an option in the motherboard BIOS setup - maybe something like 'PDC20276 mode' with options of ATA or RAID. Setting to ATA should stop the Promise BIOS from initting.
      • If you can't do anything else, try creating a 1-disk stripe for each drive. That's what I have to do with my PCI FastTrak card if I want to access non-striped drives.

        • If you can't do anything else, try creating a 1-disk stripe for each drive. That's what I have to do with my PCI FastTrak card if I want to access non-striped drives.

          I didn't have the option. The BIOS let me select one of two possibilities: RAID0 or RAID1+0, each using both drives.

          At any rate I'm a bit of an idiot; hde and hdg were both seen by linux even with the BIOS doing this. I am now happily running stable software RAID1 with ext3 in full-journalled mode. It may be slower than ext2 and hardware-assisted RAID but I know my data (mail spools and queue) is pretty damned safe. :-)

      • The goddamned FastTrack Lite BIOS refuses to leave the drives alone!
        I'm using three Gigabyte MoBo's with that Promise IDE/RAID controller: the board has 4 total IDE channels.

        Next to the third & fourth IDE connectors, there's a jumper that has to be set to either "RAID", "IDE", or possibly "Disabled", knocking that extra IDE controller out completely. On a Win2K box I'm using the "RAID" setting; on an OpenBSD box I'm just using it as straight "IDE", so I have three HDs and a CD-ROM, each master on their respective channel.

        So, uhm, my point... I'd see if there isn't a jumper you can set to disable the RAID function, and just use them as straight IDE controllers.

        • So, uhm, my point... I'd see if there isn't a jumper you can set to disable the RAID function, and just use them as straight IDE controllers.

          I appreciate the input, but this motherboard has no such setting, not on the board itself nor in the BIOS. Oh well. I have it fixed now, I think. :-)

      • by Alex ( 342 )
        Raid 1 is NOT a reason to not have backups - disk loss is not the only thing to destroy data.

        Though with a mirror you will get faster reads which is useful for a mail server.

        Alex
        • Raid 1 is NOT a reason to not have backups - disk loss is not the only thing to destroy data.

          Are you sure you're not Captain Obvious? I know that, but I'm saying that with a system like this in place, barring a dumb mistake, the data will be safer than with a single drive.

          A quarterly backup with daily "diff" backups should do fine and keep the backup size small.

  • I use Promise FastTrak RAID controllers in a mirror configuration (two drives, one on each cable) in 15 Novell servers. I have both 66 and 100TX2 models in service (most of them for over a year) with no problems.

    I also used one on my workstation (striped, two 7200RPM 20GB drives) for the better part of last year and it sped up my computer substantially with no problems.

    When a server has gone down, usually both hard drives still have good, valid data. When one hard drive goes down, the other keeps trucking until I replace it (offline - I didn't get hot-swap enclosures; it happens so infrequently that it's not worth it).

    So for the low end (i.e., CHEAP), hardware RAID from Promise is right on the money. If you want something without such bad anecdotal evidence (as attested by other posts in this story) then you will have to pay more.

    As always, your customers get what they pay for. So far my company's investment has paid off over and over again - I don't have to recreate the entire server from the ground up (or from a backup) when at least one hard drive is good. I've had to replace 4 servers in the last year and one or both hard drives have always survived whatever caused the server to go down. (These are low usage, but physically punished servers)

    -Adam
  • The real answer is to use server class hardware from a real supplier. Both Dell and Compaq have rather solid server offerings with hardware raid built in. No need to worry with MS's implementation. And if you're doing RAID 5, you really should be in hardware anyway, or performance will kill you.

    If this is for home use, or for fun, then play all you want. But since you're paying for Windows anyway, you might as well pony up for real hardware. Your life will be happier for it.
  • by Telastyn ( 206146 ) on Monday July 22, 2002 @05:14PM (#3933177)
    If you're working for a company, just get a decent server with hardware RAID. Dell's servers use PERC 2 or 3 cards, which can support up to a metric buttload of drives (12-16? not including 2-4 external connectors). Win2k can use all of the cards nicely and will let the controller do its thing. Most Linux distros come with drivers for them too, if you want to make sure you aren't stuck with Win2k.

    Poweredge 2550's can carry 4 drives and are fairly cheap by company standards (~$5k decked out).

    Notice: I do not work for Dell, but I am a Windows admin for a company that does buy from Dell
  • The nature of IDE is synchronous transfer: only one drive on a channel can talk at a time. RAID is OK on IDE for redundancy, but will never be anywhere near as fast as a SCSI drive. I would never implement IDE RAID in an environment other than just to play.
    • That's not bad if you keep the number of drives down... 240GB on a system if you use two RAID cards with a 2-drive stripe-set on each and software-mirror between them... (RAID 5 would usually be a bad move without hardware and battery-backed cache).

      Apple are using ATA RAID on Xserve (one controller per drive), btw.

  • I myself am successfully running a RAID 5 system based on the software solution of Win2K.

    I use 5 Maxtor 60GB drives (don't ask me for the type, I am too drunk to remember), giving me a redundant, failsafe 120GB online.

    It's fast enough for my pr0n collection, and I already had a hard drive failure; I just exchanged the drive for a new one of the same type (my vendor exchanged it without asking), and all I had to do was a right-click and 'enable' in the drive manager.

    I just like having the option to create a RAID 5 system in software without worrying about controllers, hard disks, drivers, etc.

    Yes, sure, software-based is always slower than hardware-based, so I wouldn't use it in a webserver (especially if it could get slashdotted). But it's fine for a fileserver in your company, as a (backup) server, or for your machine at home.

    I just made sure to use 5 identical HDs...
  • Just look for hardware RAID cards with Windows 2000 compatibility. A monkey could do this for you, if you feel like training the monkey for a few hours.

    Here is what my monkey found. [google.com]
  • Our shop has deployed a half dozen terabyte servers in-house using pairs of 8- and 10-port cards (they just announced the 12-port cards, BTW), fully populated with 120GB and 160GB drives in various arrangements on Linux machines.

    There have been some bumps with the hardware, but 3Ware has been responsive to our bug reports, and our current revs of the drivers/firmware have been solid. The drivers have been incorporated into the main kernel, and you can download the latest driver from 3Ware's site.

    The 3dm software is great for managing and monitoring the arrays. It has a web interface, email notification, and SNMP access (I think).

    Give them a fair evaluation and you should be impressed.

  • There are some rules you must follow:
    1) If you have a server, NEVER EVER EVER use IDE for hard drives!! SCSI consumes a lot fewer resources than IDE and it is fast. IDE RAID will never be ready for real-world servers as it uses the CPU too much; SCSI has its own CPU.
    2) Software RAID is evil. It is too unpredictable. Use a good-quality (read: Adaptec) SCSI RAID controller; we use the Adaptec 2100S.
    3) Don't skimp on hardware for servers; it may cost a lot, but it's better to spend now than later. If you are using M$ Windows, invest in a REAL server motherboard (Intel), ECC RAM, SCSI RAID, IBM hot-swap hard drives, and a good backup tape drive.

    The one rule I have learnt with servers is that when the server goes down at 9:30am, you want good hardware not the cheap piece of crap.

    Intel motherboards are the bee's knees of motherboards. They almost never fail, are over-engineered, and have the best management features around.
    • by Faldgan ( 13738 )
      You know, I have seen lots of people say the exact same things as you. "IDE is so bad! Never use it!" "Use brand $x, it's the only REAL brand!" and overall, "Spend more money than you want to"

      I'd like to respond in general to these things.
      #1) IDE bad SCSI good.
      The most common argument I hear is about CPU resources. Now let's think about this. We'll go with the largest drive each interface has. SCSI: 181GB @ $1000; IDE: 160GB @ $222. That is a price difference of almost $800. For $800, you can buy yourself TWO Intel 2.4GHz processors. So if you aren't already running the fastest processor out there, you'd be better off (price-wise) getting IDE and purchasing a faster processor (or two, or whatever). This result is even more valid if you have more drives (bigger savings). Quality of the drives? In many cases, they are the exact same drive with different electronics attached to them. The quality is the same. Also, there are IDE RAID cards that have their own CPU. But you can just do software RAID with the faster CPU. BTW, people: RAID does NOT improve performance. It hurts it. Read some benchmarks if you don't believe.
      #2) ALWAYS buy the best you can afford.
      I've got 4 servers that were the top-of-the-line, most reliable hardware of their day and are now 5 years old. They are all working just fine. They cost $8k each back then. I've also got about 10 desktop computers flipped on their side, with 'server' written on them in crayon. They were about $2k each. They all still run just fine. If we had purchased all of them as desktops, I could have paid myself $24k extra. That money was wasted. Sometimes (very seldom) it pays to buy the best. But if something is redundant anyway, get cheap! If it breaks, replace it. You've still saved the money. If it can be down, just keep it backed up, and buy cheap. You'll save money (a LOT of money) in the long term.
      My basic idea here is that spending more money isn't always the best thing to do. Yeah, it's a lot more fun to play with a new Sun220R than a used P450 desktop from "Mikes Computers" but with a $10k price difference, there needs to be some VERY good reason to buy the expensive stuff.
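The drive-price arithmetic in point #1 above can be checked with a few lines of Python, using the (circa-2002) figures quoted in the comment:

```python
# Price-per-gigabyte comparison using the figures quoted above
# (2002 street prices; illustrative only).
scsi_gb, scsi_price = 181, 1000   # largest SCSI drive cited
ide_gb, ide_price = 160, 222      # largest IDE drive cited

print(round(scsi_price / scsi_gb, 2))   # dollars per GB, SCSI: 5.52
print(round(ide_price / ide_gb, 2))     # dollars per GB, IDE: 1.39
print(scsi_price - ide_price)           # premium per drive: 778
```

At roughly 4x the cost per gigabyte, the "almost $800" per-drive premium the poster cites checks out, which is the whole basis of the buy-IDE-and-a-faster-CPU argument.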
      • 1) IDE drives usually spin at 7200RPM, SCSI up to 15000RPM
        2) IDE ~60MB/s, SCSI 160MB/s
        3) SCSI-to-SCSI copy is many, many times faster than IDE-to-IDE

        4) You nitwit!! RAID 1 (mirror) is just as fast (in hardware RAID) as non-RAID. RAID 5, striping, is FASTER!!
      • RAID does NOT improve performance. It hurts it. Read some benchmarks if you don't believe.
        You're joking, right? Are you a RAID hater or what? Let's say you're using straight striping (RAID 0). That improves performance. Don't even argue it; since it just stripes, there's not even a checksum overhead. If you're using RAID 5 then there are more complex considerations, since there is far more going on than just writing data. But to say that it does not improve performance just shows that you're not a RAID user. Also, if you think that the difference between SCSI and IDE is CPU load, I also suggest you smell the coffee.
        In many cases, they are the exact same drive with different electronics attached to them.
        So we'll just ignore the 15k rpm SCSI units then shall we? Or how about just the 10k units? And we won't even get into looking at the 2 unit per bus limit on IDE will we. Or the lack of external connectors.
        I've also got about 10 desktop computers flipped on their side, with 'server' written on them in crayon.
        Can we just not go there? You clearly have no idea about running a proper high availability server setup.
        If it can be down, just keep it backed up, and buy cheap.
        No comment. Sheesh, and this got modded up? jh
        • I'm not a RAID hater. I run it on my home system, and on 4 servers at work. But it's NOT for performance. RAID 0 generally doesn't hurt performance much, but RAID 1 does: you have to write twice as much to disk! (And a lot of RAID cards actually don't 'mirror'; they do parity even for one drive. RAID 1 and RAID 3 end up being the same thing - it saves on programming costs.) RAID 5 ALWAYS hurts performance.
          Any combo RAID (10, 0+1, 15, 51) hurts performance. I hate to tell you this, but you are simply wrong.
          Yes, there are 10k and 15k SCSI drives, while IDE has no models over 7200RPM. But there are MANY 7200RPM SCSI drives. Do some research sometime. Please.
          For your information, servers are not there just to have lots of hardware. They are there to *do* something. If your application doesn't require more than 2 drives (and at 160GB per IDE drive, that's a pretty hefty chunk of data), then who cares about the 2-drive limit? Along those lines, who cares about external connectors when all of your drives are internal? Why pay for features that you aren't going to use? Do all of your servers have GeForce4 4600 cards in them? What about sound cards? Why not? You want external SCSI connectors on them that you won't use, so why not add other things you won't use?
          The maximum current transfer rate on IDE is 133MB/s, and SCSI is 320MB/s. Sounds amazing, right? Well, the drives themselves sustain around 55MB/s average maximum transfer rate (this is on the Seagate Cheetah X15, the 15000RPM drive). You aren't saturating the bus with this. Yes, there is cache on the drive, and you can burst much faster, but if you ever do anything over 4MB in size, you are out of cache and are dealing with the sustained transfer rate. On a 160GB drive, in order to ignore the sustained transfer rate, you'd need to have 40k files minimum.
          I know exactly how to create and run a proper high-availability server setup. I've done it multiple times. I also know how not to waste money on features and equipment that aren't needed (which you do not). If you need the features, and you need the speed of SCSI (yes, it *is* faster, I never said otherwise), then go for it. Spend as much as you need! But if you don't, you can save a lot of money buying IDE, or other things (perhaps a second machine). I'd be interested to see a comparison of a single large SCSI RAID webserver with several smaller IDE webservers.
          What you should have gotten from this whole message is that when purchasing hardware (esp. disk drives) you really need to think about what the hardware is going to do, and buy hardware based upon requirements. People who automatically go out and buy the best they can are in part responsible for many companies going broke. Before I learned this, I spent $750k on hardware at my last start-up. We never used more than 1% of the capabilities of that equipment. I could have (should have) purchased the *right* equipment, not the *best* equipment, and saved half a million dollars.
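The sustained-rate argument above reduces to a couple of divisions. Here's a worked version in Python, using the numbers quoted in the comment (55MB/s sustained is the figure cited for the Cheetah X15; the bus rates are peak interface speeds, not what any single drive delivers):

```python
import math

# How many drives does it take to saturate each bus, given the
# sustained per-drive rate quoted above?
sustained_per_drive = 55          # MB/s, quoted sustained rate
ide_bus, scsi_bus = 133, 320      # MB/s, peak interface rates

print(math.ceil(ide_bus / sustained_per_drive))    # drives to saturate ATA-133: 3
print(math.ceil(scsi_bus / sustained_per_drive))   # drives to saturate U320: 6
```

In other words, with only one or two drives per channel, neither interface's headline bandwidth is the limiting factor; the platters are.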
          • I quote from people who should know:

            http://www.adaptec.com/worldwide/product/markeditorial.html?sess=no&prodkey=quick_explanation_of_raid [adaptec.com]

            RAID Level 0
            RAID Level 0 is not redundant, hence does not truly fit the "RAID" acronym. In Level 0, data is split across drives, resulting in higher data throughput. Since no redundant information is stored, performance is very good, but the failure of any disk in the array results in all data loss. This level is commonly referred to as striping.

            RAID Level 1
            RAID Level 1 is commonly referred to as mirroring with 2 hard drives. It provides redundancy by duplicating all data from one drive on another drive. The performance of a Level 1 array is slightly better than a single drive, but if either drive fails, no data is lost. This is a good entry-level redundant system, since only two drives are required. However, since one drive is used to store a duplicate of the data, the cost per megabyte is high.
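The two layouts quoted above can be sketched in a few lines of Python (a hypothetical two-disk array; block and disk indices are purely illustrative):

```python
def raid0_locate(block, n_disks=2):
    """RAID 0: map a logical block to (disk index, stripe row)."""
    return block % n_disks, block // n_disks

def raid1_locate(block, n_disks=2):
    """RAID 1: every block lives on every disk; reads can hit any copy."""
    return [(d, block) for d in range(n_disks)]

# Striping spreads consecutive blocks across disks (higher throughput):
print([raid0_locate(b) for b in range(4)])  # [(0, 0), (1, 0), (0, 1), (1, 1)]
# Mirroring duplicates each block (redundancy, no extra capacity):
print(raid1_locate(0))                      # [(0, 0), (1, 0)]
```

The striped mapping is why RAID 0 improves throughput (two disks service consecutive blocks in parallel), and the duplicated mapping is why RAID 1 costs half the raw capacity but survives a single-disk failure.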

          • To say that RAID isn't for performance is inaccurate: this is the whole point [3ware.com] of RAID 0.

            Even RAID 1 (mirroring) will give you a performance boost according to this white paper [3ware.com]. I'm using RAID 1 and I've noticed a difference. YMMV.

      • Someone recently accused me of looking for the trolls on /. and responding to them.

        Guilty as charged!

        BTW, people: RAID does NOT improve performance. It hurts it. Read some benchmarks if you don't believe.

        RAID is faster on reads than writes. It depends on your application, but most servers tend to do more reads than writes anyway. But the prime motivating factor usually is improving reliability of the server as a whole.

        I wouldn't recommend any RAID system which didn't have a hot-spare and the drives weren't hot swappable. Might as well keep your server up and running when you have a drive fail, right?

        I've got 4 servers that were the top of the line, most reliable hardware that are 5 years old.

        I've got a server like that under my desk. It's an older production system that they were throwing out because it was no longer under a maintenance agreement from Compaq. I now use it for testing different software configurations.

        They cost $8k each back then.

        Oh, I'm afraid my Proliant 5000 with quad PPro-200s, 1 gig of RAM, and five 9.1-gig 10k drives with a Smart Array controller cost quite a bit more than $8k in its day. My guess is more like $40k. It's worth maybe $1000 today; how's that for depreciation! :) I guess I'm questioning your concept of "top of the line".

        Companies and people tend to be cheap when they start out, but as you grow, at some point you need to mature and get past that tendency. I used to have the same attitude as you, but numerous failures have taught me a lesson.
    • Software RAID is evil. It is too unpredictable. Use a good quality (read Adaptec) SCSI RAID controler,
      If the data is really important, running software mirroring might actually be a very good idea, since it lessens the effect of a controller going bad. For RAID5 software is fairly evil though :)

      Mylex also make some nice RAID kit, btw.
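The parity work that makes software RAID 5 expensive on writes can be illustrated in a few lines of Python (a toy block size; real arrays compute this per stripe, which is the overhead hardware cards offload):

```python
def parity(blocks):
    """XOR equal-length byte strings together to form a RAID-5 parity block."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
p = parity(data)

# Any lost block can be rebuilt by XORing the survivors with the parity:
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])   # True
```

Every small write must update the parity block too (read-modify-write), which is why posters above recommend either hardware RAID with battery-backed cache for RAID 5, or plain mirroring for software RAID.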

  • SCSI vs IDE
    1) SCSI drives have lower seek times and higher transfer rates, for three reasons.
    1.1) The SCSI bus operates faster (around 160MB/s)
    1.2) SCSI drives spin faster
    1.3) SCSI does not use the CPU to transfer data. It uses DMA

    2) Hard vs soft
    Hardware RAID is invisible to the OS (almost), so in recovery situations it is better. Plus, hot-swap is just cool.
    • SCSI vs. IDE (Score:2, Insightful)

      by AlecC ( 512609 )
      As an interface, the difference between SCSI and IDE is small. Yes, SCSI has a few more controllability and asynchronous features, but these are not a big deal. The real difference is that manufacturers use SCSI as a marker for a generally higher level of build quality and testing. Just as PCs marketed as servers are built better than desktop workstations, SCSI drives are simply better built than IDE ones. The price difference is not the trivial/zero cost of the different interface; it pays for better bearings, stronger actuators, more rigid cases, bigger buffer RAM, cleverer firmware, extra levels of ECC, more vibration testing and so on. Check the MTBF figures - when I last looked, SCSI drives had 5 times the MTBF of comparable IDE drives from the same manufacturer. Basically, IDE is designed down to minimum cost for the cutthroat desktop/home market, while SCSI is designed up to beat the competition in the less price-sensitive server market. [Most of this derived from talking to the tech support of a major disk manufacturer]

      Which means that if you really, really want your data to stay there, the delta of SCSI is probably worth it. OTOH, I would go for RAID-ed IDE before non-RAID-ed SCSI - drives fail, even the best.

      There is no technical reason why IDE cannot be made hot-swap - but not in an ordinary PC case. You need a mounting enclosure designed to make/break contacts in the right order, and a controller designed for hot swap. These cost money, and people tend to put that money alongside the premium features already in SCSI rather than into minimum-cost IDE.
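The "RAID-ed IDE before non-RAID-ed SCSI" point can be put in numbers with a simple exponential-failure model. The MTBF figures and rebuild window below are assumed illustrative values (only the 5x gap comes from the comment above), not vendor specs:

```python
import math

HOURS_PER_YEAR = 24 * 365

def p_fail(mtbf_hours: float, hours: float) -> float:
    """P(a drive fails within `hours`), assuming exponential failures."""
    return 1.0 - math.exp(-hours / mtbf_hours)

ide_mtbf, scsi_mtbf = 300_000.0, 1_500_000.0   # assumed, keeping the 5x gap
rebuild_hours = 24.0                            # assumed rebuild window

# Single SCSI drive: any failure in the year is data loss.
p_scsi = p_fail(scsi_mtbf, HOURS_PER_YEAR)

# IDE mirror: losing data needs one drive to fail AND its partner to die
# before the rebuild finishes (ignoring correlated failures, bad rebuilds).
p_first = 1.0 - (1.0 - p_fail(ide_mtbf, HOURS_PER_YEAR)) ** 2
p_mirror = p_first * p_fail(ide_mtbf, rebuild_hours)

print(f"single SCSI: {p_scsi:.4%}  IDE mirror: {p_mirror:.6%}")
```

Under these (rough) assumptions the mirrored IDE pair loses data orders of magnitude less often than the lone SCSI drive, even with the 5x MTBF handicap - which is the poster's point: redundancy beats build quality.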
  • I've set up a few RAID servers for testing at work: a software-based SCSI RAID 5, a hardware-based SCSI RAID 5, and an IDE one.

    The hardware SCSI was by far the easiest and best-performing of all of them. The controller (Adaptec 1000U2 series) had a nice simple menu and was the only one which allowed me to include the system partition in the RAID 5.

    Our software RAID 5 had to be set up once we had the machine up and running, so we couldn't include the system partition, and it ran very slowly when doing large file transfers.

    We did the least testing on our IDE one, since by the time we got around to it, we had decided to go with the hardware SCSI. It ran OK - faster than the software SCSI one, though still slower than the hardware SCSI. It wasn't hot-swappable either, but that was only a minor concern for us.

  • Check out the ARCO IDE RAID Controllers [arcoide.com] ... they build a number of devices that basically sit between your IDE controller and the hard drive and mirror the drive transparently.

    It's only RAID 1, and there aren't any performance benefits ... but it seems quite solid.

    You set up the controller with 2 or 4 drives on it, and your system basically sees 1 or 2 drives. Every write to a drive is automatically done on both drives of the mirrored pair.

    It's a hardware solution, so it's OS-independent.

    mm
  • We have a Promise RM8000, an external SCSI device that houses an IDE RAID unit with 8 drive bays.

    When we first installed it, we put four 160 gig drives in it and created a RAID 5 array formatted as an NTFS volume. We needed the space right away, so we started using it.

    When the remaining four 160 gig drives came in, we converted the disk to a Windows "dynamic disk". This allowed us to simply plug in the four new drives and extend the existing volume onto them.

    After a day or so of whizzing and whirring the RAID 5 array was happy with its reorganization and we had a happy little 1 terabyte volume. And in case you are wondering, yes, it really did take almost 24 hours before the RAID array stopped shuttling data around. The volume was available for use immediately, however.

    A couple of weeks later we added a high-speed SCSI scanner onto the same SCSI chain as the RM8000. Suddenly the drive was not visible in Windows anymore. We checked for proper SCSI termination, etc., but it wouldn't show up.

    We removed the scanner and put all cable and termination settings back to their previous values. The drive showed up again, but Windows said that it was Unallocated!!!

    That's right...our 1 terabyte drive was gone...poof! We had 600 gigs of data on that thing!

    Promise told us that what we were describing was "impossible". Microsoft also had no explanation. Thanks...thanks a lot.

    I probably won't touch IDE RAID or dynamic disks ever again, unless I see some real proof that they have become much more reliable.
  • I have a HighPoint RocketRAID 100 in my stereo, working fine so far, with 4 drives in a 0+1 config for 160 GB under NT 4 SP6a.

    Haven't (yet) tried to replace one of the drives, but the initial install was easy.
  • Cheap customers, hmmm...
    Give me a second, I'm having trouble with the concept of Windows (any version) and reliability.
    Nope, can't do it. 2K is more reliable than Win3x or Win95 or Win98... but I just can't put "reliability" with it.
    Have you considered using a *nix system for backup of the Win system?
    Cost being a deciding factor, the trade-off is going to have to be in potential loss of data;
    depending on the load, you could give the customer tiered options for recovery of data.
