Which RAID for a Personal Fileserver?

Dredd2Kad asks: "I'm tired of HD failures. I've suffered through a few of them. Even with backups, they are still a pain to recover from. I've got all fairly inexpensive but reliable hardware picked out, but I'm just not sure which RAID level to implement. My goals are to build a file server that can live through a drive failure with no loss of data, and will be easy to rebuild. Ideally, in the event of a failure, I'd just like to remove the bad hard drive, install a new one, and be done with it. Is this possible? How many drives do I need to get this done: 2, 4, or 5? What size should they be? I know when you implement RAID, your usable drive space is N% of the total drive space, depending on the RAID level."
  • RAID 1.... (Score:2, Interesting)

    by jsimon12 ( 207119 ) on Wednesday June 16, 2004 @03:23PM (#9444784) Homepage
    I just got two 250GB drives and mirrored them.
  • RAID complexity (Score:2, Interesting)

    by ckaylin ( 193523 ) * on Wednesday June 16, 2004 @03:26PM (#9444833)
    It's axiomatic that the more money you spend for reliability the more likely you are to have some kind of failure. Our fancypants Dell PowerVault RAID enclosures are constantly giving us trouble, yet the machines with just a single IDE drive keep on ticking for years and years.
  • Raid 5 (Score:3, Interesting)

    by silas_moeckel ( 234313 ) <silas.dsminc-corp@com> on Wednesday June 16, 2004 @03:26PM (#9444839) Homepage
    If you're running a fileserver with a decent amount of writes, you're going to want RAID 5, as it has the least penalty. Hot-swap drives are easy enough with SCSI or FC, a bit more complicated with SATA, and rather complicated with IDE, but it can be done. For a simple setup, as few as 3 disks will do and you will get 2 disks' worth of space; performance setups will have more spindles. You didn't say what sort of load you're expecting, and that makes a huge difference. For the ultra-cheap route, I have picked up IDE RAID 5 cards supporting 4 drives with hot swap for under 30 bucks on eBay. They only work with 120GB drives max and are limited to Ultra/66, but that's a third of a TB usable for a few hundred bucks, and the performance is good enough for a 100BaseT file server.
  • Re:RAID 1 (Score:5, Interesting)

    by arth1 ( 260657 ) on Wednesday June 16, 2004 @03:37PM (#9445013) Homepage Journal
    For a file server, I'd use the combination of RAID 1 and striping known as RAID 1+0 or RAID 10.
    The benefits are that you get the same protection as with RAID 1, but lose the speed penalty, all without needing special hardware or spare CPU power for expensive parity calculations.
    With a 4 drive RAID 1+0, you'll get read performance of 2x-4x a single drive, while writes will be from 1x-2x. In theory, that is. In reality, if using a RAID PCI card or motherboard solution hooked to the south bridge, you'll most likely max out the read speed.

    Anyhow, it's a very cheap solution that doesn't tax your CPU too much even if done through software (like with a Highpoint controller), and it does give you peace of mind.

    The worst downside is that you will have to take the system down to change a drive (correct me if I'm wrong, but I've never seen a hot-swappable RAID 1+0 solution), and the performance before you do that will take a substantial hit.

    Raid 4/5 is nice because it doesn't waste a lot of drive space, but it comes at the price of very slow writes, and very high CPU use unless you also get a hardware controller with an onboard CPU.
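
    For reference, a minimal sketch of that nested 1+0 layout using Linux software RAID (mdadm; the device names are placeholders):

      # Two RAID 1 mirrors, then a RAID 0 stripe across them.
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdf1 /dev/hdh1
      mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

    Losing one disk from each mirror is survivable; losing both halves of the same mirror is not.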

    Regards,
    --
    *Art
  • RAID 5 (Score:2, Interesting)

    by Luciq ( 697883 ) on Wednesday June 16, 2004 @03:41PM (#9445054) Homepage
    Actually, I just built a 1TB fileserver for my home last month (I do a lot of video editing and need a secure place to store it). I'm using Mandrake Linux 10, but most any flavor will do as long as you have the raidtools installed. Also be sure to install Samba so you can map drives on both Windows and Linux systems.

    One great thing about using Linux on the fileserver is that you can use software RAID. As the name implies, this requires no special controller cards (which is nice, since RAID 5 controllers typically run $200+). You also have the option of setting spare drives, which allows the array to begin rebuilding immediately in the event that one drive fails - the spare takes its place. Setup is easy - create a RAID, select what type you want, and then add drives to it and format.

    I'm using a RAID 5 setup with 5 x 250GB drives giving me 4 x 250GB = approx. 1TB of storage space. As has been mentioned, using RAID 5 allows you to recover if one drive fails. The odds of more than one drive failing before you have a chance to rebuild the array are essentially the odds of your box being destroyed (tornado, fire, etc.).

    Also previously mentioned, never attach more than one drive per IDE bus (assuming you're using IDE like I am). Doing so is irresponsible from a bandwidth standpoint as well as from a reliability standpoint, since a drive crash typically brings down the bus, and all drives on the bus with it (and as we all know by now, losing >1 drive is not survivable). Buy some cheap PCI IDE controllers, keeping in mind to ensure that they're dual channel if you plan on connecting >1 drive per controller.

    Take some time and read this [tldp.org] - it will tell you everything you need to know.
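
    If you go the mdadm route instead of raidtools, a minimal sketch of the same setup looks something like this (device names and mount point are placeholders):

      # Five active drives plus one hot spare in a RAID 5 set.
      mdadm --create /dev/md0 --level=5 --raid-devices=5 --spare-devices=1 \
          /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 /dev/hdi1 /dev/hdj1

      # Put a journaling filesystem on it and mount it.
      mke2fs -j /dev/md0
      mount /dev/md0 /mnt/storage

      # Watch the initial build/resync progress.
      cat /proc/mdstat

    The spare sits idle until a member fails, at which point the kernel starts rebuilding onto it automatically.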
  • Re:RAID 1 (Score:2, Interesting)

    by swv3752 ( 187722 ) <[moc.liamtoh] [ta] [2573vws]> on Wednesday June 16, 2004 @03:48PM (#9445141) Homepage Journal
    Most RAID 10 solutions don't give a speed boost, and they are much more vulnerable than you would first believe. If any two drives fail in the array, half of the time the array is toast. Some of the cheaper RAID hardware may not even allow rebuilding one drive in a RAID 10.

    Pretty much the solution is RAID 1 or RAID 5. Besides, on most RAID controllers RAID 1 gives faster read throughput than a single drive, though writing does take a bit of a performance hit. RAID 5 is expensive, while most any RAID controller can do decent RAID 1.
  • 0+1 (Score:2, Interesting)

    by whitelabrat ( 469237 ) on Wednesday June 16, 2004 @03:49PM (#9445147)
    In my career I've had the best results with a 0+1 RAID, also known as a striped mirror. RAID 5 in particular has some performance hitches due to the redundancy method: you have to have a lot of disks to really get good performance and redundancy, and if you lose a disk your performance drops like a bomb.

    In 0+1 it's all just data, baby! Lose a disk, just break the mirror and you'll still get good speed until you can fix the failed disk.
  • by mooboy ( 191903 ) on Wednesday June 16, 2004 @03:52PM (#9445194)

    I've found the linux kernel's built-in RAID [ibm.com] capabilities more than adequate for most of my fault tolerance needs. The best part is I can move the drives to pretty much any system - a new motherboard, whatever - without having to worry about kernel support or finding that IDE driver. If a drive fails I can boot its mirror up in any system and be in great shape. I also use the utility mdadm [unsw.edu.au] to email me if one of the drives fails. For some linux firewall systems I've built, I use old crappy 6GB drives, but mirror them so there's no risk if one of them goes out. Looking at my basement firewall now and...

    root@fw01:~# cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid5]
    read_ahead 1024 sectors
    md0 : active raid1 hdb2[0] hda2[1]
          38796864 blocks [2/2] [UU]
    unused devices: <none>

    everything is cool!
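
    For anyone wanting the same alerts, that's just mdadm's monitor mode; a minimal sketch (the mail address and device are placeholders):

      # Watch /dev/md0, run in the background, and mail on failure events.
      # (--scan instead of a device list will monitor everything in mdadm.conf.)
      mdadm --monitor --daemonise --mail=root@localhost /dev/md0

      # Add --test once to fire a test alert per array and confirm delivery works.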

  • My strategy (Score:2, Interesting)

    by bcs_metacon.ca ( 656767 ) on Wednesday June 16, 2004 @03:54PM (#9445216)
    I have two 80GB drives, with /boot (100MB) and /home (40GB) mirrored, but the rest is / on one drive and /data on the other.

    Basically, I'm more worried about keeping what's in /home than I am about full failover redundancy in the case of single-disk failure. Rebuilding the OS is a reasonably painless process but some of my data is irreplaceable (and backup CDs/DVDs are too easy to lose/break/corrupt/tempting to re-use). /data holds information I don't care about so much or that I can get back (like my ripped-from-CDs-I-own music).

    If zero-downtime is a critical factor for you, you probably want to RAID-1 the whole disk (just remember to copy the MBR, too!)
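
    A rough sketch of the MBR copy (device names are placeholders - double-check them, since dd will happily overwrite the wrong disk):

      # Copy the 512-byte MBR from the first disk to its mirror.
      # bs=512 copies the boot code plus the partition table; use bs=446 if you
      # only want the boot code because the second disk's partition table differs.
      dd if=/dev/hda of=/dev/hdb bs=512 count=1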
  • Re:RAID 1 (Score:5, Interesting)

    by robi2106 ( 464558 ) on Wednesday June 16, 2004 @03:58PM (#9445254) Journal
    No kidding. I woke up one morning, turned on my system, and found my one and only partition on my storage drive (non-RAID) totally gone. WinXP Pro just decided to wipe it clean. I surfed around a while and found a nice Russian (or some other foreign) site that served up a juicy hacked exe for a hard drive recovery app. It did the trick and recovered my data by rebuilding the partition table based on the data (or something like that).

    I was even thinking of buying the app until I surfed to the company's site and found it was >$2K US. Screw that. If it happens again, I may not recover my stuff.

    I didn't have anything critical on there, but it would have been very time consuming to re-rip my CDs again.

    jason
  • Re:RAID 1 (Score:3, Interesting)

    by slaker ( 53818 ) on Wednesday June 16, 2004 @04:01PM (#9445284)
    The Sony ES-series 400-disc jukeboxes can be daisy-chained in groups of up to three per logical unit. As long as you have something on hand to act as an index (I use a trivial little web database), it's very easy to access a substantial number of video DVDs quickly. ... but then I use the same web database to access the approximately 6TB of computer files that I ALSO keep on-line.
  • Re:RAID 1 (Score:5, Interesting)

    by dead sun ( 104217 ) <aranachNO@SPAMgmail.com> on Wednesday June 16, 2004 @04:04PM (#9445317) Homepage Journal
    And if two drives go down in a RAID 1 you're how much better off than in RAID 5? RAID 1 consists of two drives. At least with RAID 5 you'd still have at least one good drive that you could hastily format and use if need be.

    The only semi-common RAID I know of that could handle two drives failing at the same time would be RAID 10, a mirrored set of striped drives, and then only if one side of the mirror died.

    For your diligence bit, I've actually worked with a machine that had a drive fail in the RAID 5 set, and then, as the hot spare came online and started rebuilding the data needed to keep the R in RAID, another drive died. The whole set was then completely unusable, and somebody probably would have been fired if there weren't a set of recent backups around. As it was, a couple of people got to work about 12 more hours on top of their 8 for the day to make sure the machine was running again by the next day.

    Thus my moral, RAID isn't a replacement for backups, as there still can be failures. RAID will reduce the frequency with which you need said backups, hopefully to never, but it can still fail. Nothing replaces a good backup.

    Oh, and also another good reason for RAID 5 instead of 1, there should be a bit of speedup since there's multiple disks involved, assuming, of course, your RAID card can handle all the XORs.

  • Re:RAID 1 (Score:5, Interesting)

    by jsebrech ( 525647 ) on Wednesday June 16, 2004 @04:06PM (#9445338)
    DO NOT RELY ON RAID TO PROTECT YOUR DATA.

    Amen. I have vivid memories of typing rm -rf * in the wrong directory (and that was WITH pwd in my prompt). It took an entire week to duplicate the work lost.

    Combining the rm command and lack of sleep is like combining a loaded gun and your forehead. You can only do it so often before you destroy something valuable.
  • by dasMeanYogurt ( 627663 ) <texas DOT jake AT gmail DOT com> on Wednesday June 16, 2004 @04:07PM (#9445348) Homepage
    It sounds to me like this guy just needs a quality HDD and a good tape backup. Do not put your faith in RAID; put your faith in a good off-site backup. I've seen RAID solutions fail too many times, including twice recently. The first was a company with a slick server and nice hot-swappable SCSI drives, but their controller card went out. It was replaced by the manufacturer, but the techs were unable to recover the data. The next one happened when a machine's case fan went out and the mirrored HDDs cooked themselves to death. The moral of the story: NEVER TRUST RAID, and as always, keep a backup.
  • Re:RAID 1 (Score:5, Interesting)

    by Cecil ( 37810 ) on Wednesday June 16, 2004 @04:10PM (#9445383) Homepage
    Yeah, because if two drives fail in a RAID 5 configuration it'll work just fine. And a 2-drive RAID 1 also works fine if two drives fail.

    If you're having two drives fail before you can get one replaced you need better hardware or a better failure notification system, or both.

    And, speaking from personal experience, and both the theoretical and real-world benchmark tests, I can say quite firmly that the software RAID 1+0 on my dual P3 1GHz fileserver does give a 'speed boost'. Not the theoretical maximum of 4x read, 2x write, obviously, but certainly a noticeable speed boost.

    And finally, you complain about a write performance hit under RAID 1? Have you ever even used or benchmarked a RAID 5 system? Computing parity information, unless you have a *very* expensive RAID 5 controller, puts RAID 5 well behind every other type of RAID when it comes to writing speed.

    Like seriously man, have you ever even experimented with different RAID setups, or are you just extrapolating these ideas from something you read on the web?
  • Re:Software raid (Score:3, Interesting)

    by brandon ( 16150 ) on Wednesday June 16, 2004 @04:12PM (#9445417)
    I've run software RAID 5 for some time and in my experience it has been more stable than running IDE RAID with Promise IDE controllers. I have had two Promise cards go bad, but two systems running software RAID 5 under Debian have worked extremely well. Even hot-swapping works well, and I've had to use it several times.

    Performance with an IDE RAID controller is pathetic; you can't get much more than 22MB/s. I can hit 68MB/s reading and 31MB/s writing on one system with four 7200 RPM, 8MB-cache IDE drives. (This system has two extra PCI IDE cards in it so each drive is a master with no slave.)

    If you want to go SCSI, then you have software RAID and any IDE RAID card beat by a long shot. But "personal fileserver" usually means SCSI RAID is too expensive.
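
    If you want to compare your own numbers, a quick-and-dirty sequential read test looks something like this (device name is a placeholder; it measures raw streaming throughput, not seek behaviour):

      # Buffered sequential read timing of the array device.
      hdparm -t /dev/md0

      # Or time a one-gigabyte read straight off the array.
      time dd if=/dev/md0 of=/dev/null bs=1M count=1000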
  • Re:Software raid (Score:5, Interesting)

    by Oestergaard ( 3005 ) on Wednesday June 16, 2004 @04:37PM (#9445640) Homepage
    If it's just a mirror, writes are slowed slightly

    Hardware controllers with battery-backed RAM (note: not all controllers have this) will have an edge over software solutions on ALL writes - no matter which RAID level you use.

    Don't even bother trying to do RAID 5 in software

    SW RAID is usually a lot faster than HW RAID solutions, once you factor out the battery-backed RAM part. Any HW RAID controller without battery-backed memory will lose big-time to SW RAID on even moderately fast CPUs (like 500MHz P-IIIs), especially on RAID-5, which is compute intensive, and even more on RAID-6, which is also compute intensive but not purely XOR based.

    Modern HW RAID controllers have reasonably fast CPUs with XOR accelerators built in - therefore they can do RAID-5 as fast as the pure SW solution. But this is not the case with older controllers.

    I know of people who use 3ware cards for large RAID-5 servers, but only use the 3ware cards as "dumb" IDE controllers, and leave the RAID-5 handling to SW-RAID. The reason? Their benchmarks indicate that this is significantly faster.

    And when you think about it, it makes sense. Nobody puts a GHz processor on a RAID controller. Even a slow-by-today's-standards P-III is able to XOR more than a gigabyte of data per second - much, much more than anything you put through most file servers out there.

    So, the "HW RAID is faster than SW RAID" is true in one scenario only; when you have write-intensive workloads and a HW RAID controller with battery backed cache.

    In *all* other cases, SW RAID will be a win, performance wise.

    For a personal file server, I wouldn't hesitate to run RAID-5 in plain software. It's as fast or faster than any HW RAID controller in the sub-$3K price range, it's reliable, and the flexibility beats the heck out of any HW based solution out there (mixing IDE/SCSI, allowing a cryptographic layer between the RAID layer and the physical disks, etc. etc...)
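
    For the curious, the Linux md code benchmarks its XOR routines when the driver loads and logs the results, so you can see what your own CPU manages (the exact wording varies by kernel version):

      # Look for the xor/raid5 checksumming speed lines in the kernel log.
      dmesg | grep -i xor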

  • Re:RAID 1 (Score:5, Interesting)

    by tedgyz ( 515156 ) * on Wednesday June 16, 2004 @04:40PM (#9445666) Homepage
    The hot swap bays let me yank a drive out on my way out of the house if the place catches on fire. Yes, I know I should be storing that third drive at a friend's house, but it's too inconvenient to retrieve it every time I want to backup my array. So a fire may destroy everything if I'm not home or can't safely pull a drive on my way out. I'm comfortable with that.

    You can resolve this issue with high-capacity, portable storage. I keep all my most critical stuff (software, licenses, photos, pr0n, etc.) on my 40GB portable drive. Forget those keychain things. The FireLite SmartDisk [smartdisk.com] is a USB 2.0, aluminum-encased laptop drive. It draws power from USB - it even worked on my old USB 1.1 system. They provide a special power cable in case your old USB ports aren't pushing enough power. I toss the thing in my backpack every day and lug it all over - it has yet to show signs of weakness.

    I totally agree with your configuration. For my Linux server, I've been using Linux (RH7.2) Software RAID-1 mirrored for ~3 years without a single issue.
  • by KaiLoi ( 711695 ) on Wednesday June 16, 2004 @04:44PM (#9445700)
    I have just finished doing this exact thing.

    I basically built a box to do nothing other than file serving. I put together a nice simple old PC (550MHz with 256MB of RAM) and mounted it in an old rackmount case I had lying around.

    It's running debian with 2.4.26.

    I'm running software RAID and installed two dual-channel IDE cards.

    I threw in 6 Seagate 120GB drives (the ones with the 8MB cache) and ran RAID 5 across 5 of them, with a hot spare to rebuild the RAID should a drive fail. Each drive has its own IDE channel to prevent a channel failure from screwing up my RAID.

    I'm using ext3 as the filesystem and wrote my own little RAID monitoring script that SMSes me should a drive fail and alarms locally.
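
    A minimal sketch of that kind of check, run from cron (the SMS hook is a placeholder for whatever alerting you have; this is an illustration, not the exact script):

      #!/bin/sh
      # A failed member shows up as an underscore inside the [UU..] status
      # in /proc/mdstat, e.g. [U_].
      if grep -q '\[U*_' /proc/mdstat; then
          echo "RAID degraded on $(hostname)" | mail -s "RAID ALERT" root
          # send_sms "RAID degraded on $(hostname)"   # placeholder SMS hook
      fi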

    This setup has been rock steady and gives me 460 (ish) gig of usable space after formatting.

    For added peace of mind, the machine is plugged into a UPS that is connected to the machine via serial. If the UPS kicks in, it shuts the machine down properly after sending an alarm SMS (the DSL and switch are also on the UPS). (Yes, I'm a paranoid freak.)

    This makes a perfectly good media and file server and I've had no problem with it in the few months I've had it.

    I also recommend setting the spin-down time on the drives manually with hdparm. It was getting awfully warm in the box until I turned that on for the Seagates. Modern drives run rather hot. ;)

    I have the whole thing mounted via SMB on my other boxes around the house and it's fast (gigabit Ethernet), reliable, and easy.

    Though do remember that no amount of RAIDing will save you if you lose 2 drives through some horrible freak of badness, and no RAID level is going to protect you from a house fire. Hence mine also rsyncs all my absolutely vital files (scanned family photos and docs) offsite to a file storage site every night at 2am, so as not to chew up my bandwidth during usable times. Don't forget: the only truly secure data is that which is backed up.. and offsite.... twice. ;)

  • by egarland ( 120202 ) on Wednesday June 16, 2004 @05:32PM (#9446279)
    RAID 5 is only really appropriate if you are building a large array. The money you will spend on the controller will make the cost/megabyte higher than RAID 1 unless you are looking for a very big array (more than you can get with a mirrored pair.) I have a RAID 5 array I built about 2 years ago with 4 160GB drives on a 3ware 6000 series RAID controller. It has worked great and I'm planning on using RAID 5 again for my next array. I've only had one drive failure so far but it recovered from it beautifully.

    If you are willing to fork out about $1100 for storage, you can create a really nice array. I'd recommend a 3ware 4-port 9000-series controller like the 9500S-4LP (around $330), or a RaidCore card reviewed [tomshardware.com] recently over at tomshardware. Add in four $180 250GB SATA drives and you have a nice 750GB array for around $1100. The Promise FastTrak SX6000 is quite economical and supports more drives, if you don't mind its bad performance and crappy Linux support. 8-port cards are also pretty economical, but it's hard to put that many drives in most cases. You have to design a system carefully in order to create arrays much bigger than 4 drives.

    Once you have your array, it's a good idea to use Linux or something else with a reliable journaling filesystem on top of it. Once you have a RAID array, your filesystem becomes a much more important point of failure, and using a reliable one will do a lot toward reducing your likelihood of data loss.

    I also use a separate drive with a separate filesystem for backup. I have a script that manages it for me (ignoring certain directories) which runs every night. A RAID array is pretty reliable and a big step up from single drives, so it's a good halfway point, but I wasn't comfortable with it, so I went further. How far you go is up to you.
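
    A stripped-down sketch of that kind of nightly job (the paths and exclude patterns are placeholders):

      # Mirror the array to the backup drive, skipping stuff that doesn't
      # need to be kept. --delete makes the copy track removals too.
      rsync -a --delete --exclude='tmp/' --exclude='cache/' \
          /array/ /backup/

      # Crontab entry to run it at 3am every night:
      # 0 3 * * * /usr/local/bin/nightly-backup.sh

    Keep in mind that --delete propagates deletions too, so this is a mirror rather than a history of snapshots.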
  • I went with RAID5 (Score:2, Interesting)

    by steve_ellis ( 586756 ) on Wednesday June 16, 2004 @05:50PM (#9446435) Homepage
    Within the last month I set up a RAID 5 system after a nasty disk crash. I know I still need to do backups, but I needed lots of space anyway. I went with:
    • 3ware 9500S-8 8 port SATA RAID controller $485
    • 5 250GB Maxtor Maxline Plus-II drives $195 each
    • Supermicro 742T 7-bay SATA hotswap server case $330
    The drives are in a 4 drive array with one drive as a hot spare. About $1800 total, which includes the server case--pretty steep for ~700GB usable space, but I now have:
    • expandability to at least 7 hot swap drives
    • a hot spare
    • a dual xeon capable case with a 550W supply
    • plenty of airflow
    • online capacity expansion (3ware says available this summer)
    Yes, it is still a personal server, but we keep a lot of video on it as part of my DVArchive setup to support my ReplayTVs. I installed Fedora Core 2 on it right after Core 2 was released.

    Now, when I need to store a few hundred more hours of video, I can just throw 2 more Maxline Plus-II drives at it to get up to ~1.2TB--leaving final cost at under $2/GB, including the computer case, power supply and hotswap bays.

    provantage.com has the 4 port 3ware 9000 card for about $320, I think. -se

  • Re:Software raid (Score:3, Interesting)

    by Rei ( 128717 ) on Wednesday June 16, 2004 @05:51PM (#9446451) Homepage
    Hmm... this is interesting. I've noticed almost everyone here has been discussing IDE raid. Why is that? Do so few people use SCSI raid for home use? And if so, why?

    Are that many people really in need of huge read throughput, but at the same time happy to accept high seek times? Is this really the best way to get performance out of your system? A 3.6ms seek time seems bad enough to me, but I can't imagine having my root partition on your average IDE drive's 8.5-9.5ms seek time. I mean, really - you can get a 9GB SCSI drive for your root partition, brand new, for 30 bucks (incl. shipping), that has a seek time of 5ms. Why would anyone use IDE for a root partition - but then try to make it RAID for performance?

    It's something that really has me baffled. Certainly, seek time isn't important on, say, listening to mp3's or watching videos - your bulk data. But when loading libraries to run programs, compiling, starting X, etc, it makes a *really* big difference. And to think that many people out there have their *swap* on IDE drives also...
  • Re:RAID 1 (Score:5, Interesting)

    by bersl2 ( 689221 ) on Wednesday June 16, 2004 @05:57PM (#9446502) Journal
    I've done that once. This is why I've started to touch -- -i in every directory with important data. In case of accidental rm -rf *, you're not fucked. I forget where I learned that trick, but I'm sure it will be a life-saver someday.
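
    For anyone who hasn't seen the trick: the "--" just stops touch from treating "-i" as an option, so you end up with a file literally named "-i". When a stray "rm -rf *" is expanded by the shell, that file lands in the argument list, GNU rm parses it as the interactive flag (the later -i overrides the earlier -f), and you get prompted instead of silently losing everything:

      # Drop the guard file into an important directory.
      touch -- -i

      # Later, "rm -rf *" expands to something like "rm -rf -i Documents ..."
      # and rm starts asking before each removal.

    It only protects the directory the file lives in, and it relies on GNU rm's option handling, but it's a cheap safety net.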
  • Ghost (Score:2, Interesting)

    by drew_92123 ( 213321 ) on Wednesday June 16, 2004 @05:58PM (#9446511)
    I have a removable drive; I ghost every month and do incremental backups every week to a DVD-RW (separate DVD for each incremental)... Worst case, I lose a week and a couple of save games.

    The nice thing about ghosting is that I get to use a cheap 120GB 5400 RPM drive and save many compressed images from the drive I'm backing up... I'm backing up a 36GB(?) 10,000 RPM WD Raptor. It only takes 15 minutes to restart, boot from my Ghost CD, save the image, and reboot into Windows XP.

    I didn't like the RAID solutions I was looking at; this not only works just as well for my needs, but I get to keep the removable HDD in a safe (fireproof, of course) at the other end of the house, just in case...

    Just a thought...
  • Re:Software raid (Score:3, Interesting)

    by Rei ( 128717 ) on Wednesday June 16, 2004 @08:00PM (#9447523) Homepage
    You (and the next poster) completely missed the point. RAID gives you *read throughput*, not *seek time* (it actually tends to hurt your seek time a little). It may seem like a trivial distinction, but it's actually very important: most home-computing disk performance is limited by seek time, and IDE drives have god-awful seek times. As I mention in a thread a little bit above here, you can get a good-sized SCSI root partition - *from scratch* (i.e., if you have nothing already, not even a SCSI cable) - for under 50 dollars, using only brand-new components, and including shipping - that will cut your seek time almost in half compared to a good IDE drive. Given how much people spend on their systems, this is a really trivial amount for the performance increase it gets you.

    What do you need high read throughput (not write - RAID doesn't give you that) for? Are you serving 2-gig files over the web? If you're not doing things like that, such a configuration is borderline pointless.

    Take a look at /usr/lib some time. What's your median file size? Something like 25k? When your system has seeked to the proper location, assuming a mere 12MB/s of throughput, you're looking at 2ms to read that data. To get that 25k, your system has to read the root inode, the /usr inode, the /usr/lib inode, the inode for the symlink, the /usr inode again, the /usr/lib inode again, the file's inode (to get the blocks), and then it can read the blocks. Now, realistically, most of that will be cached (not true for lesser-used directories); you'll probably only have 1-2 separate read commands issued. With a *good* IDE drive, you'll be spending 8.5 to 17ms on the seeks and 2ms on the reads. Optimizing the 2ms is beyond pointless. And this example uses some kind assumptions for the IDE drive (8.5ms seek time, but only 12MB/s sustained transfer rate). And let's not even get into swap....

    Do you see what I'm saying here? Using IDE as a root partition is dumb, but making it RAID is dumber.

    Now, for slow bulk storage, nothing beats IDE. :) You won't catch me arguing with that.
  • Re:Software raid (Score:3, Interesting)

    by megabeck42 ( 45659 ) on Wednesday June 16, 2004 @08:53PM (#9447895)
    This is actually a common trend. For example, software WEP far outperforms hardware WEP - a modern processor can do the work faster than the dedicated silicon. Jeff Mogul has a great paper describing how TCP offloading is slower than software TCP:
    http://bbcr.uwaterloo.ca/~brecht/courses/856/readings-new/mogul-offload-2003.pdf [uwaterloo.ca]
  • Re:Software raid (Score:3, Interesting)

    by catenos ( 36989 ) on Wednesday June 16, 2004 @09:07PM (#9447971)
    That's 50$ for a root partition that will give you a 70% speed boost over a 7200 RPM ide drive.

    Hm, I'd rather invest those $50 into RAM (you can easily get an additional 512MB for that). It won't speed up boot time (but in my case, I don't care whether booting once a day takes 1 or 2 minutes), but after first use everything is really fast (well, at least under Linux :)

    I could even preload some stuff into a RAM disk and prevent seek times this way (via dd), but as I said, first startup isn't that important to me.

    I am also not sure why you are speaking of fast swap access several times. My swap partition hasn't gotten much use for the last 5 years (even when I was still at 386MB)[1]. If you aren't into video editing or such, today's average 512MB or so should be plenty.

    Another possibility for fast access times without spending too much, which I have done recently on a database server, is using average disks and putting software RAID on them (I needed a lot of space, and the fast disks of the needed size cost several times the price of the lesser disks).

    This worked so well with SCSI disks that I intend to try it with my home system on the next upgrade. Though I expect less performance due to IDE constraints.

    [1] It gets used whenever Linux decides that it's a good idea to swap unused parts out in order to increase the memory available for the filesystem cache - which is why I still have a swap.
  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Wednesday June 16, 2004 @11:31PM (#9448815) Homepage Journal
    I also recommend setting the spin-down time on the drives manually with hdparm.

    NO NO NO NO NO. Repeat: NO. Don't do that. Really, don't. Drives, particularly the high-RPM types you're likely to find in servers, do not like to be cycled a lot; it's the single most stressful thing you can do to one. You will dramatically shorten your drives' lifespans if you do this.
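
    If you've already turned spin-down on and want it off again, it's one hdparm call (the device name is a placeholder):

      # -S 0 disables the standby (spin-down) timeout entirely.
      hdparm -S 0 /dev/hda

      # -C reports the drive's current power state without waking it up.
      hdparm -C /dev/hda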

  • Re:Software raid (Score:3, Interesting)

    by deque_alpha ( 257777 ) <{qhartman} {at} {gmail.com}> on Wednesday June 16, 2004 @11:53PM (#9448942) Journal
    I have a similar-ish setup that is now nearly 5 years old, and only just now am I considering upgrading. I have four 9GB 10K RPM SCSI drives using software RAID 5 for my / and swap. I have a 250GB 7200 RPM IDE drive for /archive (my equivalent to your /scratch). I got a "high end" IDE drive for the archive simply because of the better warranty; the improved performance over the cheaper model was just a bonus. So anyway, the throughput on my array matches the throughput of my modern "fast" IDE drive, and it has about 1/3 the seek time. When I LAN with friends, I'm always the first with the level loaded, even though I have the "slowest" system of the group in terms of CPU, RAM, graphics card, etc.
    It cost quite a bit when I put it together, but it's been well worth it, seeing as how it has taken 5 years for the desktop-level stuff to catch up to it performance-wise. When I do upgrade, I will probably go with an Escalade driving 74GB Raptors; since they have command queueing, they're beating all but the most high-end SCSI drives out there now.
