What Kind Of Software RAID Are You Running?
ErikZ asks: "Lately, I'm having issues with my RAID. Specifically, the closed-source drivers for my RAID card only support Red Hat 9. So I've decided to eBay the card and try to figure out how to turn 4 SATA drives into a software-driven RAID 5 setup. Yes, I know I'll lose all the data, and I'm not worried about it. Finding a 4-port (or more) SATA controller card that's well supported under Linux has been difficult. Everyone wants to slap on their own RAID chip and charge you another $100 for the pleasure. Where can a guy get a highly recommended, well-supported, 4-port SATA card for Linux? The Rocket 1540 cards have vanished off the face of the earth. There are a few motherboards out there that have 4+ SATA connectors on them, but they also add RAID and some other cutting-edge features that aren't well supported under Linux.
So, I thought I'd try another route and ask Slashdot: What are you using for your Linux software RAID needs? What do you suggest?"
I'm using md, aka Linux Software Raid (Score:5, Informative)
For more information check: man md
Also, RAID 5 is distributed-parity RAID: there is no data loss if only one drive goes. It takes two failures to lose data on a RAID 5 array.
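The parity math behind that is just XOR. A toy sketch in pure Python (the block contents are made up) showing why any single lost drive is recoverable:

```python
# Toy RAID 5 parity demo: the parity block is the XOR of the data blocks.
data = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]  # blocks on three data drives

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

parity = xor_blocks(data)  # stored on the fourth drive

# One drive dies: XOR the survivors with the parity block to rebuild it.
lost = data[1]
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == lost
```

Lose a second drive before the rebuild finishes and there is nothing left to XOR against, which is exactly the two-failure limit.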
Re:I'm using md, aka Linux Software Raid (Score:4, Interesting)
When I first got it, I stuffed a lot on the raid drive, disabled it, wiped out one disk, and re-activated the raid. It rebuilt it and worked fine.
I asked this question on a Debian (or Debian based) user list at a time when a lot of experienced admins were around, and overall the feeling was that there was no need to go hardware and the software raid would do the job.
Re:I'm using md, aka Linux Software Raid (Score:2, Insightful)
Which defeats the purpose, as the performance gains from RAID are going to be greater on the system drive (swap space, loading programs, libraries, program resources, etc.) than on the data drive. The data drive is typically multimedia, where performance isn't a factor: when you save it, your download speed is the bottleneck, and when you play it, the multimedia files have set bit
Re:I'm using md, aka Linux Software Raid (Score:2)
I also am confused -- you say data drives are mostly multimedia, and point out that they have set bitrates for reading, then get vulgar about someone getting good performance for mp3 players. It seems you a
Re:I'm using md, aka Linux Software Raid (Score:2)
[...]
I actually looked up both words, as I do to every single word I ever type in a word forum (after all web forums are the epitome of literary prestige) and concluded that infact imply was the inferior choice.
I suggest you look up the words "your" and "you're". See if you can find "infact" anywhere as well.
Re:I'm using md, aka Linux Software Raid (Score:2)
Re:I'm using md, aka Linux Software Raid (Score:2)
Re:I'm pretty sure you can boot from it (Score:2)
I don't see the problem with putting a 100MB partition for
Re:I'm pretty sure you can boot from it (Score:2)
SOL? Knoppix, any other LiveCD, or an install CD with md support in its kernel will get you booted again if you lose
Re:I'm pretty sure you can boot from it (Score:2)
Re:I'm pretty sure you can boot from it (Score:2)
You still need /boot on a non-RAID device, so your system is still vulnerable to disk failure.
Re:I'm pretty sure you can boot from it (Score:2)
Booting from a Linux software RAID drive works just fine. I currently run a RAID1 (mirror) array for my / partition.
I have two 160GB SATA disks. Partitioning is like this:
sdb is an exact copy of sda. I boot fine from /dev/sda3 (the grub boot block is on /dev/sda1, but that is only to dodge the 1024 cylinder limit, the actual kernel file is on sda3)
To sum up, booting from a /dev/mdX device is easy. Just have
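For reference, a root-on-md setup like the one described usually boils down to a boot loader entry along these lines. This is a GRUB (legacy) menu.lst sketch; the device and file names are hypothetical and will differ per machine:

```
# /boot/grub/menu.lst -- hypothetical entry; GRUB reads the kernel off
# one RAID1 member as if it were a plain ext2/ext3 partition
title  Linux (root on md)
root   (hd0,0)
kernel /vmlinuz root=/dev/md0 ro
initrd /initrd.img
```

Installing GRUB's boot block on both disks, so that either one is bootable on its own, is what lets the box come up with a member missing.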
Re:I'm using md, aka Linux Software Raid (Score:2)
Re:I'm using md, aka Linux Software Raid (Score:2)
Re:I'm using md, aka Linux Software Raid (Score:3, Informative)
Another point to this: there are three kinds [linux.yyz.us] of raid setups:
From my reading on forums and other various articles, there's almost no (if any) benefit to using fi
Re:I'm using md, aka Linux Software Raid (Score:3, Informative)
IMHO, the only reasons to use these as anything other than bog-standard ATA controllers are a) if you have a pre-existing RAID setup that you w
Re:I'm using md, aka Linux Software Raid (Score:2)
Yes, there is - the same major (practically *only*, these days) advantage hardware RAID gives you:
Transparency.
Both hardware and "firmware" RAID present a single block device to the OS and BIOS, making it feasible to install and boot the OS on that RAID device and not worry about a hardware failure crippling the machine (a distinct possibility with software RAID i
RTFP (Score:2)
He's not asking how to set up the software, he's asking about hardware that contains the features he needs for software RAID (many ports) but not redundant features that reduce compatibility and/or add significant cost. (hardware/firmware RAID).
Re:I'm using md, aka Linux Software Raid (Score:2)
Because first I would need a Motherboard with 4 SATA ports on it. Which I don't have, so I need to get a SATA card.
The device-mapper works fine here. (Score:2, Interesting)
There are some things to be aware of. If you want to mount / as a raid, it can be tricky. The initrd needs to be properly configured, or the drivers must be built into the kernel.
Sometimes, the raids don't shut down completely. I've never been able to completely solve this problem. Most of the time it's OK, but some machines have trouble. The most common culprit has been NFS.
GRUB &
Re:The device-mapper works fine here. (Score:2)
Re:The device-mapper works fine here. (Score:2)
megaraid (Score:3, Informative)
I like this one: MegaRAID SATA 150-4 [lsilogic.com]. Admittedly, I've only used it under OS X Server, as it's apparently what Apple uses in their OEM; but they do have linux drivers and I can only assume that they work as well, if not better. Straightforward setup on the CLI, and not too expensive.
Personally, for $300 I wouldn't screw around with a software raid unless this is your own personal box and the drives only have MP3s.
Re:megaraid (Score:2)
Re:megaraid (Score:2)
the linux megaraid drivers are pretty terrible. we have a couple of machines at work with *unbelievable* throughput problems due to the poor linux megaraid drivers.
MOD PARENT UP (Score:2)
For cheap setups, I go with 3ware. For more expensive ones, we use an external raid array with a scsi uplink to the computer. The cache, battery backup, and simplicity of host
Re:MOD PARENT UP (Score:2)
In July 2004 we had a disk fail, and the entire partition got knocked offline. Not exactly what you'd expect from a raid5 array, is it? We had to rebuild the array offline (ie, system downtime) and then fsck
Re:megaraid (Score:2)
Although it costs an extra $80, the MegaRAID SATA 150-6 [lsilogic.com] is really one of the best of its kind on the market, simply due to its available battery backup support. IME, battery backup can really make the difference in terms of reliability, especially when you have controller caching enabled (particularly if you are doing DB/transaction work with RAID 5). Just a thought.
Uhmm, what's your problem really... (Score:4, Insightful)
But can't you just use your raid card as a SATA card, and ignore the raid functionality? Why do you absolutely need it to be non-RAID? I'm sorry, but I'm having real trouble understanding what's the difficulty here...
2 problems (Score:2)
b) There's a good chance that even the non-RAID capabilities of the controller have been compromised by lack of documentation to write a good driver
Re:2 problems (Score:2)
Re:Uhmm, what's your problem really... (Score:2)
It's the Fastrack S150 SX4.
ARECA has worked nice for me (Score:2)
# hdparm -tT
Timing cached reads: 2052 MB in 2.00 seconds = 1026.16 MB/sec
Timing buffered disk reads: 380 MB in 3.01 seconds = 126.27 MB/sec
Re:ARECA has worked nice for me (Score:2)
Vanilla Western Digital SATA 250 GB
3ware, 3ware 3ware. (Score:5, Informative)
Why real hardware RAID? Say, for example, your boot drive goes out in a software RAID configuration. Your system is suddenly unable to boot, requiring manual intervention for a rebuild. With hardware RAID, the BIOS built-in to the card handles things smoothly and your system can boot without a problem.
3Ware has the best reputation. (Score:3, Informative)
Of all the PC RAID card manufacturers, 3Ware has the best reputation. However, you cannot boot from one drive in a 2-drive mirror. If for some reason you don't have a working 3Ware card, you cannot get your data. It is lost.
If you use 3Ware cards, keep one or two spare cards.
Re:3ware, 3ware 3ware. (Score:2)
This month's Linux Magazine has a review of a four port 3ware hardware RAID5 controller that is (duh) supported under Linux. They gave it 5/5 Penguins.
Now the card is $440, which may be more than you are willing to spend, but that would solve your problem.
Re:3ware, 3ware 3ware. (Score:2)
Hmm, I've never really had that problem with the bootable mirrored software RAIDs that I've set up.
There was a HOWTO about bootable software raid somewhere.... but it's what I used in the following.
I had a lab server on a remote site set up with two mirrored drives and the BIOS set to boot the first drive...then the second. That way, if one died, as they are a mirrored pair, everything still reboots fine. md detects the dud/missing
Re:3ware, 3ware 3ware. (Score:2)
I don't know if you've actually battle-tested this, but you'll almost certainly find it won't work, because /boot (or wherever your kernel and initrd are) can't be on a RAID partition.
Unless you're "manually" syncing up two copies of /boot onto each device, of course, but that's a rather ugly hack j
Re:3ware, 3ware 3ware. (Score:2)
Sure you can, since grub will boot off a raid1 partition easily. I do it on all of my remote servers, separate raid "array" for
As long as the kernel has support for raid built in (or you have the modules in an initrd), you'll be fine with a RAID1
Re:3ware, 3ware 3ware. (Score:2)
It works great, however, in JBOD mode (Just a Bunch of Disks), running software RAID
Re:3ware, 3ware 3ware. (Score:2)
I have a 7810 running 4x250G in the XP2600 box I'm typing this on. The card is in a normal pci slot. The fs of the single 702G (usable) partition is xfs (with su and sw set properly). I'm running kernel 2.6.11 w/ cfq io scheduling, udev, and a ge
Re:3ware, 3ware 3ware. (Score:2)
All I had to do to make interactive use of the machine exceedingly painful was to start dumping a big file from my Window
FUD (Score:4, Insightful)
If one drive in the pair fails, things keep ticking along smoothly. They're really just identical partitions with identical data on different disks.
LILO merrily writes boot code to the array without episode. Meanwhile, the machine's BIOS is happy to boot from disks other than primary-master, all by itself.
I've booted the system after randomly unplugging devices. It works just fine.
Why do all of you 3ware goons think that the world wants to buy hardware which offers no clear advantage over having no hardware at all? (As if I want to add -more- potential points of failure to my systems . .
Re:FUD (Score:2)
Re:FUD (Score:5, Insightful)
This is simply one advantage to using a real hardware raid card like the 3ware. There are plenty of other reasons too: Does your chipset/hardware support hot swapping? If you use SATA, does it support command queueing? Do your drives? How much cache does it have? Does it have cache at all? Can it tolerate all types of hardware failure? Does it have *ahem* 16 ports with individual controllers for each drive? It's not like the BIOS/IDE chipset makers write out in their specs how their hardware performs under drive failure conditions, so you have the overhead of testing each configuration to make sure it works properly before you have to rely on it. It's not so much a performance difference between hardware and software raid (until RAID-5 anyway) but an issue with how the hardware will respond when something goes wrong, which is one of the primary reasons for using anything above RAID-0 anyway.
Yes, running a 3ware card costs more. There are times when that $400 costs a lot less than the time spent configuring and testing an alternative software-only implementation. There are times when it doesn't and spending another $400 doesn't make a lot of sense. I have run both setups. I have machines deployed with both IDE software-only RAID arrays, IDE 3ware arrays, SCSI software RAID5's, SCSI Adaptec RAID's etc.. it's all application specific. There's no reason to call somebody a goon for recommending 3ware hardware. It's really good hardware; maybe you should try it sometime.
Re:FUD (Score:2)
It does... try it sometime. One caveat, it has to be RAID1.
Re:3ware, 3ware 3ware. (Score:2)
3ware's SATA implementation is ugly; it's effectively a bridge from their PATA one, so it doesn't support NCQ.
Personally I use Areca [areca.com.tw] cards - a 16-way card that can run RAID 6 (RAID 5 but with two parity discs) plus a hot-spare, and that has its own Ethernet port for remote access to the firmware, is rather good. Oh, and it has (unofficial) kernel sources suitable for 2.3 - 2.6.
Very good.
Re:3ware, 3ware 3ware. (Score:2)
Re:3ware, 3ware 3ware. (Score:2)
The right way to do this is either just get one of these [realweasel.com] or one of the many more expensive/featureful alternatives. Or better yet, just get a real server that has a real serial console (or if you run windows and/or have more money than brains, get some KVM over IP thing)...
A RAID controller with its own telnet service for remote access to the firmware... *shudder*...
Re:3ware, 3ware 3ware. (Score:2)
Restarting = downtime = bad. No, really.
The Areca card is for proper environments where you just don't have the option to take the system down to pull a disc or three from the array. Hotswap discs, hot-rebuilding of RAID arrays, etc. It's a proper RAID card.
Re:3ware, 3ware 3ware. (Score:2)
No... "proper" RAID cards have software that you can use to administer and
Re:3ware, 3ware 3ware. (Score:2)
Two areas you may run into problems with:
1) The cards require a lot of power and riser cards can be troublesome.
2) They aren't the fastest cards on the market. They do use a custom PATA=>SATA bridge, even on the 9500 cards. That being said, they are still blazingly fast and reliable.
I'd recommend getting the 9500 boards. The 8500's didn't support onboard cache and there is
Re:3ware, 3ware 3ware. (Score:2)
My personal experience with WD has been terrible. I swore off the drives many years ago, but it was hard to turn down 10K RPM drives. We took a risk getting WD drives and it didn't pan out. We've since (using 3ware's CLI) swapped out all of the WD drives for Seagates. Again, no down time, plus we were going to larger drives.
We had 3 failures of WD drives in the first 3 months of operation. It's been 4 and no sea
You know.... (Score:3, Funny)
HighPoint RocketRAID cards won't boot XP reliably. (Score:2)
HighPoint RocketRAID cards do not function well when used as the boot device for Windows XP. This was verified by HighPoint technical support. We did not try them under Linux. But read my comment above about timing issues.
I did not have problems for a long time. (Score:2)
I also did not have problems. Some motherboards would work correctly for weeks. Then there would be an unexplained failure of the mirror. HighPoint tech. support said they were not able to understand why the failures were occurring. (Promise told me the same thing.) Highpoint said that the failure was common.
Why not the 1640 cards? (Score:3, Informative)
Re:Why not the 1640 cards? (Score:2)
I bought a RocketRaid 100 [highpoint-tech.com]. While I had no problem getting it to work under Windows, I was unable to get it to work under any of a number of flavors of Linux. Of course, my ineptitude at compiling a patched linux kernel may have led to my difficulties.
I wound up using the card as a plain old IDE interface and then built software RAID on the drives connected to it. In retrospect, I should've bought a 3ware card despite its significantly higher cost, because it would've saved
What Kind Of Software RAID Am I Running? (Score:3, Funny)
From my notes when setting up my Soft-RAID server (Score:5, Informative)
It really depends on what you are using the server for. I ended up going for the pure software RAID option. It's for home and I'm cheap. If you're not cheap, or it is for a work server, I'd stick with a pure hardware solution.
________
Hardware RAID:
The expensive Adaptec, 3ware, etc SCSI cards found in most servers.
Pro - Offloads XOR calculations from the CPU to internal processor.
Pro - No manual intervention required in case of a raid failure.
Con - Expensive.
Con - Third Party and/or closed source drivers often required.
Semi-Hardware RAID:
These are typically the SATA RAID controllers built into motherboards, but the category also includes cheapo add-on cards. Generally, if it's less than 150 bucks, it's not full hardware RAID. I believe all of the RocketRAID cards fall into this section.
Pro - No manual intervention in case of a disk failure.
Pro - Cheap.
Con - Minimal or No CPU offloading.
Con - Third Party and/or closed source drivers often required.
Software RAID:
Use Linux and plain old SATA/PATA controllers to handle all of your RAID needs.
Pro - Very cheap.
Pro - No worry about driver incompatibility or closed source drivers.
Con - No CPU offloading. You essentially trade CPU power for disk speed/redundancy... and it's a significant trade.
Con - Manual intervention required in case of disk failure.
Con - PATA Only. Must be one drive per channel! NO SLAVES! Apparently data loss can occur on both drives in the chain if one goes bad. http://www.tldp.org/HOWTO/Software-RAID-HOWTO-4.html#ss4.1
Performance is also hurt in a Master/Slave combo.
________
Re:From my notes when setting up my Soft-RAID serv (Score:2)
Re:From my notes when setting up my Soft-RAID serv (Score:2, Informative)
Use Linux and plain old SATA/PATA controllers to handle all of your RAID needs.
Con - PATA Only. Must be one drive per channel! NO SLAVES! Apparently data loss can occur on both drives in the chain if one goes bad. http://www.tldp.org/HOWTO/Software-RAID-HOWTO-4.html#ss4.1
Performance is also hurt in a Master/Slave combo.
Um...why PATA only? I've done software raid using loopback devices. I can't see why SATA drives would be any less likely to work.
Re:From my notes when setting up my Soft-RAID serv (Score:3, Informative)
Re:From my notes when setting up my Soft-RAID serv (Score:3, Informative)
> Con - Manual intervention required in case of disk failure.
You can get around this for some failure modes, as long as your boot partition is always raid1. I do this at home,
The success of this depends on the disk either failing so badly that the system can't see it anymore and so boots off another disk, or that the part of the failed disk that holds
Re:From my notes when setting up my Soft-RAID serv (Score:2)
Personalities : [raid0] [raid1] [raid5] [multipath]
md0 : active raid1 sda1[1] hdc1[0]
120060736 blocks [2/2] [UU]
sda is a sata, hdc is a pata drive
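Checking that the array is healthy boils down to looking for an underscore in those [UU] markers. A small parser sketch over the /proc/mdstat format (the sample text above is inlined here rather than read from the live file):

```python
# Minimal /proc/mdstat health check: the [UU] marker has one 'U' per
# member that is up; an underscore marks a failed/missing member.
import re

mdstat = """\
Personalities : [raid0] [raid1] [raid5] [multipath]
md0 : active raid1 sda1[1] hdc1[0]
      120060736 blocks [2/2] [UU]
"""

def degraded_arrays(text):
    bad = []
    # For each mdN array, find the first bracket containing only U/_.
    for array, status in re.findall(r"^(md\d+) :.*?\[([U_]+)\]", text,
                                    re.MULTILINE | re.DOTALL):
        if "_" in status:
            bad.append(array)
    return bad

print(degraded_arrays(mdstat))  # → [] (array is healthy)
```

On a real box you would read the text from open("/proc/mdstat"); the parsing is the same.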
Re:From my notes when setting up my Soft-RAID serv (Score:3, Informative)
Maybe if you're using a 486.
The CPU overhead of software RAID 5 is insignificant on any remotely modern machine. Even "ancient" ca. 500Mhz P3s have checksumming speeds over 1GB/sec
We use software RAID-1 here... (Score:2)
And one thing I've been wondering about, obviously you would keep the drives on different IDE channels if possible (hda, and hdc here).
If you also have non-RAID drives on hdb, and a CD-ROM on hdd... that should not overly influence the data speeds of the RAID drives except when there is actual data transfer on hdc/hdd, or would a machine automatically split the available pipe upon having two IDE devices (master/slave) on a given IDE channel?
Re:From my notes when setting up my Soft-RAID serv (Score:2)
Woah! Wait, what?
SATA is one drive per channel.
Use the Onboard SATA and bypass the Hardware RAID (Score:5, Informative)
1. As long as the onboard SATA chip is well supported on your linux kernel, use the onboard chip.
2. Don't worry about the "hardware RAID" built into the motherboard. You don't have to use it. In fact, most people bypass it.
3. Use the non-BIOS SATA driver for your motherboard. Some motherboards have two different chips. Mine (an Epox 8RDA+Pro nForce Ultra2/400) uses both the common Silicon Image SIL3114 which supports 4 SATA drives and an additional 2 SATA drives provided by the onboard nForce 2 Ultra Gigabit MCP chipset. Quite nice for RAID and I still have normal PATA IDE drives 0 - 3.
4. Quite often the SATA RAID hardware only supports RAID 0,1 and 10 (or 01 depending). If you're looking for RAID 5 then you'll have to buy a more expensive outboard solution. The problem with outboard solutions are that they will eat into your PCI bandwidth. If you will be using PCI-X then you will probably also be paying significantly more for your outboard solution. Most people have a ton of CPU lying around, so handing off the I/O doesn't really buy you that much.
5. When it comes down to it you might as well just use software RAID because you have more control over it. You can use the onboard SATA controllers which allow you to take advantage of the increased on-motherboard bandwidth as well as having a significantly less expensive solution.
6. Another advantage to using Linux software RAID is that you don't have to learn a new RAID management system every time you upgrade your machine and controller. You can also connect to your machine remotely and manage your RAID system through a firewall. Sometimes you can do that with your hardware RAID system, and sometimes you need to manage it from the BIOS itself.
7. Once you get comfortable with software RAID you can experiment with mixing and matching various I/O systems underneath it. One of the things I'd like to play with would be using software RAID with Firewire 800 external drives in a pseudo-SCSI arrangement.
8. The LVM2 system doesn't need software RAID, but it works very nicely with it nonetheless and gives you snapshot support etc.
9. Personally, I'm going for RAID 10 (striped mirroring) because drives have gotten very inexpensive and I don't mind burning a few more to get higher I/O rates. Remember, if you go with a mixture of RAID 0 and 1, then you want striping over mirroring -- that way, if you have a single drive failure, the array keeps going.
Have fun and don't use RAID instead of backups. Backups save the stuff that you deleted intentionally but need to recover.
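The capacity side of that RAID 10 vs. RAID 5 decision is easy to put numbers on. A small sketch (drive count and size chosen to match the 4x250GB setups mentioned in this thread):

```python
# Usable capacity for n identical drives of a given size, by RAID level.
def usable_gb(level, n, size_gb):
    if level == 0:           # stripe: all capacity, no redundancy
        return n * size_gb
    if level in (1, 10):     # mirroring (or stripe of mirrors): half
        return n * size_gb // 2
    if level == 5:           # one drive's worth of space goes to parity
        return (n - 1) * size_gb
    raise ValueError("unsupported level: %r" % level)

# Four 250 GB drives:
for lvl in (0, 5, 10):
    print(lvl, usable_gb(lvl, 4, 250))  # → 0 1000 / 5 750 / 10 500
```

So RAID 10 costs an extra drive's worth of space over RAID 5 on four drives, in exchange for the simpler parity-free writes the poster is after.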
Doesn't anyone use striping anymore? (Score:2)
I'm doing this because the mirroring is a performance AND a disk space hit, and is only worth it if I am planning for the disk to fail. With striping, I lose $20 for buying two 80 gig drives instead of one 160 gig drive, and I get twice the speed.
The annoying part is that I have to redo stuff if I want to add to the array. That's the one a
Promise SATA150 TX4 (Score:2)
I use straight-up Linux md (Score:2, Informative)
Since a desktop/workstation machine does mostly reads anyway, I am getting the benefit of striped reads. I don't really care that my writes incur a slight penalty.
Granted, hardware RAID would use less CPU time... but hardware RAID is tied to a particular card. What happens if you move your disks to a new machine? You have to move the RAID card. If you go with an integrated RAID solution on your motherboard, that's tough.
With Linux md RAID, that is
Failure question (Score:2, Interesting)
I've considered setting up software raid on my Linux server, but I haven't found any doc yet about what happens in the event of an unexpected crash or poweroff part way through writing a RAID-5 stripe.
Suppose I have 4+1 disks in a RAID-5 configuration, and during a write to a stripe of the disk only two of the disks are written to before the system crashes. This leaves me with 2 disks with new content, 2 disks with old content, and a useless parity.
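That concern (the classic RAID 5 "write hole") can be made concrete with toy XOR arithmetic. In this sketch the stripe values are made up; it shows the new parity landing while only half the data writes complete, so a later rebuild silently produces garbage:

```python
# RAID 5 write hole: parity for the NEW stripe was written, but the
# crash interrupted the data writes, so parity no longer matches disk.
def xor(*vals):
    out = 0
    for v in vals:
        out ^= v
    return out

old = [1, 2, 3, 4]           # stripe before the write (4 data disks)
new = [9, 8, 7, 6]           # stripe we were writing when power died

on_disk = [9, 8, 3, 4]       # only the first two data writes completed
parity_on_disk = xor(*new)   # ...but the new parity block did land

# Now disk 0 fails. Rebuild it from the survivors plus parity:
rebuilt = xor(on_disk[1], on_disk[2], on_disk[3], parity_on_disk)
print(rebuilt == on_disk[0])  # → False: silent corruption
```

This is why a resync (or journaling/battery-backed cache on hardware controllers) after an unclean shutdown matters: the parity has to be recomputed from whatever actually made it to the platters.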
I found a page [redhat.com] at RedHat that indicates that as of 200
I have a Linux RAID question (Score:2)
I have a Promise FastTrak100 Lite controller built into my MB, and I've been using it for firmware RAID for about three years now. It worked fine in Windows (using the Promise SCSI emulation drivers) and in Linux 2.4 (via
Is there any way to get a 2.6 kernel to see the array while leaving the data intact?
Re:I have a Linux RAID question (Score:3, Informative)
Networked RAID, anybody? (Score:2)
I know DRBD [drbd.org], but that's more for HA pairs, and it cannot sync drives in the background while mounted.
Some tips (Score:5, Informative)
Hardware RAID has these advantages:
1) it offloads operations to the controller, so it eats less CPU/IO bandwidth.
2) can have battery backed cache
3) often looks like "just a SCSI controller" to Linux and the boot loaders, so booting from e.g. a RAID 5 set is often easier.
Software RAID has these advantages:
1) is cheaper
2) CPU time lost makes hardly any difference
3) has well-tested and supported tools to manage your raid setup. (imagine if you could only set up your raid sets by rebooting and entering the raid bios)
4) disk-layout is non-proprietary (controller died? don't have the same brand lying around? manufacturer left the market? no problem!) - so all-around more flexibility.
Look here for properly supported sata disk controllers:
http://linux.yyz.us/sata/
Some of these cards come with BIOS smarts that provide you with software raid, which offers you the advantage of point 3) of hardware raid, i.e. BIOS and boot loader support for your raid.
However, this does mean that the on-disk layout has to be recognized in Linux, so Linux can make sense of it and set up the raid sets properly. In Linux 2.4 there were some drivers that did that themselves; for Linux 2.6 there's now a little userspace program that recognizes a whole bunch of on-disk layouts and sets them up using the device-mapper facility (part of LVM2).
The advantage of this is that you can use the same well-tested and -supported linux drivers mentioned on http://linux.yyz.us/sata/ , but still use the (bios) facilities provided by the hardware. Another advantage is that this program will probably be used by all ATARAID ("mostly-software-raid") devices on linux, so it is, or will be well-tested and -supported in itself.
You can find this program, called DMRAID here:
http://people.redhat.com/~heinzm/sw/dmraid
So if you decide to go the SW-RAID way, think and decide if you want the advantage of dmraid. I haven't tried this myself yet, and the only aspect I'm unsure of is the management aspect of it (like with HW-RAID drivers) - DMRAID doesn't use MDADM, so how can you properly monitor, hot-add,
MDADM itself isn't going away any time soon either, if I understand correctly. (And even if it does, it's probably very likely that they'll make DMRAID understand the MDADM on-disk layout to provide an upgrade path.)
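On the monitoring question: with plain md, a minimal /etc/mdadm.conf along the lines of the sketch below (the device lines and mail address are hypothetical), plus mdadm running in monitor mode as a daemon, will mail you when a member fails:

```
# /etc/mdadm.conf -- hypothetical example
DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 level=raid5 num-devices=4 devices=/dev/sd[abcd]1
MAILADDR root@localhost
```

How dmraid-managed sets get the equivalent monitoring is exactly the open question above; this fragment only covers the mdadm side.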
If however you decide to go the HW-RAID way, make sure you get a reliable and reputable manufacturer - with open source drivers (!), preferably with a known on-disk layout, and be prepared to spend money. I've heard a lot about 3ware, but I have no direct experience with them myself, so I can't vouch for them.
The real issue (Score:3, Funny)
Re:The real issue (Score:2)
Need more information (Score:3, Interesting)
Personally, I'm not worried about performance beyond being able to play full-motion video. I have a PPC 604 180MHz from 1997 with a SCSI card and a RAID rack. 8x18GB at RAID 5 gives me 118GB or so of redundant storage, and I serve it over NFS to my other machines. Just for kicks, I have it going through a cryptoloop, too (LVM on cryptoloop on Linux RAID5 on SCSI). The initial cost was low (the drives were $15 each, the rack was around $100 on eBay, the trays were given to me, the SCSI card was under $40 on eBay, the 100Mbit ethernet card was about $20, and the computer had been a doorstop until I put Linux on it). The ongoing (electricity and cooling) costs are a little high (they are 10K drives), but that's life. I can play an MPEG or AVI from two machines on the network at once without hiccups, so I'm happy.
If I were going to build a RAID server today, I'd probably buy a Mac Mini, four large PATA drives, and four FireWire enclosures. Assuming 160GB drives, I'd have 320GB of RAID5 storage available over NFS (with a spare drive to swap in) for an investment of under $1200, and I can vary that cost with the size and number of drives. Yes, I'd be daisy-chaining FireWire, which means that each drive has only a portion of the total bandwidth. Then again, my network card will only manage 100MBit, so 3/4 of the FireWire bandwidth will be of minimal use anyway (except for reducing latency due to readahead and such, of course).
Re:Need more information (Score:2)
There was a 3rd-party software implementation of RAID 5 for OS X, but they went out of business.
Re:Need more information (Score:2)
Don't get a four port controller w/ software RAID (Score:2)
The problem you run into is bandwidth.
Picture four drives on a single PCI controller running in a software RAID5. For every block written, four commands must be issued to the PCI card, one for each drive.
This works great w/ a hardware RAID controller, because they emulate a single SCSI drive, thus only one write command.
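The rough numbers behind that concern, assuming a classic shared 32-bit/33 MHz PCI bus (the exact ceiling varies by chipset and overhead):

```python
# Shared-bus arithmetic for software RAID on plain 32-bit/33 MHz PCI.
pci_mb_s = 133                 # theoretical bus ceiling in MB/s
drives = 4

# Every member's traffic crosses the same bus, so under concurrent
# full-stripe I/O the ceiling is split across the drives:
per_drive = pci_mb_s / drives
print(per_drive)  # → 33.25 MB/s per drive, before protocol overhead
```

With period drives sustaining 50+ MB/s each, four of them can saturate the bus on their own, which is why PCI-X or onboard (chipset-attached) SATA ports come up repeatedly in this thread.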
Even so: we saw better throughput via software raid (tested via Bonnie++ on a knoppix 'tora
A Few Questions From The Linux-RAID Newbie (Score:2)
1) The original poster is looking to set up a 4-drive array, RAID-5 preferred, and is looking for a 4-port SATA adapter. My recommendation would be to get two adapters with two drives each to provide greater redundancy. I am guessing this can be done with stock PCI SATA controllers. Would a configuration such as this have a negative performance
How I built a 2.8TB RAID storage array (Score:2, Interesting)
Hardware RAID not always the fastest. (Score:2)
Our first setup was 2x15kRPM U320 SCSI drives on an LSI MegaRAID controller. Apparently the 2.6 kernel driver has serious issues, because we can't get read performance over 50MB/s. This is slower than reads off a single drive on a vanilla SCSI controller.
Our second setup was the same two drives on an LSI U320 SCSI HBA. The HBA has a 'simple' striped RAID via firmware. This wo
I've had major problems with Promise cards... (Score:2)
I've had major problems with Promise cards under Windows XP. Promise RAID is part software, part hardware. RocketRAIDs have had the same problems. There seem to be basic problems; for instance, Microsoft may not want other RAIDs competing with their Windows 2003 software RAID. Windows seems to have timing problems that confuse RAID cards.
The problem seems to be detecting when the RAID array is broken. This problem has gotten much worse with faster motherboards, because the timing window is shorter. If so, then Li
Re:it doesn't really matter, does it? (Score:2)
also, software raid is migratable! Any Linux machine with software raid modules will read the partitions and use them, whereas hardware raid ties you to a specific chipset and even card model.
hardware raid has its place, but linux software raid is VERY VERY GOOD at SIMPLE raid
Re:it doesn't really matter, does it? (Score:2)
I raid5 5 external scsi drives. When I want to swap one out, I unplug it, plug in the new one, add it to the raid set, and you're done. No downtime. Simple. Checking
Re:it doesn't really matter, does it? (Score:3, Informative)
Also, since you're using SATA, why only 4 drives? Even wit
Re:it doesn't really matter, does it? (Score:2)
Because 4 drives fit perfectly behind the 120mm fan in the front of the case.
Because I can power everything in the case with a common power supply, also with a 120mm fan.
Because that's all I can afford.
It doesn't have to be high performance. It's my linux box for the house. The RAID is for keeping drive images, mp3s, video recorded from TV, etc.
Re:Promise (Score:2)
I don't quite understand how you're going to boot a mirrored root volume automagically via software raid (much less a striped one). Or is it acceptable to have to reconfigure to boot?
Re:Promise (Score:2)
The only real restrictions are that
Re:Highpoint-Tech SATA Raid Cards (Score:2)
Beware of a company that is so sloppy: (Score:2)
Re:Software raid... (Score:2)
Re:Software raid... (Score:2)
Re:whooosh (Score:2)
Except for 3ware, they make a damn fine card. You pay the money, though.
I use software raid on our production field servers where the machine losing a drive would be catastrophic due to the fact that it'd take at least a day to get out there and fix. I use raid1, and it's worked flawlessly. I've had a few failures (western digital.... I'm going to maxtor) and all of those times it's just purred along nicely.
I guess I shoulda put
Re:I suggest moving to Windows (Score:2)