Cross Platform, Low Powered Home Servers w/ RAID? 94
Milo_Mindbender asks: "At home I've collected too much data to easily back up, so I've been thinking about RAID5 for a little extra data security. I multiboot my computers between Linux and Windows, so I really need a RAID solution that will make the data at least readable by both OS's. I don't think this can be done on a single machine (can it?), so I'm looking to put together a Linux home server with RAID5 serving both Samba and NFS. Aside from the usual questions (software/hardware RAID, types of disk to use, etc.), because I live by myself in an apartment I have a few tricky requirements I hope the Slashdot crowd can help me with." How would you set up a RAID5 server to perform Samba/NFS sharing duties without wasting a lot of wattage while it idles?
"I hate to waste electricity, so how can a Linux RAID5 server be set up to automatically spin down to the lowest possible standby power use, then spin back up when a computer accesses it? I don't have a basement, garage or other remote place to put the thing, so it needs to be quiet, or at least not die a thermal death if I lock it in a closet. What's the sweet spot for choosing CPU type/speed, hardware/software RAID controller, motherboard and memory to make a home server? Since this is only going to be serving a few machines (and maybe doing router/gateway duty), I'm sure there's a point where adding more CPU horsepower doesn't improve performance much. Any suggestions on motherboards, cases or even complete systems that work particularly well for this kind of small headless home server?"
Yep .. (Score:2, Informative)
Re:Yep .. (Score:2)
I think maybe because NFS uses the available bandwidth more efficiently? I'm not really sure. I don't do enough big data transfers to really care.
Re:Yep .. (Score:2)
Re:Yep .. (Score:2)
If you're using Windows, then Samba is the only answer. But this is slashdot, so I assume that you don't use Windows. In that case, I prefer the 20+ years of testing that NFS has endured (ov
Re:Yep .. (Score:2)
There's AFS and Coda. Haven't used either, so I can't comment on anything other than their existence.
> That's not true. If you install Microsoft Services for Unix, you can use and serve NFS as well.
Sure, but that's like using Samba for Linux. Not the best solution. Windows file sharing for Windows machines and NFS for UNIX will cause you the least headaches.
Re:Yep .. (Score:2)
It's not that fast, though
Re:Yep .. (Score:2)
Re:Yep .. (Score:2)
Re:Yep .. (Score:2)
Re:Yep .. (Score:2)
I agree that NFS has been around longer, but that doesn't necessarily make it better. Have you never had a mount lock up on you that couldn't be recovered? It happened regularly to me. Doing research to fix the problem led me to the real reason that I selected Samba - the community behind it. Say what you will, but there sure seem to be a lot more people running Samba than NFS these days. There doesn't seem to be an equivalent to
Re:Yep .. (Score:2)
Here's a tip for NFS mounts. Use the option "-o intr,soft". This will allow you to kill processes that have a file open on a hung NFS mount, thereby allowing you to umount the filesystem. NFS uses "hard" mounts by default, in which case a process won't get a return from a read or write system call until that call is successful, which has the side effect of causing processes to hang when the NFS server goes away. Soft mounts will al
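For reference, the same options can live in /etc/fstab; a sketch (the server name, export path, and the timeo/retrans tuning values are made-up examples):

```
fileserver:/export/data  /mnt/data  nfs  soft,intr,timeo=30,retrans=3  0  0
```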
Re:Yep .. (Score:2)
Re:Yep .. (Score:2)
This is surprising news. I'll have to stop using it, then.
Re:Yep .. (Score:1)
Using 100 Mbps ethernet the transfer rate is ~4.5 MB/s with Samba and ~11 MB/s with NFS.
There are also the weird cases when transfer rate drops to ~200 kB/s with Samba. I've only seen that happen when transferring between Windows and Samba though. Between two Samba boxes the performance seems to be consistent.
Hardware RAID (Score:3, Interesting)
Re:Hardware RAID (Score:2)
We've been running software RAID for a while now and love it. The problem with hardware RAID is that in order to talk to the controller, you have to have special drivers. Why do you need to talk to the controller? Well, how are you going to tell if something goes wrong with a drive and it needs to be replaced? I once worked for a company that had all its RAID drives fail over
Re:Hardware RAID (Score:2)
Sam
Re:Hardware RAID (Score:3, Informative)
For our Linux boxes, we just run the following bash script every hour in cron:
Why in the hell do you do that? Go look up mdadm's --monitor --scan mode and its -ft (daemonise, test) flags, and then configure smartd to also email you warnings. Beats the shit out of some manual process that relies on the /proc format not changing over time!
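For anyone who goes to look those up, a sketch of the relevant config (mail address and device name are placeholders):

```
# /etc/mdadm/mdadm.conf - 'mdadm --monitor --scan' mails this address on array events
MAILADDR root@localhost

# /etc/smartd.conf - check SMART health, log errors and self-tests, mail on trouble
/dev/hda -H -l error -l selftest -m root@localhost
```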
Re:Hardware RAID (Score:2)
I guess we coul
Skip RAID (Score:2)
Don't use RAID to begin with. It's needlessly expensive for a home server and introduces an unnecessary point of failure. Maybe I have bad luck, but I have experienced many more RAID controller failures than I have hard drive failures. I even once got redundant RAID controllers, and the controller-controller failed. Let's say you were willing to spend $500 on this RAID solution: rather than do that, spend $250 on improving your backups and pocket the difference.
Don't worry about CPU (Score:4, Informative)
Go with slower hard drives, i.e., 7200 RPM drives or maybe slower, and you won't have the heat problems. However, you might want to look into RAID 15, so if you can get a system that will hold 6 drives, even better.
Now remember: trade away CPU power and raw disk speed for the thermal/power savings
Re:Don't worry about CPU (Score:2)
Amen.
My home server is a Pentium 233MMX, and has no problem saturating my 100BaseT network.
If you're going for low power/low noise, there's a lot of room to underclock any modern CPU.
Re:Don't worry about CPU (Score:2)
Raid 5 will work with 3 or more drives, and only 1 drive's worth of space is unusable (lost to parity) regardless of the number of drives in the raid. Raid 15, if I understand it right (I'm much more familiar with raid 5), would set aside 4 drives' worth as unusable in a 6 drive array.
If I have a bunch of 100GB drives:
a 3 drive raid 5 gives me 200GB of usable space.
a 6 drive raid 5 gives me 500GB of usable space.
a 6 drive ra
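The figures above follow from the usual rule: an n-drive RAID5 of equal d-GB drives gives (n-1)*d usable. As a quick shell sketch:

```shell
# Usable RAID5 capacity: (number of drives - 1) * size of the smallest drive
raid5_usable() {
    drives=$1
    size_gb=$2
    echo $(( (drives - 1) * size_gb ))
}

raid5_usable 3 100   # 3 drives of 100GB -> 200
raid5_usable 6 100   # 6 drives of 100GB -> 500
```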
Re:Don't worry about CPU (Score:2)
Re:Don't worry about CPU (Score:1)
Some Ideas (Score:4, Interesting)
Then grab a PCI SATA card. It won't need RAID capability, just a ton of SATA ports.
Attach a smallish hard drive to the master onboard PATA port and set a CDROM as the slave on the same channel. Install your SATA card and attach some big-assed SATA drives.
Install Debian to the PATA drive and then remove the CDROM. Disable, in BIOS, everything you won't be using.
Once you are in Debian and everything works, use 'mkraid' to initialize the SATA drives in a RAID5 config. Mount that under
Some might say that RAID5 will be too slow. But, across a network, chances are the wire will be saturated before the hard drives hit the sustained transfer rate. If you are concerned about performance, throw a Gig-E NIC in there and use RAID0+1 or RAID3.
I'm not sure how well Linux can deal with suspending the hard drives in a RAID controller during inactivity. If the kernel can handle it, use something like 'hdparm' to sleep the drives when they aren't in use.
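A sketch of the hdparm approach, assuming the kernel and controller pass the commands through (device names are placeholders, and note that periodic array checks or journal flushes can wake the drives right back up):

```shell
# -S 240 encodes 240 * 5 seconds = 20 minutes of idle before spindown
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    hdparm -S 240 "$disk"
done
```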
Good luck, man...
Search.. (Score:5, Informative)
http://ask.slashdot.org/article.pl?sid=05/09/25/0
http://ask.slashdot.org/article.pl?sid=05/10/07/2
http://ask.slashdot.org/article.pl?sid=05/11/09/0
http://ask.slashdot.org/article.pl?sid=05/04/27/1
*cough*
Re:Search.. (Score:1, Funny)
Re:Search.. (Score:2, Funny)
Re:Search.. (Score:4, Insightful)
The objectives of large-scale, redundant storage that can be put in a closet are needs unmet by today's storage designs, and are likely to be as common tomorrow as wireless ethernet is today.
So.. basically.. snoo snoo off.
I'm using something like that... (Score:5, Informative)
I recently had to set up two new servers. One is for business, and one is for personal data. For both, I used RAID 5. They run NFS and Samba, with different directories shared as needed to other systems. RAID 5 is EXTREMELY simple to set up (it's a one line command, once you install mdadm, which, on Debian, installed like a dream), and I'd just suggest Googling for mdadm and tutorial. You'll get several tutorials. There's really no need to pay for hardware RAID cards on Linux (unless you're using an old, slow system). Besides, until you get into the range of something like $300, the RAID cards all do the work through drivers anyway, so you might as well just get a cheap ($10-$20) PCI IDE Controller card to add to your existing IDE channels. Just make sure it works on Linux and is NOT Adaptec (they fsck with the drive order).
On both my systems, all the drives are the same size and model number. I figure you can't always tell if a 160GB drive is 160GB or 140GB, and I didn't want to mess with that. RAID 5 takes 3 drives, but with mdadm, you can add a spare for failover (and the monitoring daemon will e-mail an account on that system in case of failure, so you have a warning to replace the bad drive). My only concern about using the same model for all drives is that there may be a flaw in that model. I found drives that were given a large number of good reviews at NewEgg.
You can also add more spares and more devices with mdadm, or replace faulty devices (not hot swappable, unless you have special hardware, and I don't even know if Linux supports that).
One last note on mdadm: when you first set up a RAID 5 array with it, you'll get an immediate warning of something like a degraded event. This is normal. mdadm (written by the maintainer of the Linux kernel's RAID code) creates the array in degraded mode and rebuilds onto the last drive, which is faster than computing parity across the whole array from scratch.
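As a concrete illustration of that one-line creation (device names are placeholders; this is a sketch, not a recipe):

```shell
# Three active members plus one hot spare, as described above
mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
      /dev/hdb1 /dev/hdd1 /dev/hdf1 /dev/hdh1
cat /proc/mdstat    # shows the initial build mentioned above
```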
Re:I'm using something like that... (Score:5, Informative)
You can also add more spares and more devices with mdadm, or replace faulty devices (not hot swappable, unless you have special hardware, and I don't even know if Linux supports that).
Here's another tip: If you're using Linux software RAID, carve your drives into multiple partitions, build RAID arrays over those, then use LVM to weld them into a larger pool of storage. It may seem silly to break the drives up into partitions, just to put them back together again, but it buys you a great deal of flexibility down the road.
Suppose, for example, that you had three 500GB drives in a RAID-5 configuration, no hot spare. That gives you 1TB of usable storage. Now suppose you're just about out of space, and you want to add another drive. How do you do it? In order to construct a new, four-disk array, you have to destroy the current array. That means you need to back up your data so that you can restore it to the new array. If there were a cheap and convenient backup solution for storing nearly a terabyte, this topic wouldn't even come up.
If, instead, you had cut each 500GB drive into ten 50GB partitions, created ten RAID-5 arrays (each of three 50GB partitions) and then used LVM to place them all into a single volume group, when it comes time to upgrade, you will have another option. As long as you have free space at least equal in size to one of the individual RAID arrays, you can use 'pvmove' to instruct LVM to migrate all of the data off of one array, then take that array down, rebuild it with a fourth partition from the new disk, then add it back into the volume group. Do that for each array in turn and at the end of the process you'll have 1.5TB, and not only will all of your data be safely intact, your storage will have been fully available for reading and writing the whole time!
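A compressed sketch of that carve-and-weld layering (all names are placeholders, and a real setup would repeat the mdadm line for each slice rather than just two):

```shell
# One RAID5 array per partition 'slice' across the three drives
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda1 /dev/hdc1 /dev/hde1
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/hda2 /dev/hdc2 /dev/hde2

# Weld the arrays into one volume group, leaving slack for future pvmoves
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate --name data --size 80G storage
mkfs -t ext3 /dev/storage/data
```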
Note that this process isn't particularly fast. I did it when I added a fifth 200GB disk to my file server, and it took nearly a week to complete. A backup and restore would have been faster (assuming I had something to back up to!). But it only took about 30 minutes of my time to write the script that performed the process and then I just let it run, checking on it occasionally. And my kids could watch movies the whole time.
For anyone who's interested in trying it, the basic steps to reconstruct an array are as follows. This example will assume we're rebuilding /dev/md3, which is composed of /dev/hda3, /dev/hdc3 and /dev/hde3 and will be augmented with /dev/hdg3
In order to make this easy, you want to make sure that you have at least one array's worth of space not only unused, but unassigned to any logical volumes. I find it's a good idea to keep about 1.5 times that much unallocated. Then, when I run out of room in some volume, I just add the spare 0.5 to the logical volume, and then set about getting more storage to add in.
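Spelled out per array, the reconstruction described above might look like this, using the device names from the example (the volume group name 'storage' is assumed; treat it as a sketch and check each step against your own layout):

```shell
pvmove /dev/md3                  # migrate all extents off this array
vgreduce storage /dev/md3        # drop it from the volume group
mdadm --stop /dev/md3            # take the old 3-partition array down
mdadm --create /dev/md3 --level=5 --raid-devices=4 \
      /dev/hda3 /dev/hdc3 /dev/hde3 /dev/hdg3    # rebuild with the new disk's partition
pvcreate /dev/md3                # re-initialize it as an LVM physical volume
vgextend storage /dev/md3        # and add it back into the volume group
```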
Re:I'm using something like that... (Score:2)
This is a great idea.
I'm thinking about rebuilding my home server which mainly is a mythtv backend. Currently using LVM to get bigger partitions than I have drives (my tv partition is at 480GB right now) and was thinking about RAIDing them for added security, but was put off by the fact you can't easily extend a raid.
I'll follow your tip, but will probably add boot and swap partitions to every drive. Not because it's needed on every drive, just to keep the setup consistent over all drives.
Couple quest
Re:I'm using something like that... (Score:3, Informative)
I'll follow your tip, but will probably add boot and swap partitions to every drive. Not because it's needed on every drive, just to keep the setup consistent over all drives.
I did that too. I also had a couple of drives which were actually slightly bigger than 200GB, so I used the extra space for /root partitions (mirrored).
This will work with those newfangled extended partitions, right? Didn't use those since the days we dual booted DOS and OS/2.
Yep. In fact, to keep the numbering clean, it's a
Re:I'm using something like that... (Score:2)
Your ideas and suggestions are welcome.
Bear with me - I'm still recollecting parts of my just exploded brain :-)
I can see you wouldn't want to extend a degraded raid though. OTOH, if one knows that one can reconfigure it later, no trouble just replacing the disk and then rework one partition after the other, at one's leisure.
What I'm trying to work out right now is this: I read the total size of a raid is (number of drives -1)*size of smallest drive. Right now there's 4 drives in that computer - 120,
Re:I'm using something like that... (Score:3, Informative)
Total usable space = Sum_{j=1}^{m} (N_{j} - 1) * S_{j}, where Sum_{j=1}^{m} is the sum over m arrays, N_{j} is the number of partitions in the jth array and S_{j} is the size of a partition in the jth array. One thing that this equation shows us, is that each array in the LVM does not need to be the same size! The other constraint in
Re:I'm using something like that... (Score:2)
Re:I'm using something like that... (Score:2)
Thanks for the lead. I started with 40GB as 'ideal' size because it'll fit best into my disc sizes. I realized that even a 2 drive raid is ok, because it can still work - the more partitions, the better, though.
I put 3 4-drive arrays at 120GB each, 1 3-drive array at 80, and a 2-drive array at 40, with 50 left over. Gets me 480GB usable out of 730.
I could even scale down to 20GB partitions - not to optimize usage, but to allow for smaller partitions which will help in extending/moving to another dr
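The 480GB figure above checks out; a quick shell sketch of the arithmetic with 40GB slices:

```shell
# Per-array usable space with equal slices: (members - 1) * slice size in GB
usable() { echo $(( ($1 - 1) * $2 )); }

# Three 4-member arrays, one 3-member array, one 2-member mirror
total=$(( 3 * $(usable 4 40) + $(usable 3 40) + $(usable 2 40) ))
echo "${total}GB usable"    # out of 730GB raw
```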
Re:I'm using something like that... (Score:1)
Re:I'm using something like that... (Score:2)
Can't do that. The idea of a raid 5 is that one partition is allowed to fail without data loss. As soon as you put 2 partitions of the same array onto the same drive, losing that drive will make your raid fail.
If you don't care about that, easiest is to forget about the raid and go with LVM directly on the drives.
Re:I'm using something like that... (Score:2)
Right now there's 4 drives in that computer - 120, 160, 200 and 250GB. By slicing them up to partitions and taking one partition of each drive into a raid, I'd get the same size than doing a single raid over all drives - a maximum of 3*120=360GB. Losing over half my current diskspace for redundancy.
Not really. You'd have 360GB in the RAID array(s), but you'd still have the other 40+80+130 = 250GB available to use in other ways. If you wanted to maximize your space, but have redundancy on as much of it
Re:I'm using something like that... (Score:1)
Essentially, you are doing RAID-0 (striping, no redundancy) of RAID-5 blocks. This means that if a single RAID-5 block goes out (not a parity rebuild, but failed), so does all the rest of your data.
What I do is simply software Raid5, and when I need to expand, bring my computer and a new disk in to work, dump my data onto a big frickin' disk array we have there, reformat the Raid5 with an extra disk, restore, bring home, vio
Re:I'm using something like that... (Score:2)
Essentially, you are doing RAID-0 (striping, no redundancy) of RAID-5 blocks. This means that if a single RAID-5 block goes out (not a parity rebuild, but failed), so does all the rest of your data.
Maybe I'm missing something, but what you're saying doesn't make any sense. How would one of the partition-based RAID-5 arrays fail? The exact same way a RAID-5 array that uses whole disks would fail: two failed hard disk drives. In either case, if you're doing RAID-5 of complete disks or multiple parallel
Low wattage storage array (Score:3, Interesting)
Servers don't reboot (Score:3, Insightful)
However, you've then got all your eggs in one basket... not a good long term situation... you're going to need off-site backup... which is yet another Ask Slashdot question.
--Mike--
There is something new here... (Score:1)
Has anyone used Solaris 10? ZFS is looking nice; I just wonder about the power management.
VIA C3 (Score:4, Interesting)
Now, anyone know of a socket 370 motherboard that'll take 4 or more SATA drives?
Re:VIA C3 (Score:2)
Of course, there is also the regular LV Pentium M, which hits 1.5GHz at a TDP of 10W.
Re:VIA C3 (Score:2)
Plus, why pay Intel prices?
Re:VIA C3 (Score:2)
True, though there are still Pentium M mini ITX motherboards, and of course the Asus adapter lets you use it in any Asus board.
Plus, why pay Intel prices?
Because Pentium Ms have much higher performance while drawing less power. The 1GHz C3 is three or four years old, it can't compete because it is simply out of date. Plus I have a hard time trusting Cyrix derived cores ever since the horrible Cyrix M2.
Yes, the Pentium M costs more, but
Re:VIA C3 (Score:2)
Re:VIA C3 (Score:2)
And because the Pentium-M draws HALF the power of the C3 while it is busy being faster, and he wants a low power solution. 5W is a nice improvement over 11W, or whatever the C3 was.
Re:VIA C3 (Score:2)
Pray tell, what other services are going to require extra CPU power?
Re:VIA C3 (Score:2)
You're also mistaken about software RAID's CPU usage. Doing some googling, one user reported a 5-disk RAID-5 array used 80% of the processing power of an Athlon 700 to do writes. The Cyrix 3 core is
Re:VIA C3 (Score:2)
> machine. Some of the better clients such as Azureus are
> notoriously demanding on the CPU (and memory) front. Perhaps
> one might use it as a media server, which sometimes involves
> transcoding the video to a format supported by the output
> device (MPEG-2 or WMV).
The question was about a file server. File servers usually don't have a mouse, much less a bittorrent client.
> You're also mistaken about software RAID's CPU usage. Doing
Re:VIA C3 (Score:2)
I run Linux 2.6 software RAID-5 on systems with Pentium II 200MHz CPUs, and stick GB-sized databases on top, all on ReiserFS. The guy whose Athlon couldn't cope was clearly doing something wrong.
12W is low power in my book. Sure, maybe a Pentium-M can do it in 5W, but the price diffe
Re:VIA C3 (Score:2)
Given that a Pentium II 200MHz can handle RAID-5 without breaking a sweat (I have some seriously obsolete servers running at work), CPU speed really isn't much of an issue for me.
As for the C3 being out of date, well, yes, that's why there's the VIA C7, which looks like it may leapfrog the Pentium-M in CPU power per watt. http://www.viaarena.com/default.aspx?PageID=5&ArticleID=40 [viaarena.com]
Cooling has been my biggest concern... (Score:2)
The most trouble I've had is keeping it all cool. Between the P4 and all the drives, a fair bit of heat is generated. Stopping the drives when not in use would help there, but then I've always been concerned that the extra starts would increase the failure rate. The way I fig
If money is no object.... (Score:1)
you may want to take a look at www.littlepc.com [littlepc.com]. These guys have some interesting low-voltage and fanless systems that could serve as the basis of a good home server system.
go for hardware (Score:2)
http://www.infrant.com/products_ReadyNAS600.htm [infrant.com]
Re:go for hardware (Score:1)
hmm (Score:5, Informative)
I guess if it's just porn you got for free or whatever it doesn't matter, but if the data is important you still need some sort of backups.
RAID protects against:
Disk Failure
Backups protect against:
Disk Failure
Accidental Deletion
Malicious users
Malicious programs
Filesystem corruption
Errant program causing file corruption
RAID won't protect you from any of those other things one bit.
Re:hmm (Score:1)
Re:hmm (Score:3, Informative)
Indeed. In RAID options for OpenBSD [openbsd.org] you see the following warning:
Re:hmm (Score:2)
That's simply misleading.
Nothing can eliminate downtime. Even if you have backups, restoring from backup means something is wrong, ergo, downtime.
RAID, by itself, does reduce downtime. It's simply not sufficient for error-proof data integrity.
Re:hmm (Score:2)
Maybe, but some people have a gigantic hard-on for RAID and think it magically solves all reliability problems. Skim any thread here about storage, you'll see a mystical faith in the awesome power of RAID.
Check LinkSys NSLU2 (Score:3, Informative)
Having the backup done by normal file copying rather than RAID is not a problem in my view - after all, backup is the purpose, and that's done by the firmware. RAID ain't always ideal: a friend of mine had a nice RAID5 setup in his computer. Then the primary drive's data got corrupted - and that corruption was immediately reflected on the redundant drives! He lost all his data...
No mention of the NSLU2 is complete without noting that it's eminently hackable [nslu2-linux.org]. :)
LinkSys NSLU2 and EFG250? (Score:2)
I don't see it mentioned at nslu2-linux.org, is it based on the same hardware and firmware?
--dave
Re: LinkSys NSLU2 and EFG250? (Score:1)
Re: LinkSys NSLU2 and EFG250? (Score:2)
--dave (who will avoid it) c-b
Get a Buffalo (Score:3, Interesting)
I recently bought a single drive NAS unit with a 300 GB hard drive, and use it for backup/storage for both Linux and WinXP (uh oh). It also has additional tricks like built-in Gigabit ethernet, an FTP server, a print server, backup of itself to an attached USB 2.0 drive, and misc. others. Very nice device.
The main advantage of doing your backups onto a device such as this is the power savings -- this thing uses very little power compared to running an additional PC/server. Doesn't make much noise and generates very little heat. You can get up to 1.5 TB of storage out of one of these for a pretty price.
Check out the handsome little Buffalos at:
http://www.buffalotech.com/ [buffalotech.com]
RAID vs NAS (Score:2)
Synthesis from /. + new advice (Score:2, Informative)
I've set up a fileserver in my garage: Linux Mandriva 2005, serving NFS and Samba shares. It's been running for 3-4 months now.
I use EVMS [sourceforge.net] as a professional-grade LVM. Raid 0 or 1 is available, and bad block relocation too. SMART monitoring also runs as a daemon.
Your main problem for spinning down drives is the filesystem:
With a journaled FS (recommended), disks will spin up every 10 min or so, even after some tuning. For me that's still too much, and I'd like
Re:Synthesis from /. + new advice (Score:2)
If a file system hasn't been written to since the last syncing of the journal to disk, why would a journaled file system bother writing to (or reading from) the disks, thus causing them to be spun up?
Re:Synthesis from /. + new advice (Score:1)
But it seems to me that that's what happens with the ext3 journal... as a kind of periodic checkpoint, no matter the activity.
To minimize access and spin up my fstab entries look like
(that's 600s = 10 min)
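The fstab entry itself didn't survive; given the 600s hint, it was presumably something along these lines (device, mount point, and the noatime flag are guesses; commit=600 sets ext3's journal-flush interval in seconds):

```
/dev/md0  /srv/share  ext3  defaults,noatime,commit=600  0  2
```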
I hope I'm wrong; maybe I've overlooked something that keeps the drives spinning. Of course I won't use a noflush patch for such a fileserver!!!
Mini ITX + linux + software RAID (Score:2)
Once every month or two, or with the completion of a project, I'll burn an incremental DVD with the data.
Once a year or ~18 months, I swap out the drives for newer ones. I then move the old drives into storage
Backup (Score:2, Insightful)
Re: (Score:2)
Re:What I Do (Score:2)
that is a pretty good solution... but it might not fit the submitter's "low-power" constraint
Re: (Score:1)
Re:What I Do (Score:2)
As for the other comments about power, from the handbook [sun.com]:
Maximum Power Consumption 832 Watts
So, fairly hefty power draw...
Why RAID5, just replicate data (Score:1)
Ok, RAID5 is c00l, but I'd rather keep things simple. For important data, I store it on two different disks (on different computers) and offline. For really important data, the fourth and fifth locations are far enough away to protect from anything else but a direct hit by a large asteroid.
I have an EPIA PD10000 motherboard in my server (Debian, of course). Currently, there are 3 PATA disks (one old 60G from a desktop and two quite new 250G ones), so I have internal capacity to add more disks on the PATA channels. Disks are configured to s
Re:Why RAID5, just replicate data (Score:1)
File server as a router (Score:2)
You may not want your file server doing firewall duty. If it gets rooted, all your files are compromised. In addition, if it fails for any reason, you've lost all your files, and your Internet connectivity. It makes google searching to fix the problem that much harder.
Consider having a dedicated machine serve as the firewall/gateway/router. If it gets compromised, the intruder will still have another layer to
Re:File server as a router (Score:2)
Without sounding snarky or sarcastic, why do you do this? I bought a $40 wireless router a couple years ago and use its built-in firewall with great success. The thing is about the size of 2 packs of cards, has no moving parts, and the 4 LAN ports are perfectly suited to connect my home server, desktop, media PC, and printer. Granted, I only use about half a dozen firewall rules (in addition to a few i
Re:File server as a router (Score:3, Interesting)
Well, I initially did it because I had the parts laying around, and those routers at the time (> 5 years ago) cost a tad more than they do now
Re:File server as a router (Score:2)
because I had the parts laying around
those routers
I can add more NICs
alter the distribution to add other extras to it
I'm not really counting the following reasons, because they do not present any advantage or difference over a wireless broadband router:
I am not limited in the number of ports I can use - just slap a larger switch on the inside NIC (you could plug that same switch into a LAN port on a home router)
a wireless NIC could be set u
Re:File server as a router (Score:1)
My setup... (Score:2)
The system is a P2-300 with 256 MB and a 40 gig drive, running Debian Woody. It currently serves the files via SAMBA, and it also has Apache, PHP, MySQL, and PostgreSQL installed (for web app dev work). I have things set up so that cron jobs (and in the case of the one 'doze box, a task manager run DOS batch file) copy various files from the