
SCSI vs. SATA In a File Server?
turboflux asks: "I'm currently in the process of replacing an aging file server with something more robust. Company-wide, there will be about 100 people who could be using this server, but I don't imagine there being more than 50 concurrent users. Right now, I'm torn between spending a lot on SCSI hardware, much like our other servers, or spending less, but getting more space, with SATA II drives. Whatever I decide, the server will be set up with a RAID 1+0 array for the numerous benefits it offers. Does Slashdot have opinions or suggestions on performance, reliability, and stability?"
Have you considered...? (Score:5, Funny)
Re:Have you considered...? (Score:5, Funny)
Re:Have you considered...? (Score:2)
Re:Have you considered...? (Score:2)
Re:Have you considered...? (Score:2)
Re:Have you considered...? (Score:2)
You get round chads too.
Re:Have you considered...? (Score:2)
you get to roll your own.
SCSI?? (Score:3, Funny)
Re:SCSI?? (Score:2)
Re:SCSI?? (Score:4, Funny)
In summary, unattractive squares should stick to Linux [atspace.com] and Windows [atspace.com].
FireWire is for different thinkers [atspace.com].
Re:SCSI?? (Score:2)
Re:SCSI?? (Score:4, Funny)
Re:SCSI?? (Score:2)
obligatory ballmer... (Score:3, Funny)
Now Ballmer says he is going to F**KING KILL USB.
Man I need a T-Shirt that says that.
"Steve Ballmer says, "I'm going to f**king kill you!"".
Re:SCSI?? (Score:3, Informative)
Re:SCSI?? (Score:3, Funny)
And you are showing your senility:
Don't forget what SCSI stands for - Small Computer (Serial|Standard) Interface.
Heh. Wrong.
Small Computer System Interface [t10.org].
Re:SCSI?? (Score:3, Informative)
While he did screw up the second 'S' in SCSI, you cannot seriously expect anyone who knows anything about the evolution of SCSI to take you seriously after you stated the above.
I will prove your statement false with a single counterexample: Serial Attached SCSI [t10.org] (PDF). Note the date of the document.
Remember that with SCSI-3, the standard became more modularized in order to do things like separate the SCSI command set and the SCSI physical interface.
Here's the SAS FAQ [scsita.org] from t
SATA is fine (Score:5, Insightful)
Re:SATA is fine ... for some things (Score:5, Informative)
In my experience, if you've got a lot of random I/O, SATA is not a viable solution. That said, even if your I/O is mostly random, if there's not a heavy load on the disk, then you're probably ok. If you've got 200 people hitting a database or email server, you're probably going to have some performance problems. Swap it out with SCSI drives, or a quality disk array, and you'll be doing much better. If you've got a web server, or a database server that is exclusively reading, you can probably get away with SATA. Again, it all depends on how much and how random the disk I/O for your application is.
Re:SATA is fine ... for some things (Score:2)
I guess I can understand what you mean though. It should also be clarified that it's not just any SCSI drive you need: if latency is the real issue, it would need to be an array of 15k RPM drives, since there are 10k RPM SATA drives with decent capacity that are more or less the same drive mechanicals as their SCSI counterparts, just with a different interface.
Re:SATA is fine ... for some things (Score:5, Informative)
What you end up with is the following throughput when disks are empty:
1x147GB 15K SCSI -- 150MB/s
8x250GB 7200 SATA -- 275MB/s to 550MB/s depending on exact RAID configuration
Now fill up both configurations with 140GB of data and the throughput of the 15K SCSI has dropped by half to 75MB/s, because the heads are now positioned at the "slower" inner portion of the disk. Meanwhile, the 2TB SATA config is 7%-15% slower depending on the RAID config.
Latency also benefits from many disks for the same reason. Fill up a disk and you may have to traverse the entire platter. So while a 15K drive seeks 2-3 times faster, its heads end up having to move 10X-15X farther than in a mega array, where the heads pretty much just hover over the 2X faster outer portion.
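The geometry argument above can be sketched numerically. This is a back-of-envelope model, not a measurement: it assumes constant recording density (so sequential throughput scales with track radius) and a roughly 2:1 outer-to-inner radius ratio; only the 150MB/s and 140GB/2TB figures come from the post.

```python
# Back-of-envelope model: at constant linear recording density,
# sequential throughput scales with the radius of the track under the head.
outer_mb_s = 150.0        # empty 15K SCSI disk, reading outer tracks
radius_ratio = 0.5        # inner-track radius / outer-track radius, ballpark

inner_mb_s = outer_mb_s * radius_ratio   # throughput once the disk is full

# The 8x250GB array holds the same 140GB in 2TB of raw space, so its
# heads stay parked over the fast outer sliver of each platter.
fill_fraction = 140 / 2000

print(f"full 15K disk: {inner_mb_s:.0f} MB/s")        # 75 MB/s
print(f"SATA array fill level: {fill_fraction:.0%}")  # 7%
```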
The big advantage for SCSI is the better TCQ algorithms for multi-user access. This can be mostly negated if you use a SATA RAID controller with enough onboard RAM to reorder IO at the controller level versus depending on the drive's NCQ.
This is the route we've taken -- we went from a LSI MegaRAID 320-1 + 4-drive SCSI RAID config to an Areca 1170 + 1GB RAM + 24-drive SATA RAID. Every aspect of performance is up by big amounts -- throughput, latency, multi-user access. The drive array is actually TOO fast for our 2x244 Opteron server to drive. We ended up breaking the array into 3 8-drive volumes and mirroring 2 volumes against each other for more redundancy. One of these days, we'll upgrade to faster CPUs and retest a 16-drive volume.
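The queue-reordering idea behind TCQ/NCQ (or a controller with enough RAM to reorder I/O itself) can be illustrated with a toy elevator sort. The pending request LBAs below are made up purely for illustration:

```python
def head_travel(requests, start=0):
    """Total head movement (in LBA units) to service requests in order."""
    pos, total = start, 0
    for target in requests:
        total += abs(target - pos)
        pos = target
    return total

pending = [7120, 110, 6900, 400, 6500, 95]   # made-up pending request LBAs

fifo = head_travel(pending)            # service in arrival order
swept = head_travel(sorted(pending))   # one elevator-style sweep

print(fifo, swept)   # 39925 vs 7120: the sweep moves the head ~5x less
```

Real NCQ/TCQ also accounts for rotational position, but the seek-distance win is the core of it.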
market differentiation (Score:2)
It's not the interface itself. Interface incompatibility is used to split the market into regular users and those who don't have to spend their own personal money.
Think about rotation speeds, seek times, and bearing wear. How could these specs possibly have anything to do with the data cable? They don't, except in the mind of a VP of marketing. Despite the obvious lack of technical reasons, SCSI drives often come with better specs. That's life.
Re:market differentiation (Score:2)
Re:SATA is fine (Score:3, Interesting)
Re:SATA is fine (Score:2)
Re:SATA is fine (Score:5, Interesting)
First, make sure you get a SATA2 controller. NCQ is a must for multiuser environments.
Second, whatever controller you buy, grab 3 of them. RAID is great for disk failure, but people rarely think about what they'll do when the controller fails.
Look at some of the stranger RAID options. If you just use RAID5, you'll be selling yourself short. RAID3 is worth a look. I'd actually suggest you put two controllers in a machine. Run RAID0 on 4 drives on one controller and RAID0 on 4 drives on the other. Then use Windows or Linux software RAID to run RAID1 between the two RAID0 volumes. Very fast performance and fully fault tolerant.
Keep the OS on a small, slow hard drive separate from the array. You can do funny things there, but I'd suggest you set it up properly and then use a disk clone utility to create an offline backup to store somewhere.
Arrange for testing in the first few months. Unplug drives from the array and see what happens. Verify that you can restore from tape backup to the array. Verify that your cloned OS hard drive can actually get the array online. Do extensive testing before you go live. Two tests in the first month after going live. One test every 6 months after that.
If you are using commodity servers, get a spare for everything in there.
If you want high availability, look into DRBD. It's like RAID1 over a network.
Monitor the damn thing! At my last job, someone let the server die. It had RAID5 over 5 drives. One drive had failed and no one noticed. When the second failed, that was the end of it. Learn to use SNMP or get some good monitoring utilities that will notify you of problems. You need to know if a drive fails, if it reports SMART errors, drive temp, proc temp and usage, NIC utilization, drive utilization, system temp, and memory utilization. MRTG+RRDtool and SNMP will give you pretty charts for all that crap. MotherboardMonitor will also give some nice readouts for Windows. If it's a commodity server, look at installing a Crystalfontz LCD so that you can walk by and get a quick status without needing to log in.
Re:SATA is fine (Score:5, Informative)
Uhh. Yes. Then you can lose one disk on each side, and you have lost all your data.
This would perhaps be slightly less than fully fault tolerant.
Perhaps you meant to set up 4 mirror pairs, 2 on each controller, and use software to RAID0 them together.
I have successfully done this with a 24 disk 5U chassis, and it is an IO steamroller (our database server, right now).
Re:SATA is fine (Score:2)
Re:SATA is fine (Score:2)
Re:SATA is fine (Score:5, Informative)
But I guess it depends on what your users need. If they need raw throughput, RAID 0+1 is better. If they need low latency, then RAID 10 may be the answer. Or maybe both systems would fall within the margin of error of each other.
In any event, once you get into what-if situations, no RAID will be good enough. What if you lose a disk? What about two? Five? Well, what if lightning hits the chassis or the janitor unplugs it to buff the floor?
The best you can do is roll the dice and play the odds. You'll see that I told him to use RAID 0+1. I also told him to use good monitoring setups to mitigate problems. I also suggested a tape backup. Actually, maybe I didn't, but I did tell him to verify his backups work and that he is able to restore from them, so that's kind of the same thing.
When it gets down to it, opinions are like assholes; everyone has one. And most people only care about their own and don't really want to look at their coworkers'. I guess I'm the same in that respect.
Re:SATA is fine (Score:3, Insightful)
Re: (Score:3, Informative)
Re:SATA is fine (Score:3, Informative)
RAID 0+1 is much inferior to RAID 10. 0+1 is what the GP poster said... stripe 4 disks in RAID 0, and mirror those. You're no more fault tolerant than a RAID5 array... if ANY two drives fail, you're hosed. You lose 50% of the spa
Re:SATA is fine (Score:3, Informative)
As for "extra redundancy" The difference between RAID 10 and RAID 01 is in the failure mode, not strictly in the redundancy.
In RAID 01, the data is stored like this:
[ABCD] - four drives striped
[ABCD] - four drives striped
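For what it's worth, the failure odds being argued about in this subthread can be enumerated exactly. The sketch below assumes an 8-disk array: RAID 0+1 as two 4-disk stripes mirrored (matching the diagram above), RAID 10 as four 2-disk mirror pairs striped, and asks how often a random pair of simultaneous disk failures kills each layout.

```python
from itertools import combinations

# RAID 0+1: two 4-disk stripes, mirrored. One dead disk kills its whole
# stripe, so the array dies once both stripes have lost a disk.
raid01_sides = [{0, 1, 2, 3}, {4, 5, 6, 7}]
# RAID 10: four 2-disk mirror pairs, striped. The array dies only when
# both disks of the SAME pair are gone.
raid10_pairs = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]

def raid01_dead(failed):
    return all(side & failed for side in raid01_sides)

def raid10_dead(failed):
    return any(pair <= failed for pair in raid10_pairs)

double_failures = [set(f) for f in combinations(range(8), 2)]
p01 = sum(map(raid01_dead, double_failures)) / len(double_failures)
p10 = sum(map(raid10_dead, double_failures)) / len(double_failures)

print(f"RAID 0+1 dies on {p01:.0%} of double failures")  # 57%
print(f"RAID 10  dies on {p10:.0%} of double failures")  # 14%
```

So neither layout dies on *any* two failures, but RAID 0+1 is hosed four times as often: 16 of the 28 possible failure pairs straddle the two stripes, while only 4 of 28 land inside a single mirror pair.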
Re:SATA is fine (Score:5, Insightful)
He has up to 100 users and says that there will probably only be 50 or so concurrent users. Reasonable performance for such a system doesn't require lots of crazy tweaking. Implement RAID5 with a hot spare and be done with it. If you have a drive failure it automatically rebuilds and you're safe. If you have another drive failure after that before replacing the dead drive, you're still running. If you are concerned about drive performance, then spread the array across as many spindles as possible. If you have any sense you will already have a decent monitoring system in place and will know the drives have failed.
I find myself saying this often on Slashdot, but for the average IT department it makes far more sense to buy a business line server that comes with proper support for everything that you need than to try and cobble it together yourself out of parts, and then try to keep enough spare parts around in case of failure, and try to get warranty service from 5 different parts suppliers with different warranty lengths. I mean really, who does that kind of thing?
Go to HP, buy a Proliant server that fits your needs and price range, and use the included management software to set up email alerts when there is a hardware problem (like a drive failure or imminent drive failure). HP has the replacement part at your doorstep next day (unless you buy a warranty with faster turnaround, and next-day is still faster than you'll get from most part suppliers), and you don't have anything to worry about. I'm sure IBM and Dell can do something similar too.
Back in the day it actually used to be cheaper to build your own computer. Not only would you save money, but you got to choose exactly the components you wanted. Nowadays the computer market has been so commoditized that it's actually much more expensive to build your own. You don't get any of the advantages of economies of scale, and the profit margins are so slim on retail models that the savings of eliminating them are negligible. And of course, now you can have your system custom built to your specs anyway. The only reason to build your own is if you want to be able to tweak and upgrade it piecemeal, like the "enthusiast" market does. That's what I do with my home PC, but I would never consider doing that with business PCs, especially a server. A server should be deployed, and after that it should pretty much sit there with zero hardware maintenance (except in the case of hardware failure).
Re:SATA is fine (Score:2)
Uhh. No. You'd need to lose two more disks to lose all your data. RAID 4 is striping with a dedicated parity drive. Losing one disk on each side means you're running in a degraded mode but are still keeping your data. Losing a third disk means one array is shot, but you still have your mirror. Only losing a fourth drive would mean you'd lose everything. By then, I'd hope you'd have replaced some drives.
SCSI (Score:2, Funny)
Re:SCSI (Score:5, Insightful)
Re:SCSI (Score:2, Informative)
SATA II is not your father's SATA (Score:5, Informative)
Re:SATA II is not your father's SATA (Score:2)
Re:SATA II is not your father's SATA (Score:2, Informative)
It's not necessarily for "internal components". It's also for entry-level servers and RAID arrays. You can get hotplug bays that fit in 5 1/4" slots on a machine. This provides an easy way to swap out the drives in case of failure. If you're running a server, neither you nor your users want downtime. If you're running RAID 1, R
BACKUP! (Score:4, Interesting)
OTOH, there are 300GB U320 Disks now, which you could use if latency is not an issue. Otherwise, go with lots of disk arms (72GB or 36GB U320 Disks)
Re:BACKUP! (Score:4, Informative)
Re:BACKUP! (Score:3, Interesting)
That depends on how long you need to store the data.
If you need it for a short time, you might be correct.
But if you may need the data 5 years or more from now, tape is clearly far superior.
Re:BACKUP! (Score:5, Insightful)
You have much luck getting data back from a tape five years later?
First you have to find the tape. You can't have misplaced it and you can't have reused it due to the damn high cost of magnetic tape.
Then you have to find a drive that can read the tape. The one you wrote it with died two years ago, it's no longer manufactured, and oh darn, none of the three you picked up off eBay use the same compression format.
Next you need the old backup software. You've been using Acme Archiver for the past three years; it doesn't understand the old SuperBackup format, and unfortunately SuperBackup only ran in DOS with an 8-bit ISA SCSI card.
Finally you have to pray that the tape is still good. They're like floppy disks; they go bad just sitting on the shelf.
Buddy, I've been there. It ain't pretty. So for the last 7 years I've stored my backups on hard disks. No pain! No pain!
Re:BACKUP! (Score:3, Insightful)
Yes.
That is no problem at all. I keep a detailed listing of what backup set is stored on what backup media.
As far as the "damn high cost of magnetic tape", you must be talking about those cheap tape drives that use expensive tapes. We have a couple of those around here, but we don't use them much at all.
Int
Re:BACKUP! (Score:5, Informative)
and last time I checked, an Ultrium 3 tape was half the price of a 400GB Drive.
I wouldn't use disks for backup, unless they're to be used as live backups, and then I'd still archive to tape (provided it was affordable).
Re:BACKUP! (Score:3, Insightful)
Re:BACKUP! (Score:2)
That doesn't include the ~$2000 cost for the tape drive itself. The real cost for tape is $2000 + $75 per tape, while the real cost of SATA is just $200 per disk (or $240 if you want an external enclosure for each one (see below)). Unless you've got more than 6.4 TB of data to back up (the point where 2000+75x = 200x, where x is a 400GB-capacity tape or disk) (or 4.8TB assuming enclosures), hard disks are cheaper.
(Incidentally, I
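The break-even arithmetic in the parent checks out, and is easy to redo with your own prices. The numbers below are the ones quoted above ($2000 drive, $75 per 400GB tape, $200 per 400GB disk, $240 with enclosure):

```python
tape_drive = 2000.0    # one-time cost of the tape drive
tape = 75.0            # per 400GB tape
disk = 200.0           # per 400GB SATA disk
disk_enclosed = 240.0  # per disk with an external enclosure

def breakeven_units(per_disk):
    # tape_drive + tape * x == per_disk * x  ->  solve for x
    return tape_drive / (per_disk - tape)

for cost, label in [(disk, "bare disks"), (disk_enclosed, "enclosed disks")]:
    x = breakeven_units(cost)
    print(f"vs {label}: tape wins past {x:.1f} units = {x * 0.4:.1f} TB")
# vs bare disks: tape wins past 16.0 units = 6.4 TB
# vs enclosed disks: tape wins past 12.1 units = 4.8 TB
```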
Hmm (Score:5, Informative)
SATA and Linux will be much faster... Soon. (Score:4, Informative)
The site to track progress on the library and driver status is here: http://linux.yyz.us/sata/ [linux.yyz.us]
The project has been moving along quite well. I think their goal is to completely modularize, simplify, optimize, and consolidate the ATA, ATAPI, and SATA kernel pieces into one overarching (underlying?) library. I like this kind of work. I can't see why ALL disk-like I/O isn't under one big modular kernel library; it seems like it would make adding new transport types and drivers a lot simpler and reduce maintenance all-around.
Re:SATA and Linux will be much faster... Soon. (Score:2)
Re:SATA and Linux will be much faster... Soon. (Score:2)
I'd say SCSI (Score:2, Insightful)
Re:I'd say SCSI (Score:2)
Re:I'd say SCSI (Score:5, Interesting)
We have several terabytes of SATA storage at work to hold our main business-critical digital asset archive.
We've been using a ATA/SATA disk-only strategy for over 5 years now. It's worked great, and eliminated our slow and unreliable tape robot, which has greatly improved productivity.
Back in 1999/2000 SCSI wasn't an option for the main archive because a terabyte of SCSI would have broken the bank. We went ATA back then. It was a mess trying to route 24 ATA cables in a case, I admit. SATA fixes that nicely.
We keep three copies of our data, two onsite and one offsite. We use rsync-incremental snapshots to do disk-based incremental backups. Because the cost of SATA is less than 1/3rd the cost of SCSI, we get a high reliability solution for less than the price of a single SCSI RAID.
One more advantage of SATA is that the disks are so cheap, it's easy to just replace all of them every two or three years. The disks you replace them with generally are twice as large after 2 or 3 years, so every cycle your RAIDs get more reliable as the number of disks is slashed in half.
Most companies wouldn't replace every SCSI disk every two years, it would cost way too much. And considering the slow pace of SCSI size growth, you wouldn't see as much gain, a double hit against SCSI.
So basically unless you need the excellent latency performance of SCSI, higher than even the WD Raptor can offer, I see no compelling reason to use SCSI for anything anymore.
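The rsync-incremental-snapshot strategy mentioned above (rsync's --link-dest option does this natively) boils down to hardlinking unchanged files against the previous snapshot, so every snapshot looks like a full copy but only changed files cost space. A minimal sketch of the idea, not a replacement for rsync; the size+mtime comparison mirrors rsync's default quick check:

```python
import os
import shutil

def snapshot(src, prev, dest):
    """Copy src to dest, hardlinking files unchanged since prev.

    prev is the previous snapshot directory (None on the first run).
    "Unchanged" is judged by size + mtime, like rsync's quick check.
    """
    os.makedirs(dest, exist_ok=True)
    for name in os.listdir(src):
        s = os.path.join(src, name)
        d = os.path.join(dest, name)
        p = os.path.join(prev, name) if prev else None
        if os.path.isdir(s):
            snapshot(s, p if p and os.path.isdir(p) else None, d)
        elif (p and os.path.isfile(p)
              and os.path.getsize(p) == os.path.getsize(s)
              and os.path.getmtime(p) == os.path.getmtime(s)):
            os.link(p, d)       # unchanged: hardlink, costs no extra space
        else:
            shutil.copy2(s, d)  # new or changed: real copy, mtime preserved
```

Each dest directory is a full, browsable copy of src, so restoring any snapshot is a plain file copy -- no tape drive or backup software required.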
SATA? I don't know.... (Score:5, Informative)
Here are some scenarios where I wouldn't hesitate to use SATA:
- You have redundant servers. Using LVS and/or Heartbeat and your favorite tools, you can get full server redundancy using less expensive hardware. The overall solution can be quite elegant, with hot failover. Why just cover the drives?
- Front-end cluster nodes. You have a powerful, expensive backend server (with a cheaper failover) and you use inexpensive front-end servers for serving client requests. Sounds like overkill for what you want, but with the right server load balancing technology, it can give you a scalable, fault-tolerant and damn fast solution.
- You can live with downtime. Install a server with a couple of SATA disks in a RAID configuration and hope for the best.
SCSI file servers (Score:2)
Re:SCSI file servers (Score:2)
The price is a huge issue. The cost of regenerating the lost data is enormous.
Re:SCSI file servers (Score:2)
i'm confused...
also, the guy with the question mentioned that at most, there will be 50 concurrent connections, and the place only has 100 employees. i've run labs and had well over 50 machines connecting to a single symantec ghost server... no raid, standard ATA. no trouble (yes, i realize they don't connect and transfer large amounts of data all the time like a fileserver). if he was really worried about speed, i'm sure S
SCSI for tier 1, SATA for tier 2... for now (Score:2, Insightful)
For a file server, you'll be fine with SATA.
For my tier 2 servers, I am moving a ton of stuff off of my EMC gear (because fibre channel drives are damned expensive) onto SATA-II drives in an iSCSI setup. I'm al
The real info (Score:5, Informative)
Speed differences (Score:2)
And to be really honest, SATA 150's 150MB/s is not shared with any other drives, whereas SCSI 320's 320MB/s is shared among all the drives on the ribbon cable, AFAIK. I'd personally rather have 8 drives with 300MB/s apiece than 8 drives sharing 320MB/s. The former makes the PCI-X port the bottleneck (or perhaps the RAID controller card itself).
Be sure you get NCQ support (Score:2)
SCSI what?
SAS (Score:4, Interesting)
Re:SAS (Score:2)
These are different media for diff jobs (Score:3, Informative)
If not, SATA is still pretty fast, much less expensive, less clever controllers, but still very reasonable for things like archiving, steady low-concurrency-demand streaming, and so on.
SATA also has the advantage of not needing loads of austere cables with distance limitations imposed on them; it's a serial rather than a parallel bus -- hence the S in SATA. Use SATA when you don't need the absolute fastest you can get -- and you won't have to spend the most on the controller (which is hopefully a SCSI PCI-X controller or other fast clocker), the drives, the pricey cables, and so on. But if you need the speed, there is nothing faster than SCSI except for flash drives, which are still hideously expensive... and not writeable as many times as we'd like them to be.
Fibre Channel (Score:4, Informative)
One benefit that SATA does have over SCSI is the cabling....it's smaller and blocks less airflow (and easier to do the cabling).
SCSI, on the other hand, has other benefits... like being used in enterprise servers now. Faster, daisy-chained, more RAID options, etc.
Of course, Fibre Channel is basically SCSI on steroids and has the cabling benefits that SATA has.
With more room thanks to less data cabling, you can add watercooling to reduce the heat generated by the 15k+ RPM drives.
As with all things, it depends on the usage... (Score:2, Insightful)
Serial Attached SCSI (Score:5, Interesting)
SAS hardware is currently a little harder to find than SCSI or SATA stuff, but I'm sure there's a good selection out there if you take the time to look.
I was checking out the Sun Fire 4100 [sun.com] a while ago, and it takes SAS drives, however the form factor is 2.5", and I haven't yet seen any 2.5" SATA drives (I wanted that compatibility). Also, I've heard SATA drives don't work with the Sun Fire 4100's SAS controller anyway. Not sure about that, since the SAS spec says they should work, but just something to keep in mind when you're looking for a server or mobo or controller that supports SAS.
Stick with SCSI (Score:4, Informative)
One way to curb some of the cost, I might add, would be to switch to something like RAID 5... you won't have as high throughput, but you'll still see performance gains and end up with more usable drive space. The throughput likely won't be your problem, anyways... typically it would be the drive's ability to handle multiple simultaneous requests, which heavily relies on low access times (which is why SCSI dominates in this type of environment).
Here's a quick reference [storagereview.com] of some IOMeter benchmarks using a file server test pattern. You'll see what I mean. Wealth of info on drives on that site.
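The space-versus-throughput trade-off mentioned above is easy to quantify. A sketch using a hypothetical eight-drive array of 146GB disks (the drive size is an assumption for illustration):

```python
def usable_gb(n_disks, disk_gb, level):
    """Usable capacity for a few common RAID levels."""
    if level == "raid0":
        return n_disks * disk_gb        # no redundancy at all
    if level == "raid5":
        return (n_disks - 1) * disk_gb  # one disk's worth of parity
    if level == "raid10":
        return n_disks // 2 * disk_gb   # half the disks are mirrors
    raise ValueError(f"unknown level: {level}")

for level in ("raid0", "raid5", "raid10"):
    print(f"{level}: {usable_gb(8, 146, level)} GB usable")
# raid0: 1168 GB, raid5: 1022 GB, raid10: 584 GB
```

RAID5 gives back nearly all the raw capacity, which is why it curbs cost; what it costs you is write throughput and access-time headroom under load.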
Depends on load (Score:4, Interesting)
SCSI can (depending on which particular SCSI) provide you with more devices per controller without sacrificing (any noticeable) performance. If you need to shove a ton of drives into one server, this will add up quickly. Since you are talking about RAID 0+1, depending on how much storage you are shooting for, this may be a strong factor (but you may be able to skin by on the 4-6 SATA ports you'll find on most mobo's).
SCSI is more mature. So drivers are likely to be more robust, more efficient, and more stable than those you'll find in your garden variety SATA.
You'll typically find that under heavy load, SCSI performs better. Again, this is mostly due to so called "market segmentation" schemes, but that is why you pay more. If your users are going to be mostly dealing with the usual, periodic saving of word processing documents, spreadsheets, and a couple of light media files - you probably don't need to handle really heavy loads. The RAID controller will eat the peaks of write demand in cache (if you get a decent RAID controller - see later), and you should have fairly smooth performance. Then again, if your users are constantly running large installers (development test environment) or working with large remote files - you should really go SCSI.
All that said: I think you would be served best by investing in a better RAID controller rather than investing in top of the line drives. The RAID controllers they integrate on to most motherboards are crap (for what you are trying to do, desktop use - meh). You want something with a ton of cache, and good management soft/firmware. If you buy a real server class motherboard, you may get a better onboard RAID, but however you go about this - pay the most attention to this detail. Unless you really need low latency for high demand, random access applications, top end drives probably won't give you much over the usual network latencies.
Go with SCSI (Score:3, Interesting)
BTW, we used 3ware controllers with WD RAID Edition HDs. We're supporting approximately 75 users per server.
SATA (Score:5, Informative)
If you buy the right model, you can get SATA drives that have gone through the rigorous quality control testing that has historically been reserved for SCSI drives. Many of the higher end server-grade SATA models are warrantied for 24/7 operation. SCSI has lost its advantage there.
SATA has Native Command Queueing, formerly a SCSI-only performance feature. Note that it's optional for SATA drives though, so make sure you get a controller and drives that support NCQ. Again, one of SCSI's few advantages has disappeared.
Last, but most definitely not least, SATA cabling is far simpler and more robust than SCSI cabling. SCSI cabling is a finicky nightmare where even high-end cables can cause data corruption if you're not careful, whereas even the cheapest SATA cables I've seen worked reliably. I've had hardware-related data loss on hard drives twice in my life. One case was an IBM Deathstar, the other was a SCSI cable that started flaking out and corrupted data on three drives at once. I haven't touched SCSI with a ten foot pole since that incident.
REALLY depends on the task at hand (Score:2, Insightful)
Large files/streams that require heavily mixed-mode I/O beat the balls off of SATA. E.g. Correct me if I'm wrong, but my partial understanding of SATA is that if many writes are cached and a read enters the queue, the cached writes are trashed.
so if you are working with check-in/check-out I/O type such as Samba profiles, SVN stuff, or (Samba|N)FS on a small-medium number of small-medium size files, or web stuff, SATA offers best price
Either way, not RAID 1+0 (Score:2)
Re:Either way, not RAID 1+0 (Score:2)
The very definition of RAID... (Score:5, Insightful)
I've already read a bunch of posts about how SCSI is more reliable than SATA. Well, they actually mean SCSI drives are generally more reliable than SATA drives (and some actually say so). They're quite correct for the most part.
Here's what storage vendors don't want you to know: It doesn't matter.
Use RAID. With SCSI or FC disks, you'll have to use RAID5. At that point, two disk failures in a given array and you're screwed. You REALLY care that two disks don't fail at the same time. And when you're using low-end or even mid-range drives, it happens.
Why do you have to use RAID5? Because with SCSI or FC disks, RAID5 is the only economical option. With a 300GB SCSI drive going for at least $1200USD, and FC drives of that size going for $2500USD, even the biggest corporations end up using RAID5.
Of course, RAID5 isn't the only level of RAID. It's the least redundant of any level of RAID, as a matter of fact.
Go SATA with RAID10, at least 4 drives, ideally six or more. With six drives, the likelihood of having two drives fail before you can replace the first one is somewhat higher than if you're using SCSI, but the likelihood of that second drive causing you data loss due to a failed array is infinitesimally smaller. It's guaranteed with RAID5, and the chance for RAID10 is inversely proportional to the number of disks in the array. First one drive has to fail, then the second drive that fails has to be in the same RAID1 set. Add onto that that drives do indeed "go old", and the heavier you work them, the faster they get old. With RAID5, disks tend to get worked a lot harder (without any cache, or if the cache misses, each write requires n-2 reads, and 2 writes).
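The "inversely proportional" claim can be made precise by enumeration. For a RAID10 of mirrored pairs, two random failures are fatal only if they land in the same pair (for RAID5, any second failure before the rebuild completes is fatal, probability 1):

```python
from itertools import combinations

def p_double_failure_fatal(n_pairs):
    """Probability that two random failures kill a RAID10 array of
    n_pairs mirrored pairs -- i.e. both failures hit the same pair."""
    disks = [(pair, side) for pair in range(n_pairs) for side in (0, 1)]
    doubles = list(combinations(disks, 2))
    fatal = sum(1 for a, b in doubles if a[0] == b[0])
    return fatal / len(doubles)

for pairs in (2, 3, 4, 6):
    print(f"{2 * pairs} disks: {p_double_failure_fatal(pairs):.1%}")
# 4 disks: 33.3%, 6 disks: 20.0%, 8 disks: 14.3%, 12 disks: 9.1%
```

The closed form is 1/(n-1) for n disks, so more spindles genuinely means better odds of surviving a double failure, on top of the throughput gain.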
Of course, you've pretty much decided that RAID10 is the way to go. At that point it's cost. If you're looking for 50GB of fast redundant storage, SCSI is going to be slightly cheaper. If you need any amount of storage though, SATA is going to be a whole lot cheaper for the same level of reliability (which requires more spindles), and typically better speed (more spindles means more seeks per second and more megs per second, though one needs to be mindful that big SATA disks are only 7200RPM, while the slowest SCSI disks you're going to get are 10kRPM).
Summary? I'm value-conscious. I'd go the SATA route. RAID10, four disks minimum to start, a pair of 4-port 3ware SATA cards with 128MB+ of battery-backed cache. I'd do the RAID entirely with software (Linux MD), with each RAID1 set split across two controllers. We get cheap disk redundancy, cheap disk speed, cheap I/Os, and cheap controller redundancy. I'd consider using less fancy controllers; the 3ware jobbies tend to be expensive, but when you're doing big writes the cache makes a massive difference (75MB/s across four disks of RAID10 versus 20MB/s). I've considered putting together a dedicated storage appliance, exporting via SMB/NFS/NBD/GFS/what-have-you, without the battery-backed cache, but with a pair of 1U UPS units (one for each power supply). Then I'd go around turning off all the application-level fsync()ing, and see what happens with 4GB of disk cache. Bet it'd be fast. And with shutdown initiated via UPS trigger, almost as safe as a battery-backed cache. Remember: "Redundant Array of INEXPENSIVE Disks."
God I ramble.
Re:The very definition of RAID... (Score:2)
First, one of the benefits of buying cheaper drives is that you can afford to buy an extra one that sits idle most of the time to use as a hot spare. The expected worst-case scenarios are much less serious if you start getting a rebuild to the spare the minute any one drive fails; you need two failures in the amount of time it takes to copy a disk to be dead, rather than two failures
Lies, Damn Lies, and Statistics (Score:2)
Assumption: you are talking about the six drives in the RAID 10 array being three sets of mirrored drives, striped. If you are talking about using three drives for each mirror so that the data is doubly redundant, you are taking away the price benefit.
So, while adding more drives mak
Re:The very definition of RAID... (Score:3, Informative)
Actually, the definition has been back-formed to "Redundant Array of Independent Disks", since you won't necessarily be using inexpensive drives any more.
Just because you put 500GB drives in a RAID array doesn't suddenly make them inexpensive, but they are each independent.
What about SAS? (Score:2)
Higher throughput than standard SCSI, easier to manage and daisy chain and somewhere I'd read that you could attach SATA drives to SAS controllers - although that's never been confirmed.
Do both (Score:2)
Use SATA and SCSI.
There are devices available that appear to the computer to be a SCSI drive when they are really a RAID array of SATA drives.
Something like the Maxtronic Arena Sivy SA-4830/SA-4831 [maxtronic.com] could give you a 2 TB SCSI drive.
Re:Do both (Score:2)
Controller matters much more than the drives (Score:2, Informative)
Many low to mid range SCSI RAID cards (most? all?) either don't have any sort of interface to find the RAID status when the server is up (they just beep at you and expect that somehow that's going to be heard over the AC and server noises when you're walking by the machine), or the tools for checking the raid
SAS (Score:2, Interesting)
SAS has:
- lean SATA cables
- 3Gbps transfer, soon to be 6Gbps. Better than U320
- 15,000 rpm disks
- NCQ like SATAII
- RAID-capable controllers
- SATA on SAS possible
use SCSI... (Score:5, Insightful)
However, you already have SCSI. Management is used to paying for SCSI machines. If you have 50-100 people depending on something, and it's slow, that's a productivity drag. If you assume that all those people cost $100k/year each (not at all unreasonable with benefits), 50 people are getting paid about 2,500 bucks an hour, or about 20,000 dollars a day. In other words, if you speed them up by just 5% with better hardware, you're saving the company a thousand dollars a day. Even if it's a tiny 1% speed gain, that's still 200 bucks a day. Saving six grand a month for an upfront investment of ten grand is a total no brainer.
Buy SCSI.
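The parent's payroll math checks out as a back-of-the-envelope figure. A quick sketch using the parent's own assumptions (fully loaded $100k/year per person, ~2,000 working hours a year, 8-hour days - none of these are real figures):

```python
# Back-of-the-envelope check of the parent's productivity math.
# All inputs are the parent's assumptions, not measured data.
staff = 50
cost_per_year = 100_000          # assumed fully loaded cost per person, USD
work_hours_per_year = 2_000      # ~8 h/day, ~250 days/year

hourly_payroll = staff * cost_per_year / work_hours_per_year   # $/hour
daily_payroll = hourly_payroll * 8                             # $/day

print(f"payroll: ${hourly_payroll:,.0f}/hour, ${daily_payroll:,.0f}/day")
print(f"5% speedup saves ${daily_payroll * 0.05:,.0f}/day")
print(f"1% speedup saves ${daily_payroll * 0.01:,.0f}/day")
```

That reproduces the $2,500/hour, $20,000/day, and $200-$1,000/day savings quoted above.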
Something to remember.. (Score:4, Funny)
--
Current setup - 4x Seagate 400GB SATA, NVRAID-0, ThermalTechno 4000 1U Case w/ Ground-FX, 3x Zalmat 80mm SilentKiller Fans (soon)
Take whatever's cheapest. Buy two. (Score:3, Insightful)
If my limited experience has taught me anything about computer reliability, it's that a single mis-set bit somewhere can bring down a system. Maybe the bit got there by user error, maybe it got there because of RAM or disk failure, maybe it got there from a bug in the application, OS, or firmware. Maybe a component on the motherboard shorted out. Maybe it's the climate. Maybe it's the phase of the moon.
I've seen it happen with discount ghetto hardware, I've seen it happen with high end hardware. I've seen it happen on Windows. On Linux. On FreeBSD. On Solaris. I've seen servers go down due to catastrophic hardware failure and I've seen them go down because a $2 fan died. I've seen people come inches from major power supply caused injury working on a desktop PC.
Everything will break.
There's just too much freaking complexity. Now I just buy whatever's cheapest so I can buy way more than I need. Mix up the configurations a bit so you get some bio-diversity; if one drive manufacturer has a bad year, you don't want all of your eggs in their basket.
Most important of all, at the first sign of trouble, throw it away.
Try to resist the urge to fix it. I mean it. You cost more than that piece of junk. Put in a purchase request and move on.
Write cache (Score:2)
SCSI. Still. (Score:5, Informative)
If you doubt it, try both.
For going on twenty years it's been the same: those who haven't tried SCSI claim that there's little or no difference. Those who have used both SCSI and [MFM,RLL,IDE,ATA,SATA] in high-load environments hate to make do with anything but SCSI.
For both performance and reliability reasons, you want SCSI if you're dealing with high-random-access-load or high-throughput situations. ATA/SATA is fine if you're just offering up noncritical bulk network storage, but for the rest you want the real deal, and you will notice the obvious difference if you try both in a stressed environment.
Do RAID 5 ! (Score:5, Insightful)
No, choose RAID 5 instead of RAID 1+0. Here is why:
To give you a datapoint, I have set up multiple Linux software RAID 5 arrays on various servers with 10+ SATA disks, and the I/O throughput is over 500 MB/s (enough to saturate 2 full-duplex GigE links!). At my previous job we had about 200 servers, all using Linux software RAID 5, and we were MUCH happier than with the previous setup, where all of them used hardware RAID 5. Moreover, Linux's software RAID 5 is more flexible (create arrays on ANY disk on ANY SCSI/SATA card in the system), more consistent (one and only one control tool to learn: mdadm(8), no need to use crappy vendor tools or reboot into vendor BIOSes), cheaper (no hardware to buy), more reliable (no RAID card = 1 less hw component that can fail), easier to troubleshoot (plug the disks into ANY Linux server and it works, no reliance on any particular hw card), and more scalable (spread the load across multiple disk controllers, multiple PCI-X/PCIe busses, or even multiple SAN devices).
It's amazing the amount of misinformation and misconceptions about RAID that are spread around the world. I hate to say it, but 95% of IT engineers make poor choices about RAID servers because of all those misconceptions.
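For anyone fuzzy on why RAID 5 tolerates one dead disk: the parity block is the XOR of the data blocks, so any single missing block can be rebuilt from the survivors. A toy sketch (real arrays rotate parity across disks; this uses one fixed parity "disk" and made-up data for clarity):

```python
# Toy RAID 5 parity demo: XOR parity lets one lost block be rebuilt.
from functools import reduce

data_disks = [b"\x10\x20", b"\x0f\x33", b"\xa1\x02"]   # made-up data blocks

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(data_disks)

# Simulate losing disk 1, then rebuild it from the survivors plus parity.
survivors = [blk for i, blk in enumerate(data_disks) if i != 1]
rebuilt = xor_blocks(survivors + [parity])

print(rebuilt == data_disks[1])   # True: the lost block is recovered
```

This is exactly why a second failure before the rebuild finishes is fatal: with two blocks missing, the XOR equation no longer has a unique solution.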
MTBF usually better on SCSI (Score:3, Informative)
If you want reliability for the disk you had better check what the manufacturer claims for the MTBF (mean time between failure).
Many SATA drives have an MTBF of around 0.6 to 1 million hours, whereas SCSI drives are between 1 and 2 million. Your SCSI disk therefore has about twice the life expectancy. Couple this with the speed of SCSI, and I'd say for the moment, if your budget allows for it, go for SCSI.
If your budget doesn't allow for it, just make sure you have good redundancy in your RAID, with at least 2 redundant disks.
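To put those MTBF figures in more intuitive terms, here's a sketch converting them to a rough annualized failure rate under an exponential model (the specific MTBF values are just the midpoints of the ranges quoted above, assumed to be in millions of hours):

```python
# Convert a quoted MTBF into a rough annualized failure rate.
# Assumes an exponential (constant-hazard) failure model.
import math

HOURS_PER_YEAR = 24 * 365

def annual_failure_rate(mtbf_hours):
    """P(a drive fails within one year of continuous operation)."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

for label, mtbf_mhrs in [("SATA (~0.6M hours)", 0.6), ("SCSI (~1.2M hours)", 1.2)]:
    print(f"{label}: ~{annual_failure_rate(mtbf_mhrs * 1e6):.1%} per year")
```

Doubling the MTBF roughly halves the yearly failure odds per drive, which compounds quickly across a multi-drive array.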
Re:What's this SCSI you speak of? (Score:2, Insightful)
At least give us a 2 line explanation, so I don't think you're speaking crap that you read in an advertisement! Please, if you have some justification, I'd love to know what it is... seriously.
Re:What's this SCSI you speak of? (Score:2)
Re:What's this SCSI you speak of? (Score:2)
Re:What's this SCSI you speak of? (Score:2, Insightful)
"I can't find the network drive"...
SCSI will seem a bit more expensive at first, but that cost isn't just for the interface; most of it is for the extra testing and hardware reliability you get with SCSI.
I am a pir8 and I back up everything important, so I can run a SATA RAID-0.
But if you want something with modular controllers, hot-swappable RAID arrays, and reliability - hell, if I was running a business off my home PC, that would be
Re:Only a rookie would suggest RAID 0+1 (Score:3, Insightful)
0+1 is a mistake, but 1+0 isn't. 0+1 loses the data set once one drive on each side of the mirror has failed; 1+0 only if you lose both disks in a single mirror pair.
RAID 5 has noticeably slower disk performance for writes, and radically slower performance for reads and writes if you lose a disk. In many cases, the performance duri
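The parent's 0+1 vs 1+0 distinction can be made concrete by counting which second failures are fatal once one drive has already died. A sketch assuming N mirrored pairs (2N drives total):

```python
# After one drive dies, count how many of the remaining drives would
# kill the array if they failed next. Assumes N mirrored pairs.
def fatal_second_failures(n_pairs, layout):
    remaining = 2 * n_pairs - 1
    if layout == "1+0":
        # Stripe of mirrors: only the dead drive's mirror partner is fatal.
        return 1, remaining
    if layout == "0+1":
        # Mirror of stripes: the first failure already killed one stripe
        # side, so losing ANY of the other side's n_pairs drives is fatal.
        return n_pairs, remaining
    raise ValueError(layout)

for layout in ("1+0", "0+1"):
    fatal, total = fatal_second_failures(3, layout)
    print(f"RAID {layout}: {fatal} of {total} possible second failures are fatal")
```

With six drives, a random second failure kills a 0+1 array 3 times out of 5 but a 1+0 array only 1 time in 5 - the same conclusion the parent states, and the gap widens as you add pairs.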
Re:Only a rookie would suggest RAID 0+1 (Score:2)
Are you willing to spend for 10 terabytes, just to get 1 terabyte of usable space? Well, then don't expect to have incredible throughput while experiencing the most catastrophic failure imaginable...