Technology

SCSI vs. SATA In a File Server? 303

turboflux asks: "I'm currently in the process of replacing an aging file server with something more robust. Company-wide, there will be about 100 people who could be using this server, but I don't imagine there being more than 50 concurrent users. Right now, I'm torn between spending a lot on SCSI hardware, much like our other servers, or spending less, but getting more space, with SATA II drives. Whatever I decide, the server will be set up with a RAID 1+0 array for the numerous benefits it offers. Does Slashdot have opinions or suggestions on performance, reliability, and stability?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday January 24, 2006 @10:32PM (#14553887)
    Have you ever thought about the benefits of RLL?
  • SCSI?? (Score:3, Funny)

    by ZachPruckowski ( 918562 ) <zachary.pruckowski@gmail.com> on Tuesday January 24, 2006 @10:41PM (#14553918)
    I don't mean this as a troll, but I was under the impression that USB 1.0 replaced SCSI? Or was that desktop only, and it still has server uses? I mean, I thought USB killed SCSI? Or am I thinking of something different?
    • No, you're thinking Firewire. :P
    • That was for external devices like scanners.
    • USB 1.0 didn't kill SCSI, Steve Ballmer F**KING KILLED SCSI!

      Now Ballmer says he is going to F**KING KILL USB.

      Man I need a T-Shirt that says that.

      "Steve Ballmer says, "I'm going to f**king kill you!"".

    • Re:SCSI?? (Score:3, Informative)

      by Anonymous Coward
      You people are really showing your (inadvanced) age! Back in the good old days, many external peripherals (such as scanners) were connected to your machine via a SCSI bus. Don't forget what SCSI stands for - Small Computer (Serial|Standard) Interface. I believe our friend here was referring to peripherals, and in that case he's right - SCSI was replaced by USB. As the rest of you seem to have been born in the 80s, you probably thought he was referring to SCSI hard disks - the most common use of SCSI the
      • Re:SCSI?? (Score:3, Funny)

        by Kymermosst ( 33885 )
        You people are really showing your (inadvanced) age!

        And you are showing your senility:

        Don't forget what SCSI stands for - Small Computer (Serial|Standard) Interface.

        Heh. Wrong.

        Small Computer System Interface [t10.org].

  • SATA is fine (Score:5, Insightful)

    by Bombcar ( 16057 ) <racbmob@@@bombcar...com> on Tuesday January 24, 2006 @10:41PM (#14553919) Homepage Journal
    SATA is fast and cheap; just make sure you spend a little bit more to get the "nearline" storage drives and not just desktop drives. Put them behind a 3ware 9550 and you'll fly.
    • by abcess ( 14260 ) on Tuesday January 24, 2006 @11:16PM (#14554096)
      As a matter of fact, you may not be flying at all. It all depends what you're using it for. The problem with SATA is latency, and there's not much that controller is going to do about it. If you've got a server that is performing latency sensitive tasks, then SATA can cause performance problems.

      In my experience, if you've got a lot of random I/O, SATA is not a viable solution. That said, even if your I/O is mostly random, if there's not a heavy load on the disk, then you're probably OK. If you've got 200 people hitting a database or email server, you're probably going to have some performance problems. Swap it out with SCSI drives, or a quality disk array, and you'll be doing much better. If you've got a web server, or a database server that is exclusively reading, you can probably get away with SATA. Again, it all depends on how much and how random the disk I/O for your application is.
      • So NCQ or whatever doesn't cut it?

        I guess I can understand what you mean, though. It should also be clarified that it's not just any SCSI drive you need; it would need to be an array of 15k RPM drives if latency is the real issue, because there do exist 10k RPM SATA drives with decent capacity that are more or less the same drive mechanicals as their SCSI counterparts, just with a different interface.
      • by Anonymous Coward on Wednesday January 25, 2006 @05:26AM (#14555773)
        Assuming equal storage sizes, SCSI drives would have way better throughput and latency than a SATA drive because you can get 15K SCSIs. However, the sizes are NOT equal. Fact is that for the price of a 147GB 15K SCSI drive, you can get about 2TB of 7200RPM SATA space.

        What you end up with is the following throughput when disks are empty:

            1x147GB 15K SCSI -- 150MB/s
            8x250GB 7200 SATA -- 275MB/s to 550MB/s depending on exact RAID configuration

        Now fill up both configurations with 140GB of data and the throughput of the 15K SCSI has dropped in half to 75MB/s because the heads are now positioned at the "slower" inner portion of the disk. Meanwhile, the 2TB SATA config is 7%-15% slower depending on the RAID config.

        Latency also benefits from many disks for the same reason. Fill up a disk and you possibly have to traverse the entire disk. So while a 15K drive has a seek time 2-3 times faster, you end up having to move 10X-15X farther than in a mega array where the heads pretty much just hover over the 2X faster outer portion.

        The big advantage for SCSI is the better TCQ algorithms for multi-user access. This can be mostly negated if you use a SATA RAID controller with enough onboard RAM to reorder IO at the controller level versus depending on the drive's NCQ.

        This is the route we've taken -- we went from an LSI MegaRAID 320-1 + 4-drive SCSI RAID config to an Areca 1170 + 1GB RAM + 24-drive SATA RAID. Every aspect of performance is up by big amounts -- throughput, latency, multi-user access. The drive array is actually TOO fast for our 2x244 Opteron server to drive. We ended up breaking the array into 3 8-drive volumes and mirroring 2 volumes against each other for more redundancy. One of these days, we'll upgrade to faster CPUs and retest a 16-drive volume.
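
        A rough back-of-the-envelope check of the parent's reasoning (all of the capacity, price and MB/s figures below are the commenter's assumed numbers, not measurements, and the linear inner-track model is a crude simplification):

            # Crude model: filling a small 15K SCSI drive pushes the heads onto the
            # slow inner tracks, while the same data sits on the fast outer zone of
            # a much larger SATA array.  Figures are the parent's assumptions.
            data_gb = 140

            scsi_capacity_gb, scsi_empty, scsi_inner = 147, 150.0, 75.0
            scsi_fill = data_gb / float(scsi_capacity_gb)          # ~0.95, nearly full
            scsi_now = scsi_empty - (scsi_empty - scsi_inner) * scsi_fill
            print("15K SCSI with 140GB on it: ~%.0f MB/s" % scsi_now)

            sata_capacity_gb = 8 * 250
            sata_fill = data_gb / float(sata_capacity_gb)          # ~0.07, outer zone only
            for empty in (275.0, 550.0):                           # depends on RAID layout
                now = empty - (empty / 2) * sata_fill              # assume the same 50% inner-track penalty
                print("8x250GB SATA with 140GB on it: ~%.0f MB/s" % now)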
    • SCSI is fast and reliable for non-technical reasons.

      It's not the interface itself. Interface incompatibility is used to split the market into regular users and those who don't have to spend their own personal money.

      Think about rotation speeds, seek times, and bearing wear. How could these specs possibly have anything to do with the data cable? They don't, except in the mind of a VP of marketing. Despite the obvious lack of technical reasons, SCSI drives often come with better specs. That's life.

    • Re:SATA is fine (Score:3, Interesting)

      by Dante ( 3418 ) *
      Having replaced sixteen drives and lost two RAID 5s (with hot spares, no less) in the last eighteen months, I can't recommend SATA for anything that is mission critical.
    • Even the "enterprise" SATA drives are crappy. Out of more than 200 hard drives, I've replaced four times as many of the SATA disks vs. the SCSI disks (12 vs. 3).
  • SCSI (Score:2, Funny)

    by armanox ( 826486 )
    Clearly the answer is SCSI - go with what you know, servers are not the best computers to experiment with random equipment.
  • BACKUP! (Score:4, Interesting)

    by Anonymous Coward on Tuesday January 24, 2006 @10:44PM (#14553933)
    Think about the fact that you need a sufficient backup. You can buy lots of cheap storage with SATA disks, but Ultrium 3 tapes (400/800GB) are still expensive as fuck. Never forget that cost when calculating.

    OTOH, there are 300GB U320 disks now, which you could use if latency is not an issue. Otherwise, go with lots of disk arms (72GB or 36GB U320 disks).
    • Re:BACKUP! (Score:4, Informative)

      by Spazmania ( 174582 ) on Tuesday January 24, 2006 @10:55PM (#14553998) Homepage
      ATA and SATA drives are a great choice for online backup. It's pretty easy to put several terabytes' worth in a box these days, software RAID-5 them with Linux and then use tar and gzip. The price is not exceptionally higher than tapes either, and the reliability (i.e. your success rate restoring data) is superior.
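
      For what it's worth, the tar-and-gzip half of that recipe is only a few lines of script. A minimal sketch in Python (the paths and naming scheme here are hypothetical examples, not a recommendation):

          #!/usr/bin/env python
          # Write a gzip-compressed tar archive of a directory onto a disk-based
          # backup volume.  Source and destination paths are made-up examples.
          import os, tarfile, time

          source = "/srv/fileserver/data"                  # what to back up
          dest_dir = "/mnt/backup-raid5"                   # big software-RAID volume
          archive = os.path.join(dest_dir, "data-%s.tar.gz" % time.strftime("%Y%m%d"))

          with tarfile.open(archive, "w:gz") as tar:       # "w:gz" = gzip compression
              tar.add(source, arcname=os.path.basename(source))
          print("wrote %s" % archive)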

      • Re:BACKUP! (Score:3, Interesting)

        by eric76 ( 679787 )

        ATA and SATA drives are a great choice for online backup. It's pretty easy to put several terabytes' worth in a box these days, software RAID-5 them with Linux and then use tar and gzip. The price is not exceptionally higher than tapes either, and the reliability (i.e. your success rate restoring data) is superior.

        That depends on how long you need to store the data.

        If you need it for a short time, you might be correct.

        But if you may need the data 5 years or more from now, tape is clearly far superior.

        • Re:BACKUP! (Score:5, Insightful)

          by Spazmania ( 174582 ) on Wednesday January 25, 2006 @12:30AM (#14554516) Homepage
          But if you may need the data 5 years or more from now, tape is clearly far superior.

          You have much luck getting data back from a tape five years later?

          First you have to find the tape. You can't have misplaced it and you can't have reused it due to the damn high cost of magnetic tape.

          Then you have to find a drive that can read the tape. The one you wrote it with died two years ago, it's no longer manufactured, and oh darn, none of the three you picked up off eBay use the same compression format.

          Next you need the old backup software. You've been using Acme Archiver for the past three years; it doesn't understand the old SuperBackup format, and unfortunately SuperBackup only ran in DOS with an 8-bit ISA SCSI card.

          Finally you have to pray that the tape is still good. They're like floppy disks; they go bad just sitting on the shelf.

          Buddy, I've been there. It ain't pretty. So for the last 7 years I've stored my backups on hard disks. No pain! No pain!
          • Re:BACKUP! (Score:3, Insightful)

            by eric76 ( 679787 )

            You have much luck getting data back from a tape five years later?

            Yes.

            First you have to find the tape. You can't have misplaced it and you can't have reused it due to the damn high cost of magnetic tape.

            That is no problem at all. I keep a detailed listing of which backup set is stored on which backup media.

            As far as the "damn high cost of magnetic tape", you must be talking about those cheap tape drives that use expensive tapes. We have a couple of those around here, but we don't use them much at all.

            Int

    • Re:BACKUP! (Score:5, Informative)

      by kahanamoku ( 470295 ) on Wednesday January 25, 2006 @12:03AM (#14554378)
      I've seen more dead HDDs than backup tapes, and I've seen 60 times as many backup tapes as HDDs...

      and last time I checked, an Ultrium 3 tape was half the price of a 400GB Drive.

      I wouldn't use disks for backup, unless they're to be used as live backups, and then I'd still archive to tape (provided it was affordable).
      • Re:BACKUP! (Score:3, Insightful)

        by eric76 ( 679787 )
        You are absolutely correct. In addition, with a tape, you can much more easily take copies off-site for storage. I frequently suggest that people get a safety deposit box in a bank at least 20 miles away from their facilities and store a copy of their backups there.
      • last time I checked, an Ultrium 3 tape was half the price of a 400GB Drive.

        That doesn't include the ~$2000 cost for the tape drive itself. The real cost for tape is $2000 + $75 per tape, while the real cost of SATA is just $200 per disk (or $240 if you want an external enclosure for each one (see below)). Unless you've got more than 6.4 TB of data to back up (the point where 2000+75x = 200x, where x is a 400GB-capacity tape or disk) (or 4.8TB assuming enclosures), hard disks are cheaper.

        (Incidentally, I
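
        The break-even arithmetic above is easy to check (drive, tape and enclosure prices are the parent's figures):

            # Break-even point between an Ultrium 3 tape setup and bare SATA disks,
            # using the parent's prices; each unit holds 400GB.
            tape_drive = 2000.0   # one-time cost of the tape drive ($)
            tape_media = 75.0     # per 400GB tape ($)
            for label, disk_cost in (("bare disks", 200.0), ("disks + enclosures", 240.0)):
                units = tape_drive / (disk_cost - tape_media)   # tape_drive + tape_media*n == disk_cost*n
                print("%s: break-even at %.1f units, i.e. %.1f TB" % (label, units, units * 0.4))
            # -> 16.0 units / 6.4 TB bare, ~12.1 units / ~4.8 TB with enclosures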

  • Hmm (Score:5, Informative)

    by PrvtBurrito ( 557287 ) on Tuesday January 24, 2006 @10:44PM (#14553936)
    We have both SCSI RAID (two 1TB arrays with 10k RPM SCSI drives on a Dell PowerVault) and several arrays with 3ware cards (an 8-way and a 12-way, both with 200 or 250GB drives). We run Red Hat WS. We find that the 3ware cards are excellent for large data storage but have latency issues compared to the SCSI RAID array. We are happy with both systems, but the price break on the 3ware shows, and I wouldn't recommend it for really heavy use.
    • by MarcQuadra ( 129430 ) * on Tuesday January 24, 2006 @11:06PM (#14554040)
      I recently did all this research myself. SATA on Linux is going to get MUCH faster, probably as fast as SCSI, but you'll have to wait for the libATA improvements to take hold. Right now NCQ isn't implemented, and neither are 'multiple sector transfers'. I bought hardware that WILL support those features because I know that NCQ will dramatically improve speed and latency (under high-use conditions) when it is finally fully-baked.

      The site to track progress on the library and driver status is here: http://linux.yyz.us/sata/ [linux.yyz.us]

      The project has been moving along quite well. I think their goal is to completely modularize, simplify, optimize, and consolidate the ATA, ATAPI, and SATA kernel pieces into one overarching (underlying?) library. I like this kind of work. I can't see why ALL disk-like I/O isn't under one big modular kernel library; it seems like it would make adding new transport types and drivers a lot simpler and reduce maintenance all around.
  • I'd say SCSI (Score:2, Insightful)

    It's more reliable, as far as I know, compared to SATA. SATA is good enough for desktop performance, but I have yet to hear any glowing reviews of it in the server market.
    • Yes, SCSI is far more reliable, but it really all depends on the application. If the file server is going to be used only intermittently, SATA may be okay. However, if the server is I/O intensive, you really need to go with the SCSI drives. Basically, SATA drives are rated for desktop use (something like a 10% duty cycle...don't quote me), while the SCSI drives are rated at a 100% duty cycle. This is why SCSI is recommended for database servers, while SATA is often recommended for nearline use, like a disk-
    • Re:I'd say SCSI (Score:5, Interesting)

      by GigsVT ( 208848 ) on Tuesday January 24, 2006 @11:17PM (#14554103) Journal
      OK, hear mine then.

      We have several terabytes of SATA storage at work to hold our main business-critical digital asset archive.

      We've been using an ATA/SATA disk-only strategy for over 5 years now. It's worked great, and eliminated our slow and unreliable tape robot, which has greatly improved productivity.

      Back in 1999/2000 SCSI wasn't an option for the main archive because a terabyte of SCSI would have broken the bank. We went ATA back then. It was a mess trying to route 24 ATA cables in a case, I admit. SATA fixes that nicely.

      We keep three copies of our data, two onsite and one offsite. We use rsync-incremental snapshots to do disk-based incremental backups. Because the cost of SATA is less than 1/3rd the cost of SCSI, we get a high reliability solution for less than the price of a single SCSI RAID.

      One more advantage of SATA is that the disks are so cheap, it's easy to just replace all of them every two or three years. The disks you replace them with generally are twice as large after 2 or 3 years, so every cycle your RAIDs get more reliable as the number of disks is slashed in half.

      Most companies wouldn't replace every SCSI disk every two years, it would cost way too much. And considering the slow pace of SCSI size growth, you wouldn't see as much gain, a double hit against SCSI.

      So basically unless you need the excellent latency performance of SCSI, higher than even the WD Raptor can offer, I see no compelling reason to use SCSI for anything anymore.
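
      The rsync-incremental snapshots mentioned above are typically built on rsync's --link-dest option, so unchanged files become hard links into the previous snapshot and only changed files consume new space. A minimal sketch (the hostname and paths are hypothetical):

          #!/usr/bin/env python
          # Hard-link snapshot backup in the style the parent describes.  Assumes
          # rsync is installed; the source host and paths are made-up examples.
          import os, subprocess, time

          source = "archive-server:/srv/assets/"
          snap_dir = "/mnt/sata-backup/snapshots"
          today = os.path.join(snap_dir, time.strftime("%Y-%m-%d"))
          latest = os.path.join(snap_dir, "latest")        # symlink to newest snapshot

          cmd = ["rsync", "-a", "--delete"]
          if os.path.isdir(latest):
              cmd.append("--link-dest=" + latest)          # unchanged files become hard links
          cmd += [source, today]
          subprocess.check_call(cmd)

          if os.path.islink(latest):                       # repoint "latest" at the new snapshot
              os.unlink(latest)
          os.symlink(today, latest)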
  • by toofast ( 20646 ) * on Tuesday January 24, 2006 @10:49PM (#14553965)
    I use SATA on our smaller, non-mission-critical servers. For our data backend, I wouldn't touch it with a 10-foot pole.

    Here are some scenarios where I wouldn't hesitate to use SATA:

    - You have redundant servers. Using LVS and/or Heartbeat and your favorite tools, you can get full server redundancy using less expensive hardware. The overall solution can be quite elegant, with hot failover. Why just cover the drives?

    - Front-end cluster nodes. You have a powerful, expensive backend server (with a cheaper failover) and you use inexpensive front-end servers for serving client requests. Sounds like overkill for what you want, but with the right server load balancing technology, it can give you a scalable, fault-tolerant and damn fast solution.

    - You can live with downtime. Install a server with a couple of SATA disks in a RAID configuration and hope for the best.

  • Go SCSI. It's pretty amazing that there are so many people out there whose only exposure to SCSI has been a mention in a textbook or having read about it, considering it was the standard for performance for so long. The speed and reliability are definitely better than SATA, and as you well know, price isn't much of an issue when the RAID fails is it?
    • price isn't much of an issue when the RAID fails is it?

      The price is a huge issue. The cost of regenerating the lost data is enormous.
    • Are you saying that RAID has a bigger chance of failing on a SATA drive than on SCSI?

      I'm confused...

      Also, the guy with the question mentioned that at most there will be 50 concurrent connections, and the place only has 100 employees. I've run labs and had well over 50 machines connecting to a single Symantec Ghost server... no RAID, standard ATA. No trouble (yes, I realize they don't connect and transfer large amounts of data all the time like a fileserver). If he was really worried about speed, I'm sure S
  • by Anonymous Coward
    For any tier 2 storage, SATA is the future. I'd go so far as to say that in about 80% of all instances, good SATA-II drives with NCQ and large caches in a proper RAID setup will be far, far more than adequate for most production servers unless you do tons of non-sequential I/O, need tons of iops, etc.

    For a file server, you'll be fine with SATA.

    For my tier 2 servers, I am moving a ton of stuff off of my EMC gear (because fibre channel drives are damned expensive) onto SATA-II drives in an iSCSI setup. I'm al
  • The real info (Score:5, Informative)

    by sabreofsd ( 663075 ) on Tuesday January 24, 2006 @10:53PM (#14553988) Homepage
    There might be some benefit to you in sticking with SCSI vs. SATA; it really depends on your preference. Both SCSI and SATA offload the main processor from the duties associated with reads and writes. SATA also now has optimized reading patterns just like SCSI. The only real advantages SCSI has right now are the speeds (SATA 150 (there is a newer, faster one coming) vs. SCSI 320). Also, most SCSI drives are designed for 24/7 use, whereas most SATA drives are designed for desktop use. Just make sure the SATA drives you buy are made for enterprise-level operation. So it really comes down to compatibility/speed vs. cheap/larger. Hope this helps!
    • Well, SATA 300 is already out. We're using that for 3 1TB file storage servers on our domain, using a PCI-X x16 RAID controller with RAID5.

      And to be really honest, SATA 150's 150MB/s is not shared with any other drives, whereas SCSI 320's 320MB/s is shared among all the drives on the ribbon cable, AFAIK. I'd personally rather have 8 drives with 300MB/s apiece than 8 drives sharing 320MB/s. The former makes the PCI-X port the bottleneck (or perhaps the RAID controller card itself).
  • If you go with SATA 150, make sure that the drives and the controller support NCQ. This is incredibly important for a server. If you want to split the difference between SATA and SCSI, go SATA II with a 64-bit PCI-X or PCIe RAID card with expandable on-board cache of around 128MB.

    SCSI what?
  • SAS (Score:4, Interesting)

    by Punboy ( 737239 ) * on Tuesday January 24, 2006 @11:00PM (#14554022) Homepage
    Serial-attached SCSI. 15K RPM drives, SAS RAID 1+0. Heaven.
  • by postbigbang ( 761081 ) on Tuesday January 24, 2006 @11:05PM (#14554037)
    SCSI is very fast, and usually more expensive. You can get really fast, highly cached drives in SCSI with high-RPM spindles, and cool controllers. But they're $$$$. Do you need the speed?

    If not, SATA is still pretty fast, much less expensive, less clever controllers, but still very reasonable for things like archiving, steady low-concurrency-demand streaming, and so on.

    SATA also has the advantage of not needing loads of bulky cables with distance limitations imposed on them; it's a serial rather than a parallel bus -- hence the S in SATA. Use SATA when you don't need the absolute fastest you can get -- and you won't have to spend the most on the controller (which is hopefully a SCSI PCI-X controller or other fast clocker), the drives, the pricey cables, and so on. But if you need the speed, there is no faster than SCSI except for flash drives, which are still hideously expensive... and not writeable as much as we'd like them to be.
     
  • Fibre Channel (Score:4, Informative)

    by MoFoQ ( 584566 ) on Tuesday January 24, 2006 @11:10PM (#14554061)
    I'd say Fibre Channel.

    One benefit that SATA does have over SCSI is the cabling... it's smaller and blocks less airflow (and it's easier to do the cabling).

    SCSI, on the other hand, has other benefits... like it's what's used in enterprise servers now. Faster, daisy-chained, more RAID options, etc.

    Of course, Fibre Channel is basically SCSI on steroids and has the cabling benefits that SATA has.

    With more room thanks to less data cabling, you can add water cooling to reduce the heat generated by the 15k+ rpm drives.
  • I am a huge fan of efficient, cheap systems. The bulk of our server load is handled by dual Opteron machines with 3ware cards, 10k rpm system spindles and 7.2k rpm data spindles. However, even the best SATA drives choke under file system and database loads, and our primary data stores are U320. StorageReview.com has a good review of the new 150gig 10k rpm WD drive that shows it getting spanked by SCSI drives under non-linear server loads. Long story short, if you expect a lot of drive activity you m
  • Serial Attached SCSI (Score:5, Interesting)

    by klparrot ( 549422 ) <klparrot@ho[ ]il.com ['tma' in gap]> on Tuesday January 24, 2006 @11:13PM (#14554071)
    You could go with Serial Attached SCSI [wikipedia.org] (SAS). SAS drives offer the high-end performance of traditional SCSI, and you can also hook up regular SATA drives to an SAS controller if you want to go cheap for now and upgrade later, or if you only need some of your drives to have high performance.

    SAS hardware is currently a little harder to find than SCSI or SATA stuff, but I'm sure there's a good selection out there if you take the time to look.

    I was checking out the Sun Fire 4100 [sun.com] a while ago, and it takes SAS drives; however, the form factor is 2.5", and I haven't yet seen any 2.5" SATA drives (I wanted that compatibility). Also, I've heard SATA drives don't work with the Sun Fire 4100's SAS controller anyway. Not sure about that, since the SAS spec says they should work, but it's just something to keep in mind when you're looking for a server or mobo or controller that supports SAS.

  • Stick with SCSI (Score:4, Informative)

    by dFaust ( 546790 ) on Tuesday January 24, 2006 @11:14PM (#14554075)
    SATA drives have definitely improved, and for file servers NCQ definitely helps out a lot... but for the absolute best performance in a (true) multi-user environment, 15k SCSI drives still offer gobs of performance over even the new 150gig 10k Raptor SATA drive. Ultimately it will come down to how important price vs. size is to you... but speaking purely on performance, 15k SCSIs are the way to go.

    One way to curb some of the cost, I might add, would be to switch to something like RAID 5... you won't have as high throughput, but you'll still see performance gains and end up with more usable drive space. The throughput likely won't be your problem, anyways... typically it would be the drive's ability to handle multiple simultaneous requests, which heavily relies on low access times (which is why SCSI dominates in this type of environment).

    Here's a quick reference [storagereview.com] of some IOMeter benchmarks using a file server test pattern. You'll see what I mean. Wealth of info on drives on that site.

  • Depends on load (Score:4, Interesting)

    by darkwiz ( 114416 ) on Tuesday January 24, 2006 @11:14PM (#14554085)
    You cannot buy the same performance class of drives in SATA as you can with SCSI. Some people call this a market segmentation scheme, I call it catering to the market. People who demand top class drive performance typically also want the other benefits of SCSI as well. Whether those benefits are needed for your requirements, well, depends on your requirements.

    SCSI can (depending on which particular SCSI) provide you with more devices per controller without sacrificing (any noticeable) performance. If you need to shove a ton of drives into one server, this will add up quickly. Since you are talking about RAID 0+1, depending on how much storage you are shooting for, this may be a strong factor (but you may be able to get by on the 4-6 SATA ports you'll find on most mobos).

    SCSI is more mature. So drivers are likely to be more robust, more efficient, and more stable than those you'll find in your garden variety SATA.

    You'll typically find that under heavy load, SCSI performs better. Again, this is mostly due to so called "market segmentation" schemes, but that is why you pay more. If your users are going to be mostly dealing with the usual, periodic saving of word processing documents, spreadsheets, and a couple of light media files - you probably don't need to handle really heavy loads. The RAID controller will eat the peaks of write demand in cache (if you get a decent RAID controller - see later), and you should have fairly smooth performance. Then again, if your users are constantly running large installers (development test environment) or working with large remote files - you should really go SCSI.

    All that said: I think you would be served best by investing in a better RAID controller rather than investing in top-of-the-line drives. The RAID controllers they integrate onto most motherboards are crap for what you are trying to do (for desktop use -- meh). You want something with a ton of cache and good management soft/firmware. If you buy a real server-class motherboard, you may get a better onboard RAID, but however you go about this -- pay the most attention to this detail. Unless you really need low latency for high-demand, random-access applications, top-end drives probably won't give you much over the usual network latencies.
  • Go with SCSI (Score:3, Interesting)

    by dclxv ( 553385 ) * <kkimball@gmail.com> on Tuesday January 24, 2006 @11:15PM (#14554087)
    I bought into the whole "SATA is the new less-expensive SCSI" and put in two new file servers using SATA last spring. I can say that I'm unimpressed with the SATA servers as compared with our SCSI servers. I now wish we'd spent the extra $1000/server and gone with SCSI. I recommend SCSI -- you won't second guess your decision down the road.

    BTW, we used 3ware controllers with WD RAID Edition HDs. We're supporting approximately 75 users per server.
  • SATA (Score:5, Informative)

    by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Tuesday January 24, 2006 @11:19PM (#14554109) Homepage
    SATA's peak raw transfer rate (150 MB/sec) is half that of the peak raw transfer rate of SCSI (320 MB/sec), but you're going to be limited by the individual hard drive's transfer rate anyway. Keep in mind that a proper SATA implementation will be 150MB/sec PER DRIVE, since each drive is on its own channel. SCSI is 320 MB/sec per channel, but you're in for a cabling nightmare if you want only one drive per channel. Note that there is a 300 MB/sec SATA standard, although few drives and controllers seem to support it.

    If you buy the right model, you can get SATA drives that have gone through the rigorous quality control testing that has historically been reserved for SCSI drives. Many of the higher end server-grade SATA models are warrantied for 24/7 operation. SCSI has lost its advantage there.

    SATA has Native Command Queueing, formerly a SCSI-only performance feature. Note that it's optional for SATA drives though, so make sure you get a controller and drives that support NCQ. Again, one of SCSI's few advantages has disappeared.

    Last, but most definitely not least, SATA cabling is far simpler and more robust than SCSI cabling. SCSI cabling is a finicky nightmare where even high-end cables can cause data corruption if you're not careful, whereas even the cheapest SATA cables I've seen worked reliably. I've had hardware-related data loss on hard drives twice in my life. One case was an IBM Deathstar, the other was a SCSI cable that started flaking out and corrupted data on three drives at once. I haven't touched SCSI with a ten foot pole since that incident.
  • For some things you NEED SCSI, for others you don't. That much is obvious.

    Large files/streams that require heavily mixed-mode I/O beat the balls off of SATA. E.g. Correct me if I'm wrong, but my partial understanding of SATA is that if many writes are cached and a read enters the queue, the cached writes are trashed.

    so if you are working with check-in/check-out I/O type such as Samba profiles, SVN stuff, or (Samba|N)FS on a small-medium number of small-medium size files, or web stuff, SATA offers best price
  • RAID 5, unless you like 50% waste rather than 1/n (n>=3) waste.
  • by dbarclay10 ( 70443 ) on Tuesday January 24, 2006 @11:37PM (#14554244)
    The very definition of RAID is "Redundant Array of INEXPENSIVE Disks". Emphasis mine.

    I've already read a bunch of posts about how SCSI is more reliable than SATA. Well, they actually mean SCSI drives are generally more reliable than SATA drives (and some actually say so). They're quite correct for the most part.

    Here's what storage vendors don't want you to know: It doesn't matter.

    Use RAID. With SCSI or FC disks, you'll have to use RAID5. At that point, two disk failures in a given array and you're screwed. You REALLY care that two disks don't fail at the same time. And when you're using low-end or even mid-range drives, it happens.

    Why do you have to use RAID5? Because with SCSI or FC disks, RAID5 is the only economical option. With a 300GB SCSI drive going for at least $1200USD, and FC drives of that size going for $2500USD, even the biggest corporations end up using RAID5.

    Of course, RAID5 isn't the only level of RAID. It's the least redundant of any level of RAID, as a matter of fact.

    Go SATA with RAID10, at least 4 drives, ideally six or more. With six drives, the likelihood of having two drives fail before you can replace the first one is somewhat higher than if you're using SCSI, but the likelihood of that second drive causing you data loss due to a failed array is infinitesimally smaller. It's guaranteed with RAID5, and the chance for RAID10 is inversely proportional to the number of disks in the array. So first one drive has to fail, then the second drive which fails has to be in the same RAID1 set. Add onto that that drives do indeed "go old", and the heavier you work them, the faster they get old. With RAID5, disks tend to get worked a lot harder (without any cache, or if the cache misses, each write requires n-2 reads and 2 writes).

    Of course, you've pretty much decided that RAID10 is the way to go. At that point it's cost. If you're looking for 50GB of fast redundant storage, SCSI is going to be slightly cheaper. If you need any amount of storage though, SATA is going to be a whole lot cheaper for the same level of reliability (which requires more spindles), and typically better speed (more spindles means more seeks per second and more megs per second, though one needs to be mindful that big SATA disks are only 7200RPM, while the slowest SCSI disks you're going to get are 10kRPM).

    Summary? I'm value-conscious. I'd go the SATA route. RAID10, four disks minimum to start, a pair of 4-port 3ware SATA cards with 128MB+ of battery-backed cache. I'd do the RAID entirely with software (Linux MD), with each RAID1 set split across two controllers. We get cheap disk redundancy, cheap disk speed, cheap I/Os, and cheap controller redundancy. I'd consider using less fancy controllers, the 3ware jobbies tend to be expensive, but when you're doing big writes the cache makes a massive difference (75MB/s across four disks of RAID10 versus 20MB/s). I've considered putting together a dedicated storage appliance, exporting via SMB/NFS/NBD/GFS/what-have-you, without the battery-backed cache, but with a pair of 1U UPS units (one for each power supply). Then I'd go around turning off all the application-level fsync()ing, and see what happens with 4GB of disk cache. Bet it'd be fast. And with shutdown initiated via UPS trigger, almost as safe as a battery-backed cache. Remember: "Redundant Array of INEXPENSIVE Disks."

    God I ramble.
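
    The parent's point about the second failure can be put in numbers (this assumes failures are independent and equally likely across drives, and looks only at the window before a rebuild completes):

        # Probability that a second drive failure (before rebuild) actually loses
        # data.  With RAID5 any second failure is fatal; with RAID10 it is fatal
        # only if it hits the dead drive's mirror partner.
        def second_failure_fatal(n_drives, level):
            if level == "raid5":
                return 1.0
            if level == "raid10":
                return 1.0 / (n_drives - 1)

        for n in (4, 6, 8):
            print("RAID10, %d drives: %.0f%% of second failures are fatal"
                  % (n, 100 * second_failure_fatal(n, "raid10")))
        print("RAID5, any size: 100% of second failures are fatal")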
    • Couple of comments on this message, which I generally agree with the theme of but with have some caveats on implementation.

      First, one of the benefits of buying cheaper drives is that you can afford to buy an extra one that sits idle most of the time to use as a hot spare. The expected worst-case scenarios are much less serious if you start a rebuild to the spare the minute any one drive fails; you need two failures in the amount of time it takes to copy a disk to be dead, rather than two failures
    • but the likelihood of that second drive causing you data loss due to a failed array is infinitesimally smaller. It's guaranteed with RAID5, and the chance for RAID10 is inversely proportional to the number of disks in the array.

      Assumption: you are talking about the six drives in the RAID 10 array being three sets of mirrored drives, striped; if you are talking about using three drives for each mirror so that the data is double-redundant, you are taking away the price benefit.

      So, while adding more drives mak

    • > The very definition of RAID is "Redundant Array of INEXPENSIVE Disks".

      Actually, the definition has been back-formed to "Redundant Array of Independent Disks", since you won't necessarily be using inexpensive drives any more.

      Just because you put 500GB drives in a RAID array doesn't suddenly make them inexpensive, but they are each independent.
  • Or Serial Attached SCSI.

    Higher throughput than standard SCSI, easier to manage and daisy chain and somewhere I'd read that you could attach SATA drives to SAS controllers - although that's never been confirmed.

  • Use SATA and SCSI.

    There are devices available that appear to the computer to be a SCSI drive when they are really a RAID array of SATA drives.

    Something like the Maxtronic Arena Sivy SA-4830/SA-4831 [maxtronic.com] could give you a 2 TB SCSI drive.

  • All hardware (and software) sucks, and it breaks, it's a fact of life. No matter if you go with SCSI or SATA, the important thing is that you can find out when a drive dies so that it can get replaced.

    Many low- to mid-range SCSI RAID cards (most? all?) either don't have any sort of interface to find the RAID status when the server is up (they just beep at you and expect that somehow that's going to be heard over the AC and server noise when you're walking by the machine), or the tools for checking the raid
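
    If you do end up on Linux software RAID, the "find out when a drive dies" part can be as small as a cron job that reads /proc/mdstat; a rough sketch (how you actually alert -- mail, pager, syslog -- is left as a placeholder print):

        # Crude degraded-array check for Linux md: a failed member shows up as an
        # underscore in the "[UU_U]" status string in /proc/mdstat.
        def degraded_arrays(path="/proc/mdstat"):
            bad, current = [], None
            with open(path) as f:
                for line in f:
                    if line and not line[0].isspace() and " : " in line:
                        current = line.split()[0]                  # e.g. "md0"
                    if "_" in line.split("[")[-1]:                 # e.g. "[UU_]"
                        bad.append(current)
            return bad

        if __name__ == "__main__":
            for md in degraded_arrays():
                print("WARNING: array %s is degraded" % md)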
  • SAS (Score:2, Interesting)

    by SebNukem ( 188921 )
    Serial Attached SCSI brings the best of both worlds together:

    SAS has:
      - lean SATA cables
      - 3Gbps transfer, soon to be 6Gbps. Better than U320
      - 15,000 rpm disks
      - NCQ like SATAII
      - RAID-capable controllers
      - SATA on SAS possible
  • use SCSI... (Score:5, Insightful)

    by Malor ( 3658 ) on Wednesday January 25, 2006 @12:33AM (#14554530) Journal
    50 concurrent users is a LOT. You may not really mean concurrent, as in "50 people actually reading from or writing to this drive at the same time". If you DO mean that, you desperately need SCSI, the fastest you can find. You'll need seek time more than anything else; the drives need to respond as fast as possible to multiplexed requests for data. Rotation speed, which improves seek time and transfer rate, is good too, but it's seek time that's most crucial in heavy multitasking environments. If by 'concurrent' you mean '50 people occasionally hitting the disk', then yeah, you could probably do SATA.

    However, you already have SCSI. Management is used to paying for SCSI machines. If you have 50-100 people depending on something, and it's slow, that's a productivity drag. If you assume that all those people cost $100k/year each (not at all unreasonable with benefits), 50 people are getting paid about 2,500 bucks an hour, or about 20,000 dollars a day. In other words, if you speed them up by just 5% with better hardware, you're saving the company a thousand dollars a day. Even if it's a tiny 1% speed gain, that's still 200 bucks a day. Saving six grand a month for an upfront investment of ten grand is a total no brainer.

    Buy SCSI.
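
    The payback arithmetic above, spelled out (salary, overhead, working hours and the $10k SCSI premium are all the parent's assumptions):

        # The parent's back-of-the-envelope payback calculation.
        people = 50
        cost_per_year = 100000          # fully loaded cost per person ($/year)
        cost_per_hour = people * cost_per_year / 2000.0   # ~ $2,500/hour
        cost_per_day = cost_per_hour * 8                  # ~ $20,000/day

        for speedup in (0.05, 0.01):
            saved_per_day = cost_per_day * speedup
            print("%.0f%% speedup: $%.0f/day, $%.0f/month, pays back $10k in %.0f days"
                  % (speedup * 100, saved_per_day, saved_per_day * 30, 10000 / saved_per_day))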
  • by deep44 ( 891922 ) on Wednesday January 25, 2006 @12:33AM (#14554532)
    .. if you do end up with SATA, make sure to get some neon lighting for inside the case.

    --
    Current setup - 4x Seagate 400GB SATA, NVRAID-0, ThermalTechno 4000 1U Case w/ Ground-FX, 3x Zalmat 80mm SilentKiller Fans (soon)
  • by defile ( 1059 ) on Wednesday January 25, 2006 @12:38AM (#14554564) Homepage Journal

    If my limited experience has taught me anything about computer reliability, it's that a single mis-set bit somewhere can bring down a system. Maybe the bit got there by user error, maybe it got there because of RAM or disk failure, maybe it got there from a bug in the application, OS, or firmware. Maybe a component on the motherboard shorted out. Maybe it's the climate. Maybe it's the phase of the moon.

    I've seen it happen with discount ghetto hardware, I've seen it happen with high end hardware. I've seen it happen on Windows. On Linux. On FreeBSD. On Solaris. I've seen servers go down due to catastrophic hardware failure and I've seen them go down because a $2 fan died. I've seen people come inches from major power supply caused injury working on a desktop PC.

    Everything will break.

    There's just too much freaking complexity. Now I just buy whatever's cheapest so I can buy way more than I need. Mix up the configurations a bit so you get some bio-diversity; if one drive manufacturer has a bad year, you don't want all of your eggs invested in them.

    Most important of all, at the first sign of trouble, throw it away.

    Try to resist the urge to fix it. I mean it. You cost more than that piece of junk. Put in a purchase request and move on.

  • So many replies about performance and all the great new features in SATA I/O... Too bad no one has mentioned the real issue for high-end use... All modern SCSI drives allow the HBA to make sure data has actually been written to disk. SATA does not. If you lose power, you lose the data in cache. Worse even in a RAID - there the data might be written to two of the data disks but your redundancy disk(s) have not been. Now, if a drive fails, you'll restore from parity and boom - you have wrong data that according to
  • SCSI. Still. (Score:5, Informative)

    by aussersterne ( 212916 ) on Wednesday January 25, 2006 @01:48AM (#14554933) Homepage
    SCSI still tears the alternatives to shreds for price/performance at the heavy end of the load curve, no doubt about it.

    If you doubt it, try both.

    For going on twenty years it's been the same: those who haven't tried SCSI claim that there's little or no difference. Those who have used both SCSI and [MFM,RLL,IDE,ATA,SATA] in high-load environments hate having to make do with anything but SCSI.

    For performance and reliability reasons both, you want SCSI if you're dealing with high-random-access-load or high-throughput situations. ATA/SATA is fine if you're just offering up noncritical bulk network storage but for the rest you want the real deal, and you will notice the obvious difference if you try both in a stressed environment.
  • Do RAID 5 ! (Score:5, Insightful)

    by this great guy ( 922511 ) on Wednesday January 25, 2006 @03:21AM (#14555380)
    Whatever I decide, the server will be set up with a RAID 1+0 array for the numerous benefits it offers.

    No, choose RAID 5 instead of RAID 1+0. Here is why:

    • RAID 5 offers more usable disk space. With N disks of X GB, RAID 5 gives you (N-1)*X GB while RAID 1+0 only gives you (N/2)*X GB.
    • The maximum theoretical I/O throughput is better with RAID 5 than with RAID 1+0. With N=4 it is 1.5 times better, and when N is large (>= 8) it tends to be twice as good.
    • RAID 5 is more customizable than RAID 1+0, giving you more control over the usable space / total space ratio. For example, with N=10 you can choose to create 1, 2 or 3 RAID 5 arrays, while with RAID 1+0 you only have 1 choice (1 large array; creating multiple smaller arrays is equivalent to a large one).
    • Linux's RAID 5 implementation rocks and consumes MUCH less CPU than people think, especially with today's 2+ GHz processors. Kernel hackers have found their implementation to be MUCH FASTER than most expensive RAID 5 hardware cards.

    To give you a datapoint, I have set up multiple Linux software RAID 5 arrays on various servers with 10+ SATA disks, and the I/O throughput is over 500 MB/s (enough to saturate 2 full-duplex GigE links!). At my previous work we had about 200 servers, all using Linux software RAID 5. And we have been MUCH HAPPIER than with the previous setup where all of them were using hardware RAID 5. Moreover, Linux's software RAID 5 is more flexible (create arrays on ANY disk on ANY SCSI/SATA card in the system), more consistent (one and only one control software to learn: mdadm(8), no need to use crappy vendor tools or reboot into vendor BIOSes), cheaper (no hardware to buy), more reliable (no hardware card = 1 less hw component that can fail), easier to troubleshoot (plug the disks into ANY Linux server and it works, no reliance on any particular hw card) and more scalable (spread the load across multiple disk controllers, multiple PCI-X/PCIe busses, or even multiple SAN devices).

    It's amazing the amount of misinformation and misconceptions about RAID that is spread around the world. I hate to say it but 95% of IT engineers don't make good choices regarding RAID servers because of all those misconceptions.
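
    To put numbers on the capacity bullet above (disk count and size are arbitrary examples):

        # Usable space for RAID 5 vs RAID 1+0, per the formulas in the parent comment.
        def usable_gb(n_disks, disk_gb, level):
            if level == "raid5":
                return (n_disks - 1) * disk_gb
            if level == "raid10":
                return (n_disks // 2) * disk_gb

        for n in (4, 8, 12):
            print("%2d x 250GB: RAID5 %4dGB usable, RAID10 %4dGB usable"
                  % (n, usable_gb(n, 250, "raid5"), usable_gb(n, 250, "raid10")))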

  • by abdulwahid ( 214915 ) on Wednesday January 25, 2006 @04:37AM (#14555634) Homepage

    If you want reliability, you had better check what the manufacturer claims for the disk's MTBF (mean time between failures).

    Many SATA drives have an MTBF of around 0.6 to 1, whereas SCSI drives have between 1 and 2. Your SCSI disk therefore has about twice the life expectancy. If you couple this with the speed of SCSI, I'd say that, for the moment, if your budget allows for it, go for SCSI.

    If your budget doesn't allow for it... just make sure you have good redundancy in your RAID, with at least 2 redundant disks.
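
    Treating those MTBF figures as millions of power-on hours (the way drive MTBF was usually quoted), you can translate them into a rough annualized failure rate; the conversion assumes a constant failure rate and 24/7 operation, which is a simplification:

        # Rough annualized failure rate (AFR) from a quoted MTBF.
        HOURS_PER_YEAR = 8766.0
        for label, mtbf_hours in (("SATA, low", 600000), ("SATA, high", 1000000),
                                  ("SCSI, low", 1000000), ("SCSI, high", 2000000)):
            print("%-10s MTBF %7d h -> ~%.1f%% AFR"
                  % (label, mtbf_hours, HOURS_PER_YEAR / mtbf_hours * 100))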
