Data Storage Technology

SATA vs ATA?

An anonymous reader asks: "I have a client that needs a server with quite a bit of storage, a reasonable level of reliability and redundancy, and all for as cheap as possible. In other words, they need a server with a RAID array using a number of large hard drives. Since SCSI is still more expensive than ATA (or SATA), I'm looking at using either an ATA or a SATA RAID controller from Promise Technology. While I had initially planned on using SATA drives, I have recently read some material that made me rethink that decision and stick with ATA drives. What kind of experiences (good and bad) have people had with SATA drives as compared to ATA drives, especially in a server environment?"
  • by nneul ( 8033 ) * <nneul@neulinger.org> on Friday June 18, 2004 @06:41PM (#9468181) Homepage
Nice idea, but poor implementation: they have had a tendency to come loose easily on several servers we have.
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Friday June 18, 2004 @06:43PM (#9468206)
    Comment removed based on user account deletion
  • by DocSponge ( 97833 ) on Friday June 18, 2004 @07:42PM (#9468733)
You may want to read this whitepaper [sr5tech.com] and see what they have to say about using ATA or SATA drives in a RAID configuration. It is possible, due to the use of write-back caching, to lose the integrity of the RAID array and lose your data, eliminating any initial cost benefits. To quote the paper:
    Though performance enhancement is helpful, the use of write back caching in ATA RAID implementations presents at least two severe reliability drawbacks. The first involves the integrity of the data in the write back cache during a power failure event. When power is suddenly lost in the drive bays, the data located in the cache memories of the drives is also lost. In fact, in addition to data loss, the drive may also have reordered any pending writes in its write back cache. Because this data has been already committed as a write from the standpoint of the application, this may make it impossible for the application to perform consistent crash recovery. When this type of corruption occurs, it not only causes data loss to specific applications at specific places on the drive but can frequently corrupt filesystems and effectively cause the loss of all data on the "damaged" disk.
    Trying to remedy this by turning off write-back caching severely impacts the performance of the drives, and some vendors do not certify the recovery of drives that deactivate write-back caching, so this may increase failure rates.

Losing data on an ATA RAID array happened to a friend of mine, and I wouldn't advise using anything other than SCSI without understanding the ramifications.

    Best regards,

    Doc

I made a New Year's resolution to give up sigs... so far so good!
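A short sketch (not from the whitepaper) of the ordering problem described above: an application that needs crash-consistent commits must flush data to stable storage *before* writing the commit record, and a drive that reorders writes in its write-back cache can silently break that ordering. The journal format here is purely illustrative.

```python
# Sketch: durable append-then-commit ordering, the guarantee that a
# lying or reordering write-back cache can violate after power loss.
import os
import tempfile

def durable_append(path, payload):
    """Append a record and force it to stable storage before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, payload)
        os.fsync(fd)  # barrier: only effective if the drive honors the flush
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "journal.log")
    durable_append(log, b"BEGIN txn 1\n")
    # The COMMIT is only safe to write because the BEGIN flush returned.
    durable_append(log, b"COMMIT txn 1\n")
    with open(log, "rb") as f:
        contents = f.read()
```

If the drive acknowledges the flush without actually writing, the COMMIT record can survive a power failure while the data it commits does not, which is exactly the corruption scenario the paper describes.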

  • by cpeterso ( 19082 ) on Friday June 18, 2004 @07:58PM (#9468848) Homepage

Why doesn't Promise abstract their cross-platform code from the Linux and Windows device driver "glue" code? Then they would only need to port the platform-specific code once, and all their drivers' platform-independent code should "just work". (but keep your fingers crossed anyways) ;)

I know Linus does not like cross-platform wrapper crap code in his kernel, but there is nothing preventing Promise from doing this outside Linus's tree, or from wrapping the Linux device driver API around the Windows device driver model.

  • by Devalia ( 581422 ) on Friday June 18, 2004 @08:32PM (#9469115)
Whether it's just Maxtor in general or a few poorly constructed hard drives, I've had a few problems with the connectors -- the plastic tabs at the back that hold the cable in place had a bad habit of being extremely easy to break :(
  • by Guspaz ( 556486 ) on Friday June 18, 2004 @09:04PM (#9469282)
    IIRC, I read something about a certain drive that had some sort of retention clip system. So it seems that the falling-out problem has already been solved by at least some manufacturers.
  • by hamanu ( 23005 ) on Friday June 18, 2004 @09:16PM (#9469354) Homepage
Actually, I just had a 120GB Maxtor drive that I used to replace a failed 60GB one give me kernel messages to the effect of "flush cache command failed", meaning the disk refused to obey when the kernel told it to flush the write-back cache (probably to make Windows benchmarks look better). Why should I trust this drive when I tell it to disable the write-back cache entirely?

    Furthermore, if I am using a hardware raid how do I use hdparm? And finally, ATA drives have write-back ON by default, SCSI drives have it OFF by default.
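For what it's worth, a sketch of checking the write-cache state from `hdparm -W` output. The output format below is an assumption based on common hdparm builds; verify against yours. And as the comment notes, on hardware RAID the physical drives are hidden behind the controller, so running hdparm against the exported volume won't reach them.

```python
# Sketch: parse the write-caching state out of `hdparm -W /dev/sdX`
# output. The line format (" write-caching =  1 (on)") is assumed.
import re

def write_cache_enabled(hdparm_output: str) -> bool:
    """Return True if the drive reports write-caching = 1."""
    m = re.search(r"write-caching\s*=\s*(\d+)", hdparm_output)
    if not m:
        raise ValueError("no write-caching line found")
    return m.group(1) == "1"

sample = "/dev/sda:\n write-caching =  1 (on)\n"
```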
  • by SlashingComments ( 702709 ) on Friday June 18, 2004 @10:29PM (#9469868)
Stability:
    1. 3Ware
    2. AMI MegaRAID (the 4-port ones)
    3. Naked drives with Linux software RAID
    ...the rest are either crap or I did not use them.

    Performance:
    1. Naked drives with Linux software RAID
    2. MegaRAID/3ware -- both slower

    I don't know why, but Linux software RAID on naked drives *always* comes out on top for performance. Maybe you guys can tell me.
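One piece of the picture: the parity math that Linux software RAID (md) does for RAID-5 is just XOR, which a host CPU handles easily, while md also gets the kernel's full page cache for free. A minimal illustrative sketch of that parity and rebuild logic (block sizes and names are made up):

```python
# Sketch of RAID-5-style XOR parity as computed on the host CPU.
def parity(blocks):
    """XOR all blocks together to form the parity block."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def reconstruct(surviving_blocks, parity_block):
    """Rebuild one missing data block from the survivors plus parity."""
    return parity(list(surviving_blocks) + [parity_block])

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = parity(data)
rebuilt = reconstruct(data[1:], p)  # pretend data[0] was lost
```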

  • Experience... (Score:4, Interesting)

    by poofmeisterp ( 650750 ) on Friday June 18, 2004 @10:29PM (#9469878) Journal
    The backplanes on server cases are horrid for SATA. They work, but you have to have special hookups for the LEDs (drive fail and activity) and often the controller cards or motherboards don't supply them. All I've managed to get is power LEDs on the front of the Super Micro cases I've worked with.

    SATA is not that much faster in practice than PATA, because the kinds of load that you put a drive under in a production environment are not like the speed/load tests used to generate benchmark numbers.

    You asked for opinions, and mine is that PATA (ATA-133) is more than fast enough, and the cost of SATA and the quirks that have yet to be ironed out are not worth it. It's the latest shiny object, and shiny objects are not always the most useful.

    I base my experience on the Western Digital SATA (mostly 36 gig) drives and the Western Digital 40 and 80 gig JB drives connected to multiple brands of motherboards and add-on controller cards.
  • RaidCore (Score:3, Interesting)

    by beernutz ( 16190 ) * on Saturday June 19, 2004 @04:10AM (#9471281) Homepage Journal
    This product will blow your socks off!
    Here are some of the highlights from their page [raidcore.net]:

    Online capacity expansion and online array level migration

    Split mirroring, array hiding, controller spanning, distributed sparing

    All RAID levels including RAID5/50, RAID1n/10n

    Serial ATA-based

    Choice of 4 or 8 channels and 2 functionality levels

    64-bit, 133 MHz PCI-X controller in a low-profile, 2U module

    And the HIGH-END board can be had for under $350!

  • by 0x0d0a ( 568518 ) on Saturday June 19, 2004 @08:48AM (#9471834) Journal
Trying to remedy this by turning off write-back caching severely impacts the performance of the drives and some vendors do not certify the recovery of drives that deactivate write-back caching so this may increase failure rates.

    I don't buy this argument one bit.

    I agree with you that write-back can break journalling FS guarantees.

However, I don't know of any consumer drive vendor that guarantees that their write-back algorithms are in-order. This means that write-back can trash *any* filesystem, RAID or not.

    Write-back should *never* be on on drives using modern filesystems.

As for an impact on performance, I call foul again. The write-back cache benefits are useful only in the presence of an OS that does poor disk caching. Take a nice Linux box -- it'll use all available free memory as a big fat writeback cache. There is only a single advantage to using a drive's native writeback controller -- the drive knows the true geometry of the disk (not the fantasy geometry it hands off to the host), and furthermore knows the performance characteristics (settle time, seek times, etc.) of the drive. That's useful, but it's not comparable to having ten times or more the amount of memory for buffering.

Hard drive vendors would be *much* better off from a performance standpoint exporting a profile of their drive's performance characteristics to the host -- "settle time on the drive can be determined by this function, seek time can be determined by this function, this is the real geometry", etc. Then the much more powerful host (in memory, CPU, and code size) could do whatever scheduling it wanted.
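The host-side scheduling argued for above can be sketched with a toy one-direction elevator (SCAN) pass over pending requests; real kernel I/O schedulers are far more involved, and the LBA values here are made up.

```python
# Sketch: elevator (SCAN) ordering of pending requests by logical block
# address -- the kind of scheduling a host could do itself if it knew the
# drive's real geometry and seek profile.
def elevator_order(pending_lbas, head_pos):
    """Service requests at or above the head in ascending order,
    then sweep back through the remainder in descending order."""
    up = sorted(lba for lba in pending_lbas if lba >= head_pos)
    down = sorted((lba for lba in pending_lbas if lba < head_pos), reverse=True)
    return up + down

order = elevator_order([500, 20, 900, 310, 75], head_pos=300)
```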
