SATA vs ATA?
An anonymous reader asks: "I have a client that needs a server with quite a bit of storage, a reasonable level of reliability and redundancy, and all for as cheap as possible. In other words, they need a server with a RAID array using a number of large hard drives. Since SCSI is still more expensive than ATA (or SATA), I'm looking at using either an ATA or a SATA RAID controller from Promise Technologies. While I had initially planned on using SATA drives, I have recently read some material that has made me rethink that decision and stick with ATA drives.
What kinds of experiences (good and bad) have people had with SATA drives as compared to ATA drives, especially in a server environment?"
Connectors are poor on SATA (Score:3, Interesting)
Dangers of using ATA or SATA for Raid (Score:5, Interesting)
A friend of mine lost data on an ATA RAID array, and I wouldn't advise using anything other than SCSI without understanding the ramifications.
Best regards,
Doc
I made a New Year's resolution to give up sigs... so far so good!
Re:It's all in the name (Score:3, Interesting)
Why doesn't Promise abstract their cross-platform code from the Linux and Windows device-driver "glue" code? Then they would only have to port the Linux- and Windows-specific code once, and the platform-independent code in all their device drivers should "just work" (but keep your fingers crossed anyway).
I know Linus does not like cross-platform wrapper crap code in his kernel, but there is nothing preventing Promise from doing this outside the Linus tree, or from wrapping the Linux device-driver API around the Windows device-driver model.
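To illustrate the layering (a sketch only, with hypothetical names; this is not Promise's actual code): keep the controller logic behind a small OS-glue interface, so only the glue gets ported per platform:

    from abc import ABC, abstractmethod

    class OsGlue(ABC):
        """The thin OS-specific layer: the only code that needs porting."""

        @abstractmethod
        def map_dma_buffer(self, size: int) -> int: ...

        @abstractmethod
        def log(self, msg: str) -> None: ...

    class LinuxGlue(OsGlue):
        def map_dma_buffer(self, size: int) -> int:
            # A real driver would call the Linux DMA-mapping API here.
            return 0xDEAD0000

        def log(self, msg: str) -> None:
            print(f"<kern> {msg}")

    class ControllerCore:
        """Platform-independent controller logic; knows nothing about the OS."""

        def __init__(self, glue: OsGlue) -> None:
            self.glue = glue

        def init_controller(self) -> None:
            addr = self.glue.map_dma_buffer(4096)
            self.glue.log(f"DMA buffer mapped at {addr:#x}")

    ControllerCore(LinuxGlue()).init_controller()

A WindowsGlue would implement the same two methods against the Windows driver model, and ControllerCore wouldn't change at all.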
Non-Performance-Related Problems (Score:2, Interesting)
Re:Connectors are poor on SATA (Score:3, Interesting)
Re:Dangers of using ATA or SATA for Raid (Score:2, Interesting)
Furthermore, if I am using hardware RAID, how do I use hdparm? And finally, ATA drives ship with write-back caching ON by default, while SCSI drives ship with it OFF by default.
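For what it's worth, toggling the cache on a drive the OS can see directly is a one-liner with hdparm (-W0 off, -W1 on). A minimal sketch in Python, assuming hdparm is installed and with a hypothetical device path; behind a hardware RAID controller the raw drives usually aren't visible to hdparm at all, which is exactly the problem:

    import subprocess

    def set_write_cache(device: str, enabled: bool) -> None:
        """Toggle the on-drive write-back cache via hdparm (-W1 on, -W0 off)."""
        flag = "-W1" if enabled else "-W0"
        subprocess.run(["hdparm", flag, device], check=True)

    # Hypothetical device path; only works if the OS sees the raw drive.
    set_write_cache("/dev/hda", enabled=False)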
My experience with RAID cards (Score:2, Interesting)
Performance: bare drives with Linux software RAID ranked #1; MegaRAID and 3ware were both slower.
I don't know why, but Linux with bare drives and software RAID *always* comes out on top in performance. Maybe you guys can tell me.
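For anyone who wants to reproduce the comparison, the software-RAID side is easy to set up with mdadm. A minimal sketch (device names hypothetical; adjust to your hardware):

    import subprocess

    def create_mirror(md_device, members):
        """Create a Linux software RAID1 array with mdadm."""
        subprocess.run(
            ["mdadm", "--create", md_device,
             "--level=1", "--raid-devices=%d" % len(members)] + members,
            check=True,
        )

    # Hypothetical device names; adjust to your hardware.
    create_mirror("/dev/md0", ["/dev/hde1", "/dev/hdg1"])

Then benchmark /dev/md0 against the hardware controller's array with the same workload.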
Experience... (Score:4, Interesting)
SATA is not that much faster in practice than PATA, because the kinds of load that you put a drive under in a production environment are not like the speed/load tests used to generate benchmark numbers.
You asked for opinions, and mine is that PATA (ATA-133) is more than fast enough, and the cost of SATA and the quirks that have yet to be ironed out are not worth it. It's the latest shiny object, and shiny objects are not always the most useful.
I base this on my experience with Western Digital SATA drives (mostly 36 GB) and Western Digital 40 and 80 GB JB drives, connected to multiple brands of motherboards and add-on controller cards.
RaidCore (Score:3, Interesting)
Here are some of the highlights from their page [raidcore.net]:
Online capacity expansion and online array level migration
Split mirroring, array hiding, controller spanning, distributed sparing
All RAID levels including RAID5/50, RAID1n/10n
Serial ATA-based
Choice of 4 or 8 channels and 2 functionality levels
64-bit, 133 MHz PCI-X controller in a low-profile, 2U module
And the HIGH-END board can be had for under $350!
(non) Dangers of using ATA or SATA for Raid (Score:3, Interesting)
I don't buy this argument one bit.
I agree with you that write-back can break journalling FS guarantees.
However, I don't know of any consumer drive vendor that guarantees its write-back algorithm commits writes in order. That means write-back can trash *any* filesystem, RAID or no RAID.
Write-back should *never* be enabled on drives running modern filesystems.
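To see why ordering matters, look at the commit protocol a journaling filesystem relies on. A minimal sketch (Python rather than kernel code, file names hypothetical): the journal record has to reach stable storage before the in-place update does, and fsync only delivers that guarantee if the drive actually flushes its write-back cache instead of acknowledging out of it:

    import os

    def journaled_update(journal_path, data_path, record):
        """Sketch of a journal commit; the *order* is the whole guarantee."""
        # 1. The journal record must reach stable storage first.
        jfd = os.open(journal_path, os.O_WRONLY | os.O_APPEND | os.O_CREAT)
        os.write(jfd, record)
        os.fsync(jfd)   # only a real barrier if the drive honors the flush
        os.close(jfd)

        # 2. Only then apply the update in place.
        dfd = os.open(data_path, os.O_WRONLY | os.O_CREAT)
        os.write(dfd, record)
        os.fsync(dfd)
        os.close(dfd)

    journaled_update("journal.log", "data.db", b"balance=42\n")

If the drive's write-back cache silently reorders step 2 ahead of step 1, a crash can leave you with an updated data block and no journal record, which is exactly the corruption the journal was supposed to prevent.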
As for an impact on performance, I call foul again. The drive's write-back cache is useful only in the presence of an OS that does poor disk caching. Take a nice Linux box: it will use all available free memory as a big fat write-back cache. There is only a single advantage to using a drive's native write-back controller: the drive knows the true geometry of the disk (not whatever fantasy geometry it hands off to the host), and it also knows the performance characteristics (settle time, seek times, etc.) of the drive. That's useful, but it's not comparable to having ten times or more the memory for buffering.
Hard drive vendors would be *much* better off, from a performance standpoint, exporting a profile of their drive's performance characteristics to the host: "settle time can be determined by this function, seek time by this function, this is the real geometry," and so on. Then the much more powerful host (in memory, CPU, and code size) could do whatever scheduling it wanted.
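To make that concrete, here is a toy sketch (Python, with made-up numbers; the profile parameters and the sqrt-of-distance seek model are illustrative assumptions, not any vendor's actual data) of how a host could use an exported profile to order requests shortest-seek-first:

    import math

    # Hypothetical exported drive profile; numbers are illustrative only.
    SETTLE_MS = 1.5        # head settle time
    SEEK_COEFF_MS = 0.08   # scales with sqrt(cylinder distance)

    def seek_time_ms(current_cyl, target_cyl):
        """Classic model: settle time plus a sqrt-of-distance term."""
        distance = abs(target_cyl - current_cyl)
        if distance == 0:
            return 0.0
        return SETTLE_MS + SEEK_COEFF_MS * math.sqrt(distance)

    def schedule(current_cyl, pending):
        """Greedy shortest-seek-first ordering using the exported model."""
        order, remaining = [], list(pending)
        while remaining:
            nxt = min(remaining, key=lambda c: seek_time_ms(current_cyl, c))
            order.append(nxt)
            remaining.remove(nxt)
            current_cyl = nxt
        return order

    print(schedule(500, [1200, 40, 510, 9000, 460]))

A real scheduler would also weigh rotational position and request starvation, but the point stands: the host has the memory and CPU to do this well; the drive doesn't.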