15k RPM IDE Hard Drives?
OutRigged asks: "SCSI hard drives have had speeds in excess of 10,000RPM for years, yet IDE has always been stuck at 7200RPM. Is there some kind of technical reason IDE drives don't go above 7200RPM? I can't imagine cost being that big of an issue, and the connection is certainly not a problem, with Parallel ATA capable, at least theoretically, of speeds over 100MB/s, and Serial ATA capable of even more. With hard drives now reaching sizes in excess of 300GB, don't you think we need a speed increase?" If you are wondering what the terms "Parallel ATA" and "Serial ATA" refer to, check out this article.
Par/Ser ATA - why not ethernet? (Score:3, Interesting)
I've always wondered: why not simply connect all those hard drives with gigabit ethernet? It seems to be as fast, it's widely available, drives can be connected/disconnected while the computer is on, it can be used over much greater distances, etc, etc.
Re:Par/Ser ATA - why not ethernet? (Score:1)
125MB/s theoretical maximum.
Re:Par/Ser ATA - why not ethernet? (Score:5, Informative)
Gigabit Ethernet:
1000^3 bits/sec = 1,000,000,000 bits/sec
1,000,000,000 bits/sec / 8 = 125,000,000 bytes/sec
125,000,000 bytes/sec / 1024 = 122070.3125 Kilobytes/sec
122070.3125 Kilobytes/sec / 1024 = 119.20928955078125 Megabytes/sec
Fast Ethernet:
100,000,000 bits/sec / 8 = 12,500,000 bytes/sec
12,500,000 bytes/sec / 1024 = 12207.03125 Kilobytes/sec
= 11.920928955078125 Megabytes/sec
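As a sanity check, the arithmetic above can be reproduced in a few lines of Python (the helper name is mine, not from the thread):

```python
def link_rate_mib_per_s(bits_per_sec):
    # Same steps as the hand math above: bits -> bytes (divide by 8),
    # then bytes -> binary megabytes via two divisions by 1024.
    return bits_per_sec / 8 / 1024 / 1024

print(link_rate_mib_per_s(1_000_000_000))  # gigabit: 119.20928955078125
print(link_rate_mib_per_s(100_000_000))    # fast ethernet: 11.920928955078125
```

These are raw signaling-rate conversions; real-world throughput comes in lower once ethernet framing and TCP/IP overhead are subtracted.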
Bus bandwidth:
32-bit/33 MHz PCI ---> 127.2 MB/sec
64-bit/33 MHz PCI ---> 254.3 MB/sec
64-bit/66 MHz PCI ---> 508.6 MB/sec
64-bit/133 MHz PCI-X ---> 1017.3 MB/sec
IDE Interface bandwidth:
Ultra ATA/33 ---> 33 MB/sec
Ultra ATA/66 ---> 66 MB/sec
Ultra ATA/100 ---> 100 MB/sec
Ultra ATA/133 ---> 133 MB/sec
Serial ATA 1.0 ---> 150 MB/sec
SCSI Interface bandwidth:
Wide ---> 10 MB/sec
Fast ---> 10 MB/sec
Fast Wide ---> 20 MB/sec
Ultra ---> 20 MB/sec
Wide Ultra ---> 40 MB/sec
Ultra2 ---> 40 MB/sec
Wide Ultra2 ---> 80 MB/sec
Ultra160 ---> 160 MB/sec
Ultra320 ---> 320 MB/sec
Single disk sequential transfer rates (STR):
SCSI Seagate X-15K.3 --> 76.4MB/s - 51.1MB/s
SCSI Seagate X-15 - 36 LP --> 60.5 MB/sec - 45 MB/sec
SCSI Seagate X-15 --> 41 MB/sec - 29 MB/sec
SCSI IBM Ultrastar 36LZX --> 34.8 MB/sec - 22.8 MB/sec
IDE IBM 60GXP --> 39 MB/sec - 21 MB/sec
IDE Western Digital Caviar WD1000JB --> 43.8 MB/s - 27.9 MB/sec
Re:Par/Ser ATA - why not ethernet? (Score:1)
Wide ---> 10 MB/sec
Fast ---> 10 MB/sec
Fast Wide ---> 20 MB/sec
Ultra ---> 20 MB/sec
Wide Ultra ---> 40 MB/sec
Ultra2 ---> 40 MB/sec
Wide Ultra2 ---> 80 MB/sec
Ultra160 ---> 160 MB/sec
Ultra320 ---> 320 MB/sec
Um... Ultra 320 is not actually 320MB/sec... Ultra 320 is just a form of Ultra 160 RAID that was developed in a way where you can have 160 per HDD... while if you only have one HDD, the channel max is still only 160... I am not sure how or why it works like that rather than a higher bus speed no matter how many drives, but it is all explained at Adaptec's website.
Re:Par/Ser ATA - why not ethernet? (Score:4, Informative)
Re:Par/Ser ATA - why not ethernet? (Score:1)
One of two things needs to happen before this is even an option.
1) TCP/IP stack is implemented in hardware (which would probably be costly)
2) A new protocol needs to be written so that data frames can be sent over raw ethernet without the use of TCP/IP.
Just my 2 cents.
Re:Par/Ser ATA - why not ethernet? (Score:1)
NetBEUI, the universally despised networking "protocol", is basically just passing raw SMB frames over ethernet. There is no reason you couldn't pass raw HDD data over ethernet.
However, ethernet is a VERY BAD option for this kind of thing. Unreliable protocol (data is not guaranteed to be delivered), collisions, etc. Just not a good idea.
Re:Par/Ser ATA - why not ethernet? (Score:1)
Re:Par/Ser ATA - why not ethernet? (Score:1, Informative)
Re:Par/Ser ATA - why not ethernet? (Score:1)
Well, at least for IDE drives, the interface is not the bottleneck. The fastest drives out there can pump out something like 60 MB/s, when the heads are on the outside of the platter. It gradually degrades to 30 MB/s or so when the heads move to the inside of the platter. So, it does not reach the maximum speed of the ATA-100 interface. Gigabit ethernet would not help a bit here. Serial ATA (150 MB/s) also seems overkill until the drives can reach this transfer rate. SATA has more to offer than just increased maximum throughput, of course.
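The outer-vs-inner gap described above falls out of a constant spindle speed: with roughly constant bits per inch along the track (zoned recording), the sustained rate scales with track radius. A sketch of that relationship (the 60 MB/s figure is from the comment; the radii are my illustrative guesses for a 3.5" platter, not datasheet values):

```python
def str_at_radius(outer_str_mbps, outer_radius_mm, radius_mm):
    # At constant RPM with roughly constant linear bit density, the
    # head sweeps fewer bits per second on the shorter inner tracks,
    # so sustained transfer rate falls off linearly with radius.
    return outer_str_mbps * (radius_mm / outer_radius_mm)

# If the outermost track does 60 MB/s, an inner track at half the
# radius manages only about half that.
print(str_at_radius(60, 46, 23))  # 30.0
```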
Re:Par/Ser ATA - why not ethernet? (Score:2)
Also, in terms of disk I/O, the packet latency on ethernet is an eternity.
Re:Par/Ser ATA - why not ethernet? (Score:1)
Here (Score:2, Informative)
Re:Here (Score:3, Informative)
Re:Here (Score:5, Interesting)
Personally, I've mostly stuck to 5400 rpm ATA in RAID for higher reliability. For storing large files with little random access, the rotational latency isn't really a big deal, so you can make up the difference in sequential speed by adding an extra drive or two.
That said, I did recently build an ad hoc NAS computer with 180GB 7200 RPM WD ATA drives, quantity 5, in software RAID5 for about 680GB usable. I used two ATA100 two port Promise controllers (with their own additional cache), and both onboard ATA channels for the RAID disks.
The root/OS disk and CDROM was some random smallish SCSI stuff we had laying around. This was to free up available ATA ports.
That thing flies. Compared to other 3ware ATA RAID5s we have with 5400 RPM Maxtor disks with 2MB cache, it pushes out a lot more per-disk throughput.
I'm kinda leery considering the promise cards have cache, and also the drives have large cache, none of which is battery backed directly, but this server is not being put into a critical role, and is kept on a UPS. I've noticed that battery-backed cache seems to have lost favor in RAID controllers. There is still a danger, correct?
One thing that is striking about it is the latency. It just "feels" fast. I think that may have something to do with using Linux software raid5d rather than 3ware hardware RAID, in addition to the cache and higher rotational speed.
Re:Here (Score:2, Interesting)
What happened is the machine in which the array lived started to degrade (it was old) in multiple ways. The result was getting bad superblocks on several disks at once, and an array which was theoretically unrecoverable. What it taught me is that software RAID has a lot more failure paths than hardware RAID. Bad memory, bad motherboard, bad controller, all can affect the integrity of your array, because the array depends on the integrity of the kernel in order to maintain a self-consistent state.
So now all my important arrays are on hardware RAID controllers. Yes, if the controller goes, I could still have a bad day, but at least it is just the one component, not the whole machine, that I am depending on.
Re:Here (Score:3, Informative)
These are patches for 2.4.xx, and are likely to be included by Linus in 2.6. Gentoo has it as an option, and I will put dollars on its inclusion in the next Mandrake, possibly the next RedHat Enterprise.
A RAID-4 built on cheap, FireWire-attached chassis will provide impressive throughput, and can be constructed in a "star" topology, which removes the SCSI-style chain problems.
RAID-4 is preferred by SAN vendors, as the independent parity volume takes separate I/O load, removing the write cost associated with RAID-5. If that's the spindle set that fails, swap the parity disk and rebuild in the background.
Re:Here (Score:1)
Re:Here (Score:1)
Actually, he's right... at least in the original sense of the term. Here's [arstechnica.com] an Ars Technica page on the subject. Basically, back when the concept of RAID was developed, it was the alternative to a SLED, or Single Large Expensive Disk. Thus, a RAID was, most assuredly, a Redundant Array of Inexpensive Disks. It's been gradually bastardized into "independent."
Re:Here (Score:2)
For example, a Comp^H^H^H^H HP Proliant DL580 G2 server could have 4 hard drives running RAID 6, Hot Plug memory running RAID 1, and an external 4 tape drive setup running RAID 5.
Re:Here (Score:1)
Re:Here (Score:2)
The 5xxx series controllers that support this contain a PowerPC chip for the calculations, instead of the old standard of an Intel i960 (aka 486).
Re:Who the hell wants these at home. (Score:2, Funny)
Mine are at any rate! A Compaq with 4X10K RPM U-W SCSI disks and an external array with five more. Sounds like a jet plane taking off as the controller starts the drives one-by-one :)
But I can see the appeal. True geeks don't give a rat's about noise. Gimme the speed!
What on earth are you talking about? (Score:2)
- A.P.
Toms (Score:5, Informative)
Initially it will run at around 150 megabytes a second, but it should eventually be able to increase to 600.
Re:Toms (Score:1)
Of course the Linux kernel will probably support it a few months from now.
Another plus you didn't mention... The cable length is 1 meter, but since it is serial, it's likely that can eventually be extended to a couple of meters (depending on the controller, you might get away with it now), for an ultra high speed connection for external hard disks sometime in the future.
Re:Toms (Score:1)
snatched from the site:
What is the difference between Parallel SCSI and Serial Attached SCSI?
Parallel SCSI is a proven enterprise level technology for I/O and device requirements with a twenty-year history of reliability, flexibility and robustness. Parallel SCSI has limited device addressability as well as certain physical limits associated with the nature of its distributed transmission line architecture (performance and distance), plus large connectors that make it unsuitable for certain dense computing environments.
Serial Attached SCSI will leverage the proven SCSI technologies that customers expect in data center environments, providing robust solutions and generational consistency. It will be based on a serial interface, allowing for increased device support and bandwidth scalability, reducing the overhead impact that challenges today's SCSI environments. It will provide easy solutions for systems with simplified cable routing. It will also utilize Serial ATA development work on smaller cable connectors, providing customers a downstream compatibility with desktop class ATA technologies.
Finally, this simplified routing will enable a new generation of dense devices, such as small form factor hard drives, which will enable storage solutions to scale externally where traditional parallel SCSI cannot, due to cabling and voltage challenges.
Is Serial Attached SCSI complementary to or competitive with Serial ATA?
Serial Attached SCSI complements Serial ATA by adding dual porting, full duplex, device addressing and it offers higher reliability, performance and data availability services, along with logical SCSI compatibility. It will continue to enhance these metrics as the specification evolves, including increased device support and better cabling distances. Serial ATA is targeted at cost-sensitive non-mission-critical server and storage environments. Most importantly, these are complementary technologies based on a universal interconnect, where Serial Attached SCSI customers can choose to deploy cost-effective Serial ATA in a Serial Attached SCSI environment.
There really isn't a serious one (Score:4, Informative)
Note the recent move to 1-year warranties on IDE hard drives. SCSI drives are still 3-5 years. Honestly, I'm seriously thinking of doing SCSI in my next computer. Two years ago, I got a new computer and got ATA in it. It's been a good computer, but it's starting to feel its age. My previous computer had SCSI in it, and was a dual processor. The extra money I spent (almost 3k when I bought it) helped it last an extra year over this one as far as speed was concerned.
If you do any serious disk activity, SCSI is a very very good way to go. If you plan on more than one person on a computer at a time, go SCSI. For instance, I have a coworker who runs Windows 2k at work and has Terminal Services running in admin mode. I logged in and started installing Cygwin on it (we're testing cfengine on Windows), and it hammered his machine. Made it unusable. That was just downloading stuff to disk! It's a P4 1.7 Dell desktop job. My dual P3-700 with SCSI never experienced anything like that until both processors were hammered running chemistry code and doing heavy disk activity.
I don't have any empirical data, I just have experienced too much IDE sub-standardness. You pay extra money for a reason, but I personally think it's money well spent.
Comment removed (Score:3, Interesting)
Re:Multiple heads? (Score:3, Insightful)
Can anyone with a clue answer this question?
Re:Multiple heads? (Score:4, Insightful)
The head assembly also takes up quite a bit of room. You would probably have to go to a half height 5-1/4 form factor like old SCSI disks were.
Re:Multiple heads? (Score:4, Informative)
Since there's only one head assembly they're all mounted on, tuning one head means the other heads go out of alignment and become useless while that one is operating.
This requirement for precision means a multi-headed HD like that would need multiple head assemblies. Open up your favourite HD and see if you can work out where to put it
In short -- it's not worth it. You introduce more complexity (== cost == less demand) and things to go wrong, when you could just buy another drive and stripe, and probably still come out cheaper and more reliable than a single two-headed drive.
It'll probably be faster, too, since you've then got two interfaces to squeeze data down.
Re:Multiple heads? (Score:1)
Of course your suggestion isn't as radical as a head for each track.
It's not necessary (Score:1)
My experiment with IDE RAID-0 [tomshardware.com] turned out wonderfully. For $160 I got what amounts to a 160-gig drive that was 2MB/sec slower than a 15k RPM SCSI drive (according to SiSoft SANDRA [sisoftware.co.uk]). That was from two plain old 7200 RPM 80-gig IDEs. When Serial ATA gets big, setups like this will be even easier, since the main limitation with IDE RAID is the number of drives you can attach to the board.
Re:It's not necessary (Score:1)
Re:It's not necessary (Score:2, Informative)
Re:It's not necessary (Score:4, Informative)
RAID 3 has synchronized disk heads, which means all drives will be reading the same stripe, or writing to the same stripe, at the same time.
For best performance with redundancy, RAID 10 (or 0+1) is by far the best choice. A RAID 10 array gives you 2 different data paths for writing data (just like a 2-disk RAID 0), but gives 4 locations for reading data back (like a 4-disk RAID 0). Plus you still have the redundancy built in, where if any single drive fails there is no data loss. The downside is that 4 60GB drives will only give you 120GB of usable space.
Re:It's not necessary (Score:2)
RAID 10 vs 0+1 (Was Re:It's not necessary) (Score:1)
RAID 1+0 (or 10) (mirroring plus striping) gives you a chance that if one drive dies, you still have a fully functioning side of the mirror on the other disk.
RAID 0+1 (striping plus mirroring) gives you a chance that if one drive dies, half of the mirror dies immediately.
Thus, if you lose a drive in the other half for 10, your stripe set continues with one (non-mirrored) disk on each side. But if you lose a drive in the other half for 0+1, your mirror set fails completely since both sides are missing half of the stripes... bang! you're dead.
Check out a more detailed writeup [ofb.net] that we consulted when debating this for a client...
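The failure analysis above can be checked exhaustively for a 4-drive array. This sketch assumes the textbook layouts (two mirrored pairs striped together for RAID 10; two stripe sets mirrored for RAID 0+1); the drive labels are mine, not from the thread:

```python
# RAID 10: stripe across two mirrored pairs.
RAID10_MIRRORS = [{"A1", "A2"}, {"B1", "B2"}]
# RAID 0+1: mirror two striped sets.
RAID01_STRIPES = [{"A1", "B1"}, {"A2", "B2"}]

def raid10_alive(failed):
    # Alive while every mirrored pair keeps at least one member.
    return all(pair - failed for pair in RAID10_MIRRORS)

def raid01_alive(failed):
    # Alive while at least one complete stripe set has no failures.
    return any(not (side & failed) for side in RAID01_STRIPES)

def survivable_second_failures(alive, first="A1"):
    # With drive `first` already dead, count which of the remaining
    # three drives can also fail without killing the array.
    remaining = {"A1", "A2", "B1", "B2"} - {first}
    return sum(alive({first, d}) for d in remaining)

print(survivable_second_failures(raid10_alive))  # 2
print(survivable_second_failures(raid01_alive))  # 1
```

With one drive already gone, RAID 10 survives 2 of the 3 possible second failures, while RAID 0+1 survives only 1, which is the "bang! you're dead" case described above.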
Re:It's not necessary (Score:1)
It's not arguable, unless you use non sequitur arguments, as you just did.
A hard drive mirror is fast. (Score:2)
The parent comment is important, but it is easy not to see the importance.
First, see the Tom's Hardware article about RAID (mirroring/striping) controllers referenced in the parent comment: Fast and Secure: A Comparison of Eight RAID Controllers [tomshardware.com]. As is usual for Tom's Hardware, the article is a bit confused. Apparently it was written hastily.
Motherboards now often have integrated mirroring/striping controllers, so the cost is low. Even Intel has a motherboard with an integrated mirroring/striping controller now. In a 2-drive system configured as a mirror, the heads of each drive are moved independently to read data more efficiently than a single drive can. Read performance is excellent, and write performance is the same as a single drive. Since most systems do more reading than writing, the overall performance is excellent.
A mirror is far more reliable, since if one hard drive fails, all the data can be recovered from the other drive, and the system keeps running until the bad drive can be replaced.
In systems in which there are only one or two users, SCSI is slower. SCSI is only faster when there are many simultaneous users accessing storage.
Extreme solutions such as 15,000 RPM drives and SCSI and RAID 5 are appropriate for e-mail servers, but the noise, expense, and lower reliability of the individual drives don't make sense unless the computer is a server of some type.
Hard drives with a high rotational rate are not necessarily faster at providing data than those with a slow rate. The bottleneck is often the time it takes to move the heads, and the time it takes to present the data to the CPU, not the latency of waiting until the data is under the head.
RAID controllers can do striping, or mirroring, or both. When they do both, 4 drives are required, but read performance is high. Having more than one read head and being able to move them separately is very efficient. A 30,000 RPM drive would still have only one head mechanism.
It is good to see other companies entering the market. Promise Technology was one of the first with low-cost mirroring controllers. Promise is, in my experience, an unbelievably backward company. The products work well, but Promise has sold products with poor setup methods for years. For those who remember DOS programs, the Promise setup user interface is like a DOS shareware program written by a novice programmer who is considerably worse than average in user interface design.
Promise Technology is also known for the poor quality of their manuals. (The company says the manuals are being re-written.)
The parent comment is correct. For most applications, a RAID controller with mirroring, or mirroring plus striping, is excellent.
Re:A hard drive mirror is fast. (Score:2)
a 10K RPM drive has about 166 rotations/second -- or 6ms/revolution, giving an average rotational latency just over 3ms
Similarly, a 7200RPM drive has a rotational latency just over 4 ms. This compares to high-end seek times around 3ms. In other words, it's quite possible to be in a situation where it's cheaper to jump tracks to get data than to wait for the disk to rotate to the wanted data on the same track.
Higher rotational speed reduces rotational latency and increases overall transfer speed (a 15K disk is going to give you about twice the transfer rate of a 7200RPM disk with the same geometry, if all of the data is on one cylinder). The effect is far from trivial compared to seek times. It is, however, pretty much guaranteed to wear out your bearings much faster. I, for one, would be very worried about keeping a 15K hard disk in my desktop for 5 years.
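The latency figures quoted in this thread all come from the same one-liner: a revolution takes 60,000/RPM milliseconds, and on average the sector you want is half a revolution away. A quick sketch:

```python
def avg_rotational_latency_ms(rpm):
    # 60,000 ms per minute / rpm = ms per revolution; the target
    # sector is, on average, half a revolution away.
    return 60_000 / rpm / 2

print(avg_rotational_latency_ms(10_000))           # 3.0
print(round(avg_rotational_latency_ms(7_200), 1))  # 4.2
print(avg_rotational_latency_ms(15_000))           # 2.0
```

This reproduces the "just over 3ms" and "just over 4ms" figures above, and shows the 15K drive's 2ms advantage.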
Re:A hard drive mirror is fast. (Score:1)
Correct me if I'm wrong, but I thought that read performance would be better and write performance would be worse than a single drive.
Reasons: for read performance, you only have to wait for 1 of the two drives to retrieve the data, i.e. the drive that happens to get the data the fastest.
For write performance, you have to wait for both drives to finish writing (otherwise the whole purpose of mirroring is defeated).
You're right of course that most applications do more reading than writing, so on average a mirrored setup should be slightly faster than a single drive.
Cheap Storage vs. Fast and Reliable... (Score:5, Insightful)
When you raise the number of bits per inch of storage surface you create more stress and heat. When you raise the RPMs you create more heat (a lot at 15000RPM). The overall effect is that you can't use the cheap parts that are used in most IDE drives... every piece of the drive must be manufactured to the highest specifications. Motors have to be of the highest quality. Hydrodynamic bearings must be used instead of metal ball bearings... this all increases the cost (as it pushes the technology).
The reason why these faster drives are not sold as IDE is simple. Anyone who is willing to pay $1000 for a ~80GB harddrive is also willing to pay $75 for a decent controller card (if it's not already built into their workstation).
How many people are going to be willing to pay $1000 for an 80GB IDE drive when they can buy a 300GB drive for 1/3 the cost? The end result is that most consumers simply don't care about the speed... the majority of IDE drives go into OEM systems, and the consumer probably won't know if they put a 4500RPM drive in the system.
So, why not get the best of both worlds? Buy a 20GB 15000RPM SCSI drive and put your system files and most widely used apps on that (~$130 for an 18GB Seagate). Then buy a larger IDE drive for archives.
When you think about it, you shouldn't need more than 20GB for your system, apps, and maybe a few games.
As far as the slower IDE drive, just spend your money on more RAM for the system and increase the cache. And don't rely on the CPU intensive built-in IDE controller on most Intel/AMD motherboards...buy a decent controller card instead.
And if you really want ~15000RPM performance with IDE technology, just get an IDE RAID controller and use striping... using this method you can actually get much higher theoretical speeds than a single 15000RPM drive. With 4 7200RPM drives you could get up to a theoretical speed of 28800RPM!!!
Re:Cheap Storage vs. Fast and Reliable... (Score:1)
Do keep in mind there are 2 factors in hard drive performance: access time and transfer rate. Your IDE RAID 0 is primarily upping your sustained transfer rate, along with giving you 4 times as many points of failure. Hope you have those drives on a 64-bit PCI bus too, or you get capped at ~100 MB/sec by the bus bottleneck.
Re:Cheap Storage vs. Fast and Reliable... (Score:2)
I'm not suggesting that this is all you do, only that it's a fairly good option if you want more speed from available IDE technologies...
Hmmm. Re:Cheap Storage vs. Fast and Reliable... (Score:2)
Yeah, but only while they are expensive.
When you think about it, you shouldn't need more than 20GB for your system, apps, and maybe a few games.
Where do you get off telling everyone what to think? Seriously? My system has 80 gig right now, partly because it's a multiboot and I like keeping my disks 30-50% empty because it improves performance.
As far as the slower IDE drive, just spend your money on more RAM for the system and increase the cache.
Beyond a certain point adding RAM doesn't help much, caches only increase in speed marginally for a doubling of the cache size. Adding faster disks helps basically all of the slowest OS tasks go much faster.
And don't rely on the CPU intensive built-in IDE controller on most Intel/AMD motherboards...buy a decent controller card instead.
Yeah, right -- CPU intensive makes a big difference in these days of 3 GHz processors? Processors are getting faster MUCH faster than drives -- the overhead is dropping by a factor of nearly 2 each year.
And if you really want to get ~15000RPM with IDE technology, just get an IDE RAID controller and use striping...using this method you can actually get to much higher theoretical speeds than a single 15000RPM drive. with 4 7200RPM drives you could get up to a theoretical speed of 28800RPM!!!
No. 15000 rpm gives half the latency of any number of 7200 RPM. Latency usually is the bottleneck, not throughput. RAID improves throughput, not latency.
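The distinction above can be made concrete with a toy model (the per-drive throughput numbers are illustrative assumptions, not benchmarks): striping multiplies ideal sequential throughput by the spindle count, but average rotational latency stays fixed by each drive's RPM.

```python
def stripe_model(n_drives, per_drive_mbps, rpm):
    # Idealized RAID 0: sequential throughput scales with the number
    # of spindles, but rotational latency does not -- every request
    # still waits on a platter spinning at the same speed.
    return {
        "throughput_MBps": n_drives * per_drive_mbps,
        "rot_latency_ms": 60_000 / rpm / 2,
    }

four_7200 = stripe_model(4, 40, 7_200)   # assumed 40 MB/s per drive
one_15000 = stripe_model(1, 75, 15_000)  # assumed 75 MB/s drive
print(four_7200)
print(one_15000)
```

The four-drive stripe wins easily on throughput, but the single 15K drive still answers each random request roughly twice as fast, which is exactly the latency bottleneck argument.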
Rule of thumb, unless you have lots of disks on one processor- SCSI is a waste of time and money.
Re:Hmmm. Re:Cheap Storage vs. Fast and Reliable... (Score:2)
SCSI is a more robust bus with better error detection. It also has a well thought out and more reliable electrical specification, allows multiple initiators, and can be used for hot swap. All of this while being the least expensive bus you can get 10k and 15k RPM drives for.
Rule of thumb, unless you don't care about performance or added reliability isn't worth the price to you, use SCSI.
BTW, if you're seeing a significant performance increase from keeping your drive mostly empty, you're using the wrong file system.
Re:Hmmm. Re:Cheap Storage vs. Fast and Reliable... (Score:2)
Sure, for a RAID server it's probably ideal. For a desktop? Why?
Rule of thumb, unless you don't care about performance or added reliability isn't worth the price to you, use SCSI.
No; that's nonsense. There is no significant difference in peak throughput, reliability or latency between IDE and SCSI; both throughput and reliability are dominated by the performance and reliability of the physical harddrive. The bits simply come off the disk at a certain rate determined by the spin, and that's a rate well below the capacity of ATA133.
BTW, if you're seeing a significant performance increase from keeping your drive mostly empty, you're using the wrong file system.
Oh, definitely; but I try to keep my disks no more than mostly full, not nearly full. The fragmentation increases even under UNIX as the partition fills; although it deals with it far better than, say, FAT32, there's still a hit. Ever wondered why most UNIX filesystems can be 109% full? It's because in that last 10% performance is dropping off very markedly; and in fact above 85% full things are starting to really crawl. But the dropoff starts earlier than that.
Re:Hmmm. Re:Cheap Storage vs. Fast and Reliable... (Score:2)
It depends on how you define common, and whether you care about an error if you don't notice it right away. It would be more accurate to say "more common" rather than just saying "common."
Sure, for a RAID server it's probably ideal. For a desktop? Why?
A home desktop is exactly the place where most people don't care about added reliability or performance. A fast hard drive is only going to improve load times in your games and office software, and you're not doing backups, so you're going to lose all your data in a few years anyway. If it's an office PC you're probably storing your important data on the nice fast SCSI disks in the file/database server, so you can use IDE in the desktop. Both situations fit my rule.
No; that's nonsense. There is no significant difference in peak throughput, reliability or latency between IDE and SCSI; both throughput and reliability are dominated by the performance and reliability of the physical harddrive. The bits simply come off the disk at a certain rate determined by the spin, and that's a rate well below the capacity of ATA133.
The bottom line is that you can't get the fast, high quality disks with ATA/IDE. What you said about the reliability is just nonsense. SCSI has tighter specs for cables, and error detection is better than what's available with IDE. No matter what the speeds of the busses are, what's available in the marketplace dictates that if you care about performance you use SCSI, and if all that matters is capacity or price, you use IDE.
BTW, I personally have a 10K RPM 18.4GB SCSI drive and two 80GB 5400 RPM ATA-133 disks in my machine. I store the system and source trees on the SCSI disk for speed, and everything else on the IDE disks because I can't afford to have all my data fast. When you're using a journaling filesystem or reading and writing lots of tiny source and object files, a single fast disk will outperform a stripe set any day. Oh, and XFS kicks ass.
-OFFTOPIC- Re:Hmmm. Re:Cheap Storage vs. (Score:1)
Ever wondered why most UNIX filesystems can be 109% full? It's because in that last 10% performance is dropping off very markedly;
Actually it's because 10% is usually reserved for 'root', so that when a user fills the disk up enough to see an "out of disk space" message, the system can still write to logs and the administrator still has some room to do general maintenance to keep the machine running. Most filesystems let you change the percentage. Some filesystems also allow sparse files, which can have a size that is larger than the number of blocks that have been assigned to them (because those blocks were never written but a block with a higher offset was, or under some filesystems because they contain only zeros), and certain utilities can (incorrectly) report those files as part of the used space.
The fragmentation increases even under UNIX as the partition fills; although it deals with it far better than say FAT32, there's still a hit.
You can't really make a generalization like that about "UNIX filesystems," because there are so many different types that behave differently. There are filesystems available that provide uniform performance despite the disk utilization, but now I'm just being pedantic.
Re:-OFFTOPIC- Re:Hmmm. Re:Cheap Storage vs. (Score:2)
That's historically not the reason in fact, although the spare space is used for this reason.
You can't really make a generalization like that about "UNIX filesystems," because there are so many different types that behave differently.
Only in detail. Every single filesystem I've seen detailed benchmarks for (XFS, FAT32, ReiserFS, ext3, ext2) degrades sharply above the 85% usage point; the exact point varies by a few percent, but in general this occurs.
Anyway, if you don't believe me, no skin off my nose; go ahead and max out your disk usage -- I don't care if your filesystem grinds to a halt.
Re:Cheap Storage vs. Fast and Reliable... (Score:1)
Actually, IIRC, it's more like a 100% speed increase with 4 drives in RAID 0. But you make a good point. Four 120GB (= 480GB) IDE drives in RAID 0 cost less than one 80GB 15K SCSI drive, and are probably faster. Most new mobos have a RAID controller as well, nowadays.
Because of priorities (Score:2)
* Cost
* Size
* Reliability (unfortunately, hard to measure... MTBF is kind of BS)
* Noise
* Speed
* Heat
I tend to move Heat higher up, given the impact it has on Noise and Reliability.
And, you know what? For most applications (workstation) hard drive speed is completely a non-issue. HD transfer rates improve over time *anyway*. If you increase areal density but keep rotational speed the same, you're increasing peak non-cache data transfer rate. So you get a faster hard drive now than you used to. Second, for the vast majority of workstation applications, hard drive time is simply not important. It's almost never the bottleneck for critical applications. If you're paging, yes, but RAM is cheap and does such a far better job that you're better off adding another 512MB of RAM to your system. File copies are rarely a problem -- you don't need to remove a hard drive, so you can just background the copy and forget about it, unlike in the days of floppies. If it's a copy to/from removable media, it's almost always the removable media that's the bottleneck, not the drive, so more drive speed will give you basically nothing.
The other thing to remember is that RAM caching is far better than it once was (partly based on the sheer amount of memory). Most of the time, your working data set will fit into memory just fine, and be cached. Linux has very good disk caching. Windows less so, but still much better than the dark days of 9x. And a silly little difference like a 5400 RPM drive being 25% slower than a 7200 RPM drive pales in comparison to the thousand or so times faster that your memory is. You're almost always better off getting more solid-state storage and not trying to work the bejeezus out of the mechanical parts of your hard drive.
I would never recommend anything but a 5400 RPM IDE drive to anyone. 7200 and above will buy you heat issues, reliability issues, and noise issues. Tack on a fan and you help a bit with heat (of course, having "hot spots" in your drive and then heavily cooled spots isn't great either), but then you get more dust, and more noise. Of all the people I know, all the drives in the past three years that failed have been 7200 RPM, not 5400 RPM. That speed difference isn't huge to you, and is far nicer to the cheap, fragile mechanism in the hard drive.
In conclusion -- buy 5400 RPM. You'll be a lot happier.
Re:Because of priorities (Score:1)
Re:Because of priorities (Score:3, Informative)
Maxtor Atlas 15k RPM SCSI drive [maxtor.com]
Maxtor DiamondMax 7200 RPM IDE drive [maxtor.com]
Re:Because of priorities (Score:2)
Re:Because of priorities (Score:1)
Most of the time, you are absolutely right. However, there are applications where pure sustained throughput (i.e. MB/s) is critical -- analog video capture, for example -- and in those cases the 7200 RPM drives really do perform noticeably better.
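For a sense of scale on the video-capture case, here's the sustained rate an uncompressed capture needs. The frame geometry and pixel format are illustrative NTSC-style assumptions:

```python
# Uncompressed analog video capture bandwidth (illustrative numbers):
# 720x480 frame, 2 bytes/pixel (YUV 4:2:2), ~30 frames/sec.
width, height, bytes_per_pixel, fps = 720, 480, 2, 30

mb_per_sec = width * height * bytes_per_pixel * fps / 1_000_000
print(round(mb_per_sec, 1))  # -> 20.7
```

Roughly 20.7 MB/s, sustained, with no pauses allowed for seeks or thermal recalibration -- which is where a faster spindle and a bigger buffer actually earn their keep.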
Ceramic Heater? (Score:5, Funny)
:)
Re:Ceramic Heater? (Score:1)
I see an opening here for people interested in writing saboteur trojanware: have your code write to the same range of flash addresses a few hundred thousand times and wipe the cells out.
It's the market (Score:4, Insightful)
Market forces drive IDE drives to be built as cheaply as possible while still having the right buzzwords to make consumers believe they're faster than their competitor. RPMs higher than 7200 still don't register with the mass populace, so it's not yet a factor.
SCSI hard drives are all about top-end performance. That's why some SCSI drives cost $1,500 for the same capacity as a $150 IDE drive. It's about being able to reliably spin the platters at twice the speed of IDE, having the drive logic and buffer memory to make that useful in the real world, hitting very high MTBF numbers, etc.
Comparing typical IDE drives to high-end SCSI (or FC, for that matter) drives is like comparing small Asian economy cars to the contenders in the F1 racing series. They have entirely different goals.
Re:It's the market (Score:2)
Re:It's the market (Score:2)
Re:It's the market (Score:2)
Re:It's the market (Score:2)
Sounds like market forces to me...
HQ Myth is a bunch-a-crap (Score:2, Interesting)
Well, this is my opinion, and now you have it.
Peter
Re:HQ Myth is a bunch-a-crap (Score:1)
I don't want it (Score:2)
Are there any such drives out there now?
Yes (Score:2)
It's called a SCSI drive.
For example, here an 18GB 10,000 RPM SCSI drive costs about the same price as a 40GB 7200 RPM IDE drive. Warranty of 5 years instead of 1, MTBF of 1-2 million hours instead of 500,000. And it's even faster.
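Those MTBF figures translate into an expected failure rate per year (assuming the usual idealized exponential failure model -- real-world failure rates are typically worse):

```python
# Annualized failure rate implied by an MTBF figure,
# under an idealized constant-failure-rate (exponential) model.
HOURS_PER_YEAR = 8760

def afr_percent(mtbf_hours):
    return HOURS_PER_YEAR / mtbf_hours * 100

print(round(afr_percent(500_000), 2))    # -> 1.75  (the 500 kh IDE figure)
print(round(afr_percent(1_500_000), 2))  # -> 0.58  (a mid-range SCSI figure)
```

So by the vendors' own numbers, the IDE drive is roughly three times as likely to die in any given year.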
Re:I don't want it (Score:3, Informative)
Quiet cooling? That big wooden door between you and the big roaring box that contains the drives should suffice.
Re:I don't want it (Score:2)
Actually, closer to $10 each for the classic 9 gig ST410800N. Shipping costs will dominate.
I'd include 5.25" HP drives from the early-to-mid-'90s in the same "built to last forever" category, too. You don't see as many of them, but the C3323 and its brethren are rock-solid.
Re:I don't want it (Score:2)
Re:I don't want it (Score:1)
And the full-height 8GB 50-pin SCSI drive I bought for $20 or so was so SLOOOOW that it was unusable. Seriously, that thing was about as fast as my floppy drive. I ended up giving it away.
Re:I don't want it (Score:2)
Shop around, you can still find some real bargains out there.
Target Audience? (Score:3, Funny)
IDE is targeted at the "at home" user, whereas SCSI is now almost the exclusive domain of businesses looking for performance and haX0rs looking to cut their compile times. The average IDE user just takes the drive, plugs in the cables, and sticks it in... cooling is never even thought of; indeed, you'll be lucky if he puts more than one screw in it.
Even a 10K drive runs HOT. If it's on for more than a few days without a fan, you're risking your data. A 15K drive that a non-clueful user stuck bare into his PC would be:
1. A support nightmare ("hey, your newfangled hard drive turned into a pile of pudding in my PC")
2. A fire hazard (even if it's the customer's own damned fault, better not to get him burned to death)
Re:Target Audience? (Score:2)
http://www.elanvital.com.tw for where I get equipment.
10k = even shorter warranties (Score:1, Interesting)
Why waste money on HDD (Score:1)