Hardware

15k RPM IDE Hard Drives?

OutRigged asks: "SCSI hard drives have had speeds in excess of 10,000RPM for years, yet IDE has always been stuck at 7200RPM. Is there some kind of technical reason IDE drives don't go above 7200RPM? I can't imagine cost being that big of an issue, and the connection is certainly not a problem, with Parallel ATA capable, at least theoretically, of speeds over 100MB/s, and Serial ATA capable of even more. With hard drives now reaching sizes in excess of 300GB, don't you think we need a speed increase?" If you are wondering what the terms "Parallel ATA" and "Serial ATA" refer to, check out this article.
  • by joostje ( 126457 ) on Saturday November 16, 2002 @08:20AM (#4684972)
    Parallel ATA capable, at least theoretically, of speeds over 100MB

    I've always wondered, why not simply connect all those hard drives with gigabit ethernet? Seems to be as fast, available, can be connected/disconnected while the computer is on, can be used over much greater distances, etc, etc.

    • by larien ( 5608 ) on Saturday November 16, 2002 @09:12AM (#4685072) Homepage Journal
      That's what iSCSI [techtarget.com] is for.
    • Because something needs to process the TCP/IP stack. If it is your host CPU, even a P3-733 can hit 100% cpu usage doing transfers on Gig-E.

      One of two things needs to happen before this is even an option.

      1) TCP/IP stack is implemented in hardware (which would probably be costly)

      2) A new protocol needs to be written so that data frames can be sent over raw ethernet without the use of TCP/IP.

      Just my 2 cents.
      • You can run things other than IP over ethernet, and you certainly don't have to run TCP over IP.

        NetBEUI, the universally despised networking "protocol", is basically just passing raw SMB frames over ethernet. There is no reason you couldn't pass raw HDD data over ethernet.

        However, ethernet is a VERY BAD option for this kind of thing. Unreliable protocol (data is not guaranteed to be delivered), collisions, etc. Just not a good idea.
      • There are some pretty nice 3com cards that offload almost everything. At work we compared two identical machines with a more standard card (3c905b) and the nice card (forget version) and the cpu difference was like 80% doing file transfers over gig-e.
    • by Anonymous Coward
      What is it with everyone typing "etc, etc" at the end of everything now? That is really annoying to read. On top of that, if you knew what "etc." was short for and meant, you would realize you don't need more than one. Typing more than one looks stupid and makes you look like an idiot.
    • I've always wondered, why not simply connect all those harddrives with gigabit ethernet?

      Well, at least for IDE drives, the interface is not the bottleneck. The fastest drives out there can pump out something like 60 MB/s, when the heads are on the outside of the platter. It gradually degrades to 30 MB/s or so when the heads move to the inside of the platter. So, it does not reach the maximum speed of the ATA-100 interface. Gigabit ethernet would not help a bit here. Serial ATA (150 MB/s) also seems overkill until the drives can reach this transfer rate. SATA has more to offer than just increased maximum throughput, of course.
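
      A quick sanity check of the point above (a rough sketch; the 60/30 MB/s figures are the ones quoted in this comment, not datasheet values):

      ```python
      # Sketch: is the IDE interface the bottleneck, or the platter itself?
      ATA100_LIMIT_MB_S = 100        # theoretical ATA-100 burst limit
      SATA150_LIMIT_MB_S = 150       # first-generation Serial ATA limit

      outer_track_mb_s = 60          # sustained rate at the outer edge of the platter
      inner_track_mb_s = 30          # sustained rate at the inner edge

      for name, limit in (("ATA-100", ATA100_LIMIT_MB_S), ("SATA/150", SATA150_LIMIT_MB_S)):
          headroom = limit - outer_track_mb_s
          print(f"{name}: drive peaks at {outer_track_mb_s} MB/s, "
                f"{headroom} MB/s of interface headroom left over")
      ```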
    • Exactly because ethernet does all those things you mentioned. An IDE controller is so simple they cost less than $1 to make. A gigabit ethernet controller is significantly more expensive because of all that added complexity.

      Also, in terms of disk I/O, the packet latency on ethernet is an eternity.

      /me crosses fingers and prays for the death of iSCSI

    • This is very much what Serial ATA is doing - in fact, it has a raw transfer rate of about 1.5 gigabits/second, or about one and a half times as fast as gigabit ethernet. After protocol overhead, that works out to the net throughput of ~150 megabytes/second that gives SATA/150 its name. "Parallel" ATA was designed when serial transfer rates, such as ethernet, were much lower in comparison. Now that serial has caught up (as evinced by the exponential increases in ethernet speed!), drive busses *are* migrating towards a more ethernet-like structure, but optimized for discrete, reliable, short-distance transfers instead of fragmented, potentially unreliable, long-distance transfers.
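
      The arithmetic behind "1.5 gigabits raw, ~150 megabytes net", assuming the usual 8b/10b line coding (10 line bits per data byte) - a minimal sketch:

      ```python
      # SATA line-rate arithmetic, assuming 8b/10b encoding (10 line bits per data byte).
      raw_rate_gbit = 1.5                      # first-generation SATA signalling rate
      line_bits_per_byte = 10                  # 8 data bits travel as 10 line bits
      net_mb_s = raw_rate_gbit * 1e9 / line_bits_per_byte / 1e6
      print(f"net throughput: {net_mb_s:.0f} MB/s")   # -> 150 MB/s, hence "SATA/150"

      gigabit_ethernet_gbit = 1.0
      print(f"raw speed vs GigE: {raw_rate_gbit / gigabit_ethernet_gbit:.1f}x")
      ```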
  • Here (Score:2, Informative)

    by Konster ( 252488 )
    SCSI drives are built to a much higher standard than IDE drives, especially the 10K RPM ones. Cost is a huge issue once much faster spindle speeds are involved: increase the rotational inertia and speed and you have to have a pretty fancy bearing system to cope with those loads. That sounds simple, and it is, but it is not cheap. The more rotational inertia a drive has, the more things like passive cooling and fancy materials have to be weighed against the intended consumer of such a drive. Typically, SCSI drives are used by corporations that usually have a nice service contract attached to the hardware. IDE, on the other hand, is the end-user and home market; combine that with the limitations of the IDE interface and Microsoft's problematic I/O-IDE software, and going faster does nothing for anyone, except maybe driving up costs for everyone involved and curtailing the MTBF figures. If anyone will do it, WD will, what with its fluid bearings. But, we shall see =).
    • Re:Here (Score:3, Informative)

      Yeah, we used several boxes with 10k SCSI drives in various raids. We used to lose a drive quite frequently (especially on shutdown-restarts) and the service contract made sure we had a replacement within 24 hours (though we kept spares for just that reason). I don't need the stress of wondering whether my HDD will come back up with my machine at home (or the extra noise) so 7.2K IDE works well enough for the moment.
      • Re:Here (Score:5, Interesting)

        by GigsVT ( 208848 ) on Saturday November 16, 2002 @10:41AM (#4685271) Journal
        I've also seen several benchmarks where the modern 7.2k ATA drives with 8MB cache in RAID configurations with a decent (or even Promise :) controller sometimes beat out 10k SCSI in the same RAID configurations. I'm sure this is also dependent on load patterns, driver/controller efficiency, etc, but it is something to chew on.

        Personally, I've mostly stuck to 5400 rpm ATA in RAID for higher reliability. For storing large files with little random access, the rotational latency isn't really a big deal, so you can make up the difference in sequential speed by adding an extra drive or two.

        That said, I did recently build an ad hoc NAS computer with 180GB 7200 RPM WD ATA drives, quantity 5, in software RAID5 for about 680GB usable. I used two ATA100 two port Promise controllers (with their own additional cache), and both onboard ATA channels for the RAID disks.

        The root/OS disk and CDROM was some random smallish SCSI stuff we had laying around. This was to free up available ATA ports.

        That thing flies. Compared to other 3ware ATA RAID5s we have with 5400 RPM Maxtor disks with 2MB cache, it pushes out a lot more per-disk throughput.

        I'm kinda leery considering the Promise cards have cache, and the drives also have large caches, none of which is directly battery backed, but this server is not being put into a critical role, and is kept on a UPS. I've noticed that battery-backed cache seems to have lost favor in RAID controllers. There is still a danger, correct?

        One thing that is striking about it is the latency. It just "feels" fast. I think that may have something to do with using Linux software RAID5 rather than 3ware hardware RAID, in addition to the cache and higher rotational speed.
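
        For the curious, the "about 680GB usable" figure from the five-drive RAID5 above checks out once you subtract one drive's worth of parity and account for decimal vs. binary gigabytes - a rough sketch:

        ```python
        # RAID5 usable space for the 5 x 180GB array described above.
        # Usable capacity is (n - 1) * drive_size; one drive's worth goes to parity.
        drives = 5
        drive_gb = 180                       # marketing (decimal) gigabytes: 180 * 10**9 bytes
        usable_bytes = (drives - 1) * drive_gb * 10**9
        print(f"usable: {usable_bytes / 10**9:.0f} GB (decimal)")   # 720 GB
        print(f"usable: {usable_bytes / 2**30:.0f} GiB (binary)")   # ~671 GiB, roughly the ~680GB reported
        ```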
        • Re:Here (Score:2, Interesting)

          by Wugger ( 17867 )
          I concur that Linux Software RAID performs quite wondrously. However, I recently had a bad experience which has soured me on it for all but "data-I-dont-need-no-steeeking-data" situations.

          What happened is the machine in which the array lived started to degrade (it was old) in multiple ways. The result was getting bad superblocks on several disks at once, and an array which was theoretically unrecoverable. What it taught me is that software RAID has a lot more failure paths than hardware RAID. Bad memory, bad motherboard, bad controller: all can affect the integrity of your array, because the array depends on the integrity of the kernel in order to maintain a self-consistent state.

          So now all my important arrays are on hardware RAID controllers. Yes, if the controller goes, I could still have a bad day, but at least it is just the one component, not the whole machine, that I am depending on.

          • Re:Here (Score:3, Informative)

            I know I've mentioned it in other topics, but really - check out EVMS [sourceforge.net]. It's IBM's port of the AIX Enterprise Volume Manager, with command-line, ncurses and GTK+ interfaces. It handles any legacy Linux disk and mdX volume type, and adds Veritas-style on-the-fly dynamic volume management, snapshots (block-level backup), etc.

            It's available as patches for 2.4.x, and is likely to be included by Linus in 2.6. Gentoo has it as an option, and I will put dollars on its inclusion in the next Mandrake, possibly the next RedHat Enterprise.

            A RAID-4 built on cheap, FireWire-attached chassis will provide impressive throughput, and can be constructed in a "star" topology, which removes the SCSI-style chain problems.

            RAID-4 is preferred by SAN vendors, as the independent parity volume takes separate I/O load, removing the write cost associated with RAID-5. If that's the spindle-set that fails, swap the parity disk, and re-build in the background.

      • I find this funny because of what RAID stands for ;-) Redundant Array of Inexpensive Disks. So building them out of IDE if you can meet your speed requirements is what the whole idea was about in the first place! Amazing ;-)
  • Toms (Score:5, Informative)

    by isorox ( 205688 ) on Saturday November 16, 2002 @09:16AM (#4685078) Homepage Journal
    I finished reading an article on Serial ATA about an hour ago at Toms Hardware [tomshardware.com]. Basically it's

    • Potentially faster
    • Easier to plug in thanks to smaller cables
    • More reliable, interference in the cable cancels out, like a balanced XLR microphone lead.
    • Longer cables, so you can plug drives in at the top of a tower case
    • Backwards compatibility, use your current IDE HDD with the new controller
    • Hot plugging


    Initially it will run at around 150 megabytes a second, but it should be able to increase to 600.
    • Hot plugging serial ATA is not going to be available for Windows users until the next version of Windows, which MS now says won't be until 2005.

      Of course the Linux kernel will probably support it a few months from now.

      Another plus you didn't mention... The cable length is 1 meter, but since it is serial, it's likely that it can eventually be extended to a couple of meters (depending on the controller, you might get away with it now), for an ultra-high-speed connection for external hard disks sometime in the future.
    • For SCSI, Ultra640 is being hammered out along with Serial Attached SCSI (SA-SCSI) [serialattachedscsi.com]. Soon you'll be able to use big cheap SATA drives (or the usual expensive performance SCSI) on a SA-SCSI interface (but not vice versa). SA-SCSI will feature the benefits of SATA and more.

      snatched from the site:

      What is the difference between Parallel SCSI and Serial Attached SCSI?
      Parallel SCSI is a proven enterprise level technology for I/O and device requirements with a twenty-year history of reliability, flexibility and robustness. Parallel SCSI has limited device addressability as well as certain physical limits associated with the nature of its distributed transmission line architecture (performance and distance), plus large connectors that make it unsuitable for certain dense computing environments.

      Serial Attached SCSI will leverage the proven SCSI technologies that customers expect in data center environments, providing robust solutions and generational consistency. It will be based on a serial interface, allowing for increased device support and bandwidth scalability, reducing the overhead impact that challenges today's SCSI environments. It will provide easy solutions for systems with simplified cable routing. It will also utilize Serial ATA development work on smaller cable connectors, providing customers a downstream compatibility with desktop class ATA technologies.

      Finally, this simplified routing will enable a new generation of dense devices, such as small form factor hard drives, which will enable storage solutions to scale externally where traditional parallel SCSI cannot, due to cabling and voltage challenges.

      Is Serial Attached SCSI complementary to or competitive with Serial ATA?
      Serial Attached SCSI complements Serial ATA by adding dual porting, full duplex, device addressing and it offers higher reliability, performance and data availability services, along with logical SCSI compatibility. It will continue to enhance these metrics as the specification evolves, including increased device support and better cabling distances. Serial ATA is targeted at cost-sensitive non-mission-critical server and storage environments. Most importantly, these are complementary technologies based on a universal interconnect, where Serial Attached SCSI customers can choose to deploy cost-effective Serial ATA in a Serial Attached SCSI environment.

  • by Raleel ( 30913 ) on Saturday November 16, 2002 @09:21AM (#4685095)
    AFAIK, the spinning mechanics of SCSI drives are the same as IDE ones, it's just that they are generally machined to a higher spec than the IDE ones. Another "let's give the common people something less durable, banking on it not being used as hard" thing.

    Note the recent move to 1 year warranties on IDE hard drives. SCSI drives are still 3-5 years. Honestly, I'm seriously thinking of doing SCSI in my next computer. Two years ago, I got a new computer and got ATA in it. It's been a good computer, but it's starting to feel its age. My previous computer had SCSI in it, and was a dual processor. The extra money I spent (almost 3k when I bought it) helped it last an extra year over this one as far as speed was concerned.

    If you do any serious disk activity, SCSI is a very very good way to go. If you plan on more than one person using a computer at a time, go SCSI. For instance, I have a coworker who runs Windows 2K at work and has Terminal Services running in admin mode. I logged in and started installing Cygwin on it (we're testing cfengine on Windows), and it hammered his machine. Made it unusable. That was just downloading stuff to disk! It's a P4 1.7 Dell desktop job. My dual P3-700 with SCSI never experienced anything like that until both processors were hammered running chemistry code and doing heavy disk activity.

    I don't have any empirical data, I just have experienced too much IDE sub-standardness. You pay extra money for a reason, but I personally think it's money well spent.
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Saturday November 16, 2002 @10:20AM (#4685204)
    Comment removed based on user account deletion
    • Re:Multiple heads? (Score:3, Insightful)

      by isorox ( 205688 )
      Would be interesting, however HDD read heads are different from a CD's. A CD drive has a moving laser that goes in and out; a HDD head is more like a record player's arm - you could physically get two heads on there, you may even be able to get three, but they would be running very close to each other.

      Can anyone with a clue answer this question?
      • Re:Multiple heads? (Score:4, Insightful)

        by GigsVT ( 208848 ) on Saturday November 16, 2002 @10:52AM (#4685297) Journal
        It would reduce reliability, as most catastrophic failures of hard disks involve a head crash of some sort. Twice as many moving parts is bad, mmmkkkay. :)

        The head assembly also takes up quite a bit of room. You would probably have to go to a half-height 5-1/4" form factor like old SCSI disks used.
    • Re:Multiple heads? (Score:4, Informative)

      by Fweeky ( 41046 ) on Saturday November 16, 2002 @11:35AM (#4685420) Homepage
      HDs already have multiple heads - one for each platter surface. However, they can't all be used in parallel to get some sort of on-disk striping system, because each head needs to individually fine-tune its position over the specific track to operate reliably.

      Since there's only one head assembly they're all mounted on, tuning one head means the other heads get out of whack and become useless while that one is operating.

      This requirement for precision means a multi-headed HD like that would need multiple head assemblies. Open up your favourite HD and see if you can work out where to put it :)

      In short -- it's not worth it. You introduce more complexity (== cost == less demand) and more things to go wrong, when you could just buy another drive and stripe, and probably still come out cheaper and more reliable than a single two-headed drive.

      It'll probably be faster, too, since you've then got two interfaces to squeeze data down.
    • It's called a drum. It has a head for every track on the disk. They were used for swap in mainframes back in the day.

      Of course your suggestion isn't as radical as a head for each track.
  • As of now, it's simply expensive to make very high RPM hard drives. Cost is the reason I didn't opt for SCSI, and I'm glad I didn't. The higher rotational speeds offered in SCSI drives offer only marginal speed increases, and they usually only come in small sizes (18 GB). RAID is the answer to higher performance with hard drives.

    My experiment with IDE RAID-0 [tomshardware.com] turned out wonderfully. For $160 I got what amounts to a 160-gig drive that was 2MB/sec slower than a 15k RPM SCSI drive (according to SiSoft SANDRA [sisoftware.co.uk]). That was from two plain old 7200 RPM 80-gig IDEs. When Serial ATA gets big, setups like this will be even easier, since the main limitation with IDE RAID is the number of drives you can attach to the board.
    • Just don't forget you also got yourself (at best) half the reliability. :)
      • by Jerph ( 550853 )
        It's arguable. I've heard nightmare stories about high RPM SCSI drives' reliability. And, from a cost point of view, you could probably buy four 20GB IDE drives for the price of one 18GB 15k RPM drive and set them up in RAID-0+1 or -3 and have vastly better reliability and speed.
        • by jemhddar ( 53448 ) <matt,helling&gmail,com> on Saturday November 16, 2002 @01:13PM (#4685858)
          Raid 3 is pretty atrocious unless you are reading and writing HUGE files all the time.

          Raid 3 has synchronized disk heads, which means all drives will be reading the same stripe, or writing to the same stripe, at the same time.

          For best performance with redundancy, RAID 10 (or 0+1) is by far the best choice. A RAID 10 array gives you 2 different data paths for writing data (just like a 2-disk RAID 0), but gives 4 locations for reading data back (like a 4-disk RAID 0). Plus you still have the redundancy built in: if any single drive fails, there's no data loss. The downside is that four 60GB drives will only give you 120GB of usable space.

          • With Volume Management, you can build a RAID-4 volume, with the parity "disk" on a Raid-0+1. Best of many worlds - very EMC2 SAN-style.
          • Danger! Danger! RAID 10 and RAID 0+1 are different, and the mean time to data loss is drastically worse for 0+1 (the version many controllers actually ship as standard) due to the way a second failure is handled... think about it for a few moments:

            RAID 1+0 (or 10) (mirroring plus striping) gives you a chance that if one drive dies, you still have a fully functioning side of the mirror on the other disk.
            RAID 0+1 (striping plus mirroring) means that if one drive dies, half of the mirror dies immediately.

            Thus, if you lose a drive in the other half with 10, your stripe set continues with one (non-mirrored) disk on each side. But if you lose a drive in the other half with 0+1, your mirror set fails completely since both sides are missing half of the stripes... bang! you're dead.

            Check out a more detailed writeup [ofb.net] that we consulted when debating this for a client...
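
            To make the difference concrete, here is a small enumeration (a sketch assuming a generic 4-disk layout, not any particular controller) of which two-disk failures each arrangement survives:

            ```python
            # Enumerate every two-disk failure of a 4-disk array and ask: does it survive?
            # Assumed layouts: RAID 10 mirrors pairs (0,1) and (2,3) and stripes across them;
            # RAID 0+1 stripes (0,1) and (2,3) and mirrors the two stripes against each other.
            from itertools import combinations

            def raid10_survives(dead):
                # dead only if some mirror pair has lost both members
                return not ({0, 1} <= dead or {2, 3} <= dead)

            def raid01_survives(dead):
                # alive only while at least one whole stripe set is untouched
                return not (dead & {0, 1}) or not (dead & {2, 3})

            failures = [set(pair) for pair in combinations(range(4), 2)]
            for name, survives in (("RAID 10", raid10_survives), ("RAID 0+1", raid01_survives)):
                ok = sum(survives(f) for f in failures)
                print(f"{name}: survives {ok} of {len(failures)} possible two-disk failures")
            ```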
        • That's nice. What the hell does that have to do with his specific scenario of two striped IDE drives?

          It's not arguable, unless you use non sequitur arguments, as you just did.


    • The parent comment is important, but it is easy not to see the importance.

      First, see the Tom's Hardware article about RAID (mirroring/striping) controllers referenced in the parent comment: Fast and Secure: A Comparison of Eight RAID Controllers [tomshardware.com]. As is usual for Tom's Hardware, the article is a bit confused. Apparently it was written hastily.

      Motherboards now often have integrated mirroring/striping controllers, so the cost is low. Even Intel has a motherboard with an integrated mirroring/striping controller now. In a 2-drive system configured as a mirror, the heads of each drive are moved independently to read data more efficiently than a single drive. Read performance is excellent, and write performance is the same as a single drive. Since most systems do more reading than writing, the overall performance is excellent.

      A mirror is far more reliable, since if one hard drive fails, all the data can be recovered from the other drive, and the system keeps running until the bad drive can be replaced.

      In systems in which there are only one or two users, SCSI is slower. SCSI is only faster when there are many simultaneous users accessing storage.

      Extreme solutions such as 15,000 RPM drives and SCSI and RAID 5 are appropriate for e-mail servers, but the noise and expense and lower reliability of the single drives don't make sense unless the computer is a server of some type.

      Hard drives with a high rotational rate are not necessarily faster at providing data than those with a slow rate. The bottleneck is often the time it takes to move the heads, and the time it takes to present the data to the CPU, not the latency of waiting until the data is under the head.

      RAID controllers can do striping, or mirroring, or both. When they do both, 4 drives are required, but read performance is high. Having more than one read head and being able to move them separately is very efficient. A 30,000 RPM drive would still have only one head mechanism.

      It is good to see other companies entering the market. Promise Technology was one of the first with low-cost mirroring controllers. Promise is, in my experience, an unbelievably backward company. The products work well, but Promise has sold products with poor setup methods for years. For those who remember DOS programs, the Promise setup user interface is like a DOS shareware program written by a novice programmer who is considerably worse than average in user interface design.

      Promise Technology is also known for the poor quality of their manuals. (The company says the manuals are being re-written.)

      The parent comment is correct. For most applications, a RAID controller with mirroring, or mirroring plus striping, is excellent.
      • Hard drives with a high rotational rate are not necessarily faster at providing data than those with a slow rate. The bottleneck is often the time it takes to move the heads, and the time it takes to present the data to the CPU, not the latency of waiting until the data is under the head.

        a 10K RPM drive has about 166 rotations/second -- or 6ms/revolution, giving an average rotational latency just over 3ms

        Similarly, a 7200RPM drive has a rotational latency just over 4 ms. This compares to high-end seek times around 3ms. In other words, it's quite possible to be in a situation where it's cheaper to jump tracks to get data than to wait for the disk to rotate to the wanted data on the same track.

        Higher rotational speed improves both rotational latency and overall transfer speed (a 15K disk is going to give you about twice the transfer rate of a 7200RPM disk with the same geometry, if all of the data is on one cylinder). The effect is far from trivial compared to seek times. It is, however, pretty much guaranteed to wear out your bearings much faster. I, for one, would be very worried about keeping a 15K hard disk in my desktop for 5 years.
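
        The same arithmetic for the common spindle speeds (a sketch; it ignores seek time and command overhead):

        ```python
        # Average rotational latency = half a revolution, for common spindle speeds.
        for rpm in (5400, 7200, 10_000, 15_000):
            ms_per_rev = 60_000 / rpm          # milliseconds per full revolution
            avg_latency_ms = ms_per_rev / 2    # on average the data is half a turn away
            print(f"{rpm:>6} RPM: {ms_per_rev:.2f} ms/rev, "
                  f"~{avg_latency_ms:.2f} ms average rotational latency")
        ```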

      • Read performance is excellent, and write performance the same as a single drive.

        Correct me if I'm wrong, but I thought that read performance would be better and write performance would be worse than a single drive.
        Reasons: for read performance, you only have to wait for 1 of the two drives to retrieve the data, i.e. the drive that happens to get the data the fastest.
        For write performance, you have to wait for both drives to finish writing (otherwise the whole purpose of mirroring is defeated).
        You're right of course that most applications do more reading than writing, so on average a mirrored setup should be slightly faster than a single drive.
  • by OneFix ( 18661 ) on Saturday November 16, 2002 @11:12AM (#4685346)
    It IS expense... we often forget, but only recently were hard drive manufacturers having problems with their 7200RPM and in some cases even 5400RPM drives. The reason is heat. If you check around, you'll find that the largest 15000RPM drive is made by Seagate (it's ~80GB and it's ~$1000)... why???

    When you raise the number of bits per inch of storage surface you create more stress and heat. When you raise the RPMs you create more heat (a lot @15000RPM). The overall effect is that you can't use the cheap parts that are used in most IDE drives... every piece of the drive must be manufactured to the highest specifications. Motors have to be of the highest quality. Hydrodynamic bearings must be used instead of metal ball bearings... this all increases the cost (as it pushes the technology).

    The reason why these faster drives are not sold as IDE is simple. Anyone who is willing to pay $1000 for a ~80GB harddrive is also willing to pay $75 for a decent controller card (if it's not already built into their workstation).

    How many ppl are going to be willing to pay $1000 for an 80GB IDE drive when they can buy a 300GB drive for 1/3 the cost? The end result is that most consumers simply don't care about the speed...the majority of IDE drives go into OEM systems and the consumer probably won't know if they put a 4500RPM drive in the system.

    So, why not get the best of both worlds. Buy a 20GB 15000RPM SCSI and put your system files and most widely used apps on that (~$130 for a 18G Seagate). And then buy a larger IDE drive for archives.

    When you think about it, you shouldn't need more than 20GB for your system, apps, and maybe a few games.

    As far as the slower IDE drive, just spend your money on more RAM for the system and increase the cache. And don't rely on the CPU intensive built-in IDE controller on most Intel/AMD motherboards...buy a decent controller card instead.

    And if you really want to get ~15000RPM with IDE technology, just get an IDE RAID controller and use striping...using this method you can actually get to much higher theoretical speeds than a single 15000RPM drive. with 4 7200RPM drives you could get up to a theoretical speed of 28800RPM!!!
    • "And if you really want to get ~15000RPM with IDE technology, just get an IDE RAID controller and use striping...using this method you can actually get to much higher theoretical speeds than a single 15000RPM drive. with 4 7200RPM drives you could get up to a theoretical speed of 28800RPM!!! "

      Do keep in mind there are 2 factors in hard-drive performance: access time and transfer rate. Your IDE RAID 0 is primarily upping your sustained transfer rate, along with giving you 4 times as many points of failure. Hope you have those drives on a 64-bit PCI bus too, or you get capped at around 100 MB/sec by the bus bottleneck.
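
      The bus ceiling mentioned above is easy to put rough numbers on (a sketch of theoretical shared-PCI bandwidth; real-world throughput is lower still):

      ```python
      # Theoretical PCI bus bandwidth: width (bits) x clock (MHz) / 8 bits per byte.
      configs = [("32-bit / 33 MHz PCI", 32, 33), ("64-bit / 66 MHz PCI", 64, 66)]
      for name, bits, mhz in configs:
          mb_s = bits * mhz / 8               # MB/s, theoretical burst rate
          print(f"{name}: ~{mb_s:.0f} MB/s theoretical (shared by everything on the bus)")
      ```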

      • Yes, I know about seek time. The RAID option only solves the transfer rate issue, and if you're doing all of this without a JFS, you deserve to have your system fail.

        I'm not suggesting that this is all you do, only that it's a fairly good option if you want more speed from available IDE technologies...
    • The reason why these faster drives are not sold as IDE is simple. Anyone who is willing to pay $1000 for a ~80GB harddrive is also willing to pay $75 for a decent controller card (if it's not already built into their workstation).

      Yeah, but only while they are expensive.

      When you think about it, you shouldn't need more than 20GB for your system, apps, and maybe a few games.

      Where do you get off telling everyone what to think? Seriously? My system has 80 gig right now, partly because it's a multiboot and I like keeping my disks 30-50% empty because it improves performance.

      As far as the slower IDE drive, just spend your money on more RAM for the system and increase the cache.

      Beyond a certain point adding RAM doesn't help much, caches only increase in speed marginally for a doubling of the cache size. Adding faster disks helps basically all of the slowest OS tasks go much faster.

      And don't rely on the CPU intensive built-in IDE controller on most Intel/AMD motherboards...buy a decent controller card instead.

      Yeah, right - CPU overhead makes a big difference in these days of 3GHz processors. Processors are getting faster MUCH faster than drives - the overhead is dropping by a factor of nearly 2 each year.

      And if you really want to get ~15000RPM with IDE technology, just get an IDE RAID controller and use striping...using this method you can actually get to much higher theoretical speeds than a single 15000RPM drive. with 4 7200RPM drives you could get up to a theoretical speed of 28800RPM!!!

      No. 15000 RPM gives half the latency of any number of 7200 RPM drives. Latency usually is the bottleneck, not throughput. RAID improves throughput, not latency.

      Rule of thumb, unless you have lots of disks on one processor- SCSI is a waste of time and money.

      • Rule of thumb, unless you have lots of disks on one processor- SCSI is a waste of time and money.

        SCSI is a more robust bus with better error detection. It also has a well thought out and more reliable electrical specification, allows multiple initiators, and can be used for hot swap. All of this while being the least expensive bus you can get 10k and 15k RPM drives for.

        Rule of thumb, unless you don't care about performance or added reliability isn't worth the price to you, use SCSI.

        BTW, if you're seeing a significant performance increase from keeping your drive mostly empty, you're using the wrong file system.
        • SCSI is a more robust bus with better error detection.

          Errors are common on IDE? Not as far as I know.

          It also has a well thought out and more reliable electrical specification, allows multiple initiators, and can be used for hot swap.

          Sure, for a RAID server it's probably ideal. For a desktop? Why?

          Rule of thumb, unless you don't care about performance or added reliability isn't worth the price to you, use SCSI.

          No; that's nonsense. There is no significant difference in peak throughput, reliability or latency between IDE and SCSI; both throughput and reliability are dominated by the performance and reliability of the physical harddrive. The bits simply come off the disk at a certain rate determined by the spin, and that's a rate well below the capacity of ATA133.

          BTW, if you're seeing a significant performance increase from keeping your drive mostly empty, you're using the wrong file system.

          Oh definitely; but I try to keep my disks at most mostly full, not nearly full. The fragmentation increases even under UNIX as the partition fills; although it deals with it far better than, say, FAT32, there's still a hit. Ever wondered why most UNIX filesystems can be 109% full? It's because in that last 10% performance is dropping off very markedly; and in fact above 85% full things are starting to really crawl. But the dropoff starts earlier than that.

          • Errors are common on IDE?

            It depends on how you define common, and whether you care about an error if you don't notice it right away. It would be more accurate to say "more common" rather than just saying "common."

            Sure, for a RAID server it's probably ideal. For a desktop? Why?

            A home desktop is exactly the place where most people don't care about added reliability or performance. A fast hard drive is only going to improve load times in your games and office software, and you're not doing backups, so you're going to lose all your data in a few years anyway. If it's an office PC you're probably storing your important data on the nice fast SCSI disks in the file/database server, so you can use IDE in the desktop. Both situations fit my rule.

            No; that's nonsense. There is no significant difference in peak throughput, reliability or latency between IDE and SCSI; both throughput and reliability are dominated by the performance and reliability of the physical harddrive. The bits simply come off the disk at a certain rate determined by the spin, and that's a rate well below the capacity of ATA133.

            The bottom line is that you can't get the fast, high quality disks with ATA/IDE. What you said about the reliability is just nonsense. SCSI has tighter specs for cables, and error detection is better than what's available with IDE. No matter what the speeds of the busses are, what's available in the marketplace dictates that if you care about performance you use SCSI, and if all that matters is capacity or price, you use IDE.

            BTW, I personally have a 10K RPM 18.4GB SCSI drive and two 80GB 5400 RPM ATA-133 disks in my machine. I store the system and source trees on the SCSI disk for speed, and everything else on the IDE disks because I can't afford to have all my data fast. When you're using a journaling filesystem or reading and writing lots of tiny source and object files, a single fast disk will outperform a stripeset any day. Oh, and XFS kicks ass.
          • I'm replying to this part separately because it's off topic.

            Ever wondered why most UNIX filesystems can be 109% full? It's because in that last 10% performance is dropping off very markedly;

            Actually it's because 10% is usually reserved for 'root' so that when a user fills the disk up enough to see an "out of disk space" message, the system can still write to logs and the administrator still has some room to do general maintenance to keep the machine running. Most filesystems let you change the percentage. Some filesystems also allow sparse files, which can have a size that is larger than the number of blocks that have been assigned to them (because those blocks were never written but a block with a higher offset was written, or under some filesystems because they contain only zeros), and certain utilities can (incorrectly) report those files as part of the used space.

            The fragmentation increases even under UNIX as the partition fills; although it deals with it far better than say FAT32, there's still a hit.

            You can't really make a generalization like that about "UNIX filesystems," because there are so many different types that behave differently. There are filesystems available that provide uniform performance despite the disk utilization, but now I'm just being pedantic.
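
            Incidentally, the root reserve is visible from userspace; a small sketch using Python's os.statvfs, where f_bfree counts all free blocks and f_bavail only the ones non-root users may use (Unix-like systems only):

            ```python
            # Show the gap between "free" and "available to non-root" on a mounted filesystem.
            import os

            st = os.statvfs("/")                      # any mount point will do
            block = st.f_frsize
            free_gb = st.f_bfree * block / 1e9        # all free blocks, including the root-only reserve
            avail_gb = st.f_bavail * block / 1e9      # free blocks an ordinary user can actually use
            print(f"free: {free_gb:.2f} GB, available to non-root: {avail_gb:.2f} GB")
            print(f"reserved for root: {free_gb - avail_gb:.2f} GB")
            ```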
            • Actually it's because 10% is usually reserved for 'root' so that when a user fills the disk up enough to see an "out of disk space" message, the system can still write to logs and the administrator still has some room to do general maintance to keep the machine running.

              That's historically not the reason in fact, although the spare space is used for this reason.

              You can't really make a generalization like that about "UNIX filesystems," because there are so many different types that behave differently.

              Only in detail. Every single filesystem I've seen detailed benchmarks for (XFS, FAT32, ReiserFS, ext3, ext2) degrades sharply above the 85% usage point; the exact point varies by a few percent, but in general this occurs.

              Anyway, if you don't believe me, no skin off my nose; go ahead, max out your disk usage - I don't care if your filesystem grinds to a halt.

    • And if you really want to get ~15000RPM with IDE technology, just get an IDE RAID controller and use striping...using this method you can actually get to much higher theoretical speeds than a single 15000RPM drive. with 4 7200RPM drives you could get up to a theoretical speed of 28800RPM!!!

      Actually, IIRC, it's more like a 100% speed increase with 4 drives in RAID 0. But you make a good point. Four 120GB (= 480GB) IDE drives in RAID 0 cost less than one 80GB 15K SCSI drive, and are probably faster. Most new mobo's have a RAID controller as well, nowadays.
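
      A rough sketch of why striping and spindle speed aren't interchangeable (the per-drive transfer rates below are illustrative assumptions, not measurements):

      ```python
      # Striping multiplies sequential throughput; rotational latency stays with the spindle.
      def array_stats(drives, rpm, mb_s_per_drive):
          throughput = drives * mb_s_per_drive   # best case: reads split evenly across members
          latency_ms = (60_000 / rpm) / 2        # still set by a single platter's rotation
          return throughput, latency_ms

      setups = {"4 x 7200 RPM in RAID 0": (4, 7_200, 40),
                "1 x 15000 RPM drive":    (1, 15_000, 60)}
      for label, (n, rpm, rate) in setups.items():
          tput, lat = array_stats(n, rpm, rate)
          print(f"{label}: ~{tput} MB/s sequential, ~{lat:.1f} ms rotational latency")
      ```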
  • You want to know *why* people don't try looking for 15K RPM drives on IDE? Most IDE drives are built to be as cheap as possible. People look for the following, in order (I suspect that a lot of them wish in retrospect that they had put reliability first).

    * Cost
    * Size
    * Reliability (unfortunately, hard to measure... MTBF is kind of BS)
    * Noise
    * Speed
    * Heat

    I tend to move Heat higher up, given the impact it has on Noise and Reliability.

    And, you know what? For most applications (workstation) hard drive speed is completely a non-issue. HD transfer rates improve over time *anyway*. If you increase areal density but keep rotational speed the same, you're increasing peak non-cache data transfer rate. So you get a faster hard drive now than you used to. Second, for the vast majority of workstation applications, hard drive time is simply not important. It's almost never the bottleneck for critical applications. If you're paging, yes, but RAM is cheap and does such a far better job that you're better off adding another 512MB of RAM to your system. File copies are rarely a problem -- you don't need to remove a hard drive, so you can just background the copy and forget about it, unlike in the days of floppies. If it's a copy to/from removable media, it's almost always the removable media that's the bottleneck, not the drive, so more drive speed will give you basically nothing.

    The other thing to remember is that RAM caching is far better than it once was (partly based on sheer amount of memory). Most of the time, your working data set will fit into memory just fine, and be cached. Linux has very good disk caching. Windows less so, but still much better than the dark days of 9x. And a silly little difference like a 5400 RPM drive being 25% slower than a 7200 RPM drive pales in comparison to the thousand or so times faster that your memory is. You're almost always better off getting more solid-state storage and not trying to work the bejeezus out of the mechanical parts of your hard drive.

    I would never recommend anything but a 5400 RPM IDE drive to anyone. 7200 and above will buy you heat issues, reliability issues, and noise issues. Tack on a fan and you help a bit with heat (of course, having "hot spots" in your drive and then heavily cooled spots isn't great either), but then you get more dust, and more noise. Of all the people I know, all the drives in the past three years that failed have been 7200 RPM, not 5400 RPM. That speed difference isn't huge to you, and is far nicer to the cheap, fragile mechanism in the hard drive.

    In conclusion -- buy 5400 RPM. You'll be a lot happier.
    • Why does everyone say 7200RPM drives overheat and are loud..? I've got four 60GB Maxtor and Western Digital drives, all right next to each other with no fan, and running all the time. Not a single one is hot; they're all just moderately warm.
    • Rotational speed isn't for high speed data transfers in my case, it's for fast seeks, and if you look at the specs on some of these drives (below), you'll see that you can get 3.2ms seek times; and that makes the difference for database and web apps where you've got thousands (or millions) of small files all over the place on the drive. A fast IDE drive, like the DiamondMax below, has up to 8MB of buffer space for caching but a ~9ms seek time (4.17ms latency).

      Maxtor Atlas 15k RPM SCSI drive [maxtor.com]

      Maxtor DiamondMax 7200 RPM IDE drive [maxtor.com]
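
      Putting those spec-sheet numbers together (a sketch; it treats average access time as average seek plus half a revolution, and ignores caching and command overhead):

      ```python
      # Random-access time from the figures quoted above: seek + half-revolution latency.
      drives = {
          "Maxtor Atlas 15K (SCSI)":      {"rpm": 15_000, "seek_ms": 3.2},
          "Maxtor DiamondMax 7200 (IDE)": {"rpm": 7_200,  "seek_ms": 9.0},
      }
      for name, d in drives.items():
          rot_latency_ms = (60_000 / d["rpm"]) / 2
          access_ms = d["seek_ms"] + rot_latency_ms
          iops = 1000 / access_ms                  # rough random-read operations per second
          print(f"{name}: ~{access_ms:.1f} ms per random access, ~{iops:.0f} IOPS")
      ```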

      And a silly little difference like a 5400 RPM drive being 25% slower than a 7200 RPM drive pales in comparison to the thousand or so times faster that your memory is.

      Most of the time, you are absolutely right. However, there are applications where pure throughput (i.e. MB/s) is critical (e.g. analog video capture), and in that case the 7200 RPM drives actually do perform noticeably better.
  • by Hadlock ( 143607 ) on Saturday November 16, 2002 @12:47PM (#4685729) Homepage Journal
    maybe i should just start selling ceramic heaters in a regular hard drive profile, attach a 512mb compact flash card, and claim it's a half-gig 20,000 rpm drive. people'd probably believe me, too!

    :)
    • First you'll have to replicate the significant noise a fast hard drive makes. Also, you'll have to come up with an explanation why the write-cycle life of your 'improved' drive is so poor.

      I see an opening here for people interested in writing saboteur trojanware: have your code write to a range of flash addresses a few 100K times, and wipe the cells out.
  • It's the market (Score:4, Insightful)

    by photon317 ( 208409 ) on Saturday November 16, 2002 @01:36PM (#4685980)

    Market forces drive IDE drives to be built as cheaply as possible while still having the right buzzwords to make consumers believe they're faster than their competitor. RPMs higher than 7200 still don't register with the mass populace, so it's not yet a factor.

    SCSI hard drives are all about top-end performance. That's why some SCSI drives cost $1,500 for the same capacity as a $150 IDE drive. It's about being able to reliably move the platter at twice the speed of IDE, and having the correct drive logic and buffer memory to make it useful in the real world, getting very high MTBF numbers, etc..

    Comparing typical IDE drives versus high-end SCSI (or FC for that matter) drives is like comparing small asian economy cars with the contenders in the F1 racing series. They have entirely different goals.
    • And it's probably relatively cheap to put those 4MB and 8MB memory buffers on them to make them perform really well in gaming-type situations.
      • I think it is important to point out that writing to cache is not safe. A power outage prior to committing to disk will cause data loss or corruption.
          • I wasn't trying to imply that this cache memory was 'a good thing', but rather that this is where a lot of the performance numbers come from when comparing IDE and SCSI.
  • by pbox ( 146337 )
    Do not buy the argument of "market forces" or "high quality components yield high price and low demand". Half of the Slashdot crowd and most of the geeks would all buy 15K IDE drives if available, even if they cost almost the same as SCSI. Hard disk manufacturers already make the drives; it is just a question of slapping an IDE controller board vs. a SCSI board on the drive. R&D cost is about 0.00001 Canadian cents per drive. No additional investment beyond the distribution channel, and voila, you have a new product that can potentially increase your market share. There will be a low but steady demand for these drives, and if any of the manufacturers spend a little money on marketing it can actually turn into a battle of RPM (a la MHz). I do not see why this would not benefit the makers.

    Well, this is my opinion, and now you have it.
    Peter
    --nope, don't want 10 thou or 15 thou drives. What I WANT is a 7x thou drive BUILT with the same specs and bearings as the high speed SCSI drives but limited in rotational speed, i.e., "overbuilt for reliability". Slightly more expensive than the IDE drives now, cheaper than the SCSIs. I'll swap bleeding edge expensive overclocked turbo-ized nitroed for reliable-stable-medium powered any day. I don't want it to break. I'll take an 80 gig reliable as heck over a 160 gig might-work-might-not a year from now. I want it to last 20 years, not two years or two months. I want built in quiet cooling somehow, too, for that matter.

    Are there any such drives out there now?
    • by dago ( 25724 )
      As you said in your comment before your question.
      It's called a SCSI drive.

      For example, here, an 18G 10k RPM SCSI drive costs about the same price as a 40G IDE 7200 RPM. Warranty of 5 years instead of 1, MTBF of 1-2 million hours instead of 500k hours. And it's even faster.
    • Re:I don't want it (Score:3, Informative)

      by SN74S181 ( 581549 )
      If you want decent performance and staggeringly high reliability, try to find NOS (new old stock) server grade SCSI drives. The big 5-1/4" full height Seagate drives are built to last forever, and because they have an 'unfashionable' large form factor, you can get them on eBay for $30-50 each in 7-9 gig sizes. Stick 'em in the back room on an NFS server with a Fast Ethernet card and you've got your reliable storage solution.

      Quiet cooling? That big wooden door between you and the big roaring box that contains the drives should suffice.
      • you can get them on eBay for $30-50 each in 7-9 gig size

        Actually, closer to $10 each for the classic 9 gig ST410800N. Shipping costs will dominate.

        I'd include 5.25" HP drives from the early-mid-90's in the same category as "built to last forever", too. You don't see as many of them, but the C3323 and its brethren are rock-solid.

      • --appreciate the tips you and the previous and following posters gave. I'll look for some.
      • If you want speed, you have to go 68-pin. You're lucky to get 4MB/sec out of a 50-pin.

        And the full-height 8gig 50-pin SCSI drive I bought for $20 or so was so SLOOOOW that it was unusable. Seriously, that thing had about as much speed as my floppy drive. I ended up giving it away.
      • I picked up about 40 NOS IBM 9G SCA drives (back from when IBM made good drives) for ~ $20 each at auction (real old-fashioned auction, not ebay). Great drives, bit chunky, bit hot, but very fast indeed. All my dev boxes have 2 or 3 of these in RAID-0. (Though the server is RAID-5 IDE just due to space considerations).

        Shop around, you can still find some real bargains out there.
  • by jasonditz ( 597385 ) on Saturday November 16, 2002 @05:52PM (#4687273) Homepage
    The major issue here isn't "can't" so much as "shouldn't".

    IDE is targeted at the "at home" user, whereas SCSI is now almost the exclusive domain of businesses looking for performance and haX0rs looking to cut compile times down. The average IDE user just takes the drive, plugs in the cables, and sticks it in... cooling is never even thought of; indeed, you'll be lucky if he puts more than one screw in it.

    Even a 10K drive runs HOT. If it's on for more than a few days without a fan you're risking your data. A 15K drive that a non-clueful user stuck bare into his PC would be:

    1. A support nightmare (hey, your newfangled hard drive turned into a pile of pudding in my PC)

    2. A fire hazard (even if it's the customer's own damned fault, better to not get him burned to death)

    • 5 x 10k drives in an SCA container (3 x 5 1/4" bays in size) with two side fans and two rear blowers generates a _lot_ of heat :-).

      http://www.elanvital.com.tw for where I get equipment.
  • If anyone has noticed, hard drive warranties have been shrinking due to the quality, size, speed, etc of newer drives. It's getting hard to find a 3 year warranty on IDE drives lately; 1 year is becoming standard and there are even DOA-only warranties on some drives now. This boils down to huge sizes spinning fast... so if we wanted 10k RPM IDE drives today, that would essentially make warranties on drives nonexistent. I think the best thing us consumers can do is only purchase IDE drives with 3 year warranties, even if it costs a couple bucks more. Manufacturers will keep pushing crappy IDE drives on us, and if we tell them with our $$ that we don't want short-life drives, they might decide to up their quality and eventually we could end up with 10k RPM drives that are reliable and fast as hell. I don't know about you, but I'm sick of RMA'ing drives...
  • If they were smart they would quit wasting time trying to beat a dead horse. There's only so fast you can spin the disk before it will destroy itself. Why are we so obsessed with a technology derived from records? With chips getting smaller, faster, and cheaper, they should be making NVRAM drives. I would much rather have a 100GB NVRAM drive than an HDD. Imagine how much faster the computer would run: no more latency issues with HDDs, no more head crashes, less space taken up, no more of this PM/SM/CS/Slave crap. Imagine upgrading NVRAM drive space, it would be as simple as installing another module... Imagine the possibilities.
