Hardware

Are Newer And Faster IDE Drives Troublesome?

viperjsw writes: "Earthweb is running an interesting article on what seems to be a trend of failures in newer 7,200 RPM IDE hard drives. I am the lead hardware engineer for my company, which runs four thousand 7,200 RPM ATA/100 Maxtor and IBM hard drives. I have not seen any failure trends, though failure rates are at about 5-10%. Are Earthweb's reports verifiable?"
  • Momentum and heat (Score:4, Informative)

    by Deagol ( 323173 ) on Monday March 25, 2002 @04:41PM (#3223666) Homepage
    From what I gather, a drive's spindle speed has a positive relation to the heat it gives off. I don't know whether that's due to friction (the bearings?) or to the power consumed. With increased heat comes a reduction in the life of the supporting electronics. A larger temperature swing when powering on and off means more wear and tear, too.

    Also, spin-up will be harsher on the faster drives.

    I wonder how the failure rates of 10,000 and 15,000 RPM SCSI drives compare to those of slower models.
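
    As a rough illustration of the speed/heat relationship, here is a back-of-envelope sketch in Python. It assumes windage losses follow the usual fan-law cube relationship and bearing friction scales linearly, split evenly at the baseline; those are illustrative assumptions, not measured drive data:

        # Relative heat dissipation vs. spindle speed (illustrative only).
        # Assumes windage power scales with the cube of RPM (fan-law analogy)
        # and bearing-friction power scales linearly with RPM.
        BASELINE_RPM = 5400

        def relative_heat(rpm, windage_share=0.5):
            r = rpm / BASELINE_RPM
            return windage_share * r**3 + (1 - windage_share) * r

        for rpm in (5400, 7200, 10000, 15000):
            print(f"{rpm:>6} RPM: ~{relative_heat(rpm):.1f}x the heat of a 5400 RPM drive")

    Under those assumptions, a 7,200 RPM drive gives off nearly twice the heat of a 5,400 RPM one, and a 15,000 RPM drive more than ten times as much.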

    • One thing worth considering is the number of power cycles the drive sees. I think the typical installation of a 10K RPM or 15K RPM SCSI drive sees only a few dozen cycles in its lifetime, due to the "power it up and let it go" nature of servers. Real servers also tend to leave some space between the drives in an array, so that each drive is guaranteed some cooling.

      Also, I wonder if cheap drives geared to home users are built with sloppier tolerances, such that some drives are simply doomed to fail. For example, I recently bought a cheap 40 GB drive that vibrates noticeably (and it worries me), but all the server-grade SCSI drives I've seen run really smoothly (no worries until the bearings start grinding after a few years).
    • Mu (the coefficient of friction) is probably no higher for a faster drive, and neither is the efficiency of whatever circuitry is in there. But if more power is going in (and a faster spindle means more power in, barring some sort of transmission), then you're going to get more waste heat out. Nothing special.
  • by stienman ( 51024 ) <adavis@@@ubasics...com> on Monday March 25, 2002 @04:47PM (#3223774) Homepage Journal
    7200 RPM drives run hotter than previous drives, and they must be cooled. Previously people rarely gave a thought to drive cooling, and if they don't take it into account now they will see large failure rates. If your drive is too hot to touch after running for an hour, then you need to cool it off.

    I've been installing 7,200 RPM IDE drives in servers and workstations for well over a year now, and the only complete failure I've had was one drive that was dead from the start. I have had drive errors crop up from heat (add a fan, separate the drive from other equipment, don't sandwich it between the floppy and Zip drives, etc.) and from using a 40-wire IDE cable instead of the 80-wire ATA/100 cable.

    FWIW, I used Fujitsu until a few months ago, along with IBM, Maxtor, and a few Seagates. They have all been at the lower end of the price range ($99 wholesale - I went from 10 GB to 20 GB and am currently using 40 GB).

    -Adam
    • by Anonymous Coward
      I've been using 7,200 RPM (SCSI) drives for many, many years and never had any particular problem cooling them. (I did have some issues with the early 10K Cheetahs, though. Something like this [3dcool.com] helped.)

      My guess is that there are other engineering factors going into the heat equation besides the raw RPM number.
      • by stienman ( 51024 ) <adavis@@@ubasics...com> on Monday March 25, 2002 @06:51PM (#3224811) Homepage Journal
        You can have any two:
        • Cheap
        • Fast
        • Low to no cooling requirements
        Of course your 7,200 RPM SCSI drives run well: first of all, you don't put them in cheap, poorly designed cases. Second, they are not inexpensive, and much of that extra money goes toward making them last longer - and one way to make something last longer is to reduce its heat buildup.

        Cheap drives cut corners on motors, bearings, and well-engineered cases. So cool fast drives cost more money than cool slow drives or hot fast drives.

        -Adam
        • Cheap drives cut corners on motors, bearings, and well-engineered cases. So cool fast drives cost more money than cool slow drives or hot fast drives.

          That is kind of a troll. Corners get cut in almost every product; they are all made as cheaply as possible. Even expensive SCSI drives fail, and sometimes they are the very same mechanism as the cheap drive, just with a SCSI interface bolted on and a bigger price tag.

          if scsi then
              price = price * 4
          end if

          OK, what is important:
          - MTBF
          - The service agreement
          - Some technical specifications that are hidden deep in the documentation
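
          As a back-of-envelope illustration of why MTBF matters, here is how a quoted MTBF translates into an expected annual failure rate, assuming a constant failure rate and 24/7 operation (the 500,000-hour figure is an assumed example, not any particular drive's spec):

              # Convert a quoted MTBF into a rough annual failure rate (AFR),
              # assuming a constant failure rate and 24/7 operation.
              HOURS_PER_YEAR = 24 * 365

              def annual_failure_rate(mtbf_hours):
                  return HOURS_PER_YEAR / mtbf_hours

              mtbf = 500_000  # assumed example spec, in hours
              print(f"MTBF {mtbf:,} h -> ~{annual_failure_rate(mtbf):.1%} failing per year")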

    • Very true. I find it amazing how case manufacturers leave only 1-2 mm between drive bays when there is usually plenty of room inside the case for 1-2 cm of spacing.
  • by crow ( 16139 )
    I'm not sure about failure rates, but heat and noise can be a problem with the faster drives. That's the general consensus among people upgrading ReplayTV and TiVo units. Granted, that's a special case where the extra speed is of no value while acoustics are very important. Anyway, the point is that you don't get the extra speed for free.
  • Hard disks really suffer if you power them up and down often; they're mechanical, and the materials inside aren't strong enough to take that indefinitely. That's why a lot more disks fail in workstations than in servers. Heat is a really important factor, too!
  • by rudy_wayne ( 414635 ) on Monday March 25, 2002 @05:03PM (#3223938)
    Lack of cooling is certainly an issue with 7200 rpm and faster drives. Since installing fans on all my hard drives, the number of failures has gone way down.

    However, there is a more troubling issue:
    How is it that you can now buy a 40 GB hard drive for less than $100? Simple -- the manufacturer cuts corners on quality and cranks them out by the thousands in third-world sweatshops.

    IBM is now putting disclaimers on some of its hard drives, recommending against operating them for more than 8-10 hours per day.
  • But it seems that 5-10% is quite high. From what I recall of my worthless stats class a few years back (when ".com" wasn't tantamount to saying "I had a posh job and got fired"), a figure like 5% was statistically significant. If a business has a manufacturing process, a failure rate that high would seem like a sure-fire way to seriously wreck the bottom line. Is 5-10% really true?
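
    For scale, here is the plain arithmetic behind those percentages, applied to the four thousand drives mentioned in the story (no assumptions beyond the numbers already given):

        # What a 5-10% failure rate means for a fleet of 4,000 drives.
        fleet = 4000
        for rate in (0.05, 0.10):
            print(f"{rate:.0%} of {fleet} drives = {int(fleet * rate)} failures")
        # 5% of 4000 drives = 200 failures
        # 10% of 4000 drives = 400 failures
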
  • The culprit... (Score:1, Redundant)

    by OneFix ( 18661 )
    Not all new drives experience this problem. Specifically mentioned in the article are hydrodynamic bearings, like the ones used in my newer Quantum Fireball AS Plus 60 [maxtor.com].

    So the real culprit is the old metal ball bearings that are still used in some of these drives...

    So, what's the big fuss about? Well, it would be like an automobile manufacturer making a car without airbags today (only without that whole life-or-death thing)... :)
  • Yes, there have been some disturbing reports of various hard drives failing... when I bought my 40 GB, a number of Maxtor 7,200 RPM drives were having problems (I went with the WD 5,400 RPM; it was all they had in stock anyway), and then there's the IBM issue...

    While some of this can be attributed to bad products, it makes you wonder: with hard drives getting bigger, are more people speaking out with complaints because they lose more? Or are more hard drives being sold than ever, increasing the raw number of incidents (the same number damaged per million, just more millions sold)?

    Just my musings... never had a problem with my hard drives yet.

  • I had some really bad experiences with one certain type of IDE drive... the 46 GB IBM DeskStar. We originally bought about 15, which all had to be replaced within about 3 or 4 months. The replacements died too. It did not matter if we installed them in desktops or servers (in the server, every drive actually had its own fan). Finally, we received 45 GB Maxtors or 60 GB IBM DeskStars, which have been working fine since. Funny thing is, the bad drives were manufactured in different countries, so it could not just have been a bad batch - more like a design flaw.
  • IDE Disks in General (Score:2, Interesting)

    by Anonymous Coward
    For enthusiasts (and anyone serious about reliability and speed), to lovingly configure a high-end Pentium 4 or Athlon system and then throw a bunch of IDE devices in it is absolute idiocy.

    I've learned the hard way to cut corners somewhere else if you have to, but always buy SCSI drives.

    I would take a Pentium 100 with SCSI disks over any Athlon/P4/whatever system with IDE disks. Spinning faster may make IDE disks fail sooner, but they're going to fail sooner anyway. The rule of thumb, I've found, is generally true: IDE drives are shoddily engineered, slow, and prone to failure. You get what you pay for.

    SCSI disks aren't perfect, of course, but I would never trust anything important (much less a server of any significance) to IDE disks.
    • Except, of course, that drive manufacturers tend to use the exact same mechanism for IDE and SCSI drives, just with different controllers slapped onto it.

      The reason you don't hear about SCSI failures is that the number in use is small compared to IDE, thanks to the home-PC explosion. I suspect the failure rate is the same; there are just a lot fewer SCSI drives out there.
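
      That base-rate effect is easy to illustrate. In the sketch below, the installed-base figures and the shared 5% failure rate are invented for the example, not market data:

          # Same per-drive failure rate, very different anecdote counts:
          # with far more IDE drives in the field, IDE horror stories
          # dominate even if per-drive reliability is identical.
          rate = 0.05  # assumed, identical for both buses
          installed = {"IDE": 50_000_000, "SCSI": 2_000_000}  # assumed counts
          for bus, count in installed.items():
              print(f"{bus}: {int(count * rate):,} failures at the same {rate:.0%} rate")
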
      • by Anonymous Coward
        Except, of course, that drive manufacturers tend to use the exact same mechanism for IDE and SCSI drives, just with different controllers slapped onto it.

        For the most part, that's not true anymore. Check the spec sheets. Also, check the warranty length and the manufacturers' MTBF numbers.
      • I once thought the same thing - same disks, different interfaces. Still, that doesn't explain the price differential between IDE and SCSI, which is substantial.

        I have two answers to my own dilemma: either SCSI drives undergo far better tolerance checks and testing before they ship, which might explain a good deal of the reliability, or the drive manufacturers are gouging their SCSI customers.

        I think the real answer is a combination of the two. There's nothing wrong with many IDE drives (except the interface, for SCSI aficionados, of course), but SCSI drives have a much better record with everyone I've ever talked to. I bet most people with an important workstation or server consider the cost of SCSI (or FC-AL, for that matter) drives a no-brainer, and a small part of the machine's overall cost-per-work-performed value in any event. Companies and serious individuals are natural "suckers" for robust but expensive kit. Just compare a P4 Intel box with a Sun SunBlade 2000: I know from experience that buying or building really nice Intel boxes soon shaves a whole load off the price advantage Intel has over other architectures.

        The main comment I have is that IDE drives are mostly sold into consumer markets (though I'd be interested to be corrected on that), whereas SCSI drives are mainly sold into a "professional" market. That really shapes the manufacturers' whole price/quality focus.


      • Except that I've noticed the same trend: of otherwise similar IDE and SCSI drives, used essentially the same way, the IDE drives have a higher failure rate. But I've only noticed this on a small scale (i.e., 20 servers, half IDE and half SCSI) and with one manufacturer (and I believe all of them were the same model as well, though it might have been two or three different models, all manufactured within a few months of each other).

        My theory was that when the manufacturer QA'd the drives, the ones that were perfect or close to it became SCSI units, and the rest that merely passed became IDE units. This makes a certain amount of sense, since most people who need serious reliability go for SCSI. IDE is generally used for desktop applications, where 24/7 reliability isn't really a factor, and while restoring from backup (you *do* have backups, right?) is a pain, it probably isn't going to cost you several million dollars of revenue. Arguably, the stress on a workstation drive is also lower than on a server (personally, I think the power cycling might negate this - not to mention that anyone with half a brain has their servers on conditioned power, which you can't say about workstations, especially home ones), so the failure rate of the marginal drives might still be acceptable. And one desktop user isn't likely to sue you over one crashed drive, whereas there is that possibility if a significant percentage of server drives fail prematurely in a company's server farm.

        But I have no evidence to back me up on this one. I didn't really care that much (all I cared about was: don't use the IDEs for the servers!)

        Another theory might be that the SCSI drives are made at a different plant, where the QA or something else ultimately makes them higher quality.

        *shrug*
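
        For what it's worth, the binning theory can be sanity-checked with a toy simulation. The quality distribution, thresholds, and failure model below are all invented for illustration and say nothing about how any manufacturer actually sorts drives:

            # Toy model of the binning theory: draw a "quality" score per
            # drive, send the best units to the SCSI bin and the merely
            # passing units to the IDE bin, then compare failure rates.
            import random

            random.seed(0)
            drives = [random.gauss(1.0, 0.1) for _ in range(100_000)]
            scsi = [q for q in drives if q >= 1.05]        # best units
            ide = [q for q in drives if 0.90 <= q < 1.05]  # merely passing

            def failure_rate(bin_, threshold=0.95):
                # Assume drives below the threshold eventually fail in service.
                return sum(q < threshold for q in bin_) / len(bin_)

            print(f"SCSI bin: {failure_rate(scsi):.1%} eventual failures")
            print(f"IDE bin:  {failure_rate(ide):.1%} eventual failures")

        Even though every drive comes off the same line, the IDE bin inherits all the marginal units and ends up with a markedly higher failure rate.
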
  • by hamjudo ( 64140 ) on Monday March 25, 2002 @07:31PM (#3225102) Homepage Journal
    I want last year's model, but not just any year old model. I want a model that has had a low failure rate.

    Where would I find reliability ratings for disks?

    Actually, for me, two-year-old models should be fine; 40 GB is way more than I need for most of my systems. But I want a freshly made drive, not one that's been sitting on a shelf for 18 months. An old drive has probably picked up some new failure modes - hardened lubricants or something.

    • Back when the IBM Deskstar thing was going on, Storage Review [storagereview.com] put together a reliability survey in which people posted what drive they had (I don't think drives first manufactured before 1998 could be included), how long they had used it, and whether it had failed, was still running, and so on.

      The survey is currently down for maintenance, but whenever it comes back up, just go to this page [storagereview.com], sign up, and browse the results. Of course, some entries probably haven't been updated in a while, so you may not get the most reliable info, but it's still better than information published back when the drive first came out.
  • by E-prospero ( 30242 ) on Monday March 25, 2002 @09:48PM (#3225807) Homepage
    On a somewhat related note, does anybody have experience with drive problems resulting from physically mounting drives at unusual angles (i.e., at a 45-degree roll or pitch, rather than horizontal or vertical)? Should one expect higher failure rates, or shorter drive lifespans, from unusual mounting arrangements?

    Manufacturer specifications always state that drives must be mounted horizontal or vertical, but who ever pays any attention to the manufacturer.... :-)

    Similarly for CD and DVD drives: are there any potential problems with mounting these at an angle? I have played around with mounting drives at an angle, and the drive trays and so on seem to work fine, but it is difficult to test long-term performance or failure likelihood when you only have one drive to play with.

    The reason I'm interested: I'm working on a case mod, and it looks like I will have to mount the drives at weird angles to accommodate the case geometry...

    Thanks,
    Russ Magee %-)
    • are there any potential problems with mounting these drives at an angle

      Those drives, no. Some 5.25-inch drives needed to be reformatted if you mounted them a different way, and all the 8-inch drives were like that. I've never seen a 14-inch drive mounted any way but horizontal; it would be bad.

      The shorter the drive's arms get, the less the angle matters. I've never heard of a laptop disk crashing because someone turned the laptop on its side.

    • It might matter if you have an IBM drive... I have a 40 GB IBM Deskstar (the 5,400 RPM version; I had heard about all the problems with the 7,200 RPM ones, so I got the slower model instead), and it recently started making strange clicking noises and developed a lot of bad sectors. It wouldn't even boot anymore. Then I took it out of its horizontal drive bay and let it hang by the cable so it's vertical, with the cable coming out the top, and it has worked fine... for now.

      So much for IBM quality...
  • I have a Lian Li case with two fans running at full speed in front of my hard drives. So far I've had two drives fail, an IBM 40 GB and a WD 60 GB. I had both RMA'ed, but I'm hearing my replaced 60 GB making that funny click again :(

    All this in six months' time.

    I can also hear my GeForce fan dying too :(
  • It's true, especially with hard drives...
    --Heat--
    I've seen a lot of posts about adequate cooling, but sometimes there can be too much cooling.

    Heat is accounted for, and even put to use, in higher-speed drives. Its use? Thermal viscosity breakdown.

    I don't know exactly what type of lubrication the drives use, but I'm almost 100% positive it's silicone-based. Silicone grease doesn't conduct, so if it leaks it won't cause any problems with the underlying circuitry. We all know this from CPU fans.

    What I'm saying is, these drives have a sealed bearing system that uses silicone grease, and the grease has to get hot in order to *break down*. I'm guessing there's a certain point where the stuff chemically goes from grease to liquid.

    When you're spinning at 7,200 or 10,000 RPM, I would think the bearing needs something less grease-like - more liquid - to maintain those speeds.

    Now, before I get modded down as a troll, ask yourself: how many of these top-of-the-line hard drive technologies have I actually worked with? Have high-end SCSI drives always been hot? Yes! It is by design, not by defect, and people really shouldn't jump to conclusions about it. Your best bet lies in placement, which I am about to cover.

    --Placement--
    I learned this from an ex-Fujitsu hard drive support person. (If you are looking for someone to support your hard drives or similar products, let me know - I can hook you up.)

    Placement of the drives is very important. Picture this: you have a small head, about the size of a match head, floating on a cushion of air no thicker than a few human hairs. Think about all the laws of physics needed to make that trick work. (Funny, we just had an article about a spinning disk creating gravimetric distortions.) Anyway, the drives are engineered to be reliable at any 90-degree angle. Anything other than that and you're asking for it.
  • Lots of the new mobos come with Promise RAID IDE onboard. It's limited to mirroring and striping, but for the extra $10-$20 it's worth getting - and drives are cheap.

    I can't afford those $2,000 DLT tape drives, or the many hours spent feeding 10-20 CD-RW discs to do backups, so this is the most convenient solution.

    • ... MIRROR it, don't stripe it. Striping means you double your chances of losing your data, whereas mirroring greatly reduces them:

      What are the chances that both drives would die within hours of each other...
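
      The arithmetic behind that advice, assuming independent failures with some annual per-drive probability p (the 5% below is an assumed figure, not a measured rate):

          # Yearly data-loss probability for two-drive arrays, assuming
          # independent failures with per-drive probability p.
          p = 0.05  # assumed annual per-drive failure probability

          stripe_loss = 1 - (1 - p) ** 2  # RAID 0: either drive failing loses the array
          mirror_loss = p ** 2            # RAID 1: both must fail
                                          # (ignoring rebuild-window exposure)

          print(f"RAID 0 (stripe): {stripe_loss:.2%} chance of data loss")  # 9.75%
          print(f"RAID 1 (mirror): {mirror_loss:.2%} chance of data loss")  # 0.25%

      Striping roughly doubles the single-drive risk; mirroring cuts it by more than an order of magnitude.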

  • The average consumer looks at $$$/gig and no further. If the majority looked at failure rates first and $$$/gig second, the industry would be a different story. At a certain point, the number of failures of a particular drive creates word of mouth ("don't buy one of xxx's drives"), so the industry tends toward whatever failure rate it can get away with.

    Personally, in the last year I've had more trouble with hard drives than the last fifteen years. I think it sucks!
    • Personally, in the last year I've had more trouble with hard drives than the last fifteen years. I think it sucks!

      I have to agree with you. I have three drives RMA'ed right now, including an IBM 40 GB that is six months old. I'm seriously looking into RAID, SCSI, or both. I'm really gun-shy about IBM right now.

  • The only problems I have ever seen have been with IBM drives. I have a Western Digital and it's been great. Of course, it sits next to my Zip drive, and since I purchased a CD-RW drive, the Zip gets little use. If you have more than one hard disk and you have two 7,200 RPM drives mounted right next to each other in the case, that's asking for trouble. At work we have a 15,000 RPM SCSI drive in the worst environment of all - a 1U server - and it hasn't failed yet, and it's been on 24/7 for a while now. Some may say that 1U servers are great, but to me they are portable heaters: the air coming out the back is hotter than the air coming out of the mainframe (which used to be the hottest thing in the room).
  • If you run, let's say, a UDMA ATA/66 controller and you have some kind of trouble, the driver might force the OS to mark sectors as bad blocks, even if they were not actually bad. I had an IBM drive, and after rewiping the disk and testing many times, the bad blocks were gone.

    So maybe many UDMA drivers are not up to date. In particular, HighPoint Tech (which Abit and others use) has many beta drivers that I do not trust completely.
  • Does anybody know of a good source for aluminum/copper/whatever fin material to make a heatsink for a hard drive? I.e., remove the stickers from the top, apply a light coat of heatsink grease, and attach a large heatsink, maybe with zip ties or some kind of clip. Maybe if the fins stuck up half the bay height, you could install the drive above it upside down, do likewise with that one, and put a grill in the bezel slot so the case fans draw air across the fins?
    I tried Google, and the closest thing I found was the CoolerMaster Cool Drive [heatsink-guide.com], which seems to be crippled by the need to stay in a single 5.25" bay, and therefore probably doesn't supply anywhere near enough airflow space or fin area.
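
    As a rough way to size such a heatsink, here is the target thermal resistance you would be shopping for. The power draw and temperature figures are assumed typical values, not any drive's spec:

        # Back-of-envelope heatsink sizing for a hard drive.
        power_w = 8.0       # assumed heat dissipated by a 7,200 RPM drive
        t_max_c = 45.0      # assumed safe drive-case temperature
        t_ambient_c = 25.0  # assumed air temperature inside the case

        # The whole sink-plus-airflow path must stay below this value.
        max_thermal_resistance = (t_max_c - t_ambient_c) / power_w
        print(f"Need <= {max_thermal_resistance:.1f} C/W")  # 2.5 C/W
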
    • I wouldn't remove the stickers from the top of the drive if I were you. IDE failure rates are higher than SCSI's, and if you take the sticker off (or badly discolor it or mark it up), you can almost guarantee the manufacturer won't take it back.

      I think you are looking for something like this [hardcorecooling.com]: $13.95, and you can probably check pricewatch.com [pricewatch.com] and find it (or something like it) cheaper.

      Hope this helps.
