Data Storage Hardware

Where are the High-Capacity SCSI Drives?

An anonymous reader asks: "Storage technology has really exploded in recent years, giving us ATA drives up to and exceeding 200-250 GB per drive. Why is it that SCSI drive technology has remained stagnant? I can't find a SCSI drive exceeding about a 146 GB capacity. Instead, businesses (and some individuals) wanting greater storage capacities are required to buy more drives which takes up more space, generates more heat, provides more points of failure, uses more electricity, etc. Why is this so?"
This discussion has been archived. No new comments can be posted.

  • I don't know, but it srives me crazy!
  • Instead, businesses (and some individuals) wanting greater storage capacities are required to buy more drives which takes up more space, generates more heat, provides more points of failure, uses more electricity, etc.

    Are you unfamiliar with the concept of RAID [google.com]? That's where all those SCSI drives are going, and it most certainly does not add more points of failure as it pertains to systems. Businesses do not want high-capacity single SCSI drives, especially when they can pile together 146 GB drives.
    • I think the point is that they can pile four 250 GB drives into a server to have a terabyte array, or they can pile in seven 146 GB drives to get the same result.

      Being limited to 146 GB drives means you are limited in scaling, which, of course, is what RAID is all about, as you've pointed out.
    • "Business do not want high-capacity single SCSI drives"

      Why? they make for higher capacity raids.

      The more devices the controller has to be able to handle the more expensive, also although more drives means better overall performance, the overall efficiency goes down.

      The big thing is like beggers home users have pretty much been locked out of SCSI. Even a single scsi drive yields better performance than an IDE drive.

      If scsi drives were offered widely in home pc's, there would obviously be a performance in
      • Heh, Apple used to use SCSI as the standard, but they switched (ironic) a while ago. For the home user, there isn't really the need for that much power.
        I know, it's ironic that when computers are advertised, they usually have the most blazing-fast CPU and crap for the rest (though this is improving). For the money, more RAM would probably help users out better than a SCSI disk (or two, since the advent of high-end video games, MP3s, and movies has really caused the demand for storage to soar).
        • "For the money, more RAM would probably help users out better than a SCSI disk(or 2 since the advent of high end video games, mp3s and movies has really caused the demand for storage to soar)"

          I doubt it, since 90+ percent of them are running windows and windows starts to swap before you hit the desktop, no matter how much memory you have. That means the hard drive is your bottleneck, not memory.

          On a linux, bsd, or pretty much anything not windows I'd agree, you need to put in enough memory that it's rare
    • **Businesses do not want high-capacity single SCSI drives, especially when they can pile together 146 GB drives.**

      That's flawed logic. That's saying that businesses don't want more space. Of course they want more space, higher capacity, more reliability, and faster speeds, and you don't lose any reliability if the drives are just as reliable and there are just as many of them.

      However, as to the original poster: just buy some damn SATA drives.


    • Actually, some businesses *DO* want the capacity.

      We bought two Promise 15100 arrays and put in thirty 250 GB drives. Sometimes an array needs to be large, sometimes redundant, and in this case, both.
  • My Guess (Score:5, Insightful)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Friday July 23, 2004 @09:20PM (#9786018) Homepage
    My guess is a simple one. Who buys SCSI stuff? It's expensive, so it's mostly businesses and others who need high reliability (which is one of the major reasons SCSI is more expensive). Now while normal people can "afford" to lose 250, 300, or more GB of data, for a business that could be worth billions of dollars.

    The solution to this reliability problem is RAID. There are two RAID levels that are ideal here (there are more, but this is a simple explanation). There is 1, which is just a mirror; and 5, which is striping with parity.

    With RAID 1, if you have 500 GB of data, you would need two 500 GB drives. You lose 50% of the capacity you buy. The other option is RAID 5, where you lose 1/(number of disks) of the raw capacity. So you could store 500 GB of data on six 100 GB disks. This way you've only lost 100 GB of storage to redundancy as opposed to 500 GB.

    So when businesses want to store large amounts of data, it's more economical to use many smaller drives than two large drives. Even if you don't need the redundancy (for example, the disk is just being used for temporary storage while working on large digital picture or video files), it's still better to use many small disks. While a single 500 GB drive will only go so fast (let's just say 60 MB/s sustained), by using a RAID you can multiply that. So by using five 100 GB drives, you might be able to sustain 300 MB/s (assuming the bus can keep up, etc). Even if you only scale at 50% (that would be 150 MB/s), that's still 2 to 3 times faster than a single drive. That performance can save you money.
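
    To make the arithmetic concrete, here is a minimal Python sketch (the drive sizes and the 60 MB/s per-drive figure are the illustrative assumptions used above, not specs):

        # Usable capacity and naive throughput scaling for RAID 1 vs. RAID 5.

        def raid1_usable_gb(n_drives, drive_gb):
            # RAID 1 mirrors everything, so a two-way mirror keeps half the raw space.
            return n_drives * drive_gb / 2

        def raid5_usable_gb(n_drives, drive_gb):
            # RAID 5 gives up one drive's worth of capacity to parity.
            return (n_drives - 1) * drive_gb

        def striped_read_mb_s(n_drives, per_drive_mb_s, scaling=1.0):
            # Idealized striped throughput; real arrays scale less than linearly
            # (bus limits, controller overhead), hence the scaling factor.
            return n_drives * per_drive_mb_s * scaling

        print(raid1_usable_gb(2, 500))        # 500.0 GB usable out of 1000 GB raw
        print(raid5_usable_gb(6, 100))        # 500 GB usable out of 600 GB raw
        print(striped_read_mb_s(5, 60))       # 300.0 MB/s if scaling were perfect
        print(striped_read_mb_s(5, 60, 0.5))  # 150.0 MB/s at 50% scaling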

    So, if you can afford it, you can get much better performance or economics from using multiple smaller drives than from one large one.

    That's my theory/understanding. Begin tearing it apart!

    • Re:My Guess (Score:4, Informative)

      by Robbat2 ( 148889 ) on Friday July 23, 2004 @09:54PM (#9786222) Homepage Journal
      RAID is a wonderful concept, but work needs to be done on points of failure other than the drives.

      Most decent external RAID units today have dual hot-swappable power supplies and fans. However, there is still only a single backplane and RAID controller board (IBM PowerPC chips are very popular for this) involved. I've had both a backplane and a controller fail on me in the span of 2 years, in both cases taking all the data with them. These units were 6x200GB IDE drives, 1TB usable, 1 parity drive, and we had several cold spares available to hot-swap in on a failure.

      Sure, I agree that statistically your drives, fans and power supplies are much more likely to fail than the backplane or controller, but it can still happen.

      Never forget the importance of having backups, and make sure you can recover from them as part of implementing your backup solution. (1 month rotation of Ultrium tapes here.)

      There is a solution to the above, but it's very costly, and that's RAID over distributed storage (iSCSI and the like).
      • I guess you've never used an IBM 2105 Enterprise Storage Server. Totally redundant, dual controllers, SSA loops to avoid failures in SCSI cables. You *can* get fully redundant kit, but as it's aimed at mainframe and high end people, it's about half as much again as, say, a 7133.
      • Re:My Guess (Score:3, Informative)

        by innosent ( 618233 )
        Higher-end controller cards (read: NOT IDE) can share a bus with another controller, allowing two systems (with a controller in each) to share access to a single (external) array, with dual power supplies, and technologies like SSA allow even the cabling to be redundant.

        Of course, by the time you spent the money on this type of setup, you could probably have purchased another complete machine, with another array in it, and used software to handle redundancy and updates to the array. We did this with our S
      • There's really not much you can do about having only one backplane. Most RAID manufacturers deal with this by not putting stuff that really matters on the backplane; backplanes are typically just pass-through with maybe some passive filters.

        However, every RAID unit I've dealt with has at least had a slot for a redundant controller. Of course, these are SCSI RAIDs. I guess now you know what the price difference is all about.

        That said, unless there's something extremely screwed up about the design of your RAIDs, t
        • If the controller goes really wonky and starts writing bad data to the drives, then it doesn't matter if the drives are in good shape or not, if the data on them is wrong.

          For example, if you send a block of 00000 to your RAID array, and the controller barfs and actually tells the drives to write 00100, then it doesn't matter if all your drives are okay, the data on them is actually wrong, meaning that your controller corrupted your data.

          • That's true, but then that isn't a problem that's going to be solved with redundancy[1], which was what the grandparent was griping about the lack of.

            Also, while this is theoretically possible, I've never seen it happen. I troubleshoot RAIDs for a living, and I probably average about a controller a day between my various fixtures (several different chassis from several different manufacturers) over the last 2 years, so I don't think that's due to lack of exposure. In my experience, controllers either work
            • HP high end servers DO in fact use redundant systems to check every calculation and every data transaction. Every calculation is done by two CPUs, and if the results aren't identical it is run again; if they still aren't the same, both CPUs are taken offline and the thread is migrated to another part of the system. Likewise, all data paths are redundant and ECC'd. This costs LOTS of money, but if you are paranoid about data corruption/loss then that's what you pay for. Btw controllers going crazy isn't all
              • HP high end servers DO in fact use redundant systems to check every calculation and every data transaction. Every calculation is done by two CPUs, and if the results aren't identical it is run again; if they still aren't the same, both CPUs are taken offline and the thread is migrated to another part of the system. Likewise, all data paths are redundant and ECC'd. This costs LOTS of money, but if you are paranoid about data corruption/loss then that's what you pay for.

                Interesting. That's a little beyond t
      • See DRBD. It's 2 computers that mirror each other in a RAID 1 fashion...
    • Re:My Guess (Score:3, Informative)

      by Alphanos ( 596595 )
      I think it is more likely that high rotation speed doesn't combine easily with high capacity. If you are spinning the disks more than twice as quickly as standard ATA drives (15k vs. 7200 rpm), then having the same data storage density isn't going to work without new technological developments. In other words, when the disk reading head moves at twice the speed, the bits need to be roughly twice as large. This is why the first CD drives didn't read at 52x: they needed time to develop the technology that
    • Re:My Guess (Score:3, Interesting)

      by lylonius ( 20917 )
      You make good arguments, but reliability and storage capacities are only two of the issues involved.

      The largest benefit is performance. Gamers invest so much in their system bus, CPU, and memory, but disk I/O is 5 orders of magnitude slower. If performance is key, a small investment in SCSI improves disk-intensive apps considerably.

      1. IDE requires CPU cycles. SCSI buses have embedded ICs that handle queuing of data and such, freeing the CPU to perform other tasks.

      2. IDE channels are shared. Most IDE
      • Re:My Guess (Score:4, Insightful)

        by innosent ( 618233 ) <jmdority.gmail@com> on Saturday July 24, 2004 @02:03AM (#9787448)
        Gamers? How many gamers really NEED large (>147GB) disks? SCSI drives are not produced for gamers, they are produced for business workstations and servers. I agree with you about SCSI being better, but the reasons you gave don't apply to all IDE controllers (number 1), and certainly not to all SATA controllers/disks (all reasons). A GOOD (i.e. usually not onboard, probably something from 3ware, etc.) SATA controller has a processor, command-queueing, separate, bi-directional channels for each device, and SATA connectors are designed for hot swapping (better than SCA actually, even to the point of connections being made in sequence due to staggered pins). I've got a 12-disk SATA RAID-5 array at work, and don't have any of the problems you listed, because I hand-picked the hardware to avoid those (and other) limitations. If you really want your games to run as fast as possible, then it's going to cost a few thousand dollars anyways, and if you really need that much space, maybe it'd be a good idea to buy a decent controller.
      • Re:My Guess (Score:4, Informative)

        by megabeck42 ( 45659 ) on Saturday July 24, 2004 @02:43AM (#9787580)
        1. No Longer True.

        This has, in large part, disappeared with the advent of UDMA. It was true that IDE was very cycle-expensive a decade ago, when IDE really meant Integrated Drive Electronics. The IDE "interface" was just a set of tri-state latches, and the CPU was responsible for pushing and reading every single byte. If you ever look at the pinout of an IDE cable, it's no surprise that it very closely resembles the ISA bus. Another historical note: ATA means AT Attachment, because the first set of IDE drives that were really popular were designed to attach to the IBM PC AT (the successor of sorts to the IBM PC XT) bus.

        Now, processors queue DMA requests in and out of the drive, and the "interface" really has grown up to be more of a "controller." They're not as complex as SCSI adapters, of course, but then again, SCSI is a much more complex signaling system.

        2. No Longer True.

        What you're trying to describe is called "bus disconnect." I'm not sure which side of the bus was responsible; the idea is that while a drive was processing a command, the bus was locked until the command finished.

        Note that the first version of SCSI did not have disconnect either. However, with many more devices sharing the bus, contention was severe enough, especially with slow devices like tape drives and CD-ROMs, that it became a necessity rather than just a feature.

        SCSI supports disconnection as well as Tagged Command Queueing. TCQ allows the host to issue multiple outstanding commands to the device. The device is allowed to complete these commands out of order. Many drives will reorder the requests to take advantage of the head movement.

        Recent revisions of IDE include support for TCQ.

        I will add, however, that it is still worthwhile to have only one device per channel. Compare this to putting more than two 15K drives on a U160 channel.

        3. Not even remotely true. SCSI is a parallel bus, much like IDE, ISA, or half a dozen others. It's only possible for one device to drive the bus at one time. This is clearly evident since a few of the lines in the SCSI cable are used to indicate the target of the bus transaction. There is only one set of these signals; therefore, there can only be one target.

        Also, the electrical interface for Serial ATA is designed with hot-swap in mind.

        While your first suggestion is accurate, disk I/O is very slow and SCSI equipment tends to be of better quality than IDE hardware. SCSI drives with higher spindle speeds have much lower latency, which can make a dramatic difference compared to a similar computer with IDE drives. However, that difference is no fault of IDE. I would encourage you, in future, to be more accurate with your information.

        If you believe I have written inaccurately, I would recommend reading the draft documents from INCITS T13, the ATA technical committee.
        • While perhaps IDE SHOULD be as fast as SCSI, it never is. While it SHOULD use no more CPU than SCSI, it doesn't seem to work that way.
        • You, my friend, know your shit.

          I suggest you contribute this (and more) to Wikipedia [wikipedia.org].
    • The first problem with that is that SCSI drives usually carry a 5-year warranty, as opposed to the 1-year warranty IDE drives carry.

      The second is that in terms of performance there is reduced efficiency for every drive added to a RAID.

      The third is that controllers able to handle an increased number of drives are orders of magnitude more expensive, and you can only have 15 devices on a SCSI chain. With 146 GB drives that gives you a max of 2.1 TB on a chain; with 250 GB drives that becomes 3.7 TB on a chain. Yeah, it's only a terabyte, a terabyte is nothing, right?
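
      To spell the chain math out, a tiny Python sketch (treating 1 TB as 1000 GB, as above; the 15-device limit is the figure quoted in this comment):

          # Raw capacity of a single SCSI chain at a given drive size.
          def chain_capacity_tb(devices, drive_gb):
              return devices * drive_gb / 1000.0

          print(chain_capacity_tb(15, 146))  # ~2.19 TB
          print(chain_capacity_tb(15, 250))  # ~3.75 TB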
      • Hyundai's have great warranties, too, but you still have to walk if it's in the shop, whether it's covered by a warranty or not.

        That is all, the rest is absolutely correct, except to mention that if you're running a Windows OS, the maximum volume size is 2.4TB anyway.
        • "Hyundai's have great warranties, too, but you still have to walk if it's in the shop, whether it's covered by a warranty or not."

          What sort of idiot runs anything that needs multiple TB of storage and doesn't keep extra drives on hand? Also in a raid 5 configuration, you haven't lost data, and the raid isn't down simply because a drive has failed.

          "That is all, the rest is absolutely correct, except to mention that if you're running a Windows OS, the maximum volume size is 2.4TB anyway."

          True that, althoug
          • True, but if Hyundai made hard drives, chances are they would all fail at the same time (about the computer equivalent of 24,000 miles), and it would be down. Besides, replacing drives costs money too, warranty or not, because it takes up your time, especially when UPS loses the replacements, crushes them, and then delivers them to the wrong address (at least, that's what they do with my replacement DSL modems).
            • "True, but if Hyundai made hard drives, chances are they would all fail at the same time (about the computer equivalent of 24,000 miles),"

              Maybe; fortunately, with hard drives it generally doesn't work that way :) Some will be DOA, some will drop off in two months, some in a year, some in two, some in ten. Depends on the conditions, the drives of course, the luck of the draw, whether the groundhog saw its shadow, that sort of thing.

              "Besides, replacing drives costs money too, warranty or not, because it ta
              • "It can, but most of the guys doing this are on salary, and would be reading slashdot getting paid if they weren't replacing a drive and getting paid."

                Very true, except for the UPS issue. Seriously, I had 3 modems coming for some remote offices last week, one was lost, crushed, and delivered to the wrong address, and the other two were two days late.
      • The third is that controllers able to handle an increased number of drives are orders of magnitude more expensive, and you can only have 15 devices on a SCSI chain. With 146 GB drives that gives you a max of 2.1 TB on a chain; with 250 GB drives that becomes 3.7 TB on a chain. Yeah, it's only a terabyte, a terabyte is nothing, right?

        If you're only worried about how much data you can put on a chain, SCSI has a two-level addressing scheme. Each 'target' (usually a drive) can have up to 8 or 15 drives on it... It's n

  • Would you put effort into product development when you were already spending money on Serial Attached SCSI?

    Well, the storage people do not think it wise to spend the money...

    iSCSI and SAS are good things!
    (pity there is not a Mac OS X driver for iSCSI...)

    regards

    John Jones
  • Could it be that the various manufacturers have a large stock of the smaller drives that they're trying to get rid of before putting larger ones to market?

    Maybe it is due to the fact that SCSI storage has typically doubled in size... 9.1, 18.2, 36.4, 72.8, 145.6... Could it be that they're currently testing 291.2GB disks?

    My $0.02.
  • THE ANSWER (Score:5, Informative)

    by icandodat ( 799666 ) on Friday July 23, 2004 @09:47PM (#9786176)
    This info is from an IBM magnetic storage engineer. The reason is that the IDE market is a retail home market and very competitive. He said, "If an IDE manufacturer can save 5 cents on a component he'll buy the cheaper one." The time from R&D to store shelf is less than a year. SCSI drives, on the other hand, are primarily for servers; they have expensive components and are tested for a long time before they reach the market. The time from R&D to store shelf is about three years for SCSI. What was the biggest drive you could buy three years ago (IDE)? That's right: about the same size as the biggest SCSI drive today. So... what does this mean? IDE drives suck; they are cheap; they are the zip-lock bag of the storage industry. If you are going to grandma's with your data that's OK, but if it's going to the moon... buy Tupperware (SCSI).
    • Re:THE ANSWER (Score:3, Interesting)

      by Anonymous Coward
      I always find comments like this amusing. It's just not true.

      Not long ago I had to set up a several-terabyte array (around 4 TB) using SCSI drives. We were constantly replacing the damn things. And this was supposedly quality hardware from Sun. Now, with as many drives as we had, there were bound to be failures. Eventually the failure rate stabilized at about 1 or 2 drives per month. A rate which continues to this day, some 3 years later.

      Previous to that array I had helped set up a similar system usi
      • To my surprise, I have found the same results. I've had 5 SCSI drive failures and 1 IDE drive failure in some servers I maintain over the last 2 years. That absolutely shocks me from what I understand about the hardware. However, I do have some extremely high volume servers that have lasted 4 years - and they are still using the same SCSI drives since day 1. I continue to have faith in those servers while running RAID 5 like they are. I just think it's luck, or maybe the SCSI drives spin faster (10k vs
      • Re:THE ANSWER (Score:3, Interesting)

        by Kevin Burtch ( 13372 )

        I'm very curious which Sun array this is, and which drives you are using.

        I've worked in the Sun market for well over a decade, and I haven't seen failure rates like you're describing since the old Seagate 2.9G 5-1/4" full-height drives they used to have in their "Mass Storage" cabinets (the ones that looked exactly like a SPARCcenter 2000)... and that was only after the drives were out of production for a few YEARS (all replacements were refurbs).

        My guess is you have serious environmental issues... heat/h
        • I call BS.

          You really should check the Seagate 18.2 GB FC-AL disks. They're crap. The firmware is crap, the drive is crap, and the failure rate is WAY WAY WAY too high.

          I can't tell you how many times I've seen an entire loop on an A5200 go offline because a single disk was failing.

          Piece of ..... drives. Not to mention the multi-initiator bug, where the drive locks up both the A and B bus. That does WONDERS for clustered environments.

          The active system can't access the disks, so it attempts a failover.
      • Re:THE ANSWER (Score:3, Interesting)

        by Phillup ( 317168 )
        Were the IDE and SCSI drives rotating at the same speed?

    • I could see it if SCSI was just behind IDE, but I've seen 146 GB drives available for years. In the same time, IDE has gone from 160 GB to 300+ GB.
  • They do exist! (Score:5, Informative)

    by MarcQuadra ( 129430 ) * on Friday July 23, 2004 @09:53PM (#9786216)
    Hitachi/IBM produce the 300GB UltraStar 10K300, which is a mighty drive if I've ever seen one.

    The real reason is that when you move up to higher rotational speeds to reduce latency, you have to reduce density relative to the motion of the disk under the head, so a 10K drive can generally pack only 60%-ish as much data per inch as a 7200 RPM drive.

    The same can be seen in 15K disks, which are much lower density than their 10K counterparts. The 15K platters are smaller too, to keep them from flying apart.

    Do you remember when the 5400RPM disks had higher capacity than the 7200 ones? I sure do, it was for the same reason.

    Until the latency of the read-write head improves this will be the case.
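
    A rough way to see the trade-off, as a Python sketch (the simple inverse-density model and the latency figures are illustrative assumptions, not drive specs):

        # Average rotational latency falls with RPM, but if the read channel's data
        # rate is the limit, linear bit density has to fall roughly in proportion
        # as the media moves faster under the head.

        def avg_rotational_latency_ms(rpm):
            # On average the head waits half a revolution for the target sector.
            return 0.5 * 60000.0 / rpm

        def relative_density(rpm, baseline_rpm=7200):
            return baseline_rpm / rpm

        for rpm in (5400, 7200, 10000, 15000):
            print(rpm, round(avg_rotational_latency_ms(rpm), 2), "ms latency,",
                  round(relative_density(rpm), 2), "x relative density")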
    • Re:They do exist! (Score:4, Insightful)

      by Pegasus ( 13291 ) on Saturday July 24, 2004 @06:51AM (#9788104) Homepage
      Heck, then give me 3600 RPM disks with transfer speeds of 20 MB/s and a capacity of 2 TB! I'd gladly have a dozen of them to put my DVD collection on.

      I've heard some things about the new Hitachi 400 GB drive being optimized for TV set-top boxes. Does that mean that it's optimized for linear reads/writes? If so, why did they not decrease the RPM in order to gain more capacity?
      • I think you'll be seeing this sort of thing once the 64-bit paradigm shift is complete. The entire system, disks and all, will be mapped out to the address space. All you have to do is load the system with RAM, I'm talking about 64 GB of RAM, and have the huge storage disk sync with the RAM every now and then. All of your OS and apps, and most of your recent documents, will reside in RAM; the rest will shuffle off to the disk when it gets 'cold'.
  • But it's probably the same reason we don't have large-capacity/high-speed SATA drives either.
  • I would think that SCSI has been shifted toward thin client servers. Gigabit Ethernet is fast as it stands, but extra speed calls for faster drives and faster disk access.
  • by strabo ( 58457 ) on Friday July 23, 2004 @11:32PM (#9786785) Homepage
    ...provides more points of failure...

    Yeah, that's a problem. It's much better to reduce potential points of failure... preferably down to a single point of failure.

    Or is that not what you meant?

  • by TheLink ( 130905 ) on Friday July 23, 2004 @11:47PM (#9786854) Journal
    Drive speeds haven't really gone up tremendously. Still too slow.

    Imagine you have a 1TB drive but are stuck at a 100MB/sec max sequential transfer rate. It takes you 2.7 hours to read/write the entire drive. And that's for _sequential_ access. It gets ugly for random seeks.

    A similar speed 10TB drive will take you more than a day (27+ hours) to read sequentially.

    Before the point where it takes too long to read an entire single drive you might as well start using multiple drives to add capacity rather than having bigger drives.

    Taking too long is subjective, but I'd say this: how long can you make your boss/customer wait whilst you are restoring an entire disk image from backup? 27 hours or 2.7 hours? or 25 minutes?

    So 70GB would be about the limit if you have impatient users and bosses.

    Larger capacities are OK if they are to hold data that aren't important enough to be backed up, and don't require masses of data to be available quickly. Or you are doing mirroring and read speeds are important but write speeds aren't as important (but remember that restoring from backup = writing ;) ).
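
    For reference, the arithmetic behind those figures as a small Python sketch (the 100 MB/s sustained rate is the assumption above, not a measurement):

        # Hours to read or write an entire drive end to end at a fixed sequential rate.
        def full_pass_hours(capacity_gb, rate_mb_s):
            return capacity_gb * 1000.0 / rate_mb_s / 3600.0

        print(round(full_pass_hours(1000, 100), 1))   # ~2.8 hours for a 1 TB drive
        print(round(full_pass_hours(10000, 100), 1))  # ~27.8 hours for a 10 TB drive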

    • Just because you increase drive capacity doesn't mean you increase drive utilization.

      If you have 20 GB of data on your 100 GB drive and then ghost that to a 1 TB drive, you're copying 20 GB of data. If you then ghost that 1 TB drive to a 10 TB drive, you're still only copying 20 GB.
    • by Cecil ( 37810 )
      You forget the important fact that as the drive DENSITY increases, so does the amount of data read per revolution of the platters. Bigger drive, faster transfer rate. Unless you're talking about limits on things like ATA, but those are being replaced and upgraded as needed.
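
      Roughly, sustained rate is data-per-track times revolutions per second. A small Python sketch with made-up track sizes (illustrative assumptions, not real drive geometry):

          # Doubling track density at the same RPM doubles the sustained transfer rate.
          def sustained_mb_s(sectors_per_track, rpm, sector_bytes=512):
              return sectors_per_track * sector_bytes * (rpm / 60.0) / 1e6

          print(round(sustained_mb_s(600, 7200), 1))   # ~36.9 MB/s
          print(round(sustained_mb_s(1200, 7200), 1))  # ~73.7 MB/s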
      • "You forget the important fact that as the drive DENSITY increases, so does the amount of data read per revolution of the platters"

        The _evidence_ of actual transfer rates is more important than your "important fact".

        This might be helpful [storagereview.com]. Select WB99 transfer rate - Begin.

        If you have evidence of significantly faster single drives do let me know.
  • If you are actually going to use those drives in a server (guess where they are used, for the most part), you need them to be fast. An array with fewer larger drives is much slower than one with lots of small drives.
    • Only to a point. Although you can handle more simultaneous small requests faster with more drives, you can only transfer the data from those requests at 320mb/s max, and 3 SCSI drives in RAID 5 can provide that sustained.

      And there are always those who want lots of larger drives.
      • Although you can handle more simultaneous small requests faster with more drives

        That's exactly the point. What do you think limits the bandwidth of, say, a database?

        you can only transfer the data from those requests at 320mb/s max

        I am pretty sure fiber channel is faster than that. That's what fast arrays are hooked up with, anyway. These are generally independent boxes, with their own highly sophisticated and intelligent controller and a really fat pipe.

        And there are always those who wants lots o
        • "I am pretty sure fiber channel is faster than that. That's what fast arrays are hooked up with, anyway. These are generally independent boxes, with their own highly sophisticated and intelligent controller and a really fat pipe."

          That is a little different, but there you're still maxed at 1gb/s tops, because that is the fastest network link you're looking at. Since the drives can each put out 320mb/s, and in that case would be doing so in parallel, you still can't significantly improve throughput beyond 4 drives
  • The bottom line is nobody wants it. Right now, data storage is so abundant that having twice or four times the amount of data per disk won't solve any problems. We've hit the stage in my line of work that long-term data storage is a non-issue. Suck as much data as you want and store it anywhere, we're never going to run out.
  • Reliability (Score:4, Interesting)

    by MrResistor ( 120588 ) <peterahoff.gmail@com> on Saturday July 24, 2004 @04:38AM (#9787858) Homepage
    My company was offering 180GB SCSI drives in one of our RAID products, but we had to stop due to reliability issues. There was a huge difference in reliability between the 180GB and 146GB drives (which we still offer).

    • Also, the 180GB SCSI drives were half-height, while the 146GB SCSI drives are low profile, so you can fit more of them into the same space.

      I haven't seen any of the roadmaps recently, but it has been a while since the 146GB drives came out, so it's probably time for a bump in the next 3-6 months.
  • The reason is speed. (Score:3, Informative)

    by Dirttorpedo ( 153764 ) <wirtzsNO@SPAMbitstream.net> on Saturday July 24, 2004 @07:05PM (#9791201)
    I will try to avoid the SCSI vs IDE flame war.

    1) RPM. It is easier to spin a 2.5" platter at 15K than a 3.5" platter. (Someone else can figure out the additional energy, but I would guess more than double the juice assuming uniform density.)

    2) IOs per second. In large arrays the driving factor is not necessarily throughput but IOs per second, which leads to more transactions per second for your server farm. So more spindles = more IOs per second.

    3) Access time. The bigger the drive, the longer it takes the drive's processor to position the head, increasing access times and decreasing IOs per second. I know it's a trivial amount of time, but it adds up over millions of IOs. (A rough sketch of points 2 and 3 follows this list.)

    4) Error correction. I cannot speak for IDE, but each block on a SCSI drive has an Error Correction Code (ECC) which helps the drive recover from read errors. Again, minimal.

    5) Cynical answer. Smaller drives mean your drive company sells more product to meet a given capacity.
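
    The rough sketch for points 2 and 3, in Python (the seek times are illustrative assumptions, not vendor figures):

        # Random IOs per second per spindle ~ 1 / (average seek + average rotational latency).
        def est_iops(avg_seek_ms, rpm):
            rotational_ms = 0.5 * 60000.0 / rpm  # wait half a revolution on average
            return 1000.0 / (avg_seek_ms + rotational_ms)

        print(round(est_iops(8.5, 7200)))   # ~79 IOs/s for a 7200 RPM desktop-class disk
        print(round(est_iops(3.5, 15000)))  # ~182 IOs/s for a 15K server-class disk
        # More spindles multiply the aggregate random IOs/s, which is why arrays
        # favor many smaller, faster drives.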

    Educational point: SCSI is a protocol, like IP or TCP. It can be tunneled through or carried by anything.
    SPI - SCSI Parallel Interface (old school).
    FCP - Fibre Channel Protocol.
    SAS - Serial Attached SCSI. SAS can also tunnel SATA.
    iSCSI - SCSI in TCP (not raw Ethernet).
    SBP - Serial Bus Protocol (FireWire).
    ATAPI - yep, SCSI over IDE, so your CD-ROM works.
    Many others.

  • SCSI is generally more reliable, that's part of what you are paying for. But some people have figured out how to get great reliability with still lower costs. For example, Google is based upon inexpensive and easily replaceable hardware. They have so much and such a robust system that hardware failure is not a problem.
  • Capacity vs. Speed (Score:3, Interesting)

    by Detritus ( 11846 ) on Tuesday July 27, 2004 @01:34AM (#9808741) Homepage
    I've read of companies that bought a bunch of SCSI drives and then set them up to only use half their normal capacity, by throwing away half the cylinders. This reduced the average access time of the drives. I'm not sure if they reconfigured the drives in-house or if the manufacturer did it for them.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...