Technology

Single IDE vs Dual IDE?

jrsimmons asks: "I'm running performance tests on IDE interface configurations for my company. I've discovered that disk-to-disk I/O is significantly faster (in the realm of 30%-40%) when only a single IDE interface is active versus when two IDE interfaces are active. This is significant because our servers provide Point-of-Sale availability for registers in the retail environment, which depends heavily on disk I/O performance for efficiency. I have run the tests under both Windows and our retail OS (sorry, no Linux) with similar results. What are some possible explanations for the detrimental effect the second active IDE controller has on disk I/O speed?" Has anyone measured this deficiency on Linux and other Unices?
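To make the comparison concrete, here is a minimal sketch of the kind of disk-to-disk test described above, assuming two drives mounted at hypothetical paths (it is not the submitter's actual benchmark). Run it once with both drives on a single IDE channel and again with each drive as master on its own channel, and use a file well above RAM size so the OS cache doesn't mask the difference.

```python
import os
import time

# Hypothetical mount points -- substitute the two drives under test.
SRC = "/mnt/disk1/testfile.bin"
DST = "/mnt/disk2/testfile.bin"
SIZE_MB = 512          # total amount of data to copy
CHUNK = 1024 * 1024    # copy in 1 MB chunks

def make_test_file(path, size_mb):
    """Write a throwaway file of the requested size onto the source drive."""
    block = os.urandom(CHUNK)
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())

def timed_copy(src, dst):
    """Copy src to dst in fixed-size chunks and return the elapsed seconds."""
    start = time.time()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(CHUNK)
            if not chunk:
                break
            fout.write(chunk)
        fout.flush()
        os.fsync(fout.fileno())
    return time.time() - start

if __name__ == "__main__":
    make_test_file(SRC, SIZE_MB)
    elapsed = timed_copy(SRC, DST)
    print("copied %d MB in %.1f s (%.1f MB/s)" % (SIZE_MB, elapsed, SIZE_MB / elapsed))
```

The fsync calls matter: without them the copy can finish while most of the data is still sitting in the write cache, and the two configurations will look identical.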
  • Sounds strange (Score:5, Interesting)

    by Outland Traveller ( 12138 ) on Monday January 21, 2002 @04:55PM (#2878182)
    Your results sound strange to me.

    For two disks, you should get the best results with both disks configured as masters on two different IDE buses.

    If you're not seeing that, I'd check that you have the correct drivers/optimizations for your IDE chipset enabled. You also might want to check IRQ allocation to make sure there are no strange conflicts. Check your Windows (NT/2000) event log to make sure there are no strange IDE timeouts indicating hardware issues. If you still see the problem, you should try your test on a different hardware platform (motherboard/controller combo).

    From your description, however, you might want to go with a RAID technology such as RAID 1, RAID 5, or RAID 1+0. It will offer much better redundancy and possibly improved performance.
    • Re:Sounds strange (Score:4, Informative)

      by rcw-work ( 30090 ) on Monday January 21, 2002 @06:42PM (#2878828)
      Your results sound strange to me.

      Not to me. I've seen the same 30-40% increase copying data between two disks on the same IDE chain as opposed to the exact same two disks on different IDE chains ever since UDMA support came out.

      • by Manic Miner ( 81246 ) on Tuesday January 22, 2002 @05:02AM (#2881009) Homepage

        I believe this has to do with the UDMA specs for cable length, connectors, etc. I recently had a lot of trouble with a UDMA100 Maxtor drive. They got back to me and informed me that UDMA wouldn't be guaranteed to even run at UDMA100 (mode 5?), and even if the drive did detect at UDMA100, the performance would be much worse.

        Having finally got my drive detecting as UDMA100, I can totally agree with the performance issues, under Windows 2000 at any rate. My slave drive gets on average 30Mb/sec when running a transfer rate test on top of NTFS. My master drive gets on average 60Mb/sec on the same test.

        If you read the installation instructions for all UDMA100 drives (well, all the ones I've seen ;) ), they say to make sure the drive is attached to the black connector on the cable for best performance. It looks like UDMA100 just isn't designed to run both drives on the controller at high speed.

  • by rogerl ( 143996 )
    For a server, SCSI is the only way to go.
    • Yup. Horses for courses. And a basic SCSI card with 7 basic SCSI drives isn't all THAT expensive. And it will wind up VERY zippy compared to an IDE solution, especially one with a weak CPU.
      • Isn't this an obsolete belief? With ATA/133 and UDMA-4, the only advantage I know of for SCSI comes from tagged command queueing. If you have a bofunk SCSI controller, you don't even get that.

        I know that SCSI used to be better. I just don't know any reason to believe that it still is.
        • Re:SCSI (Score:3, Informative)

          One IDE drive vs one SCSI drive, you're right; IDE is the way to go. But with multiple drives, SCSI spanks all. I often find that the people who swear up and down otherwise are the people who can't afford SCSI. :-)
          • Yeah, Right [smythco.com]

            Reads at >80MB/sec, writes at ~25MB/sec. Cost 1/3 of what a SCSI equivalent would cost.
            • I read phrases like "In addition to hot spare, the 3ware cards support hot swap of IDE drives, but this has not been tested yet." and I get really really scared. Also, 5400 RPM drives?
              • Hot swap should not be necessary, since with the money we saved, we were able to build two completely redundant systems and STILL come in at less cost than SCSI.

                5400 RPM is to save on power supplies. 7200 RPM drives would have pulled a lot more on startup, because yes, there is no way in IDE to delay spin-up.

                For future expansion, once we max out these current systems, we will use external IDE-SCSI chassis: they take ATA drives, hardware-RAID them, and then connect to the host computer via SCSI. We can add these to infinity and save tons of money by never buying a single SCSI drive.
        • SCSI supports concurrent transfers, which is the OTHER reason it gives IDE the smackdown in heavy load situations.
          • Agreed. I never put two IDE HDs on the same (non-RAID) controller when I care about performance. But I'd rather have 2 IDEs on two onboard controllers than 2 SCSI drives on one controller.
  • by His name cannot be s ( 16831 ) on Monday January 21, 2002 @05:23PM (#2878359) Journal
    What exactly do you mean by active? Are there two drives on one IDE interface? Two drives, one on each interface? One drive, and both interfaces turned on in the BIOS?

    I'll take for granted that you actually have a good way of measuring drive performance, and it's not just a 'feeling'.

    What motherboard/chipset/PCs are you talking about here? Have you replicated the results on dissimilar hardware?

    What was the significance of the second active IDE controller? Were you moving data to two drives?

    And finally, why is your system sooooo dependent on disk I/O? If this is the case, mayhap you need to re-engineer the app somewhat to balance out the disk I/O aspect. If it's actually CONSTANTLY saturating one or two IDE channels, quit being a complete twit and move to SCSI, where this isn't a problem.

    If you actually want help on this, you had better provide a heck of a lot more information up front.

    G
  • Try multiple IDE controllers. A few factors come to mind regarding performance. The first is the speed of your CPU(s). Unlike SCSI drives, IDE drives tend to bog down your CPU - I would try throwing a heavy load at the drives while keeping an eye on CPU utilization (a quick way to watch this is sketched below). Another is bus speed. You know the saying, "a chain is only as strong as its weakest link." If your IDE controller is plugged into the *shared* PCI bus, see what else is on the bus; you may be able to take something off - like a PCI video card, etc. If the controller is embedded in the motherboard then this is probably not the issue. Also, try comparing the throughput of the bus with the throughput of the IDE controller(s). One final thing - on the OS level, check read cache, DMA settings, etc.
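A minimal sketch of the CPU check suggested above, assuming a Linux box and a placeholder path to a large file (on Windows, Task Manager or perfmon shows the same thing). It samples /proc/stat around a sequential read loop and reports how busy the CPU was:

```python
import time

def cpu_times():
    """Return (busy, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    total = sum(fields)
    idle = fields[3]   # 4th numeric field is idle; iowait (if present) counts as busy here
    return total - idle, total

def hammer_disk(path, mb=256):
    """Generate a sequential read load by streaming the file in 1 MB chunks."""
    with open(path, "rb") as f:
        for _ in range(mb):
            if not f.read(1024 * 1024):
                break

if __name__ == "__main__":
    TEST_FILE = "/mnt/disk1/testfile.bin"   # placeholder: any large file on the drive under test
    busy0, total0 = cpu_times()
    t0 = time.time()
    hammer_disk(TEST_FILE)
    busy1, total1 = cpu_times()
    pct = 100.0 * (busy1 - busy0) / max(total1 - total0, 1)
    print("CPU busy during the read: %.1f%% over %.1f s" % (pct, time.time() - t0))
```

If the busy figure jumps toward 100% during a plain sequential read, the drive is very likely running in PIO mode rather than DMA.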
  • Ummm (Score:2, Interesting)

    by haplo21112 ( 184264 )
    Just a wild guess, but...
    If they share the same PCI bus (I am assuming it's not an ISA IDE bus), then you have twice the disk I/O flowing through the same limited bandwidth... this is bound to show some performance degradation.
    • Seconded (Score:1, Interesting)

      by morbid ( 4258 )
      This is also my understanding: contention of the PCI bus. Some systems have multiple PCI busses (sorry, can't name one off the top of my head), and if you only use one peripheral on each bus, it gets the whole bandwidth to itself. AGP is essentially a dedicated PCI bus (although double- or quad-pumped) to the graphics processor/memory. Therefore, it has no contention and 4x the throughput (potentially).
      SCSI is a much better option for fast disk access, especially if you stripe the disks. I've seen a 100% performance boost (i.e. a doubling of speed) on a 12-hour job by employing disk striping.
      • This is also my understanding: contention of the PCI bus. . . . SCSI is a much better option for fast disk access

        What? How will using SCSI sidestep the PCI bus contention issue?
  • My "benchmark" (Score:4, Informative)

    by Per Wigren ( 5315 ) on Monday January 21, 2002 @05:42PM (#2878487) Homepage
    I use software RAID under Linux (striping only).
    I get almost 100% increase in speed if I have the disks configured as master on two separate controllers instead of master+slave on one.
    • That's because you can only read/write to one drive in a master/slave set at once.

      That becomes a moot point when you have them both as masters on separate channels :) (a quick way to measure this is sketched below)
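One way to see the channel-contention effect these two comments describe is to read from both drives at once and watch the aggregate throughput. A rough sketch, assuming Linux, root access, and placeholder device names (/dev/hda and /dev/hdb would share the primary channel; /dev/hda and /dev/hdc sit on separate channels):

```python
import threading
import time

CHUNK = 1024 * 1024      # read in 1 MB chunks
MB_PER_DRIVE = 128       # amount to stream from each drive

def stream(dev, results):
    """Sequentially read a fixed amount from one device and record the elapsed time."""
    start = time.time()
    with open(dev, "rb") as f:
        for _ in range(MB_PER_DRIVE):
            f.read(CHUNK)
    results[dev] = time.time() - start

if __name__ == "__main__":
    # Swap this list between a same-channel pair and a separate-channel pair.
    devices = ["/dev/hda", "/dev/hdc"]
    results = {}
    threads = [threading.Thread(target=stream, args=(d, results)) for d in devices]
    t0 = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    wall = time.time() - t0
    total_mb = MB_PER_DRIVE * len(devices)
    print("aggregate: %d MB in %.1f s (%.1f MB/s)" % (total_mb, wall, total_mb / wall))
```

On a master/slave pair the aggregate figure tends to collapse toward a single drive's speed, because only one device can own the cable at a time; with each drive as master on its own channel the reads overlap.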
  • by nadie ( 536363 ) on Monday January 21, 2002 @05:49PM (#2878540) Homepage

    If you are running I/O-intensive applications, there is no substitute for SCSI. IDE is still too braindead to do the job effectively with decent interactive, multitasking performance. Don't waste your company's time fiddling with consumer-level hardware in a professional environment.

    How much is your time worth? How much is this application worth to your company? In a professional server, SCSI is not expensive.

  • Windows IDE quirk (Score:2, Informative)

    by mperham ( 301356 )
    I believe Windows only turns DMA on for the first IDE channel by default. If you are transferring from one channel to another, you might be using PIO mode on one channel, and that will definitely slow you down. Go to the properties for your IDE hardware and verify that both channels are using DMA if available.
    • Thanks for the reminder... I forgot to change this after my last MS reinstall... Now my CD burner is actually writing like it should.
    • It does.
      Only problem is Win2k has a problem with DMA mode on onboard controllers...
      http://support.microsoft.com/default.aspx?scid=kb;EN-US;q262448

      ATA66 DMA transfer mode is not supported for the onboard IDE controller.

      Which accounts for the wasted day trying to debug my 2k setup (at home... I didn't have another working PC so I couldn't get to the knowledge base... luckily my Linux disks arrived the next day... and the rest is history - the only reason I keep 2k is for some games and compatibility with Office docs I port to and from work...)
  • I'd guess... (Score:4, Interesting)

    by Polo ( 30659 ) on Monday January 21, 2002 @06:57PM (#2878927) Homepage
    I'd guess one or both of the drives is not in DMA mode. It's probably configured in PIO mode.

    This is a pretty common mistake - if the drive is in PIO mode, all I/O goes through the CPU.
  • Is he referring to two IDE "interfaces" being active meaning master/slave on one IDE controller (say the primary), or one drive as master on the primary and one as master on the secondary?
    I certainly have seen a performance cut when both drives are accessed in the first (master/slave) arrangement, but All Good Techs know this already! If he is referring to the master-on-primary and master-on-secondary arrangement, I would say you have an isolated problem there! I have never seen performance penalties for running drives on separate controllers. In fact, this is why, when you try to burn from CD to CD, the recommended arrangement is one drive on the primary and one drive on the secondary...
  • Buy SCSI (Score:1, Interesting)

    by Anonymous Coward
    No one should be running a performance-critical server on IDE drives. Despite performance improvements in recent years, IDE still sucks when there is more than one device installed; this is why SCSI continues to exist.
  • yeesh... (Score:4, Informative)

    by Anonymous Coward on Monday January 21, 2002 @08:17PM (#2879482)
    ...I write a long reply wherein I smack you with a cluestick in a heavy-handed manner. Let's say that you have a hard disk set as a primary master. Let's add another hard disk to that primary controller as a slave. Then let's add another as the secondary master. And now another would be added as the secondary slave.

    The two devices on the primary controller could not both be transferring data at the same time, so performance would be hit severely if you were reading or writing to both simultaneously, regardless of whether the disks were transferring data between each other or to some other device on the secondary controller.

    When data is transferred between a device on the primary and a device on the secondary controller, there is no performance hit caused by the lack of ability to read or write simultaneously; i.e., you can read or write at the same time if each device is on a different controller, but not on the same controller.

    Now, in your case, what I think you are saying is that you notice poor performance even in this scenario; i.e., transferring data across two controllers. The reason for this is that IDE is severely CPU dependent. What kind of CPU are you running on these machines? IDE's CPU dependence is what makes it STILL a poor substitute for I/O-heavy use when compared with SCSI. SCSI devices are not CPU dependent. As well, you can simultaneously read and write to all devices on the chain. Also, transfer speeds are faster, and the RPM of SCSI drives tends to be faster as well.

    So I would surmise that the reason you are seeing your performance hit is that the CPU is just working twice as hard to transfer data from one controller to the other. If you actually are trying to transfer data across the same controller, i.e., from master to slave or vice versa, you should stop doing that. That's really slow and quite silly. Get SCSI. It's worth it.

  • by mosch ( 204 ) on Monday January 21, 2002 @09:02PM (#2879708) Homepage
    The problem is that you're using IDE, which in case you hadn't heard, sucks. If your company gives a shit about performance, there's this thing called SCSI, which blows IDE away performance-wise. Especially in the multiple-transactions on a single controller department. You should check it out!

    I know some dick will moderate me down because I was rude, and I used the word 'dick' (which turns all the faggot moderators on), but it's true. If you care about speed, IDE is an inappropriate tool. Take it out of your toolbox, and forget about it.

    • I can't believe you guys modded a troll up to 3...

      I guess this [smythco.com] sucks?

      Or This? [accs.com]

      This? [att.com]

      This? [sdsc.edu]

      These? [raidzone.com]

      This stuff? [zero-d.com]

      IDE is here to stay in the high end market, and it's going to kick SCSI's ass. Why pay 3X more per drive for the same HDA with a different interface board?

      This is from the server in the first link above. Note that most of the write bottleneck is caused not by the drives but by the hardware RAID5 controller.

      Benchmark output from the machine 'bedford' with a 1 GB test file, as throughput/%CPU pairs: 24436 K/sec at 11% CPU, 22834 K/sec at 13% CPU, 83890 K/sec at 43% CPU, and 361.2/sec at 2% CPU.
      • Yeah, I prefer for my high-end servers to be limited to two devices per controller, not 15 or 127. SCSI and Fibre Channel are where it's at. And look at all the high-end IDE based storage arrays that companies like EMC are offering... oh wait, there are none. my bad.

        Go ahead, call me a troll, but the only reason IDE is even getting usable is because they're slowly implementing more and more of the SCSI command set. The SCSI interface isn't just different, it's better.

          • Did you follow any of the links? The IDE controllers have 8 ports of dedicated bandwidth, not shared like SCSI. A single SCSI bus is like an Ethernet hub; this is like an Ethernet switch, and the disks cost 1/3rd less.

          Sure, SCSI's better, but only until you look at cost.
          • IDE Raid is like gluing Ford Escorts together, and calling it a Mercedes. Sure it's big, but if you want it to perform, you'll be fucked.

            Yeah, dedicated ATA/100 bandwidth... coming from a drive that's spinning at a maximum of 7200rpm, that doesn't have a large command queue to optimize the transfers, and usually can't effectively do more than one thing at a time. That's great. That'll be... almost half as fast as a 15k fibre channel drive. And that 8-device multi-controller hack gets me almost... wow, almost 1/15th of the expansion capacity of a fibre channel controller.

            IDE RAID is great for slow-speed, non-critical, single-reader/single-writer type of access. It blows for anything real. It's unfortunate that most slashdotheads don't have real jobs, so they don't understand that real servers actually have to do things, not just load mozilla and play quake.

            • OK, 8 ports of dedicated bandwidth doesn't mean shit on a slow drive. If you ran an ATA100 drive at ATA33 you wouldn't notice any difference at all in I/O speed. The reason is that the internal transfer rate of a 5400 or 7200 rpm drive is slow enough that it can't max out the bandwidth of the bus it resides on. Having 8 individual ports doesn't mean shit in terms of throughput if your drives aren't fast enough. However, with 8 7200rpm SCSI drives on the same bus you're going to maximize the bus bandwidth, because none of the drives can max it out individually but all together they shouldn't have any problem.

              The SCSI drives aren't going to shit themselves maxing out the bandwidth either, because they individually support command queueing and can hold up to 256 commands waiting for some bandwidth on the bus to become available. They're also reordering file read/write requests so as to do as little seeking between reads and writes as possible, storing data in their often much larger buffer. IDE drives fetch and write files in the order they are commanded, often not the most efficient order. The 8 SCSI drives are also going to stand up to thermal stress much better than IDE drives and for the most part have a higher MTBF than their IDE counterparts.

              Given the choice of a single 40GB IDE drive or 40GB of SCSI (4x9.1GB RAID-0 / 5x9.1GB RAID-5), I'd definitely go with the SCSI option even though I'm paying a lot more up front. Shit, I'd go with a single SCSI drive over the single IDE drive, especially if I'm handling a bunch of small files. The SCSI drive's command reordering is going to be much more efficient grabbing a bunch of small files and acting as a swap partition than the IDE drive is. Don't be so easily misled by the marketing claims of IDE devices; external transfer rates aren't nearly as important as internal ones.
            • You are totally correct.

              However, for some uses, like the ones that we are using them for, moving very large files around, and just storing them cheaply, IDE was the way to go.

              I am not saying that IDE is technologically better, that would be stupid. I'm saying it has a place, a place that some people might ignore in the large server market because of an almost religious devotion to SCSI.

              You need to look at the needs of the project at hand, and design a solution that works with the best cost benefit ratio. For us, that included massive IDE arrays.
              • Of course IDE has its place; my PC isn't humming along with 10k rpm SCSI drives (yet). However, if I'm asked to do a professional setup I'm going to ask for a couple extra bucks for a SCSI drive, because most situations don't call for massive files to be transferred back and forth. In most retail applications you're doing a lot of small transactions, which are going to be done a lot faster with a SCSI setup.
      • Yeah, I just installed some 15K RPM IDE hard drives.... oh wait, they don't exist.
        • I don't know of any high-end RAID that uses 10 or 15K rpm drives. (Not saying they don't exist, just that it isn't usual to do so.)

          The heat situation would be terrible, and so would spin-up power requirements.
          • What the fuck world do you live in, where servers don't use 10K and 15K drives? Even shitty-ass $5,000 PC-based servers have 10K drives in them. Let alone the big guns, like the SunFires, and the Sun E-series boxes.

            I'm trying not to be rude, but what the fuck kind of "servers" are you talking about? Have you even ever seen a data center, the kind with the raised, non-static floors, uninterruptible power, redundant heating/air conditioning and (in a small one) a couple hundred servers?

            What are you, fifteen fucking years old, and dumb enough to think you know everything?

            • What are you, fifteen fucking years old, and dumb enough to think you know everything?

              Damn man, did you forget to hit the "post anonymously" button?

              Those data centers are what (I'm guessing) 2% of companies need for IT support. The other 98% look for solutions that fit the problem within a certain budget.

              Ever stop to think that the "best technology at any cost, even if we don't need it" philosophy may have contributed in large part to the economic collapse in the tech sector?

              In regard to the other thread... I built those servers in the first link. We aren't running some huge database; they are used as a large archival and retrieval system. It doesn't have to be particularly fast, only big and reasonably fast. It was the best solution to the problem. They write at 25-35MB/sec and read at 85-140MB/sec, depending on file system and load type.
                • Apparently you haven't seen downtime cost calculations. Take down the accounting system, and you lose the ability to bill, accept payment, or ship product. Hundreds of thousands per day are lost, and that's a LOW figure.

                The fact of the matter is that good infrastructure saves money. It requires fewer employees to maintain it, it scales better as new requirements emerge, and it helps ensure high uptime, which is absolutely critical, even if you're not a web retailer.

                In regards to your implication that I'm an AC troll, absolutely not. I stand by my comments, and my implication that you're a fucking retard.

                • You know, personal attacks are a sign that your argument is too weak to defend using rational means.

                  I'm done with this thread; it's going nowhere.
                  • If you were done with it, you would've stopped posting, and showing your ignorance, or perhaps would've refuted my arguments instead of merely noting that I'm offending your delicate sensibilities.
              • I don't know of any high end RAID that uses 10 or 15K rpm drives. (not saying they don't exist, just that it isn't usual to do so)
                ...
                Those data centers are what (i'm guessing) 2% of companies need for IT support. The other 98% look for solutions that fit the problem within a certain budget.

                Hi, nice to meet you. I'm a sysadmin at a community college [wccnet.org]. Not that high a budget, y'know? Still, we use at least 10k SCSI drives in everything we can, 15k for the ones that matter.
                We make Good Use of these drives, and if they were any slower I would be getting way, way too many phone calls.

                If you look at Dell's offerings [dell.com] (we buy a lot of Dells here) in the server range, it's tough to find something that doesn't come with 10k SCSI drives. I think their 350 is the only one that comes with IDE drives.
                Going over to Sun's lineup [sun.com], you'll see that their low-end desktop machines like the SunBlade 100 [sun.com] now have IDE drives in them, but everything else has at least 10k SCSI or FC drives.

                I know plenty of people who run servers off of PC, IDE-based hardware, but most of these are personal sites of fellow geeks. My home mass storage unit has one of those nifty Promise FastTrack100 [promise.com] IDE RAID cards, but that's b/c I can't afford SCSI and the storage is only used by me (well, my friends too when they download my movies/mp3s, but scp'ing via my home net connection will in no way hammer the storage unit). Most server rooms I've been to have the Dells or similar equipment with SCSI in them; even in the really shitty server rooms with really shitty boxes, those people still use SCSI cards & drives.

                Of course you're right about cost and use, but in most environments it is essential to plan for the future. Buying more or faster disk than we currently need might seem silly now, but sometimes growth occurs inversely proportional to budget - I'm already regretting not having taken larger bites when I could have, b/c some of our servers are becoming seriously underpowered and I don't know if our current budget will let us purchase what we need (but I bet I coulda swung for more when I first bought the server in question).

  • It also depends on filesystem and partition size

    e.g. a 4GB FAT32 partition will outperform a 20GB NTFS partition on the same type of disk.
  • by Graymalkin ( 13732 ) on Wednesday January 23, 2002 @10:20AM (#2887620)
    IDE hard drives are very dumb. They are given commands and execute them in the order they are received, and they require the guidance of a parental figure in order to work properly. They also can't bear to be alone while they do work of any kind. Any time an IDE drive processes a command it takes full control of the IDE bus and cannot release it until all commands issued are complete. If you occupy both positions on an IDE channel, one of the drives is going to be losing out hardcore to the other drive when it comes to throughput.

    If you really want a reliable storage system under either Windows or Linux, go with SCSI drives rather than IDE. SCSI drives are smart and don't need their hand held while doing work. SCSI drives will reorder read/write requests so the order they're executed in is the most efficient order, not just the order received. They also relinquish control of the bus after accepting a command and can hold commands in a queue until they can get some bandwidth on the bus again. Adding a second drive to a SCSI bus doesn't ruin the performance like it does with IDE drives. Drives can also talk to one another independently of the host system, which means transferring data from a hard drive to a CD-R doesn't require total control of the host CPU like it would with IDE. Meanwhile, you can still read and write data to another drive that isn't being used to burn a CD without making anything crap out on you.

    SCSI costs more, but you get better performance out of it. You can pretty readily find 9GB SCSI drives for under $100, and a couple of them on a RAID controller ought to provide you with plenty of throughput for a long time. (A toy illustration of the reordering point is sketched below.)
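The reordering argument can be made concrete with a toy model: service the same burst of requests in arrival order versus in a single sorted sweep and compare how far the head travels. This is a deliberate oversimplification (real drives, drivers, and tagged queueing are far more involved), but it shows why accepting commands out of order pays off:

```python
import random

def seek_distance(requests, start=0):
    """Total head travel, in tracks, when requests are serviced in the given order."""
    pos, travel = start, 0
    for r in requests:
        travel += abs(r - pos)
        pos = r
    return travel

if __name__ == "__main__":
    random.seed(1)
    # A burst of pending requests scattered across a 10,000-track disk.
    pending = [random.randint(0, 10000) for _ in range(64)]

    fifo = seek_distance(pending)              # IDE-style: exactly the order received
    swept = seek_distance(sorted(pending))     # queue-and-reorder: one sweep across the disk

    print("head travel, arrival order: %d tracks" % fifo)
    print("head travel, sorted sweep : %d tracks" % swept)
```

The sorted sweep typically covers an order of magnitude less head travel, which is roughly the advantage a deep command queue buys under a random small-file workload.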
  • Not a tech problem (Score:3, Insightful)

    by sql*kitten ( 1359 ) on Wednesday January 23, 2002 @11:54AM (#2888198)
    This is significant as our servers are used to provide Point-of-Sale availability for registers in the retail environment, which is heavily dependent on disk i/o performance for efficiency.

    Whenever I come across a scenario like this, I tell people to take a step back and before making any technical decisions, figure out what it is you are actually trying to accomplish. If you are really after high performance, get SCSI disks. If you're after cheapness, then you will simply have to accept that IDE disks are slower.

    This isn't a question for a techie to answer, BTW. One of your business managers will have to think about how many transactions per day are processed, when the cost of the system can be recouped at a given percentage of each transaction, whether or not paying more for SCSI makes financial sense, and whether higher unit cost will mean you sell fewer units. Get one of your tame MBAs to think about this for you.
  • Ironic this comes up today. I was just playing around with my old BP6 motherboard with the Promise 66 controllers, and was trying to read up on the different possible setups.

    Looks like you can do either PIO, UDMA or MW DMA.

    Just by playing with 'hdparm -t', it appears that I get the best performance with it set to UDMA.

    (I managed to almost double the read speed by tweaking the IDE driver settings)

    Anyone know where I could find out what PIO/UDMA/MW DMA are?
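Briefly: PIO (programmed I/O) means the CPU copies every word of data itself, while Multiword DMA and Ultra DMA let the controller move data directly to memory, with Ultra DMA adding the higher-speed modes plus CRC checking on the cable; 'hdparm -i' lists which modes a drive supports and 'man hdparm' describes the flags. For repeating the kind of check done above across several drives, a small wrapper along these lines might help - a sketch only, assuming Linux, root, and placeholder device names:

```python
import subprocess

# Placeholder device names on a two-channel board; adjust to the drives under test.
DRIVES = ["/dev/hda", "/dev/hdc"]

for dev in DRIVES:
    # '-d' with no argument reports whether using_dma is currently on (1) or off (0).
    subprocess.run(["hdparm", "-d", dev])
    # '-t' times buffered sequential reads, the same quick throughput check as above.
    subprocess.run(["hdparm", "-t", dev])

# Enabling DMA by hand (root required, and the chipset driver has to support it):
#   hdparm -d1 /dev/hda
```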
  • If you're running a server, you really really really want SCSI. It doesn't cost more, all things considered (uptime, reliability, future expansion).
