Single IDE vs Dual IDE?
jrsimmons asks: "I'm running performance tests on IDE interface configurations for my company. I've discovered that disk-to-disk I/O is significantly faster (in the realm of 30%-40%) when only a single IDE interface is active versus when two IDE interfaces are active. This is significant because our servers provide Point-of-Sale availability for registers in the retail environment, which is heavily dependent on disk I/O performance for efficiency. I have run the tests under both Windows and our retail OS (sorry, no Linux) with similar results. What are some possible explanations for the detrimental effect the second active IDE controller has on disk I/O speed?"
Has anyone measured this deficiency on Linux and other Unices?
Sounds strange (Score:5, Interesting)
For two disks, you should get the best results with both disks configured as masters on two different IDE buses.
If you're not seeing that, I'd check that you have the correct drivers/optimizations for your IDE chipset enabled. You also might want to check IRQ allocation to make sure there are no strange conflicts. Check your Windows (NT/2000) event log to make sure there are no strange IDE timeouts indicating hardware issues. If you still see the problem, try your test on a different hardware platform (motherboard/controller combo).
From your description, however, you might want to go with a RAID technology such as RAID 1, RAID 5, or RAID 1+0. It will offer much better redundancy and possibly improved performance.
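For the Linux users asking about this above, a minimal sketch of the RAID 1+0 idea with Linux software RAID; mdadm, the device names, and the four-disk layout are all assumptions for illustration, not anything from the original post:

    # RAID 1+0 over four disks, each a master on its own IDE channel
    # (hypothetical devices; adjust to your hardware)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/hda /dev/hdc /dev/hde /dev/hdg
    mkfs -t ext2 /dev/md0
    mount /dev/md0 /mnt/array

Keeping every member disk a master on its own channel avoids the master/slave contention discussed elsewhere in this thread.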
Re:Sounds strange (Score:4, Informative)
Not to me. Ever since UDMA support came out, I've seen the same 30-40% increase copying data between two disks on the same IDE chain, compared to the exact same two disks on different IDE chains.
It's all about the UDMA100 specs (Score:4, Informative)
I believe this has to do with the UDMA specs regarding cable length, connectors, etc. I recently had a lot of trouble with a UDMA100 Maxtor drive. They got back to me and informed me that UDMA wouldn't be guaranteed to even run at UDMA100 (mode 5?), and even if the drive did detect at UDMA100, the performance could be much worse.
Having finally got my drive detecting as UDMA100, I can totally agree with the performance issues, under Windows 2000 at any rate. My slave drive gets on average 30 MB/sec when running a transfer rate test on top of NTFS. My master drive gets on average 60 MB/sec on the same test.
If you read the installation instructions for all UDMA100 drives (well, all the ones I've seen ;) ), they say to make sure the drive is attached to the black connector on the cable for best performance. It looks like UDMA100 just isn't designed to run both drives on the controller at high speed.
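For the Linux-curious, one way to verify what mode a drive actually negotiated is hdparm (which also shows up later in this thread); /dev/hda is an assumed device name:

    # The mode currently in use is marked with a '*'
    hdparm -i /dev/hda | grep -i dma
    #  DMA modes:  mdma0 mdma1 mdma2
    #  UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5

If the star sits on a lower mode than you paid for, suspect the cable (80-conductor is required for UDMA66/100) or the drive's position on it.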
Re:SCSI (Score:1)
...and UDMA-4, the only advantage I know of for SCSI comes from tagged command queueing. If you have a bofunk SCSI controller, you don't even get that.
I know that SCSI used to be better. I just don't know any reason to believe that it still is.
Re:SCSI (Score:1)
Reads at >80 MB/sec, writes ~25 MB/sec. Cost one third of what a SCSI equivalent costs.
Re:SCSI (Score:1)
5400 RPM is to save on power supplies. 7200 RPM drives would have pulled a lot more current on startup, because yes, there is no way in IDE to delay spin-up.
For future expansion, once we max out these current systems, we will use external IDE-to-SCSI chassis: they take ATA drives, hardware-RAID them, and then connect to the host computer via SCSI. We can add these to infinity, and save tons of money by never buying a single SCSI drive.
Re:SCSI (Score:1)
...(non-RAID) controller when I care about performance. But I'd rather have 2 IDE drives on two onboard controllers than 2 SCSI drives on one controller.
Hang On.. Give Some Info. (Score:3, Interesting)
I'll take for granted that you actually have a good way of measuring drive performance, and it's not just a 'feeling'.
What motherboard/chipset/PCs are you talking about here? Have you replicated the results on dissimilar hardware?
What was the significance of the second active IDE controller? Were you moving data to two drives?
And finally, why is your system sooooo dependent on disk I/O? If this is the case, mayhap you need to re-engineer the app somewhat to balance out the disk I/O aspect. If it's actually CONSTANTLY saturating one or two IDE channels, quit being a complete twit and move to SCSI, where this isn't a problem.
If you actually want help on this, you had better provide a heck of a lot more information up front.
G
Ummm (Score:2, Interesting)
If they share the same PCI bus (I am assuming it's not an ISA IDE bus), then you have twice the disk I/O flowing through the same limited bandwidth... this is bound to show some performance degradation.
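For rough numbers on that (standard PCI assumed): a 32-bit, 33 MHz PCI bus moves at most 4 bytes x 33 MHz = ~133 MB/sec, shared by everything on it, while two ATA/100 channels running flat out could ask for 200 MB/sec between them. Contention is plausible before you even count controller overhead.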
Seconded (Score:1, Interesting)
SCSI is a much better option for fast disk access, especially if you stripe the disks. I've seen a 100% performance boost (i.e., a doubling of speed) on a 12-hour job by employing disk striping.
Re:Seconded (Score:2)
What? How will using SCSI sidestep the PCI bus contention issue?
My "benchmark" (Score:4, Informative)
I get almost a 100% increase in speed if I have the disks configured as masters on two separate controllers instead of master+slave on one.
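For anyone who wants to reproduce this comparison on Linux, a rough sketch with hdparm; the device names are the usual assumptions (hda/hdb = primary master/slave, hdc = secondary master):

    # Both disks on one channel: master and slave contend for the cable
    hdparm -t /dev/hda & hdparm -t /dev/hdb & wait
    # One disk per channel: each master gets the whole channel
    hdparm -t /dev/hda & hdparm -t /dev/hdc & wait

hdparm -t only does buffered sequential reads, so it approximates rather than replicates a disk-to-disk copy, but the same-channel penalty shows up clearly.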
Re:My "benchmark" (Score:1)
that becomes a moot point when you have them both as masters on separate channels
Disk I/O performance (Score:4, Insightful)
If you are running I/O-intensive applications, there is no substitute for SCSI. IDE is still too braindead to do the job effectively with decent interactive, multitasking performance. Don't waste your company's time fiddling with consumer-level hardware in a professional environment.
How much is your time worth? How much is this application worth to your company? In a professional server, SCSI is not expensive.
Re:Windows IDE quirk (Score:1)
Only problem is, Win2k has a problem with DMA mode on onboard controllers...
http://support.microsoft.com/default.aspx?scid=
"ATA66 DMA transfer mode is not supported for the onboard IDE controller."
Which accounts for the wasted day trying to debug my 2k setup (at home... I didn't have another working PC, so I couldn't get to the Knowledge Base... luckily my Linux disks arrived the next day... and the rest is history. The only reason I keep 2k is for some games and compatibility with Office docs I port to and from work...)
I'd guess... (Score:4, Interesting)
This is a pretty common mistake: if the drive is in PIO mode, all I/O goes through the CPU.
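On Linux you can check for exactly this with hdparm (an assumed /dev/hda again):

    # 'using_dma = 0 (off)' means every transfer goes through the CPU
    hdparm -d /dev/hda
    # Turn DMA on, assuming the kernel has a driver for your chipset
    hdparm -d1 /dev/hda

On Windows 2000, the equivalent is the transfer-mode setting on the IDE channel in Device Manager.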
Master/Slave or Primary/Secondary? (Score:1)
I certainly have seen a performance cut when both drives are accessed in the first (master/slave) arrangement, but All Good Techs know this already! If he is referring to the master-on-primary and master-on-secondary arrangement, I would say you have an isolated problem there! I have never seen performance penalties for running drives on separate controllers. In fact, this is why, when you try to burn from CD to CD, the recommended arrangement is one drive on the primary and one drive on the secondary....
yeesh... (Score:4, Informative)
The two devices on the primary controller cannot both be transferring data at the same time, so performance will take a severe hit if you are reading or writing to both simultaneously, regardless of whether the disks are transferring data between each other or to some other device on the secondary controller.
When data is transferred between a device on the primary and a device on the secondary controller, there is no performance hit from the inability to read and write simultaneously; i.e., you can read or write at the same time if each device is on a different controller, but not on the same controller.
Now, in your case, I think you are saying that you notice poor performance even in this scenario, i.e., transferring data across two controllers. The reason for this is that IDE is severely CPU-dependent. What kind of CPU are you running on these machines? IDE's CPU dependence is what makes it STILL a poor substitute for I/O-heavy use when compared with SCSI. SCSI devices are not CPU-dependent; as well, you can simultaneously read and write to all devices on the chain. Also, transfer speeds are faster, and the RPMs of SCSI drives tend to be higher as well.
So I would surmise that the reason you are seeing your performance hit is that the CPU is just working twice as hard to transfer data from one controller to the other. If you actually are trying to transfer data across the same controller, i.e., from master to slave or vice versa, you should stop doing that. That's really slow and quite silly. Get SCSI. It's worth it.
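A crude way to watch the CPU cost described here, on Linux (dd and vmstat; the device and sizes are illustrative):

    # In one terminal, watch the 'sy' (system CPU) column:
    vmstat 1
    # In another, stream a gigabyte off one disk:
    dd if=/dev/hda of=/dev/null bs=1M count=1024
    # In PIO mode 'sy' pegs; with DMA enabled it should stay low

If the system CPU column saturates during a plain sequential read, the interface, not the platters, is your bottleneck.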
I found your problem (Score:3, Flamebait)
I know some dick will moderate me down because I was rude and used the word 'dick' (which always gets the moderators going), but it's true. If you care about speed, IDE is an inappropriate tool. Take it out of your toolbox, and forget about it.
Re:I found your problem (Score:2)
I guess this [smythco.com] sucks?
Or This? [accs.com]
This? [att.com]
This? [sdsc.edu]
These? [raidzone.com]
This stuff? [zero-d.com]
IDE is here to stay in the high end market, and it's going to kick SCSI's ass. Why pay 3X more per drive for the same HDA with a different interface board?
This is from the server in the first link above. Note that most of the write bottleneck is caused not by the drives but by the hardware RAID5 controller.
                 --Block Write--  ---Rewrite---  --Block Read--  ---Seeks---
    Machine Size   K/sec    %CP    K/sec    %CP    K/sec    %CP    /sec  %CP
    bedford   1G   24436     11    22834     13    83890     43   361.2    2
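For reference, output in this shape typically comes from the classic bonnie disk benchmark; a hypothetical invocation along the lines of:

    # -d scratch directory, -s test size in MB (use more than the
    # machine's RAM so the page cache can't hide the disks),
    # -m label for the result row
    bonnie -d /mnt/array -s 1024 -m bedford

The exact column layout varies between bonnie versions.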
Re:I found your problem (Score:2)
Go ahead, call me a troll, but the only reason IDE is even getting usable is because they're slowly implementing more and more of the SCSI command set. The SCSI interface isn't just different, it's better.
Re:I found your problem (Score:1)
Sure, SCSI's better, but only until you look at cost.
Re:I found your problem (Score:2)
Wow, dedicated ATA/100 bandwidth... coming from a drive that's spinning at a maximum of 7200 RPM, that doesn't have a large command queue to optimize the transfers, and usually can't effectively do more than one thing at a time. That's great. That'll be... almost half as fast as a 15k Fibre Channel drive. And that 8-device multi-controller hack gets me almost... wow, almost 1/15th of the expansion capacity of a Fibre Channel controller.
IDE RAID is great for slow-speed, non-critical, single-reader/single-writer type of access. It blows for anything real. It's unfortunate that most slashdotheads don't have real jobs, so they don't understand that real servers actually have to do things, not just load mozilla and play quake.
Re:I found your problem (Score:1)
However, for some uses, like the ones we are using them for (moving very large files around and just storing them cheaply), IDE was the way to go.
I am not saying that IDE is technologically better; that would be stupid. I'm saying it has a place, a place that some people might ignore in the large-server market because of an almost religious devotion to SCSI.
You need to look at the needs of the project at hand and design a solution with the best cost/benefit ratio. For us, that included massive IDE arrays.
Re:I found your problem (Score:2)
The heat situation would be terrible, and so would the spin-up power requirements.
Re:I found your problem (Score:1, Troll)
I'm trying not to be rude, but what the fuck kind of "servers" are you talking about? Have you ever even seen a data center, the kind with raised, non-static floors, uninterruptible power, redundant heating/air conditioning, and (in a small one) a couple hundred servers?
What are you, fifteen fucking years old, and dumb enough to think you know everything?
Re:I found your problem (Score:2)
Damn man, did you forget to hit the "post anonymously" button?
Those data centers are what (I'm guessing) 2% of companies need for IT support. The other 98% look for solutions that fit the problem within a certain budget.
Ever stop to think that the "best technology at any cost, even if we don't need it" philosophy may have contributed in large part to the economic collapse in the tech sector?
In regard to the other thread... I built those servers in the first link. We aren't running some huge database; they are used as a large archival and retrieval system. It doesn't have to be particularly fast, only big and reasonably fast. It was the best solution to the problem. They write at 25-35 MB/sec and read at 85-140 MB/sec, depending on file system and load type.
Re:I found your problem (Score:1)
The fact of the matter is that good infrastructure saves money. It requires fewer employees to maintain it, it scales better as new requirements emerge, and it helps ensure high uptime, which is absolutely critical, even if you're not a web retailer.
In regard to your implication that I'm an AC troll: absolutely not. I stand by my comments, and my implication that you're a fucking retard.
Re:I found your problem (Score:1)
I'm done with this thread; it's going nowhere.
Re:I found your problem (Score:2)
"Those data centers are what (I'm guessing) 2% of companies need for IT support. The other 98% look for solutions that fit the problem within a certain budget."
Hi, nice to meet you. I'm a sysadmin at a community college [wccnet.org]. Not that high a budget, y'know? Still, we use at least 10k SCSI drives in everything we can, 15k for the ones that matter.
We make good use of these drives, and if they were any slower I would be getting way, way too many phone calls.
If you look at Dell's offerings [dell.com] (we buy a lot of Dells here) in the server range, it's tough to find something that doesn't come with 10k SCSI drives. I think their 350 is the only one that comes with IDE drives.
Going over to Sun's lineup [sun.com], you'll see that their low-end desktop machines like the SunBlade 100 [sun.com] now have IDE drives in them, but everything else has at least 10k SCSI or FC drives.
I know plenty of people who run servers off of PC, IDE-based hardware, but most of these are personal sites of fellow geeks. My home mass-storage unit has one of those nifty Promise FastTrak100 [promise.com] IDE RAID cards, but that's because I can't afford SCSI and the storage is only used by me (well, my friends too when they download my movies/mp3s, but scp'ing via my home net connection will in no way hammer the storage unit). Most server rooms I've been to have the Dells or similar equipment with SCSI in them; even in the really shitty server rooms with really shitty boxes, those people still use SCSI cards & drives.
Of course you're right about cost and use, but in most environments it is essential to plan for the future. Buying more or faster disk than we currently need might seem silly now, but sometimes growth occurs inversely proportionate to budget. I'm already regretting not having taken larger bites when I could have, because some of our servers are becoming seriously underpowered and I don't know if our current budget will let us purchase what we need (but I bet I could have swung for more when I first bought the server in question).
It depends on FILESYSTEM and SIZE (Score:1)
e.g., a 4 GB FAT32 partition will outperform a 20 GB NTFS partition on the same type of disk.
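To separate the filesystem's contribution from the raw disk, it helps to time both; a rough Linux sketch (the mount points and sizes are illustrative):

    # Raw sequential read from the device, no filesystem involved
    time dd if=/dev/hda of=/dev/null bs=1M count=256
    # Write the same amount through each filesystem under test
    time sh -c 'dd if=/dev/zero of=/mnt/fs1/test bs=1M count=256; sync'
    time sh -c 'dd if=/dev/zero of=/mnt/fs2/test bs=1M count=256; sync'

If the raw numbers match but the per-filesystem numbers diverge, the filesystem (or its cluster/block size) is what you're really measuring.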
Not a tech problem (Score:3, Insightful)
Whenever I come across a scenario like this, I tell people to take a step back and before making any technical decisions, figure out what it is you are actually trying to accomplish. If you are really after high performance, get SCSI disks. If you're after cheapness, then you will simply have to accept that IDE disks are slower.
This isn't a question for a techie to answer, BTW. One of your business managers will have to think about how many transactions per day are processed, when the cost of the system can be recouped at a given percentage of each transaction, whether or not paying more for SCSI makes financial sense, and whether higher unit cost will mean you sell fewer units. Get one of your tame MBAs to think about this for you.
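To make that concrete with invented numbers: if SCSI adds, say, $600 per server across 50 stores, that's a $30,000 premium; if faster disks shave 2 seconds off each of 1,000 transactions per store per day, that's 2 x 1,000 x 50 = 100,000 seconds (about 28 hours) of register time saved daily, which someone with the labor and margin figures can turn into a payback period. All figures here are hypothetical placeholders, not estimates.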
IDE performance terminology (Score:1)
Looks like you can do PIO, UDMA, or MW DMA.
Just by playing with 'hdparm -t', it appears that I get the best performance with it set to UDMA.
(I managed to almost double the read speed by tweaking the IDE driver settings)
Anyone know where I could find out what PIO/UDMA/MW DMA is?
Re:IDE performance terminology (Score:1)
PIO: programmed I/O; the CPU moves every word itself, which is why it is slow and CPU-hungry
UDMA: 33.3 MB/s DMA interface (the same scheme, sped up, gives UDMA66 and UDMA100)
MW DMA: multiword DMA, 16.6 MB/s
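If you want to experiment, hdparm can also force a mode; a cautious sketch (mode numbers per the hdparm man page, /dev/hda assumed):

    # -X takes 32+n for multiword DMA mode n, 64+n for UDMA mode n
    hdparm -X34 /dev/hda   # multiword DMA mode 2 (16.6 MB/s)
    hdparm -X66 /dev/hda   # UDMA mode 2 (33.3 MB/s)
    hdparm -t /dev/hda     # re-run the read test after each change

Forcing a mode the cable or chipset can't actually handle can hang the bus, so keep a rescue disk handy.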