Technology

IDE Co-Processors? 27

morbid asks: "EIDE is generally considered to be inferior to SCSI because it requires more involvement from the processor, slowing the system down, but would it not be possible to build an EIDE/ATA (?) controller with its own processor, freeing up the CPU and increasing system performance while allowing the use of inexpensive drives?"
This discussion has been archived. No new comments can be posted.

IDE Co-Processors?

Comments Filter:
  • Most modern motherboards (probably all) support bus mastering, which does what you want.


    Bus Master IDE FAQ [mirror.ac.uk]

    Bus Mastering IDE technology implements logic circuitry on your motherboard to reduce CPU's work of retrieving and storing data on your hard disk drive. This technology can potentially "free up the CPU" to do other tasks in a multitasking operating system environment such as Windows* 95, Windows NT*, OS/2* etc.
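
    (If you want to check whether the bus-master/DMA path is actually in use on a Linux box, something like the sketch below works. This is only a sketch: it assumes the HDIO_GET_DMA/HDIO_SET_DMA ioctls that "hdparm -d" drives, and /dev/hda is just an example device.)

      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <linux/hdreg.h>

      int main(void)
      {
          long dma = 0;
          int fd = open("/dev/hda", O_RDONLY | O_NONBLOCK);  /* example device */

          if (fd < 0) { perror("open"); return 1; }
          if (ioctl(fd, HDIO_GET_DMA, &dma) == 0)
              printf("using_dma = %ld\n", dma);   /* 1 = bus-master DMA in use */
          /* ioctl(fd, HDIO_SET_DMA, 1L); */      /* the "hdparm -d1" equivalent */
          close(fd);
          return 0;
      }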

    --
  • It's called SCSI; why reinvent the wheel?
  • Why not something faster than SCSI, but for 2 devices per channel? Right now the only thing faster than ATA/100 is Ultra160, but there's the one-disk-at-a-time thing. Once this is fixed, SCSI will really need to be trashed!
  • Well, SCSI is a bit more than a fast way of talking to a drive. It also frees the CPU from many of the drive-geometry issues that it would otherwise have to track. SCSI also allows things like multiple devices per channel, request queueing, smarter I/O, disconnect/reconnect, etc. These features aren't likely to show up in EIDE drives simply because adding them would cost about the same as SCSI, and why reinvent the wheel?

    Right now the next step in consumer high-speed drives appears to be Firewire/iLink/1394 (depending on the vendor). USB 2.0 has just appeared in silicon, but it's already slower than Firewire/iLink/1394 and not as flexible. Intel is also working on PCI-X as a next-generation replacement for the now-venerable PCI bus. They appear to be going to a serial-bus design with smart interconnects.

    One well-regarded scenario for the future of PCs has them turning into black boxes containing little more than a CPU and graphics card. Everything else would be handled through high-speed serial connections.

  • Ah, but you forget - SCSI 4 and 5 are on the way - up to something like 480 MB/s.
  • Ultra2/80 performs remarkably better than ATA/100, and supports a load more devices, too... Ultra 320 (packetized SCSI - like FC) is not too far off, with 640 being just a little pipe-dream at the moment (the silicon tech required for the SCSI drivers... well, that many lines with that much speed with that much driving capability all on one chip is a tall order).

    Once ATA gets fixed (trashed) we won't need to bother with it anymore... damn legacy 8^)

    --
  • If I have 4 different bus-mastering devices on the PCI bus, won't they start fighting for control? Also, note that the quote from the FAQ says it "can _potentially_ free up the CPU". I have yet to see an improvement from using bus-master drivers over normal drivers.
  • and at like 4x the cost per megabyte over IDE
  • Yes, bus mastering has been around since the PIIX days (95/96), but all that says is that the IDE controller is a bus master on the PCI bus, with DMA capability. Whoop-dee-freakin-doo. This still doesn't relieve the main processor of the other drive-management tasks it has to do. There's a lot of good stuff that a SCSI initiator and target take care of that the IDE controller and target don't, and this is where the benefit is.

    So, yes, bus mastering is good. My $3 network card does it too, as does (should) any non-brain-dead PCI device that involves any sort of data flow (LAN, drive controller, video, capture). It doesn't take care of everything, though.
    --
  • There are any number of vendors working on PCI-X, but PCs would still be better served if we could see some 64-bit slots in them. Any 33MHz card will cause the 66MHz (PCI) or 133MHz (PCI-X) bus to operate at 33MHz (unless you add more bridges so they are separate physical busses), but 64-bit slots can buy you a lot with not too much effort (a bunch of extra board traces, chip pinout, connector, etc.) and not all that much more cost. Heck, the DEC Alpha PC boards (note: 21164PC chip, not PC as in x86) have had 64-bit PCI since, what, '96?
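
    For rough numbers (peak theoretical bus rates, not what any given board will actually sustain): 32-bit/33MHz PCI tops out around 4 bytes x 33M = ~133 MB/s, 64-bit at the same 33MHz doubles that to ~266 MB/s, 64-bit/66MHz is ~533 MB/s, and 64-bit/133MHz PCI-X works out to roughly 1 GB/s.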
    ......
    --
  • Western Digital WDE18300-0048 18.0GB ULTRA2 WIDE SCSI 68pin 7200RPM 6.9MS 2MB CACHE 18.2GB $243

    Western Digital 18.0GB EIDE 7200 RPM OEM, 1 YEAR WARRANTY $109

    Doesn't look like 4-to-1 to me. I'll pay the extra $130 for SCSI, personally.

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • Besides the advantages that have already been mentioned, Ultra160 supports much longer cables (12 meters) than any variety of IDE. Assuming you had enough IDE controller chips, how are you going to connect 8 hard drives to a PC with 18" cables?
  • Though recent incarnations of the IDE bus may address these problems, lower CPU overhead is not the only reason that SCSI drives are better. The support for tagged command queueing, and the ability of a SCSI drive to accept a command over the bus and then get off the bus until it has the data (thus allowing commands and data to flow over the bus to and from other devices while the first is still working), also made the SCSI system better than (at least early) IDE systems, especially for server systems running Unix where there are lots of concurrent transactions.
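
    To make the queueing point concrete, here's a rough host-side sketch (plain POSIX AIO, nothing SCSI-specific; the device name and request sizes are made up) of keeping several reads outstanding at once. A drive with tagged command queueing can accept them all, disconnect, and complete them in whatever order minimizes head movement; a one-command-at-a-time IDE drive has to finish each before the next is issued.

      /* several reads in flight at once (illustration only; link with -lrt) */
      #include <aio.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      #define NREQ 4

      int main(void)
      {
          struct aiocb cb[NREQ];
          static char buf[NREQ][4096];
          int fd = open("/dev/sda", O_RDONLY);            /* example device only */

          if (fd < 0) { perror("open"); return 1; }
          for (int i = 0; i < NREQ; i++) {
              memset(&cb[i], 0, sizeof cb[i]);
              cb[i].aio_fildes = fd;
              cb[i].aio_buf    = buf[i];
              cb[i].aio_nbytes = sizeof buf[i];
              cb[i].aio_offset = (off_t)i * 1024 * 1024;  /* scattered offsets */
              aio_read(&cb[i]);               /* all queued before any completes */
          }
          for (int i = 0; i < NREQ; i++) {
              const struct aiocb *one[1] = { &cb[i] };
              aio_suspend(one, 1, NULL);      /* wait for this request to finish */
              printf("request %d: %ld bytes\n", i, (long)aio_return(&cb[i]));
          }
          close(fd);
          return 0;
      }
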
  • Think spheres; get a bunch of K'nex (sort of but not quite legos or erector set); cases are for those who lack imagination.

    I've got 6 short and 8 full-height drives, 2 dual PPro systems, power supplies, fans & so on, in less space than a single full-size tower case.

    Though I'm a big SCSI fan, I've got to admit the single 8' SCSI cable is much more of a bitch to deal with than the IDE cables.

  • Or, do what I do, and buy IDE for mass "garbage" storage and SCSI (soon RAID-ed) for speed.

    You get what you pay for.

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • Well, I have. Using Windows NT (cough) with normal drivers, copying a large file (40-50MB or more) would take 100% of processor time, while with bus-mastering drivers it took just 3%-5%, or even less.

  • If IDE requires so much assistance from the main CPU to get its deed done, then would it not be possible to SCSIfy an IDE drive with some tantric controller card / embedded adapter encasing that would do this "dirty work" and emulate SCSI with its lovely 1% CPU usage? Even dedicated ATA/66 and ATA/100 cards like the Promise RAID controller swallow a good slice of the CPU time.

    For most hard drives it's not too bad, but the real problem comes when you have a high-speed CD-ROM. Think Kenwood 72x. That thing sucks away nearly 20% CPU no matter how fast the PC. Another brand-name 40x drive also bogs down my other box by a good 12-15%. On the other hand, my SCSI 24x could (theoretically) run on a 286/12 without a hiccup (of course the system bus would freak out).

    Are there spec incompatibilities between SCSI and IDE that would prevent this kind of gadget from existing? It would allow high-end workstation users to add cheap, unreliable storage for their pr0n on those fancy-shmancy SGI boxes.
  • It has already been done, at least as a RAID solution. I was searching for fast, huge storage solutions 3 years ago and found a company that was making SCSI RAID drives based on IDE. I can't remember the company that I talked to back then, but here's one link [transtec.co.uk] I could find with a brief search.

    I think if you take it to RAID-0 you can array all those IDE drives to look like one HUGE SCSI drive.
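
    The striping itself is just bookkeeping. A toy sketch of how a logical block number would map onto a set of identical drives (chunk size, drive count, and names are all made up for illustration, not any particular product's layout):

      #include <stdio.h>

      #define NDISKS     4     /* e.g. four cheap IDE drives */
      #define CHUNK_BLKS 16    /* blocks per stripe chunk    */

      /* RAID-0: chunks are dealt out to the drives round-robin */
      static void map_block(long logical, int *disk, long *physical)
      {
          long chunk  = logical / CHUNK_BLKS;   /* which chunk overall        */
          long offset = logical % CHUNK_BLKS;   /* position within that chunk */
          *disk     = (int)(chunk % NDISKS);
          *physical = (chunk / NDISKS) * CHUNK_BLKS + offset;
      }

      int main(void)
      {
          for (long lb = 0; lb < 5 * CHUNK_BLKS; lb += CHUNK_BLKS) {
              int d; long p;
              map_block(lb, &d, &p);
              printf("logical block %3ld -> disk %d, block %ld\n", lb, d, p);
          }
          return 0;
      }
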
  • You mean something like this SCSI-to-IDE bridge [acard.com]?
    --
  • Older Mac PowerBooks used 2.5" SCSI drives. The most common upgrade drives for those machines were IDE drives with an extra controller card that made them appear to be SCSI to the computer.
  • Many SCSI drives used to actually be ST506 and ESDI drives with SCSI bridge boards installed by the manufacturer. There is no reason that it couldn't be done with IDE drives.
  • Did you look at the IDE RAID controllers? They use normal IDE drives, so the controller must be fairly intelligent. And I believe you can select whether writes are reported as complete when the controller gets the request or when the data has been physically written to the disks.
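
    As a toy illustration of that write-acknowledgement choice (everything here is invented for illustration, not any real controller's firmware):

      #include <stdio.h>

      enum ack_policy { WRITE_BACK, WRITE_THROUGH };

      /* stand-ins for talking to the hardware */
      static void cache_insert(int block)  { printf("block %d in controller cache\n", block); }
      static void flush_to_disk(int block) { printf("block %d on the platters\n", block); }

      /* returns when the host is told "write complete" */
      static void controller_write(int block, enum ack_policy policy)
      {
          cache_insert(block);
          if (policy == WRITE_THROUGH)
              flush_to_disk(block);   /* don't ack until the data is really on disk  */
          /* WRITE_BACK: ack now, flush later from the controller's cache -- faster, */
          /* but anything still in cache is lost if the power dies before the flush  */
      }

      int main(void)
      {
          controller_write(1, WRITE_BACK);
          controller_write(2, WRITE_THROUGH);
          return 0;
      }
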
  • by BitMan ( 15055 ) on Wednesday August 30, 2000 @11:07AM (#817936)

    Forget Promise, SIIG and others. 3Ware [3ware.com]'s Escalade series [3ware.com] of products are just what you are looking for. Keys to performance with Escalade:

    • On-board co-processor that acts like a SCSI target from the standpoint of the OS/driver. The same as you'll find on most SCSI RAID controllers (i960 or similar). This dedicated CPU drives your CPU, not your mainboard chipset's southbridge (which normally requires some CPU overhead even with bus mastering).
    • One IDE drive per channel. No "slave" issues. 100% hot-swap capability (although you'll need an IDE hot-swap bay/chassis for full hot-swap capability). Maximum performance.
    • 2-8 channel boards, roughly $50-60/channel -- not much more than those crappy Promise FastTrak cards, only much, much faster.
    • 100% Linux support. 3Ware controller support is built into most newer 2.2.x kernels.

    If you want to minimize cost and performance, 3Ware's Escalade is what you want. Their new 6000-series offers 2-8 channels of RAID-0/1/1+0 with Ultra66 support for $139/279/479 (2/4/8 channel) [thelinuxstore.com].

    3Ware is also working on a 64-bit PCI board with RAID-5 support (as well as Ultra100). Be looking out for it (I know I will).

    -- Bryan "TheBS" Smith

  • Er, that should read "dedicated CPU drives your disks" not "dedicated CPU drives your CPU." [ DOOH! ]

    Also wanted to point out that with any striping, mirroring, or parity, you are "bothering" your CPU with BIOS or software RAID solutions (and the Promise solution *IS* a "BIOS" RAID solution). The 3Ware controller will off-load these routines onto its own co-processor.

    -- Bryan "TheBS" Smith

  • "If you want to minimize cost and performance"...wouldn't you just spend nothing and receive same? :-)
  • Sounds like a space heater.
  • This debate has gone on for a long time... SCSI has many more benefits over IDE than just speed. Processor overhead is a big issue of course, and I can see a tremendous difference between an IDE and a SCSI system because of this. Another HUGE issue is bus width. One reason SCSI outperforms IDE is the fact that all the wide SCSI flavors are working on a 16-bit-wide bus (hence the 68-pin cable). ATA/33/66/100 is all a burst-mode thing. Sure you can get the high speed, but it's really a joke because you only get that high speed for a very short time. Wide SCSI can sustain much higher speeds for much longer. How often do you see IDE servers? There's a reason for that: you can't have your disk access bog down when more than a couple of users are hitting it.
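
    For rough numbers (peak bus rates, not what the platters themselves can feed): Ultra2 Wide moves 2 bytes per transfer at 40 MHz, so 2 x 40 = 80 MB/s; Ultra160 keeps the 40 MHz clock but clocks data on both edges, 2 x 40 x 2 = 160 MB/s. ATA/100's 100 MB/s is a burst figure out of the drive's cache, which is why the sustained numbers look so different.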

    My personal favorite part of SCSI is the ability of devices to work independently. On an IDE controller the bus is dominated by the slowest device on the chain. On a SCSI system you can have SCSI-1, SCSI-2, and SCSI-3 devices all running at their own speeds, happy together.

    Overall, as IDE gets faster, so will SCSI. They are two technologies and one is really not going to bury the other. IDE will have its market with budget-minded customers, and SCSI will be there for the performance applications and power users. Both of these technologies will eventually reach their peak and fade out, but for now it's what we have.
