Can Linux Support a PCI Expansion Chassis?
Snowfox asks: "Between having multiple network cards, video cards, SCSI controllers, audio, etc, I'm always hurting for expansion slots. Five or six just aren't enough for an everything box. Several companies offer PCI expansion chassis. I see these vendors on the show floor at Game Developers Conference and Siggraph every year, but the prices are high and none of the vendors can tell me whether these support Linux. Has anyone had any dealings with one of these units?"
"Magma has some sweet-looking units which even support 64-bit PCI, Mobility has some units which are far cheaper, and DigiDesign has a 7- and 14-slot unit as well. All three claim to be Plug-and-Play for Windows and Mac, but, as on the show floor, none have responded to inquiries about Linux support or about which chipset is used to bridge the buses.
I know that Linux supports PCI-to-PCI bridges on the motherboard, such as those commonly used for on-board network, sound, and drive controllers, but what about these external offerings?"
Additional question... (Score:2)
Google turned up a few reviews of these chassis. They show a slight reduction in the performance of an Adaptec SCSI controller when used in the external chassis.
Is there some sort of extra protocol overhead involved in accessing a remote PCI bus, or does it take an extra cycle to respond or ...?
Re:Additional question... (Score:3, Informative)
The external box is connected to the motherboard via a PCI Bridge. Any PCI bus can have several "slave" bridges all connected back to the master bridge that actually interfaces with the motherboard chipset. The reason for the reduction in performance is that the bridge must act as a regular device on the PCI bus.
Any devices that need high bandwidth should always be placed nearest the master PCI controller. Devices strung off a bridge must first negotiate to talk on the bridge's PCI bus, and then the PCI bridge must negotiate to talk on the master PCI bus. These negotiations get complicated and take time; hence the slowdown. The only reliable way to increase performance across the bridge is to tweak the bus arbitration so that certain devices (the bridges) receive higher priority. No easy task, however.
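The extra arbitration per bridge hop can be sketched with a toy model. This is purely illustrative: the device names, the topology, and the idea of counting "arbitration rounds" are my own simplification, not anything measured on real hardware.

```python
# Toy model of PCI arbitration cost across bridges (illustrative only;
# all device/bridge names below are made up).
#
# Each device is listed with the chain of PCI-PCI bridges between it
# and the host bridge.  A device on the root bus crosses no bridge.
TOPOLOGY = {
    "scsi_on_root_bus":  [],                          # directly on bus 0
    "scsi_in_chassis":   ["chassis_uplink_bridge"],   # one bridge away
    "card_in_deep_slot": ["chassis_uplink_bridge",
                          "chassis_slot_bridge"],     # two bridges away
}

def arbitration_rounds(device: str) -> int:
    """Every bus along the path requires its own arbitration, so a
    device behind k bridges negotiates on k + 1 buses in total."""
    return len(TOPOLOGY[device]) + 1

for dev in sorted(TOPOLOGY):
    print(dev, "->", arbitration_rounds(dev), "arbitration round(s)")
```

The model matches the observation above: the SCSI controller moved into the chassis pays for one extra round of negotiation, which is consistent with the slight benchmark drop mentioned earlier in the thread.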
To answer the question more directly... (Score:1)
Good luck finding any cheap ones, though; try eBay [ebay.com] I guess.
how about using more than one box? (Score:2, Insightful)
Re:how about using more than one box? (Score:1)
I personally couldn't say whether something like this would work in Linux, seeing as how a major accident involving a sister and numerous other factors destroyed a great number of my components. As someone else said, though, I do think it acts just like another PCI device, so it could work, even if you did have to slop together some drivers. SourceForge, anyone?
Re:how about using more than one box? (Score:3, Informative)
I'll preface by saying that there's no good reason to be doing any of this, but it's relatively fun:
I run multiple flat panel displays, each of which gets its own GeForce card (one AGP GF3, several PCI GF2MX). The combination of some of these cards and a low-end chassis should be both cheaper and faster than the 4-head PCI DVI card options. (Why multiple screens? Four $300 1024x768 screens and Xinerama give me a nice all-digital 2048x1536 LCD for $1200 instead of five grand for an analog LCD.)
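The arithmetic behind that Xinerama setup is simple enough to check. A minimal sketch, with the grid shape (2x2) inferred from the resolutions given in the post:

```python
# Combined virtual desktop size for a grid of identical panels under
# Xinerama.  Panel size comes from the post; the 2x2 arrangement is
# inferred from the quoted 2048x1536 total.

def desktop_size(cols: int, rows: int, panel_w: int, panel_h: int):
    """Xinerama simply tiles the screens, so the virtual desktop is
    the grid dimensions times the per-panel resolution."""
    return cols * panel_w, rows * panel_h

w, h = desktop_size(2, 2, 1024, 768)
print(f"{w}x{h}")  # 2048x1536
```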
I'd also like to get a few of the $50 gigabit Ethernet cards and use crossover cables between my workstations instead of shelling out a grand for a multi-port gigabit switch. That means another two slots gone to reach the two other workstations, on top of the network card I still need for the uplink.
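The slot cost of that switchless approach grows quickly with the number of machines. A quick sketch of the full-mesh arithmetic (the function names are mine, just for illustration):

```python
# Slot cost of replacing a gigabit switch with point-to-point
# crossover links: a full mesh of n workstations needs n - 1 NICs in
# each box, plus whatever card still handles the uplink to the LAN.

def nics_per_box(workstations: int) -> int:
    """Each box needs a direct link to every other box."""
    return workstations - 1

def total_crossover_cables(workstations: int) -> int:
    """One cable per unordered pair of workstations."""
    return workstations * (workstations - 1) // 2

print(nics_per_box(3))            # 2 slots per box, as in the post
print(total_crossover_cables(3))  # 3 cables
```

For three workstations this matches the "two slots gone" in the post, but at five machines it is already four extra cards per box, which is exactly why the expansion chassis starts to look attractive.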
I make games for a living, and I like running the game system dev stations via a TV card instead of keeping a TV around. I also like to run the TV while I work, so I'd like to see if I could get two bttv cards going. If possible, I'd like to use the bttv bus mastering support and have the TV cards directly DMAing to video card overlays on the remote PCI bus, thereby gaining performance from the PCI segmentation.
Re:how about using more than one box? (Score:1)
...unless the breakout box's bridge chip is super-intelligent and knows to keep traffic off the main bus, which I suppose it could be...
Re:how about using more than one box? (Score:2)
If there's negotiation overhead for the processor to reach the remote bus, it would make sense that this wouldn't be a passive/transparent link. I already know that the bttv chipset supports direct communication with other PCI cards, so I'm actually quite hopeful about this one.
Should work... (Score:2)
The answer is yes. (Score:5, Informative)
The bigger problem is that the BIOS on your PC must support at least 3 levels of PCI bridging for these devices to work. Most of these types of chassis use one bridge chip on the PCI card that plugs into your PC, then the box itself has several more bridge chips, each of which control a number of the PCI slots.
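On a running Linux box you can get a feel for how deep the bridge hierarchy goes by looking at the resolved device paths under `/sys/bus/pci/devices` (or the tree from `lspci -t`). A minimal sketch of counting bridge levels from such a path; the sample path below is fabricated for illustration, not taken from real hardware:

```python
# Count PCI-PCI bridge levels from a sysfs-style device path.  On
# Linux, os.path.realpath("/sys/bus/pci/devices/<addr>") yields a path
# whose components look like "dddd:bb:dd.f"; every such component
# before the final one (the device itself) is a bridge the device
# sits behind.  The example path here is made up.
import re

_PCI_ADDR = re.compile(r"^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$")

def bridge_depth(sysfs_path: str) -> int:
    """Number of PCI-PCI bridges between the device and the root bus."""
    parts = [p for p in sysfs_path.split("/") if _PCI_ADDR.match(p)]
    return max(len(parts) - 1, 0)

# Fabricated example: a card in an expansion chassis, two bridges deep
# (host-side bridge on bus 0, then a slot bridge inside the chassis).
path = "pci0000:00/0000:00:1e.0/0000:05:00.0/0000:06:04.0"
print(bridge_depth(path))  # 2
```

If the depth reported for cards in the chassis exceeds what the BIOS can enumerate, the devices behind the deeper bridges simply won't show up, which matches the bridging-level caveat above.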
Who needs PCI? It's on the Motherboard (Score:1)
Here is just one example:
http://tyan.com/products/html/thunderhe_p.html
Blekko
It should work (Score:1)
- hyperbolix
PCI Backplanes / PICMG (Score:1)