Hardware

Can Linux Support a PCI Expansion Chassis?

Snowfox asks: "Between having multiple network cards, video cards, SCSI controllers, audio, etc., I'm always hurting for expansion slots. Five or six just aren't enough for an everything box. Several companies offer PCI expansion chassis. I see these vendors on the show floor at the Game Developers Conference and SIGGRAPH every year, but the prices are high and none of the vendors can tell me whether these support Linux. Has anyone had any dealings with one of these units?"

"Magma has some sweet-looking units which even support 64-bit PCI, Mobility has some units which are far cheaper, and DigiDesign has a 7- and 14-slot unit as well. All three claim to be Plug-And-Play for Windows and Mac, but as on the show floor, none have responded to inquiries about Linux support or which chipset is used to bridge the busses.

I know that Linux supports the PCI-PCI bridges found on motherboards, commonly used for on-board network, sound, and drive controllers, but what about these external offerings?"

  • I'd also be curious to know -

    Google turned up a few reviews of these chassis. They show a slight reduction in the performance of an Adaptec SCSI controller when used in the external chassis.

    Is there some sort of extra protocol overhead involved in accessing a remote PCI bus, or does it take an extra cycle to respond or ...?

    • The external box is connected to the motherboard via a PCI-to-PCI bridge. Any PCI bus can have several "slave" bridges, all connected back to the master (host) bridge that actually interfaces with the motherboard chipset. The reason for the reduction in performance is that the bridge must act as a regular device on the PCI bus.

      Any device that needs high bandwidth should be placed as close as possible to the primary PCI bus. A device strung off a bridge must first arbitrate for the bridge's secondary bus, and then the bridge must arbitrate for the primary bus; that extra arbitration takes time, hence the slowdown. The only reliable way to increase performance across the bridge is to tweak the bus arbitration (latency timers and priorities) so that certain devices, the bridges in this case, get a larger share of the primary bus. No easy task, however.
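
      A quick way to see this layout on a Linux box is lspci -tv, which draws the bus tree. For illustration, here is a minimal Python sketch (an assumption on my part: it relies on a kernel that exposes /sys/bus/pci, plus the standard 0x0604 class code for PCI-to-PCI bridges) that lists every PCI function and flags the bridges; anything whose address is not on bus 00 sits behind at least one of them:

          #!/usr/bin/env python3
          # Sketch: list PCI functions and flag PCI-to-PCI bridges (class 0x0604xx).
          # Assumes a kernel with sysfs mounted at /sys.
          import os
          SYSFS_PCI = "/sys/bus/pci/devices"
          for addr in sorted(os.listdir(SYSFS_PCI)):            # e.g. 0000:02:0d.0
              with open(os.path.join(SYSFS_PCI, addr, "class")) as f:
                  cls = int(f.read().strip(), 16)               # e.g. 0x060400
              kind = "PCI-to-PCI bridge" if (cls >> 8) == 0x0604 else "device"
              bus = addr.split(":")[1]                          # bus segment this function is on
              print(f"{addr}  bus {bus}  class 0x{cls:06x}  {kind}")

      Every device the sketch reports on a non-zero bus is reached through at least one bridge, and that crossing is where the extra arbitration latency comes from.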

  • I mean, an everything box might be cool, but if you're using more than 5 or 6 PCI slots, you're probably hitting the limits of shared PCI bus bandwidth anyway. Why not split the functionality out into discrete machines?
    • I agree. I looked into a lot of different PC options to get a lot of things into one box, but after reading an article in Maximum PC about three 'Dream Machines' (a Gaming Machine, a Content Creation Box and an Entertainment Machine), I saw that you really can't get the full functionality.

      I personally couldn't say whether something like this would work in Linux, though, seeing as a major accident involving a sister and numerous other factors has destroyed a great number of my components. As someone else said, though, I do think it acts just like another PCI device, so it could work, even if you did have to slop together some drivers. SourceForge, anyone?

    • I mean, an everything box might be cool, but if you're using more than 5 or 6 PCI slots, you're probably hitting the limits of shared PCI bus bandwidth anyway. Why not split the functionality out into discrete machines?

      I'll preface by saying that there's no good reason to be doing any of this, but it's relatively fun:

      I run multiple flat-panel displays, each of which gets its own GeForce card (one AGP GF3, several PCI GF2MX). The combination of some of these cards and a low-end chassis should be both cheaper and faster than the 4-head PCI DVI card options. (Why multiple screens? Four $300 1024x768 screens plus Xinerama give me a nice all-digital 2048x1536 LCD for $1200 instead of five grand for an analog LCD.)

      I'd also like to get a few of the $50 gigabit Ethernet cards and use crossover cables between my workstations instead of shelling out a grand for a multi-port gigabit switch. That means another two slots gone to reach the two other workstations, on top of the uplink network card.

      I make games for a living, and I like running the game system dev stations via a TV card instead of keeping a TV around. I also like to run the TV while I work, so I'd like to see if I could get two bttv cards going. If possible, I'd like to use the bttv bus mastering support and have the TV cards directly DMAing to video card overlays on the remote PCI bus, thereby gaining performance from the PCI segmentation.

      • I absolutely adore the idea of TV-to-video overlay over an independent bus, and technically it should be possible, but I doubt it would work out.

        ...unless the breakout box's bridge chip is super-intelligent and knows to keep traffic off the main bus, which I suppose it could be...
        • I absolutely adore the idea of TV-to-video overlay over an independent bus, and technically it should be possible, but I doubt it would work out.

          ...unless the breakout box's bridge chip is super-intelligent and knows to keep traffic off the main bus, which I suppose it could be...

          If there's negotiation overhead for the processor to reach the remote bus, it would make sense that this wouldn't be a passive/transparent link. I already know that the bttv chipset supports direct communication with other PCI cards, so I'm actually quite hopeful about this one.
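
          Whether that buys anything depends on the capture card and the target video card actually landing on the same secondary bus. Here is a rough way one might check from Linux; the sysfs layout is assumed, and the two device addresses below are made-up placeholders to be swapped for real ones from lspci -D:

              #!/usr/bin/env python3
              # Sketch: do two PCI functions share the same secondary bus?
              # If they sit behind the same upstream bridge, peer-to-peer writes
              # between them can stay on that bus instead of crossing the primary.
              # Assumes sysfs; both addresses are hypothetical examples.
              import os
              CAPTURE = "0000:05:08.0"   # hypothetical bttv capture card
              DISPLAY = "0000:05:09.0"   # hypothetical PCI video card
              def pci_chain(addr):
                  """Chain of PCI addresses from the root bus down to addr."""
                  link = f"/sys/bus/pci/devices/{addr}"
                  if not os.path.exists(link):
                      raise SystemExit(f"{addr} not present; pick addresses from lspci -D")
                  real = os.path.realpath(link)
                  # Keep only path components shaped like domain:bus:slot.function
                  return [p for p in real.split("/") if p.count(":") == 2 and "." in p]
              # Same parent chain (everything except the leaf) means same bus segment.
              if pci_chain(CAPTURE)[:-1] == pci_chain(DISPLAY)[:-1]:
                  print("Same secondary bus: DMA between them can stay behind the bridge.")
              else:
                  print("Different segments: overlay traffic would cross the upstream bus.")

          (The same check can be done by eye with lspci -tv.)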

  • A PCI-to-PCI bridge is the same whether it's on the motherboard or off; it's just another PCI device (albeit a special one). As long as it's a standard PCI bridge, it should work just like the "second" PCI bus found on a lot of less expensive dual-PCI-bus motherboards. I wouldn't put a video card on the other side of it, though :)
  • The answer is yes. (Score:5, Informative)

    by Phaid ( 938 ) on Friday November 30, 2001 @11:33AM (#2636499)
    Linux supports PCI-to-PCI bridges, which is what these devices use.

    The bigger problem is that the BIOS on your PC must support at least three levels of PCI bridging for these devices to work. Most chassis of this type use one bridge chip on the PCI card that plugs into your PC, and then the box itself has several more bridge chips, each of which controls a number of the PCI slots.
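
    One way to sanity-check this on a running system is to look at how deeply the kernel's PCI tree is already nested. A short sketch, assuming sysfs is available, that reports the deepest bridge nesting it finds:

        #!/usr/bin/env python3
        # Sketch: report the deepest PCI bridge nesting the kernel has enumerated.
        # Assumes sysfs mounted at /sys.
        import os
        SYSFS = "/sys/bus/pci/devices"
        def depth(addr):
            real = os.path.realpath(os.path.join(SYSFS, addr))
            # Components like 0000:02:1f.0 are PCI functions; all but the last
            # one in the chain are bridges upstream of this device.
            chain = [p for p in real.split("/") if p.count(":") == 2 and "." in p]
            return len(chain) - 1
        deepest = max(os.listdir(SYSFS), key=depth)
        print(f"Deepest device: {deepest}, {depth(deepest)} bridge level(s) upstream")

    If the firmware cannot assign bus numbers behind the chassis, the slots in it will typically just be missing from this listing (and from lspci) rather than misbehaving.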
  • There are several motherboards designed for servers that have nearly everything you describe onboard already: 1 or 2 Ethernet ports, AGP video, a sound chip, even SCSI. Tyan makes one and so does ASUS.

    Here is just one example:
    http://tyan.com/products/html/thunderhe_p.html

    Blekko
  • The Magma website reports that their PCI systems work under Solaris. It's reasonable to expect they would also work under Linux, since each is, of course, just a simple PCI-PCI bridge.
    - hyperbolix
  • Over the years I've used boards like:

    with NetBSD and FreeBSD, usually paired with some Intel processor board, and they work well. Given how cleanly they work, I suspect Linux would be fine too. The typical usage was to fill the 8 slots with 4-port Ethernet cards and combine that with DummyNet, letting the processor board netboot off its own Ethernet port and then act as a packet delay line of sorts. Do a search on Google for 'PICMG' to find a whole range of these boards. One word of warning: you may need some special userland hacks for the watchdog (a minimal feeder is sketched below), which can be quite a pain, and you will almost certainly need to learn about serial consoles. My experience with the latter and Linux is not too good :-)

    Dw
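
    For the watchdog, the usual userland hack is a tiny daemon that keeps petting /dev/watchdog so the board does not reset itself. A minimal sketch, assuming the standard Linux watchdog character device and a driver that honors the "magic close"; the 10-second interval is an arbitrary choice and must stay under the hardware timeout:

        #!/usr/bin/env python3
        # Sketch: minimal userland watchdog feeder.
        # Assumes the standard Linux /dev/watchdog interface.
        import time
        with open("/dev/watchdog", "wb", buffering=0) as wd:
            try:
                while True:
                    wd.write(b"\0")    # any write pets the watchdog
                    time.sleep(10)     # keep this below the hardware timeout
            finally:
                wd.write(b"V")         # "magic close": ask the driver to disarm

    Starting something like this from the init scripts right after boot is usually enough to keep such a board alive while the real services come up.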
