IP Over SCSI?
morzel asks: "One of the advantages of SCSI-based systems is that a plethora of devices can exist on the same high-bandwidth bus, including multiple host adapters - at least, that's the theory. While it seems pretty obvious to me to use this as a low-latency/high-bandwidth interconnect between a small number of hosts, I've never seen an actual implementation of such a system. Do these, preferably IP-based, systems actually exist? I'm not in need of a Beowulf-style cluster just yet (I don't have an application for them) but I am interested in the possible use of SCSI as a _fast_ interconnect for small numbers of load-balancing machines in a cluster. A combination with the Linux Virtual Server Project could create a killer solution... Right? Thanks for all input/comments on this!" (Read on...)
"I would think these kinds of interconnects would be ideal for small clusters, or larger clusters where groups of eight nodes could be interconnected with each other, with one node acting as the master node. This would probably provide more bandwidth and less latency than ethernet-based solutions, and on the other hand could be a lot cheaper than special hardware."
Re:It exists on the Mac (Score:1)
Here's how it works (just the basics): you plug the box into your Mac's SCSI port (desktop or PowerBook) and into the ADB port with a pass-through cable (it draws its power from ADB). The other side has a standard RJ45 jack. You load its Ethernet driver (an extension) and set your TCP/IP settings just as you would with any Ethernet adapter.
Intel acquired Dayna and provides a little information about those products.
http://support.intel.com/support/dayna/scsitrbl
Re:Multiple SCSI adapters (Score:1)
Open/close on files usually has significant latency, but maybe that is not due to the SCSI interface. I guess that is why he asked about IP on SCSI....
SCSI adapters aren't that cheap, but 100 Mbps switches aren't either. Are there any gigabit NICs available yet?
Re:Sounds kind of limited to me (Score:1)
In a clustered environment, latency is one of the most important factors in getting the performance cranked up, especially when under load.
Bandwidth is not really that important, but getting the message across the wire ASAP is something else. That's why Myrinet solutions are being used - not just bandwidth, but ultra-low latency.
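To see why the latency term dominates for cluster traffic, model transfer time as latency plus size over bandwidth. The link numbers below are illustrative assumptions (roughly Fast Ethernet vs. a Myrinet-class interconnect), not measurements:

```python
# Illustrative model: total time for one message is roughly
#   latency + size / bandwidth
# so for small messages the fixed latency dominates, no matter
# how fat the pipe is.

def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """Time (seconds) to move one message across a link."""
    return latency_s + size_bytes * 8 / bandwidth_bps

# Assumed, order-of-magnitude figures: 100 Mbit Ethernet with ~100 us
# latency vs. a low-latency interconnect with ~10 us and 1 Gbit/s.
msg = 512  # bytes: a typical small cluster message
eth = transfer_time(msg, 100e-6, 100e6)   # ~141 us
fast = transfer_time(msg, 10e-6, 1e9)     # ~14 us
print(f"Ethernet: {eth * 1e6:.1f} us, low-latency link: {fast * 1e6:.1f} us")
```

With these assumed numbers, a tenfold bandwidth increase barely helps a 512-byte message; the tenfold latency cut is what delivers the speedup.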
Okay... I'll do the stupid things first, then you shy people follow.
digital (Score:1)
An issue to consider is host adaptor ID. Most of the SCSI hosts I come across don't make it obvious how one would change the ID. If you have 8 hosts, all responding to SCSI ID 7, you are going to have problems.
Another is termination. Unless you use a hub (yes, SCSI hubs do exist), you have to set up the network as a bus, with only the end hosts terminated.
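The ID constraint above is simple enough to sanity-check in a few lines. This is a minimal sketch, assuming narrow SCSI's 3-bit ID space (0-7, with 7 the conventional adapter default):

```python
# Every device on one SCSI bus, host adapters included, needs a
# unique ID (0-7 on narrow SCSI; adapters usually default to 7).

def check_bus_ids(ids):
    """Return the set of IDs that appear more than once on a bus."""
    seen, conflicts = set(), set()
    for i in ids:
        if not 0 <= i <= 7:
            raise ValueError(f"invalid narrow-SCSI ID: {i}")
        if i in seen:
            conflicts.add(i)
        seen.add(i)
    return conflicts

# Eight hosts all left at the factory default of 7: a conflict.
print(check_bus_ids([7] * 8))    # ID 7 collides
print(check_bus_ids(range(8)))   # one ID per host: no conflicts
```

In practice this means at most eight hosts per narrow bus, and every adapter but one must be moved off its factory default before the bus will arbitrate cleanly.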
Re:Sounds kind of limited to me (Score:1)
I looked for information on how to do this without luck... just a couple of Deja entries of 'I did it, why can't you?'
Oh, well...
Re:It exists on the Mac (Score:1)
Anyone hear of IP over FireWire? (Score:1)
Re:Sounds kind of limited to me (Score:1)
Re:Sounds kind of limited to me (Score:1)
160M_B_ps Ultra160 SCSI = 160 megabytes per second transfers.
Theoretical, of course; your mileage may vary.
Of course, your 100 Mbit Ethernet can go 100 m; I believe SCSI is limited to about 12 m without repeaters.
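The megabytes-vs-megabits distinction in the post above is worth spelling out. A quick back-of-the-envelope comparison, using the theoretical peaks only (real throughput on either medium would be lower):

```python
# Ultra160 SCSI is specified in megaBYTES per second;
# Fast Ethernet in megaBITS per second.

ultra160_MBps = 160          # MB/s, theoretical bus maximum
fast_eth_MBps = 100 / 8      # 100 Mbit/s -> 12.5 MB/s

# Time to move a hypothetical 1 GB dataset over each link,
# ignoring all protocol overhead.
gb = 1024  # MB
print(f"Ultra160: {gb / ultra160_MBps:.1f} s")   # 6.4 s
print(f"100 Mbit: {gb / fast_eth_MBps:.1f} s")   # 81.9 s
```

So on paper the SCSI bus is 12.8x faster; the trade, as noted, is cable length measured in meters rather than hundreds of meters.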
Resource List (Score:1)
http://ume.med.ucalgary.ca/usenet/Solaris/0336.
Sounds kind of limited to me (Score:1)
Re:Sounds kind of limited to me (Score:1)
It exists on the Mac (Score:2)
-----
References (Score:2)
--Bob
Mach had it... (Score:2)
So, yes, it's been done. It's even been done in the open.
It's hacks like this, and the ability to have multiple ethernet interfaces (think: switched private Gb ethernet) that make me wonder just why in the hell people buy proprietary cluster solutions (DEC's memory channel - 40MB/sec.) when open standards are quite possibly better, and certainly less expensive.
Makes me wanna puke.
--Corey
Re:Resource List (Score:2)
Hmm, it is worth mentioning that they are currently running positive discrimination - if you are using most common Web browsers under Windows (with the notable exception of Lynx), you will be refused access to the site.
--
Linux has it... (Score:2)
http://www-internal.alphanet.ch/archives/local/
It would seem to me that someone already has this working in an experimental stage.
-CC
Re:It exists on the Mac (Score:2)
The Asante model uses a 9V power cube, and will tear up a 3C509 any day of the week... They also made an Ethernet-SCSI bridge, allowing you to use standalone SCSI devices over an Ethernet connection to another bridge. Drive sharing in HW...
I only wish I had Linux/x86 drivers...
Re:It (sort of) exists on the Mac (Score:2)
So yes, and we were offtopic.
Re:Multiple SCSI adapters (Score:2)
Yeah, but they're expensive as hell - Multiwave has them for $300. And a gigabit switch is going to be a pretty hefty price too.
100 Mbit switches aren't bad. Netgear makes a nice 8-port 10/100 switch (called the FS108, IIRC) for about $90. I'm planning on getting one sometime this summer, actually: I don't see much need for anything too much faster (though IP over SCSI would be a pretty cool hack).
Ethernet == high latency for PVM-ish clustering (Score:2)
The coolness factor would be high, and it would spare you a secondary Ethernet switch hosting the message-passing traffic to keep it off of the primary general-purpose network.
TCP/IP as well as Ethernet are general-purpose networking solutions, real workhorses. However, high-performance cluster-based parallel programming is not a general-purpose use -- it benefits the most from a communications path that is optimized for a constant stream of high-volume, relatively small messages from any one node to any other. Sort of a networking nightmare, eh? Sort of like how a Usenet feed is a general filesystem's worst nightmare -- it uses the underlying mechanism (transport mechanism in PVM's case: TCP/IP/Ethernet; filestore mechanism in Usenet's case: ufs, ext2) in a manner that goes against the grain of the optimization assumptions made by the underlying layers.
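The "constant stream of small messages" problem can be sketched with a simple cost model. The per-message overhead and bandwidth below are assumed, illustrative figures, not measurements of any real cluster:

```python
# Rough sketch of why many small messages go "against the grain":
# the fixed per-message cost (syscall + protocol + wire latency)
# swamps the payload cost. All numbers here are assumptions.

PER_MSG_OVERHEAD_S = 150e-6   # assumed fixed cost per message
BANDWIDTH_BPS = 100e6         # Fast Ethernet, theoretical

def stream_time(n_msgs, msg_bytes):
    """Total time to send n_msgs messages of msg_bytes each."""
    return n_msgs * (PER_MSG_OVERHEAD_S + msg_bytes * 8 / BANDWIDTH_BPS)

payload = 1_000_000  # one megabyte of data to move, either way
small = stream_time(payload // 100, 100)          # 10,000 x 100 B
batched = stream_time(payload // 100_000, 100_000)  # 10 x 100 KB
print(f"small messages: {small:.3f} s, batched: {batched:.3f} s")
```

Under these assumptions the same megabyte takes an order of magnitude longer as tiny messages, which is exactly the pattern tightly-coupled parallel code generates and general-purpose stacks punish.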
I maintain our department's 8-node Sparc PVM cluster. We use hand-me-down machines that get displaced by other upgrades. We use it for teaching parallel programming, so performance isn't a great concern for the future of humanity, but when the students write code that doesn't use the message-passing medium effectively (currently a dedicated 100 Mbit switch), they get a bit discouraged when their code runs better on a single machine than on the cluster. Oh well -- part of the learning process!
Re:Resource List (Score:2)