Hardware

Firewire or Gigabit Ethernet? 104

schvenk asks: "Firewire (IEEE 1394) has been accepted as a standard for peripherals, from hard drives to CD-RW drives to digital video cameras. It's a 400 Mbps technology. At the same time, many machines are shipping with Gigabit Ethernet, a 1000 Mbps equivalent of a more widely accepted standard. I'm not a hardware guy, but at first glance it would seem more efficient to eliminate Firewire altogether and equip peripherals with Ethernet ports, ultimately moving all wired communication to a unified standard. Am I missing something?"
This discussion has been archived. No new comments can be posted.

Firewire or Gigabit Ethernet?

  • Well... (Score:4, Informative)

    by jpt.d ( 444929 ) <abfall&rogers,com> on Wednesday January 23, 2002 @12:38PM (#2888499)
    Google found this:
    http://www.unibrain.com/products/ieee-1394/fw_vs_gbit.htm

    I would also like to point out the connectors. I would assume firewire was made partly as competition to usb. Thus it would be relatively easy to assume that firewire carries more current to power some lower powered devices.

    Ethernet isn't designed to power anything. I imagine it only carries enough power to carry the signal for the distances involved.

    There's also the cost of making hubs. With ethernet you must worry about ip addresses and routing all that information. I do not believe firewire would require this information to be dealt with in such a complicated manner.

    So firewire is probably lower cost.

    Regards,
    Jeffrey Drake
    • Re:Well... (Score:2, Informative)

      "I would also like to point out the connectors. I would assume firewire was made partly as competition to usb. Thus it would be relatively easy to assume that firewire carries more current to power some lower powered devices."

      Yes, this is true. LaCie, for example, ships a portable hard drive [lacie.com] that can be powered by the Firewire bus.
    • Ethernet != IP (Score:4, Informative)

      by john@iastate.edu ( 113202 ) on Wednesday January 23, 2002 @01:06PM (#2888664) Homepage
      With ethernet you must worry about ip addresses and routing all that information.
      IP is just the most popular protocol layered on top of ethernet -- if you were using ethernet to talk to disk drives for example, there is no reason you would *have* to use IP -- you could just talk to them directly via their MAC-addresses or layer some other protocol on top of ethernet.

      On the other hand, ARP, IP, UDP, and DHCP are all well-understood protocols so you might well decide to do it that way.
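To make that concrete, here is a sketch (in Python, with made-up MAC addresses and an invented "disk" payload; 0x88B5 is one of the EtherTypes the IEEE reserves for local experimental use) of how a command could be framed directly on Ethernet with no IP layer at all:

```python
import struct

def build_frame(dst_mac, src_mac, ethertype, payload):
    """Pack a bare Ethernet II frame: 6-byte dst MAC, 6-byte src MAC, 2-byte EtherType."""
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype) + payload

# Hypothetical disk protocol riding directly on Ethernet -- no IP anywhere.
DISK_ETHERTYPE = 0x88B5  # "Local Experimental EtherType 1"

drive_mac = bytes.fromhex("02aabbccddee")  # made-up, locally administered MACs
host_mac = bytes.fromhex("02aabbccddef")
frame = build_frame(drive_mac, host_mac, DISK_ETHERTYPE, b"READ block=42")

# The first 14 bytes are the Ethernet header; everything after is our payload.
dst, src, etype = struct.unpack("!6s6sH", frame[:14])
```

Actually putting such a frame on the wire would need a raw socket (and root), but the point stands: the frame addresses the drive by MAC, and no IP stack is involved.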

      • NetBEUI -- if it weren't being killed at the moment, it would be the perfect protocol for auto-configuring peripherals over Ethernet.
      • On the other hand, ARP, IP, UDP, and DHCP are all well-understood protocols so you might well decide to do it [use IP to talk to disks] that way. Until some idiot script kiddy launches a DDoS attack against the IP of your disk. ;)
        • and of course when dealing with critical data you would want to use a protocol such as UDP to ensure a complete, error-free transfer of your data.
          • Just because you use UDP doesn't mean you can't get a complete, reliable transmission. NFS and AFS both use UDP. The protocol has the same checksums on each packet as TCP, so you have some assurance that packets that you receive are okay.

            What you probably mean is that UDP makes no guarantees as to whether a packet is actually delivered or not. There is no reason why an application can't implement reliable transmission itself and resend packets if it hasn't received an acknowledgement. If you don't want/need all of the "features" of TCP (congestion control, eventual delivery with no particular timing guarantees*, significant connection setup time, etc...), this might be the way to go.

            * No, UDP doesn't make timing guarantees either, but your application can have greater control over a timeout that it implements.
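As a sketch of that idea (the one-byte sequence number in front of the payload is an invented wire format, not any real protocol), here is a stop-and-wait retransmit loop over plain UDP sockets on loopback:

```python
import socket
import threading

def send_reliable(sock, dest, seq, payload, timeout=0.5, retries=5):
    """Stop-and-wait reliability on top of UDP: resend until a matching ACK arrives."""
    sock.settimeout(timeout)
    packet = bytes([seq]) + payload
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(16)
        except socket.timeout:
            continue  # datagram or ACK was lost: retransmit
        if ack == b"ACK" + bytes([seq]):
            return True
    return False

def ack_receiver(sock, result):
    """Toy receiver: deliver one datagram and acknowledge its sequence number."""
    data, addr = sock.recvfrom(2048)
    result["data"] = data[1:]
    sock.sendto(b"ACK" + bytes([data[0]]), addr)

# Demo on loopback with two sockets in one process.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.bind(("127.0.0.1", 0))

result = {}
t = threading.Thread(target=ack_receiver, args=(rx, result))
t.start()
ok = send_reliable(tx, rx.getsockname(), seq=1, payload=b"block 42")
t.join()
rx.close()
tx.close()
```

This is the essence of what NFS-over-UDP does: the application, not the transport, decides when to retransmit and how long to wait.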
    • Re:Well... (Score:5, Informative)

      by Paranoid ( 12863 ) <my-user-name@meuk.org> on Wednesday January 23, 2002 @01:07PM (#2888672)

      I would also like to point out the connectors. I would assume firewire was made partly as competition to usb. Thus it would be relatively easy to assume that firewire carries more current to power some lower powered devices.

      Firewire and USB both have the option of powering their peripheral devices. I'm not sure about USB, but with firewire, this is not a requirement. I've yet to find a firewire CardBus card which does supply power. I know the iPod requires a power supply to actually realize it's plugged into a host, though.

      Ethernet isn't designed to power anything. I imagine it only carries enough power to carry the signal for the distances involved.

      There are also standards for providing power over ethernet, but those are for 10BaseT and 100BaseTX. They work by providing power over another set of wires, since those two standards only use 4 of the 8 conductors in CAT5. 1000BaseT makes full use of all 8 conductors, making this unfeasible.

      There's also the cost of making hubs. With ethernet you must worry about ip addresses and routing all that information. I do not believe firewire would require this information to be dealt with in such a complicated manner.

      Ethernet hubs don't have to care about anything, they just rebroadcast. Ethernet switches don't care about anything past the MAC addresses in the frame header. Only IP routers care about IP addresses, subnets, etc. That's OSI layer 3. Hubs and switches operate below all of that, which is why you can run things like IPv6 and IPX on your network without having to go buy a new hub.

      Firewire hubs act like ethernet switches do: they route information between firewire hosts based on firewire addresses. They're similarly uncomplicated.

      If gigabit ethernet is becoming common in consumer devices, this is great, because prices will finally come down. Gigabit has typically been non-cost-effective. Firewire has been a consumer product all along, and although it's mostly had its market stolen away by USB (for the same reason 10BaseT devices are still common: performance is "good enough" and the price blows the competition away), it still has a lower price point than gigabit has in the past. I hope this changes, but I think it's still a bit overrated given that most commodity OSes I've seen can't even come up with enough raw data to come anywhere near filling this big a pipe.
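For what it's worth, the "switches only care about MACs" behaviour is simple enough to sketch in a few lines (a toy model, not any real switch's implementation):

```python
class LearningSwitch:
    """Toy layer-2 switch: learns source MACs, floods frames for unknown destinations."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC address -> port it was last seen on

    def forward(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port           # learn where the source lives
        if dst_mac in self.table:
            return {self.table[dst_mac]}        # known destination: one port
        return self.ports - {in_port}           # unknown: flood everywhere else

sw = LearningSwitch({1, 2, 3})
flood = sw.forward(in_port=1, src_mac="aa", dst_mac="bb")    # "bb" unknown: flood
learned = sw.forward(in_port=2, src_mac="bb", dst_mac="aa")  # "aa" was learned earlier
```

Note there is no IP address anywhere in the model; the switch works identically whether the frames above it carry IPv4, IPv6, or IPX.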

      • Ethernet switches don't care about anything past the MAC addresses in the frame header. Only IP routers care about IP addresses, subnets, etc.

        This depends on the switch. Some switches do or can be configured to route based on IP.

        • This depends on the switch. Some switches do or can be configured to route based on IP.

          If a switch starts routing, it ceases to be a switch and becomes a router instead.
      • Re:Well... (Score:3, Informative)

        by Matthew Weigel ( 888 )
        I've yet to find a firewire CardBus card which does supply power. I know the iPod requires a power supply to actually realize it's plugged into a host, though.

        Orange-Micro's does, with an optional AC adapter. I think the problem is passing that much power through the PCMCIA port, since it takes more power than USB (for which you can generally find powered PC Cards).

        A friend recently bought an iPod for her older PowerBook, and had a fun time figuring out how to get her shiny new Firewire card talking to it.

        Firewire has been a consumer product all along, and although it's mostly had its market stolen away by USB (for the same reason 10BaseT devices are still common: performance is "good enough" and the price blows the competition away)

        I beg to differ. The market for, e.g., Firewire CD-RWs is ramping up, while the market for USB CD-RWs appears to be slacking - my experience was that they were doing fine while Apple's consumer computers didn't have Firewire, but the gain is real enough (8,16,24, even 32x vs. 4x, for a CD-RW) that people want it.

        • Orange-Micro's does, with an optional AC adapter. I think the problem is passing that much power through the PCMCIA port, since it takes more power than USB (for which you can generally find powered PC Cards).

          Good to know. IBM's (part number 19K5680) don't. The only alternative I've found so far is to get a firewire hub which provides power itself - thanks for the heads-up.

          I beg to differ. The market for, e.g., Firewire CD-RWs is ramping up, while the market for USB CD-RWs appears to be slacking - my experience was that they were doing fine while Apple's consumer computers didn't have Firewire, but the gain is real enough (8,16,24, even 32x vs. 4x, for a CD-RW) that people want it.

          So, in this instance, USB's bandwidth is not "good enough". That's fair. In most cases I've seen (cameras, mostly), this is not the case.

          I hope Firewire takes off, as it already seems to have fewer substandard hardware problems than USB has. Of course, those are a direct result of consumer market popularity, but still...

            Good to know. IBM's (part number 19K5680) don't. The only alternative I've found so far is to get a firewire hub which provides power itself - thanks for the heads-up.

            That's actually what I recommended to my friend, because you might be able to find a battery-powered hub for times where you don't have an AC outlet.

            So, in this instance, USB's bandwidth is not "good enough". That's fair. In most cases I've seen (cameras, mostly), this is not the case.
            I hope Firewire takes off, as it already seems to have fewer substandard hardware problems than USB has. Of course, those are a direct result of consumer market popularity, but still...

            Still cameras, that's true. Camcorders, hard drives (yes including mp3 hard drives :), and CD-RW are good examples of things that can be more compelling with Firewire. The two of them complement each other very well, esp. for us laptop users.

    • Actually, Ethernet is now being revised to provide power over two of Cat 5's four pairs. It's called 802.3af and you can find information about it here [ieee.org]

      Currently, Cisco is making wireless 802.11b hubs with Inline Power over the Ethernet cable. The wireless hub will need only one physical cable to provide both power and network connectivity.

      I believe the main issue with Gigabit Ethernet is that the FireWire protocol was meant to control devices and so does bus arbitration and such, and that the Ethernet protocol (with its CSMA/CD for dealing with collisions of packets, collisions being something you wouldn't want in FireWire) deals more with non-deterministic network access.

      Now a token-based FireWire would be something else. Deterministic access that could scale. One of my favorite networking quotes is, "Ethernet works in practice, but not in theory"

      • One minor point: 10/100BaseT can have power over ethernet (PoE) but 1000BaseT cannot, since it uses all four pairs of wires for the bandwidth. A two-pair cross-connect cable works for 10/100BaseT, but it will not work for 1000BaseT; you need all four pairs.
      • Gigabit ethernet uses all 4 pairs of cat5. Power over ethernet uses the extra 2 pairs not used in 10 and 100 megabit. There are no unused pairs with gigabit ethernet.
    • It is doable; for my summer job between college semesters I installed ID card access door locks. There we used Cat 5 ethernet cables: 2 wires to power the card reader, 3 to transmit data to the receiver box, and 2 more to power the bolt lock to release.

      Also, current 10BaseT network cards don't even use all 8 wires of the Cat 5; they only use/need 4 (if you ever bought a Linksys card with an included cable, the cable only has 4 wires). Now I'm not sure if Gigabit Ethernet is the same way (being 4 wires only), but if needed, cards or new standards could be created to incorporate a current to be transmitted through the cable.
  • I haven't seen a machine with a gigabit port, unless you're referring to some of the larger-scale servers out there that require that amount of bandwidth capacity. As for why not just transition to gigabit and skip the firewire, I believe it comes down to cost. The gigabit setup is quite expensive. Go to Radio Shack to buy an optical cable and it'll run you $20. Gigabit ethernet uses the same media as optical cable. I'm sure there's a price issue even in the hardware that connects it all together as well.
    • Umm, actually it's quite possible to run Gigabit Ethernet on Cat5 UTP; it allows for a much smaller length between devices, but this isn't a problem for peripherals really. Gigabit hubs and switches are expensive, but I would think the need for them would disappear if each device had an in and out port and they were daisy-chained to form a bus. The actual ports are still expensive, but this kind of thing would be a great way of bringing down the price.

      The other thing is that Firewire can carry power, but no kind of Ethernet (AFAIK!) can do so, so it would require more wiring.
      • Firewire can carry power, but no kind of Ethernet

        The Cisco 7940 IP phone sitting on my desk runs off the power from the ethernet cable plugged into it (it's the only thing plugged into it), so it must carry some current. Whether that's enough to power anything more useful than a phone, I don't know, but USB hard drives and scanners generally come with an external power supply anyway.

        • But the Cisco phones don't use Gigabit ethernet (if they do, that's a BIG waste, even if they had video). Of the 8 wires in a cat5 cable, Gigabit uses all 8, whilst 100/10BaseT use 4. So the phone uses the other wires for power. With Gigabit, you wouldn't be able to do that.
      • I suppose another thing to worry about is setup: presuming we used DHCP to give the devices addresses, we need to run a server on the machine, so it's less plug-and-play than firewire is.
        Also, should we worry about security? I for one could live without having to deal with splo!ts for my external hdd or cd-writer...
    • All of Apple's G4 towers ship with Gigabit ethernet. They've been coming this way for the past year and a half. Unfortunately, I've never had the opportunity to use mine.. It _does_ run over standard Cat-5, though.
    • Try any recent pro-level macintosh, including the portables. They run 1000BaseT, and they use normal ethernet cables to do it.
    • Gigabit Ethernet doesn't just run over Fiber, it runs over CAT5 as well. Gigabit ethernet is a layer 2 standard, so it can theoretically go over any transmission medium, but I don't think it would be practical to use it over anything but cat 5 and fiber
  • I think one of the important points is how much clue is required for the setup of the two systems.

    With firewire, you just plug the device in, and the firewire protocol handles the details: broadcast, service announcements, etc. As a user, no setup, no extra services required; the firewire devices work it all out for themselves.

    With ethernet, presumably you're also going to use TCP/IP to address things, shift your data around etc. So, now you either need a dhcp server somewhere, or some manual configuration. Otherwise, how will this new device know what address space to talk on? Also, you then have issues with device discovery.

    The result - end user stuff gets firewire, as you plug it into one machine and "it just works(tm)". Don't ask about sharing it though.. Meanwhile, your business oriented products come with ethernet and a proper IP stack, an IT guy with "some clue(tm)" configures it (as needed), and several people can use it at once.

    So, what's missing for home use of ethernet and TCP/IP in all the devices? A standard, flexible resource discovery system (I know of a few in the works, none finished), and every home having a NATing, DHCPing DSL / cable modem router, so any boxes the user plugs in will be given an IP in the correct address space.
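The kind of resource discovery being asked for can be sketched as a toy broadcast protocol (the "WHO-IS-THERE?" message format is invented here; real systems like zeroconf are far more involved). This demo runs over loopback:

```python
import socket
import threading

def device(sock, name):
    """Toy peripheral: answers one discovery datagram with its name."""
    data, addr = sock.recvfrom(64)
    if data == b"WHO-IS-THERE?":
        sock.sendto(b"I-AM " + name, addr)

# Demo on loopback; a real version would send to the LAN broadcast address.
dev = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dev.bind(("127.0.0.1", 0))
host = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
host.bind(("127.0.0.1", 0))
host.settimeout(1.0)

t = threading.Thread(target=device, args=(dev, b"cd-writer"))
t.start()
host.sendto(b"WHO-IS-THERE?", dev.getsockname())
reply, _ = host.recvfrom(64)
t.join()
dev.close()
host.close()
```

The hard parts a real standard has to solve are exactly what this sketch ignores: naming conflicts, capability description, and security.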

    • So, whats missing for home use of ethernet and TCP/IP in all the devices? A standard, flexible resource discovery system (I know of a few in the works, none finished), and every home to have a NATing DHCPing DSL / cable modem router, so any boxes the user plugs in will be given an IP in the correct address space.
      IPv6 addresses both the NAT-problem and the DHCP-problem by a mechanism known as stateless autoconfiguration of IP(v6) addresses. Basically, an IPv6 node picks an address at random and broadcasts a message to see if anybody else has claimed that address. If so, it chooses another address at random and tries to claim that one instead. Since IPv6 has a very large address space, there won't be any need for NAT.

      There is similar stateless autoconfiguration stuff for IPv4, such as the Universal Plug-and-Play system that was being used by both Microsoft and Apple and is now being standardized by the zeroconf IETF working group [ietf.org].

      The problem with IPv4 is of course that NAT or proxying still is needed for global connectivity.
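A toy model of the stateless-autoconfiguration loop described above (a tiny 16-bit address space stands in for real 64-bit interface IDs, and a set-membership test stands in for the on-link duplicate-address probe):

```python
import random

def autoconfigure(claimed, attempts=8, bits=16):
    """Pick a random interface ID; retry if a neighbour already claims it.
    Toy model: a set lookup stands in for the neighbour-solicitation probe."""
    for _ in range(attempts):
        candidate = random.getrandbits(bits)
        if candidate not in claimed:  # "no neighbour answered the probe"
            claimed.add(candidate)
            return candidate
    raise RuntimeError("address space exhausted (won't happen with real IPv6)")

neighbours = {0x0001, 0x0002, 0x0003}  # addresses already claimed on the link
addr = autoconfigure(neighbours)
```

With a real 64-bit space the probability of even one collision is negligible, which is why the retry loop almost never runs more than once.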
      • Basically, an IPv6 node picks an address at random and broadcasts a message to see if anybody else has claimed that address. If so, it choses another address at random and tries to claim that one instead.

        OK, maybe it's just me, but doesn't this open up a big DoS possibility?

        A trojan (say on a Windows machine) could sit quietly listening for such requests, and NACK every one that comes along..

        Or is there a mechanism to prevent this?
        • OK, maybe it's just me, but doesn't this open up a big DoS possibility? A trojan (say on a Windows machine) could sit quietly listening for such requests, and NACK every one that comes along.. Or is there a mechanism to prevent this?
          This mechanism is used only on the local network. No autoconfiguration packets ever leave the network, and no routers will forward such packets onto the network.

          If the trojan machine is sitting on the local network, it can do all kinds of bad things anyway - such as flooding the network with random data. In general, it is impossible to guard against "bad" hosts on the local network.
        • The cDc (Cult of the Dead Cow) released a tool 2 (or 3) years ago that did something very similar. IIRC it used a technique like the one you described, but over SMB. So whenever a windows machine said "i wish to call myself server, are any of you chaps using that name", it always said "yes sir, that's my name, don't wear it out". I think it would also make itself the PDC: whenever an election for PDC came up, it said "I run windows 99999". I think it ran on windows and unix. Though this is all just a vague recollection of the cDc talk at Defcon in 2000.
  • horses for courses (Score:2, Informative)

    by NeonSpirit ( 530024 )
    IEEE 1394/FireWire/i.Link is a successor to serial connections and is primarily designed for communication between devices and a single host. It has mechanisms to guarantee bandwidth to individual devices but is generally for one transfer at a time. Ethernet is a network primarily designed for communication between different hosts. It is designed so that multiple hosts can be communicating simultaneously. It would be possible to create devices that talk Ethernet, iSCSI springs to mind, but it would take some setup, a DHCP server or fixed IP address etc. Firewire is plug'n'pray. Spirit
  • This is really a sickening question. Just because something sends a certain number of bits over a wire in a given timeframe doesn't mean it's equivalent to everything else with the same data rate.

    Firewire and Ethernet have two very different applications and are designed accordingly. Do you want to give your external hard drives, digital cameras, and iPods IP addresses? Do you want to have to worry about firewalling & routing for your iPod? How would you coordinate the caches of two different machines using the same disk? If you don't want to do that, do you want to worry about some sort of locking mechanism for the disk, to prevent concurrent access?

    Most importantly, just grow up. Silly benchmarks like bandwidth, clock speed, etc., are just useful for comparing objects IN THE SAME CLASS. Maybe /. will one day grow out of their "bandwidth/clockrate == penis size" mentality and actually worry about getting USEFUL PERFORMANCE out of their systems. Sheesh.

    • by Anonymous Coward
      Ethernet does not need to use TCP/IP.
    • ---blockquoth
      Firewire and Ethernet have two very different applications and are designed accordingly. Do you want to give your external hard drives, digital cameras, and iPods IP addresses? Do you want to have to worry about firewalling & routing for your iPod? How would you coordinate the caches of two different machines using the same disk? If you don't want to do that, do you want to worry about some sort of locking mechanism for the disk, to prevent concurrent access?
      ---snip
      Just a quick question, but why would you assume TCP/IP to be the protocol of choice?

      The question was "why not Ethernet instead of Firewire", not "let's use TCP/IP". Besides that, even the "problems" you show would not be hard to fix.

      ---snip
      Most importantly, just grow up. Silly benchmarks like bandwidth, clock speed, etc., are just useful for comparing objects IN THE SAME CLASS. Maybe /. will one day grow out of their "bandwidth/clockrate == penis size" mentality and actually worry about getting USEFUL PERFORMANCE out of their systems. Sheesh.
      ---snip
      whoa, calm down. _Point to Point_ Gigabit Ethernet in the real world is significantly faster than the current latest addition to ieee 1394, and that performance would be useful to me right now, and I'm sure others. In the near future, this extra speed will be useful performance to many more people (HDTV video capture from a camera and back out to an external drive perhaps?).
      • Bandwidth is a bit of a moot point here. If attached to a PCI bus, both Gb ether and firewire can pump more bits than the PCI bus can handle.

        If you want to look at appropriate alternate/existing technologies for this kind of stuff, why not look into a hot swappable SCSI or fibre channel etc.
    • Do you want to give your external hard drives, digital cameras, and iPods IP addresses?

      No. Just like I don't have to give my boxes IP addresses. It's called DHCP. Whee. The disk controller does DHCP for the drives. Simple solution.

      Do you want to have to worry about firewalling & routing for you iPod?

      Don't have to. I think your brain blockage is in assuming that your hard drive, iPod and digital camera would plug into the same hub you connect to AOL with. That's like assuming that just because your old digital camera and your 14.4 modem both use serial ports, that you have to route between them.

      How would you coordinate the caches of two different machines using the same disk? If you don't want to do that, do you want to worry about some sort of locking mechanism for the disk, to prevent concurrent access?

      1. This is done all the time with NFS and SMB. It was solved 15-20 years ago. 2. How do you solve this when two different machines share a SCSI or IDE disk? You don't. Don't share drives.

    • How would you coordinate the caches of two different machines using the same disk?

      The same way it's done with firewire. You do know that you can connect two different machines to the same disk with firewire, don't you?

    • idiot (Score:1, Insightful)

      by Anonymous Coward
      1. Ethernet is layer 2. IP et al are layer 3.
      2. SCSI over IP works quite well...
      3. There's no reason you couldn't run the SCSI protocol using only layer 2 packets.
      4. Most importantly, just grow up and stop writing like a teenager by ceasing to use EXTRANEOUS CAPITALIZATION, AWKWARD SENTENCE CONSTRUCTION, and STUPID COMPARISONS.
  • by smoon ( 16873 ) on Wednesday January 23, 2002 @01:03PM (#2888649) Homepage
    Although Gigabit Ethernet is 1000Mbps in theory, in practice you don't usually get that kind of throughput -- so Firewire might not be all _that_ disadvantaged.

    The basic reality is that you can _get_ cameras, hard drives, etc. with firewire ports while gigabit ports aren't readily available (if at all) on these sorts of 'consumer' devices. Will Gigabit supplant firewire? Maybe -- but why deprive yourself of the advantages of firewire for the next few years until it does (or doesn't) happen?
    • Firewire is subject to the same "in practice" limitations. My new iMac is on its way... so I'll give it a try. :-)

      Ryan
  • Collisions (Score:4, Informative)

    by duffbeer703 ( 177751 ) on Wednesday January 23, 2002 @01:06PM (#2888668)
    Ethernet cannot utilize nearly as much of its available bandwidth as SCSI (Firewire is essentially a serialized SCSI interface)

    When ethernet utilization hits 50%, performance starts to crumble. SCSI can run up to the limit with little trouble.

    This is why you see more large-scale SANs networked by Fibre Channel & SCSI rather than Ethernet (although ethernet models are appearing as well)
    • Re:Collisions (Score:2, Informative)

      by joshuac ( 53492 )
      In a point to point situation (peripheral to host), collisions would quickly become much less of an issue :)

      Design it badly and make the host side act like a hub, then yes, it would not be _that_ much faster than Firewire.
      • One would presumably use a switch and full-duplex on all connections, eliminating collision domains.

        If you've looked at the cost of gigabit switches, though, you'd see that it's not practical for home use.

        As far as I can tell, USB2.0/IEEE1394 are not used too much for disk access, though. I think it's for several reasons, an obvious one of which is the fact that it's cheaper to throw an IDE disk into most boxes than attach a device with an enclosure, an interface converter (drives don't speak IEEE1394 natively, do they?) and a driver to access the storage.

        So... why would this be any different for gigabit ethernet? I don't see iSCSI hitting the home anytime soon.
    • We've seen sustained throughput of around 65% on gigabit and fast ethernet systems (i.e. 8MB/sec on 100Mbit, 80MB/sec on gig ethernet), and that's without jumbo frames [wareonearth.com], which should increase the throughput of gigabit even more.

      And no, this is not some fancy test lab, but a real network with various switches etc in the way.

      • Right... having written an application that can sustain 850 Mbps across real-world gigabit ethernet (with jumbo frames), I have to agree with you. Collisions aren't a problem with modern ethernet. The big thing that kills performance is PCI bus contention (and TCP congestion control :))
  • Couple other reasons (Score:4, Informative)

    by Xunker ( 6905 ) on Wednesday January 23, 2002 @01:10PM (#2888699) Homepage Journal

    Another reason, besides all those already mentioned, is that fiber is still kinda expensive (a couple of quid per foot), and Gigabit over Cat-5 is a hack -- it has to use all 4 pairs and send a parallel signal. And Cat-7... costlier than fiber.

    Another reason is that Gigabit doesn't support QoS out of the box; you need a router-type device to do that -- Firewire has that built into the protocol to make sure that your CD-R drive doesn't get a buffer under-run even though you're editing video.

    Still one more reason is the cost of Firewire hubs versus Gigabit hubs. A 4+1 IEEE 1394 hub will run you about $45 USD, while a 5+1 Gigabit ethernet hub (over cat-5) will run well over $100 (according to the minimal research I've done).

    • Firewire doesn't run over Cat-5 at all. You have to buy special Firewire cables. I'm guessing you can get Cat-5 cheaper than an equivalent length of Firewire, though I haven't checked.
  • by Anonymous Coward
    IEEE 1394 has provision for reserving part or most of its transfer capability for guaranteed throughput, which is what you want for streaming digital multimedia such as out of digital video cameras, etc.

    You could have the rest of the bandwidth allocated to asynchronous communication that could experience congestion while you're watching hiccup-free video through the guaranteed part.

    Ethernet doesn't have a guaranteed throughput, so eventually as traffic builds up you'll get glitches in delivery rates, which you will have to compensate for by big buffers and hence bufferLength/rate delays. If I were writing a video editing app, I think I'd rather be able to assume guaranteed synchronous delivery over the links. For the same reasons, you want a real-time capable OS, to make things easier in that part of the problem. Again, though, if you crank up the speed, like with gigabit ethernet, a non-real-time OS can do pretty well (especially if you dedicate the machine while running the one app), even though it's not really guaranteeing throughput.
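The isochronous-reservation idea can be sketched like this (a toy model with invented numbers, though real FireWire does cap isochronous traffic at roughly 80% of each bus cycle, leaving the rest for asynchronous traffic):

```python
class IsochronousBus:
    """Toy model of FireWire-style bandwidth reservation: isochronous channels
    reserve a fixed share of every bus cycle, and asynchronous (best-effort)
    traffic gets whatever remains."""

    ISO_LIMIT = 0.8  # real FireWire caps isochronous traffic at roughly 80%

    def __init__(self, cycle_capacity):
        self.capacity = cycle_capacity
        self.reserved = {}  # channel name -> units reserved per cycle

    def reserve(self, channel, units):
        """Admit the channel only if the isochronous cap isn't exceeded."""
        if sum(self.reserved.values()) + units > self.capacity * self.ISO_LIMIT:
            return False
        self.reserved[channel] = units
        return True

    def async_budget(self):
        """Bandwidth left per cycle for asynchronous traffic."""
        return self.capacity - sum(self.reserved.values())

bus = IsochronousBus(cycle_capacity=1000)
camera_ok = bus.reserve("dv-camera", 600)  # admitted: 600 <= 800
burner_ok = bus.reserve("cd-burner", 300)  # refused: 600 + 300 > 800
retry_ok = bus.reserve("cd-burner", 150)   # admitted: 600 + 150 <= 800
```

Admission control up front is exactly what gives the hiccup-free video: once a channel is admitted, its share of every cycle is guaranteed, which plain CSMA/CD Ethernet cannot promise.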

  • by foobar104 ( 206452 ) on Wednesday January 23, 2002 @01:32PM (#2888810) Journal
    Oh, you are going to get so flamed for this. Just comparing FireWire and Gig E in this way means that you must fundamentally misunderstand one or both of them.

    Your life would have been so much easier if you'd just said:

    "I'm not a hardware guy, but at first glance it would seem more efficient to eliminate Firewire altogether and equip peripherals with Fibre Channel ports, ultimately moving all wired communication to a unified standard. Am I missing something?"

    Then we could have an intelligent discussion about crosstalk and carrying power and data on the same cable. As it is, you're just going to get things thrown at you.

    So very close.
    • So, then, how about some comparison of

      FC/AL vs GigE
      In my shop we're starting to deploy SANs using FC/AL, but I'm noticing that my desktop access to those storage devices is through a server that handles NFS to my end and has FC/AL on a different interface.

      So, I'm wondering whether it wouldn't be better to have either

      • Fiber Channel all the way to the desktop, or
      • GigE all the way to the disk array system,
      and to avoid the servers altogether.

      Would the performance gain be offset by complexity of managing such a large FC network or am I missing something else?

      Educate me.

      • Straight fibre channel would be best; if you can cut out a "hop" then that will obviously improve performance. Whether it gives you the manageability of the data that you want, given the distributed nature of access, is another matter. Usually it requires proprietary kernel modules for all OSes that want to access the SAN, so think expensive and potentially difficult to manage. You'll get to know your vendor real well.

        Generally, Fibre Channel is tuned for disk access, so it's made for just getting big blocks of data. GigE uses smaller ethernet frames AND you'll have the overhead of IP on top of that.
      • by foobar104 ( 206452 ) on Wednesday January 23, 2002 @03:32PM (#2889726) Journal
        Yeah, that's a pretty common situation: manage a bunch of servers and their storage with a SAN and use NFS or SMB or AppleShare to get the data to the desktop.

        The big problem with SANs is scalability. Fibre Channel ports on an FC switch are very expensive compared to Ethernet ports on a network switch. Also, if you have 10 clients hitting the same storage system, it tends to thrash pretty inefficiently. Connecting the same storage system to one server means all access to that storage goes through the server's I/O buffers.

        This, of course, doesn't even get in to the problems of SAN token passing and file access arbitration. Computers on a SAN act as a tightly coupled cluster; daemons on each server communicate with all the others to prevent any two servers from trying to write to the same disk blocks at the same time. Extending that architecture to dozens or hundreds (or more!) clients is challenging, to say the least.

        So running FC straight to the desktop wouldn't be a great idea in most cases.

        Taking Ethernet to the storage is a different idea entirely. The idea is called iSCSI, and it involves running SCSI protocols on top of TCP, so clients can access disks over the low-cost Ethernet network. It requires special drivers for your OS that present the iSCSI interface to the OS like a SCSI device, and that encapsulate the SCSI commands sent by the OS and user space programs inside a TCP connection to a storage device on the other end.
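        A minimal sketch of that encapsulation idea. The header layout below is invented purely for illustration; the real iSCSI PDU format is considerably more involved.

```python
import struct

# Hypothetical, simplified framing in the spirit of what's described
# above: wrap a SCSI command descriptor block (CDB) in a small header
# so it can travel over a TCP byte stream. Header fields are made up
# for illustration -- real iSCSI uses a much richer PDU layout.

OPCODE_SCSI_CMD = 0x01

def encapsulate_cdb(cdb: bytes, lun: int) -> bytes:
    # 1-byte opcode, 1-byte LUN, 2-byte CDB length, then the CDB itself.
    return struct.pack(">BBH", OPCODE_SCSI_CMD, lun, len(cdb)) + cdb

def decapsulate(frame: bytes):
    opcode, lun, length = struct.unpack(">BBH", frame[:4])
    return opcode, lun, frame[4:4 + length]

# Round-trip a 6-byte READ(6) CDB (one block at LBA 0).
read6 = bytes([0x08, 0x00, 0x00, 0x00, 0x01, 0x00])
assert decapsulate(encapsulate_cdb(read6, lun=0)) == (OPCODE_SCSI_CMD, 0, read6)
```

        The point is just that the client-side driver does this wrapping transparently, so the OS still thinks it is talking to an ordinary SCSI device.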

        iSCSI storage systems must have some kind of a computer (usually embedded) that controls them; in the case of IBM's iSCSI product, it's a Pentium III running Linux. In this way, iSCSI devices aren't very different from NAS devices; only the protocol for communicating with the client is different.

        The thing that's great about servers, though, is that they can do more than just provide file services. They can provide storage management and backup, and run applications. When compared with a big enterprise-wide server cluster that provides databases and file services and backup and HSM and whatever else, iSCSI appliances seem kinda primitive.

        There's more info on iSCSI on IBM's web site [ibm.com].

        So, to sum up, it sounds like the way things are in your network is a pretty good way of doing things.
        • Great post, BTW.
          There are a couple of other things which are probably pretty important to mention on this here (when I say "you", I really mean the original poster):
          • HBA cost. FC Host Bus Adapters aren't cheap at all. In fact, they're very expensive indeed compared with any kind of ethernet adapter (about $1k for an intel HBA). You want to roll that out to every desktop? I don't.
          • Wiring Cost. I haven't seen any FC gear which runs over anything but high speed serial connectors (which don't operate over any great distance) and fibre (which can operate over very long distances). They're both very, VERY expensive compared to Cat5e, both in terms of cost per foot and termination cost. If you've ever had to pay to wire 200 or so ports throughout a building (for a small office), you'll understand how much this matters, and considering that fibre doesn't bend as well, you're in for a VERY expensive installation indeed.
          • Fabric complexity. This is somewhat related to switch port cost, but it really has to do with the complexity of managing a SAN, which isn't like ethernet at all (it's more of a circuit-based system like ATM than it is like Ethernet). Managing all those connections to get reasonable paths to the servers that they need to get to is pretty difficult, and if you aren't doing it in some kind of intelligent fabric configuration, managing 200 ports is going to suck mightily unless you just punt and go to FC/AL, which is really just a bus anyway, so you're sharing the bandwidth.


          Something that I definitely think should be stressed is the metadata management issue that you're raising. Essentially, your computer expects to be accessing files on a server, not shared disk blocks on a shared disk. So unless you're running a FS which was specifically designed for separation of metadata management (which blocks correspond to which files), you're going to be in for a world of hurt (GFS is an open source [used to be anyway] distributed shared-disk file system, BTW). That type of communication doesn't come cheap, and where you're going to start seeing it is where you've got accesses to very large files (media applications), databases (if you're using something like Oracle clustering -- but if you are, I'm sure Oracle wants to put you in an ad or something), and backup (where there's very minimal metadata compared to the volume of data dealt with, and you've got a relatively slow backup target). So you're dealing with a different paradigm, which changes things entirely. If you want to run NFS over FC, you might want to use FC/IP, though I've never actually met anyone who's using it.

  • Firewire vs. GigE (Score:5, Insightful)

    by renehollan ( 138013 ) <[rhollan] [at] [clearwire.net]> on Wednesday January 23, 2002 @01:35PM (#2888824) Homepage Journal
    While the prospect of a single universal physical network layer is appealing, here are some realities that interfere with this.

    1) Applications. Ethernet was designed as a shared medium to support arbitrary contentious traffic framed in a simple data link layer, sent between relatively distinct systems. It is intentionally a small, simple spec. Firewire was designed to provide connectivity to high-bandwidth, real-time traffic in a local environment. Firewire therefore supports notions of bandwidth reservation, and was initially geared to short-haul distances (i.e. on the desktop, or in a small equipment rack). It is a more detailed and involved spec because of an intended techno-ignorant consumer audience -- plug things in and they work.

    2) Power. While PoE (Power over Ethernet) is gaining steam, driven mostly by the notions of IP telephones and other networked devices without local power, ethernet generally does not carry power. Firewire can, to simplify cabling.

    3) Bleedingedgeedness. Firewire was bleeding edge. In order to be cost-effective at some level, compromises were made. Initial distance limitations (on copper) were severe. It was bandwidth at all costs. Even today, firewire does not strike me as effective for long distances (need for fibre vs. copper). GigE took longer to develop because of the need to work at extended distances (100m being the traditional ethernet radius), with a copper physical plant, and the lack of consumer device pull. It also had legacy inertia to deal with.

    In my mind, the biggest difference, though, is the nature of the intended traffic: Firewire addresses bandwidth reservation, and ethernet doesn't. To be sure, one can layer the necessary protocols over ethernet to do this, but then ALL the traffic has to be managed outside the ethernet spec. to honour those protocols. Firewire has the promise to be a micro-local, cheap, real-time networking solution. Ethernet addresses longer distance needs with a diversity of traffic types.
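    As a toy illustration (not any real protocol) of what "layering the necessary protocols over ethernet" for bandwidth reservation might involve, here is a token-bucket admission check -- the kind of per-stream accounting every sender would have to honour, since the link layer itself offers no guarantees:

```python
# Toy token-bucket shaper: a stream reserves rate_bps of bandwidth and
# may burst up to burst_bits; a frame is admitted only if enough tokens
# have accumulated. This logic lives entirely ABOVE Ethernet.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # reserved bandwidth, bits/second
        self.capacity = burst_bits  # maximum burst, in bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, frame_bits: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True
        return False  # frame must wait: reservation exceeded
```

    Firewire's isochronous mode bakes this kind of guarantee into the bus arbitration itself, which is exactly the difference being described above.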

  • I believe 1394b allows up to 800Mbps over the standard cable. Obviously it depends on the ability of the hosts. I also seem to remember something about 3.2Gbps firewire over fiber. With that sort of rate and guaranteed bandwidth for transfers, firewire is much more suitable for consumer applications than 1000BaseT.
  • There have been a number of insightful or informative posts on this thread, such as firewire roughly being serialized SCSI (true) or Ethernet's bandwidth management being poor compared to firewire's. There was a sub-post about firewire and USB being able to power devices where Ethernet can't. This is partly true, but Power-over-Ethernet [nycwireless.net] is a reality as well, since Lucent and Symbol offer it in some access points.

    There also is a clustering technology for SMTP from an Italian university (the name and link escape me) that uses a modified IP stack for the nodes to communicate over standard Ethernet equipment. The controlling node also has a NIC that uses "real" IP on Ethernet to talk to the world/Internet. Using something like the modified IP stack would allow you to control and manage devices for storage, etc., without having to manage firewalling separately from how you do it otherwise (another post talked about the horrors of, say, assigning an IP to your iPod), since it isn't capable of talking to the world.

    dhcpd would probably need a slight alteration (maybe not) to talk this modified IP in order to manage devices. Use some of your own parameters to pass to the clients (storage devices, etc.) for setting up arrays or what have you, or use some mod of SNMP for management, or both.

    Power the devices with the aforementioned Power-over-Ethernet from the Ethernet switch. This switch would not be your usual off-the-shelf switch, but if a vendor were selling this sort of offering, naturally they'd have them made up.

    So what if the bandwidth management isn't as good as serialized SCSI -- there should be less effort in repurposing existing work, and in some cases probably no need to reengineer any software at all.

    Before someone says, "hey, look, here's another Linux geek wanting to install it on his toaster", stop and think about it. Embedded software on remote devices communicating over Ethernet and passing data that cannot be DOS'd (use multiple gateways, plus the storage network stays up since it isn't directly attacked) but can be managed by any PC you want: add another NIC to it, bind it to the modified IP stack, feed it unpowered Ethernet, and use your management apps/edit .conf files.

    Like my subject, I think this is a potentially elegant solution.

    This looks really doable

  • First, a number of posters seem to be confusing Ethernet (a Layer 2 technology - datalink) with TCP/IP (a layer 3+ technology). IP CAN run over Ethernet, just as ATM, Appletalk, IPX, and many other protocols can.
    Second, as has been mentioned, 1000BaseTX/FX doesn't really mean you GET 1000Mbps... a rule of thumb with any ethernet device is that you get approx. 1/4 of the available bandwidth. Now, something to consider is the fact that this rule of thumb and common practice is for NETWORKS, with multiple people on one segment (even if it's switched, blah blah blah). If you have a dedicated transceiver pair of 1000BaseX transmitters, talking over a dedicated cable, you could probably get as close to the 1Gbps as the PCI bus would allow.
    Another poster (the top-rated one ATM) mentioned the voltages carried over the bus. Unfortunately, he's off on that one... Ethernet can easily carry more voltage; in fact, whole product lines are devoted to PoE (Power over Ethernet). Just look at any business IP telephone... they grab their operating voltage straight off the network cable. The only issue with this is that any switch you have in a rack servicing these PoE devices needs a PoE converter on it (to enable a standard ethernet switch to handle more than 12V DC). So even sending power over it works just fine.
    But when it all comes down to it... Ethernet is not the answer for dedicated connections. One of the big reasons behind ethernet's popularity is CSMA/CD (which is kinda obsolete in a dual-simplex/full-duplex world). This allows many, many devices to automatically share a line and transmit while making sure everything eventually gets through. There is a lot of overhead involved in these kinds of transmissions that is totally pointless on a PCI bus extension (where everything has its own channel anyway).
    Now, I don't see why they can't just make a 1Gbps firewire transceiver... as that's a whole 'nother story ;)
    • Just look at any business IP telephone...they grab their operating voltage straight off the network cable.

      Really?

      I have a 3com IP telephone sitting right on my desk - in addition to the ethernet cable, it has a brick that plugs into the power outlet. If I unplug that brick, the phone stops working.
    • Many of the posts, including the parent, talk about ineffective use of bandwidth because of collisions. Gigabit ethernet does not support CSMA/CD at all, it's simply gone, done away with, finished. All the problems with congestion are gone.

      Gigabit ethernet is certainly a viable possibility for hard drives and such (see iSCSI), but it is expensive compared to i1394, and it does not carry power. There are Power over Ethernet solutions, but they cannot work with Gigabit Ethernet, since Gigabit Ethernet needs all 8 wires.
  • Firewire has advantages that make it more suitable for the applications it was designed for. It uses a pair of wires up to 14 feet in length. This is much thinner than the 8-wire Cat-5 cable which 1000Base-T requires. It also supports isochronous mode, which enables the quality of the data stream to be adjusted to ensure that the data is delivered on time.

    There seems to be some confusion about Gigabit Ethernet in some of the posts here. Gigabit Ethernet is a datalink layer protocol (OSI layer 2). It specifies how data should be sent across the physical layer (OSI layer 1). There are several different standards for Gigabit Ethernet depending on the physical medium it is to be used with; the Cat-5 variant specifies how the 8-wire cable is used.

    When making a point-to-point connection (with 100Base-T), the throughput actually doubles to 200Mbps! This is because there is no need for collision detection, and the devices can connect in full-duplex mode.

    For Cat-5 cabling to be utilized as suggested, one need not implement the layer 2 spec verbatim -- a streamlined media access layer could be designed for this purpose. Also, IP (Internet Protocol -- OSI layer 3) would not be implemented at all, as it is not necessary when going from point to point.
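    As a rough sketch of what such a streamlined, IP-less scheme could ride on, here is the raw Ethernet II framing: destination MAC, source MAC, EtherType, payload -- nothing else. The "storage" EtherType below is picked for illustration only (0x88B5 is one of the IEEE local-experimental values).

```python
import struct

# Minimal Ethernet II frame: 6-byte destination MAC, 6-byte source MAC,
# 2-byte EtherType, then payload. A point-to-point peripheral protocol
# could use exactly this framing with a private EtherType and no IP.

ETHERTYPE_STORAGE = 0x88B5  # IEEE "local experimental" EtherType

def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    assert len(dst) == 6 and len(src) == 6
    # Real Ethernet pads payloads shorter than 46 bytes to the minimum.
    payload = payload.ljust(46, b"\x00")
    return dst + src + struct.pack(">H", ETHERTYPE_STORAGE) + payload

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"read block 7")
```

    With only two endpoints on the wire, addressing and contention handling collapse to almost nothing, which is the poster's point.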
  • by depeche ( 109781 ) on Wednesday January 23, 2002 @02:47PM (#2889386) Homepage
    The reason that FireWire was developed (and I believe it was begun before USB development was begun) was a need for a simple, hot-swappable bus which would allow different kinds of digital devices to connect together with a trivial 'plug it in and forget it' user operation. The team behind the development included Apple, which had for years used a high-speed serial bus for networking (AppleTalk over LocalTalk) and a lower-speed serial bus (Apple Desktop Bus) for connecting a variety of peripherals, including keyboards and mice. The original use for FireWire on Apple computers was (I believe) to replace all serial devices with this one bus. Then a second team, led I believe by IBM, developed USB as a replacement for the serial ports and the PS/2-style keyboard/mouse interfaces. USB does not have the device density per port that FireWire has, and was NOT intended to allow high-speed transfer of large volumes of information. FireWire was targeted at DV cameras, digital cameras, consumer electronics, etc.; USB was going to connect low-bandwidth serial devices. FireWire can string an extremely large number of devices on one serial chain. FireWire was intended to be universal.

    Then USB took off because of the marketing muscle of the consortium behind it, the lower cost, and the inclusion of USB on most PC motherboards. Apple decided not to release a FireWire-only set of machines and instead began using USB for keyboards and mice (cost savings, compatibility) and to allow for access to the increasing number of USB consumer devices. But USB was still 40 times slower than FireWire (which is the IEEE 1394 standard), and so FireWire (or iLink, as Sony branded it) was included for uses that Apples had relied on SCSI for, like connecting scanners, removable media, and other devices which you may add or remove from time to time and which require reasonable bandwidth. Apple still wanted FireWire for connecting to FireWire consumer devices too.

    SCSI still has a place as well, but not everybody needs SCSI now that IDE has been improved (it was really lousy to begin with) and FireWire can be used for expansion of capacity for average users. SCSI still has much higher bandwidth and burst capacities; servers and video editors will still use SCSI.

    Likewise, Ethernet was intended for asynchronous networking -- yet a different purpose. Ethernet uses a convoluted (but useful for its purpose) networking model in the common TCP/IP application, with at least hardware, protocol, and session layers which must be negotiated and maintained. Normally you only see about 60% of the theoretical maximum capacity. So while Gigabit Ethernet is good for networking, it is not necessarily appropriate for the purposes FireWire was intended for. Think: what is the intended purpose of Bluetooth versus 802.11b? Design criteria matter: e.g., Bluetooth uses many times less power but is intended for short, small communications needs; FireWire can provide power to devices (like my iPod) and Gigabit Ethernet cannot. In most cases, having too few standards and too few options is just as bad as having too many options and no standards. Choose the standard which makes sense for your application.
  • Is contention negotiation (or whatever the correct term is) "built in" to ethernet, or is it just a feature of a higher-level protocol, like IP? What I'm talking about is the fact that ethernet is shared, and when multiple requests go out at the same time, each party "backs off" for some random amount of time before resending. I would think that this would absolutely KILL I/O performance, as you don't want all your devices competing for the shared data channel. How are Firewire/USB/SCSI, etc. in this respect?
    • Re:Contention (Score:4, Informative)

      by renehollan ( 138013 ) <[rhollan] [at] [clearwire.net]> on Wednesday January 23, 2002 @03:14PM (#2889571) Homepage Journal
      It's built into ethernet as in 10BaseT and 100BaseT, but not GigE (1000baseT).

      As for "killing performance", random transmissions with a truncated exponential random backoff time (collision? wait a random time within an interval, try again... collision? wait a random time within double the interval, try again...) approach 67% line utilization as the number of transmitters grows to infinity. Without collision detection, you get half that.

      So, yeah, it kills performance, but only in the sense that you're trying to saturate the pipe anyway.

      All this is really moot today, because so much ethernet, even 100BaseT, is switched and not just "hubbed".
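      The truncated exponential backoff described above can be sketched as a toy model (not driver code):

```python
import random

# Toy model of classic CSMA/CD truncated binary exponential backoff:
# after the n-th collision, a station waits a random number of slot
# times drawn from [0, 2**min(n, 10) - 1], and gives up after 16
# attempts. (The interval doubles per collision, truncated at 1024.)

def backoff_slots(collisions: int, rng=random.random) -> int:
    if collisions > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    ceiling = 2 ** min(collisions, 10)   # truncation: at most 1024 slots
    return int(rng() * ceiling)          # uniform over [0, ceiling - 1]
```

      The randomization is what desynchronizes the colliding stations; the doubling is what keeps a busy segment from collapsing entirely.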

    • by jcasey ( 264935 )
      This feature is called CSMA/CD -- Carrier Sense Multiple Access / Collision Detection. It is implemented at layer 2 of the OSI model. This is not such a bad thing. It is used only when 3 or more nodes share the same physical cable, such as Thinnet. It was an efficient way to allow multiple machines to participate on a single wire. The other alternative at the time was token ring, which uses a multi-station access unit (MSAU) to act as a "traffic light". While token ring worked well on larger networks (at 4-16Mbps), CSMA/CD worked better in *looser* ethernet environments at 10Mbps. Today it is more common to wire each machine into a switch -- this way there is no need for collision detection at all.
  • by Anonymous Coward
    Ethernet sends things in tiny, unreliable frames meant to be dropped on the floor every now and again, needing to be painfully reassembled into larger, useful blocks of data at each end.

    Firewire is a bus meant to reliably transfer large amounts of data at a time directly between devices and system memory with very low overhead. 'Nuff said.

    (and to the person who suggests that power supply is the only reason, there are power over cat5+ ethernet solutions using the other two unused pairs of cable)
    • The painful reassembly part is generally a higher-level protocol function, done by IP, as in "fragment reassembly". Of course, even these reassembled packets may need further aggregation as part of a stream of data... enter TCP (which provides retransmission for lost packets as well).
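      A toy illustration (nothing like real kernel code) of that fragment reassembly: each fragment carries a byte offset, and the receiver stitches the payloads back into the original datagram regardless of arrival order.

```python
# Toy IP-style fragment reassembly: place each payload at its offset.

def reassemble(fragments):
    """fragments: iterable of (offset, payload) pairs, in any order."""
    buf = bytearray()
    for offset, payload in sorted(fragments):
        # Grow the buffer if a gap precedes this fragment, then drop
        # the payload in place at its stated offset.
        if len(buf) < offset:
            buf.extend(b"\x00" * (offset - len(buf)))
        buf[offset:offset + len(payload)] = payload
    return bytes(buf)

# Fragments can arrive out of order; reassembly restores the stream.
print(reassemble([(5, b", world"), (0, b"hello")]))  # b'hello, world'
```

      TCP does the analogous job one layer up, reordering segments by sequence number and retransmitting the ones that never arrive.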
  • I've thought about this before, and the main problem I keep coming up with is the cost issue. So I ask this as a question: how much does it cost to have hardware perform ethernet layer 2 vs. firewire layer 2? I would imagine ethernet is more expensive, due to its intended use in machines with processors, but it's just a total guess (I'm also not a hardware guy).

    Otherwise I'd say go for it. Gigabit ethernet can surely support just about any add-on other than monitors and ram. I'd love to be able to just plug in anything I buy for my computer to my ethernet switch. For those consumers too scared to open their machines it's an even better accomplishment.

  • Besides all the technical differences between Gigabit Ethernet and Firewire, the current arrangement also makes it easier for people to visualize where to install hardware. There is a Gigabit Ethernet card, and it is still the same size and shape as a 10/100Mbps Ethernet card; in my mind, that is supposed to be used for computer networking. The VGA monitor plug, with 3 rows of pins and a female connector on the system, tells me that is where the monitor goes. Serial is a male connector and parallel is a female connector: plug the external modem into serial and the printer into parallel. USB is those flat slots, and firewire is that funny inverted-house shape. If all the inputs were Ethernet connections, it would become more difficult to plug your system in: port 1 is for the network, port 2 is for the scanner, port 3 is for the monitor, etc. It makes it harder to remember where to put things. I always confuse which port I plug the speakers and the mic into. And programming/configuring a system with one universal protocol can get trickier, especially with Ethernet, which is an old protocol, just faster. Even if everything did work, I like the concept of plugging the right plug into the right spot.
  • Anyone have specs on the latency of firewire vs. ethernet? I'd imagine firewire has a lower latency, since the maximum distances are shorter.
    • Latency is generally caused overwhelmingly by buffering of bits and far less by medium propagation delay. Of course, wire-speed latency matters in the design of CSMA/CD networks with regard to their maximum radius. However, as data rates go up, capacitive effects on NEXT and FEXT (near- and far-end crosstalk -- basically not hearing the weak, attenuated signal because your local transmission is so much STRONGER) appear to be the primary distance limiter.
  • ... Ethernet's just a method for sending packets between two nodes, given that you know their MAC addresses. That's all the protocol provides you with; everything else has to be layered on top.

    On the other hand, Firewire has some USB-like features such as device identification and device power supply, and is also better than USB because you don't need a computer in there to drive the system (you could hook a printer directly to a camera without any proprietary interfaces), and the actual protocol is different... for instance, there's an isochronous transfer mode which guarantees a /constant/ data rate between two points. Ethernet, and even TCP/IP, just doesn't have that. I'm no great expert on firewire, but firewire's firewire and ethernet's ethernet. Both have relative merits.
  • Anyone remember the story [slashdot.org] about Gibson guitars using Ethernet cable? This is a similar step towards unification, and shows that the idea is valid even in non-computer-related areas.
  • Given two computers, each with a firewire port, neither with a NIC, how feasible is it to share files between the two computers?
    Also, while we're on the topic of CAT5 cable, what is this new garbage I see at the stores called CAT6, and should anyone waste their money on it, considering that 1000BaseT runs over CAT5e?
    • Given two computers, each with a firewire port, neither with a NIC, how feasible is it to share files between the two computers? Also, while we're on the topic of CAT5 cable, what is this new garbage I see at the stores called CAT6, and should anyone waste their money on it, considering the 1000BaseT runs over Cat5e?

      I haven't done it myself but you can hook them up through a firewire hub. Probably even directly via a firewire cable. No need for a "crossover" since IEEE 1394 is self-negotiating. Windows XP sees an IEEE 1394 port as a network connection. It works pretty well, I'm told. I don't know how Win9x/ME handle this though. To answer your question about CAT6, it's just the successor to CAT5. CAT5e was developed to bridge the gap between CAT5 and CAT6, which wasn't quite ready yet. Kind of like this 2.5G stuff for cell phones we're hearing about because they can't get 3G out the door yet.

    • I've done it myself, and found it works well if you're going to have the two boxen near each other. That 14-foot limit is *very* annoying, and using repeaters can get expensive if you're going far.

      Also, as to cost, I'd imagine you can get a NIC for close to the cost of a repeater, and just use that FireWire port for your external devices.
  • Firewire vs. GigE is not really a very useful comparison. They are targeted at very different markets. Firewire (IEEE 1394, iLink, whatever you call it) has gained ground in the consumer electronics and digital video markets. The one area where the two overlap is in storage. Currently there are several Firewire storage devices available (external hard drives). Ethernet is going to gain some of that market with iSCSI.

    Firewire and USB 2.0 are going to fight it out on the desktop. They both have about the same speed, and both have a strong install base (Firewire with Apple and Sony products, and USB with legacy 1.0 devices). Firewire is going to hold on to its niche in the audio-visual market and some high-speed apps with its IEEE 1394b speed boost. USB is going to remain king of keyboard- and mouse-type devices due to lower cost.

    The integration outlook is better in the data networking space. GigE and 10GigE are looking to replace the gaggle of layer1/layer2 solutions out there. That would include ATM, Frame Relay, Fiber Channel, Packet over SONET, etc.
  • Ethernet is an unarbitrated, broadcast-type link layer. It's ideal for running packetised networks that don't have concrete service levels and can accommodate packet loss and collisions.

    IEEE1394 is an arbitrated, virtual channel (either fixed, or on demand) link layer that has its own integrated protocols for data transport. It can also provide power on the same lead, and is ideal for devices that need to arbitrate a fixed bandwidth connection to specific host devices. Great for peripherals.

    Google is a very popular and efficient search engine that can find you most any information on the internet rapidly. Clueless people are not a bad thing per se, as long as they accept that one should use the right tool for the right job, and that the ability to educate yourself without whining to people on forums for help is a handy ability to have on the internet nowadays. Yet another "can't use Google" Ask Slashdot. (Go on, burn my lousy 1 karma for saying the truth.)
  • Yes. A clue.

    Gigabit ethernet has no way of powering external devices. Firewire does.

    Firewire was designed for high-speed peripheral communication. Gigabit ethernet was designed for high-speed network communications. The only thing the two have in common is the modifier "high-speed". Another standard would need to be developed for peripheral communications over GigE (unless you want to add an IP stack to every digital camera and removable CD-R drive out there, in which case you are smoking crack.)

    Firewire is available on most new PCs now. Gigabit ethernet is not (please do not muddy the waters here, Mac users.)

    Firewire is cheap. Gigabit ethernet is expensive.

    Firewire peripherals are here in abundance. Gigabit ethernet peripherals exist in your head.

    It seems that all you're talking about is putting an RJ45 port onto a machine for peripherals instead of a Firewire port. All this will do is cause people to plug their networking equipment into the wrong port.

    - A.P.
