Gigabit Networking for the Home?

The Clockwork Troll asks: "I've had a whole-house audio/video distribution project on the back-burner for a while now. As gigabit networking hardware prices come down to earth, I'm tempted to jump on the 1000BaseTX bandwagon. As far as I can tell though, the current crop of consumer-priced hardware/software doesn't address a couple of key issues, namely: fragmenting jumbo frames for the benefit of legacy clients - this is critical as some of the devices on my network will not tolerate the 9000+ byte Ethernet frames which are needed to get the most out of gigabit; and OS support - do Linux and Windows require much tweaking to take advantage of gigabit? Will most drivers automatically optimize themselves? A Google search didn't reveal too much consensus, especially on hardware choices. What switches and software configurations have Slashdot readers been using for home gigabit networks, in particular mixed ones (100/1000BaseTX)?"
  • by Anonymous Coward on Friday April 09, 2004 @12:22AM (#8812306)
    Go for the gusto: 1000baseFX!
  • by Anonymous Coward on Friday April 09, 2004 @12:24AM (#8812317)
    Check out the 8 port Asante GX5-800P. You can find them for ~ $160.
    • by darkwhite ( 139802 ) on Friday April 09, 2004 @12:45AM (#8812469)
      I bought a Netgear GS-108 3 months ago at $150. Not to put Asante down, but this line of switches (Netgear's FS and GS) has unbeatable quality, even if the LEDs on this one aren't used very informatively.

      Now to actually get a RAID setup that can load this thing to capacity...
    • by AKnightCowboy ( 608632 ) on Friday April 09, 2004 @08:07AM (#8813947)
      Check out the 8 port Asante GX5-800P. You can find them for ~ $160.

      I guess that would do for amateur installations, but any serious home network engineer deploying gigabit would opt for something with a little more kick. I recommend the Cisco Catalyst 3750G-24T switch for these kinds of applications. 24 ports of 10/100/1000 managed switch goodness and only $4000!! That's unbelievable! Now, if you're looking at a modular solution with the possibility of doubling as a router, then look no further than the Catalyst 4500 series. Bump up to a 4507R and get redundant Supervisor IV support and 5 slots for adding in module goodies.

      For those of us network geeks with serious port density needs at home, I would recommend purchasing a Catalyst 6513 w/redundant sup 720's (makes a kickass cable/DSL router w/reflexive access list support and even server load balancing of your home web servers!). If you're interested in protecting your network of Windows and Linux boxes, throw in a PIX firewall blade and the IDS blade and you're rockin'.

      Now, I suppose you're saying "but all I need is a $160 8 port switch" in which case I'd say you're not a real networking geek. I suppose you buy those cheapo $40 Linksys switches instead of a proper Cisco Catalyst 3500XL series managed 10/100 switch too right? Fucking amateurs.

      • Yeh, right. Your fancy expensive switches *might* impress the novice, but what happens when you need to support your legacy systems?

        No rackmount arcnet hub, for the TRS-80 Model II that runs the thermostat software?

        No Synoptics 3030 with 3 Lattistalk blades (switched localtalk) for those old Mac SE's running the custom, undocumented Filemaker db's?

        What about Econet, 4Mbps token ring or FDDI? Do you have any ATM25 or ATM155 for those Alcatel DSL modems that will do ATM rather than Ethernet?

        My god man, w
  • by ericdano ( 113424 ) on Friday April 09, 2004 @12:26AM (#8812337) Homepage
    Gotta shuttle all that porn around the home network huh? ;-)
  • by cjpez ( 148000 ) on Friday April 09, 2004 @12:27AM (#8812344) Homepage Journal
    What sort of distribution are you talking about here, anyway? I've got a little LAN hooked up with a simple little 100Mbit Netgear switch, and I NFS-mount my audio and video partitions over to the computer downstairs hooked into the TV (running Freevo at the moment). The 100Mbit switch is perfectly fast enough to stream even DVDs mounted in the computer upstairs, to say nothing of the smaller compressed DivX (or whatever) stuff. If you're just talking about some home theatre kind of movie sharing, there really wouldn't be a need for it.

    Of course, if your needs are more extensive you may need something more...

    • I am thinking of something similar: I plan to incorporate VoIP to replace the current PBX system, plus a security camera or two, as well as my Slim Devices MP3 server, but I wanted plenty of bandwidth headroom (yes, it is a big house), so I was wondering about GB Ethernet for possibly similar reasons.
      • by cjpez ( 148000 ) on Friday April 09, 2004 @12:40AM (#8812437) Homepage Journal
        I don't think you'd need anything more than 100Mbit for that. I don't have any experience with VOIP, but I can't imagine it sucks up bandwidth any worse than DVD-quality video, and I imagine that the security camera stuff isn't going to suck up anything major either.

        Anyway, 100Mbit is cheap enough that you could always just install that first and then expand if you need more. If you just make sure that the cable you're running can handle gigabit, you can always plunk down more money later for a gigabit switch and NICs, to replace the $15 NICs and $50 switch you put in originally for 100.

        • by Anonymous Coward
          > Anyway, 100Mbit is cheap enough that you could always just install that first and then expand if you need more.

          I'd agree with this.

          I live in Aspen and deal with high end residences. Most of our clients have high end stereo and theater systems with a web based control system. There are touch screens in every room that handle music, tv, lights, window shades and many other things. They don't generate a whole lot of traffic on the network, but they're there.

          Some also have music servers that run a cu
          • Bingo, did just that yesterday. I did not look too closely at the specs, but I was pleasantly surprised that my new mobo not only had a gigabit port on board but that Linux supported it out of the box.

            So gigabit is coming. Besides, if you make sure that the cable can handle the punch now, you can just upgrade your gear later. And gigabit switches can handle 100 Mbps cards, so you can then upgrade in pieces.
      • The phone companies fit 24 voice channels onto a (1.5megabit) T1. That still leaves 98megabits for your security camera...
        If you compress it, you can fit a ~VHS quality signal in 1 megabit (color or black&white?)
        250Kbit is about the highest quality MP3s that I've seen, so if you throw in a handful of those and your security cameras, you've still got 80-90 megabits left over for 'regular' networking.
        80megabits is about 10megabytes/second sustained... That's not much worse you'll get (real life) fr
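
        For illustration, here's that budget as a quick Python sketch. The service rates are the rough figures above (not measurements), and I'm assuming only ~90% of the 100Mbit line rate is realistically usable:

        usable_mbit = 100 * 0.9   # assume ~90% of line rate is achievable in practice
        services_mbit = {
            "VoIP, T1-style (24 voice channels)": 1.5,
            "security camera (~VHS quality, compressed)": 1.0,
            "a handful of 250 kbit/s MP3 streams (say 4)": 4 * 0.25,
        }
        left = usable_mbit - sum(services_mbit.values())
        print(f"left over: {left:.1f} Mbit/s (~{left / 8:.0f} MB/s for 'regular' networking)")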
    • by Total_Wimp ( 564548 ) on Friday April 09, 2004 @12:45AM (#8812472)
      But what if you actually want to copy that video? How long do you wait while hundreds of megs or gigs of data transfer? Do you want to wait less time? Gigabit is great and you'll waste _a lot_ less time waiting for your file transfers.

      Let's face it, faster is better. If I could copy a whole DVD in a minute, I'd still prefer the solution that let me copy it in a second.

      TW
      • by cjpez ( 148000 ) on Friday April 09, 2004 @12:50AM (#8812505) Homepage Journal
        But what if you actually want to copy that video?
        Right, obviously if you're doing stuff like that you may need more bandwidth. I'm just considering here that the price of GigE NICs and switches, while not out of reach on even a moderate budget, may just not be worth it if you're not planning on doing anything like that. 100Mbit is so cheap and common nowadays that converting over later won't incur much higher cost than going with GigE initially if you find out that you need it. That's why I asked what kind of thing was going on; if he's just watching movies and stuff remotely on a computer that's only got a couple-gig drive for the OS (as I do) then personally I couldn't justify spending money on gigabit for it.
      • by darkonc ( 47285 ) <stephen_samuel AT bcgreen DOT com> on Friday April 09, 2004 @02:04AM (#8812887) Homepage Journal
        But what if you actually want to copy that video? How long do you wait while hundreds of megs or gigs of data transfer?
        $ units 1second/100megabit minutes/4gigabyte
        * 5.3333333
        6 minutes to transfer a 4GB CD (after adding overhead) seems just fine to me. If you're really expecting to get better than that, you'll need RAID on both ends of the pipe.

        About the only reason I can see for wanting to go gigabit in a house is if your whole family is doing remote video editing, and you've got a nice, 10-spindle RAID box to do the file serving.

        • 6 minutes to transfer a 4GB CD (after adding overhead) seems just fine to me.

          But wouldn't 2 minutes be better?

          On a related note, why do 40x CD-burners exist when 12x would be fine?

          If you're really expecting to get better than that, you'll need RAID on both ends of the pipe.

          Bull. 100Mbit maxes out at about 9MB of data per second at best (and that assumes no extra overhead, like encryption). Even reading from a standard hard drive you can transfer (both read and write) at least in the low 20MB/s range
      • by Moderation abuser ( 184013 ) on Friday April 09, 2004 @07:49AM (#8813885)
        Because you'll find that you can't write to a filesystem on a single disk much faster than 100mbit anyway. Gigabit is significantly faster than the I/O that a single drive can provide.

    • by cybermace5 ( 446439 ) <g.ryan@macetech.com> on Friday April 09, 2004 @12:46AM (#8812478) Homepage Journal
      Yeah, but consider multiple terminals around the house, all pulling down different full-resolution DVD video streams. I could see the bandwidth piling up. Plus, who knows what network-intensive applications we'll be using a few years down the road.

      Plus, what if he wants to have a fast backup solution? With the sizes of hard drives these days, you can use all the transfer speed you can get. Let's say he has a server with enough space to maintain a full backup of his 120 Gig drive on his workstation. Using gigabit ethernet, it will take a theoretical minimum of 17 minutes to transfer all of the data. With 100mb ethernet, it'll take a minimum of 2 hours and 50 minutes. That's an extreme example, but you know, it'll shave off a few seconds here and there during normal use. It all adds up at the end of the day.
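
      For anyone who wants to check that arithmetic, here's a tiny Python sketch (binary gigabytes, raw wire speed, no protocol or disk overhead assumed) that reproduces the ~17 minute and ~2 hour 50 minute figures:

      drive_bytes = 120 * 2**30                  # the 120 gig drive above
      for name, bits_per_sec in (("gigabit", 10**9), ("100Mbit", 10**8)):
          minutes = drive_bytes * 8 / bits_per_sec / 60
          print(f"{name}: about {minutes:.0f} minutes, best case")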
    • by doormat ( 63648 ) on Friday April 09, 2004 @01:28AM (#8812706) Homepage Journal
      Three words...

      Multiple HD Streams

      A broadcast-quality 1080i stream is 19.8Mbit/s. Figure the max you can get out of 100Mbit/s Ethernet is 85%-90%, so if you want more than 4 streams (yeah, sounds outlandish now, but in 5 years it might not seem so weird), plus standard network traffic (if you don't make separate networks), you're looking at gigabit Ethernet.
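
      To put numbers on that, a quick Python sketch, assuming ~87% usable throughput (splitting the 85%-90% difference above):

      stream_mbit = 19.8                         # broadcast-quality 1080i
      for link_mbit in (100, 1000):
          usable = link_mbit * 0.87              # assumed usable fraction
          print(f"{link_mbit} Mbit/s link: {int(usable // stream_mbit)} simultaneous HD streams")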
      • When it costs $10 for a switch and $5 for a NIC.

        Till then, the only time my 100mbit LAN gets remotely taxed is when I run Bacula backups of all of my machines.

      • HD is a wasteland right now. Some of the networks are in HD some of the time, if network sitcoms and a few sporting events are your idea of watching TV. There's HBO and Showtime, if you get either one, and then there are a PBS and a Discovery HD channel which are almost just a loop. Beyond that and the re-hashed crap on HDNet there really isn't anything terribly compelling in HD.

        • All the network shows my household watches regularly (Alias, 24, Century City, Kingdom Hospital, The Practice, The DA, CSI) are either HD or, in 24's case, widescreen 480p. So are HBO's recent original series, as you note. That's plenty of HD content for us.

          I've considered gigabit Ethernet for HD streaming too -- I mostly get smooth playback over my 100Mbps network, but occasionally there's a little glitch when the player app moves to the next file, which doesn't happen when playing from the local disk. H

        • There really isn't anything compelling in HD. Unless you consider every major US sporting event. Oh, and every popular TV show on the networks. And first run movies on HD PPV. And movies and original shows on HBO.

          Geez what do you want. There is more programming available in HD now, than there was OTA programming 20 years ago.

    • by Shanep ( 68243 ) on Friday April 09, 2004 @01:37AM (#8812747) Homepage
      If you're just talking about some home theatre kind of movie sharing, there really wouldn't be a need for it.

      Yes, 100Mbit should be plenty for that.

      Watching a raw DVD file served from my OpenBSD Samba server, uses about 7Mbit. That's not to say that other DVD's won't require more though, but certainly not 100Mbit, let alone 1Gbit.

  • In your house? (Score:5, Interesting)

    by mao che minh ( 611166 ) on Friday April 09, 2004 @12:27AM (#8812345) Journal
    With over 600 nodes on our network (300-310 being workstations) we only require gigabit at our core, from servers to SAN (Storage Area Network), and from work group switches to the core. Hell, we don't even have a DS3 to the outside world yet. Our largest collision domain serves approximately 90 hosts that are all heavily used, and it never congests its 100mb pipe (unless a worm gets in and actually does some damage, anyways).

    Hard as I try, I can't imagine ever having enough stuff in my house to warrant gigabit. Damn.

    • Re:In your house? (Score:3, Insightful)

      by Total_Wimp ( 564548 )
      If you regularly copy videos for editing, gigabit is great. Many homes do this. We do this. With the price so reasonable, I can't see why anyone who does video work wouldn't get gigabit.

      TW
      • Yeah, if you're doing actual video editing, I imagine you can make pretty good use of gigabit. Watching compressed movies over a LAN wouldn't really warrant anything over 100Mbit, though.
    • Re:In your house? (Score:5, Interesting)

      by slaker ( 53818 ) on Friday April 09, 2004 @12:42AM (#8812456)
      You aren't trying very hard. The core of my home network - presently 9 PCs - includes four that are used as central stores of massive amounts (around 900GB apiece, give or take) of video content. Rather than pay the cost of trying to have redundant storage for all of it, I simply distribute everything to more than one machine.
      Now, given that I'm talking about potentially moving around hundreds of single files in the ~4GB/file range, d'ya think Gbit is even a little justified?

      Incidentally, for the topic: All Gbit hardware auto-detects crossover, so I just built my backbone network by putting two cards in each of my fileservers and establishing routing between each host. Since Gbit switches are either too cheap to do jumbo frames, or cost more than I want to spend, that's an acceptable workaround. Each machine also has a link to one of the VLANs used by my "client" PCs on the plain old 100mbit network.
      • Dear $DEITY no, you just made a TCP/IP Token Ring hybrid. How could you!? Have you no soul?
    • Re:In your house? (Score:5, Interesting)

      by mikis ( 53466 ) on Friday April 09, 2004 @12:57AM (#8812543) Homepage
      Hard as I try, I can't imagine ever having enough stuff in my house to warrant gigabit.

      Now that gigabit NICs are like $10, or even integrated on motherboards, why not?

      What interests me is, what is the real speed of (home) Gigabit Ethernet, and when (or if) it could be used for diskless computers. I mean, the theoretical speed should be around 100MBps, and even the newest hard drives are slower than that.

      Would it be possible to use one computer as a SAN for other diskless workstations?

      • Re:In your house? (Score:5, Interesting)

        by egarland ( 120202 ) on Friday April 09, 2004 @01:59AM (#8812863)
        Would it be possible to use one computer as a SAN for other diskless workstations?

        I love this idea. I've thought about it for a while and I think it could be good stuff. Unfortunately, there is no standard protocol for using a network card as a block device. NFS is OK, but try booting your Windows box over NFS. There needs to be a protocol similar to iSCSI that allows you to route disk I/O over an Ethernet card at the hardware level, but that is cheap and capable of simultaneously acting as an Ethernet card for the OS's networking. Then you could buy a nice huge high-speed RAID 5 array and use it for disk in all your machines instead of the little cheap slow unreliable things that machines usually have inside them.
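
        Just to show how little protocol the idea actually needs, here's a toy Python sketch of "disk blocks over TCP". Everything in it is made up for illustration (the 9-byte request format, the 4K block size, port 9100); it has no authentication, no caching and no real error handling, and it is nothing like an actual iSCSI or network block device implementation:

        import socket, struct

        BLOCK = 4096
        REQ = struct.Struct("!cQ")             # operation ('R' or 'W') plus a block number

        def recv_exact(conn, n):
            buf = b""
            while len(buf) < n:
                chunk = conn.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed")
                buf += chunk
            return buf

        def serve(backing_file, port=9100):
            # expose a file (or disk image) as remotely readable/writable blocks
            store = open(backing_file, "r+b")
            srv = socket.socket()
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))
            srv.listen(1)
            conn, _ = srv.accept()
            try:
                while True:
                    op, blockno = REQ.unpack(recv_exact(conn, REQ.size))
                    store.seek(blockno * BLOCK)
                    if op == b"R":
                        conn.sendall(store.read(BLOCK).ljust(BLOCK, b"\0"))
                    else:                      # 'W': the next BLOCK bytes are the new data
                        store.write(recv_exact(conn, BLOCK))
                        store.flush()
            except ConnectionError:
                conn.close()

        def read_block(host, blockno, port=9100):
            # a real client would keep one connection open; one per request keeps the sketch short
            with socket.create_connection((host, port)) as s:
                s.sendall(REQ.pack(b"R", blockno))
                return recv_exact(s, BLOCK)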
  • If you have 100BaseTX with 1000BaseTX you will take a big performance hit. I worked in a data center that had to be converted to 100BaseTX because not all devices are offered in 1000BaseTX, and the conversion between 100 and 1000 is a big performance problem.

    Nick Powers
  • Even if you can push data at Gigabit speeds successfully on your home LAN, your slow-ass drives ain't gonna deal with the flow of bytes.

    Dude that is like trying to use jet fuel in a 1984 Capri.

    • And if you have enough cash to be utilizing a JBOD [xyratex.com] to back up your pr0n, there is a problem.
    • by Bishop923 ( 109840 ) on Friday April 09, 2004 @12:57AM (#8812541)
      On a 100BaseT NIC the theoretical max transfer rate is 12.5 MBps with a realistic speed of 8 MBps. Multiply that by 10 to get a rough estimate of Gigabit speed. Most ATA HDD's can transfer around 40-60 MBps. You can easily saturate a 100BaseT network with bargain basement machines.

      Gigabit Ethernet is faster than what your typical ATA drive will absorb, but it is still going to be quite a bit faster than 100BaseT.

      Spend the Money on a nicer HDD or a decent RAID setup and you will be able to make full use of a Gigabit pipe.
  • by Ballresin ( 398599 ) on Friday April 09, 2004 @12:29AM (#8812362) Homepage Journal
    In Mac OS X, there's a setting right in the Network Preference Pane, under "Ethernet", that allows you to scale up the packet size depending on the immediately apparent network appliances. I haven't been able to use this feature because:

    A: Some clients have nice network hardware, but legacy copper
    B: Some clients have gig copper, but not enough hardware

    I can't wait to see the transfer rates on Gig with Jumbo packets though. *Drool*
  • by Total_Wimp ( 564548 ) on Friday April 09, 2004 @12:30AM (#8812366)
    I've got an Abit motherboard with Intel gigabit built in and WindowsXP loaded on it. My GF has a Powerbook with gigabit built in. We bought the cheapest gigabit switch we could find. We got Cat 6 cable.

    Everything was autodetected and the speed improvement over 100mbit was dramatic. Highest performance increase I've ever gotten for doing basically zero work (I did plug in the cables all by myself :-).

    Now, this obviously doesn't answer all your questions, but for anyone out there who doesn't have legacy issues all I can say is go for it, it's a no-brainer.

    BTW, I use a Linksys WAP-Router for internet. It didn't so much as burp when we plugged it into the gigabit switch.

    TW
  • by Malor ( 3658 ) on Friday April 09, 2004 @12:31AM (#8812374) Journal
    I think the biggest thing about gigabit is that PCI isn't really fast enough to support it. You can shovel 133MB/second over a PCI bus, or 1064Mb.... very slightly more than a gigabit, but that's with NOTHING else happening on the bus. Generally, since the hard drive controller is also on the Southbridge, I think about the best you're going to get off most PCs, even very, very fast ones, is about 300 megabits sustained.

    To really take advantage, you're going to need machines that run the network card off the Northbridge. Presumably, PCI-Express network cards will also keep up pretty easily. From what I can see, you're probably best to wait another year to eighteen months before upgrading; by then, PCI-X should be pretty common, and gigabit networking shouldn't be very expensive.

    Note that I don't have any direct experience with gigabit: these are just back-of-the-envelope calculations. I could be completely off, so pay attention to replies.
    • Gigabit pipes are needed for stuff that can actually utilize it, like when you have 100+ servers needing to be backed up throughout the day to your SAN, or when you are serving out 600-800GB from your SAN to your servers. This is why you find gigabit pipes at the core and throughout the datacenter, but not from your workstations to your switches. Not yet, anyways.
    • by hattig ( 47930 ) on Friday April 09, 2004 @12:46AM (#8812482) Journal
      Tests using PCI Gigabit chips (e.g., broadcom, 3com, intel) get around 500Mbps or so.

      Intel CSA attached gigabit chips (on Intel chipset motherboards only) perform better. CSA is a dedicated link from the northbridge to a gigE controller.

      Of course, nForce3 250Gb integrates gigE inside, and gets over 800Mbps performance. See the preceding /. story! Of course, that controller is attached to the processor by a 6.4GB/s link!

      Also, PCI-X != PCIe. PCIe (PCI Express) is the upcoming high speed serial version of PCI that operates on a point-to-point basis. PCI-X is the extended faster variant of 64-bit 66MHz PCI running at up to 133MHz (1GB/s PCI essentially) in a bus configuration.
    • by sirsnork ( 530512 ) on Friday April 09, 2004 @12:47AM (#8812489)
      This is the major point that is overlooked when people talk Gb networks. Only with PCI-X slots do you see a major improvement in performance, and I would doubt that a home network contains even one PCI-X slot.

      Once you get around the IDE or SATA, the audio, the USB2 or Firewire (if we're talking video editing), etc., you would be better off adding another standard network card and teaming them for your major data stores on the network, and leaving everything else as it is.

      Also, on a side note, a 1x PCI Express slot is ~250MB/s in each direction (about ~500MB/s total), so yes, a 1x PCI-E slot will do Gb Ethernet fine.
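
      For reference, rough ceilings for the buses being compared (the PCIe x1 figure assumes the original 2.5 GT/s signalling with 8b/10b encoding, i.e. 10 line bits per data byte):

      buses_MBps = {
          "PCI 32-bit/33 MHz (shared bus)":    4 * 33.33,     # bytes per clock * MHz
          "PCI-X 64-bit/133 MHz (shared bus)": 8 * 133.33,
          "PCIe x1 (per direction)":           2500 / 10,     # 2.5 GT/s with 8b/10b
      }
      for name, mbps in buses_MBps.items():
          print(f"{name}: ~{mbps:.0f} MB/s ({mbps * 8 / 1000:.2f} Gbit/s)")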
    • But you don't have to be able to pump the full 1000 Mbps to take advantage of gigabit ethernet. As long as you can pump more than 100 Mbps, then gigabit will give you a speed improvement over 100 Mbps ethernet.

      (Or, a good location for the ceiling is "anywhere above your head").
    • by calc ( 1463 ) on Friday April 09, 2004 @01:14AM (#8812634)
      That is why the Intel i865/i875 has the option of a direct-connect e1000 gigabit controller (CSA) on the northbridge. Most motherboards with gigabit built in that use either of those chipsets use the e1000 CSA gigabit chips.
  • Dell PowerConnect (Score:5, Interesting)

    by captaineo ( 87164 ) on Friday April 09, 2004 @12:34AM (#8812397)
    I've had a good experience with a Dell PowerConnect hub (the 8- or 16-port model, I forget which). It was quite inexpensive and claims to support Jumbo Frames (however I haven't actually gotten this to work; when I enlarge the frame size on Linux it loses the connection). Oh, and I had to disable one default feature on the hub (tree-spanning something or other) to get it to work.

    For clients I use Intel gigabit cards (the 64-bit PCI "server" model). I wouldn't skimp here since indications are that cheap gigabit cards don't have any hope of getting wire speed. NFS file copies max out at 20-30MB/sec, but I know that is limited by my server's disk array. I did a test for raw network bandwidth (just sending zero bytes as fast as possible) and got around 60-80MB/sec.

    Everything is connected to my existing Cat-5 cable with no problems. This includes several Linux systems, one Mac and one Windows PC.

    I will caution you not to expect anything like gigabit wire speeds with typical clients. My Mac G4 in particular seems to have trouble getting good bandwidth (I think the problem is either the network stack or NFS client).

    If anyone has a success story with jumbo frames, I'd love to hear about it. The only references I could find are for mega-dollar Cisco/Foundry type equipment.
    • Re:Dell PowerConnect (Score:3, Informative)

      by shreeni ( 177352 )
      I always thought that Cat 5 would not be sufficient for Gigabit speeds. It should be at least 5e or greater...
      • Re:Dell PowerConnect (Score:5, Informative)

        by Alioth ( 221270 ) <no@spam> on Friday April 09, 2004 @05:32AM (#8813526) Journal
        Gigabit Ethernet actually runs at the same symbol rate as 100Mbit Ethernet and stays within the 100MHz bandwidth that Cat5 and Cat5e are both rated for. Actually, I wonder if you can even get plain Cat5 rather than Cat5e any more. When I wired my house, Cat5e was the minimum spec being sold.

        The difference with Gig-E is that it uses all four pairs in the wire (100Mbit only uses 2 pairs) and it has a different linecode that allows more bits per baud.
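
        The arithmetic behind that, spelled out in Python: 1000BASE-T signals on all four pairs at 125 Mbaud, and the PAM-5 line code carries 2 data bits per symbol on each pair (the extra signal level goes to error correction), so:

        pairs, megabaud, data_bits_per_symbol = 4, 125, 2
        print(pairs * megabaud * data_bits_per_symbol, "Mbit/s")   # -> 1000 Mbit/s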
  • by dj245 ( 732906 ) on Friday April 09, 2004 @12:36AM (#8812406) Homepage
    Now I know this is /., but before everyone says "you don't need gigabit!" and "bah, who needs that kind of speed", gigabit Ethernet is genuinely useful. Even copying 500MB files can take intolerably long when you want it done 4 minutes ago. If the poster wanted a bunch of nonsense about why he shouldn't do it and why it's a dumb idea, he could have gone to Circuit City (they don't sell gigabit, so they would try to sell him 10/100). Instead he asked us for informed opinions and information on the matter.
    • by rhavenn ( 97211 ) on Friday April 09, 2004 @12:44AM (#8812464)
      I was at CompUSA awhile back and some guy was talking to this sales dude. The guy said he had a 256/128 DSL connection and needed a NIC card. The sales guy told him to get a Gig card... it would speed up his internet. I actually did a *cough*bullshit*cough* as I walked by. CompUSA sales people are the WORST.
      • You said it. I've been working as an in-store vendor rep in the networking aisle at CUSA in Norwalk, CT. On the rare occasions when the store employees venture over there (they really only care about big sales: PCs, TVs, iPods, etc.), half of what they spout out is utter garbage. Heard one of them tell somebody exactly the same thing, except in relation to 802.11b vs 802.11g. On a residential DSL connection, it doesn't make a difference. I try to keep that kind of thing from happening as much as I can.

        • Sales people are generally clueless.

          It seems like almost every time I'm standing around in the computer department looking at networking hardware, a clueless customer is asking a clueless sales guy about stuff. The sales guy will say something stupid, and I'll correct him. Then I'll help the clueless customer save a bunch of cash, helping him with what he needs, rather than what they wanted to sell him.

          Who cares if he didn't spend a bunch of cash. He's a *HAPPY* customer now, knowing he got th
      • I worked there in college like eight years ago.

        That salesdude isn't making a penny for talking about NICs. Salesmen there sell computers and service contracts for money.

        What generally ends up happening is you bring a customer to the front of the store with his computer and help them out to the car if needed.

        Unfortunately, the trip back to the computer area can take as long as 30 minutes. Morons want a salesman in a computer store to design a network. Bored consultants or lonely old people want someone to
  • Gigabit in my home (Score:5, Informative)

    by orionware ( 575549 ) on Friday April 09, 2004 @12:38AM (#8812419)
    I have a mixed network and have not had any problems with speed or the switches flaking out.

    I have 3com gigabit cards in three computers and a 3com 100Mb card in one.

    One gigabit machine is a redhat 8 machine that is used as the network attached storage (NAS) box feeding media throughout the house and acting as the DNS for the house (This is so much faster than relying on your ISP!) and to filter packets for the kids computer (Damn Pr0N!)

    One gigabit machine is my personal desktop.

    One gigabit machine is in the family room sucking media from the NAS.

    The 100MB machine is upstairs and the kids use that one.

    The gigabit machines are plugged into a LanReady gigabit switch that I bought for 60 bucks Ebay.

    The 100MB machine is plugged into a 3com superstack.

    Both switches are then plugged into the cable router.

    Speeds between the gigabit machines average 50 meg a second, depending on how large the files are and whether it's streaming or copying. The 100Mb box pulls 7-8 MB a sec from the others.

    I'm happy with the speed.
  • by lone_marauder ( 642787 ) on Friday April 09, 2004 @12:41AM (#8812440)
    No, really. I'm serious. Not at home, anyway.

    Unless you get a very hot, brand new PC with motherboard integrated gigE, your PCI bus can't push the bandwidth. The same goes for switches. You'll be doing good to get 400 mbps out of a cheap gig switch.

    Even if you have a $5000 gigE switch and a PC that can handle it, what are you going to talk to, your cable modem? The only place gigabit ethernet makes sense is when you are aggregating traffic from multiple computers to a centralized server or set of servers, and are using applications that actually require that kind of bandwidth. Even if you want to move that much data around, and have a way to do it (hint - neither scp nor samba can talk that fast), the best benefit you'll see is about double the performance you get with 100.

    Here in the networking world (where I live and play), recent advances in traffic management systems have begun to punch holes in the time-worn theory that throwing bandwidth at a network problem = fixed. If you really want network performance, go check out the Linux advanced router/ traffic control site. (lartc.org) There, you'll learn to get lightning response from ssh and your first person shooters, all while running a 2gig/month web server through your home dsl's 256K uplink. And it won't cost you a dime.
    • by bogie ( 31020 ) on Friday April 09, 2004 @01:04AM (#8812572) Journal
      "The same goes for switches. You'll be doing good to get 400 mbps out of a cheap gig switch."

      40MB is a hell of a lot better than 10MB. I don't know why everyone keeps saying he won't be able to saturate the line. He doesn't need to max it out in order to enjoy the benefits over 100Mb ethernet. Who knows what kind of data we will be dealing with in 5 years? Seems like going 1000 is a smart investment.

      I had no idea Gb Ethernet switches had dropped so much in price. If I was buying a new switch today I'd definitely be buying one of those $100 Linksys switches. Considering the cost is so cheap why even bother with 100MB if you think you'll be using bandwidth hungry apps?
      • I had no idea Gb Ethernet switches had dropped so much in price. If I was buying a new switch today I'd definitely be buying one of those $100 Linksys switches. Considering the cost is so cheap why even bother with 100MB if you think you'll be using bandwidth hungry apps?

        The caveat here as I might have hinted in my question is that you might get what you pay for. To the point, the Linksys EG008W workgroup gigabit switch won't do jumbo frames and between two 64-bit/66MHz gigabit XP servers (one with an In

    • by Admiral Burrito ( 11807 ) on Friday April 09, 2004 @01:09AM (#8812595)
      Unless you get a very hot, brand new PC with motherboard integrated gigE, your PCI bus can't push the bandwidth.

      Being integrated with the motherboard doesn't make a performance difference on any board I've ever seen. It still goes over the PCI bus, it's just not using a slot. Creating a separate bus just for the ethernet port would be too expensive.

      You'll be doing good to get 400 mbps out of a cheap gig switch.

      I'd be interested to know where you came up with that. Some switches may have an underpowered backplane that limits your aggregate bandwidth (such that you can't pump a full 1Gbps on all ports simultaneously) but it shouldn't prevent you pushing 1Gbps between two ports when all else is idle. If it's advertised as a gigabit switch but is only capable of 400 Mbps, wouldn't the manufacturer be open to claims of false advertising?
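
      For scale, here's what "non-blocking" has to mean for a small gigabit switch, i.e. every port sending and receiving at line rate at once:

      ports, line_rate_gbit = 8, 1
      print(ports * line_rate_gbit * 2, "Gbit/s of switching capacity")   # x2 for full duplex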

    • by egarland ( 120202 ) on Friday April 09, 2004 @01:24AM (#8812688)
      >You'll be doing good to get 400 mbps out of a cheap gig switch.

      I'll just point out that 400mbps is 4x the speed of 100mbit. That's not a small difference. Seems worth the tiny price premium.

      This is a home network we are talking about. Latency, routing and prioritization aren't really an issue. Usually only 1 or 2 things will be going on at a time. What will be noticed is raw bandwidth during large file transfers. I have a gigabit network here. It's very noticeable.
  • by adler187 ( 448837 )
    I have this friend who goes to South Dakota School of Mines and Technology. He got a bunch of free Cat-6 from one of our mutual friends, whose brother owns an audio/video installation company, so he wired his entire dorm with Gigabit. He brags about it all the time, too. He's done some other weird stuff in his day, though at least he didn't cover his entire dorm room walls with AOL CDs.

    Oh, wait....
  • Gigabit (Score:5, Informative)

    by JWSmythe ( 446288 ) * <jwsmytheNO@SPAMjwsmythe.com> on Friday April 09, 2004 @12:51AM (#8812512) Homepage Journal
    We use GigE fiber for our server networks, and pass up between 400Mb/s and 600Mb/s on high traffic days from each one.

    The one thing I can say is that you'll probably never use it. There's really no need at this time. Most protocols aren't any good at sucking up that much bandwidth on a single stream.

    I've had many people prove this to me. They'll transfer files as single transfers. They can use up to about 10Mb/s. But if they transfer lots of files, they can use lots more. Try it through a switch that you can monitor bandwidth on. Through FTP, SMB, SCP, or whatever, you won't use up 100Mb/s. But, running multiple concurrent sessions, you can try to come close.

    Heroinewarrior has a library called "firehose", which uses up all the available bandwidth, and will stripe across multiple connections to use up more. So, if you have 3 100Mb/s cards in a machine, you can come close to transferring at 300Mb/s.

    You should also consider the other factors. Can your machine really send that fast? Is your hard drive fast enough to send over 100Mb/s ?? A nice fast SCSI drive, or a SATA drive can do it, but most IDE drives will fall short (specs be damned, try it in real life).

    I transfer stuff around on the GigE lan all the time. We do exceed 100Mb/s, but it's usually with multiple machines.

    The highest bandwidth usage machines we have are the voyeurweb.com servers. They send out 150Mb/s through TEQL (a Linux kernel option) combining 100baseTX cards, with several copies of thttpd running.

    thttpd is a web server that is very small, and works very efficiently. Apache has one process per connection, but thttpd has one process for everyone. Well, at least theoretically. It was at around 80Mb/s of regular web site files that it started flaking out. So, we run 4 copies of it on separate IPs and let it scream.

    As for our network, I'll outline our largest network.

    We have a 1Gb/s uplink to Level3. This goes to a Cisco Catalyst 3508 (8 GBIC ports).

    The remaining 7 GBIC ports go to 7 switches, mostly Cisco Catalyst 3550-48 (48 100Mb/s ethernet, 2 GBIC), and the servers are attached to the 100Mb/s ports. We have one Dell switch, which does 1000baseTX on all the ports, and a few machines with 1000baseTX cards. They can't pull anything resembling 1000Mb/s between each other. It simply doesn't happen. Honestly, doing transfers through http, ftp, or scp doesn't ever use over 100Mb/s on individual transfers. Sure, we can do it with multiple concurrent transfers, but at home, how many hundred or thousand users are you really trying to supply?

    For home, you'll never use it. 100Mb/s is usually overkill. I set up my house with 802.11b, and at 11Mb/s peak, I see no difference from my old house, where we had copper run to every room and a Catalyst 2924 managing it. 11Mb/s is more than sufficient for a home network.

    Spend your money on a *GOOD* 100Mb/s switch. I highly recommend Cisco, like a 2924, which you should be able to get relatively cheap used. Even if you put GigE cards in the machines, you can at least monitor your bandwidth now, and see what you really use. If you start flat-lining at 100Mb/s (bandwidth graphs make things really obvious), then you could consider upgrading.
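
    If you want to see the single-stream vs. multi-stream effect on your own LAN, here's a crude Python stand-in for ttcp (not the "firehose" library mentioned above; the port number and chunk size are arbitrary, and the timing is only as good as a few seconds of wall clock). Run sink() on one machine, then blast() from another with streams=1 and again with streams=4:

    import socket, threading, time

    CHUNK = b"\0" * 65536

    def _drain(conn):
        # read and discard until the sender closes the connection
        while conn.recv(65536):
            pass
        conn.close()

    def sink(port=9200):
        # run this on the receiving machine
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(16)
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=_drain, args=(conn,), daemon=True).start()

    def blast(host, streams=4, seconds=10, port=9200):
        # push zeros over N parallel TCP connections and report the aggregate rate
        sent = [0] * streams
        def worker(i):
            s = socket.create_connection((host, port))
            deadline = time.time() + seconds
            while time.time() < deadline:
                s.sendall(CHUNK)
                sent[i] += len(CHUNK)
            s.close()
        threads = [threading.Thread(target=worker, args=(i,)) for i in range(streams)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(f"{streams} stream(s): ~{sum(sent) * 8 / seconds / 1e6:.0f} Mbit/s aggregate")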

    • Re:Gigabit (Score:3, Interesting)

      Agent Smythe, meet RFC1323 - Long Fat Pipes. If you can use more throughput between the same machines with multiple streams, then window scaling is for you.
  • by Anonymous Coward
    I was fortunate enough to buy 2 TI-Chipped Firewire cards ($20 total) and use them to network my main WS to my Server w/ 5-foot cable. You can save a lot of money going this route if you can. MM
  • SMC Gigabit (Score:5, Informative)

    by retro128 ( 318602 ) on Friday April 09, 2004 @01:01AM (#8812563)
    A friend of mine just went nuts when he found out about a new switch from SMC [smc.com], the SMC8508T. While it's unmanaged, it offers non-blocking architecture across the entire line as well as support of jumbo frames up to 9K, which is extremely unusual for SOHO stuff. Not even a lot of expensive Cisco stuff does jumbo frames. And he paid $150 for it.

    Why should you care about jumbo frames? I found this nice guide about that here. [wareonearth.com]
  • by juang310 ( 765067 ) on Friday April 09, 2004 @01:12AM (#8812616)
    The least expensive switches I have found that support jumbo frames are from SMC: the SMC8505T http://tinyurl.com/3by3v and the SMC8508T http://tinyurl.com/nhaz. The links are to the SMC site. The 5 port version is approximately $100-120 and the 8 port is $140-$150. SMC also has 16 and 24 port versions. As far as jumbo frame support goes, Windows 2000/XP and Linux both have it, as long as the NIC has drivers that support it. I know the major NIC manufacturers like Intel, Broadcom, and 3Com have driver support for it. One tip: if you are using Dells with Intel 1GigE embedded on the motherboard, make sure to use the latest drivers from the Intel support site, since the default Windows drivers from Dell do not show the jumbo frame option. As far as the optimal jumbo frame size goes, that would depend on the type of traffic you are carrying. Simply putting in 9K frames on everything might not be optimal. It will take some experimentation to find the right sizing.
  • I say go for it! (Score:5, Interesting)

    by egarland ( 120202 ) on Friday April 09, 2004 @01:12AM (#8812618)
    I have a NetGear 4 port gigabit switch. I have found I can transfer files about 2.5x as fast as with 100mbit (without jumbo frames). In my book, that's worth the few extra bucks a gigabit switch will cost you.

    A warning though, I've heard most of the cheap gigabit switches have fans in them. Fans reduce the reliability of a switch many fold and make them LOUD. I like my 4 port Netgear and they now make an 8 port version which is also fanless and very reasonably priced.

    Does anyone have a Linksys or D-Link gigabit switch who can confirm or deny the presence of a fan?

    One note I'd like to throw in: Gigabit Ethernet only requires Cat-5 cable. Not Cat-5e, not Cat-6: Cat-5. Better cables may be less prone to issues, but they aren't required by the gigabit Ethernet standard, so don't go out and re-cable your house just for a little Gig-E.
    • Re:I say go for it! (Score:3, Interesting)

      by glwtta ( 532858 )
      I have a Linksys 8 port gigabit switch and I can definitely confirm that it's a bit of a beast when it comes to noise (for a switch anyway). I have enough stuff running to drown it out completely, but I can definitely see how some people would be greatly annoyed.
  • Bah (Score:3, Insightful)

    by Aoverify ( 566411 ) on Friday April 09, 2004 @01:24AM (#8812694) Homepage
    I don't get it. People here are bitching that the best throughput they see on gigabit ethernet is 400Mbps. That's 4x the speed of regular 100Mbps ethernet. 4x still seems like a hell of an improvement, especially when you consider gigabit switches can be had for $100-150. I'd take a 4x faster HDD, processor, memory, etc. any day! Why turn your nose up at a 4x network speed increase?
  • Web100 project (Score:5, Interesting)

    by scenic ( 4226 ) * <sujal@s u j a l .net> on Friday April 09, 2004 @01:27AM (#8812704) Homepage Journal
    The Web100 project [web100.org] might give you insights and technical information about tuning your OSes to get maximum performance from your high speed network. While they are mostly concerned with WAN tuning (this project is affiliated with Internet2 [internet2.edu]), the underlying problems discussed (and the testing software they offer) should provide you with clues on maximizing performance on your LAN.

    As for fragmenting down, it might be easier to do that with a router that you actually have software control over (i.e. an old, low power linux box). I don't really have any experience with this on a home network, so...

    Sujal

  • by tstoneman ( 589372 ) on Friday April 09, 2004 @01:33AM (#8812722)
    I went through this... I bought a Netgear GS105 and Netgear NICs, all really cheap at Amazon.

    Like me you'll probably find you don't get a 10x increase in speed, but maybe 25-50%, like from 8 MB/s to 13 MB/s when you transfer stuff between two computers.

    This is because your hard drive is fragmented, and this will completely and drastically affect performance when you copy stuff. You don't realize it, but you will take a massive hit when you try to copy your ISOs, movies, etc. across the LAN.

    I went from 13 MB/s to like 30 MB/s after I defragmented my source and destination drives.

    The main thing is that with Gigabit Ethernet, you have to think of the entire network as a system that works completely together. There has to be a complete unity between all components on your network because you will see the bottlenecks a lot easier.

    Also, none of the cheap Netgear stuff supports jumbo frames. The more expensive NICs do, but the GS10x ports do *not* support jumbo frames.

    As well, they get really, really, really hot. Unnecessarily hot if you ask me, like burning to the touch, and could really heat up the inside of your PC. In fact, even the GS105 switch is hot to the touch, too.

    I instead bought 2 Intel Pro 1000 MTs. They are much more reliable, they do support jumbo frames (but I can't use that until I actually get a jumbo frame compatible switch) and they don't get hot at all.
  • by apetime ( 544206 ) <ape.com@ g m a i l . c om> on Friday April 09, 2004 @04:48AM (#8813373)
    You're probably going to get firewire cards for cheaper than gigabit ones, and I have seen demo setups with firewire wall plates so you can network your home (though I don't know if they're commercially available yet). But this would seem to be an alternative worth looking into.
  • by stewartjm ( 608296 ) on Friday April 09, 2004 @05:30AM (#8813520)
    Boy this turned into a bit of a tome.

    For a switch I went with an 8 port SMC EZSwitch 8508T [smc.com]. I chose it since:
    1. It supports jumbo frames. According to my testing it will pass ethernet packets up to 9212 bytes which should correspond to a 9198 byte MTU.
    2. It doesn't have a cooling fan. A definite plus, since in my experience the little fans in switches such as this can become quite annoying as they age.
    3. It comes with rack mount ears.
    4. It's affordable. I purchased it from Securemart.com [securemart.com] for $139.31 shipped. Ordered it Thursday or Friday, it arrived Monday or Tuesday.

    As to NICs, one of my PCs already had an Intel gigabit port on the motherboard. In addition I purchased 4 more Intel Pro 1000/MT Desktop Adapters [intel.com]. Since:
    1. They have good driver support on both Linux and Windows.
    2. They support jumbo frames. Supposedly up to around 16000 bytes.
    3. They're supposed to be pretty fast/efficient. It's kind of dated but you can find a comparison of some 32-bit gigabit NICs here [digit-life.com].
    4. They'll do 66MHz if your motherboard supports it, and one of my systems does.
    5. They have DOS NDIS2 drivers so I can use Ghost to make/restore images over the network.

    One I purchased through Intel's evaluation program [ententeweb.com] for $35.31 shipped. As I recall it took over a week to show up. The other three I ordered from OnlineMicro [onlinemicro.com] for $28 each plus $11.32 shipping. Be sure to change the shipping option from ground to 2 day air if you order more than 1, it's cheaper. They shipped them out the day of my order and they arrived on time.

    One of the Intel NICs died about 4 hours after I installed it. I swapped it with another and the replacement has been working fine for a few weeks now. I ran the diagnostics on it and all but the link test passed. When the OS is booted up the switch shows no link lights, but sometimes when the PC is off the link lights do come on. I've also tried it in another PC, where it exhibits similar symptoms. I haven't yet contacted Intel about getting it replaced.

    I spent a lot of time tweaking various things. Some findings:
    1. With default SO_RCVBUF sizes an MTU in the neighborhood of 4000 or so bytes seems to get about the best network/application-wide throughput. Specifically, the otherwise fast NF7-S system below would lose almost 50% throughput with 9000 byte MTUs with the default SO_RCVBUF size. Linux to Linux lost around 30% as I recall.

    In theory you can change the default SO_RCVBUF size on Linux by echoing appropriate values to:
    /proc/sys/net/ipv4/tcp_rmem
    Other than that you appear to have to change this setting in each individual application (see the short per-socket sketch after this list). One application of note that allows you to easily make this change is Samba; see your /etc/samba/smb.conf.

    2. If you crank the SO_RCVBUF size up to 200ish k or more then a 9000ish byte MTU can eke out another 5ish percent more bandwidth. Thus for the moment I've decided to just stick with 4076.

    3. MTUs that are not of a size of the form 8x+4 cause Linux to behave oddly when it performs path MTU discovery. Namely for jumbo sizes that don't fit that form the discovery decides that the PMTU is 1492. You can read more detail about it in a Usenet post I made here [google.com]. I still don't have a good picture of what'
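
    Here's the per-application flavour of the tweak from point 1, as a short Python sketch: ask for a bigger receive buffer on the socket itself via SO_RCVBUF instead of raising the system-wide tcp_rmem default. The 256 KB figure is just an example, not a recommendation:

    import socket

    def connect_with_big_rcvbuf(host, port, rcvbuf=256 * 1024):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)   # set before connect()
        s.connect((host, port))
        # Linux reports roughly double the requested value, to cover bookkeeping overhead
        print("effective SO_RCVBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
        return s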
  • Several points ... (Score:5, Interesting)

    by Animedude ( 714940 ) on Friday April 09, 2004 @05:42AM (#8813561)
    First, it seems many people around here are not THAT up to date on what you can actually buy right now. It is correct that Gigabit is not really THAT useful when you're using a PCI card stuck to the 133MB/s PCI bus (although I would not consider around 60-70MB/s THAT bad compared to a standard 100MBit network card, it's still 8-9 times faster...). But you CAN buy motherboard integrated GBit cards that ARE on their separate bus right now, at consumer prices. Just look for an Intel 875P board with Intel CSA GBit, e.g. an ASUS P4C800E Deluxe. German c't magazine tested various home GBit solutions and they got around 110 MB/s over consumer priced hardware, if you just choose the right components.

    Second, the speed depends of course mainly on what the two sides of the connection are capable of in read speed (from disk) and write speed (to disk). If you copy files from A to B and one side is only using a cheap-ass 10 MB/s hard disk, you won't get anywhere near the theoretical maximum network speed.

    I have a LAN here with my main machine being a machine with Intel CSA, and then there are three other machines - two with a PCI GBit card and one with a motherboard-integrated PCI 3com NIC. Depending on which machine copies to which machine, I get transfer speeds of 30 MB/s (copying to my old Celeron PC) to about 70 MB/s (the last only when I copy files from a machine with a fast hard drive to my main machine, which is using the CSA GBit and the SATA stripe set, which is also using a separate bus away from PCI - in this case the network speed seems to be limited by the read speed of the other machine).

    So I would say that right now the home GBit is limited mainly a.) by the combined speed of hard disk and PCI GBit card being smaller than 133MB/s in the case of a machine with a PCI network card and b.) the hard disk read/write speed being slower than the max GBit speed in the case of a machine with CSA GBit. I would guess that if I had a second machine like my fastest one (both hard disk and GBit away from PCI and the hard disk stripe set being able of read/write speed greater than 100MB/s) I would finally be in GBit heaven :)

    As far as components go - look, as was said, for the motherboard-integrated, non-PCI solutions if you buy a new PC. If you're upgrading an old PC, PCI cards are OK - they are a DEFINITE improvement over 100MBit cards, even if you just read 30MB/s. As for the switch - don't buy the cheapest one; the Realtek chips (they're the ones most likely used in there) seem to have some real issues. Also, if you are noise sensitive, look for one without a fan, those little buggers can get pretty annoying real soon. I bought a 3Com 5 port 10/100/1000 switch half a year ago for 150 Euros, and I'll probably stick another one on top of it pretty soon. That thing (3C1670500) is small, has no fan and simply does what you want it to do. And it's pretty cheap for a brand name product. And all the components which don't use GBit (like the print server, the DSL router and the Access Point) I simply left on the old 100MBit switch, so the five port limitation wasn't really one.
  • by bani ( 467531 ) on Friday April 09, 2004 @07:13AM (#8813800)
    Ok, here's the deal with jumbo frames.

    Don't worry about them. Only very, very expensive systems will be able to take advantage of them.

    If you have 32/33 PCI, you aren't going to get max throughput from GbE anyway. I've managed to get around 90mbyte/sec using ttcp, which is about 750mbit/s.

    Because the hardware does all the work for you (hardware checksum, interrupt mitigation, etc.), the CPU usage is very low even at that rate. And thanks to polling, the interrupt rate isn't an issue either.

    Your bottleneck will be your PCI bus, plain and simple. You aren't going to get the full 132mbyte/s from 32/33 PCI, period.

    Unfortunately 64bit/66mhz PCI motherboards are somewhat expensive and 64/66 cards are 3-4x the cost of 32/33 ones.
  • by peril ( 11405 ) on Friday April 09, 2004 @07:15AM (#8813803)
    Framesize is a function of hardware capability.

    If you have legacy 10/100 devices plugged into that segment, jumbo GigE frames will NEVER work with the legacy devices. The jumbo frames appear as L2 MAC errors: the preamble, destination, source and length addressing may line up at the front of the frame, but the CRC at the rear will never line up. (Ethernet II frame illustrated below)

    Preamble | Destination MAC | Source MAC | Type/Length | Data | CRC

    This is exactly like MTU's not lining up.

    But anyways, I think there are demonstrations with some workloads saturating a gigE w/o using jumbo frames.

    [snip] from http://sd.wareonearth.com/~phil/net/overhead/

    Gigabit Ethernet with Jumbo Frames
    Gigabit ethernet is exactly 10 times faster than 100 Mbps ethernet, so for standard 1500 byte frames, the numbers above all apply, multiplied by 10. Many GigE devices however allow "jumbo frames" larger than 1500 bytes. The most common figure being 9000 bytes. For 9000 byte jumbo frames, potential GigE throughput becomes (from Bill Fink, the author of nuttcp):
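
    The quoted table is cut off above, but the underlying arithmetic is easy to reproduce. Here's a rough Python version counting Ethernet on-the-wire overhead (preamble+SFD 8, header 14, FCS 4, inter-frame gap 12 bytes) plus plain 40-byte IPv4+TCP headers and ignoring TCP options, so the exact figures will differ slightly from the nuttcp table:

    WIRE_OVERHEAD = 8 + 14 + 4 + 12        # per-frame bytes on the wire besides the payload
    IP_TCP_HEADERS = 20 + 20
    for mtu in (1500, 9000):
        efficiency = (mtu - IP_TCP_HEADERS) / (mtu + WIRE_OVERHEAD)
        print(f"MTU {mtu}: ~{efficiency * 1000:.0f} Mbit/s of TCP payload on the 1000 Mbit/s wire")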

  • Hardware I use... (Score:5, Informative)

    by bani ( 467531 ) on Friday April 09, 2004 @07:33AM (#8813841)
    I've been using GbE for home LAN for about a year now. Here's the hardware I use:

    Switch:
    Linksys Instant Gigabit 10/100/1000 8-port switch [linksys.com]
    I think I paid ~$200 for this.

    Cards:
    Intel PRO/1000 MT Desktop Adapter [intel.com] (~$50 ea)
    Use the e1000 driver in 2.4.x or 2.6.x.
    Netgear GA302T Copper Gigabit Adapter [netgear.com] (~50 ea)
    Use the tg3 driver in 2.4.x or 2.6.x

    The tg3 chipset runs rather hot, the e1000 is tiny and runs cool. I haven't noticed a performance difference between either, and both chipsets run fine regardless of whatever PC I put them in.

    Motherboards with embedded GbE typically use e1000 (if they're good), or Realtek (if they're cheap).

    Jumbo frames:
    See my post on that here [slashdot.org].

    Cabling:
    Hand crimped cat5e. Works fine. One interesting note about GbE, you no longer have to worry about crossover cables -- the GbE spec requires that devices autodetect crossover. You can make all your GbE cables "straight through" cables.

    Do pay careful attention to following strict T568 wiring code though. You can no longer get away with incorrectly wired cables which just happened to work for 100bt. Since all pairs are now used in GbE, your wiring order must be 100% spec.

    Here's some wiring guides:
    http://www.lanshack.com/make-cat5E.asp
    http://yoda.uvi.edu/InfoTech/rj45.htm

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...