Finding the Bottleneck in a Gigabit Ethernet LAN?

guroove asks: "I have a small gigabit ethernet network at home, and I spent a lot of money getting gigabit NICs for all my computers and even bought cat 6 cabling. I only have 3 computers on the gigabit network (a Mac, a Windoze machine, and a Linux box) so instead of getting a switch, I triple NIC'd the Linux box, which I use as a gateway and a file server. After the network was complete, I wasn't satisfied with transfer rates, so I started a transfer of a very large file and found that the transfer rate was topping out at just over 145 Mbps (which is a far cry from 1000 Mbps). I'm wondering now where my bottleneck is. Is it the NICs? Are all gigabit NICs really giving us 1000 megabits per second? Is it the driver? Is it Samba? Could it be that the hard drives aren't fast enough? Does anyone have enough experience with gigabit home networking to know where the bottlenecks are? Does the current PCI technology even allow for bandwidth that high?"
  • by Prowl ( 554277 ) on Monday August 02, 2004 @04:01PM (#9864357)
    tell me the linux machine's not an old 486 you had lying under the stairs...
    • OP: Answers (Score:5, Insightful)

      by Glonoinha ( 587375 ) on Monday August 02, 2004 @11:43PM (#9866695) Journal
      On your best day a random IDE drive is going to read or write 30 megs a second (average, on the fairly high side for anything short of SATA or nice SCSI) for completely sequential data in a large contiguous file; that's 240 megabits maximum throughput at the drive heads, or effectively 'wire speed'. That's assuming you are using relatively new hard drives in all these machines.

      Throw in all the Samba and other protocol overhead, throw in the fact that you probably aren't running P4 3.2GHz boxes, in fact maybe much less, throw in the lack of a dedicated switch and all of a sudden getting 50% of your theoretical peak throughput (hard drive being the limiting factor, not network) isn't too harsh of a reality.

      And it's a 'Windows' box, you stupid fuck. Maybe if Linux users (yea, I'm posting this in Mozilla on a RH9 install) would grow up and learn to spell the word 'Windows', Corporate America wouldn't instantly dismiss Linux users as a bunch of fucking retards. I spend a part of my work day trying to convince my boss that Linux is the choice of a new generation of professionals and every time someone says M$ or 'Windoze' I have to start over from ground zero. If you aren't part of the solution, you are part of the problem.
      • Comment removed based on user account deletion
      • Brilliant man...fucking brilliant. Thank you for restoring my faith in /.

        There needs to be a way to mod you higher...can I buy you a hooker or something ;) ?
      • Kick ass and thanks mods. I'm getting really sick of people thinking they're either clever or funny for purposely misspelling windows or microsoft. Now I can't spell for shit, but I don't find misspelling to be an insult to the intelligence other than of myself.

        Keep up the good work of selling linux for what it is and not what assholes (who think they can actually achieve gigabit ethernet on their home network using 3 nics) do to hurt its reputation.

        It's a sad fact you had to go out of your way to say y
        • with everyone so cool and cynical...

          windoze is a pun, if you cant bring yourself to ignore it and understand the true value of the mea...

          OMG THAT GUY IS SO CLEVER!
      • I find it ironic that you refer to this guy as a "stupid fuck" and then talk about the need to grow up. Your point is valid, but you hurt your argument with your 13-year-old vocabulary. Maybe you need to grow up a bit yourself.
      • windoze is a pun, you nazi. and its not like this is a sales pitch. if your boss does not see that nux is the way then most likely your not expressing things as they are or not giving it that extra zing that the sale pitch needs, how do you think M$ is so popular? You define it, dont let it define you. Really though, what does this guy expect putting 4 giga nics in his machine and then not reaching 'giganet' speeds. linux users grow up? I think the whole fuckin world needs to grow up. YOU DEFINE IT, DO
    • 25 years ago: Finding the Bottleneck in my 4Mbps Token Ring LAN - Hello, even though I'm supposed to be getting 4Mbps, I'm lucky when I can see sustained 1Mbps speeds...

      15 years ago: Finding the Bottleneck in my 10Mbps Ethernet LAN - Hello, even though I'm supposed to be getting 10Mbps, I'm lucky when I can see sustained 2Mbps speeds...

      10 years ago: Finding the Bottleneck in my Fast Ethernet LAN - Hello, even though I'm supposed to be getting 100Mbps, I'm lucky when I can see sustained 10Mbps speeds...

      1 year ago
    • Are your 3 NICs on the same PCI bus? If it's a plain PCI bus then remember the PCI bandwidth limitation.

      If it's PCI-X or similar then ok, look elsewhere.

      I'd go for ASIC based hardware (i.e. GB switch)

      Sorry !
  • by bungeejumper ( 469270 ) on Monday August 02, 2004 @04:07PM (#9864383)
    It is "entirely possible" that the Linux machine is acting as a router, switching all your traffic in C code. Not to mention it is probably sending traffic up and down the PCI bus, once at ingress and once at egress. The lookup of the IP destination address is probably using a whole lot of memory bandwidth, and if it's at all like a regular router, it's probably doing a full IP header Sanity check (using the IP CRC), version number and TTL decrement. After the TTL decrement, you would need to recompute the CRC. I would say the Linux machine is your bottleneck. Unless you could somehow get it configured as an ethernet switch, rather than a Layer 3 router.
    • by Jahf ( 21968 ) on Monday August 02, 2004 @05:31PM (#9864976) Journal
      Agreed.

      While there are a number of Linux based routers out there, none that I know of are used in the Gigabit realm. Even if they are, they at the -very- least have recompiled the kernel to switch on a number of router/gateway optimizations ... and quite possibly contain proprietary network / NIC kernel modules to further gain improvements.

      Unless you have a VERY modern bus architecture (a lot of people using Linux routers do so on old gear), preferably an AMD with hyperthreading (since I doubt you have a non-x86 system or you'd have mentioned it), you will never get close to maximizing not one but -3- Gb NICs.

      Take a look at some of the servers that are out there in the x86 realm. They usually require you to use a 100MHz or 133MHz PCI card to get best results from a Gb ethernet NIC. And if you look at the first generation of x86 servers (say, from 2 years ago) that came with Gb ports by default, looking deep into the benchmarks you often find that they never reached their Gb potential with the built-in ports either. The advantage was that it was still better than 100 Megabit.

      With a hyperthreaded high-speed bus and some kernel tweaks, I would be quite happy if I could get all 3 NICs to stress-test simultaneously at 300-500Mb/each. Heck, I'd probably be happy around the 250Mb range.

      BTW, even a Gb switch, on the home CPE level, is probably never going to send multi-Gb of data (ie, by trying to switch data amongst multiple Gb ports). Often times you are limited to a max of 1Gb total throughput because of the switched backplane. Heck, even then you may max around 900Mb due to network overhead.

      Moral is simply to realize that with all networking products, the real speed is usually significantly less than the rated speed.
      • by Fweeky ( 41046 ) on Monday August 02, 2004 @06:27PM (#9865335) Homepage
        Intel do Hyperthreading, not AMD. The buzzword there is Hypertransport, which significantly ups the speed of memory and device access; a lot of motherboards with Gigabit onboard now attach them directly to the 800/1000MHz Hypertransport bus, which can easily keep up.
          Sorry, thank you ... absolutely right. And I should have known better because the whole time I was desperately wanting to type "the bus technology that DEC developed for the Alpha" :) I definitely think a modern MoBo with Hypertransport and multiple onboard Gb NICs would work, but it is probably not the solution this poster had.

          Since the poster was using additional NICs from the way I read it, Hypertransport still wouldn't help though, so maybe I should have left those pieces out.

          Either way, the poster di
    • ...get it configured as an ethernet switch, rather than a Layer 3 router.

      The poster didn't say how he had the cards connected together. My understanding is that he could make a layer 2 switch by bridging [sourceforge.net] all the ethernet interfaces. It'd cut down on all the IP routing overhead. Still, I'd recommend getting a dedicated Gigabit switch. The PCI bus just wasn't meant to handle this amount of traffic.
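
      A minimal sketch of that bridging setup with the bridge-utils tools; the interface names and address below are placeholders, and most distros have their own init-script way of doing this:

      brctl addbr br0
      brctl addif br0 eth0
      brctl addif br0 eth1
      brctl addif br0 eth2
      ifconfig eth0 0.0.0.0 up
      ifconfig eth1 0.0.0.0 up
      ifconfig eth2 0.0.0.0 up
      ifconfig br0 192.168.1.1 netmask 255.255.255.0 up

      The box then forwards frames by MAC address instead of routing packets, though every frame still crosses the PCI bus twice.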

    • Find out with MRTG (Score:3, Insightful)

      by Bios_Hakr ( 68586 )
      One of the best things you could do is configure SNMP on all 3 boxen. After that, run MRTG to figure out what's happening on the wire. If you made the connectors yourself (as opposed to using factory-made cables), double-check to see if the connectors fall within the Cat 6 spec. How much of the pair is untwisted? How far into the connector is the shield/plenum seated? Is the wire kinked or does it have sharp bends anywhere? Is the wire running next to power? All these things can cause the signal to be degr
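
      A rough sketch of that SNMP/MRTG setup on a net-snmp Linux box; the community string, paths, and address below are only examples:

      # /etc/snmp/snmpd.conf -- allow read-only SNMP queries from the LAN
      rocommunity public 192.168.1.0/24

      # on the box that draws the graphs:
      cfgmaker --global 'WorkDir: /var/www/mrtg' --output /etc/mrtg/mrtg.cfg public@192.168.1.1
      indexmaker --output /var/www/mrtg/index.html /etc/mrtg/mrtg.cfg
      mrtg /etc/mrtg/mrtg.cfg    # run every 5 minutes from cron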
  • Bottleneck (Score:3, Interesting)

    by ArmorFiend ( 151674 ) on Monday August 02, 2004 @04:11PM (#9864415) Homepage Journal
    Well, the way to find the bottleneck is obvious. First try a transfer from Linux to the Mac. Then try a transfer from Linux to the peecee. If both of those are fast transfers, then it's time to start thinking about your Linux box's bus. If one is fast and one is slow, go to town on the slow leg.

    Putting 2 peer links in the linux box seems like it might have been a mistake, since you're now not able to add new computers without buying new nics for the linux box. Buying a hub might have been better, but what do I know? Maybe gigabit nics cost $1 and hubs cost $1,000.
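
    A quick way to run that per-leg test without Samba or the disks in the way is a netcat pipe. This is only a sketch: the hostname is a placeholder, and the flags differ between netcat variants (BSD nc wants 'nc -l 5001', and '-q' may not exist):

    # on the receiving machine, throw away whatever arrives on port 5001:
    nc -l -p 5001 > /dev/null

    # on the Linux box, push 1 GB of zeros straight from memory and time it:
    time dd if=/dev/zero bs=1M count=1024 | nc -q 1 mac-host 5001

    Repeat against each peer; whichever leg is slow is the one to dig into.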
  • Not too shabby. (Score:3, Insightful)

    by Profane MuthaFucka ( 574406 ) <busheatskok@gmail.com> on Monday August 02, 2004 @04:12PM (#9864419) Homepage Journal
    Your speed is also dependent on protocol, driver, and OS overhead. Check those things before you worry about such a simple hardware setup.

    You didn't give any information about your protocol so that leads me to believe that you haven't considered TCP vs. UDP, for example.
  • by missing000 ( 602285 ) on Monday August 02, 2004 @04:13PM (#9864430)
    a gigabit is 128 megabytes [google.com]

    If you are getting 145 megabytes / second, that's damn good.
  • by Student_Tech ( 66719 ) on Monday August 02, 2004 @04:13PM (#9864433) Journal
    I found that unless both machines were of a recent vintage, Samba seems to hit a limit. Example: my current computer, an AMD 2400XP running Linux 2.4.24, to an AMD K6-2 500 running Linux 2.4.20 tops out at about 1 MB/sec on a 100 Mb/sec network. Contrast my current computer (the 2400 one) to a friend's 2600XP running Win2K, where I was seeing about 6-7 MB/sec (and 25% CPU usage...).

    I have found that FTP seems to use the bandwidth up better if you want to test it. Computer to Xbox I can get 7-9 MB/sec on a 100 Mb/sec connection.

    You might also look into some network bandwidth tools that just go to and from memory and are designed for testing network speeds.
  • by _LORAX_ ( 4790 ) on Monday August 02, 2004 @04:25PM (#9864511) Homepage
    Um... how about the obvious: how fast are the hard drives in both computers? 145 Mbps = ~18 MB/s, which is approaching the sustained limit for many ATA100/133 drives these days.

    So I would start there.
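
    A quick sanity check along those lines on the Linux box (the device name and file path are just examples):

    # raw sequential read speed straight off the drive (run as root):
    hdparm -tT /dev/hda

    # or time a large real file, which also exercises the filesystem:
    time dd if=/path/to/bigfile of=/dev/null bs=1M

    If the local read already tops out near 18 MB/s, the network was never the limit.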
    • I agree. "copying a large file only moves at ~18MB/s.. why aren't I getting 80MB/s?!!!1" is kind of a stupid question. If you want to run a speed test, repeatedly copy a smaller file that your linux box can cache in memory, over and over again so you _KNOW_ it's cached. Make sure the file can fit inside whatever linux says your average 'cached' memory load is. Then get and re-get it and see how fast it gets. I'll bet that done right, you can get probably 45MB/s sustained.

      The other thing is that you have
    • Looks like you hit it on the head. I am using an ATA/133 hard drive. I'm actually in the process of setting up a hardware RAID for the hard drives to see if that speeds things up at all.
    • Yep. Never underestimate the ability of limited harddrive speeds to throw a wrench in file transfer speeds. I first ran across this while developing the Linux network driver for LSI's 1Gb & 2Gb Fibre Channel adapters. Spent a little while pulling hair before the whole "IDE drives on either end of a 2 Gigabit link might be an issue" point hit me. I found this to be an issue even while reading from and writing to 10K RPM FC drives. Had to use an FC RAID on both ends before I could saturate the network cap
    • On a motherboard that is 2 years old, copying from drive to drive (large video files), I'll see rates of anywhere from 40-45 MB/s. That's with 7200rpm SATA/PATA drives. Copying from a 5400rpm 150GB drive is a solid 30MB/s.

      Shooting across the gigabit LAN, those rates drop to 15-20 MB/s. That's with cheap, consumer level hardware on both sides, copying to/from a Windows share.

      Still quite acceptable, since the switch could easily be handling multiple streams between different boxes. At least I'm not st
  • by ComputerSlicer23 ( 516509 ) on Monday August 02, 2004 @04:29PM (#9864532)
    You could be running out of disk bandwidth.

    I have several harddrives that top out around 14-20Megabytes per second, which turns into roughly the speed rating you are talking about.

    I doubt you're running out of PCI bandwidth.

    It could be the latency, or that you have a poorly tuned network stack. I know that using NFS, getting 12-15Mbit/sec was considered pretty good given the inherent latency of the protocol.

    I had similar problems no matter what protocol I was using: FTP, HTTP, or scp. What I found was that I needed to use a network speed tool: NPtcp, which is part of the NetPIPE tool set.

    http://www.scl.ameslab.gov/Projects/old/ClusterCookbook/nprun.html [ameslab.gov]

    The other thing is to figure out if your cards support jumbo frames. If they do, it can be a boon to change your MTU, modify specific parameters in your TCP stack, and change the socket options used by your application (specifically, to use a packet size larger than 8k). I'm not sure how to do this under Windows, but I've found it readily documented for Linux via Google searches.

    More information would be more useful. Knowing what chipset it is based on, which drivers you are using, and what OS would be mighty helpful in solving your problem.

    Kirby
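
    A minimal sketch of the jumbo-frame piece of that on Linux, assuming the NIC driver actually supports a 9000-byte MTU (the interface name and peer address are placeholders):

    # bump the MTU on every host on the segment:
    ifconfig eth1 mtu 9000

    # verify that full-size frames really make it across
    # (8972 bytes of payload + 28 bytes of IP/ICMP header = 9000):
    ping -M do -s 8972 192.168.1.2

    If the ping complains about fragmentation, something in the path is still at 1500.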

    • It's actually pretty likely that he *is* running out of PCI bandwidth.

      Three Gigabit/s NICs on a PCI bus ... let's see, the PCI bus is 32 bits times 33MHz, which is almost exactly 1Gb/s, so a single GB NIC could actually saturate the bus all by itself running optimally. Add in the other two NICs, the IDE or SCSI bus, and misc other peripherals, and it's very easy to bottleneck at the PCI.

      This is why most modern motherboards build the GB directly onto the northbridge.
      • Sorry, but I'm going to have to go with 'What is the throughput of his hard drives?' for $400, Alex.

        CPU faster than bus, faster than memory, faster than hard drive. The gigabit NIC fits in there somewhere between memory and bus, and on different systems you can interchange memory and bus for faster / slower - but the CPU is generally fastest, and the hard drive generally slowest.
      • Sure a Gigabit card could completely saturate a PCI bus. I'm well aware that's why they are built into the northbridge.

        However, generally when benchmarking one doesn't actually use both the Gbit NICs (he's a double idiot if he failed to mention this). From the setup, he's got something silly going on. As a general rule, the NIC and the hard drive will need to use up roughly the same amount of PCI bandwidth (they are writing roughly the same amount of data, plus or minus framing/headers for the frames/s

  • by bohnsack ( 2301 ) on Monday August 02, 2004 @04:34PM (#9864581)
    It might be good to start by measuring your network's performance, without hard drives or application software in the loop. I'd suggest using IPerf [nlanr.net] to accomplish this. If you measure less-than-expected performance with IPerf, your problem is with your NICs, switch, or drivers. If IPerf reports OK numbers, start looking at Samba and your hard drives. The bus shouldn't be a problem, because even a lowly 32-bit 33 MHz PCI bus has a theoretical 1.056 Gb/s data rate.
    • by bohnsack ( 2301 ) on Monday August 02, 2004 @05:05PM (#9864808)
      Using IPerf to test your network bandwidth is easy:

      [machine1]# iperf -s
      Server listening on TCP port 5001
      TCP window size: 85.3 KByte (default)
      [ ID] Interval Transfer Bandwidth
      [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec

      [machine2]# iperf -c machine1
      Client connecting to machine1, TCP port 5001
      TCP window size: 16.0 KByte (default)
      [ ID] Interval Transfer Bandwidth
      [ 3] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec

      This, ~950 Mb/s, is around what you can expect from a 1500 MTU GigE network.
    • another example (Score:2, Informative)

      netmon etc # iperf -s
      ----------
      Server listening on TCP port 5001
      TCP window size: 1.33 MByte (default)
      ----------
      [ 6] local xx.xx.xx.xx port 5001 connected with xx.xx.xx.xx port 32793
      [ ID] Interval Transfer Bandwidth
      [ 6] 0.0-10.0 sec 1.10 GBytes 945 Mbits/sec

      fsf_linux root # iperf -c netmon
      ------------
      Client connecting to netmon, TCP port 5001
      TCP window size: 16.0 KByte (default)
      ------------
      [ 5] local xx.xx.xx.xx port 32793 connected with xx.xx.xx.xx port 5001
      [ ID] Interval Transfer Bandwidth
      [ 5] 0.0-

    • The bus shouldn't be a problem, because even a lowly 32 bit 33 MHz PCI bus has a theoretical 1.056 Gb/s data rate.

      As soon as you factor in any overhead at all - or of course any other devices on that PCI bus, like a graphics card or storage - your 32bit 33MHz PCI bus has less available bandwidth than the network connection. Then you have TCP ACKs: full-duplex GbE is 1 Gbit/sec in each direction; 32x33 PCI is 1Gbit/sec total... If you're streaming data between disk and network, of course, you also need to

  • by photon317 ( 208409 ) on Monday August 02, 2004 @04:50PM (#9864697)

    Gigabit ethernet is fast, and it's very easy for your processor, your tcp stack, your driver, your card, your pci bus, etc... to bottleneck at less than gigabit speeds. It's pretty hard to tell you which without seeing the whole setup and analyzing it in place. It's also possible for the tcp protocol itself to bottleneck at a lower-than-gigabit speed if you don't tune various parameters to help it out. You can tell if it's a tcp bottleneck by trying multiple parallel transfers between the same pair of machines and checking to see if the aggregate bandwidth is significantly higher than a single transfer. If this turns out to be the case, you can look at various network tunables like buffer sizes and window sizes. Another related tunable is the MTU of your ethernet network. If ALL your cards (and your switches if you had any) support it, you can turn on Jumbo Frames and push 9000 bytes per ethernet frame instead of 1500, which can make a big difference in transfer speeds over a gigabit network.
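
    A sketch of that parallel-transfer check using iperf, which other posters in this discussion also suggest; the hostname and window size here are just placeholders:

    # one TCP stream, then four in parallel, both with a larger window:
    iperf -c fileserver -w 256K
    iperf -c fileserver -w 256K -P 4

    If the four streams together move significantly more data than the single one, you're looking at a TCP window/buffer limit rather than a hardware one.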
    • if you're using your Linux machine as a gateway then you'll definitely run into a bottleneck at the PCI bus for starters, which happens to be the no. 1 reason why hardware gigabit switches are so expensive: they switch at the hardware level (hence faster) instead of in software
  • by molo ( 94384 ) on Monday August 02, 2004 @05:11PM (#9864828) Journal
    A couple likely bottlenecks:

    1. samba. The microsoft SMB or CIFS protocol is a big inefficient hog. Try transferring with FTP. The data is piped down a TCP stream, end of story.

    2. hard drives. most hard drives can't push a gigabit/second from the platters (let alone write). Check out their sustained transfer speed (not burst cache). Also check out your bus medium. ATA-66 won't push a gigabit.

    3. pci bus. Transferring data down the PCI bus from the disk controller and then back out the PCI bus to the network card means you need a 2x effective bandwidth. PCI can't hit 2 gigabit here. You might get better results with PCI Express.

    Good luck.
    -molo
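
    As a rough way to run that FTP comparison, curl (or wget) will report the average transfer rate while discarding the data; the host and path here are made up:

    curl -o /dev/null ftp://fileserver/pub/bigfile.iso

    If that runs well above your Samba numbers, the protocol overhead is your problem; if it doesn't, look at the disks and the bus.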
    • The microsoft SMB or CIFS protocol is a big inefficient hog.

      Somewhat off-topic, I know, but how do the speeds of different WebDAV implementations compare to SMB/CIFS in terms of efficiency? Do I remember correctly that once the initial header goes across, the rest is just plain packet data with no further negotiation?

      most hard drives can't push a gigabit/second from the platters

      More on-topic now - that's a good point. I can get up to nearly 30MB/s (or VERY roughly 300Mbps) on a 5400RPM drive reading di

      • More on-topic now - that's a good point. I can get up to nearly 30MB/s (or VERY roughly 300Mbps) on a 5400RPM drive reading directly (hdparm -t), so assume even under ideal conditions and a 10,000RPM drive you'll get less than 60MB/s or so (or again VERY roughly 600Mbps = 60% theoretical maximum for Gigabit).

        Bit more complicated than that... higher areal densities on the platters will allow for higher sustained transfer rates. So an older 40GB/platter drive will have a slower transfer rate than the 80GB
        • Even the best HDDs hit only 60MB/s on the outside of the platters. That's a 15K RPM, medium-density (by today's standards) HDD. The best IDE high-density HDD is the Maxtor Maxline 300GB SATA drive, with an outside transfer of only 38MB/s; this is a 7200RPM, 100GB/platter drive. Assuming linear growth in transfer rate in relation to density, even 200GB/platter won't hit 100MB/s @ 7200 RPM. All numbers are from Storage Review's [storagereview.com] performance database.
  • by eht ( 8912 ) on Monday August 02, 2004 @05:20PM (#9864885)
    I just installed gigabit on my home network but sprang for a cheaper switch; the only problem with it is that it doesn't do jumbo framing [wareonearth.com], and here is a list of jumbo-frame-compatible hardware [uoregon.edu]

    To test your link speeds you should not be using Samba; instead use ttcp (Windows version [pcausa.com], Java version [ccci.com], or your favorite distro should have a copy; I know it's in the FreeBSD ports)
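
    A quick ttcp sketch; the exact flags vary a little between ttcp variants, and 'fileserver' is just a placeholder hostname:

    # receiver: sink the test pattern to memory
    ttcp -r -s

    # transmitter: send a generated pattern, no disks involved
    ttcp -t -s fileserver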
  • To effectively use a gigabit pipe, especially for sustained transfers, you need gigabit integrated into the motherboard and a fast RAID or fast Serial ATA. You also cannot expect commodity hardware to route gigabit through PCI, get a switch.
  • Make sure you have jumbo frames enabled on all the machines. Note that you need OS X 10.2.4 or newer on your Mac to use the 9K frames.

    Also, Samba might not be the best choice for Mac Linux transfers. You'll probably be better off using NFS version 3 between the Mac and Linux box. Both machines should speak NFS natively and not require any additional software.
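
    A minimal NFSv3 sketch along those lines; the exported path, subnet, and mount point are made up for the example, and some OS X releases need a reserved-port option (see mount_nfs(8)):

    # /etc/exports on the Linux box:
    /export  192.168.1.0/255.255.255.0(rw,async)

    # re-export on the Linux box, then mount from the Mac:
    exportfs -ra
    mkdir /Volumes/export
    mount -t nfs fileserver:/export /Volumes/export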
    • Make sure you have jumbo frames enabled on all the machines. Note that you need OS X 10.2.4 or newer on your Mac to use the 9K frames.

      What happens if jumbo frames is enabled on the server but not on one of the workstations?
  • I just read an article in this month's CPU (computer power user) about the limitations of TCP on high-speed networks. Apparently, due to the way TCP adjusts to available bandwidth, it can never exceed 450Mbps or so.

    TCP was designed for the 10/100 Mbps ethernet or less. To make effective use of faster networks, you need to use an enhanced version of TCP, of which there are several. I don't think any are mainstream yet, though.

    That aside, as others have pointed out, your PCI bus and hard drives will bottlen

  • This is not a troll. Workstation versions of Windows, like Windows NT Workstation, Windows 98, Windows ME, Windows 2000 Professional and Windows XP, have a crippled TCP/IP stack that makes them move data slower than they really should. Using a "Server" edition of Windows like Windows NT Server, Windows 2000 Server and Advanced Server, or any edition of Windows 2003, will make network throughput markedly faster.
  • by Anonymous Coward on Monday August 02, 2004 @08:45PM (#9865995)

    With the increased bandwidth of Gigabit Ethernet, software routing on generic hardware is severely non-optimal these days.

    • 32-bit/33MHz PCI (which is the nice short PCI slot found in all NON-server motherboards) is limited to 132MBytes/s transfer. Since a full 1Gbit = 128MBytes, you can only get a theoretical 66MBytes/s per GBit port, since ALL traffic has to go back and forth from main memory, and thus has to cross the PCI bus twice.
    • For the high-end 64-bit/66Mhz PCI slots available on server motherboards, you get a theoretical 528MBytes/s performance, which should be enough to run 2 simultaneous connections (even with some of the PCI bus collisions).
    • The above holds even for dual-port NICs, since the traffic has to go back and forth to RAM, and can't just stay on the NIC.
    • The size of the NIC's on-board buffer has a serious impact on performance, as this acts as temporary storage while the CPU deals with the network packet interrupt. If you have a small buffer, then you're going to force a lot of retransmits, as stuff comes in and overwrites the existing data while waiting for the CPU.
    • Remember that for every incoming packet, there is an interrupt request sent to the CPU to deal with the incoming data. A rule of thumb from the Sun Solaris side of the house: you dedicate one full 400MHz UltraSPARC II CPU to just servicing the interrupt requests from a single GBit ethernet card. Translated to the x86 world, that generally means that you'll run at least 25% CPU load on a 2GHz CPU while trying to service one GBit ethernet's worth of network interrupts (see the sketch after this list).
    • If you have NICs which can use Jumbo Frames, these improve performance considerably, as they reduce the total number of packets (and thus, overhead) by a factor of 10.
    • Linux's network stack is not fully optimized for GBit performance. The BSDs are better, but neither have had the obscene tuning that dedicated router/switch stacks have (such as Cisco's IOS).
    • As mentioned above, the non-Server versions of Windows have similar limitations in their network stacks, which seem to limit network throughput to about 200Mbits/s, regardless of hardware. The various Server versions don't have this problem.
    • Remember that you are doing ROUTING, when all you really want to do is SWITCHING. Routing is significantly more work for the CPU, since it involves packet inspection, and not just a MAC address table lookup and reforward.
    You really need to use dedicated switches (as there are hardware ASICs that do this all at near-wire speeds, and eliminate all the potential problems above).
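
    If you want to see that interrupt load for yourself, a rough check on the Linux box while a transfer is running:

    # per-device interrupt counters; run it twice, a few seconds apart,
    # and look at how fast the NICs' counters climb:
    cat /proc/interrupts

    # or watch the aggregate interrupt rate (the "in" column) and CPU time:
    vmstat 1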

  • Over unencrypted AFP (Mac native filesharing protocol) from one Xserve G5 2.0 DP to another, via a $100 Gigabit switch, I got 33MB/s, or almost exactly 2GB/minute.

    I did find that turning on the encryption option blew that number to hell: I didn't play with it long, but it appeared that it cut my speeds to about 10% of that.

    The servers in question didn't have much else going on, etc. Not truly scientific, as that rate was good enough for me. And enough to show me that I should not use encryption when I
  • If all your hardware isn't capable of supporting and configured to use jumbo frames it will be hard to come anywhere close to saturation point.
  • before you went out and bought all that gear?
  • by ledbetter ( 179623 ) on Tuesday August 03, 2004 @12:18AM (#9866845) Homepage
    Give Iperf [nlanr.net] a try. I used it for benchmarking my home gigabit LAN. It's got multiple versions available for many platforms (as well as source code). It generates data and sends it without requiring any hard drive access, thereby taking drive speed out of the equation. This blog site [stanford.edu] also has some more info.
  • PCI / Gigabit NICs (Score:1, Informative)

    by Anonymous Coward
    Most PCs nowadays have the standard 32-bit 66MHz PCI slots and 64-bit PCI. Of course, the 66MHz PCI slot will top out at transfer speeds of ~266MBytes/s. 64-bit slots will of course be a bit faster.

    Worse yet, if you are running these gigabit NICs in a 33MHz PCI slot, you will get less than 133MBytes/s transfer rates across the bus.

    So my advice to you is that you investigate what kind of speeds and slots your cards use. Are they on their optimal slot type? Are they actually using bus mastering?

    You shoul
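
    One way to check the slot and bus-mastering questions on Linux is lspci; the capability strings vary a bit by device, but roughly:

    lspci -vv
    # in the NIC's entry, "66MHz+" under Status means the card itself can run
    # at 66MHz, and "BusMaster+" under Control means bus mastering is enabled.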
  • by John Miles ( 108215 ) on Tuesday August 03, 2004 @03:47AM (#9867432) Homepage Journal
    As a lot of people have pointed out, off-the-shelf PCs aren't a good choice for gigabit Ethernet switching and routing regardless of the OS, and you can't really take advantage of true full-duplex Gigabit Ethernet on a standard consumer PCI bus. Still, you can do better than 145 Mbit/sec.

    I've been using a LinkSys EG008W switch on my home network, and it's a real bargain. It is a true switch (not a hub), costs less than $200, and all eight ports are capable of autosensing gigabit-capable hardware. Not all so-called "Gigabit" hubs are created equal; some of them only work in half-duplex mode, some of them only have gigabit capability on their uplink ports; some of them slow down to 100 megabit/sec if any of their ports are connected to 100-megabit devices.

    The Linksys's big drawback is its fan noise. It is insanely annoying. I owned mine for about 24 hours before I opened it up and dropped the voltage to the fan with a three-terminal regulator IC. I cut a hole in the top to improve the airflow at the lower fan speed, and it's perfectly unobtrusive now. (No, I don't remember what voltage I ended up running the fan at, unfortunately.) If you're either (a) deaf; (b) located at least a couple of rooms away from your network closet; or (c) handy with a soldering iron and indifferent to manufacturer warranties, the EG008W would be an ideal piece of hardware for your situation.
    • Does that switch support jumbo frames?
      • A Google search on "EG008W jumbo frames" suggests that it does not. What effect that has on ultimate throughput, I don't know.

        I do know that I get roughly the same throughput accessing drives on a remote machine (via netbeui) as I do on my local box, so it hasn't been an issue for me. If my NICs were sitting on PCI Express ports, I'd probably be more concerned about it. Given the price of the EG008W, I really don't have any room to complain.
  • by sireenmalik ( 309584 ) on Tuesday August 03, 2004 @03:49AM (#9867440) Homepage Journal
    1. What you are seeing is average rate.

    TCP goes into congestion avoidance and fast retransmit and recovery (for example TCP-Reno). The connection might be touching maximum rate but you are not seeing it!

    2. If your file transfer is over a large round trip time then the TCP rate gets dilated: File-Size / (N * RTT),

    where RTT is the round trip time and N is the number of round trips required to transport the file.

    3. If you are downloading the file from "somewhere out there" then the bottleneck might be "somewhere out there" and not in your setup. Please recall, the bottleneck will cause TCP to decelerate whenever it sees a packet loss.

    2/100 dollars.
  • I only have 3 computers on the gigabit network (a Mac, a Windoze machine, and a Linux box) so instead of getting a switch, I triple NIC'd the Linux box, which I use as a gateway and a file server.

    Why would anyone put 3 gigabit NIC's in a box and route within a 3-node network when you can buy a low-end gigabit switch [cdw.com] for less than $100??? Were you hung over the day you "designed" this??? Also note that using cable of a higher grade than required by the spec typically doesn't give you some magic speed inc
  • get some hardware! (Score:3, Informative)

    by itzdandy ( 183397 ) on Tuesday August 03, 2004 @03:56PM (#9871275) Homepage
    just get a gigabit switch.

    i'm not trying to be a dick here, but your a fucking moron if you think you can use elmers glue and duct tape to build a high speed network! gigabit needs gigabit cards and gigabit switches period, not haveing these is effectively taking the giga out of the bit.

    Secondly, if you're saying that it was cheaper for you to get 5 gigabit cards than it was to get 3 gigabit cards and a 4-port gigabit switch, then a lot of your problem is probably weak gigabit cards. You didn't buy the $12 ones on the internet, did you? Those should be labeled 1/3-gigabit; their processors aren't capable of enough transactions, and some actually offload onto the CPU like some sick "winethermodem".

    I run a gigabit network at my home; I have 4 desktop machines on it, 2 of them with Intel gigabit built into the motherboard and the other 2 with Intel PCI cards. I can easily transfer using NFS at 700Mbps, which sounds fair to me after TCP/IP overhead. My Samba results are a bit less, around 600-650Mbps.

    Also, every one of these machines is an XP1600+ or faster, except for my notebook, which is a Celeron 2.4 and is using a PCMCIA gigabit card from 3Com. The laptop is slower on the network, at about 400Mbps with Samba, which is most likely a limitation of the 3Com card combined with the PCMCIA bus.

    --

    I apologize for cursing, but please read that paragraph again. You need to build things within spec (or above) to get the stated performance; gigabit is not made to be strung NIC->NIC->NIC and routed with standard routing software. Your PCI bus, your NICs, your memory and CPU, and your un-tuned routing are probably ALL adding up to your weak transfer rates.
    • Windows / Samba is not a good benchmark. The protocol is so bloated and Windows does so many things behind your back that you can't get reliable results. Use FTP or raw sockets (netcat).
    • Don't benchmark networks with active Windows systems. Windows floods your net with broadcasts.
    • Don't copy from or to harddisks for benchmarks. Use ramdisks or packet generators. Recent S-ATA disks have a theoretical peak(!) transfer rate of 160 MByte/s, with a much lower average transfer rate. Compared to a ramdisk, this i
  • echo "0" > /proc/sys/net/ipv4/tcp_sack
    echo "0" > /proc/sys/net/ipv4/tcp_timestamps
    echo "3129344 3137536 3145728" > /proc/sys/net/ipv4/tcp_mem
    echo "65536 1398080 2796160" > /proc/sys/net/ipv4/tcp_rmem
    echo "65536 1398080 2796160" > /proc/sys/net/ipv4/tcp_wmem
    echo "163840" > /proc/sys/net/core/optmem_max
    echo "1048560" > /proc/sys/net/core/rmem_default
    echo "2097136" > /proc/sys/net/core/rmem_max
    echo "1048560" > /proc/sys/net/core/wmem_default
    echo "2097136" > /proc/sys/net/core/wme
