Finding the Bottleneck in a Gigabit Ethernet LAN?
guroove asks: "I have a small gigabit Ethernet network at home, and I spent a lot of money getting gigabit NICs for all my computers; I even bought Cat 6 cabling. I only have 3 computers on the gigabit network (a Mac, a Windoze machine, and a Linux box), so instead of getting a switch, I triple-NIC'd the Linux box, which I use as a gateway and a file server. After the network was complete, I wasn't satisfied with transfer rates, so I started a transfer of a very large file and found that the transfer rate was topping out at just over 145 Mbps (a far cry from 1000 Mbps). I'm wondering now where my bottleneck is. Is it the NICs? Are all gigabit NICs really giving us 1000 megabits per second? Is it the driver? Is it Samba? Could it be that the hard drives aren't fast enough? Does anyone have enough experience with gigabit home networking to know where the bottlenecks are? Does current PCI technology even allow for bandwidth that high?"
the gateway? (Score:4, Funny)
OP: Answers (Score:5, Insightful)
Throw in all the Samba and other protocol overhead, throw in the fact that you probably aren't running P4 3.2GHz boxes (maybe much less), throw in the lack of a dedicated switch, and all of a sudden getting 50% of your theoretical peak throughput (with the hard drive, not the network, as the limiting factor) isn't too harsh a reality.
And it's a 'Windows' box, you stupid fuck. Maybe if Linux users (yeah, I'm posting this in Mozilla on a RH9 install) would grow up and learn to spell the word 'Windows', Corporate America wouldn't instantly dismiss Linux users as a bunch of fucking retards. I spend part of my work day trying to convince my boss that Linux is the choice of a new generation of professionals, and every time someone says M$ or 'Windoze' I have to start over from ground zero. If you aren't part of the solution, you are part of the problem.
Re: (Score:2)
Re:OP: Answers (Score:2)
There needs to be a way to mod you higher... can I buy you a hooker or something?
Re:OP: Answers (Score:2)
Keep up the good work of selling Linux for what it is, and not what assholes (who think they can actually achieve gigabit Ethernet on their home network using 3 NICs) do to hurt its reputation.
It's a sad fact you had to go out of your way to say y
Re:OP: Answers (Score:1)
Windoze is a pun; if you can't bring yourself to ignore it and understand the true value of the mea...
OMG THAT GUY IS SO CLEVER!
Re:OP: Answers (Score:1)
Re:OP: Answers (Score:1)
Ask Slashdot: 25 years ago. (Score:2)
15 years ago: Finding the Bottleneck in my 10Mbps Ethernet LAN - Hello, even though I'm supposed to be getting 10Mbps, I'm lucky when I can see sustained 2Mbps speeds...
10 years ago: Finding the Bottleneck in my Fast Ethernet LAN - Hello, even though I'm supposed to be getting 100Mbps, I'm lucky when I can see sustained 10Mbps speeds...
1 year ago
Re:Ask Slashdot: 25 years ago. (Score:2)
Re:the gateway? (Score:1)
If it's PCI-X or similar, then OK, look elsewhere.
I'd go for ASIC-based hardware (i.e. a GB switch).
Sorry!
The Linux machine is acting as a router? (Score:5, Insightful)
Re:The Linux machine is acting as a router ? (Score:5, Informative)
While there are a number of Linux-based routers out there, none that I know of are used in the Gigabit realm. Even if they are, they have at the -very- least recompiled the kernel to switch on a number of router/gateway optimizations.
Unless you have a VERY modern bus architecture (a lot of people using Linux routers do so on old gear), preferably an AMD with HyperTransport (since I doubt you have a non-x86 system or you'd have mentioned it), you will never get close to maxing out not one but -3- Gb NICs.
Take a look at some of the servers that are out there in the x86 realm. They usually require you to use a 100MHz or 133MHz PCI-X slot to get best results from a Gb Ethernet NIC. And if you look at the first generation of x86 servers (say, from 2 years ago) that came with Gb ports by default, looking deep into the benchmarks you often find that they never reached their Gb potential with the built-in ports either. The advantage was that it was still better than 100 Megabit.
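For reference, the raw bus arithmetic: plain 32-bit/33MHz PCI moves 33.3MHz x 4 bytes = 133MB/s, which is about 1.07Gbit/s shared among every device on the bus, while 64-bit/133MHz PCI-X moves 1066MB/s, roughly 8.5Gbit/s.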
With a HyperTransport-class high-speed bus and some kernel tweaks, I would be quite happy if I could get all 3 NICs to stress-test simultaneously at 300-500Mb/s each. Heck, I'd probably be happy around the 250Mb/s range.
BTW, even a Gb switch at the home CPE level is probably never going to move multiple gigabits of data (i.e., by switching data among multiple Gb ports simultaneously). Often you are limited to a max of 1Gb/s total throughput by the switch backplane. Heck, even then you may max out around 900Mb/s due to network overhead.
The moral is simply that with all networking products, the real speed is usually significantly less than the rated speed.
Re:The Linux machine is acting as a router ? (Score:4, Informative)
Re:The Linux machine is acting as a router ? (Score:1)
Since the poster was using additional NICs, from the way I read it, HyperTransport still wouldn't help though, so maybe I should have left those pieces out.
Either way, the poster di
Re:The Linux machine is acting as a router ? (Score:2)
The poster didn't say how he had the cards connected together. My understanding is that he could make a layer 2 switch by bridging [sourceforge.net] all the ethernet interfaces. It'd cut down on all the IP routing overhead. Still, I'd recommend getting a dedicated Gigabit switch. The PCI bus just wasn't meant to handle this amount of traffic.
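For what it's worth, a minimal sketch of that bridge using the bridge-utils package (the eth0/eth1/eth2 names and the address are my assumptions, not the poster's actual config):

# create the bridge and enslave all three gigabit interfaces
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
brctl addif br0 eth2
# bring the physical ports up without addresses of their own
ifconfig eth0 0.0.0.0 up
ifconfig eth1 0.0.0.0 up
ifconfig eth2 0.0.0.0 up
# the bridge device itself carries the box's LAN address
ifconfig br0 192.168.1.1 netmask 255.255.255.0 up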
Find out with MRTG (Score:3, Insightful)
Bottleneck (Score:3, Interesting)
Putting 2 peer links in the Linux box seems like it might have been a mistake, since you're now not able to add new computers without buying new NICs for the Linux box. Buying a switch might have been better, but what do I know? Maybe gigabit NICs cost $1 and switches cost $1,000.
Re:Bottleneck (Score:2)
Not too shabby. (Score:3, Insightful)
You didn't give any information about your protocol, which leads me to believe that you haven't considered TCP vs. UDP, for example.
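A quick way to compare the two, using the iperf tool recommended further down this thread (machine names are placeholders):

# TCP throughput test
[machine1]# iperf -s
[machine2]# iperf -c machine1
# UDP test at a 500 Mbit/s target rate; reports packet loss as well as throughput
[machine1]# iperf -s -u
[machine2]# iperf -c machine1 -u -b 500M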
are you talking about bits or bytes? (Score:3, Insightful)
If you are getting 145 megabytes/second, that's damn good.
Re:are you talking about bits or bytes? (Score:1)
Re:are you talking about bits or bytes? (Score:2)
Re:are you talking about bits or bytes? (Score:1)
Run the following on each of your drives, replacing hdX with the appropriate designator. This ~should be~ your maximum throughput to your NIC, unless you are testing individual drives in a SoftRAID setup.
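(The command itself is missing above; presumably the classic hdparm benchmark, along these lines:)

# -T times cached reads (memory/CPU), -t times sustained buffered disk reads
hdparm -tT /dev/hdX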
Maybe something besides samba (Score:4, Informative)
I have found that FTP seems to use up the bandwidth better if you want to test it. Computer to Xbox, I can get 7-9 MB/sec on a 100 Mb/sec connection.
You might also look into some network bandwidth tools that just go to and from memory and are designed for testing network speeds.
Re:Maybe something besides samba (Score:2, Interesting)
I created a testfile:
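(The dd command didn't make it into the comment; presumably something like this, with the 1MB block size being my guess:)

dd if=/dev/zero of=testfile bs=1M count=500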
500+0 records in
500+0 records out
Copied the file to another box over NFS:
`testfile' -> `/mnt/video/testfile' 40:26:40269100
Re:Maybe something besides samba (Score:2)
Re:Maybe something besides samba (Score:1)
Re:Maybe something besides samba (Score:2)
136 / 60 != 10, so it's not an order of magnitude.
It's 2.27 times as fast, or 56% slower the other way around.
Re:Maybe something besides samba (Score:2)
How about checking the HD's on either end? (Score:5, Insightful)
So I would start there.
Re:How about checking the HD's on either end? (Score:3, Informative)
The other thing is that you have
Re:How about checking the HD's on either end? (Score:2, Informative)
Re:How about checking the HD's on either end? (Score:2)
Re:How about checking the HD's on either end? (Score:4, Informative)
Re:How about checking the HD's on either end? (Score:3, Informative)
Re:How about checking the HD's on either end? (Score:2)
Shooting across the gigabit LAN, those rates drop to 15-20 MB/s. That's with cheap, consumer level hardware on both sides, copying to/from a Windows share.
Still quite acceptable, since the switch could easily be handling multiple streams between different boxes. At least I'm not st
All sorts of issues could be happening. (Score:3, Interesting)
I have several hard drives that top out around 14-20 megabytes per second, which works out to roughly the speed you are talking about.
I doubt you're running out of PCI bandwidth.
It could be the latency, or you may have a poorly tuned network stack. I know that with NFS, getting 12-15Mbit/sec was considered pretty good given the inherent latency of the protocol.
I had similar problems no matter what protocol I was using: FTP, HTTP, or scp. What I found was that I needed to use a network speed tool: NPtcp, which is part of the NetPIPE tool set.
http://www.scl.ameslab.gov/Projects/old/ClusterCookbook/nprun.html [ameslab.gov]
The other thing is to figure out whether your cards support jumbo frames. If they do, it can be a boon to change your MTU, modify specific parameters in your TCP stack, and change the socket options used in your application (specifically, to use a packet size larger than 8k). I'm not sure how to do this under Windows, but I've found it readily documented for Linux via Google searches.
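Under Linux, a rough sketch of that tuning (eth0 and the buffer sizes are illustrative values, not gospel):

# jumbo frames: needs support in the NIC, the driver, and every device on the path
ifconfig eth0 mtu 9000
# raise the ceiling on socket buffer sizes
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
# min/default/max bounds for TCP receive and send buffers
sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"

Applications can then ask for bigger buffers per-socket via setsockopt() with SO_RCVBUF/SO_SNDBUF.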
More information would be more useful. Knowing what chipset the card is based on, which drivers you are using, and what OS would be mighty helpful in solving your problem.
Kirby
Re:All sorts of issues could be happening. (Score:1)
Three Gigabit/s NICs on a PCI bus
This is why most modern motherboards build the GbE controller directly onto the northbridge.
Re:All sorts of issues could be happening. (Score:2)
CPU faster than bus, bus faster than memory, memory faster than hard drive. The gigabit NIC fits in there somewhere between memory and bus, and on different systems you can interchange memory and bus for faster/slower - but the CPU is generally fastest, and the hard drive generally slowest.
Re:All sorts of issues could be happening. (Score:2)
However, generally when benchmarking one doesn't actually use both of the Gbit NICs (he's a double idiot if he failed to mention this). From the setup, he's got something silly going on. As a general rule, the NIC and the hard drive will need to use up roughly the same amount of PCI bandwidth (they are writing roughly the same amount of data, plus or minus framing/headers for the frames/s
Use Iperf to test network bandwidth (Score:3, Informative)
Re:Use Iperf to test network bandwidth (Score:5, Informative)
[machine1]# iperf -s
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec
[machine2]# iperf -c machine1
Client connecting to machine1, TCP port 5001
TCP window size: 16.0 KByte (default)
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec
This, ~950 Mb/s, is around what you can expect from a 1500 MTU GigE network.
another example (Score:2, Informative)
Re:Use Iperf to test network bandwidth (Score:2)
As soon as you factor in any overhead at all - or of course any other devices on that PCI bus, like a graphics card or storage - your 32-bit 33MHz PCI bus has less available bandwidth than the network connection. Then you have TCP ACKs: full-duplex GbE is 1 Gbit/sec in each direction; 32-bit/33MHz PCI is 1 Gbit/sec total... If you're streaming data between disk and network, of course, you also need to
Gigabit ethernet != gigabit file transfer (Score:3, Interesting)
Gigabit Ethernet is fast, and it's very easy for your processor, your TCP stack, your driver, your card, your PCI bus, etc. to bottleneck at less than gigabit speeds. It's pretty hard to tell you which without seeing the whole setup and analyzing it in place. It's also possible for the TCP protocol itself to bottleneck at a lower-than-gigabit speed if you don't tune various parameters to help it out. You can tell if it's a TCP bottleneck by trying multiple parallel transfers between the same pair of machines and checking whether the aggregate bandwidth is significantly higher than a single transfer. If this turns out to be the case, you can look at various network tunables like buffer sizes and window sizes. Another related tunable is the MTU of your Ethernet network. If ALL your cards (and your switches, if you had any) support it, you can turn on jumbo frames and push 9000 bytes per Ethernet frame instead of 1500, which can make a big difference in transfer speeds over a gigabit network.
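A concrete way to run that parallel-transfer check, using the iperf tool mentioned elsewhere in this thread (machine names are placeholders):

[machine1]# iperf -s
# four parallel TCP streams; if the [SUM] line beats a single-stream run by a
# wide margin, per-connection TCP tuning (windows/buffers) is your bottleneck
[machine2]# iperf -c machine1 -P 4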
Re:Gigabit ethernet != gigabit file transfer (Score:1)
Samba, hard drives, pci bus (Score:5, Informative)
1. Samba. The Microsoft SMB/CIFS protocol is a big, inefficient hog. Try transferring with FTP (see the quick test below). The data is piped down a TCP stream, end of story.
2. Hard drives. Most hard drives can't push a gigabit/second from the platters (let alone write it). Check out their sustained transfer speed (not burst cache). Also check out your bus medium. ATA-66 won't push a gigabit.
3. PCI bus. Transferring data down the PCI bus from the disk controller and then back out the PCI bus to the network card means you need 2x the effective bandwidth. PCI can't hit 2 gigabit here. You might get better results with PCI Express.
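For item 1, a quick single-stream FTP pull (server and file names are placeholders; wget prints an average transfer rate when it finishes):

wget ftp://server/testfile -O /dev/null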
Good luck.
-molo
Re:Samba, hard drives, pci bus (Score:2)
Somewhat off-topic, I know, but how do the speeds of different WebDAV implementations compare to SMB/CIFS in terms of efficiency? Do I remember correctly that once the initial header goes across, the rest is just plain packet data with no further negotiation?
most hard drives can't push a gigabit/second from the platters
More on-topic now - that's a good point. I can get up to nearly 30MB/s (or VERY roughly 300Mbps) on a 5400RPM drive reading di
Re:Samba, hard drives, pci bus (Score:2)
A bit more complicated than that... higher areal densities on the platters allow for higher sustained transfer rates. So an older 40GB/platter drive will have a slower transfer rate than the 80GB
Re:Samba, hard drives, pci bus (Score:2)
Re:Bus bandwidth (Score:2)
I.e. 133MBps is about 1067Mbps.
And, yes, "mebi" is the correct prefix here (strictly an IEC binary prefix, not SI). Google for mebibyte vs. megabyte.
ttcp and jumbo frames (Score:3, Interesting)
To test your link speeds you should not be using Samba; instead use ttcp (Windows version [pcausa.com], Java version [ccci.com], or your favorite distro should have a copy - I know it's in the FreeBSD ports).
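Classic ttcp usage looks like the following; the -s flag sources/sinks an internal test pattern so your disks stay out of the measurement (double-check the flags against your version):

# on the receiving machine
ttcp -r -s
# on the transmitting machine (hostname is a placeholder)
ttcp -t -s receiverhost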
Expected performance (Score:1)
Re:Expected performance (Score:1)
Jumbo Frames, NFS v.3 (Score:2)
Also, Samba might not be the best choice for Mac-Linux transfers. You'll probably be better off using NFS version 3 between the Mac and the Linux box. Both machines should speak NFS natively and not require any additional software.
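A minimal sketch, with made-up paths and addresses. On the Linux box, the export goes in /etc/exports:

# /etc/exports: share /srv/share read-write with the local subnet
/srv/share 192.168.1.0/255.255.255.0(rw,sync)

# reload the export table
exportfs -ra

# on the Mac (OS X) side, mount it
mkdir /Volumes/share
mount -t nfs linuxbox:/srv/share /Volumes/share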
Re:Jumbo Frames, NFS v.3 (Score:2)
What happens if jumbo frames are enabled on the server but not on one of the workstations?
TCP is a bottleneck too (Score:2, Interesting)
I just read an article in this month's CPU (Computer Power User) about the limitations of TCP on high-speed networks. Apparently, due to the way TCP adjusts to available bandwidth, it can never exceed 450Mbps or so.
TCP was designed for 10/100 Mbps Ethernet or less. To make effective use of faster networks, you need an enhanced version of TCP, of which there are several. I don't think any are mainstream yet, though.
That aside, as others have pointed out, your PCI bus and hard drives will bottlen
Re:TCP is a bottleneck too (Score:1)
If that's true, why does BIC TCP [ncsu.edu] exist?
According to their FAQ:
Re:TCP is a bottleneck too (Score:3, Insightful)
Basically, with a low-latency network there is a lot of space in the pipe for packets and TCP does n
Re:TCP is a bottleneck too (Score:2, Informative)
Secondly, low latency is what you want. TCP doesn't handle HIGH latency very well. Remember, TCP needs to get ACKs back for every packet it sends. High lat
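For scale, the usual bandwidth-delay-product arithmetic (my numbers, purely illustrative): at a 0.2ms LAN RTT, a gigabit pipe only holds 10^9 b/s x 0.0002s = 200kbit, about 25KB, so default windows cope; at a 50ms Internet RTT it holds 6.25MB, and without window scaling TCP can't come close to filling it.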
Re:TCP is a bottleneck too (Score:2)
Windows of course! (Score:2, Interesting)
Re:Windows of course! (Score:1)
Are we just to take this guy's word for it?
Re:Windows of course! (Score:1)
Re:Windows of course! (Score:1)
Re:Windows of course! (Score:2)
Gigabit switching not for generic machines... (Score:4, Informative)
With the increased bandwidth of Gigabit Ethernet, software routing on generic hardware is severely non-optimal these days.
For comparison (Score:2)
Over unencrypted AFP (Mac native filesharing protocol) from one Xserve G5 2.0 DP to another, via a $100 Gigabit switch, I got 33MB/s, or almost exactly 2GB/minute.
I did find that turning on the encryption option blew that number to hell: I didn't play with it long, but it appeared that it cut my speeds to about 10% of that.
The servers in question didn't have much else going on, etc. Not truly scientific, as that rate was good enough for me. And enough to show me that I should not use encryption when I
jumbo frames (Score:2)
don't you wish you'd done your homework (Score:2)
Iperf - Network Speed Testing Tool (Score:4, Interesting)
PCI / Gigabit NICs (Score:1, Informative)
Worse yet, if you are running these gigabit NICs in a 33MHz PCI slot, you will get less than 133MByte/s transfer rates across the bus.
So my advice to you is to investigate what kinds of speeds and slots your cards use. Are they in their optimal slot type? Are they actually using bus mastering?
You shoul
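One quick check under Linux (the exact flag wording varies by lspci version):

# find your NIC's entry and look for "bus master" in its Flags line
lspci -v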
What you're seeing is the "average rate" and not .... (Score:2)
Use a dedicated switch (Score:3, Insightful)
I've been using a Linksys EG008W switch on my home network, and it's a real bargain. It is a true switch (not a hub), costs less than $200, and all eight ports are capable of autosensing gigabit-capable hardware. Not all so-called "Gigabit" switches are created equal: some only work in half-duplex mode, some only have gigabit capability on their uplink ports, and some slow down to 100 megabit/sec if any of their ports are connected to 100-megabit devices.
The Linksys's big drawback is its fan noise. It is insanely annoying. I owned mine for about 24 hours before I opened it up and dropped the voltage to the fan with a three-terminal regulator IC. I cut a hole in the top to improve the airflow at the lower fan speed, and it's perfectly unobtrusive now. (No, I don't remember what voltage I ended up running the fan at, unfortunately.) If you're either (a) deaf; (b) located at least a couple of rooms away from your network closet; or (c) handy with a soldering iron and indifferent to manufacturer warranties, the EG008W would be an ideal piece of hardware for your situation.
Re:Use a dedicated switch (Score:1)
Re:Use a dedicated switch (Score:2)
I do know that I get roughly the same throughput accessing drives on a remote machine (via NetBEUI) as I do on my local box, so it hasn't been an issue for me. If my NICs were sitting on PCI Express ports, I'd probably be more concerned about it. Given the price of the EG008W, I really don't have any room to complain.
average vs. maximum rate (Score:3, Informative)
1. TCP goes into congestion avoidance and fast retransmit and recovery (for example, TCP Reno). The connection might be touching the maximum rate, but you are not seeing it!
2. If your file transfer runs over a large round-trip time, then the TCP rate gets dilated: rate = File-Size / (N * RTT),
where RTT is the round-trip time and N is the number of round trips required to transport the file.
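For illustration, with made-up numbers: a transfer that needs N = 1000 round trips is gated at 1000 x 0.2ms = 0.2s on a LAN, but at 1000 x 50ms = 50s across a long-haul path, no matter how fat the link is.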
3. If you are downloading the file from "somewhere out there", then the bottleneck might be "somewhere out there" and not in your setup. Please recall that the bottleneck will cause TCP to slow down whenever it sees a packet loss.
2/100 dollars.
3 NICs in a box????? (Score:2)
Why would anyone put 3 gigabit NICs in a box and route within a 3-node network when you can buy a low-end gigabit switch [cdw.com] for less than $100??? Were you hung over the day you "designed" this??? Also note that using cable of a higher grade than required by the spec typically doesn't give you some magic speed inc
get some hardware! (Score:3, Informative)
I'm not trying to be a dick here, but you're a fucking moron if you think you can use Elmer's glue and duct tape to build a high-speed network! Gigabit needs gigabit cards and gigabit switches, period; not having these is effectively taking the giga out of the bit.
Secondly, if you're saying that it was cheaper for you to get 5 gigabit cards than to get 3 gigabit cards and a 4-port gigabit switch, then a lot of your problem is probably weak gigabit cards. You didn't buy the $12 ones on the Internet, did you? Those should be labeled 1/3-gigabit; their processors aren't capable of enough transactions, and some actually offload onto the CPU like some sick "winethermodem".
I run a gigabit network at my home with 4 desktop machines on it, 2 of them with Intel gigabit built into the motherboard and the other 2 with Intel PCI cards. I can easily transfer over NFS at 700Mbps, which sounds fair to me after TCP/IP overhead. My Samba results are a bit less, around 600-650Mbps.
Also, every one of these machines is an XP1600+ or faster, except for my notebook, which is a Celeron 2.4 using a PCMCIA gigabit card from 3Com. The laptop is slower on the network, at about 400Mbps with Samba, which is most likely a limitation of the 3Com card combined with the PCMCIA bus.
--
I apologize for cursing, but please read that paragraph again. You need to build things within spec (or above) to get the stated performance; gigabit is not made to be strung NIC->NIC->NIC and routed with standard routing software. Your PCI bus, your NICs, your memory and CPU, and your untuned routing are probably ALL adding up to your weak transfer rates.
Too many wrong decisions (Score:2)
linux kernel settings to help (Score:2, Informative)