Benefits Of Multiple CPUs With Samba? 23

PirateBek asks: "We're considering putting in a sizable Linux box to serve our entire campus via Samba. We can save quite a bit of money by going with a fast (PIII-800) single CPU, or spend more for dual slower (PIII-667) CPUs. Is going with two slower CPUs worth the initial dough, or will the faster single CPU make up the difference? Does Samba tend to like dual CPUs, or does it really matter? Mind you, we're working under a tight budget, and we want to get the most bang for our buck. Any experience/knowledge in this arena would be quite appreciated."
  • The single best reason I can think of to go with a 2-CPU solution is to allow you to run virus scanning on the server. Since you are using Samba, I'm going to go out on a limb here and guess that you are serving up files to Windows clients. I wouldn't think of running a file server for Windows machines that didn't have a good virus scan in place.

    A well thought-out anti-virus solution will have scanning at every possible point (mail gateway, client machines, file server, etc.).

    There are anti-virus solutions that run on Linux and a second CPU would definitely help out if you decide to do that.
  • You no longer have to buy a proprietary x86 OEM solution to get massive, multiple memory and PCI busses for high I/O throughput. The new ServerWorks (formerly Reliant Computer Corporation) ServerSet III-series chipsets are making their way onto retail mainboards from major vendors. They put the disk controller on a different PCI bus than the Ethernet controller.

    For those interested, here is a comparison of various chipsets and their aggregate memory and PCI throughput in MBps, respectively (sorry about the format but ./ doesn't seem to allow tables or the pre/code tag):

    • Intel i440BX/GX (1xSDRAM/1xPCI): 800 / 133
    • VIA Pro/133A (1xSDRAM/1xPCI): 1,033 / 133
    • Intel i810/815 (1xSDRAM/1xPCI): 1,033 / 133
    • Intel i820 (1xRDRAM/1xPCI): 1,600 / 133
    • Intel i840 (2xRDRAM/2xPCI): 3,200 / 400
    • RCC ServerSet IIILE (2xSDRAM/3xPCI): 2,133 / 933
    • RCC ServerSet IIIHE (4xSDRAM/3xPCI): 4,266 / 1,200
    • RCC ServerSet IIIWS (2xSDRAM/3xPCI): 2,133 / 1,200
    • Intel i450NX (4xEDO/2xPCI): 2,133 / 266
    • Intel i450NX-mod (4xEDO/5xPCI): 2,133 / 666

    Again, these are just aggregate throughputs. RDRAM is faster at bursts, but SDRAM has lower latency (which is better), and the old EDO of the i450NX chipset doesn't even come close to what SDRAM and RDRAM can do.

    But note the massive PCI throughput of the ServerSet chipsets from ServerWorks -- due to their 3 independent PCI busses, of which 2 are 64-bit (both 66MHz-capable in the HE/WS). They're so good that even Intel is adopting the 4-way IIIHE for a forthcoming SDRAM server mainboard instead of RDRAM. This is largely because the 4-RIMM-slot i840 can only support up to 2GB of RDRAM, whereas the 16-DIMM IIIHE can support 16GB. As of right now, all Intel can offer for servers is the 3-year-old i450NX, so most tier-1 vendors have opted to work with ServerWorks instead.

    -- Bryan "TheBS" Smith

  • Samba runs more than one daemon to handle different types of requests. Having multiple CPUs lets those daemons run in parallel instead of sharing one CPU. I think the slower dual-CPU setup is the way to go. Ideally, you'd like smbd to remain in cache on one CPU while the other pops around between nmbd and the kernel filesystem layer. Multiple CPUs let processes with multiple threads work more effectively, and that's exactly what you have with a fileserver.
  • grep s.*m.*b /usr/dict/words
    didn't work on my Linux box, but worked on my school account on a Solaris machine
  • more CPUs will make things run faster.

    All other things being equal, you're right.
    But since the poster says his budget is limited (as everyone's is, I suppose), he has to make some trade-offs. Very generally speaking, if most of the tasks are CPU-bound, you'd better get lots of CPU power (more CPUs and/or faster CPUs). If most of the tasks are I/O-bound, you will get better performance with somewhat less CPU and more I/O throughput (fast HD, fast bus, fast NICs).
    In this case, I think I/O is more important than CPU, so I would go for a single-CPU machine with SCSI HDs.

  • Well, our main goal for the samba box will be network storage, network printing, and possibly tie it all in with a web front-end. We may actually be hooking in database information, but all of that will be processed on a much bigger box.
    We'll have about 1500 users, with maybe 150 connected at any one time. The system will likely have a gigabit uplink to the main switch, or at least 100Base-T. The rest of our network is fast and reliable, so we're not concerned about the connectivity.
    As we'll not be using the system for a lot of processing power, I guess the second CPU might be money spent unwisely, unless you think differently based on the information I just added. The VA Linux box will/does have the option of adding a second CPU at another time, so if need be, we can toss that in and recompile.
  • I'd love to get something better than an Intel box to run this on, but our budget is way too tight. I figure the 133MHz bus on the PIII will likely do fine for now - hopefully.
    Hmm...lots of folks are getting rid of their Alpha's cuz they won't run Win2K...hmm...
    /me strokes beard.
  • Well, we're not "Intel Whores" (hehe)...but we like the options that VA Linux gives in rackmount systems. If there are other vendors out there that can provide an AMD solution that is rackmount, and can certify that their RAID solutions will run with Linux, I'd love to find out.
    We'll likely have a RAID5 array of LVD disks, and we're expecting to have about 150 connections at any one time, but Xeon is definitely not a price option.
    I would love to see us move towards an Athlon platform. I've been using AMD for quite some time on my home systems, and I feel quite confident in their CPUs.
  • ...this one is.


    the voice of reason
  • I don't think so.
  • That posted at 2. That's funny since I don't have enough karma to do that. What's going on here?
  • The word list included with Linux, /usr/dict/words, is only 45402 words long.

    Try: html

    They have lists with over 200,000 words. Samba is definitely included.
  • Impossible.

    First, that's not proper syntax. It would have to have been either:

    grep "s.m.b." /usr/dict/words

    ...which gets me...


    Or maybe it was

    grep "^s.*m.*b.*" /usr/dict/words

    ...which gets me...


    No "samba" in my /usr/dict/words. Maybe it was a custom entry, or maybe this is just a myth.
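    For anyone who wants to check, the difference between the two patterns is easy to demonstrate against a tiny hand-made word list (the file path and words below are just for illustration):

```shell
# Build a small stand-in for /usr/dict/words
printf 'samba\nsalsa\nsymbol\nsomber\n' > /tmp/words

# 's.m.b.' requires exactly one character between s/m and m/b, plus a
# trailing character, so "samba" cannot match (no output, exit status 1)
grep 's.m.b.' /tmp/words

# 's.*m.*b' allows any number of characters (including none) between
# the letters, so "samba", "symbol" and "somber" all match
grep 's.*m.*b' /tmp/words
```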

  • Actually, truth be told, even an old Pentium-II 233 can keep a Samba network VERY busy.

    Multiple CPUs can be helpful on a minor scale for a couple of reasons:

    1. Fewer context switches. If you have a LOT of processes running, it's amazing how much CPU time gets lost to context switches. SMP helps reduce this overhead because it reduces the number of processes per CPU. Of course, it also increases context switch time, so you have to balance it.
    2. Better interrupt handling. On most PCs, the interrupt hardware sucks. This is important for a network server because it usually gets hit by quite a few interrupts between disk and network cards. SMP systems tend to have APICs and other fancy interrupt handling mechanisms, although some of the newer single-CPU boards are now incorporating these features.
    3. Better I/O subsystems. In general, SMP motherboards tend to have better I/O subsystems, because with multiple CPUs it's much easier to stress the I/O subsystem. Things like 64-bit 66MHz PCI can really help a server. Of course, if you get a really fancy single-CPU system this is less of a factor.
    4. Faster authentication. Authentication can actually be the bulk of the latency people encounter when accessing data from an SMB server. If you are using the SMB box as a domain controller, then you are going to have a lot of CPU performance lost to authentication. Being a domain controller is heavy-duty work (so much so that Microsoft suggests secondary domain controllers for large networks). This can really slow things down. Of course, the best solution is to have a separate box dedicated to doing authentication, but if not, this is one area that will take advantage of CPU performance.

    Your biggest performance improvements will probably come from doing a few basic things right:

    • Get a(some) very nice NIC(s). For maximum performance get something which will handle the IP stack on board.
    • Go with a fancy disk subsystem. If you really need performance go with Ultra-160 SCSI, preferably hooked up to 64-bit 66MHz PCI. Think about RAID, think about multiple controllers.
    • RAM. Lots of it and as fast as possible. This will increase the disk cache and how many processes/sockets you can have going at once.
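    On the tuning side, most of the Samba-level knobs live in smb.conf. A rough sketch of the usual Samba 2.x-era starting points (the values here are illustrative, not gospel -- benchmark before and after):

```ini
[global]
    ; larger socket buffers and no Nagle delay help bulk transfers
    socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
    ; raw reads/writes let clients move data in large chunks
    read raw = yes
    write raw = yes
    ; opportunistic locks let clients cache files they have open
    oplocks = yes
    ; disconnect idle clients to free up smbd processes (minutes)
    deadtime = 15
```

    Run testparm after editing to make sure Samba accepts the settings.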
  • by BJH ( 11355 )

    You will gain some performance by using SMP, but if you've got the choice, go with a single-CPU machine with the fastest I/O subsystem you can get for the same cash as the SMP box. It'll give you much more bang for your buck.
  • The main question here is do you need to do a lot of data processing on this computer? I doubt it if it's a samba server. All samba does is provide the SMB network transport for data to flow across. Dual CPUs won't make a dent in the speed of the traffic. Spend your money more wisely on quality fast NICs and switches. That's where you'll notice the speed difference! -Pete McDonnell
  • My experience with Samba has been that disk I/O is more important than CPU horsepower.

    Probably the biggest single factor that will contribute to slow performance is going to be disk I/O and latency, esp. if you're going to have a lot of continuous small file operations.

    I'd suggest that you get a single processor, and spend the money on a good Ultra2 SCSI controller. If you need data protection, run RAID 10 - it's more expensive to implement than RAID 5, but it's faster, and you wouldn't need a fancy-schmancy RAID controller with an intelligent cache to keep from suffering a performance hit on writes. Ugh, call me a SCSI nazi, but I wouldn't use ATA for anything more than casual use.

    I didn't see mention made of how many clients you're expecting to service, but a P3 or Athlon in the 700MHz range should do pretty well. Couple that with lots of RAM (1GB) and tune Samba accordingly (bigger buffers = faster access). Of course, faster network access would remove one more bottleneck.

  • Actually, I read about it in the beginning of the O'Reilly book titled "Samba". I forgot the actual pattern match, but I remember trying it and it worked (Solaris). I just checked, and samba isn't in my /usr/dict/words either, but it is in the Solaris /usr/dict/words on my school network.
  • by trims ( 10010 ) on Friday June 23, 2000 @01:46PM (#981462) Homepage

    You're making a file server, and by far your biggest problem is going to be I/O bottlenecks (disk & network), NOT CPU. In fact, I can keep a 100Mbit connection fully flooded with SMB traffic with a lowly dual PPro-200 system. So the second CPU isn't necessary at all. Here are my recommendations (and I've done this before):

    • Use Separate NT Boxes for Domain Controllers. If you're going to be serving Windows clients, it's a lot easier to set up 3-4 cheap PCs (say $1k each, new) to be dedicated BDCs/PDCs. Since you can locate the DCs near (in network terms) the clients, you're going to get much better authorization and login response times than using the Samba server as the authorizer.
    • As a corollary to the above, put a dedicated pipe from the Samba box to the PDC - slap in an extra network card for each box, and give the Samba server a dedicated route to the PDC - this will help speed things up quite a bit.
    • Disk Speed is Everything - As another poster suggested, use a hardware RAID 10 solution. RAID 5 will be OK, but 10 will be much faster. In either case, USE SCSI. Don't even think about IDE. Get 10,000RPM disks if you can, but more 7200RPM disks are better than fewer 10k disks. Go for a minimum of Ultra2 LVD drives, or Ultra3 if you can.
    • Use multiple NICs - Your other bottleneck will be the network. At a minimum, use a different NIC for each major network segment. If you can, use a switch that allows for NIC bonding (like the Cisco 2900s), so you can aggregate the NICs (it's a lot cheaper to bond 4 100Mbit NICs into a 400Mbit channel than to go for Gigabit Ethernet).
    • Get lots of RAM - this will be used for disk caching, which speeds things up a lot. A minimum of 512MB is acceptable, and 1GB might be nice, depending on what else is going on.
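    On the NIC-bonding point, the usual Linux-side counterpart is the kernel's bonding driver plus the ifenslave utility. A rough sketch, assuming 2.2/2.4-era tools; the interface names and address are placeholders:

```shell
# Load the bonding driver; miimon=100 checks link state every 100ms
modprobe bonding miimon=100

# Bring up the logical bond0 interface, then enslave the physical NICs
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

    The switch ports on the other end must be configured for link aggregation (e.g. Cisco EtherChannel) for the bonded channel to work.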

    If the machine is doing pure Samba serving, and you are using external PDCs, get the lowest-speed CPU you can (which will probably mean at least a 500MHz one). It will be more than sufficient. Use the money for your disk subsystem.

    If you want to do something like virus scanning, or PDC work, or even DNS serving, look into a better CPU, especially if you're going to be doing mail on the box (which is a CPU hog). In general, though, I think you'd be better off sticking with a limited-function box and 1 CPU.

    Product Plug: I like Compaq Proliants. They're very Linux-friendly, and they have the nice extras you want in a server. Here's a suggested config:

    • Compaq Proliant ML370 w/ 600Mhz CPU
    • 512MB RAM
    • Integrated 2-channel SmartArray Ultra2 LVD raid controller
    • 2 100Mbit NIC cards
    • 6 Hot-swap Ultra2 LVD 10k RPM 18.2GB hard drives
    • Redundant power supply

    That runs $11k direct from Compaq (figure you get it cheaper from a reseller). You can knock off $2k if you use 7200RPM drives.

    Look for something similar. Having a dual-capable MB is nice, just in case you decide to add crap to the machine later (or re-purpose it).

    Best of Luck.


  • by billcopc ( 196330 ) <> on Friday June 23, 2000 @05:28AM (#981463) Homepage
    Quite simply, if this is going to be just a file server, I'd suggest going for a single CPU. IMHO, SMP is good for big badassed multithreaded apps like 3D modelling/rendering. Network servers are also heavily multithreaded, but they employ many short-lived processes whose overhead shadows the efficiency savings of SMP... too much context switching and extra hassle. Single is simpler, and simple is fast. But what's more important here is the actual data throughput. NICs are very important, as is the I/O bus speed. Wide/LVD SCSI hard drives are ideal, but ATA66/100 is much cheaper and "fast enough". Now, I don't know Samba's performance details, but you probably don't need a Xeon 800 to run this unless you're expecting >200 simultaneous requests. Even at 100Mbps, a P2-450 with a decent amount of RAM should do fine. One thing you should consider (if the guys in charge aren't Intel whores) is the AMD Athlon Thunderbird if you want to get away with it for cheap. Compared to P3s, I find they run just as fine and fast, and price-wise it's an obvious winner, which would leave you with more cash to spend on the truly critical elements: NICs and hard drives.
  • by grammar nazi ( 197303 ) on Thursday June 22, 2000 @09:17PM (#981464) Journal
    Does anyone know how samba was named?

    Give up?
    He used:
    grep s*m*b* /usr/dict/words

    The coolest word in the resulting list was samba.

    First Post?
  • by YASD ( 199639 ) on Friday June 23, 2000 @01:19AM (#981465)

    Single CPU?
    Program will not execute!
    Takes two to samba!
