Hardware

1U Apache Servers - Sun or Intel?

odoa4 asks: "What do you think would make a better 1U Apache server: a Sun Netra X1, or a 1U Intel box such as a Dell PowerApp.Web 120 or a VA 1220? This is a difficult question because the Intel boxes are about $1800 apiece and the Sun machine is only $1000. The Sun machine can only use one CPU and it runs at 400MHz, whereas the Intel machines run at 800+MHz with up to two CPUs. Keep in mind that the web servers will be clustered, so the real question isn't so much whether the Intel boxes are faster, but whether they are worth the extra $800 when that money would almost buy another Netra. As far as OSes go, I would use Solaris 8 on the Sun machines and FreeBSD on the Intel boxes. Here are some links to various servers I've looked at: Sun Netra X1, VALinux 1220, and the PowerApp.web 120" Are there other 1U server alternatives that the submitter might do well to look into?
  • by Anonymous Coward
    You're right with the extra networking capabilities, but SUNW does not recommend going with anything faster than a pair of FastEth's in a low-end Netra. (The X1 is the "SX" of the Netra series; have a look at the Netra T1 200's specs for comparison.) The Sun rule of thumb for U-II's is to have 1 MHz of U-II for every Mbits/s you are connected to (ballpark).

    Been there, done that. I was a field engineer at Sun for a while. If your higher-speed networks will see that much traffic, a higher-end machine is recommended, and those may be way too expensive for this comparison.
  • Ethernet is not just RJ45. It's whatever you can encapsulate Ethernet over. Ethernet over SCSI is pretty common. Also, serial isn't a port, it's a communications method.
  • My office has a 1U Penguin Computing box that I think retails more toward the $1000 end of the spectrum, and we've been very happy with it. The Netra X1s have IDE drives, and I have not had great experiences with Sun gear running IDE in production (YMMV). I don't know what the bus looks like on an X1, but I know other Netras are I/O monsters, so that's probably worth considering as well.

    Just don't do VA. They have had severe quality control problems with every product I've seen from them. I saw an order of about a hundred boxes from them come in at a client site with a nearly 50% DOA rate.
  • In reference to your price comparisons among the three 1U servers you mention; the Sun Netra X1 [sun.com] is a single CPU system with IDE drives while the Dell PowerApp.web 120 [dell.com] (aka PowerEdge 1550 [dell.com]) and the VALinux 1220 [valinux.com] are both capable of being dual CPU systems and have SCSI drives. At least on the Dell front, the PowerApp.web 110 [dell.com] (aka PowerEdge 350 [dell.com]) is a closer match in terms of hardware to the Sun Netra X1. I don't know what the performance differences would be given my lack of experience with Sun's hardware and recent versions of Solaris, but the other comment(s) here have some thoughts on that issue.

    Jonathan

  • I'm with you here. You're running web servers. If you're going to buy a ton of them, the performance differences between the two are negligible. However, the one thing that Sun has that I've yet to see a PC have is the console/ok prompt. What's better, the Netra line has a prompt even lower than the ok prompt that lets you do a remote power-on from the console.

    That just makes life a hell of a lot easier. At my last company, we had a rack of 6 Netra T1's, unconfigured, but plugged in and powered off. They were attached to our console server as well. If we had a webserver crash, we'd log into the jumpstart machine, set it up, hit the console on one of the 'downed' machines, power it up, and kick off the install. An hour later we'd have a new webserver ready to go. Then we could wander down to our colocation at our leisure. It was great.

    Anyways, SUN all the way. It'll make your life a lot easier.
  • One problem with using Apache on the Netras is the bloat when you add in mod_perl, so you'll probably need some extra RAM. They only come with half a GB, and extra is not particularly cheap from Sun.
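
    One common mitigation (a sketch with illustrative numbers, not tuned values - per-child sizes vary wildly with your code) is to cap Apache's process count in httpd.conf so the mod_perl children can't outgrow the RAM:

        # httpd.conf - illustrative values only
        MaxClients          40     # e.g. ~10MB per mod_perl child x 40 children = ~400MB
        MaxRequestsPerChild 500    # recycle children periodically to curb Perl memory growth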

    If you're using an app server of some kind there should be no problem. I used a few Netra T1s running Netscape Enterprise Server to hit a Netscape app server backend, and they worked great.

    BTW--I'm not sure I've seen any 1U models that use the 25-pin serial connectors someone suggested above. Generally the serial ports are RJ-45.

    These comments in no way reflect light outside the visible spectrum

  • Yes, RJ-45. You use flat ribbon cable instead of Cat-5 so you don't get cross-talk.

    This comment in no way reflects light outside the visible spectrum
  • Not if you shelled out the money for Sun's C compiler and compiled mod_ssl and Apache and all of its friends with cc -xO5 -xtarget=ultra -xarch=v9, which will give you 64-bit binaries that do some serious ass-kicking relative to the Intel hardware :P
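
    For reference, a rough sketch of such a build using Apache 1.3's APACI configure (the OPTIM variable is Apache's, the flags are Sun Workshop's - verify the exact spellings against your compiler version):

        $ CC=cc OPTIM="-xO5 -xtarget=ultra -xarch=v9" ./configure --prefix=/usr/local/apache
        $ make && make install
        $ file /usr/local/apache/bin/httpd    # should report a 64-bit SPARCV9 executable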

  • I've always felt that two uniprocessor machines are better than one dual-processor machine, especially if you're already considering load-balancing multiple web servers. If you're being charged for rack space, you might consider buying a custom-built box containing two PCs. Here's one [mcmail.com] available in the UK (retailing for just under £2000 for a decent spec).

  • oh, reasons? um, it's slow, it has poor hardware support, poor application support, and quite a lot of the OS is apparently managed differently than solaris on sparc. this is in reference to solaris 8 on intel and solaris 7 on sparc. and there's a world of difference between solaris and aix - very little common ground in practical terms between aix and anything else... i can't see any practical reason to run solaris on intel when there are many OSes with better application support that do a much better job of supporting it. just my $0.02
  • ugh, are you serious? solaris on ia32 blows. i don't really like solaris on sparc, but the ia32 version is shit, to put it bluntly.

    stick with freebsd, or if you must have System V, debian or another linux distro.

  • You really should consider management of the machines. With a Sun, over the serial console you can do _anything_ you need to do with the box, including changing boot devices and other BIOS-level settings. Being able to do almost everything through the console port is a great feature. With x86, you'll need a monitor and keyboard for changes like that, which may mean a trip to the data center. (If you can't tell, I like Suns.)
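
    For instance, from the serial console you can drop to the OpenBoot prompt and repoint the boot device without ever attaching a head (a sketch - device aliases vary by machine):

        ok printenv boot-device
        ok setenv boot-device disk net
        ok boot net - install      # e.g. kick off a network (JumpStart) install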

    /*
    *Not a Sermon, Just a Thought
    */
  • Uhm. Obviously you've never seen a Cisco router.
  • Snap, I think, makes 1U servers, if you're looking into storage.
  • I've been using some obsolete SPARCstations (one SS10/64MB, two SS5/128MB) running Linux for two years as web/mail/FTP servers. Even with the ESP driver issues (5MB/s on SCSI-2 7200RPM drives), they beat both my laptop (P3-650/192MB) and desktop (Celeron-667/512MB) when used as web servers.

    The Intel boxes deliver the first pages faster but then slow down as hits go up; the SPARCs just keep delivering them at a constant speed.

    Another advantage of the Netra is LOM (Lights Out Management), which allows you to totally admin your box remotely, even at the OBP (Sun 'BIOS') level.
    If you care more about power and manageability than speed, go shopping for a Netra.
  • I would suggest you look into Solaris/x86 rather than FreeBSD. I've used both and find Solaris far more stable. Since you are under the 4-CPU limit, Solaris is free (last I checked).
    --
    He had come like a thief in the night,
  • Please... Solaris x86 is no less secure than FreeBSD or Linux. Besides, if you're dumb enough to run a web farm without any sort of protection (ahem, FIREWALL) or intrusion detection, you deserve what you'll get.

    I run a pretty large web farm on Solaris x86. Since we run Java, there's no comparison in performance to anything else out there. Especially because when we started, your choice of usable JVMs was Windows, Solaris Sparc, or Solaris x86.

  • Also take a look at eRacks' WEB server. [eracks.net]


    - Configurable hardware &
    - Preinstalled OpenBSD, Apache, WebMin, Webalizer.

  • We've got a PowerApp.web 120 over here and a brand spanking new Netra T1 AC200.

    The AC200 is a carrier grade system, and it shows. It's (relatively) cheap, it's got a USIIe chip, and it's the most manageable box I've ever touched, even amongst other Sun gear.

    The PowerApp may still have a bad motherboard revision floating around that doesn't work with their own Red Hat install.

    The most important issue to take in, however, is the upgrade cost. Sun will charge you an arm and a leg to upgrade, but in most cases you won't *need* to. These are throwaway boxes.

    The nicest part, though, is the size. The T1/X1 is about half the depth of the PowerApp. For a 2 post rack, the PowerApp is a 2 man job. For the Netra, it's 1 person, no problems. The box is just easier to hold in place while screwing it in.

    Just make sure you have the 25-pin male/female adapter for the Netra if you ever want that serial port to work.

    Of course, once they're up, the hardware's proven stable, and the apps are installed, they're about equal. I don't touch them after they're up, because they both run just fine. But I prefer the Netras based on size, price range, support, and OS. Re: Solaris vs. Linux for anything critical, I'll have to stand by the one with a more stable default filesystem, better manufacturer support, and more thorough testing. I still feel burned from the motherboard incident, regardless of how quickly it got handled. We should never have gear that is not 100% guaranteed to work with the installed OS.
    Raptor
  • I actually have a Netra X1 right next to me. I've been playing around with it, and I really like it. It is completely unexpandable (with the exception of a spot for a second hard drive and places for more memory), but elegant in its simplicity. It has two 10/100 Ethernet connectors and two RJ-45 serial connectors (contrary to one post above, these serial connectors use exactly the same pinouts as Cisco gear, which makes me very happy, since I'm a Cisco sort of person). It also has a "personality card", which contains, at the least, the MAC addresses for the Ethernet cards. Apparently, if you have completely stock configurations, you can swap machines in and out by swapping these cards (although I haven't had a chance to play with this - I've only got one). There are definitely reasons to love this box, or hate it.

    • Everything is integrated. If anything dies, you replace the whole thing.
    • LOM (Lights Out Management). This is very cool. I have not touched the power switch on this box (except once - I wanted to verify that pushing the power switch "off" would initiate a graceful shutdown, which it did). LOM includes a built-in watchdog - you can run this daemon, and if it doesn't talk to the hardware within a certain amount of time, the hardware power-cycles itself, assuming the system has crashed. You can programmatically turn on an amber maintenance LED on the back of it. This can be very handy if you've got racks and racks of these things. (A sample LOM session appears after this list.)
    • As someone pointed out, EVERYTHING can be done through the serial console. You will never need to connect a monitor, keyboard and mouse (not that there's anything to connect them to anyway).
    • They are also extremely easy to set up. Plug it in. Connect the serial cable. At the LOM prompt, type "poweron". It boots, asks the normal questions (what language do you speak, where am I, what's my name, what's my address, etc.), and comes up very happy.
    • If you like Cisco, and want an out-of-band management setup, you can use something like a Cisco 3640 with one FastEthernet and three NM-16A 16-port async modules to manage 48 of them (hey! that sounds like about one rack full). That's a sweet solution, and much cheaper than 48 ports' worth of KVM switches (not to mention that with a KVM you can't just type "#.poweroff" to power down a hung box). And the Cisco Octopus cables should jack straight into the Netras.
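
    The promised sample LOM session (a sketch - command names per Sun's lom documentation, but exact prompts and output vary by firmware revision):

        lom> poweron          # power the box up from the LOM prompt
        lom> console          # attach to the system console
        ...do your work...
        #.                    # escape sequence drops you back to the LOM prompt
        lom> poweroff         # graceful shutdown and power-off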

    Then on the other hand, there are some nice cheap x86 boxes from Einux. [einux.com] They're cute, and fuzzy, and are happy running Linux. And they cost exactly the same as a Netra X1. But I don't have one, so I can't say much about them.

    In the end, there are several questions that you have to ask. Actually, just one: which environment do you like better? There's no real price difference between the Einux boxes and the Netra X1s. The Suns are more easily managed, if you put some resources up front into learning the Sun way of doing things. And there may be a hidden value in how much a PHB will like the Sun name. Or you may have a non-PH Boss who likes the Linux name. The x86 box might have a bit more horsepower (or maybe not).

    In the end, it's really close to a wash. Choose the environment you're more comfortable in. If you're equally comfortable in both, do what I do. Take a coin, flip it in the air, and quick! Before it lands! think to yourself "which way am I hoping it will land?" If that doesn't work, look at the coin, because the two choices really are equal.

  • Yes, but the servers under consideration don't have the US-III. The entry-level Netra isn't even really a serious US-II, but rather a IIe, which was originally designed for embedded applications. In other words, it's the Celeron of the SPARC world, and it doesn't have anywhere near the performance of its big, 8MB-cache brethren.

    You can see SPEC CPU results for a 500MHz UltraSPARC IIe on this page [spec.org]. Yep, that's a base CINT of 165, which is a helluva lot lower than the 307 result of a mere Pentium III at 700MHz (results can be seen here [spec.org]). In fact, no Intel chip has turned in a result nearly that bad since the SPEC CPU2000 benchmark was created in late 1999. Ouch.

    Since these Netras also have IDE drives, you won't be improving performance along that axis either. I'd definitely go with the Intel options as far better bang-per-buck.

    --JRZ

  • (Local disk access because you want silly amounts of swap space to allow caching of many pages in virtual memory).

    Agreed, disk access speed is a factor - but:

    • if you can't fit your working set of served pages in RAM, performance is going to fall pretty drastically, so available RAM is going to be more important
    • you don't need to use swap space to store the files that you're mapping into virtual memory - assuming the web server isn't modifying the static files in any way, they'll still be clean and hence can just be reloaded from their original disk locations if necessary.
  • Memory throughput is the answer to why the SPARC outperforms under load: the UltraSPARC can handle nearly 6x the throughput of the Pentium III. The SPARC will be much more stable under pressure than the Pentium, which will start to cave under a massive number of context switches. However, the SPARC's lesser horsepower will start to smell if you are doing lots of SSL.
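
    If SSL matters, measure rather than guess: RSA private-key signs dominate handshake cost, so OpenSSL's built-in benchmark (assuming OpenSSL is installed) gives a rough per-CPU ceiling:

        $ openssl speed rsa1024    # signs/sec roughly bounds SSL handshakes/sec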
  • The serial ports are RJ-45? WTF?
    The Ethernet is RJ-45; the serial ports are either DB9 or DB25. The only box I have ever seen with any sort of RJ connector for the serial port was a VAXstation.
  • Not the grandest of ideas; it's what brought on PitBull's "great hack attack" (yeah, one of those).
  • We have a stack of Netra T1s (8) and another bunch of Dell PowerApps (6). They are both nice, reliable machines and I have had no problems with either. The T1s we use have higher specs than the PowerApps, but the PowerApps are well specified. I'm assuming the X1s are of a lower spec.

    The Dell boxes have a really nice BIOS: you can configure them to take input and output during the POST through a serial port, which is invaluable. The Sun does the same but uses an RJ-45 connector (which is different to the bigger Sun servers, which is different to Cisco equipment, which is different to Arrowpoints... bleh). The Dell hardware also performs impeccably with FreeBSD; we had a few of them running as firewalls for 6 months with no problems.

    Anyway, they are both good machines - pick the one you're happier looking after.
  • Not on the Dells: plug in a null modem cable and off you go. They have been designed with the datacentre in mind.
  • Solaris is free up to 8 CPUs.
  • by Christopher Thomas ( 11717 ) on Tuesday May 15, 2001 @10:16PM (#220357)
    CPU speed might very well be irrelevant to your decision. If you're making a web server farm, your local disk access bandwidth and network bandwidth may be the limiting factors. (Local disk access because you want silly amounts of swap space to allow caching of many pages in virtual memory).

    The Right Thing to do, given that you have a hefty cluster-building budget to work with, is to buy one of each type of machine, subject them to simulated loads, and see how they perform. Throttle network and fileserver and database server bandwidth to simulate demands from the rest of the cluster when running the test.
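
    If you do run that test, ApacheBench (the "ab" tool that ships with Apache) is a low-effort load generator; a minimal sketch, with a hypothetical host name:

        $ ab -n 10000 -c 100 http://testbox/index.html    # 10,000 requests, 100 at a time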

    Don't have time to run the test? Then I hope you're good at guessing.

    You should also consider hardware/software support availability and cost, and in-house expertise when making the decision, naturally.
  • by fwc ( 168330 ) on Tuesday May 15, 2001 @11:27PM (#220358)
    I just got done building a pair of 1U rackmount servers: one was a 933MHz Pentium III with 512MB of RAM and two 40GB hard drives for just over $1000, and one was a Celeron 733 with 512MB and one 30GB hard drive for around $600.

    These are built from standard components: specifically an Intel CA810EAL motherboard ($141.44), low-profile RAM (Kingmax PC150, $101 per 512MB stick), a standard floppy drive ($10), a Thermaltake low-profile fan ($8), a standard IDE hard drive ($100ish depending on size), and an FC-PGA CPU (Celeron 700s are $77). The case I buy from one of my suppliers for $179, but you can get them for about $225ish on the street.

    The only gotchas are that you need to make sure you use low-profile memory and a low-profile fan designed for a 1U case, but besides that it all just works.

    You can also do dual-processor units if you really need the CPU, but from your post it doesn't sound like you're doing anything CPU-intensive. The motherboard mentioned above is a favorite of the "rackmounters" as it has built-in video AND an Intel Pro/100 Ethernet port, so you don't even need to waste the one PCI slot you get in a 1U case. (Note: there are some 1U cases which will let you use two PCI cards, but they are few and far between.)

    I think what you will find, however, is that the Sun hardware doesn't really seem all that fast compared to the Intel stuff. Besides that, you probably HAVE spare components lying around if something fails.

    ------

  • by MadCow42 ( 243108 ) on Wednesday May 16, 2001 @07:10AM (#220359) Homepage
    >> The Right Thing to do, given that you have a hefty cluster-building budget to work with, is to buy one of each type of machine,

    Depending on the total quantity of units that you're looking at after you figure out which one you want, you might be able to get your supplier to "loan" you one of each for your testing. At the very least, they should let you return one of them after the testing, if agreed to in advance.

    Seeing as you're looking at 1U servers, I'd assume that you're looking to stuff a bunch of them in a rack... For $10k++ in sales, they'll loan you a computer for a month. If not, look around; there are suppliers that will.

    MadCow.

  • by loosifer ( 314643 ) on Wednesday May 16, 2001 @11:21AM (#220360) Homepage
    V. Networking. The key here is that the Sun box only has room for one network card. So, if you need 2 cards, the answer is pretty simple. Both the dell and VA box have room for two. Also, if you want something other than base 10/100 ethernet, you aint gettin it with the Sun box.

    Well, kind of--the Sun box already comes with two NICs built in, so you don't have to even buy a card.

    And you can certainly get other than 10/100 with a Sun box--you can get gigabit Ethernet; hell, you can even get ATM.

    VII. Memory. Sun's memory bug is no longer around, so memory is pretty much even ground here. All of these guys come with 128 Megs on the lowend, which is probably too little depending on your purpose.

    There never was a "memory" problem; it was a cache problem, and it only occurred on the 400MHz procs that came in the UE machines, which the X1 most certainly is not. And you can use standard PC133 ECC RAM in the X1s, so it's cheap to upgrade them.

    And for the record, I would do straight performance tests--nothing else matters in this arena, if these are the only machines you are installing. If you already have a big data center, go with what you have. I personally think Solaris is significantly superior to any other OS I've seen for the data center, especially with Jumpstart, consoles, etc.

  • by selectspec ( 74651 ) on Wednesday May 16, 2001 @06:23AM (#220361)
    I. Operating Systems. Well (here comes the flamebait), there are pluses and minuses to Solaris vs. BSD or Linux. On these simple systems performance won't vary much between the OSes. However, Solaris is the more reliable OS (better NFS implementation, etc.). As far as the OS goes, I'd have to say Solaris, except that the Sun compiler is a complete piece of sh*t and a pain in the ass if you are doing some custom C stuff. Ultimately, if it were me, I'd prefer to work with Solaris here, especially if you're running a Java middle-tier (servlets).

    II. CPU. What matters here is the usage, and my first question would be: are you going to be doing SSL? If so, are you using an accelerator in front of the cluster? If you are, I would lean towards the UltraSPARC IIe. The SPARC won't perform SSL as fast as the Pentiums, and the Pentiums frankly have better cache layouts than these low-end SPARCs. Without an SSL accelerator appliance, the Pentiums will outperform the Ultra by almost 2:1.

    III. Maintenance. This is the tough one. If you have to replace one of the Sun Ethernet cards or hard drives, get out your checkbook (we are talking Sun here). The Intel configs will be cheaper to maintain.

    IV. Hard Drives. Drives in these boxes are really just for caching and virtual memory, since you are going with a cluster. The reality is that IDE is perfect for this use, since your net connections are going to be the primary choke points. And the Sun IDE drive is faster than the VA box's drive. The Dell is SCSI, which is probably not economical for most purposes.

    V. Networking. The key here is that the Sun box only has room for one network card. So, if you need 2 cards, the answer is pretty simple. Both the dell and VA box have room for two. Also, if you want something other than base 10/100 ethernet, you aint gettin it with the Sun box.

    VI. Other Drives. The Sun box doesn't have a CD drive (which is fine considering it's a cluster) or a floppy drive, and the others do. This is a nice cost-saving measure if you're clustering.

    VII. Memory. Sun's memory bug is no longer around, so memory is pretty much even ground here. All of these guys come with 128 Megs on the lowend, which is probably too little depending on your purpose.

    VIII. Service/Support. Dell has very good service and support. Sun has bad support, but they honor warranties. I've never used VA, but heck they own /. (hmmm).

    Summary. I'm sure I'm missing stuff, but this is a start. If you are just serving up flat files, I don't think I'd go with the Sun, but I don't think I'd go with either of your other choices either; I'd probably go with the VA 1120 or the low-end Dell (or look at another vendor like Penguin, etc.). If you are running app servers, then I'd fork based on what the middleware is. mod_perl stuff probably lends itself better to non-Solaris environments (I could be FUDding now), but Java definitely runs better on Solaris (I wonder why?). For C/C++ stuff, the toss-up is your compiler choice. If you are using other open-source stuff, you probably don't want to be using gcc on Solaris (flamebait alert!) for optimal performance, but you'll have a nightmare of compatibility issues if you use the Sun compiler. If you're using third-party binaries, who cares.
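
    For what it's worth, trying both compilers on an open-source package is usually just a matter of pointing configure at a different CC (a sketch, assuming both compilers are on your PATH):

        $ CC=gcc ./configure && make             # the path of least resistance
        $ CC=cc CFLAGS=-xO4 ./configure && make  # Sun's compiler, if you bought it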
  • by smoon ( 16873 ) on Wednesday May 16, 2001 @02:49AM (#220362) Homepage
    IBM makes a nice looking 1U server that includes KVM functionality.

    Compaq has one that uses the same hard drives as their other servers.

    I have to agree with one of the earlier posts -- get one of each and test them. Intel/AMD x86 chips have gotten awfully fast lately. Since the cheaper Sun boxes usually use IDE and PCI, the main differences from the Intel servers will be CPU, SCSI vs. IDE, and the memory subsystem (cache amount and speed, RAM amount and speed). The SPARC might make more sense if you're doing some 64-bit stuff, or using some feature that's better on SPARC, e.g. floating point.

    Since the preceding seems fairly obvious, your question simplifies to: "Is Solaris 8 on slower hardware better than *BSD on faster hardware?" That question is better answered by looking at your in-house Solaris expertise, your need to run commercial apps only available on Solaris, etc.
