Hardware Hacking

White Box, Or Big Names for Lower-End Servers?

LazloToth asks: "Those of us who manage small- to medium-size networks face the decision all the time: for the run-of-the-mill web, print, or storage server running on i386 architecture, should we buy HP or Dell, for example, or build it ourselves from commodity hardware and save some bucks up front? My operation of fewer than 50 servers runs a mix of the two. For servers that take more abuse, I tend to buy the proprietary stuff, but not always. I wonder what experiences other admins and managers have had with do-it-yourself servers in a production environment, and whether they feel that white-box servers perform as well -- and last as long -- as anything else? What is the mix of big names to no-names in your network?"
  • Two sides (Score:4, Insightful)

    by PunkOfLinux ( 870955 ) <mewshi@mewshi.com> on Saturday December 10, 2005 @11:36PM (#14231459) Homepage
    There are two sides to the issue here:

    big name - warranty (saving your ass)

    white box - if you build it yourself you know what's in there. It's cheaper. But you don't have a warranty.
    • Re:Two sides (Score:5, Interesting)

      by madstork2000 ( 143169 ) * on Sunday December 11, 2005 @12:35AM (#14231619) Homepage
      Only if it is an on-site warranty, and the turnaround is guaranteed to be short. I have used white boxes because I can usually afford to buy 3 low-end boxes for about the price of a single Dell.

      I run webservers and have about 25 boxes. I buy motherboards from only a couple of manufacturers that I trust. I run commodity hard drives, and use rsync rather than fancy SCSI. I look at the individual warranty on the parts.

      I generally save enough $$ that I can buy the server and a "hot spare" for well under the price of a name-brand box. I have had relatively few hardware issues, and even the ones I did have could be fixed quickly and cheaply. It is nice when no single component costs more than about $150.
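
      For illustration, here is a minimal sketch of that rsync-to-hot-spare replication in Python. The hostname, paths, and SSH setup are assumptions for the example, not anything from the post:

          #!/usr/bin/env python
          # Hypothetical sketch: mirror a few directories to a hot spare
          # over SSH with rsync. Assumes passwordless SSH keys are in place.
          import subprocess
          import sys

          SPARE = "hot-spare.example.com"          # made-up hostname
          PATHS = ["/var/www/", "/etc/apache2/"]   # example directories

          for path in PATHS:
              # -a preserves perms/times/links; --delete mirrors removals
              rc = subprocess.call(
                  ["rsync", "-a", "--delete", path, "%s:%s" % (SPARE, path)])
              if rc != 0:
                  sys.exit("rsync failed for %s (exit %d)" % (path, rc))

      Run from cron, that keeps the hot spare close enough to current that promoting it is just a DNS or IP change.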

      I guess in essence I am warrantying it myself; warranties do no good if the server is going to be down for any length of time and you are dependent on a big company's whims.

      -MS2k

      • Re:Two sides (Score:5, Insightful)

        by toddbu ( 748790 ) on Sunday December 11, 2005 @02:57AM (#14232033)
        In addition to the money you save, you'll also save a lot of time. I've run both high-end Compaq machines and servers that I've built myself, and I've found the latter to be a lot easier to deal with. Here's why:

        • No special drivers to load - Compaq has their own configuration tools and just keeping track of the CDs with the software was a pain. If something goes wrong, is it the driver, your OS, or something else?
        • Inability to debug hardware - You can't drop a proprietary drive in another machine that you have in your office to see if it works. You have to have another proprietary machine to see where the problem is.
        • Touchy hardware - Some might disagree with me, but I found Compaq hardware to be really touchy. When you spend $10-20K on a single box, you expect it to always run. I've found as good or better reliability in machines that you build yourself.
        • Configured the way you want - It can be difficult to build out a proprietary machine just the way you want it. If the vendor is short on parts, you have to choose between getting it now in a different configuration or waiting until they have the part. When you build your own stuff, you buy what you want when you want it.

        Don't get me wrong - there are times when proprietary systems make sense. I don't think I'd ever build my own laptop. But servers are better when you build them yourself.

        • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Sunday December 11, 2005 @03:12AM (#14232074)
          I have a server room filled with HP servers. We lease them for 3 years and then send them back. I have ZERO problems with the software and I get parts (usually drives) replaced in less than 24 hours.

          These servers run 24/7 for the 3 years we have them and give us no problems at all.
        • Inability to debug hardware - You can't drop a proprietary drive in another machine that you have in your office to see if it works. You have to have another proprietary machine to see where the problem is.

          What the hell are you talking about?

          Who uses proprietary hard drives?!??!

          Touchy hardware - Some might disagree with me, but I found Compaq hardware to be really touchy.

          I disagree with you. Compaq makes the most reliable hardware I've ever used. (Unless you're talking about their consumer-grade crap... a
          • Are you talking before or after HP bought out Compaq?
            Because I had a Compaq desktop maybe 5.5 years ago. The thing died under 2 years later and I haven't been able to fix it since.

            However, I now have a Compaq laptop (Evo N410c). I'm not sure if it was made before or after the switch, but I haven't had many problems. Now, I did strip it down and reinstall Windows with my own XP disk, but I haven't had many problems. Now my laptop doesn't have much power. It isn't made for that. The thing lags if you are
          • Re:Two sides (Score:3, Insightful)

            Who uses proprietary hard drives?!??!

            Sure, they use an IDE or SCSI interface. Same size, same mounting points. But they will have Compaq or IBM firmware on the drives. It's possible to substitute generic stuff, but weird things happen.

            I've never held a job where I've been able to play with the cool toys. Desktop support or helpdesk, rollouts and whatnot. But even I know this. Christ.
            • You're saying that hard disks in HP/Compaq or IBM servers have proprietary, vendor specific firmware? Do you have any references to back this up?
              I've looked after servers for more than 10 years, and have *never* come across anything like this. Every one has used regular SCSI disks from Seagate, IBM or similar, and they were readily replaceable with the same-spec unit purchased from any reseller. No need to purchase a specific "Compaq"-compatible drive or anything.
              • You're saying that hard disks in HP/Compaq or IBM servers have proprietary, vendor specific firmware? Do you have any references to back this up?

                This definitely used to happen. DEC was notorious for it. Vendor lock-in, dontcha know, under the guise of "reliability assurance". Their pre-commodity stuff was durable as basalt, though.

                Businesses like Compaq & Sun were cheaper, partly because they used commodity parts. Maybe they changed, though, after they "grew up" and went Enterprise.
              • Proprietary hard disks in HP/Compaq, IBM etc.? Absolute shite. Show me a single iota of evidence this is true in, say, a ProLiant. They're just commodity SCSI disks in a vendor-specific sled.
                • We use Sun Ultra 10s by the truckload at $DAYJOB. The IDE 20GB discs are all parts obtained from Sun. We tried using off-the-shelf IDE discs of the same size (even by the same manufacturer), but the box couldn't see more than 8GB of disc space. Jumper settings were even matched. It could only be the firmware on the drive allowing the Sun and the disc to address >8GB in a somewhat non-standard fashion.
                • I bought two seemingly nice IBM-branded Seagate Savvios. They absolutely refused to work with my HP/Compaq SCSI adapter, and a Seagate rep confirmed that the IBM firmware in these drives requires an IBM SCSI card.

                  I contacted the eBay seller and he confirmed that he tested the disks in an IBM system -- I had to send them back to him :-(

                  I guess IBM does this to justify its markup. But it does happen, so eat your shite back.

              • I don't know about servers, but I've inherited a lot of hardware on my Linux box from Dells. One Dell desktop had a CD-RW/DVD that Dell's OEM Windows XP absolutely refused to recognise as existing; put it in my Linux computer, it works great; put it back, unrecognised. So Dell sends a new one under warranty and abandons the "bad" one on site :). The new one gets here and runs great in the Dell. The tech at Dell said the firmware must have gotten "corrupted" in the "bad" drive, and we didn't tell him it worked in the Linux machine
          • I had to look after a Compaq once; it had a proprietary keyboard. Seriously, the only thing in our server room that wasn't on our KVM switch.

            I think that they assume they sell into Compaq shops, and that the only thing that matters is that their kit interoperates with other Compaqs.
          • Who uses proprietary hard drives?!??!

            I probably wasn't really clear on this. We used to buy racks full of the Compaq hot-plug drives, which you can only get from Compaq. At least that's how it used to be.

            If your *vendor* (ie the guy selling it to you) is short on parts, then they're just as likely to be short on the big name stuff as they are on the white box stuff. And you can always choose another vendor (it's not like big iron stuff where you buy from the manufacturer.)

            Ok, but if I must repl

        • As for proprietary drives, why would you test a drive today?

          If I start getting soft errors, hard errors, warning lights, etc. on a disk drive, I replace it. Period. I have 4-hour support with Dell. They can test the drive AFTER replacing mine. It takes me 4 hours to get the drive, 2 minutes to swap it (I have to walk down a flight of stairs). If the data is sensitive, then when I buy the server I say I want to keep my drive should it fail (doesn't cost much). I then find a way of destroying it (suc
          • As for proprietary drives, why would you test a drive today?

            Scenario #1 - Let's say you think you've got a bad backplane but aren't sure. You've replaced a couple of drives, but the problem doesn't seem to be going away. I don't know about your experience, but most techs will swap a drive and that's it. They'll bring it back to the shop and dispose of it without a second thought. You could be fighting the same problem for months or years without a resolution.

            Scenario #2 - You might like the boxe

        • The real power of HP/Compaq servers comes from the remote management.
          If the box is at a remote location and it freezes, ssh won't do you any good, since you cannot log in.
          With iLO, you get console access remotely. You can even format and reinstall the entire machine if needed.
          • I have done this on white-box machines, albeit not in a "production" environment.

            Use a second machine as a serial console server, then enable the serial console on the "target" server.

            Obviously, this is not slick, and if the BIOS on the MB does not support a serial console you don't have "full" access, but for 99% of the issues you'll run into, a serial terminal on a white-box system combined with a reboot switch will work.

            For the other 1% you'll have to make a trip, but you'd probably have to make a trip
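
            For the curious, a rough sketch of the console-logging side of that setup in Python, using the third-party pyserial module. The port, baud rate, and log path are placeholders; the target box still needs its BIOS/bootloader and a getty pointed at its serial port:

                # Hypothetical console server: log everything the target's
                # serial console prints, so you can see why it froze.
                import serial  # pyserial

                port = serial.Serial("/dev/ttyS0", 9600, timeout=1)
                log = open("/var/log/console-target1.log", "ab")
                while True:
                    data = port.read(256)   # returns empty on timeout
                    if data:
                        log.write(data)
                        log.flush()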
    • How much is that warranty actually worth? In my experience, I doubt it actually saves you anything. Your employees still end up doing all the troubleshooting they'd be doing on a white box, plus the time dealing with the idiots the big name has answering their phones, all of which is on your dime, of course. Oh, and you're paying the big name for the privilege as well. Then your system is offline at least overnight while you wait for them to send you the replacement part, which probably isn't a big deal bec
    • There are two sides to the issue here:

      big name - warranty (saving your ass)

      white box - if you build it yourself you know what's in there. It's cheaper. But you don't have a warranty.


      Unfortunately it is much more complicated than that. While there is an aspect of cost vs. support, there are many other factors to consider; these might include performance, budget, use, estimated time until replacement, environment and company policy.

      Consider a charitable organisation: they may need a server to do tasks, x to z wh
  • We go proprietary (Score:4, Informative)

    by mnmn ( 145599 ) on Saturday December 10, 2005 @11:44PM (#14231471) Homepage
    I bought a no-name 2U server at $600 CDN for the company, which gave us a lot of grief... modems would never work in its PCI slots. So we decided to always go proprietary. All our big servers are IBM xSeries, and we buy the IBM xSeries 206 ($600 USD) for cheaper stuff. We never go Dell on servers.

    I know you can build a superior system that's whitebox. MDG sells machines for cheap with Intel motherboards. You can buy Tyan mobos for whitebox systems.

    However, keep support in mind. Every time something breaks on our IBM xSeries servers, we call tech support. Within 4 hours of calling, the replacement part arrives, and the techie arrives the same or next day and replaces the part, no questions asked. Sure, we've had lots of trouble with our tape drives etc., but it all gets replaced painlessly, with no driver changes and no financial hits.

    Another benefit of name brands is that you can say you've worked on such-and-such servers on your resume. Smart employers wouldn't, or shouldn't, count that, but you do see people asking for MCSE and ProLiant servers, etc. It's even more specific when you get into UNIX... they'll only accept that brand of Unix.
    • However, keep support in mind. Every time something breaks on our IBM xSeries servers, we call tech support. Within 4 hours of calling, the replacement part arrives, and the techie arrives the same or next day and replaces the part, no questions asked. Sure, we've had lots of trouble with our tape drives etc., but it all gets replaced painlessly, with no driver changes and no financial hits.

      I'll second that thought. IBM's hardware service is second to none. Whenever one of our IBM servers (x or p or whatever) fails, we switch to
      • Using both Dell and IBM, I can say that I have experienced the opposite when it comes to support from IBM and Dell. Recently we had a hardware issue with an xSeries server which required an IBM tech to come replace a part on-site. It took three calls to get the incident logged, and then the ticket was passed between three different techs before we found out they were busy with other problems and never showed up that day. The ticket was assigned to a different tech the next day, who could not make it onsite and passed it
        • by phaze3000 ( 204500 ) on Sunday December 11, 2005 @06:11PM (#14235021) Homepage
          Wow, all I can say is you must live in some alternate universe to the one I live in.

          Here, getting Dell to come out and fix one of their servers (even with 'silver' 4-hour cover) is like getting blood from a stone. With the IBM auto-support, I had one occasion where a disk failed and we had the replacement before anyone noticed the problem (incorrectly configured RAID monitoring was the culprit re the lack of notification).

  • by duffbeer703 ( 177751 ) on Saturday December 10, 2005 @11:50PM (#14231490)
    Lights-out management usually works better on IBM, HP or Dell systems. Also, building and fixing machines is a pain and gets time-consuming and expensive, particularly if you get a bad batch of drives or motherboards that requires a lot of fixing.

    If you are running < 10-15 machines, I can see cost savings in going whitebox. But if you are tight on staff and running lots of machines, buying name-brand kit is cheaper in the long run.
  • by linuxwrangler ( 582055 ) on Sunday December 11, 2005 @12:01AM (#14231520)
    The contents of the box are pretty generic for most purposes. Motherboard from Asus/Intel/..., BIOS from Phoenix/Award/..., Processor from Intel/AMD/Motorola/...

    What really makes a difference is the vendor. I have a local guy who I can call and ask for recommendations and advice. If I tell him I want a Dual Opteron with 12 gig RAM, mirrored 74 GB hot-swap drives, dual hot-swap PS and a rack-mount case of my choosing he personally delivers it a couple days later.

    Drive in my raid-array dies? He brings by a replacement the following day.

    Oh, and the only number he gives me is his cell phone. And he answers it. Always.

    With the exception of some specialized telephony equipment (actually a different white-box vendor specializing in that market - Dell et al. wouldn't have a clue about this stuff), he is always my first call.

    I've been using him for years. When the company he worked for ceased operations he started his own and service has remained outstanding.

    I guarantee that nobody who uses the "name brand" machines can come anywhere close to the responsiveness and support that I get from my local vendor.
    • Go on, give him a plug. If he is that good, give him some free advertising.

      I use a mix of whitebox and HP gear. That's about 35 whitebox and 10 HP.

      I only use the HP gear on the high availability servers as it has great Lights Out using iLO advanced. If I could find something similar that worked as well I would probably stop using the name brand gear.

      The white box servers are great. Parts can be obtained at just about any corner computer shop as there is nothing proprietary, it is built exactly as I want it, I
    • I think your 'guy' may defeat the spirit of the question. He's looking at doing the whiteboxing himself, not using a good reseller. Your whitebox guy is doing for you the same thing that Dell, HP or IBM would do, only probably better, because the little guy is 'hungrier.'

      If you didn't have your guy, would you answer the same way, if you were building it all yourself?
    • by fm6 ( 162816 ) on Sunday December 11, 2005 @02:00PM (#14233864) Homepage Journal
      You're not saying "Buy white box." You're saying, "buy from a good white box vendor." And how many of those are out there? From what I've seen, not that many.

      Besides, by depending on this guy, you've created a one-man point of failure. What happens when this guy gets sick or goes on vacation? Where's your immediate response then?

      Even if he never gets sick or takes time off, he's not going to be able to sustain this level of service. His own good reputation will work against him. He's obviously one of those people who has to do everything himself. He's probably not very good at delegating or training, so he's never going to be able to scale up his operation. So unless he starts turning away business and dropping customers when they get too big for him to handle, he's going to get out of his depth.

      If I were in your shoes, I'd want my hardware needs met by a solid organization, one I could count on not just now, but years from now. And that has to do with people, not with where the boxes are assembled.

    • My local vendor is like that as well. Service is much better than, say, Dell, which doesn't do next-business-day support in my state (nor do the other name-brand vendors), so I win right off the bat. Not only that, but he and his wife are more than willing to build to my custom designs, which consistently outperform the name brands.
    • Brand name for me.

      I wish we all knew guys like your vendor. It would make life much easier. I've been burned by very small vendors. It's not that they all make crap. It's just that they tend to vanish one day, or unknowingly give you a motherboard that dies after a year. With a big name-brand vendor I can get an extended service contract that I know will be useful two years from now.

      I own a medium sized retail store. My hours are frequently 12 hours a day six days a week, and 6 hours on Sunday. I need
  • by Frumious Wombat ( 845680 ) on Sunday December 11, 2005 @12:10AM (#14231544)
    We went the white-box route on our first compute cluster, which was then converted to desktops later. Decent machines, but the power supplies weren't up to 24x7 operation and tended to eventually have the fans seize up, causing the PS to overheat. Eventually other components showed that they could have been better, and we cannibalized some machines to keep others running. They were replaced by HP and IBM boxes under 3-year, next-day service contracts.

    The advantage of calling IBM, HP, or even Dell, is not simply the service contract (though your time is worth something), or the fact that their QC is superior to wherever you're getting your parts from, but that they have real engineers, who worry about such issues as optimizing air-flow, choosing proper fan-sizes, etc. Take apart an IBM xSeries 345 some time, then try to decide if you could actually buy parts to build a machine like that, for less than just calling IBM.

    White-box systems may have once made sense, ( I remember a 386/40 AMD-based system that I wrote my thesis on that was still running when I came to visit years later), but with modern components, heat-loads, etc, it pays to invest in properly engineered hardware, backed up by a company willing to service it on short notice. WB hardware may still make sense for desktops, if your environment keeps the data in non-local storage, so that a new desktop can be dropped, booted, and put into production immediately. Never with servers.

    We adopted an informal, simple, but effective policy: do not buy any machine that doesn't come with a three-year warranty, or a hard drive that doesn't come with a five-year warranty.
    • I think this is a pretty sensible route. I happen to have some decommissioned servers and workstations from a couple of the big brands and what's inside shows no hint of anything being an afterthought, and that sort of reliability engineering is what is needed for servers and certain workstation tasks.
    • by BigBlockMopar ( 191202 ) on Monday December 12, 2005 @01:49AM (#14236751) Homepage

      Decent machines, but the power supplies weren't up to 24x7 operation and tended to eventually have the fans seize up, causing the PS to overheat.

      Oh yeah, big time Achilles Heel of the generic PC, assuming name brand mobo and stuff.

      It's just impossible to get a good power supply in a generic PC. ("Good" means built with decent quality components, like the Astecs [astecpower.com] and Lambdas [lambdapower.com] you'll find in proprietary systems. It does not mean "Comes with a ThermalTake Fan and is the choice of 14-year-olds and overclockers!".)

      My best success was based on a simple formula: the power-to-weight ratio. Buy the heaviest supply marked with a given advertised wattage rating.

      Then, for server use, step 2 is to open up the supply and replace all the made-in-Bangladesh-or-Taiwan-or-China electrolytic capacitors with Spragues or Nichicons rated AT LEAST 1.5x the voltage ratings of the capacitors which were in there. And then out comes the no-name 12V fan, only to be replaced with a (loud! expensive! moves a hell of a lot of air! lasts forever!) Comair Rotron [comairrotron.com] 120V fan running directly off the power line. Also gives you a chance to fix the *many* cold solder joints you're likely to find in commodity power supplies. All told, usually under an hour per supply, with the new fan often costing more than the supply!

      Since I started doing this, I haven't had a single failure of one of my white-box server supplies.

    • The advantage of calling IBM, HP, or even Dell, is not simply the service contract (though your time is worth something), or the fact that their QC is superior to wherever you're getting your parts from, but that they have real engineers, who worry about such issues as optimizing air-flow, choosing proper fan-sizes, etc. Take apart an IBM xSeries 345 some time, then try to decide if you could actually buy parts to build a machine like that, for less than just calling IBM.


      We're a small shop and our cooling d
  • by alta ( 1263 ) on Sunday December 11, 2005 @12:28AM (#14231597) Homepage Journal
    With the low price of low end servers you aren't going to save a lot of money going with the low end box.

    Consider this: you buy, or build, a white box. You'll end up with very short warranties provided by different companies, very much a pain in the arse. You may save $200. Now think of how much a $50k/year sysadmin makes per hour (roughly $25 if I did the calc right).

    So, $200/25 = 8 hours...
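
    In Python, the same back-of-envelope break-even (same made-up numbers):

        salary = 50000.0              # $/year
        hourly = salary / (52 * 40)   # ~$24/hour
        savings = 200.0               # whitebox price advantage
        print("break-even: %.1f admin-hours" % (savings / hourly))  # ~8.3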

    Now you've got $200, which is equivalent to 8 hours. Are you going to spend more time on a whitebox than a Dell? I would say so, ESPECIALLY if you're building it yourself. Consider extra time spent finding parts. Extra time putting it together. Then when things fail you have to round up the warranties for individual parts. Probably your warranties won't be as good as what Dell provides. And then repair time. I know as sysadmins we tend to do repairs ourselves anyway, but consider that it may be something you WOULD let an on-site tech repair because you're busy...

    In a home situation, I'd say build your own. When you're off the clock, your time is free. But at work, when time IS money, buy the named stuff.

    BTW, my numbers are BS. Play with your own, I think you'll draw the same conclusions.
  • Just because it has a "name brand" like Dell doesn't mean it's any good. I've been highly unimpressed with their hardware. Just this last week I saw a brand new Dell server in which the hard drives had a plastic cover that made air flow impossible. At another place in the case, there was a fan whose intake was a closed plastic area. Hot hard drives and placebo fans don't improve server quality.

    Whether you do it yourself or buy a name-brand system, make sure that the case is well-designed and that the co
    • >>Just this last week I saw a brand new Dell server in which the hard drives had a plastic cover that made air flow impossible. At another place in the case, there was a fan whose intake was a closed plastic area. Hot hard drives and placebo fans don't improve server quality.

      Could you provide the model numbers of these servers? I'd like to investigate further. While I'm not a proponent of Dell, the airflow on any of the servers we've received in the past 3 years appears to be good.

    • Just this last week I saw a brand new Dell server in which the hard drives had a plastic cover that made air flow impossible.

      Don't remove that plastic cover. It keeps the drive's case from getting unsightly little scratches on it!
    • I haven't seen any recent Dells with this problem. Of course, I have seen a 500MHz PIII Dell server that overheated because they put 3 10,000 RPM SCSI drives in the uncooled hard drive bays, but the new Dell servers are definitely cooled to acceptable standards. It is good to remember that, just like any server, a $500 bargain-basement Dell server probably isn't specced to cool 15 SCSI drives. In general, most of the servers I see are Dells (I am an IT contractor for biotech companies), and most of them ar
  • I support 2 Win Server 2003 boxes, 2 Red Hat 9.0 boxes and about 10 desktops running XP Pro, all built from commodity hardware. I use Abit motherboards, Pentium 4s or Celerons, Intel NICs and ATI graphics. I've had *zero* problems after the initial startup issues of a bad RAM module and a bad ATI board. I also suggest Toshiba DVD/CD writers.
    • Re:Build it yourself (Score:2, Informative)

      by billcopc ( 196330 )
      As a shop tech who assembles roughly a thousand PCs every month, I have to step in and say you have cobbled together the flakiest components ever seen. Allow me to explain:

      Abit is in deep doggy-doo because they're under fire for extremely shoddy quality control and RMA service. I don't carry Abit anymore, because 3 out of 4 boards would come back to us, and then Abit would sit on them for a few months before repairing or replacing them, because they have no idea how to run a business (money and workforce i
      • As a shop tech who assembles roughly a thousand PC's every month

        I have to query this, I'm sorry:

        • 22.14 business days a month
        • 45.17 PCs per business day
        • 5.65 PCs per hour, assuming an 8 hour day

        Are you telling us you assemble ("roughly") 1 PC every 10m 37s? Does that include unpacking? All screws? Cable ties? Boot tests?

        • I'm thinking he probably works on some sort of assembly line and "installs" 1000 motherboards a month (something like that). Also, the components he's described seem to be more suited for desktops, not servers... so I don't really know why his advice applies in this situation.
        • Yes dear: screws, cable ties, and I run mem/disk tests while I start on the next one... it takes me about 10 minutes per system. The main difference is that I work far more than 22.14 days a month; make that 26.57 by your math (six days a week), and 11-hour days. That means I get 25.78 minutes per hour to blast you on Slashdot :)

          Yes, it has fallen to such ridicule. Everyone and their mother are buying $299 PCs to surf the net, copy DVDs and flash their pirated satellite receivers, and we make a hefty pr
      • I agree with your choices to a certain extent, although I do like Intel motherboards when I'm working with Pentiums, aside from the fact that I know how to change certain settings in the Northbridge and Southbridge that seriously increase performance. Where I differ is on the Maxtor HD recommendation. I'm finished with Maxtor. After seeing four of them die in the same week, two of the drives less than a year old and none of them more than a year and a half old, I switched to Seagates. Not the fastest, but t
      • First, you are talking about desktop components, and this discussion is about servers.

        ASUS has just gotten into server-level boards, and they are alright but fairly low-end.

        MSI really shouldn't be used in a server. They make desktop boards.

        DVD-RWs in servers? Now really, again, this is a desktop component. But if we are going to discuss DVD-RWs, why would I pay 2 to 3 times more for Plextor when LiteOn is just as good? Plextor used to be head and shoulders above the rest, but now you're just paying for the name.

        When
  • If it's your machine: buy it, build it, make it exactly what you want for a good price. If it's for a company that you may or may not be working for when it needs support, I'd say buy from someone where it has a model number. It makes it a lot easier to know what it is.

    If you are comfortable building *and supporting* the machines through their life and there really isn't anything out there that's pre-built and has the right price and features it might be worth building. There are times when the market fall
  • In terms of service.

    IBM's 4-hour turnaround on parts and service can save your arse if you have to have all your eggs in one basket. Even a failed hard drive in an array is a looming disaster you'd like to fix now, not next business day. So even at 3am on Saturday, you can have a drive delivered so you can have your array back to 100% by the next business day.

    If you go with whitebox, be sure to build in redundancy so you can lose one, or take it down for extended repairs. Because the money you're saving
    • Unless he's running a hospital, what business needs a four-hour turnaround?
  • Fingers (Score:3, Interesting)

    by tokki ( 604363 ) on Sunday December 11, 2005 @01:33AM (#14231819)
    After I lost my second finger in the sharp guts of a whitebox system while trying to fix it (again), I decided to go with brand-name and I never looked back.
    • You lost your second finger AGAIN? Jeez, wear gloves or something.
    • Spend another $20 next time and get a good case. Whitebox doesn't mean "buy the absolute cheapest shit possible"...
    • Actually, I've lost more blood and skin, albeit no fingers (yet!), to Dell and HP/Compaq machines to date. When designing/building/buying a white box, the design of the case is near the top of the list of decision factors, as I frequently do maintenance and repairs myself, and always after the warranties expire.
  • by binaryspiral ( 784263 ) on Sunday December 11, 2005 @01:37AM (#14231832)
    I work in a fairly large datacenter, where I help support many of our colocated customers' equipment, some of which we sell and maintain, some of which they purchase and colocate. I've seen a good mix of generic servers that were custom-built because there were no pre-built options available. But when it comes to support, there are few options when things go down.

    The motherboard company blames the RAM, the RAM company blames the RAID card manufacturer, the RAID card people say it's a bad firmware version on the motherboard... two hours later, the server is still down. Who's going to let us swap out a motherboard just to see if it works?

    I don't see a price advantage to whitebox servers compared to modern server hardware from the big names. Anyone who's just looking at the price tag is fooling themselves.

    Dell's hardware is unimpressive; I'll give a nod to the previous responder who mentioned that. And storage subsystems are still insanely priced. Even with the evolution of SATA for slower mass storage, cost/MB is still too high with these subsystems.

    Beware of a name brand's inexpensive servers. Some of the rock-bottom units are cheap, but they lack some of the basics like RAID, hot-swap drives, and expandability... On the bright side, even if you go with these cheap units, you'll still have service and support from a major player.
    • That really depends on your skills. No hardware company or vendor questions my diagnoses, and yes, I do have the tools to do it right. I have no problem tracking down the exact cause of a problem, and I keep enough spare parts to do a temporary fix to hold until the replacement shows up, except for motherboard failure, although I'm seriously considering having a spare on hand for my future servers provided I don't go name-brand on those. The reason I mention that qualification is that HP and Sun seem to have woken u
    • Okay, explain this: "Even with the evolution of SATA for slower mass storage".

      How is SATA "slower mass storage"? What's faster? You will have trouble convincing me that a large SCSI array is faster than a large SATA array given the same interconnect. Once you hit an array of 4 drives or more, individual drive speed is no longer an issue, as the connection between your system and the array is the bottleneck, not the drive speed. With enough cache and a battery backup on your RAID controller, the speed of your
  • Manageability (Score:3, Informative)

    by micron ( 164661 ) on Sunday December 11, 2005 @01:48AM (#14231854)
    The main difference you get with HP, IBM, Sun vs the rest is the manageability of the hardware.

    A generic box fails, or has an intermittent failure, and sometimes you are left scratching your head figuring out what is wrong. The better-designed gear will tell you that "DIMM 2 has been throwing ECC errors for the past couple of days", which gives you a place to look. In the generic box, you are replacing all the RAM sticks.

    I don't see a whole lot of difference between a Dell and a whitebox.
    • Yup, agreed. We have a few Ultra 5s and 10s running as servers (dev boxes mainly). We occasionally get "Red State Exceptions" which Sun can't diagnose (it's the CPU, memory or motherboard; a lot of use that is...). However, the "real" servers are generally pretty good at giving out good diagnostics, saying e.g. CPU2 had a parity error, DIMM x had an uncorrectable memory error, etc.

      Basically, to answer the question: what will you do when the server breaks? With a server from IBM, Dell or whoever, you give them a

  • Middle tier (Score:3, Insightful)

    by Piquan ( 49943 ) on Sunday December 11, 2005 @01:51AM (#14231860)

    For the Big, Important Servers -- customer-facing web servers, the product db backend, major fileservers, etc. -- IBM, HP, NetApp (for fileservers), etc. seem to be the way to go. Use somebody who's built a name for themselves in the enterprise through service, not marketing; Dell still doesn't know how to support those needs.

    But for intermediate servers -- internal web servers, testing boxes, etc. -- you can go with a smaller company. It's still worth going with a company rather than DIY: the company deals with fixing servers every day, has the parts on hand, etc. Your organization may have great people, but the guy who is constantly building servers for a living is going to beat you on service.

    The smaller companies, like OffMyServer (blatant plug for a company who's done well by my employer), can meet your needs without breaking the bank. We have dozens of servers in my department alone, and we just couldn't afford to put a big HP contract on each of them.

    ObDisclaimer: Speaking for myself, not my employer, my own opinions. Not affiliated with any of the above companies that I know of, other than that my company buys from all of them.

  • by Bishop ( 4500 ) on Sunday December 11, 2005 @02:25AM (#14231931)
    I have had good experiences with HP, IBM, and Sun support. All have sent out technicians to fix problems promptly. Dell support is alright, but you need to twist their arm sometimes.

    For rackmount gear, go with a name brand. I have had nothing but trouble with generic white-box rackmount gear. Recently a stack of 20 Antec cases was 1/4" too high to fit in the industry-standard rack.

    For non-rackmount servers I will go with HP/IBM/Sun if I want SCSI or similar server features. For really low-end stuff I might go with white box, but only if the hardware budget is an issue, or if I need a specialty box with specific hardware. If I go with a white box I always use higher-end components, so there isn't much of a price difference anyway.

    The biggest issue I have had with white-box machines is that the hardware was not designed to run 24/7, and it fails. Despite what the tweakers think, most white-box server cases have poor heat management. Adding more fans is not the solution when the hard drives sit in a dead zone of low air movement.

    And again the support from HP, IBM, and Sun is really nice.
    • I'll second IBM support. We bought the original xSeries 325, early enough that some of our machines were actually release candidates rather than official hardware (IBM was back-ordered at the release, and the choice was take the betas or wait two months; I thought it would take that long to get the cluster retooled for them anyway, so I took the betas). IBM treated them identically to the rest of the order, and when I called in an issue, including something vague along the lines of "eth0 refuses to ne
  • parts replacement (Score:3, Insightful)

    by chinakow ( 83588 ) on Sunday December 11, 2005 @02:37AM (#14231975)

    One of the questions that you should answer for yourself is: "If this server is down, how much time will it take for the boss to get pissed?" If the answer is less than one day, then get a name brand and a service contract that guarantees a fix. But from the sounds of things, the answer might be more along the lines of multiple days, so as long as you can make it work, white box might be a better idea in the long run, if you can handle 3 days minimum for a parts replacement.

    Just my thoughts on the issue.

  • Do both !! (Score:3, Insightful)

    by DrSkwid ( 118965 ) on Sunday December 11, 2005 @06:40AM (#14232463) Journal
    I buy second-hand name-brand servers.

    • This is actually not a bad idea, and one I've looked into just for myself to have some 'project servers' to play around with at home. I'm not involved in datacenter operations or procurement, so I'm not sure what the issues would be around getting them for corporate use, but it seems like they could be an option for second-line stuff, or for an organization that was on a very limited budget.

      You'd want to do your research into what you were buying, though. I've heard that there are some models of servers aro
      • You no longer have warranty support. This means that should you need spares, you're going to struggle to get them quickly. It can make a lot of sense, but in other ways you're getting the worst of both worlds: the lack of support of white boxes, and the expense of running named servers.
        It can work, but you need to weigh up the pros and cons - and be sure that you're qualified to do so.
        My money's on new, named boxes, replaced every 3 years, unless you're really small or really cash-poor.
  • You take the four-year-old Dell Optiplexes within your organization that would otherwise just go to a salvage auction. You install Linux or a BSD OS on them.

    'Big Name' at lower-than-white-box prices. Voila!
  • I bought a few white boxes from a no-name vendor. Service was horrible (this is my experience only; yours may vary), and I ended up spending more than I saved in time. It took me 4 weeks to get a new HD for the machine. With IBM or Dell, it is delivered via Next Day Air at a minimum, usually by courier within 4 hours. The time you spend screwing around with the cheap servers will quickly exceed the money you saved. There is also the fact that the cheapies never quite fit in the rack right, they don't have
  • I think the general consensus around here would be to go with Dells, IBMs, etc.

    We use Dells, partly because they fully support the IPMI protocol. It's really nice to be able to remotely control the machines, see trend statistics, etc. using IPMI.
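
    As an illustration, a small sketch of that kind of remote control by shelling out to the stock ipmitool CLI from Python. The host and credentials are made up, and "lanplus" assumes the BMC speaks IPMI 2.0:

        import subprocess

        def ipmi(host, user, password, *args):
            # Wraps: ipmitool -I lanplus -H <host> -U <user> -P <pass> <args>
            cmd = ["ipmitool", "-I", "lanplus", "-H", host,
                   "-U", user, "-P", password] + list(args)
            return subprocess.check_output(cmd).decode()

        print(ipmi("10.0.0.50", "admin", "secret", "chassis", "power", "status"))
        print(ipmi("10.0.0.50", "admin", "secret", "sensor", "list"))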
    • Pretty much every server-level machine we sell can be upgraded to support IPMI 1.5 or 2.0, and we use commodity boards. The upgrade price depends on the motherboard manufacturer but generally ranges from $40-140. Dell is offering nothing special here.
  • The price point of a server includes far more than the hardware in the box. I'd say the hardware itself is the least costly part of the package. Others have already mentioned the warranty that comes with an HP or Dell box. As important though is the software and support that comes with the box. For example, having the HP management agents installed on a server is a necessity if you need to support more than a few boxes. HP's Integrated Lights Out feature is a huge plus, especially in remote environments (re
  • Print server: if something craps out and it takes me a couple of days to get replacement parts, nobody is going to have a fit. I use a white box, or salvage whatever we've got. Saves money.

    Production database: Something dies. Vendor technician is there in 3 hours with a replacement part, and you're back up in 4 hours.

    For random office machines, we use whatever we've got around, or buy something cheap. For production, it's all IBM or Sun.

    For developer laptops, we use IBM. Prices are high. Specs are low. But t
  • by dr_leviathan ( 653441 ) on Sunday December 11, 2005 @01:37PM (#14233737)
    We buy hundreds of white 1U pizzaboxen from SiliconMechanics.com every month (not only are they white, but they are also blank -- we net-boot Debian GNU/Linux onto them ourselves). SM has an excellent record for replacing broken parts, although we're never in an emergency when something breaks, since we deploy backup hardware for everything. If something breaks we can switch to the current backup, start converting a spare machine to be the new backup, and then take our time getting the broken hardware fixed; it's all under warranty.

    All of our vanilla services: mail, web, and even database are on white boxen from SM. We have some black box stuff for heavy mass storage.
  • If you have enough warm spares (and staff), then generic servers should be OK to get by with. The fact that you have no service agreements should be transparent to the users.

    If you can't afford the extras, it's hard to beat the on-site service you can get from the big names.
  • Stealing from a post I wrote earlier this year on a similar subject, I will agree with the folks pushing for name brand hardware instead of hand-building each machine:

    Resist the urge to buy/build one-off servers because they are cheap. Over the life of the machine, the $300 one-off computer that some kid built in his garage is going to cost you way more than the savings over a single standardized platform.

    One person can maintain 300 machines if they are all exact clones of each
  • by seifried ( 12921 )

    Sun x2100 - $675 for a barebones system; just use whichever SATA drive you prefer and brand-name PC3200 DDR400 RAM (I use Kingston mostly), and voila: an Opteron-based box with 2 hard drives (there's an onboard RAID card as well), dual gigabit ethernet (Broadcom, very nice), up to 4 gigs of RAM (4 slots), lights-out management, a service light, great airflow, and a serial port that allows BIOS access as well as OS access (most whiteboxes won't do that). PCI-E slot for a RAID card/fiberchannel/ethernet/etc. Oh and that

  • "run-of-the-mill web, print, or storage server running on i386 architecture"
    You don't have any old PIIs or PIIIs lying around?
    We often retire old desktops to print or small web servers at my office. We usually have more of them than we know what to do with. Now, storage is a different story. If a print server goes down you probably didn't lose anything. For storage I would want a good white box or a name box.
    • >>For storage I would want a good white box or a name box

      That is what a SAN is good for... put a cheap box out there as a server and have 2 or more high-reliability storage devices on your network that all the servers use for storage.

      Use one of the various Linux iSCSI targets and initiators (Microsoft has iSCSI target and initiator software for Window$ also), along with LVM and software RAID, to create some cheap SAN devices consisting of 2 or more SATA drives, a beefy power supply (i.e. not the
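
      As a minimal sketch, here is the software RAID + LVM half of that recipe driven from Python. The device names are examples only and the commands are destructive; the iSCSI export step is left to whichever target implementation you pick:

          import subprocess

          def run(*cmd):
              # Echo the command, then execute; raises if it fails.
              print("+ " + " ".join(cmd))
              subprocess.check_call(cmd)

          # Mirror two SATA drives, then carve LVM volumes from the mirror.
          run("mdadm", "--create", "/dev/md0", "--level=1",
              "--raid-devices=2", "/dev/sdb", "/dev/sdc")
          run("pvcreate", "/dev/md0")
          run("vgcreate", "vg_san", "/dev/md0")
          run("lvcreate", "-L", "100G", "-n", "lun0", "vg_san")
          # /dev/vg_san/lun0 is then the block device to export as a LUN.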
  • We use Optiplex GX110s - 500MHz, 256MB RAM. They're cheap, really easy to get hold of second-hand, and Linux runs fine on them. They're so cheap we just buy twice as many as we need and use the others for spares.
  • Try finding a ready-built server with an ARM or PWRficient chip in it... Whitebox has the nice full feature set, so it's a good platform for running a server off. That said, I may still buy RHEL because the support is excellent. All the same, it's not worth subscribing for up2date; I find yum a better alternative.
    • You realize that he's talking about white-box computers (custom built with commodity hardware) and not Whitebox Linux, right? I thought the same thing when I read the title of the article, but the actual question makes it clear.
  • This has been beaten to death before. I recalled this one, and can probably find you even more hits.

    The thread:

    http://ask.slashdot.org/article.pl?sid=05/01/09/0145234&tid=98 [slashdot.org]

    My take:

    http://ask.slashdot.org/comments.pl?sid=135424&cid=11301686 [slashdot.org]

    Quote:
    I wonder what experiences other admins and managers have had with do-it-yourself servers in a production environment, and whether they feel that white-box servers perform as well -- and last as long -- as anything else?

    99% of the time as an admin, and per
  • One way to save money on brand-name servers is to vary the maintenance contracts. I have an Oracle production, test and development environment. The production environment gets gold service, even though it's a RAC cluster, because if something breaks I want it fixed now! The development and test environments have next business day. If we lose a whole RAC node in test, we have to wait at most three days (box goes down on Friday before a three-day weekend) before we get it fixed, which we can live with. Going to NBD from gold servi
