Hardware Technology

How Would You Build a Datacenter?

InOverOurHeads asks: "Some of my coworkers and I are building a new datacenter for our company. We're a growing startup; we have about 50 servers now and expect to have about twice that before too long, so building to grow is key. Now that we're about $15,000 into the project, it is looking and feeling more and more like we are way over our heads. We have 4 racks wired to a single 20-amp circuit. Our UPS is at 90% load and we only have 10 machines on it. We have all of our cooling on one side of the server room, where it is about 60 degrees; the other side of the room, where the servers exhaust, is about 30 degrees warmer, so it appears that we have some convection problems with only a handful of the machines on right now. We're realizing that there is a lot more to building a datacenter than racking servers. What else have we missed?"

"On the positive note, we have a really nice overhead wire rack, that's looking good and all of our wiring is really tight looking; all the colors match, all the cables are labeled, they are all the right length, etc.

Are there any guides or how-tos on this? Since we're going to bite the bullet and tell the boss that we messed up, we want to try to correctly measure the rest of the work involved in making it work. What happens when the UPS is at 100% load, and how do Dell servers react to being underpowered?"

  • Hmph (Score:5, Funny)

    by the Man in Black ( 102634 ) <jasonrashaadNO@SPAMgmail.com> on Monday October 20, 2003 @08:49PM (#7265906) Homepage
    If the state of IT in this country is any indication, your best bet is to fire everyone and outsource your needs to India.

    But maybe I'm just bitter.
  • There must be somebody out there who does this stuff for a living.
    • A quick summary of your situation looks like this:

      1. Hire an expert and do what s/he tells you.
      2. Co-locate with a reputable site with excellent SLAs.
      3. Continue on your obvious death spiral, and spend the cash on a great big party when you get fired (I give you about 6 months tops).
  • Buy an existing one. (Score:3, Informative)

    by lostindenver ( 53192 ) on Monday October 20, 2003 @09:04PM (#7266006)
    I know of four for sale in the Denver area going for 1/3 to 1/2 what it cost to build one. They include redundant backbone connections, power backup, and great locations.
    The best thing is they are oversized; sell the extra space as a COLOC.
    • sell the extra space as a COLOC

      Sell it to whom? Currently, supply far exceeds demand which is why those datacenters are for sale for a third or a half of what they cost to build.

      I think that at this point, setting up in-house datacenters is a waste of resources. The glut of available space makes it easy to find deals on space. Admittedly, there are situations where things must be kept in-house but this article didn't specify if that was a concern.

  • by adamy ( 78406 ) on Monday October 20, 2003 @09:06PM (#7266024) Homepage Journal
    Have no single point of failure. Multiple UPSes, network connections inside and outside, routers, firewalls, switches, etc. If anything goes down, you need to be able to replace it as quickly as possible.

    Are you in an earthquake zone (the Bay Area)? If so, make sure 1) the building is earthquake retrofitted, and 2) the racks are all bolted to the walls such that a little shake-up doesn't turn into a shakedown.

    Make sure you are getting enough power to the building. Have generators in case power goes out. The UPS should only keep things going long enough for the generators to kick in.

    Off-site backups, of course. It is hard to beat the bandwidth of a station wagon full of storage. Daily backups should be moved out of the building; I'd suggest FireWire/SCSI hot-swappable hard drives, but there are many ways to solve this problem. Longer-term backups should be geographically out of disaster range (east coast to west coast, ideally).

    Did I mention redundancy? Make sure you have a duplicate of everything.

    OK, you have it built? Now test it. Kill the power and see if the UPSes can hold it long enough for the generators to kick in. Now do it again, but pull out one generator.

    Get one of those devices that allows you to remotely power cycle your machines as well, in case one locks up.

    Have a back door (i.e., a dial-up) to get into your data center unless you are going to have it manned 24/7. This will keep you from coming in at 2 AM when a router blows.

    That's all off the top of my head. If I am wrong, please point out where, as the alternative viewpoints will be quite helpful.

    • If I am wrong, please point out where, as the alternative viewpoints will be quite helpful

      Well, how about cost, to start with? It's a little difficult because the original poster didn't really mention budget. But if "$15,000 in" denotes a large portion of the budget, maybe things like firewire backups are going to be out of the question. I suppose if you're worried about all of the servers being totally destroyed it's worth doing, but is it really necessary to have off site backups?

      I agree having a gen
      • I suppose if you're worried about all of the servers being totally destroyed it's worth doing, but is it really necessary to have off site backups?

        It's been said that there are two types of people in the world: those who believe in off-site backups and those who will.

      • I suppose if you're worried about all of the servers being totally destroyed it's worth doing, but is it really necessary to have off site backups?

        It is absolutely necessary if you want to protect your data. There are just too many things that can go wrong, and offsite storage isn't *that* expensive.
        The ideal is to have your offsite storage at a company that does just such things, but even an employee's home will do.

        On the other hand, you can ask the question of what happens if the building is destroyed a
        • ...if you're going to try to keep the company going, then some sort of offsite backup is essential.

          That's a great point and I totally agree. The problem I have is that it sounds like the original poster is pretty short on cash. Something about thinking $15,000 was a lot of money. I work for Intel, and I probably have about that amount of hardware at my desk right now, so I'm maybe a bit jaded (BTW I'm not trying to brag, a tray of processors and memory isn't really that exciting).

          The best analogy I ca
    • the racks are all bolted to the walls such that a little shake-up doesn't turn into a shakedown.

      Although bolting the racks to the wall or floor will prevent them from falling over, it'll transfer the shock of any movement of the building right into the rack and your equipment. You should look at installing some form of isolation platform [worksafetech.com] instead.

      • Buy a tire. Rip it into squares about 3 inches on a side. Put a bolt through the rack brace, through two pieces of tire, and into the wall. Tighten it down until the rubber just starts to pucker. This provides a nice amount of isolation for about $10... used tires with holes can be had on the cheap.
    • Regarding the generators... I work for a company that must maintain a large LAN, a call center, and a DNCS (digital network control system). Our generators are hard-piped to natural gas; they'll also run on diesel fuel if needed. Natural gas may sound a bit extreme, but consider this: in June we had a surprise storm -- not a tornado exactly, but a large 'team' of microbursts destroyed approx 200-thousand telephone poles (and it took the utilities with it, of course). Our main DNCS location was with
      • One word - Blades

        It's not a panacea. If you want more processor power, you pay for it with more power consumption and air conditioning. Usually in colocations you have 2x20 amp outlets. If you use blade servers, you end up filling from one third to half the rack and your power consumption reaches 100%.

        If you do it yourself (in the company's office), the costs skyrocket because of UPSes and additional air conditioning.

        If tasks are not CPU-intensive you may be better off with SPARC-based blades or Int

  • Quickie... (Score:3, Insightful)

    by BrookHarty ( 9119 ) on Monday October 20, 2003 @09:06PM (#7266025) Journal
    Don't run everything off one UPS. You already stated you have one UPS at 90% load as it is. That's one point of failure, and the money you saved just caused a horrible outage and pissed-off customers.

    Sounds like you already know this; plan on secondary backup, AC, and make sure you have terminal servers. (Using Unix, right?)
  • More Infrastructure! (Score:3, Interesting)

    by JLester ( 9518 ) on Monday October 20, 2003 @09:13PM (#7266054)
    You're definitely going to need a lot more AC and A/C.

    You'll need at least one 20-amp circuit per rack, in my opinion, if you go with standard 110V battery backups. For that many servers, though, you might be better off going with 220V service and the high-voltage battery backups that APC and others offer.

    Our old server room started small with a couple of servers and quickly outgrew the AC service and A/C. We heated our whole office in the winter with just the servers! Maintenance ran several new 20-amp circuits for us until we filled up the breaker box.

    When we moved a couple of years ago, I made sure to get the new room right before we moved any equipment. We have central A/C fed by several outside units, plus a very large auxiliary unit just for the server room. 20-amp circuits are run every few feet on separate breakers. I don't know what type of servers you are using, but large multi-processor boxes with redundant fans, RAID, etc. use LOTS of power. We use mostly Compaq DL380s; two of them will draw 50% of an APC 1400R battery backup. For extended runtimes, we made sure not to overload the battery backups, so only two servers per backup with no more than two backups per 20-amp circuit. It's slightly overkill, but I got very frustrated in our old location and resolved to never blow breakers or kill battery backups this time.

    Since you're just getting started, it will pay off big time in the long run to get everything set up right before you start loading in servers. It makes things so much easier to just plug in without having to call maintenance or a contractor to upgrade services.

    Jason
  • When my company set up a server lab last year, we had to have the room retrofitted with a dedicated air conditioning system... high-volume flow with multiple output points around the room and its own inflow duct as well, including filtration, etc.

    We supplement it with several large industrial size fans for increasing air circulation to the racks.

    Make sure there is plenty of space behind your racks and enough access points to that rear area to allow air to circulate.

    More UPS units... two per rack at leas
    • Actually, adding fans can cause a lot of problems. You are going to increase air mixing, which means that the computers never get really cold air. Not much of a problem if you only have a couple servers in a cabinet, but when your density gets higher it won't work as well.
  • Realize you are in over your head and use an existing datacenter.

    They have already done all the things you have done, all the things you are realizing you forgot, and all the things you will not find out about until everything fails.

    You can now move on with your core business, whatever it may be.

  • by cookiepus ( 154655 ) on Monday October 20, 2003 @09:17PM (#7266086) Homepage
    The question makes no sense outside of context. If what you're doing is really important, you would:

    Have cool air coming up from the floor into each machine (and it'd be freezing)

    Have a diesel generator with at least a few days' worth of fuel, and contingency plans for obtaining more fuel. It should be feasible to run on generator power indefinitely in case of a major power outage.

    Redundant data centers. Have data mirrored between them for complete redundancy in case of any disaster striking one of the locations.

    Obviously I am being facetious. If you had a budget and the necessity to do something on that scale, you wouldn't be asking /. However, it would be worthwhile to specify the degree of importance and the budget of this project.

  • by stienman ( 51024 ) <adavis@@@ubasics...com> on Monday October 20, 2003 @09:18PM (#7266094) Homepage Journal
    On a positive note, we have a really nice overhead wire rack that's looking good, and all of our wiring is really tight-looking; all the colors match, all the cables are labeled, they are all the right length, etc.

    "I don't care whether it works boys, just make it look good for the investors, ok?"

    *shudder*

    -Adam
  • by bitty ( 91794 ) on Monday October 20, 2003 @09:27PM (#7266151) Homepage
    On the off chance that this isn't a troll ($15 grand for 50 name brand servers, plus racks & UPS? It don't smell good here), did you guys do any research ahead of time, or did you just start slapping shit together? If I were your boss you would be FIRED, because you obviously have no clue as to what you are doing.

    You obviously didn't talk to an HVAC engineer, because they would have set you up properly from the start, getting accurate heat output ratings for all the present and planned equipment (3.413 BTU per watt, they tell me). Then, looking at the placement of the racks, they would have had the cold air pumped in at the right places. This may be correctable by tossing a box fan or two in the room to move the air. Not fired yet.
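
    A minimal sketch of that heat-load arithmetic (the 50-server count and 300 W per server below are illustrative assumptions, not figures from the post; 3.413 BTU/hr per watt is the standard conversion, and one "ton" of cooling is 12,000 BTU/hr):

      # Rough heat-load estimate: watts of IT load -> BTU/hr -> tons of cooling.
      BTU_PER_WATT_HR = 3.413   # 1 W dissipated ~= 3.413 BTU/hr
      BTU_PER_TON = 12000       # 1 ton of cooling = 12,000 BTU/hr

      def cooling_load(servers, watts_per_server):
          watts = servers * watts_per_server            # total IT load in watts
          btu_per_hr = watts * BTU_PER_WATT_HR          # heat you must remove
          return watts, btu_per_hr, btu_per_hr / BTU_PER_TON

      watts, btu, tons = cooling_load(servers=50, watts_per_server=300)  # assumed figures
      print(f"{watts} W -> {btu:.0f} BTU/hr -> {tons:.1f} tons of cooling")
      # 15000 W -> ~51,200 BTU/hr -> ~4.3 tons, before lights, people, or growth.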

    You also didn't consult an electrician or electrical engineer, because they would have given you some very sound advice on your load and UPS needs. You are so woefully underpowered and under-UPSed that your servers will lose power long before they have a chance to shut down properly. By the way, do you mean to say that you have 50 servers and only 10 are on UPS? FIRED!

    "But look on the bright side, Mr. Boss, the cabling is all neat and pretty!" FIRED for spending more time on the cabling than you did planning this disaster!

    Do the tech industry a favor, and go get a job shoveling shit at the zoo.
    • The parent sounds like flamebait but I agree with it all. From your description, you really are in over your head as your name would suggest.

      If you want my opinion, you need to get someone on board who's done this before, because it sounds like you're spending too much time on the wrong things.

      It sounds like this datacenter is going to be critical to your business, such that if it goes down or fails in any way it will really affect your bottom line. Why, then, are you trying to be he-man and do everything
    • The techs were fired/offshored. This is the marketing guy trying to do this.
  • by LunaticLeo ( 3949 ) on Monday October 20, 2003 @09:30PM (#7266163) Homepage
    I assume you are not going to build a "datacenter", but rather build out a computer room. Given that, here is what I have to say.

    Don't build a computer room or datacenter. Find a commercial hosting service. Rent some cages and contract for reserving contiguous cages.

    If you don't like the commercial hosting service here are the things I did to build out a computer room.

    Power: Contract with a commercial electrician to get many more 20-amp drops. The electrical contractor will know how to deal with the owners of your building to arrange the additional circuits. For most two-processor Intel boxes you can estimate 3 amps per box.

    You can calculate the required volt-amps of your UPSes with this approximation: UPS volt-amps = volts * amps * 0.7. A computer's real volt-amps are less than nameplate volts times amps, due to complex impedance. Disk arrays are closer to a 1.0 scaling factor. Don't skimp on power for disk arrays.
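
    A quick sketch of that rule of thumb in Python (the nameplate amps and box counts below are made-up examples; the 0.7 factor for servers and ~1.0 for disk arrays are the figures above):

      # UPS sizing per the approximation above:
      #   required VA ~= nameplate volts * nameplate amps * scaling factor
      # (~0.7 for typical servers, ~1.0 for disk arrays).
      def required_va(loads):
          """loads: list of (volts, nameplate_amps, scaling_factor) tuples."""
          return sum(v * a * k for v, a, k in loads)

      loads = [(120, 3.0, 0.7)] * 12 + [(120, 6.0, 1.0)]  # 12 servers + 1 disk array (assumed)
      print(f"approx. UPS load: {required_va(loads):.0f} VA")  # ~3744 VA; leave headroom on top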

    Get rack-mounted UPSes spec'ed out for the hardware connected to them. Don't skimp here either.

    Cooling: You can purchase "portable" air conditioners and put them in your computer room. They will drop the excess heat into your office ceilings, assuming you are in one of those buildings with pop-up ceiling tiles. Office buildings recycle heat this way, so it is OK. Find out if your building turns off A/C on the weekends and nights. I was at a place that did that, and it sucked working weekends and it sucked worse for our computers. If they do cut the A/C on weekends, then you will need more BTUs of cooling from your portable air conditioners.

    If you are really going to build a datacenter, contract with an appropriate architecture firm. In my mind, a "datacenter" is a basement or whole building with full on-site diesel power generators and raised floors or overhead wire guides. That is probably not required for up to 100 hosts. Over 100 hosts is where it might be a good idea.

    Did I mention that commercial hosting service? You may grow out of your office space with employees and want to move. A commercial hosting service provides far greater quality computer and network capacity, and they don't tie you down too much.
    • Rackmounted UPS systems are OK, but going overboard with them doesn't help things. They need maintenance, batteries only last a few years, and most importantly, they are fire hazards.

      Using rule-of-thumb numbers, for 50 servers I would guess a 10-15 kVA UPS. Generally, 15 minutes of battery is a good number, but if you don't have a generator backing it up, all it will give you is an orderly shutdown. That may be good enough for your needs... much of it depends on how good the utility is.

      15 kVA is
  • Server Room Design (Score:3, Informative)

    by aaarrrgggh ( 9205 ) on Monday October 20, 2003 @09:34PM (#7266191)
    The first thing to do is hire someone that knows what they are doing. You are talking about a fairly small computer room-- they are often the ones with the most problems.

    Some quick pointers:
    -A single rack can run at about 2 kW with overhead air conditioning. Underfloor AC will get you closer to 3 kW. Much more than that, and you get in trouble. If you guess a real demand of 150 W/server, you can fit about twelve servers in a single cabinet before you start to get into trouble. Plan for 5 kW per rack on your UPS system and distribution - 2x20A outlets or 1x30A, 208V per cabinet (non-redundant)-- double for redundant cords. (A rough back-of-the-envelope version of these numbers is sketched after this list.)
    -The back of the racks should be hot. That isn't a problem, in and of itself. Good data center design is based on hot and cold aisles for just this reason. To see if you have a problem with the air conditioning, check and see what the return air temperature is-- if that is too low (close to the cold aisle temperature), you are going to get stuck.
    -If the backs of the racks are hot, make sure you have blank covers over all the open spaces on the racks. That keeps the hot air from mixing with the cold air on the front.
    -If you have raised floors with AC, try putting a tile or two on the hot aisle to induce flow and make it more comfortable. That should help some of the hot air get back to the AC units.
    -Have an engineer look at it. If you can, hire someone that specializes in data center design. Plan for at least $1,000/day of their time, $2k minimum-- just for looking at it and giving you a report. It's money well spent! You can bolt on a number of fixes for a problem, but it won't fix the root cause. Maybe that is good enough.
    -Be careful of the breadbox UPS vendors. They want to bypass the engineers and the contractors. They don't always tell you what you need to know.
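
    A rough back-of-the-envelope version of the per-rack numbers above, using only the figures already given (2 kW/3 kW rack ceilings, ~150 W real demand per server):

      # How many servers fit in one cabinet before hitting the cooling/power ceiling?
      def servers_per_rack(rack_limit_w, per_server_w=150):
          return rack_limit_w // per_server_w

      for label, limit_w in [("overhead AC", 2000), ("underfloor AC", 3000)]:
          print(f"{label}: about {servers_per_rack(limit_w)} servers per cabinet")
      # overhead AC: about 13, underfloor AC: about 20 -- so "about twelve" per
      # 2 kW cabinet leaves a little margin.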

    {shameless plug}I work for a company called Syska [onlinenvironments.com]-- there's plenty of other companies that do this type of work though. {/shameless plug} Find someone close to you that can help.
  • The datacenter at my last job was about two thousand square feet and had on the order of 100 servers.

    Some things that stood out for me (I'm just a programmer, so this might be obvious to you):

    One rack was dedicated to what amounted to a switchboard. All the networking stuff was there. The wires running into the datacenter terminated in one set of ports (one pc, one port). These were then individually connected to a switch or hub using standard cabling. The servers in the room were wired the same way
  • by digitect ( 217483 ) <digitectNO@SPAMdancingpaper.com> on Monday October 20, 2003 @10:17PM (#7266435)

    You need help. (With an architect, I used to do this stuff for the "largest router company in the world".)

    From an architectural perspective, don't underestimate the complexity of space planning. Equipment access, emergency egress, and growth of all engineering and supporting systems may put you at a very different place than you might imagine if you consider only your direct server capacity. I'm sure every geek around here would like to think they can solve most engineering type problems with a little extra effort, but building design has more than a few gotchas you don't want to miss.

    On the building engineering side, the general trend is toward higher and higher densities. Ten years ago, one might have projected that data centers would be getting exponentially larger, but the increasing density of electronic components keeps that growth more reasonable. However, density of equipment has a nasty side effect in that it pushes HVAC, power, UPS, and structural limits far beyond what your average spec office building is designed for. I know from experience that increasing structural floor load capacities from 80 psf to 150 psf is eyebrow-raisingly expensive with an operating data center!

    Don't make dangerous mistakes. Beyond the expense, embarrassment, and possible job loss, you could create a serious life-safety problem for yourselves or those working around you. Obviously four racks isn't exactly a major data center, but if that triples in the middle of a low-load floor bay (or if they're already some mondo racks), you might be closer to floor capacity than you realize. Sounds like you're beyond UPS, power, and HVAC load now--hire an architect with an engineer in tow for a few hours ($400-ish) to advise you. (Or mail me with your geographical location if you need recommendations. ;)

    • As an engineer, I take exception to hiring an architect first. Find someone that has enough of an architectural background to know when you might need two exits, but who focuses more on HVAC and electrical design-- someone that really has a good idea of the underlying engineering. Unfortunately, architects like being in charge all the time, and in specific cases like this they might overshadow an engineer's advice.

      One example would be the floor loading stated by the parent. Generally, the only things that a
      • Unfortunately, architects like being in charge all the time, and in specific cases like this they might overshadow an engineer's advice.

        Heh, the goal is coordination of all concerns. Feel free to use any professional that you feel comfortable will be competent throughout the process.

        One example would be the floor loading stated by the parent. Generally, the only things that are a concern in an office building are the UPS system and the batteries.

        Perhaps in a measly telecom room. A real data

  • ... but there are some other things to be said.

    First off, you don't tell us a lot about your business. Do you have well-stated service level agreements? If not, get the hell off Slashdot and do SLAs... you've got no place to stand without them.

    Second, once you know what your service level commitments really are, ask yourself "do I have half a clue what the fuck I'm doing?" If the honest answer is "no", do what has been suggested and move your services off to a hosted-service firm. (I have heard good
  • Chapter 17 of the ever-popular Practice of System and Network Administration (ISBN 0-201-70271-1) by Limoncelli and Hogan gives a fairly broad overview of some of the things to consider when building a data center. Everything from choosing a site, to climate control, to wiring and cable management is covered.
    Go find it at your local computer bookstore and read the aforementioned chapter.
    Better yet, buy the book. If you're like many of us and have had system adminship thrust upon you, the rest of the book
  • Co-locating your servers at an established datacenter will give you (assuming you choose your vendor carefully) a datacenter that is far nicer than you ever could have built yourself. They will have had experts design a good HVAC system and redundant networks to boot.

    It sounds like a bit more research may have been in order before diving into this project. Your power requirement estimates seem WAY off to me. We generally have at least two 20-amp circuits feeding each rack. The power coming to each rack

  • Here's what we do. We're on like year 3 of a 10 year contract with Exodus to host datacenters for us, but we don't use them anymore. We just pay them 6 figures a year for nothing. Or duck their bills.

    How about our fancy machine room with the nice raised floor, plenty of power, genset, cooling, and all that jazz? It's on the half of the floor we had to sublet to a real company with 100 times our revenue.

    Our servers? Piled in a reclaimed conference room with big box fans on the floor blowing into the

  • Go some place that has done it already and talk with their engineers about it. Your ISP is a great starting place as you already have a relationship with them and they are usually happy to show off their setup to their customers.

    Dell has enterprise consulting that can also help you set up your datacenter. Ain't free, though.

    Good luck.

    -m
  • I hate to say it, but for $15,000 you could have paid for 20+ months of full rack rental at a datacenter. At least that's the cost here in Austin, TX. Now of course, that doesn't include bandwidth, which is another $350/month for 2 Mb/sec. So maybe we are talking 15 months. But I think it's going to cost you a lot more money to build a REAL datacenter (on a small scale) that can even match a pre-existing datacenter.
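
    Roughly the arithmetic behind that comparison, using the Austin numbers above (the ~$750/month rack rate is inferred from "$15,000 buys 20+ months" and is my assumption, not a quoted price):

      # How long $15,000 lasts at a colo once bandwidth is included.
      budget = 15000.0
      rack_per_month = 15000.0 / 20   # ~$750/mo, inferred from "20+ months" (assumption)
      bandwidth_per_month = 350.0     # 2 Mb/sec, per the comment above

      months = budget / (rack_per_month + bandwidth_per_month)
      print(f"~{months:.0f} months of rack + bandwidth")  # ~14 months, i.e. "maybe 15"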
  • A bunch of random comments.

    1. If you are doing this for internet based servers, you might well be better off colocating your servers in a commercial data center. The cost of doing this is a lot less than it used to be and is likely less than doing it yourself. This might not be the solution for all of your systems, but may be appropriate for many.

    2. You will need a lot more power than you think. Most "single/dual" cpu servers are 3-4 amps each. If you load up a rack with 1U servers, this can easily
  • I've seen all sorts of "datacenters"... from a boy's room in a converted elementary school to two-story behemoths in high-rises.

    There are plenty of problems to be had, even when there is a professional staff of electricians, carpenters and plumbers. (Yes, plumbers... one place had a water-cooled mainframe)

    There is also the issue of floor load... at one building they discovered that placing three IBM Sharks in the middle of the floor caused the cantilevered floor to "jump"... we actually had to borrow 1/
  • Ask your local utility if they can help. I know of a couple of data centers (all on one utility, so I don't know if others do this, but they should) who pay a reduced rate for the power to their data center. In return, on high-demand days the utility sends a command to their UPS (and in turn the generator) to switch off of mains power onto the backup. Not only do you get the benefit of lower power costs, but it tests your backup under a real situation.

    Look at the power requirements of your servers, vs how muc

  • Keep in mind that 1 watt is 1 amp at 1 volt.

    20-amp circuits at 120V will power about 5 or 6 400-watt power supplies reliably. If you have 50 machines (assuming 400-watt power supplies), I would go with no less than 10 20-amp circuits. Get a certified electrician to wire up the power panels. He will know how many breakers you need and should be able to do the full job for around or under $5,000.
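
    A minimal sketch of that circuit math, assuming the 400 W supplies actually pull their full rating and keeping continuous load to about 80% of the breaker (both assumptions are mine; real supplies usually draw less, which is where "5 or 6" comes from):

      import math

      # How many 20 A / 120 V circuits for fifty 400 W supplies?
      CIRCUIT_AMPS, VOLTS, DERATE = 20, 120, 0.8       # keep continuous load to ~80% of rating
      usable_watts = CIRCUIT_AMPS * VOLTS * DERATE     # 1920 W per circuit
      supplies_per_circuit = int(usable_watts // 400)  # 4 at worst-case full draw
      print(supplies_per_circuit, "per circuit ->",
            math.ceil(50 / supplies_per_circuit), "circuits for 50 machines")
      # 4 per circuit -> 13 circuits worst case; "no less than 10" assumes lighter draw.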

    Battery backup and generators are a similar problem. Keep track of how many watts you are running and buy a surplu
  • There's this little aluminum products manufacturing facility in rural Nebraska that makes great stuff for data centers, and the cost will pleasantly surprise you: www.mtpartners.com

    Also, put the cables under your tiled floor, not in overhead racks. Typical overhead racks are open-air and human-accessible in a way in which tiled-in areas are not.

    Finally, have someone skilled in the arts of physical security take a look at your building, entrance, etc. For example, I've seen people build really nice data c
  • Make sure that you have a good offsite backup system. I would suggest a professional outfit like Iron Mountain [ironmountain.com] (formerly Arcus [bizjournals.com]). Be sure to run through some sort of disaster recovery simulation at least annually. The worst thing is having to re-build a datacenter from nothing after a disaster. You may be able to re-build the center, but the data is the really hard part.
    1. Never, ever overload a UPS. Bear in mind, in fact, that as UPSen age, the alternate supplies (typically gel batteries) degrade in performance, and so "100%" when new becomes "110%" when about three to four years old. The rule of thumb we use here is no more than 60% load on any UPS.
    2. Make everything - everything critical to the mission, that is (power to building, UPSen, generators, network kit, servers, disks) - fully redundant. Much like this post, in fact. :)
    3. Make sure that redundant PSUs in computers
    • Always, always assume that the one thing you can't or won't make redundant will fail. If hardware doesn't pop, then you can absolutely guarantee that the software or firmware will make it. :)

      Never take manufacturer's reliability figures with less than an entire salt mine full of salt. Systems always fail more often than quoted.

      Never underestimate the power of full backups, and preferably a standing image in GHOST format or similar of each critical system. Being able to restore a machine without havin

  • On a positive note, we have a really nice overhead wire rack that's looking good, and all of our wiring is really tight-looking; all the colors match, all the cables are labeled, they are all the right length, etc.

    That's great, kid. No offense but you are way out of your depth. Call IBM and get them to build it for you, or at least send over a consultant to advise.

  • If you have to ask slashdot for advice on building a mission critical data center, I think that you should resign and hand over to someone else.
  • Raised floors are damn useful. By all means do all your cabling overhead, but if you can connect the A/C to pump air under the floor it flows nicely up the inside of the racks and out the top. This means that people can actually work in the server room without freezing, yet it still keeps the servers cool.

    Environmental monitoring is a good idea, for when something stops working at 6:00pm on Christmas Eve.

    Fire extinguishing is important. The sky is the limit for what you can spend, but at least make su

  • This isn't something you can just throw together.
    You might find these guys helpful: http://www.lampertz.com/DuS.htm (not affiliated, just had one installed).
    They put in secure datasafes and modular IT rooms that are literally blastproof, watertight, and fireproof. Not cheap, but datacenters aren't - which you are going to learn as soon as you try to create one.
  • I would make sure you have a few flashlights and maybe one of those lights you can clip onto things. And make sure the overhead lights (or at least some of them) are on redundant power as well. If the power goes, it would be really nice to be able to move around the room and see things if you have to. It's always easier to work on stuff if you can see well.

    Some of those light sticks that campers use might be good too.
  • Speaking as someone with a degree in civil engineering who has been doing system administration for the last 8 years, I'd say that first, you should get informed about all of the issues and variables you're not even thinking about [e.g., chilled air and warm air return issues, proper sizing of power systems, etc.]. So, pick up a copy of Enterprise Data Center Design and Methodology [sun.com], and read it.

    Then, once you know what you need to be looking for, hire a professional -- odds are, there's stuff they'll
  • Download their Data Center Site Planning Guide (PDF) at the following URL: http://docs.sun.com/db/doc/805-5863-13?a=load
  • I've done several of these. For $185/hr, I'll help you do yours :)

    That said, even on this small a scale, you should have an environmental and electrical engineer involved already. More than likely they're going to be pushing you towards a facility UPS, dedicated AC, and a diesel/natural gas generator.

    My best advice, if you can't make these calculations yourself, is to LISTEN TO THEM! Let them worry about power factors, UPS load, power-line harmonics, thermal outputs, etc. That's what they get paid to do.

    Al
  • If a server has a 500-watt power supply, you are drawing about 4 amps of current.

    500 watts / 120 volts ~= 4 amps/server

    Now if you have 4 servers...

    4 amps/server * 4 = 16 amps

    Give a little extra to the UPS, say 1-2 amps, for a total of 18 amps, so yeah, I would say you are at 90%. Oops!

    You should have a minimum of a 30-amp circuit for EACH rack, or better yet, dual 30-amp circuits, and a UPS for each rack.

    $15k should have been able to cover all of that.
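
    The same check written out (the 500 W supplies, four servers, 20 A / 120 V circuit, and 1-2 A of UPS overhead are all the figures above):

      # Per-circuit load: amps per server from PSU wattage, plus UPS overhead,
      # as a fraction of a single 20 A / 120 V circuit.
      VOLTS, CIRCUIT_AMPS = 120, 20
      amps_per_server = 500 / VOLTS            # ~4.2 A per 500 W supply
      total = 4 * amps_per_server + 1.5        # four servers + ~1-2 A for the UPS itself
      print(f"{total:.1f} A of {CIRCUIT_AMPS} A -> {total / CIRCUIT_AMPS:.0%} loaded")
      # ~18.2 A -> ~91% of the breaker, which matches the "at 90%... Oops!" above.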

    You also sound like you have serious heat issues, and as you probably know
  • Seattle has a good abandoned data center. Diesel generators, huge battery banks, lots of backbone, earthquake-braced cabinets, biometric scanners, top of the line and mostly unused. The company I used to work for laid us all off and the whole thing is just sitting there. So you can hire a cheap crew - I know a few - and we will have you up and running in no time. Just don't expect to fill up the unused space with our old sales team, and don't run the company like fools.
