How Would You Build a Datacenter?
InOverOurHeads asks: "Some of my coworkers and I are building a new datacenter for our company. We're a growing startup; we have about 50 servers now and expect to have about twice that before too long, so building to grow is key. Now that we're about $15,000 into the project, it is looking and feeling more and more like we're in way over our heads. We have 4 racks wired to a single 20-amp circuit. Our UPS is at 90% load and we only have 10 machines on it. We have all of our cooling on one side of the server room, where it is about 60 degrees; the other side of the room, where the servers exhaust, is about 30 degrees warmer, so it appears that we have some convection problems with only a handful of the machines on right now. We're realizing that there is a lot more to building a datacenter than racking servers. What else have we missed?"
"On the positive note, we have a really nice overhead wire rack, that's looking good and all of our wiring is really tight looking; all the colors match, all the cables are labeled, they are all the right length, etc.
Are there any guides or how-tos on this? Since we're going to bite the bullet and tell the boss that we messed up, we want to try to correctly measure the rest of the work involved in making it work. What happens when the UPS is at 100% load, and how do Dell servers react to being underpowered?"
Hmph (Score:5, Funny)
But maybe I'm just bitter.
Re:Hmph (Score:1)
Re:suggestion (Score:1)
Hire an expert (Score:1)
Re:Hire an expert (Score:2)
1. Hire an expert and do what s/he tells you.
2. Co-locate with a reputable site with excellent SLAs.
3. Continue on your obvious death spiral, and spend the cash on a great big party when you get fired (I give you about 6 months tops).
Buy an existing one. (Score:3, Informative)
The best thing: they're oversized, so you can sell the extra space as colo.
Re:Buy an existing one. (Score:2)
Sell it to whom? Currently, supply far exceeds demand which is why those datacenters are for sale for a third or a half of what they cost to build.
I think that at this point, setting up in-house datacenters is a waste of resources. The glut of available space makes it easy to find deals on space. Admittedly, there are situations where things must be kept in-house but this article didn't specify if that was a concern.
Department of Redundancy Department (Score:5, Informative)
Are you in an earthquake zone (the Bay Area)? If so, make sure 1) the building is earthquake retrofitted, and 2) the racks are all bolted to the walls such that a little shake-up doesn't turn into a shake-down.
Make sure you are getting enough power to the building. Have generators in case power goes out. The UPS should only keep things going long enough for the generators to kick in.
Off-site backups, of course. It is hard to beat the bandwidth of a station wagon full of storage. Daily backups should be moved out of the building; I'd suggest on FireWire/SCSI hot-swappable hard drives, but there are many ways to solve this problem. Longer-term backups should be geographically out of disaster range (east coast to west coast, ideally).
Did I mention redundancy? Make sure you have a duplicate of everything.
OK, you have it built? Now test it. Kill the power and see if the UPSes can hold it long enough for the generators to kick in. Now do it again, but pull out one generator.
Get one of those devices that allows you to remotely power cycle your machines as well, in case one locks up.
Have a back door (i.e., a dial-up) to get into your data center unless you are going to have it manned 24/7. This will keep you from coming in at 2 AM when a router blows.
That's all off the top of my head. If I am wrong, please point out where, as the alternative viewpoints will be quite helpful.
Re:Department of Redundancy Department (Score:2)
Well, how about cost to start with. It's a little difficult because the original poster didn't really mention budget. But if "$15,000 in" denotes a large portion of the budget, maybe things like firewire backups are going to be out of the question. I suppose if you're worried about all of the servers being totally destroyed it's worth doing, but is it really necessary to have off site backups?
I agree having a gen
Re:Department of Redundancy Department (Score:1)
It's been said that there are two types of people in the world: those who believe in off-site backups and those who will.
Re:Department of Redundancy Department tsarkon (Score:2)
If it's a pissing contest you want, you've got it.
Yeah, as I said in another post, I probably have about $15,000 in hardware sitting at my desk right now. I stayed late at work today installing Windows 2003 on a system but it took a while to format the 22 attached hard drives.
Frankly I wouldn't know if IDE or non-ecc memory is ok in servers because I've never really had to bother with it. I've got at least
Re:Department of Redundancy Department (Score:1)
I suppose if you're worried about all of the servers being totally destroyed it's worth doing, but is it really necessary to have off site backups?
It is absolutely necessary if you want to protect your data. There are just too many things that can go wrong, and offsite storage isn't *that* expensive.
The ideal is to have your offsite storage at a company that does just such things, but even an employee's home will do.
On the other hand, you can ask the question of what happens if the building is destroyed a
Re:Department of Redundancy Department (Score:2)
That's a great point and I totally agree. The problem I have is that it sounds like the original poster is pretty short on cash. Something about thinking $15,000 was a lot of money. I work for Intel, and I probably have about that amount of hardware at my desk right now, so I'm maybe a bit jaded (BTW I'm not trying to brag, a tray of processors and memory isn't really that exciting).
The best analogy I ca
Re:Department of Redundancy Department (Score:3, Informative)
Although bolting the racks to the wall or floor will prevent them from falling over, it'll transfer the shock of any movement of the building right into the rack and your equipment. You should look at installing some form of isolation platform [worksafetech.com] instead.
Re:Department of Redundancy Department (Score:1)
Re:Department of Redundancy Department (Score:2, Interesting)
Re:Department of Redundancy Department (Score:2)
Blade Servers [google.com]
Re:Department of Redundancy Department (Score:2)
It's not a panacea. If you want more processor power, you pay for it with more power consumption and air conditioning. Usually in colocation facilities you get 2x20-amp outlets. If you use blade servers, you end up filling only a third to half of the rack before your power consumption reaches 100%.
If you do it yourself (in the company's office), the costs skyrocket because of UPSes and additional air conditioning.
If tasks are not CPU-intensive you may be better off with SPARC-based blades or Int
Quickie... (Score:3, Insightful)
Sounds like you already know this: plan on secondary backup and A/C, and make sure you have terminal servers. (You're using Unix, right?)
Don't need terminal servers..... (Score:1)
More Infrastructure! (Score:3, Interesting)
You'll need at least one 20-amp circuit per rack, in my opinion, if you go with standard 110V battery backups. For that many servers, though, you might be better off going with 220V service and the high-voltage battery backups that APC and others offer.
Our old server room started small with a couple of servers and quickly outgrew the electrical service and A/C. We heated our whole office in the winter with just the servers! Maintenance ran several new 20-amp circuits for us until we filled up the breaker box.
When we moved a couple of years ago, I made sure to get the new room right before we moved any equipment. We have central A/C fed by several outside units plus a very large auxiliary unit just for the server room. 20-amp circuits are run every few feet on separate breakers. I don't know what type of servers you are using, but large multi-processor boxes with redundant fans, RAID, etc. use LOTS of power. We use mostly Compaq DL380s; two of them will draw 50% off an APC 1400R battery backup. For extended runtimes, we made sure not to overload the battery backups, so only two servers per backup with no more than two backups per 20-amp circuit. It's slightly overkill, but I got very frustrated in our old location and resolved to never blow breakers or kill battery backups this time.
Since you're just getting started, it will pay off big time in the long run to get everything set up right before you start loading in servers. It makes things so much easier to just plug in without having to call maintenance or a contractor to upgrade services.
Jason
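For anyone wanting to sanity-check circuit loading like this, the rule-of-thumb arithmetic is easy to script. A sketch in Python; the 80% continuous-load derating is standard electrical practice, but the 350 W per-server draw is my assumed figure, not one from the post:

```python
# Rough branch-circuit loading check. All numbers are illustrative
# assumptions, not measured values from any particular server.

CIRCUIT_VOLTS = 120
CIRCUIT_AMPS = 20
DERATE = 0.80  # common practice: load a breaker to only 80% continuously

def usable_watts():
    """Continuous wattage one derated branch circuit can safely supply."""
    return CIRCUIT_VOLTS * CIRCUIT_AMPS * DERATE

def servers_per_circuit(server_watts):
    """How many servers of a given draw fit on one derated circuit."""
    return int(usable_watts() // server_watts)

print(usable_watts())            # 1920.0 W usable per 20-amp circuit
print(servers_per_circuit(350))  # 5 servers at an assumed ~350 W draw each
```

Numbers like these explain why one 20-amp circuit for four racks is so far off the mark.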
Dedicated Air Conditioning (Score:2)
We supplement it with several large industrial size fans for increasing air circulation to the racks.
Make sure there is plenty of space behind your racks and enough access points to that rear area to allow air to circulate.
More UPS units... two per rack at leas
Re:Dedicated Air Conditioning (Score:2)
Use an existing datacenter (Score:2)
They have already done all the things you have done, all the things you are realizing you forgot, and all the things you will not find out about until everything fails.
You can now move on with your core business, whatever it may be.
Re:Use an existing datacenter (Score:2)
There are some multi-million dollar data centers out there sitting totally empty and just begging to be bought. Ask commercial real estate agents in any large city - companies like NTT/Verio [verio.com] fully built out quite a few, then dumped them all on the market when the bubble burst.
Re:Use an existing datacenter (Score:1)
1. Start business as discount virtual host/datacenter.
2. Ask Slashdot "How Would You Build a Datacenter?"
3. Be replaced by Cambodians who ultimately turn a PROFIT.
Re:Use an existing datacenter (Score:1)
Re:Use an existing datacenter (Score:2)
Ask your salesman about them. Ones with people I personally knew who got laid off: the Pittock colo and a colo in a suburb of Seattle.
Or, better yet, try using Google for "colo"+"verio":
"Verio, which last year closed down 25 data centers around the United States, leaving 46 open at the end of 2001, is now planning to shut down all but 10 of its data centers, reducing the count further from the 22 now open. The company also is planning to lay off 650 of its 2,600 wor [bizjournals.com]
Re:Use an existing datacenter (Score:1)
If all your users are in the building, it may be cheaper to build. If you are serving remote locations, a datacenter with 4 racks, 10 20-amp circuits, and 10Mb internet access can run from $1000 to $4000 per month.
HINT: drive a hard bargain on the bandwidth... if you are only using 1Mb, don't let them charge you for 10. Do not back down... most colos are in dire need of custome
A "Data Center" or a "data center?" (Score:3, Interesting)
Have cool air coming up from the floor into each machine (and it'd be freezing)
Have a diesel generator with at least a few days' worth of fuel, and contingency plans for obtaining more fuel. It should be feasible to run on generator indefinitely in case of a major power outage.
Redundant data centers. Have data mirrored between them for complete redundancy in case of any disaster striking one of the locations.
Obviously I am being facetious. If you had a budget and the necessity to do something on that scale, you wouldn't be asking /. However, it would be worthwhile to specify the degree of importance and the budget of this project.
The old adage still rings true... (Score:3, Funny)
"I don't care whether it works boys, just make it look good for the investors, ok?"
*shudder*
-Adam
Jumpin' Jesus on a pogo stick (Score:4, Insightful)
You obviously didn't talk to an HVAC engineer, because they would have set you up properly from the start, getting accurate heat output ratings for all the present and planned equipment (3.413 BTU per watt, they tell me). Then, looking at the placement of the racks, they would have had the cold air pumped in at the right places. This may be correctable by tossing a box fan or two in the room to move the air. Not fired yet.
You also didn't consult with an electrician or electrical engineer, because they would have given you some very sound advice on your load and UPS needs. You are so woefully underpowered and under-UPSed, your servers will lose power long before they have a chance to properly shut down. By the way, do you mean to say that you have 50 servers and only 10 are on UPS? FIRED!
"But look on the bright side, Mr. Boss, the cabling is all neat and pretty!" FIRED for spending more time on the cabling than you did planning this disaster!
Do the tech industry a favor, and go get a job shoveling shit at the zoo.
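For what it's worth, the 3.413 BTU/hr-per-watt figure the HVAC folks quote turns the cooling estimate into a couple of lines of arithmetic. A sketch; the 300 W average server draw below is an assumed figure, not one from the post:

```python
# Heat-load estimate from the 3.413 BTU/hr-per-watt conversion quoted above.
BTU_PER_WATT = 3.413
BTU_PER_TON = 12_000  # one "ton" of cooling removes 12,000 BTU/hr

def heat_btu_per_hour(watts):
    return watts * BTU_PER_WATT

def cooling_tons(watts):
    return heat_btu_per_hour(watts) / BTU_PER_TON

# 50 servers at an assumed average draw of 300 W each
load_watts = 50 * 300
print(round(heat_btu_per_hour(load_watts)))  # 51195 BTU/hr
print(round(cooling_tons(load_watts), 1))    # 4.3 tons of A/C, before lights or people
```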
Re:Jumpin' Jesus on a pogo stick (Score:1)
If you want my opinion, you need to get someone on board who's done this before, because it sounds like you're spending too much time on the wrong things.
It sounds like this datacenter is going to be critical to your business, such that if it goes down or fails in any way it will really affect your bottom line. Why, then, are you trying to be he-man and do everything
The techs were fired/offshored. (Score:2, Funny)
Recipe for a computer room (Score:4, Informative)
Don't build a computer room or datacenter. Find a commercial hosting service. Rent some cages and contract for reserving contiguous cages.
If you don't like the commercial hosting service here are the things I did to build out a computer room.
Power: Contract with a commercial electrician to get many more 20-amp drops. The electrical contractor will know how to deal with the owners of your building to arrange the additional circuits. For most two-processor Intel boxes you can estimate 3 amps per box.
You can calculate the required volt-amps of your UPSs with this approximation UPSs volt-amps = Volts * AMPs *
Get rackmounted UPSs spec'ed out for the hardware connected to them. Don't skimp out here either.
Cooling: You can purchase "portable" air conditioners and put them in your computer room. They will drop the excess heat into your office ceilings, assuming you are in one of those buildings with drop-in ceiling tiles. Office buildings recycle heat this way, so it is OK. Find out if your building turns off A/C on weekends and nights. I was at a place that did that, and it sucked working weekends and it sucked worse for our computers. If they do cut A/C on the weekends, then you will need more BTU of cooling from your portable air conditioning.
If you are really going to build a datacenter, contract with an appropriate architecture firm. In my mind, a "datacenter" is a basement or whole building with full on-site diesel power generators and raised floors or overhead wire guides. That is probably not something required for up to 100 hosts. Over 100 hosts is where that might be a good idea.
Did I mention that commercial hosting service? You may grow out of your office space with employees and want to move. A commercial hosting service provides far greater quality computer and network capacity, and they don't tie you down too much.
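The volt-amps approximation above got cut off in the post; one common back-of-the-envelope version simply sums volts times amps per device, then adds headroom. A sketch of that version; the 25% headroom factor is my assumption, not a figure from the post:

```python
# Sketch of a simple UPS sizing approximation: sum volts x amps per
# device, then pad. The 25% headroom factor is an assumed safety margin.

def required_va(devices, headroom=1.25):
    """devices: list of (volts, amps) pairs; returns a minimum UPS VA rating."""
    return sum(volts * amps for volts, amps in devices) * headroom

# Twelve two-processor boxes at the 3 amps per box estimated above, on 120 V
rack = [(120, 3)] * 12
print(required_va(rack))  # 5400.0 VA -> shop for a ~6 kVA unit
```

Note that VA is not the same as watts; real UPS sizing should account for the power factor of the attached supplies, which is why getting the UPSes spec'ed out properly matters.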
Rackmount UPS (Score:2)
Using rule-of-thumb numbers, for 50 servers I would guess between a 10-15 kVA UPS. Generally, 15 minutes of battery is a good number, but if you don't have a generator backing it up, all it will give you is orderly shutdown. That may be good enough for your needs... much of it depends on how good the utility is.
15 kVA is
Server Room Design (Score:3, Informative)
Some quick pointers:
-A single rack can be at about 2 kW with overhead air conditioning. Underfloor AC will get you closer to 3 kW. Much more than that, and you get in trouble. If you guess a real demand of 150W/server, you can fit about twelve servers in a single cabinet before you start to get into trouble. Plan for 5kW per rack on your UPS system and distribution - 2x20A outlets or 1x30A, 208V per cabinet (non-redundant)-- double for redundant cords.
-The back of the racks should be hot. That isn't a problem, in and of itself. Good data center design is based on hot and cold aisles for just this reason. To see if you have a problem with the air conditioning, check and see what the return air temperature is-- if that is too low (close to the cold aisle temperature), you are going to get stuck.
-If the backs of the racks are hot, make sure you have blank covers over all the open spaces on the racks. That keeps the hot air from mixing with the cold air on the front.
-If you have raised floors with AC, try putting a tile or two on the hot aisle to induce flow and make it more comfortable. That should help some of the hot air get back to the AC units.
-Have an engineer look at it. If you can, hire someone that specializes in data center design. Plan for at least $1,000/day of their time, $2k minimum-- just for looking at it and giving you a report. It's money well spent! You can bolt on a number of fixes for a problem, but it won't fix the root cause. Maybe that is good enough.
-Be careful of the breadbox UPS vendors. They want to bypass the engineers and the contractors. They don't always tell you what you need to know.
{shameless plug}I work for a company called Syska [onlinenvironments.com]-- there's plenty of other companies that do this type of work though. {/shameless plug} Find someone close to you that can help.
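The per-rack numbers above reduce to a quick density check. A sketch using the figures from the post (~2 kW per rack with overhead A/C, ~3 kW with underfloor air, and an assumed real draw of 150 W per server):

```python
# Rack-density sanity check using the per-rack cooling limits quoted above.

RACK_LIMIT_WATTS = {"overhead_ac": 2000, "underfloor_ac": 3000}

def servers_per_rack(cooling, server_watts=150):
    """Whole servers that fit under a given cooling regime's power limit."""
    return RACK_LIMIT_WATTS[cooling] // server_watts

print(servers_per_rack("overhead_ac"))    # 13 -- about the dozen quoted above
print(servers_per_rack("underfloor_ac"))  # 20 with underfloor air
```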
switchboard and labcoats (Score:2, Interesting)
Some things that stood out for me were (I'm just a programmer, so this might be obvious to you):
One rack was dedicated to what amounted to a switchboard. All the networking gear was there. The wires running into the datacenter terminated in one set of ports (one PC, one port). These were then individually connected to a switch or hub using standard cabling. The servers in the room were wired the same way.
Ask Slashdot: How do I do my job? (Score:2)
Get professional help! (Score:4, Insightful)
You need help. (With an architect, I used to do this stuff for the "largest router company in the world".)
From an architectural perspective, don't underestimate the complexity of space planning. Equipment access, emergency egress, and growth of all engineering and supporting systems may put you at a very different place than you might imagine if you consider only your direct server capacity. I'm sure every geek around here would like to think they can solve most engineering type problems with a little extra effort, but building design has more than a few gotchas you don't want to miss.
On the building engineering side, the general trend is for higher and higher densities. Ten years ago, one might have projected that data centers would be getting exponentially larger, but the increasing density of electronic components keeps that growth more reasonable. However, density of equipment has a nasty side effect in that it pushes HVAC, power, UPS, and structural limits far beyond what your average spec office building is designed for. I know from experience that increasing structural floor load capacities from 80psf to 150psf is eyebrow-raising expensive with an operative data center!
Don't make dangerous mistakes. Beyond the expense, embarrassment, and possible job loss, you could create a serious life-safety problem for yourselves or those working around you. Obviously four racks of servers isn't exactly a major data center, but if that triples in the middle of a low-load floor bay (or if they're already some mondo racks), you might be closer to floor capacity than you realize. Sounds like you're beyond UPS, power, and HVAC load now--hire an architect with an engineer in tow for a few hours ($400-ish) to advise you. (Or mail me with your geographical location if you need recommendations. ;)
Re:Get professional help! (Score:2)
One example would be the floor loading stated by the parent. Generally, the only things that a
Re:Get professional help! (Score:2)
Heh, the goal is coordination of all concerns. Feel free to use any professional you feel comfortable will be competent throughout the process.
Perhaps in a measly telecom room. A real data
A lot of good points here (Score:2)
First off, you don't tell us a lot about your business. Do you have well-stated service level agreements? If not, get the hell off Slashdot and do SLAs.
Second, once you know what your service level commitments really are, ask yourself, "Do I have half a clue what the fuck I'm doing?" If the honest answer is "no," do what has been suggested and move your services off to a hosted-service firm. (I have heard good
consult the guide (Score:1)
Go find it at your local computer bookstore and read the aforementioned chapter.
Better yet, buy the book. If you're like many of us and have had system adminship thrust upon you, the rest of the book
Have you considered co-location?? (Score:2)
Co-locating your servers at an established datacenter will give you (assuming you choose your vendor carefully) a datacenter that is far nicer than you ever could have built yourself. They will have had experts design a good HVAC system and redundant networks to boot.
It sounds like a bit more research may have been in order before diving into this project. Your power requirement estimates seem WAY off to me. We generally have at least two 20-amp circuits feeding each rack. The power coming to each rack
What we do (Score:2)
How about our fancy machine room with the nice raised floor, plenty of power, genset, cooling, and all that jazz? It's on the half of the floor we had to sublet to a real company with 100 times our revenue.
Our servers? Piled in a reclaimed conference room with big box fans on the floor blowing into the
Re:What we do (Score:1)
Visit your ISP (Score:1)
Dell has enterprise consulting that can also help you setup your datacenter. Ain't free, though.
Good luck.
-m
Why reinvent the wheel? (Score:1)
Datacenter (Score:1)
1. If you are doing this for internet based servers, you might well be better off colocating your servers in a commercial data center. The cost of doing this is a lot less than it used to be and is likely less than doing it yourself. This might not be the solution for all of your systems, but may be appropriate for many.
2. You will need a lot more power than you think. Most "single/dual" cpu servers are 3-4 amps each. If you load up a rack with 1U servers, this can easily
Listen to everyone else... (Score:2)
There are plenty of problems to be had, even when there is a professional staff of electricians, carpenters, and plumbers. (Yes, plumbers... one place had a water-cooled mainframe.)
There is also the issue of floor load... at one building they discovered that placing three IBM Sharks in the middle of the floor caused the cantilevered floor to "jump"... we actually had to borrow 1/
Ask utilities (Score:2)
Ask your local utility if they can help. I know of a couple of data centers (all on one utility, so I don't know if others do this, but they should) that pay a reduced rate for the power to their data center. In return, on high-demand days the utility sends a command to their UPS (and in turn the generator) to switch off of mains power onto the backup. Not only do you get the benefit of lower power costs, but it tests your backup under a real situation.
Look at the power requirements of your servers vs. how muc
more power (Score:1)
20-amp circuits at 120V will power about 5 or 6 400-watt power supplies reliably. If you have 50 machines (assuming 400-watt power supplies), I would go with no less than 10 20-amp circuits. Get a certified electrician to wire up the power panels. He will know how many breakers you need and should be able to do the full job for around or under $5000.
Battery backup and generators are a similar problem. Keep track of how many watts you are running and buy a surplu
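That circuit count is easy to verify. A sketch assuming the conservative end of the estimate above (5 machines with 400 W supplies per 20-amp circuit):

```python
# Circuit count for the rule of thumb above: a 20 A / 120 V circuit
# reliably feeding about 5 machines with 400 W power supplies.
import math

def circuits_needed(machines, machines_per_circuit=5):
    """Round up: a partially filled circuit is still a circuit."""
    return math.ceil(machines / machines_per_circuit)

print(circuits_needed(50))  # 10 circuits, matching the post's recommendation
```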
relay racks and cabinets to keep you under budget (Score:1)
Also, put the cables under your tiled floor, not in overhead racks. Typical overhead racks are open-air and human-accessible in a way in which tiled-in areas are not.
Finally, have someone skilled in the arts of physical security take a look at your building, entrance, etc. For example, I've seen people build really nice data c
Don't forget good offsite backups (Score:2)
Steps to reliability. (Score:2)
Oh, and three other things... (Score:2)
Never take manufacturer's reliability figures with less than an entire salt mine full of salt. Systems always fail more often than quoted.
Never underestimate the power of full backups, and preferably a standing image in GHOST format or similar of each critical system. Being able to restore a machine without havin
Re:Steps to reliability. (Score:2)
Outsource it (Score:2)
That's great, kid. No offense but you are way out of your depth. Call IBM and get them to build it for you, or at least send over a consultant to advise.
resign (Score:2)
If you have to ask slashdot for advice on building a mission critical data center, I think that you should resign and hand over to someone else.
Raised floors and other nice to haves (Score:2)
Raised floors are damn useful. By all means do all your cabling overhead, but if you can connect the A/C to pump air under the floor it flows nicely up the inside of the racks and out the top. This means that people can actually work in the server room without freezing, yet it still keeps the servers cool.
Environmental monitoring is a good idea, for when something stops working at 6:00pm on Christmas Eve.
Fire extinguishing is important. The sky is the limit for what you can spend, but at least make su
Hire an expert (Score:2)
You might find these guys helpful: http://www.lampertz.com/DuS.htm (not affiliated, just had one installed).
They put in secure datasafes and modular IT rooms that are literally blastproof, watertight, and fireproof. Not cheap, but neither are datacenters - which you are going to learn as soon as you try to create one.
a few flash lights (Score:2)
Some of those light sticks that campers use might be good too.
Read a book...then get a professional (Score:2)
Then, once you know what you need to be looking for, hire a professional -- odds are, there's stuff they'll
Sun has some info you could use (Score:1)
If it's worth doing do it right (Score:2)
That said, even on this small a scale, you should have an environmental and electrical engineer involved already. More than likely they're going to be pushing you towards a facility UPS, Dedicated AC, and diesel/natural gas generator.
My best advice, if you can't make these calculations yourself, is to LISTEN TO THEM! Let them worry about power factors, UPS load, power-line harmonics, thermal outputs, etc. That's what they get paid to do.
Al
Ohm's law and your data center (Score:2)
500watts/120volts ~= 4amps/server
Now if you have 4 servers...
4amps/server * 4 = 16amps
Give a little extra to the UPS say 1-2amps for a total of 18amps, so yeah, I would say you are at 90%. Oops!
You should have a minimum of a 30amp circuit for EACH rack, or better yet, dual 30amp circuits, and a UPS for each rack.
$15k should have been able to cover all of that.
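The same arithmetic, scripted (all figures are the ones worked through above):

```python
# The UPS-load arithmetic above, restated: watts / volts = amps per server.
AMPS_PER_SERVER = round(500 / 120)  # ~4 amps per 500 W server, as estimated above

def ups_load_percent(servers, overhead_amps=2, circuit_amps=20):
    """Percent of a circuit (or similarly rated UPS) consumed by the servers."""
    return 100 * (servers * AMPS_PER_SERVER + overhead_amps) / circuit_amps

print(ups_load_percent(4))  # 90.0 -- the 90% load the poster reported
```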
You also sound like you have serious heat issues, and as you probably know
Don't re-invent, just buy an abandoned one... (Score:1)