Ask Slashdot: Little Boxes Around the Edge of the Data Center? 320
First time accepted submitter spaceyhackerlady writes "We're looking at some new development, and a big question mark is the little boxes around the edge of the data center — the NTP servers, the monitoring boxes, the stuff that supports and interfaces with the Big Iron that does the real work. The last time I visited a hosting farm I saw shelves of Mac Minis, but that was five years ago. What do people like now for their little support boxes?"
Little boxes (Score:5, Funny)
I make them with ticky tack.
Re: (Score:2, Flamebait)
You do realise no one outside of New Zealand will get that joke...
Re:Little boxes (Score:5, Informative)
SOLVED: Little Boxes (Score:5, Insightful)
Answer: VMware VMs.
Re:SOLVED: Little Boxes (Score:4, Insightful)
Re: (Score:3, Interesting)
Re: (Score:2, Troll)
You do realize that song is significantly older than Weeds, right?
Re:Little boxes (Score:5, Informative)
Re: (Score:3)
or old farts who remember Pete Seeger.
Re: (Score:2)
Do you have to be an old fart to remember Pete Seeger? That was my reference. I had forgotten that Weeds used it as its theme song, I'm not a fan.
Re: (Score:2)
Looks like I'm wrong -- see what happens when you trust your own memory over Google/Wikipedia? Someone clearly lied to me in my youth when they told me it was referring to our town :-(
Re:Little boxes (Score:5, Funny)
I've seen Windows 8. I know what ticky-tack little boxes look like.
Re: (Score:2)
Re: (Score:2)
I'm in Australia and I got it!
Re:Little boxes (Score:4, Funny)
The song was written 'bout Daly City - a Philippine colony which forms the buffer-zone between San Francisco and the United States of America.
Re: (Score:2)
Is that why my family always sang that song when we drove past [google.com]?
Re: (Score:2)
Used to sing it at camp.
Re:Little boxes (Score:5, Funny)
network boxes,
made in china,
network boxes that go sparky-spark
network boxes
exploding boxes
dangerous boxes, all the same.
Re:Little boxes (Score:5, Funny)
There are white ones
And more white ones
And they all have those blinky lights
and they're all made out of ticky tacky
and they all fail just the same.
Re: (Score:3)
That's true. My company uses IBM BladeCenter servers bundled into a VM cluster. The best bang-for-buck at the time were the 4-core Opterons, which easily scaled to 4 CPUs for 16 cores (that could probably be higher now). The beauty of AMD moving into this space is that the blades are swappable with the current hardware.
But rather than rows of boxes, VMs are the better way to go.
Re: (Score:3)
Little boxes?
"The little boxes will make you angry!"
Re: (Score:3)
VMs (Score:2, Insightful)
put them in VMs!
Re: (Score:3, Funny)
put them in VMs!
Great Plan! If all your servers are virtual then you don't have to worry about diesel fuel when there's a hurricane!
Re: (Score:3, Insightful)
Uhhh. because the "little boxes" and individual servers run on unicorn farts and angel tears?
Re:VMs (Score:5, Insightful)
Call me old school, but Unix/Linux are multi-tasking. Why not just run multiple services on one OS directly on the metal?
Re:VMs (Score:4, Interesting)
There are good reasons to separate functions. Mainly security. That way, if someone hacks the NTP server, they don't get control of DNS, nor do they get control of the corporate NNTP server, or other functions.
The ideal would be to run those functions as VMs on a host filesystem that uses deduplication. That way, the overhead of multiple operating systems is minimized.
What would be nice would be an ARM server platform, combined with ZFS for storing the VM disk images, and a well thought out (and hardened) hypervisor. The result would be a server that can take one rack unit, but can handle all the small stuff (DNS caching, NTP, etc.)
Re: (Score:3)
Why dedup? Those VMs should not require more than 500MB-2GB each.
Deduplication (inline) only adds complexity and sources of latency you don't need or want.
Any small pizza box with 2x146GB drives (or 2x256GB ssd) in RAID1 should be able to handle any number of virtualized small utility guests without any deduplication.
Re: (Score:2)
It comes down to an issue of scalability.
With multiple services on one OS, if one of the services gets popular and needs more power than the server can handle, you will need to decommission, reinstall, and configure the service onto another server... and in the meantime your other services often take a performance hit. Virtualizing means that if you need to move a service from one box to another, it's a file copy away, versus reconfiguring and retesting.
Re: (Score:2)
That's why you use clusters.
Now, why are you talking about that in a thread about virtualization?
Re:VMs (Score:4, Interesting)
Well, one of the reasons is that some services grab hold of port 80 (or, occasionally, other ports) and don't want to share it. With virtualization you can share resources with those too... But yes, those services are a minority, and probably won't need a lot of resources...
Another reason is that you may want to give different people permission to administrate different machines... But again, except for companies that sell hosting, that's an exception.
A third reason is that you may want to replicate your environment for backups and testing... Except that you don't need a VM to do that on Linux. You just copy the files, add two devices to /dev and run the bootloader again. It's easier than backing up a VM in Windows.
And I've never heard of any other reason for virtualization, and I can't think of one either. I'm lost as to why suddenly so many people want it so badly... Ok, all datacenters added specialized machines for decades because of those first two reasons I gave you above, and get some benefit from virtualizing them... But the core of a datacenter (the main databases, web servers - the machines that actually spend the day working) should run on the metal, and although I've met several people who argue otherwise, I've never heard any argument for virtualizing them that holds any water.
But now, I think, maybe the HA people should try to virtualize their clusters. They have a huge amount of redundancy, and consolidating several virtual machines in a single real one can help them reduce their costs. (Ok, if you are in doubt, no, I'm not THAT stupid, it's a joke.)
Re: (Score:3)
I'm lost as to why suddenly so many people want it so badly... Ok, all datacenters added specialized machines for decades because of those first two reasons I gave you above,
I thought it was because young geeks and proto-managers grew up with the Curse Of Windows, where you had to run one service per machine, and then brought that flawed mindset into the Linux world.
Re: (Score:2)
Then you don't fully understand how a VMware farm works.
Re: (Score:3)
VMware (and, I understand, all of their competitors) has this notion of clustering where one "main server" can be rebooting without causing any of its guests to suffer an interruption.
You can stuff those services onto a separate guest, but as long as things are laid out properly and you don't have some dependency for your virtual infrastructure on that guest, you can virtualize it just fine. You can even virtualize the vCenter server, though it makes bringing the virtual infrastructure back up from scratch
Re: (Score:2)
Not in the least is it dumb. If you manage your systems and their boot order properly, it's a non-issue.
performance? (Score:5, Insightful)
An NTP server is all about consistency. If it's running in a VM and can be delayed at the whim of the host, do you think it's going to be a very good source of time?
Re: (Score:3)
I think it will be fine, so long as it's not using the CPU for a timing source.
Re:performance? (Score:5, Informative)
Re: (Score:2)
Depends on the requirements (the needs).
You don't need cadillac solutions if the requirement is to have logs that are easy to correlate.
Virtual Machines will work fine for most applications.
NTP servers are NOT about consistency (Score:3, Informative)
NTP servers are NOT about consistency, they are about making badly designed protocols, such as NFS, capable of limping, instead of just falling on their face.
If the requests on these protocols used a client timestamp for the client's idea of the current time, then the server on receiving the request could look at its idea of the current time, and arrive at a delta before it actually did anything other than enqueue the request locally.
Then when the server responded with a non-"now" timestamp in any client re
Re: (Score:3, Interesting)
NTP servers are NOT about consistency, they are about making badly designed protocols, such as NFS, capable of limping, instead of just falling on their face.
If the requests on these protocols used a client timestamp for the client's idea of the current time, then the server on receiving the request could look at its idea of the current time, and arrive at a delta before it actually did anything other than enqueue the request locally.
Then when the server responded with a non-"now" timestamp in any client response, it could apply this delta to the response value, and as far as the client was concerned, it and the server would have synchronized ideas of "now", without resorting to all of this NTP BS or worrying about clock drift, or anything.
I lobbied very strongly to try to get this fixed in NFSv4; maybe we will get our collective heads out of our butts by NFSv5.
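The delta scheme described above can be sketched in a few lines. This is purely illustrative pseudologic, not actual NFS code; all names here are made up:

```python
# Hypothetical sketch of the per-request delta idea: the client sends its
# idea of "now" with each request, the server computes an offset against its
# own clock, and rewrites timestamps in the reply into the client's timeline.

def server_handle(request_client_now, server_now, server_mtime):
    """Return a server-side file mtime expressed in the client's timeline."""
    delta = request_client_now - server_now   # how far the client clock differs
    return server_mtime + delta               # shift server timestamp into client time

# Example: client clock runs 100 s behind the server.
client_now = 1_000_000.0
server_now = 1_000_100.0
server_mtime = 1_000_090.0                    # file modified 10 s ago, server time
client_view = server_handle(client_now, server_now, server_mtime)
# The client sees the mtime as 10 s before its own "now" -- the two sides agree
# on relative ordering without either clock being synchronized.
```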
Are you all mad? What does improving NFS have to do with intentionally letting PC clocks drift?
Could I go out on a limb and suggest there are reasons besides NFS to keep clocks in sync? Wow.
Re: (Score:3)
But if the protocol's time-dependency issues are fixed by an application, along with every other application/protocol's time-dependency issues, then fixing the protocol is superfluous because a functional system will already have a stable sense of what time it currently is courtesy of NTP. One cure for a thousand ailments.
Would you feel better about it if NTP were wholly integrated into the kernel? Why, or why not?
Why seperate boxes for tiny resource requirements? (Score:3)
Re:performance? (Score:4, Interesting)
I have had best results on bare metal indeed.
I run ntpd on bare metal along with other apps, but I run ntpd in a jail (chroot-like), just in case. I reply to public requests but don't allow queries: ntpdate and requests from other stratum servers work fine, but you can't ntpq -pn me, for example.
From ntp.conf:
restrict default noquery
By the way, I am a maniac, but I am still satisfied at +/-5 ms. Please don't close my door too hard; the gust of wind towards my NTP server might push it above the +/-5 ms error margin. Not maniac enough to buy a GPS, though...
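For anyone setting up something similar, a commonly recommended restrict stanza along those lines looks like the following (directive names vary between ntpd versions, so treat this as a sketch and check your distribution's defaults):

```
# Serve time to anyone, but refuse ntpq/ntpdc queries and runtime changes
restrict default kod limited nomodify notrap nopeer noquery
# Localhost keeps full access, so ntpq -pn still works locally
restrict 127.0.0.1
restrict ::1
```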
Re: (Score:2)
Re: (Score:2, Informative)
Re: (Score:2)
Re: (Score:2)
HP Proliant MicroServer N40L (Score:5, Informative)
I don't work in a data center. But I think you might want to look at an HP Proliant MicroServer.
Basically it is an AMD laptop chipset on a tiny motherboard in a cunningly designed compact enclosure. The SATA drives go into carriers that are easily swapped (but not hot-swappable). It's quiet and power-efficient. It supports ECC memory (max 8GB) and supports virtualization.
http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/15351-15351-4237916-4237918-4237917-4248009-5153252-5153253.html?dnr=1 [hp.com]
Silent PC Review did a complete review of an older model (with a 1.3 GHz Turion instead of 1.5 GHz).
http://www.silentpcreview.com/HP_Proliant_MicroServer [silentpcreview.com]
SRP is $350, but Newegg has it for $320 (limit 5 per customer).
http://www.newegg.com/Product/Product.aspx?Item=N82E16859107052 [newegg.com]
Newegg also has 8GB of ECC RAM for about $55, so you can get one of these and max its RAM for under $400.
I just got one and haven't had time to really wring it out, but I did do the RAM upgrade. Despite the tiny enclosure, it wasn't too painful to work on it, and I was impressed by the design. The Turion dual-core processor has a passive heat sink on it, and the single large fan on the back pulls air through to cool everything. (There is also a tiny high-speed fan on the power supply.)
I'm going to use this as my personal mail server. It's cheap enough and small enough that I plan to have at least one put away as a hot spare; if the server dies, I'll power it down, move the hard drives to the spare, and I'll have the mail server back up within 5 minutes. Not bad for a cheap little box.
Re: (Score:3, Interesting)
It's not rack-mountable. No IPMI either. That should be a deal-breaker for anyplace serious enough to have a rack.
We try to virtualize anything that can be virtualized. But for those few tasks that really need to run on bare metal, we've had good luck with little Atom D525 Supermicro rackmountable boxes. We bought a few complete boxes (minus RAM and storage) that Newegg billed as fanless (which was a lie). Those ran hot enough to develop problems after a few months. Ever since, we've built ours up from part
Re: (Score:2)
And in some places that get a little *too* serious, you end up with some stupid proprietary appliance that can't be rack mounted but the PHB swore was needed. And for that, you will have one of these [rackmountsolutions.net]. And in the extra space next to said proprietary POS, you can put something like the abovementioned HP server.
ESXi (Score:3, Interesting)
No little unsupportable boxes here.
Re: (Score:2)
It has made it easy to spin up a test server or six as needed. Makes my work life just a little bit easier.
Previous gen hardware (Score:5, Insightful)
Last generation's compute nodes. We keep some around for utility functions after decommissioning a large cluster.
Re: (Score:2)
Get a real time server. (Score:5, Interesting)
Go get a GPS satellite receiver/time server. Actually, get two. Don't screw with time.
THEN, virtualize the rest of the stuff. Monitoring, syslogging, management, patchers, etc.
We've virtualized everything except for
- a Windows DC so that it stays up if the vmware datastores or SAN eats itself in a horrible way.
- The NIS server we have to use on our UX environment due to an ancient regulation. I'm not willing to put up HP-UX VMs for this right now, otherwise it'd be safe in a VM as well.
- Anything we can't virtualize due to licensing/contract/support issues. So our VOIP environments, phone call recording, access control systems for the doors,
My datacenter is getting a lot nicer to look at, and a lot easier to upgrade. I can shift servers or volumes all over the room so I can do live maintenance during the day.
Re: (Score:2)
Note: GPS timeservers can vary widely in quality. Don't assume that the most elegant package, slickest website or cheapest price equates to a solid box (remember, realtime OS's can crash too ;).
Some of the most reliable and precise timeservers I've seen have been home-built PC based boxes.. YMMV.
"Obsolete" hardware (Score:5, Interesting)
Those support tasks don't exactly push hardware to its limit, and most of those tasks are the kind of thing that demands a bunch of redundant servers anyway.
Throw a bunch of "last generation" hardware at the task -- stuff from the "asset reclamation" pile. Leave a few more around as spares. Less disposal paperwork. Works just fine. By the time your last spare fails, you'll have a new generation of obsolete hardware.
amazon (Score:2, Interesting)
For little boxes that deal with DNS, time, etc - put them in amazon. They're critical servers, but don't really need to be at your site. Put the primaries outside, and slaves on the inside. That way if you have an outage you can always repoint DNS to somewhere else...something you can't do if your primary DNS is on a dead network.
virtualizing NTP is dumb (Score:2)
You want consistently fast behaviour from your time servers. Don't mess with virtualizing them.
Re: (Score:2)
Re: (Score:2)
VMs do bad things to keeping accurate time; they do a lot of funny business. Their solution so far is to poke a hole through to the host OS to get the time.
Re: (Score:2)
Good luck with that...
Re: (Score:2)
Besides the NTP problems, also make sure to write the IP of every computer in IT on a piece of paper, then put it on a wall.
When you have Internet problems and nobody can get any work done anymore because all of those light services "didn't really need to be at your site", you'll need those addresses for the LAN party.
Crash Cart (Score:2)
Re: (Score:2)
Use OpenStack and you won't even need this.
VMs (Score:2)
VMs
Virtual Machines I suppose (Score:3)
I think it's appalling that we do that. It's a horribly expensive way to use hardware, but we do it because we can't be stuffed to deal with operating systems. Most likely a single box and OS instance could do it for you if it was set up correctly.
If you (Score:5, Funny)
If you can't run it on your iPad, it's probably not worth running.
--Management.
Re: (Score:3)
I'm picturing racks of overclocked iPads with a wall of box fans pointed at them.
And then I'm imagining the conversations that would inevitably ensue:
"I know I fat-fingered the fucking IPv6 address. YOU try typing on this goddamn touch screen."
Personally at work for small things... (Score:2)
I personally hate and despise people who put non-rackmount kit in racks...
We use various devices, mostly 1RU servers of various configs... for example, there are a couple of mini-ITX 1RU servers we have with E-350-based mini-ITX boards (I really love the E-350/E-450 boards)... not quite as cheap as the HP N40L MicroServer, but at least it's a rack format.
Then we have a few that run virtualisation here and there for some tasks using KVM (some of those too have E-350s in them, as the E-350s do have the virt'n e
Re: (Score:2)
I totally agree. Just populate your racks and pick some for "special duty" (and put your DNS, NTP, and monitoring daemons on there).
Re: (Score:2)
Sounds like you need an ATS.
Re: (Score:3)
While I agree that the proper solution for a rack is rack mount equipment, the fact that something is not rack-mount is not an excuse for it to be a rat's nest of cables. I have installed non-rack mount equipment, there's no reason the cords can't be just as neat and tidy as the rack-mount stuff if you do it right. That said, the better answer is to smack whoever decided to go with non-rack mounted equipment in the first place...
Rat bait stations (Score:2)
What scale data center? (Score:3)
I can't imagine trying to perform network management with a few mac minis so I'm assuming you're referring to a very small facility? Our new data center was built on 10-gig infrastructure and our NM is appropriately scaled--NetScout Infinistreams connected to Gigamon matrix switches. While the Gigamons were quite expensive they allowed us to utilize fewer Infinistreams while also providing some very cool functionality.
It took a long time for our upper management (those with the dollars) to come around to the notion that, in order to realize the full investment made in the data center, true network management needed to be baked in from the start.
Soekris (Score:2)
We are using a couple of Soekris [soekris.com] boxes for some basic monitoring. They are lightweight Atom processors with no active cooling, designed with networking in mind: 4 Gig-E ports on the 6501, and you can get up to 8 more thanks to the 2 PCI-E slots available in the rackmount version. Since we are using an mSATA SSD on the board we have no moving parts, so nothing mechanical to fail.
What do I like... (Score:2)
I like the same big boxes as are used for everything else. An NTP server running on a Mac Mini... really? Get a GPS-driven device that serves the purpose. They run an embedded OS, so they're very low-maintenance and straightforward, and they perform extremely well. As for uptime/network/performance monitoring functions, these need to be at least as reliable as everything else. And the mainframe interfaces are awfully important... imagine how much good you'd be if you maintained your intellect but became
Synertron micro boxes (Score:2)
http://www.synertrontech.com/ [synertrontech.com]
Some are fanless ones that I use for Linux boxes, some are rackmount with multiple motherboards per 1U case, and their prices and add-ons are cheaper than Newegg's.
Nope, don't work for 'em, just used their products for about 8 years now.
We don't have anything. (Score:2)
Penny wise and pound foolish (Score:2)
If by "big iron", you mean "IBM Mainframes or similar kit", then your question has meaning.
If by "big iron", you mean "lots of irritating PCs that I think I can add up into a supercomputer because all problems are amenable to parallel solutions", then your question is meaningless.
Assuming the second, you are much better off just using identical hardware for everything, since it will mean you have the components on hand should anything go wrong, and it will mean that you have a single maintenance SKU. In th
If you can't rack it... (Score:5, Informative)
...I don't want it in my datacenter. If you have no budget for non-revenue-generating boxes for services like DNS, NTP, etc. then upgrade the server hardware you tore out of production after the last upgrade cycle with SSDs and low-wattage processors & put it back into service for your internal needs.
Otherwise get a few Dell R210s or some other small cheap rack server with an IPMI 2.0 BMC and get on with your business. Any money saved by buying "mini-PCs" (or whatever you want to call them) for any datacenter computing hardware you plan to rely upon at all will be burned the first time you have to drive to the datacenter and physically babysit some cheap machine because it didn't have IPMI.
"And they're all made out of ticky-tacky" (Score:2)
"And they all look just the same"
un-cloud (Score:2)
The use of discrete machines allows a machine to be specialized for a task. Sometimes you just need fast number-crunchers for special types of numerical problems, and GPUs work well. Other times, tasks can be parallelized, so a distributed computing model works well. For the accessory infrastructure of NTP, DNS, and so forth, reliability is more important than CPU MIPS or memory bandwidth. Take the high-end servers of yesteryear, ones that would have been put out to pasture, and use those for such things. De
2 - 3 redundant "big iron" with VM's (Score:2)
We use a lot of VMs for this kind of thing now (Score:2)
One-off boxes become a huge time sink, usually at the absolute worst possible time to do so. With two very viable options with Xen and ESX, put the time and care into setting up a stack with the nifty features you want -- redundancy options, ability to move VMs from one server to another, monitoring, out of band management, RAID, etc.
Then you can set up the little management hosts, set up a VM for each one of those "little things", and also come up with a single way of deploying your operating systems so y
Why not hypervisors? (Score:4, Interesting)
Re:virtualization is the game now (Score:5, Insightful)
Virtualized NTP is about the dumbest thing I've read on /.
Yes, worse than various conspiracy theories and fanboi wars.
Re:virtualization is the game now (Score:5, Insightful)
To be fair, if someone cares enough about time accuracy to understand why that's a dumb idea, they should probably be using a GPS receiver instead of a PC.
If you care about time... (Score:2)
Or using both GPS and atomic clocks [google.com].
Re: (Score:2)
Re: (Score:3)
If you care enough to use a GPS receiver instead of a network time source, you should also care enough to get the antenna on to the roof... We have many such time sources controlling timing in the basements of buildings, but the antenna always ends up on the mast.
Re: (Score:2)
Re: (Score:2)
A few ms of incoming and outgoing lag won't hurt it at all. A few ms of incoming lag without a matching few ms of outgoing lag (like what happens when your VM is sent to swap) will completely destroy its accuracy.
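The parent's point falls straight out of the standard NTP offset estimate, offset = ((t1 - t0) + (t2 - t3)) / 2, which assumes the request and response take the same time on the wire. A short illustrative sketch (simulated timestamps, not a real NTP client):

```python
def ntp_offset(t0, t1, t2, t3):
    # Standard NTP clock-offset estimate; only exact when path delay is symmetric.
    # t0/t3 are client transmit/receive, t1/t2 are server receive/transmit.
    return ((t1 - t0) + (t2 - t3)) / 2.0

def exchange(true_offset, delay_out, delay_back):
    # Simulate one request/response; server clock = client clock + true_offset.
    t0 = 0.0                              # client transmit (client clock)
    t1 = t0 + delay_out + true_offset     # server receive (server clock)
    t2 = t1                               # server replies immediately
    t3 = t2 - true_offset + delay_back    # client receive (client clock)
    return ntp_offset(t0, t1, t2, t3)

sym = exchange(true_offset=0.050, delay_out=0.005, delay_back=0.005)
# Symmetric 5 ms each way: the 50 ms true offset is recovered exactly.
asym = exchange(true_offset=0.050, delay_out=0.005, delay_back=0.045)
# 40 ms of extra lag on the return leg only (e.g. the VM got descheduled):
# the estimate is biased by (delay_out - delay_back) / 2 = -20 ms.
```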
Re: (Score:2)
It can be done.
Ultimately, if you need time more accurate than within a few seconds, you should be using a GPS-fed standalone time server anyway. If you are just running NTP so everyone's desktop clock is the same and the log files match up, a VM will work fine.
Re: (Score:2)
Great. Then we can knock out the data center by unplugging just one box. Brilliant plan.
Re: (Score:2, Insightful)
Redundancy doesn't mean having different services on separate boxes; it means having the same services in multiple places. In fact it's easier with one VM box hosting everything, because it's easier to keep it backed up and synced to a spare than it is to do a whole bunch of individual ones.
Re: (Score:2)
Why not just run all the programs in the same OS? I put the DNS server and the NTP server and the mail server and the web server in one box without VMs. You can still take down the whole place by unplugging one box, so don't worry about lacking a kill switch.
Re: (Score:2)
Why is it important to have a kill switch? You working on the Skynet beta or something?
Re: (Score:3)