Hardware

Ask Slashdot: Little Boxes Around the Edge of the Data Center?

First time accepted submitter spaceyhackerlady writes "We're looking at some new development, and a big question mark is the little boxes around the edge of the data center — the NTP servers, the monitoring boxes, the stuff that supports and interfaces with the Big Iron that does the real work. The last time I visited a hosting farm I saw shelves of Mac Minis, but that was five years ago. What do people like now for their little support boxes?"

  • ESXi (Score:3, Interesting)

    by nurb432 ( 527695 ) on Thursday November 01, 2012 @07:01PM (#41847959) Homepage Journal

    No little unsupportable boxes here.

  • by attemptedgoalie ( 634133 ) on Thursday November 01, 2012 @07:02PM (#41847973)

    Go get a GPS satellite receiver/time server. Actually, get two. Don't screw with time.
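
    As a rough illustration of the GPS option (not from the poster: the device path, mode, and fudge values are assumptions that vary by receiver), a GPS reference clock in ntp.conf looks something like this:

    # NMEA GPS receiver via refclock driver 20; unit 0 reads /dev/gps0,
    # an assumed symlink to the receiver's serial port
    server 127.127.20.0 mode 16 prefer # all NMEA sentences, 9600 baud
    fudge 127.127.20.0 flag1 1 # enable PPS processing if the receiver supplies it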

    THEN, virtualize the rest of the stuff. Monitoring, syslogging, management, patchers, etc.

    We've virtualized everything except for
    - a Windows DC, so that it stays up if the VMware datastores or the SAN eat themselves in some horrible way.
    - the NIS server we have to use in our HP-UX environment due to an ancient regulation. I'm not willing to stand up HP-UX VMs for this right now; otherwise it would be safe in a VM as well.
    - anything we can't virtualize due to licensing/contract/support issues: our VoIP environments, phone call recording, the access control systems for the doors, and so on.

    My datacenter is getting a lot nicer to look at, and a lot easier to upgrade. I can shift servers or volumes all over the room so I can do live maintenance during the day.

  • "Obsolete" hardware (Score:5, Interesting)

    by beegle ( 9689 ) on Thursday November 01, 2012 @07:03PM (#41847993) Homepage

    Those support tasks don't exactly push hardware to its limit, and most of those tasks are the kind of thing that demands a bunch of redundant servers anyway.

    Throw a bunch of "last generation" hardware at the task -- stuff from the "asset reclamation" pile. Leave a few more around as spares. Less disposal paperwork. Works just fine. By the time your last spare fails, you'll have a new generation of obsolete hardware.

  • amazon (Score:2, Interesting)

    by mveloso ( 325617 ) on Thursday November 01, 2012 @07:05PM (#41848021)

    For little boxes that deal with DNS, time, etc., put them in Amazon. They're critical servers, but they don't really need to be at your site. Put the primaries outside and the slaves on the inside. That way, if you have an outage, you can always repoint DNS somewhere else... something you can't do if your primary DNS is on a dead network.
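
    As a hedged sketch of the on-site half of that setup (assuming BIND; the zone name and primary address are placeholders), each inside slave would carry something like:

    zone "example.com" {
        type slave; // on-site secondary
        masters { 203.0.113.10; }; // primary hosted off-site (placeholder IP)
        file "slaves/example.com.db";
    };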

  • Re:VMs (Score:4, Interesting)

    by mlts ( 1038732 ) * on Thursday November 01, 2012 @07:44PM (#41848389)

    There are good reasons to separate functions. Mainly security. That way, if someone hacks the NTP server, they don't get control of DNS, nor do they get control of the corporate NNTP server, or other functions.

    The ideal would be to run those functions as VMs on a host filesystem that uses deduplication. That way, the overhead of multiple operating systems is minimized.

    What would be nice would be an ARM server platform, combined with ZFS for storing the VM disk images, and a well thought out (and hardened) hypervisor. The result would be a server that fits in one rack unit but can handle all the small stuff (DNS caching, NTP, etc.).
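
    On the storage side, that is a couple of commands (a sketch; the pool layout, device names, and dataset name are made up, and dedup's RAM appetite is the usual caveat):

    # mirrored pool plus a deduplicated dataset for the VM disk images
    zpool create tank mirror disk0 disk1
    zfs create -o dedup=on -o compression=on tank/vmimages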

  • by Mark of the North ( 19760 ) on Thursday November 01, 2012 @07:53PM (#41848465)

    The Mac Mini isn't rack-mountable, and it has no IPMI either. That should be a deal-breaker for any place serious enough to have a rack.

    We try to virtualize anything that can be virtualized. But for those few tasks that really need to run on bare metal, we've had good luck with little Atom D525 Supermicro rackmountable boxes. We bought a few complete boxes (minus RAM and storage) that Newegg billed as fanless (which was a lie); those ran hot enough to develop problems after a few months. Ever since, we've built ours up from parts:
    - SUPERMICRO CSE-510-200B 1U rackmount server case
    - SUPERMICRO MBD-X7SPE-HF-D525-O server motherboard
    - SUPERMICRO MCP-220-00051-0N single 2.5" fixed HDD mounting bracket
    - GELID Solutions CA-PWM 350 mm PWM Y-cable
    - RAM and storage
    About $400 each, and they've been really reliable. The only thing I don't like is that they don't have IPMI on a dedicated port.

    But honestly, if there is any virtualization going on, there shouldn't be much need for these.

  • Re:VMs (Score:4, Interesting)

    by marcosdumay ( 620877 ) <marcosdumay&gmail,com> on Thursday November 01, 2012 @09:39PM (#41849267) Homepage Journal

    Well, one of the reasons is that some services grab port 80 (or, occasionally, other ports) and don't want to share it. With virtualization you can share resources with those too... But yes, those services are a minority, and they probably won't need a lot of resources...

    Another reason is that you may want to give different people permission to administer different machines... But again, except for companies that sell hosting, that's an exception.

    A third reason is that you may want to replicate your environment for backups and testing... Except that you don't need a VM to do that on Linux: you just copy the files, add a couple of devices to /dev, and run the bootloader installer again. It's easier than backing up a VM on Windows.
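
    As a hedged sketch of that clone procedure (the mount point and target disk are assumptions), with the new root already mounted at /mnt/clone:

    # copy the files: one filesystem only, preserving hard links, ACLs, xattrs
    rsync -aHAXx / /mnt/clone/
    # "add devices to /dev" so the chroot can see the disks
    mount --bind /dev /mnt/clone/dev
    # run the bootloader installer again, onto the clone's disk
    chroot /mnt/clone grub-install /dev/sdb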

    And I've never heard of any other reason for virtualization; I can't think of any other, either. I'm lost as to why so many people suddenly want it so badly... OK, datacenters have been adding specialized machines for decades because of those first two reasons I gave you above, and they get some benefit from virtualizing them... But the core of a datacenter (the main databases, the web servers - the machines that actually spend the day working) should run on the metal, and although I've met several people who argue otherwise, I've never heard an argument for virtualizing them that holds any water.

    But now, I think, maybe the HA people should try to virtualize their clusters. They have a huge amount of redundancy, and consolidating several virtual machines onto a single real one could help them reduce their costs. (OK, if you're in doubt: no, I'm not THAT stupid, it's a joke.)

  • Re:performance? (Score:2, Interesting)

    by Anonymous Coward on Thursday November 01, 2012 @10:24PM (#41849545)

    We use two of our Windows domain controllers as our time source. Those 2008 R2 machines are running on a 10-node ESX farm with about 450 other virtual machines. Those two domain controllers provide time services for about 2000 devices in our worldwide network (and not just Windows machines either: our switches, routers, SAN, etc.). We have NEVER had a problem with NTP and synchronization.

    NTP is the Network Time Protocol. It is designed with random latency in mind: if you are going over a network, there is random latency. That latency, inherent to any network, is many orders of magnitude higher than any latency a virtual machine sees running on a hypervisor.
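
    (As an aside, and not something the poster spelled out: pointing the PDC-emulator DC at an external source is usually a one-liner; the pool names below are placeholders.)

    w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update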

  • Why not hypervisors? (Score:4, Interesting)

    by SignOfZeta ( 907092 ) on Thursday November 01, 2012 @10:28PM (#41849563) Homepage
    I don't operate a datacenter, but for virtualized servers in an office, I always enable the NTP server functionality in the hypervisor, have it sync to a stratum-1 time source, then advertise that address via DHCP and DHCPv6 for my guests and workstations (and visiting cell phones) to use. Being the definitive time source, I also tell the hypervisor to automatically set the clock on the guests, then give a virtualized AD domain controller (if any) the PDC FSMO role to set the Windows domain time. I have sites with two or three hypervisors running NTP, and it seems to work well. Not sure if it will scale to your environment, OP, but it may be worth mulling over.
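
    The advertising step is a short DHCP fragment (a sketch in ISC dhcpd syntax; the subnet and addresses are placeholders):

    subnet 192.0.2.0 netmask 255.255.255.0 {
        range 192.0.2.100 192.0.2.200;
        option ntp-servers 192.0.2.10, 192.0.2.11; # the hypervisors' NTP service
    }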
  • Re:performance? (Score:4, Interesting)

    by ls671 ( 1122017 ) on Friday November 02, 2012 @12:08AM (#41850103) Homepage

    Indeed, I have had the best results on bare metal.

    I run ntpd on bare metal alongside other apps, but I run ntpd in a jail (chroot-like), just in case. I do reply to public requests, but I do not allow queries: ntpdate and requests from other stratum servers work fine, but you can't ntpq -pn me, for example.
    From ntp.conf:

    restrict default noquery

    By the way, I am a maniac, but I am still satisfied at +/-5 ms. Please do not close my door too hard, so that the gust of wind towards my NTP server doesn't push it above the +/-5 ms error margin. Not enough of a maniac to buy a GPS, though...
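
    (A fuller variant of that restrict line, for anyone copying it; the exact flag set is a site choice, not the poster's config:)

    restrict default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1 # leave localhost able to query itself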

  • by Anonymous Coward on Friday November 02, 2012 @12:51AM (#41850275)

    NTP servers are NOT about consistency; they are about making badly designed protocols, such as NFS, capable of limping along instead of just falling on their faces.

    If the requests in these protocols carried a client timestamp for the client's idea of the current time, then the server, on receiving the request, could look at its own idea of the current time and arrive at a delta before it actually did anything other than enqueue the request locally.

    Then when the server responded with a non-"now" timestamp in any client response, it could apply this delta to the response value, and as far as the client was concerned, it and the server would have synchronized ideas of "now", without resorting to all of this NTP BS or worrying about clock drift, or anything.

    I lobbied very strongly to try to get this fixed in NFSv4; maybe we will get our collective heads out of our butts by NFSv5.

    Are you all mad? What does improving NFS have to do with intentionally letting PC clocks drift?

    Could I go out on a limb and suggest there are reasons besides NFS to keep clocks in sync? Wow.

  • Re:VMs (Score:2, Interesting)

    by Anonymous Coward on Friday November 02, 2012 @01:29AM (#41850423)

    Well, modern hypervisors like VMware allow you to prioritize virtual machines so that they get a higher share of scheduling time in an overcommitment scenario. Assign your ntpd server a high priority so that it doesn't have to wait in a long queue to get run time.

    Yes, running time-sensitive stuff on a hypervisor is tricky, but not at all impossible. It's not stupid unless you don't know what you're doing.

  • by A bsd fool ( 2667567 ) on Friday November 02, 2012 @09:08AM (#41851981)
    Right on the NTP virtualization (which is irrelevant), but wrong on the "bootstrap problem". I run two private mini-DCs, one fully virtualized, the other almost. In the "almost" DC, only the pfSense box is not virtualized; it handles DNS caching, firewall duties, VPN access, and DHCP. In the second DC, even pfSense runs in a VM. The "trick" is to use the tools you have: set the VM startup order so that the VMs responsible for DNS are started first, or at least soon enough to be up before the VMs that rely on them. The ESX servers themselves do not need DNS for anything. NTP on the VMs is irrelevant: the hypervisors will use NTP to keep themselves synced, and the VMs sync through the (always installed, right?) VMware Tools (or open-vm-tools), since even running an NTP *client* in a VM is problematic and ultimately pointless.
  • by Anonymous Coward on Friday November 02, 2012 @11:45AM (#41853715)
    vSphere needs DNS if you install it with an external database server (which I have). Yes, you can get away with never requiring DNS to start your VMware cluster, and I've done it; that's why I've decided it's just less effort and pain to have two physical DNS servers instead, which makes it a non-issue entirely.

    I also never "power on" the servers. They were powered on the first day, and except for memory upgrades, have been powered on ever since.

    I tend to plan for the worst case scenario, which is a restart from a dark data center. Given that a hurricane just passed awfully close by one of them, that seems like a valid assumption for me to make.

    Regarding NTP, I still "don't get" what you mean, I guess. My ESX hosts sync to the normal NTP pool, and they are the only machines that need to use NTP. All the others are virtual, and so they sync via VMware Tools and not NTP.

    I have a couple of thousand physical servers. They very much need to sync their hardware clocks via NTP. I need reliable NTP servers. NTP running on a virtual host is not reliable (the clock drifts horribly, although ESXi 5 is better in this regard).
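
    (For completeness, the client side on those physical boxes is a three-line ntp.conf; the server names are placeholders:)

    server ntp1.example.com iburst
    server ntp2.example.com iburst
    driftfile /var/lib/ntp/ntp.drift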
