
Maintaining SSH Host Keys Across a Large Network?

skullY asks: "Like most organizations, we have a large number of servers, all of which have unique host keys. Our big problem right now is that the vast majority of our servers are in a cluster and we have one or two machines that get reinstalled every week. Every time this happens, a new SSH host key is generated, and all our existing machines bitch when they have to connect. I realize that the best approach is to install system-wide ssh_known_hosts files and keep them up to date, but in practice this isn't so easy. So my question is: how does everyone else keep track of large numbers of SSH host keys that continually change, and propagate those changes to machines on the network?"
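A minimal sketch of how that system-wide file can be seeded with ssh-keyscan (the hostnames and destination path are placeholders; adjust for where your build looks):

    # Collect the cluster's current public host keys and install them
    # system-wide; -t selects which key types to ask for.
    ssh-keyscan -t rsa,dsa host1 host2 host3 > /etc/ssh/ssh_known_hosts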
This discussion has been archived. No new comments can be posted.

  • You just answered your own question, stop reinstalling everything every week, for Christ's sake :)
    It's the easiest way to do upgrades. I make changes here, do some testing, and before applying the changes across the board I have to make sure they'll work under load. The only reliable way is to take a machine out of the cluster and do the new install on it (so that when we add machines we know they'll work), then after a few days of that machine not falling down, use an expect script to ssh to the remaining machines and do the upgrades in place. Eventually, I'll get something set up where the new install wgets some important settings from another machine and at the same time sends it the ssh key, but that's not really what I want to have to set up. I was hoping someone would know of some obscure key management system that automatically propagated changes across the network, or would have a really nifty way to implement it without the lag or MITM issues.
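    Roughly what I have in mind for that post-install hook, as a sketch (every hostname, URL, and path here is made up):

        #!/bin/sh
        # Post-install hook (sketch): pull shared settings from a config box,
        # then push this machine's fresh public host key to a central server
        # that rebuilds the ssh_known_hosts file. Assumes this host can
        # already authenticate to keymaster somehow.
        wget -q http://configserver.example.com/settings.tar -O /tmp/settings.tar
        tar -xf /tmp/settings.tar -C /etc
        scp /etc/ssh/ssh_host_rsa_key.pub \
            keymaster.example.com:/var/hostkeys/`hostname`.pub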
  • by Zurk ( 37028 )
    Use GNU Queue for this; it can do all that and more.

  • What about setting up an NIS map for the known_hosts file?

  • Our big problem right now is that the vast majority of our servers are in a cluster and we have one or two machines that get reinstalled every week.

    You just answered your own question, stop reinstalling everything every week, for Christ's sake :)


  • Well, you can always ask the folks of OpenSSH (www.openssh.org), perhaps they'll have some ideas.

  • What is the point of backing up keyfiles to tape or to another host?

    Some '3733t h4x0r' just needs to steal the tape or compromise the server where the keys are contained and every advantage of ssh keys is eliminated. Backup tapes violate file permissions at a distance. A single point where all keyfiles are stored is a boon to internal hackers.

  • You have a point, but think about practical considerations too.

    In a university or colocation facility, yes you NEED to be running ssh or pptp or something similar.

    But what do you gain in the datacenter or other internal corporate environment? If your professional staff is going to break your machines, they'll be able to do so even with ssh present.

    Where I work, we have a cluster of IBM RS/6000 machines, a couple dozen Suns and a bunch of Digital AlphaServers doing a wide variety of data processing. We do not use ssh at all for a couple of reasons.

    1. 90% of servers have 6 or fewer users, who are admins or senior DBAs/developers. Most connectivity is done through client/server connections to databases or other applications.

    2. Most end-user types cannot even directly connect to the server subnets. All connections go through a broker of some sort.

    3. It's a pretty new network. Everything is switched. Hard to sniff on a switch.

    What would ssh gain us in this circumstance? Nothing but a few more wasted CPU cycles.
  • If this is an internal environment, using ssh in this manner is probably a waste of time and processor resources. Firewalls and private networks are the best way to secure servers in an enterprise environment.

    I mainly work with database servers, which all sit in protected subnets, accessible only to the application servers (brokers, report writers, etc) which need direct access to the database.

    If any application is overhyped on Slashdot, ssh is. Clueless sysadmins writing passwords on notepads, and stacks of DLT tapes left on sysadmins' desks, are a far bigger security menace than hackers sniffing an internal network for passwords.
  • If you read my post, you would have noticed that I specifically mentioned that I was not talking about a university or colocation facility.

    When I refer to a datacenter, I am referring to a facility belonging to a mid-to-large-sized company or government agency.

    #1 Developers do not have access to production environments. Test/development environments use the same sort of network as the production boxes; I neglected to mention this. DBAs need access to systems.

    #2 By 'broker' I am referring to a second-tier application that talks to the clients, processes requests, and accesses the database. The firewalls only enable these servers to connect to the central databases.

    Our firewalls block all traffic, except traffic to specific ports and addresses that we specify.

    #3 You got me on the switch thing. I don't work with managed switches at all and stuck my foot in my mouth. Excuse my ignorance.

  • You're assuming that he allows root SSH logins :)
  • Kerberos provides security in various ways, including authentication, authorization, and encryption. A kerberized telnet is arguably just as secure as ssh (perhaps more so).
  • When I used ktelnet years ago, it only encrypted the authentication sequence. Everything else, including typing su, was in the clear. Don't know if this is still the case....

    I believe this depends on the implementation of ktelnet you have, but a fully compliant Kerberos application can use the Kerberos service to support both authentication and encryption.

  • You're using the wrong technology for the problem. SSH is great for connecting to the occasional remote host (e.g., to your ISP shell account, to your office computer from home and vice versa), but it was never designed to handle this type of situation.

    In this case you really should be using a central authentication method. Kerberos is one well-known example, and there are others as well. In this case you would/could still generate new host keys, but you would only have to update a single server (two, including a backup server).

    The flip side of this problem is that you really don't want to allow just anyone to connect to a server cluster - with SSH you can specify acceptable user keys, but it's even more of a nightmare to propagate this information to each system. With a central server, you can disable a user's account systemwide (or on some defined subnet) with a single update. No worries that the guy who had to be fired left a backdoor SSH keyfile on a server or three.

    Conversion to Kerberos isn't trivial, but it has some additional benefits. E.g., SSH just gives you communications, but Kerberos authentication is also available to properly configured applications (ktelnet, kftp, ksu, plus sudo, cvs, postgres, lprng, pop/imap, etc. Even NFS, on some non-Linux systems.) I've found a Kerberized system *easier* to use because I'm rarely asked for passwords except when logging in, or asking for root privileges. No need to keep track of separate passwords, lest someone trivially break the CVS password and figure out my account's password.
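    If you do keep ssh around on a Kerberized network, newer OpenSSH builds can hand user authentication off to the KDC as well; a rough sshd_config fragment (whether these options exist depends on your version and how it was built):

        # sshd_config fragment (sketch): let the KDC handle user authentication
        KerberosAuthentication yes
        GSSAPIAuthentication yes
        GSSAPICleanupCredentials yes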
  • What's so hard about backing up your data files before you do a reinstall? OpenSSH likes to install them as /usr/local/etc/ssh_host_key, /usr/local/etc/ssh_host_dsa_key, /usr/local/etc/ssh_host_rsa_key, /usr/local/etc/ssh_host_key.pub, /usr/local/etc/ssh_host_dsa_key.pub, and /usr/local/etc/ssh_host_rsa_key.pub. If you back these up before you wipe things and put them back when you're done, it's all better. You won't have these problems anymore. Some Linux distros might want to put them in /etc rather than in /usr/local/etc, but nonetheless you just need to make backups of irreplaceable data before you wipe the drives. It's just not that hard.
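    The whole dance is a couple of tar commands; a sketch, assuming the /usr/local/etc layout and a /backup area that survives the wipe:

        # Before the wipe: stash the host keys somewhere that survives it
        tar -cf /backup/`hostname`-hostkeys.tar /usr/local/etc/ssh_host_*key*

        # After the reinstall: put them back and poke sshd (pid file path varies)
        tar -xf /backup/`hostname`-hostkeys.tar -C /
        kill -HUP `cat /var/run/sshd.pid`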
  • The root problem here is that when you reinstall machines you aren't restoring the original ssh host keys. Why don't you start making backups of those keys and restoring them to the proper places whenever you do a reinstall?

    Secondly, for key distribution, why don't you set up a cronjob that will copy the host key list from a single, secure server every hour or so via scp (a crontab sketch is at the end of this comment)? Then just make sure that server's host key never changes by doing the above.

    Or make a script along the lines of:

    for i in host1 host2 host3; do
        ssh root@$i "scp someserver:/keyslist keyslist"
    done

    and run it every time you need to update all of the key lists.

    All those methods are quite secure, as they all use scp to do the copying and will fail should the host key of the machine somehow change, say because of an impostor. Unless someone manages to break into the machine itself, you're fine.
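    The hourly cron version is a one-liner; a sketch for /etc/crontab (server name and paths are placeholders):

        # Pull the master known_hosts list from the key server once an hour
        0 * * * *  root  scp -q keymaster:/etc/ssh/ssh_known_hosts /etc/ssh/ssh_known_hosts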

  • OK, it's in totally poor taste to reply to one's own post, but I just found this link: storing ssh host keys in dns [globecom.net]

    Do a Google search for ssh key dns and you'll get lots of hits. This is probably the best way to manage large sets of ssh host keys, assuming you have your DNS house in order. Good luck!
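    For the record, the rough shape of the DNS approach, assuming an OpenSSH new enough to understand SSHFP fingerprint records (hostname and key path are placeholders):

        # Print SSHFP resource records for this host's key, ready to paste
        # into the zone file:
        ssh-keygen -r server1.example.com -f /etc/ssh/ssh_host_rsa_key.pub

        # Then on the clients, tell ssh to check DNS for the fingerprint
        # (in ssh_config):
        #   VerifyHostKeyDNS yes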

  • Actually, he saw this bit of the message:
    Or make a script along the lines of:
    for i in host1 host2 host3; do
        ssh root@$i "scp someserver:/keyslist keyslist"
    done
    And correctly assumed that the luser is letting root ssh in. Bad. Very bad.

  • You said it best in your subject: What kind of environment is this?

    ssh is an excellent application. I came from a university environment and started working in a large corporate environment. Sniffers would be easy to install in both. Universities such as mine have ethernet in most classrooms at every desk. My work has ethernet in every cubicle. What was it that you said about people not being able to install sniffers?

    I will admit that you have a valid point about sysadmins writing down passwords, or better yet, everybody accessing certain machines as root. On the other hand, I can only think of one environment where sniffers wouldn't be a concern: a computer room with the entire network contained within the room.

  • by trog ( 6564 ) on Friday April 06, 2001 @07:47PM (#309441)

    Not one to normally flame, but not using ssh in ANY environment because it's inconvenient is just plain stupid. If I ever had an admin working for me who believed this drivel, I'd fire him on the spot; however, I realize security isn't the easiest thing to learn, so I will explain here. Please indulge me for a moment.

    Who can you trust in a datacenter environment (by datacenter, I am referring to a co-lo facility; for most companies, this is a datacenter)? Can you trust:

    The $7/hour security guards that "patrol" the area?

    The script kiddez who happen to land a job in the NOC (Where they study for their MCSE all day, no doubt)?

    The often-incompetent datacenter "engineer" who can't seem to get the router to your cage configured correctly?

    Now, let's take your arguments one at a time:

    90% of servers have 6 or fewer users, who are admins or senior DBAs/developers. Most connectivity is done through client/server connections to databases or other applications.

    Even this statement reflects a poorly designed production area. No developer should have access to the production area. The only people to have access are the release engineer (and only the access he/she needs to get the code out) and the sysadmin. Period. Anything less is just asking for trouble.

    Most end-user types cannot even directly connect to the server subnets. All connections go through a broker of some sort.

    A "broker"? Do you mean a firewall? The packets have to get to the firewall somehow. I'm willing to bet you don't control every hop along the way as well. I'm also willing to bet that your firewall is allowing packets in and out somewhere.

    Besides...we all know that firewalls never get compromised, right?

    It's a pretty new network. Everything is switched. Hard to sniff on a switch.

    It's trivial to sniff a switch, especially if it's a "pretty new" network. Managed switches all have traffic redirection which can be used to direct all traffic to a specific port; a sketch of what that looks like is at the end of this comment. All commercial switches tend towards pathetic security. (There were Cisco advisories on Bugtraq only yesterday.)

    And even in an office environment, you have to deal with misconfigured clients, viruses (this is a very, very big reason to use ssh), malicious employees, socially-engineered "good-intentioned" people, etc., etc.

    Now tell me again why you don't use ssh.
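    As promised above, a sketch of that traffic redirection: a SPAN (mirror) session on a Cisco Catalyst running IOS, with made-up interface numbers.

        ! Copy everything the server port sees to the port the sniffer is on
        monitor session 1 source interface FastEthernet0/1 both
        monitor session 1 destination interface FastEthernet0/24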

  • by nehril ( 115874 ) on Saturday April 07, 2001 @07:20PM (#309442)
    ssh is a communications encryption scheme. Kerberos is an authentication mechanism. They solve totally different problems. If he added Kerberos auth to his network (which may already be the case), he would still run into problems with ssh bitching about unknown keys.

    Backing up and restoring ssh keys is a very good idea, but this still does not address the issue of large scale manageability. What happens when you add a new server? What if you want to revoke a key for security purposes?

    Rolling your own solution is probably the only way. The mass "scp keyfile to all machines" trick is sort of inelegant but effective. Ideally you would have a system that reads key info from DNS. I remember hearing something about extensions to BIND to do something like this, but I don't recall the details. If you do go hacking DNS (and thus also hacking the ssh client) then you have to make sure your DNS is also authenticated properly... bleh it's late and I'm not making any more sense. :)

"Kill the Wabbit, Kill the Wabbit, Kill the Wabbit!" -- Looney Tunes, "What's Opera Doc?" (1957, Chuck Jones)

Working...