Maintaining SSH Host Keys Across a Large Network?
skullY asks: "Like most organizations, we have a large number of servers, all of which have unique host keys. Our big problem right now is that the vast majority of our servers are in a cluster, and we have one or two machines that get reinstalled every week. Every time this happens, a new SSH host key is generated, and all our existing machines bitch when they have to connect. I realize that the best approach is to install system-wide ssh_known_hosts files and keep them up to date, but in practice this isn't so easy. So my question is: how does everyone else keep track of large numbers of SSH host keys that continually change, and propagate those changes to machines on the network?"
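One low-tech way to rebuild such a system-wide file is ssh-keyscan, which ships with most SSH distributions. A minimal sketch (the host names and output path here are placeholders, not from the question):

```shell
#!/bin/sh
# Sketch: harvest current host keys into a candidate known-hosts file.
# host1..host3 are placeholder names. ssh-keyscan trusts whatever
# answers, so run it from a clean vantage point (e.g. right after a
# reinstall), and eyeball the diff before installing the result.
ssh-keyscan -t rsa host1 host2 host3 > /tmp/ssh_known_hosts.new 2>/dev/null || true
# diff /etc/ssh/ssh_known_hosts /tmp/ssh_known_hosts.new
```

Note that keyscan is trust-on-first-use: it only helps if you scan before an impostor can answer.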
Re:Idea (Score:1)
gnu q. (Score:1)
NIS map? (Score:1)
What about setting up an NIS map for the known_hosts file?
Idea (Score:1)
You just answered your own question: stop reinstalling everything every week, for Christ's sake.
You're tired of Slashdot ads? Get junkbuster [junkbusters.com] now!
Re:Idea (Score:1)
Re:Wrong technology for the problem (Score:1)
Some '3733t h4x0r' just needs to steal the tape or compromise the server where the keys are contained and every advantage of ssh keys is eliminated. Backup tapes violate file permissions at a distance. A single point where all keyfiles are stored is a boon to internal hackers.
Re:What kind of environment is this? (Score:1)
In a university or colocation facility, yes you NEED to be running ssh or pptp or something similar.
But what do you gain in the datacenter or other internal corporate environment? If your professional staff is going to break your machines, they'll be able to do so even with ssh present.
Where I work, we have a cluster of IBM RS/6000 machines, a couple dozen Suns and a bunch of Digital AlphaServers doing a wide variety of data processing. We do not use ssh at all for a couple of reasons.
1. 90% of servers have six or fewer users, who are admins or senior DBAs/developers. Most connectivity is done through client/server connections to databases or other applications.
2. Most end-user types cannot even directly connect to the server subnets. All connections go through a broker of some sort.
3. It's a pretty new network. Everything is switched. Hard to sniff on a switch.
What would ssh gain us in this circumstance? Nothing, except wasting a few more CPU cycles.
What kind of environment is this? (Score:1)
I mainly work with database servers, which all sit in protected subnets, accessible only to the application servers (brokers, report writers, etc) which need direct access to the database.
If any application is overhyped on Slashdot, ssh is. Clueless sysadmins writing passwords on notepads and stacks of DLT tapes on sysadmin's desks are a far bigger security menace than hackers sniffing an internal network for passwords.
Re:What kind of environment is this? (Score:1)
When I refer to a datacenter, I am referring to a facility belonging to a mid- to large-sized company or government agency.
#1 Developers do not have access to production environments. Test/development environments use the same sort of network as the production boxes; I neglected to mention this. DBAs need access to systems.
#2 By 'broker' I am referring to a second tier application which talks to the clients, processes requests, and accesses the database. The firewalls only enable these servers to connect to the central databases.
Our firewalls block all traffic, except traffic to specific ports and addresses that we specify.
#3 You got me on the switch thing. I don't work with managed switches at all and stuck my foot in my mouth. Excuse my ignorance.
Re:Root problem here (Score:1)
Re:Wrong technology for the problem (Score:2)
Re:Wrong technology for the problem (Score:2)
When I used ktelnet years ago, it only encrypted the authentication sequence. Everything else, including typing su, was in the clear. Don't know if this is still the case....
I believe this depends on the implementation of ktelnet you have, but a fully compliant Kerberos application can use the Kerberos service to support both authentication and encryption.
Wrong technology for the problem (Score:2)
In this case you really should be using a central authentication method. Kerberos is one well-known example, and there are others as well. In this case you would/could still generate new host keys, but you would only have to update a single server (two, including a backup server).
The flip side of this problem is that you really don't want to allow just anyone to connect to a server cluster - with SSH you can specify acceptable user keys, but it's even more of a nightmare to propagate this information to each system. With a central server, you can disable a user's account systemwide (or on some defined subnet) with a single update. No worries that the guy who had to be fired left a backdoor SSH keyfile on a server or three.
Conversion to Kerberos isn't trivial, but it has some additional benefits. E.g., SSH just secures the communication channel, but Kerberos authentication is also available to properly configured applications (ktelnet, kftp, ksu, plus sudo, cvs, postgres, lprng, pop/imap, etc. Even NFS, on some non-Linux systems.) I've found a Kerberized system *easier* to use because I'm rarely asked for passwords except when logging in or asking for root privileges. And there's no need to keep track of separate passwords, so nobody can trivially break the CVS password and thereby learn my account's password.
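For instance, with an MIT-style KDC, cutting off a departed user everywhere is one admin command rather than a hunt through every server's authorized keys. A sketch (the principal name and realm are made up, and this obviously assumes you have admin rights on the KDC):

```shell
# MIT Kerberos sketch -- "jdoe@EXAMPLE.COM" is a hypothetical principal.
# -allow_tix stops the KDC from issuing tickets for the account, which
# locks it out of every kerberized service at once.
kadmin -q "modify_principal -allow_tix jdoe@EXAMPLE.COM"
```

This is a fragment, not a runnable script: it needs a live KDC and admin credentials.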
Backups? (Score:2)
Root problem here (Score:2)
The root problem here is that when you reinstall machines you aren't restoring the original ssh host keys. Why don't you start making backups of those keys and restoring them to the proper places whenever you do a reinstall?
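The save/restore cycle is simple enough to script. Here's a sketch that plays the cycle out in a scratch directory so it's safe to run anywhere; on a real box the key directory would be /etc/ssh (assuming an OpenSSH-style layout) and the backup would live somewhere that survives the reinstall:

```shell
#!/bin/sh
set -e
# Sketch of the host-key backup/restore cycle, in a scratch dir.
# On a real machine KEYDIR would be /etc/ssh (assumption: OpenSSH
# layout) and BACKUP would sit on a server or tape that survives
# the reinstall.
KEYDIR=/tmp/hostkey-demo/etc/ssh
BACKUP=/tmp/hostkey-demo/hostkeys.tar
rm -rf /tmp/hostkey-demo
mkdir -p "$KEYDIR"
echo "fake-rsa-key" > "$KEYDIR/ssh_host_rsa_key"   # stand-in for a real key

# 1. Before the reinstall, archive the host keys:
(cd "$KEYDIR" && tar cf "$BACKUP" ssh_host_*)

# 2. The reinstall wipes them (simulated here):
rm "$KEYDIR/ssh_host_rsa_key"

# 3. Afterwards, restore the archive and HUP sshd:
(cd "$KEYDIR" && tar xf "$BACKUP")
# kill -HUP `cat /var/run/sshd.pid`   # on the real box
```

With the old keys back in place, existing clients never notice the machine was rebuilt.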
Secondly, for key distribution, why not set up a cronjob that copies the host key list from a single, secure server every hour or so via scp? Then just make sure that server's host key never changes, by doing the above.
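The hourly pull could be as small as a crontab entry; a sketch, where "keymaster" and the paths are placeholder names, not anything from this post:

```shell
# /etc/crontab sketch -- pull the master known-hosts list hourly via scp.
# "keymaster" is a hypothetical central server whose own host key never
# changes (because its keys are backed up and restored as above).
0 * * * *  root  scp -q keymaster:/etc/ssh/ssh_known_hosts /etc/ssh/ssh_known_hosts
```

This is a config fragment; root on each client also needs non-interactive scp access to keymaster.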
Or make a script along the lines of:
for i in host1 host2 host3; do
    ssh root@$i "scp someserver:/keyslist keyslist"
done
and run it every time you need to update all of the key lists.
All those methods are quite secure, as they all use scp to do the copying and will fail should the host key of the machine somehow change, say because of an impostor. Unless someone manages to break into the machine itself, you're fine.
Re:Wrong technology for the problem (Score:2)
do a google search for ssh key dns and you'll get lots of hits. This is probably the best way to manage large sets of ssh hostkeys, assuming you have your DNS house in order. good luck!
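One concrete form this took is the SSHFP DNS record, which ssh-keygen can emit for you. A sketch, assuming an OpenSSH new enough to support SSHFP and a DNS zone you actually control and can secure (the hostname and key path below are made up):

```shell
#!/bin/sh
set -e
# Sketch: publish a host key fingerprint as a DNS SSHFP record.
# We generate a throwaway key just to show the record format; on a
# real server you'd point -f at /etc/ssh/ssh_host_rsa_key.pub.
rm -f /tmp/demo_host_key /tmp/demo_host_key.pub
ssh-keygen -q -t rsa -N "" -f /tmp/demo_host_key

# -r prints zone-file-ready SSHFP records for the given hostname:
ssh-keygen -r myhost.example.com -f /tmp/demo_host_key.pub
# output looks like:  myhost.example.com IN SSHFP 1 1 <hex fingerprint>

# Clients then opt in via ssh_config:
#   VerifyHostKeyDNS yes
```

As the parent says, this only helps if your DNS house is in order: without DNSSEC (or equivalent), you've just moved the trust problem into DNS.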
Re:Root problem here (Score:2)
Re:What kind of environment is this? (Score:2)
ssh is an excellent application. I came from a university environment and started working in a large corporate environment. Sniffers would be easy to install in both. Universities such as mine have ethernet in most classrooms at every desk. My work has ethernet in every cubicle. What was it you said about people not being able to install sniffers?
I will admit that you have a valid point about sysadmins writing down passwords, or better yet, everybody accessing certain machines as root. On the other hand, I can only think of one environment where sniffers wouldn't be a concern: a computer room with the entire network contained in the room.
Re:What kind of environment is this? (Score:3)
Not one to normally flame, but not using ssh in ANY environment because it's inconvenient is just plain stupid. If I ever had an admin working for me who believed this drivel, I'd fire him on the spot; however, I realize security isn't the easiest thing to learn, so I will explain here. Please indulge me for a moment.
Who can you trust in a datacenter environment (by datacenter, I am referring to a co-lo facility; for most companies, this is a datacenter)? Can you trust:
The $7/hour security guards that "patrol" the area?
The script kiddez who happen to land a job in the NOC (where they study for their MCSE all day, no doubt)?
The often-incompetent datacenter "engineer" who can't seem to get the router to your cage configured correctly?
Now, let's take your arguments one at a time:
90% of servers have six or fewer users, who are admins or senior DBAs/developers. Most connectivity is done through client/server connections to databases or other applications.
Even this statement reflects a poorly designed production area. No developer should have access to the production area. The only people to have access are the release engineer (and only the access he/she needs to get the code out) and the sysadmin. Period. Anything less is just asking for trouble.
Most end-user types cannot even directly connect to the server subnets. All connections go through a broker of some sort.
A "broker"? Do you mean a firewall? The packets have to get to the firewall somehow. I'm willing to bet you don't control every hop along the way as well. I'm also willing to bet that your firewall is allowing packets in and out somewhere.
Besides...we all know that firewalls never get compromised, right?
It's a pretty new network. Everything is switched. Hard to sniff on a switch.
It's trivial to sniff a switch, especially if it's a "pretty new" network. Managed switches all have traffic redirection which can be used to direct all traffic to a specific port. All commercial switches tend towards pathetic security. (There were Cisco advisories on Bugtraq only yesterday).
And even in an office environment, you have to deal with misconfigured clients, viruses (this is a very, very big reason to use ssh), malicious employees, socially engineered "good-intentioned" people, etc., etc.
Now tell me again why you don't use ssh.
Re:Wrong technology for the problem (Score:3)
Backing up and restoring ssh keys is a very good idea, but this still does not address the issue of large-scale manageability. What happens when you add a new server? What if you want to revoke a key for security purposes?
Rolling your own solution is probably the only way. The mass "scp keyfile to all machines" trick is sort of inelegant but effective. Ideally you would have a system that reads key info from DNS. I remember hearing something about extensions to BIND to do something like this, but I don't recall the details. If you do go hacking DNS (and thus also hacking the ssh client), then you have to make sure your DNS is also authenticated properly... bleh, it's late and I'm not making any more sense. :)