
Keeping Passwords Embedded In Code Secure?
JPyObjC Dude asks: "When designing any system that requires automated privileged access to databases or services, developers often rely on hard coding (embedding) passwords within the source code. This is obviously a bad practice as the password is then made available to anybody who has access to the source code (e.g. software source control). Putting the passwords in configuration files is another practice, but it is still quite insecure as cracking hashed passwords from a text file is a trivial exercise. What do you do to manage your application passwords so that your system can run completely automated and yet make it difficult for hackers to get their hands on this precious information?"
Passwords suck (Score:5, Informative)
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Makes little difference from a security standpoint, though. If the attacker can get at the file system, then he can read the private key.
Easy; encrypt the private key (Score:2)
In that case you can do what Apache and OpenSSH do to protect that kind of information:
Encrypt the keys and store them on disk.
When the program starts (maybe it's just a keyholder process), the user is prompted for a passphrase and the key is stored in memory. If the keyholder sees any processes it doesn't like on the machine (like a d
Re: (Score:2)
And thus you're right back to the initial problem: Either you have to have an attended startup (and restart) process, or else you have to store a password somewhere on the system.
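For reference, here is roughly what the attended-startup half of that trade-off looks like — a hypothetical Python sketch, standard library only; the salt file path and iteration count are made-up examples. The operator types a passphrase once, and only the derived key lives in memory.

import getpass
import hashlib

def load_key(salt_path="/etc/myapp/key.salt"):
    # Read a non-secret salt from disk; the passphrase itself is never stored.
    with open(salt_path, "rb") as f:
        salt = f.read()
    passphrase = getpass.getpass("Key passphrase: ")
    # Derive the working key and keep it only in this process's memory.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

The cost, as noted above, is that someone has to be at the console for every start and restart.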
Re: (Score:2)
I think most people are missing the point of this question anyway. It seems as though the OP wants to let applications access services with all the same credentials and keep those credentials from the user; at that point you have already lost, as that's simply impossible (see DRM).
A better way would be to write a (trusted) server that the (untrusted) clients can talk to instead of letting the clients talk directly to the backend database and services with the security problems that creates.
Re: (Score:2)
Well, the OP didn't say that you weren't allowed to request a password at boot.
Agreed. The OP didn't say much at all. However, the common reason that developers want to put passwords in their code is to allow the app to access a resource that requires authentication. The most common example of that is a web server that needs access to a database.
it seems as though the OP wants to let applications access services with all the same credentials and keep those credentials from the user
I don't see that in the original question at all.
A better way would be to write a (trusted) server that the (untrusted) clients can talk to instead of letting the clients talk directly to the backend database and services with the security problems that creates.
I think that is probably the architecture under discussion. The question, then, is how the trusted server obtains the credentials it needs to access the database and services.
Re: (Score:2)
If the machine that the server runs on is trusted then it just reads the config file.
If the machine is under an untrusted user's control then a trusted server must be implemented that enforces the limitations needed for that user, and the credentials for that service can be stored in a config file on the untrusted client.
The only time you would ever be worried about storing
Re: (Score:2)
If the OP is talking about a webserver then the answer is to simply put the credentials in a config file and secure the machine as usual.
Well, that's what you do because you have no other choice, but it doesn't mean it's secure. Any attacker who gets access to the file system then has access to whatever resources the web app uses.
I agree that a given credential should always have the minimum necessary set of permissions, but it's not uncommon, especially for a web app, that the minimum necessary set is full CRUD access to the entire set of tables.
Re: (Score:2)
If the attacker already has a level of access to the system that allows both access to the config file (it might only need to be readable by root) and has network access to the database then you have already lost.
Re: (Score:2)
Well, it's as secure as possible.
If the attacker already has a level of access to the system that allows both access to the config file (it might only need to be readable by root) and has network access to the database then you have already lost.
I would agree that it's as secure as practical... not as secure as possible.
If possible, you should assume that your Internet-facing hosts may be compromised, and try to arrange it so that compromise of those hosts doesn't lead to uncontrolled access to other, more critical resources, such as the database. Which is why it would be nice to avoid putting the DB password in a config file or in the source code. Unfortunately, that goal is mostly incompatible with the goal of supporting unattended restarts.
Re: (Score:3, Insightful)
Re: (Score:2)
Ah, but that's exactly the scenario that people are often trying to defend against when they try to hide or "encrypt" passwords needed by applications.
The point is that it's impossible.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You don't even need the source code (Score:1)
Re:You don't even need the source code (Score:4, Interesting)
As a developer who has hardcoded passwords into applications before, I can safely say that using 'strings' would NOT have worked, as I never actually created a string for such a password -- rather, I would implement a backdoor password as an FSM, with each state having its own separate case code that compares a character in the string entered to a single character from the actual password. Any deviance from the path for the FSM would fall through to the normal password handling facility, using the characters entered so far as the string entered. The passwords in such a case were non-trivial, between 20 and 40 characters, including combinations of letters, numbers, punctuation, and blanks, so the likelihood of stumbling across them accidentally was remote in the extreme. Changing the password was only possible with access to the source code, and was done in a way that was simple to maintain. In the over 10 years of use that these programs received in the companies that they were written for, the security of these hard-coded passwords was never compromised (because of the industry we were in, if it had been we would have heard about it; the way we wrote it, it would have caused a panic).
It's probably not something I'd ever do these days, but back in the 80's and early 90's, it worked very well.
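For illustration only, a toy Python reconstruction of that FSM idea (the phrase, helper name, and structure here are made up, and the thread's warnings about undisclosed backdoors still apply): each state knows only the single character that advances it, so the phrase never exists as one contiguous string for 'strings' to find.

def normal_password_check(entered: str) -> bool:
    """Placeholder for the application's ordinary password handling."""
    return False

# Each state stores only the one character that advances it (toy phrase).
_STEP = {0: "s", 1: "3", 2: "c", 3: "r", 4: "3", 5: "t"}

def check(entered: str) -> bool:
    state = 0
    for ch in entered:
        if state in _STEP and _STEP[state] == ch:
            state += 1
        else:
            # Any deviation falls through to the normal facility, as described.
            return normal_password_check(entered)
    return state == len(_STEP) or normal_password_check(entered)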
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Excuse me?
Whenever I get a piece of software of which I cannot verify the source, I suspect a backdoor password being there. This is basic security and has been documented at least since the fir
Re: (Score:2)
Yeah... and...?
I mean, so what if you suspect a backdoor being there, what do you do about it? Not use the software?
This wasn't an option for the companies who contracted us to write the software for them... and no, we didn't tell them about the backdoor. Neither, however, did we ever actually use it except the one time in the 12 years that the software was being used that it was necessa
Re: (Score:2)
Generally speaking, that is indeed the correct answer.
This wasn't an option for the companies who contracted us to write the software for them...
Sure it was, they could have contracted someone else who gave them the possibility to review the source code.
and no, we didn't tell them about the backdoor. Neither, however, did we ever actually use it except the one time in the 12 years that the software was being u
Re:You don't even need the source code (Score:4, Funny)
2. meeting me in court
So you're really going to spend tens of thousands of dollars to recover non-existent damages to prove a point? The conversation might go something like this:
Judge: I see you're suing for 10 million dollars, but you don't list your damages. How did the defendant's actions hurt your business? Was there a security breach? Did the defendant not meet the terms of the contract?
You: Well, not really. The contract didn't say anything about what I'm suing about. Nobody broke in, and we had a lot of means to prevent it, but someone COULD have broken in. Basically this guy just made me real mad because I didn't agree with his security procedures. Dag-nab-it, the guy slightly increased my risks! We don't have any damages; I just assumed that whenever I don't like something, I sue the pants off them.
Judge: Umm... right. Well, sorry, civil courts operate on damage to one party caused by another. Criminal courts operate where criminal laws have been broken. Since there are no damages you can show, and no laws have been broken, I'm throwing this case out. Didn't your lawyer tell you all this?
You: Only the first 10 lawyers. Then I found this really good one... or at least so I thought at the time. He charged me $20,000 and told me it'd be thrown out at the first hearing. I guess I should have gotten a better lawyer.
Re: (Score:2)
Since the company I am working for, and for which I am responsible for security, works a lot with sensitive information from customers, the risk of losing their trust is very real, even more so if it becomes publicly known that such a backdoor existed. In that case there would be real damage even if no actual security breach ever took place.
Re: (Score:3, Informative)
Re: (Score:2)
No it is not. Giving my company access to the source code can be based on an NDA, and in no way requires you to produce open source software.
WE want to be able to verify that no backdoor exists. Alternatively, we could arrange a guarantee by means of a contract that no such thing exists with a very hefty penalty attached if it turns out otherwise.
That is not, however, what this guy is talking about at all
Re: (Score:2)
Well, you just made it public..
Re: (Score:2)
Yep... some years after the software is no longer used... so it's not an issue. But I can virtually guarantee that nobody on slashdot knows which companies used it or even what software I am talking about. As I said, the number of companies that used the programs was countable on one hand and the other programmer and I personally knew every employee of the companies that contracted us to write software for them, which is why we were able to contain the security risks involved.
Like I said before though,
Re: (Score:2)
I got that part, but the issue here is this:
As I mentioned before, in itself, there are valid reasons to have a backdoor, and as you described the one incident where you made use of it, it doesn't sound like your use of it was invalid at all.
The issue is that by implementing it, and by not informing your customer about it, you exposed them to a security problem that they could not judge, and that is what I take issue with.
Today your use of a
Re: (Score:2)
Re: (Score:2)
Well, I understand your assumption here, but a substantial part, if not the majority, of all security breaches are inside jobs, and not some random network-based hacker. You sure you also
Re: (Score:2)
Monopoly? (Score:2)
I mean, so what if you suspect a backdoor being there, what do you do about it? Not use the software?
Generally speaking, that is indeed the correct answer.
So what do you do when you suspect a backdoor in the software published by a monopoly or by each member of an oligopoly? Do you put your business on hold for 20 years waiting for the patent to run out?
Sure it was, they could have contracted someone else who gave them the possibility to review the source code.
Unless it is not commonplace for the monopoly or among the members of the oligopoly to allow third parties to review the source code.
Re: (Score:2)
Pay someone to write an alternative, and live in a place where software patents are not valid to begin with. And yes, we did the first, and yes, I am living in a place where software patents are not valid.
On top of that, making sure 3rd parties do not get access to data about our customers is actually a legal
Immigration? (Score:2)
Pay someone to write an alternative
An entire operating system, from the ground up?
and live in a place where software patents are not valid to begin with
Which such developed country has a permissive immigration policy?
EU law
Have EU courts routinely dismissed patent infringement cases on grounds that methods of communication involving arguably novel processes of data processing are not valid subject matter for a patent?
Re: (Score:2)
I guess they could have done... but nobody else that worked for any of the companies that contracted us would have had even the slightest clue on how to do that, so they would have had to hire somebody else. If other programmers were going to review our source code (which we would know about, since the code was at a site that we controlled, and there was no remote access to it), it would DEFINITELY
Re: (Score:2)
Whenever I get a piece of software of which I cannot verify the source, I suspect a backdoor password being there.
Do you have source code to all your hardware's firmware and the complete schematics of its design?
Re: (Score:2)
For the BIOS we have the source code, yes. Complete hardware schematics, not of everything to the detail that we would want, but enough to verify its workings. With regards to hardware the requirement is slightly different, however; having the schematics is only a small part of the picture, being allowed to verify that machines are produced in a secure environment and according to the published specs is at leas
Hashed passwords for database access? (Score:5, Funny)
Re: (Score:2)
Re: (Score:3, Insightful)
No answer (Score:2)
That said, salted hashes are pretty tough to crack. Changing the passwords regularly will make it unrealistic for a cracker to obtain the passwords through brute force.
Re:No answer (Score:5, Insightful)
And you know what? That's not secure. But then again, the database it's connecting to should be as firewalled as all get-out, and even if it's NOT firewalled, it should have host-based authentication so that you can only access it with that password from the appropriate machine (your web server). At that point, if someone can hook into your LAN to sniff traffic or spoof things, you're probably in deep trouble anyway - but perhaps you could configure the database server to only accept connections over a VPN of some sort with appropriate authentication certificates.
Re:No answer (Score:4, Interesting)
You can't (Score:2)
Public-key crypto? (Score:3, Insightful)
I believe public-key cryptography could do this. Encode the public key (several kilobits, if you're paranoid) in the source, and have the program use it to authenticate the secret key given by the user. Publish the source code on YouTube for all the good it will do an adversary, right?
Re: (Score:3, Informative)
In short, if a cracker has full access to a program or system and the system has access to the passwords (even if it does some fiddling around before revealing the passwords), then the cracker has full access to the passwords. There's no way to protect against that except by not allowing any access to the passwords (by ju
Re: (Score:2)
Re: (Score:1)
What the user appears to be trying to accomplish is allowing db access without querying the user for a password. To do this, he believes he needs to embed the authentication credentials in the application or its configuration files. To that end, he's asking how Slashdot folk do this securely.
If it's assumed that a person using the software is authorized to access the DB, because the person has access t
The question is based on a false premise (Score:2)
This simply isn't true. If salting is used (which is quite commonplace these days), it's pretty much going to be impossible to recover the password from the hash.
Re: (Score:3, Insightful)
The problem with this is: how does the program get the password it needs? If it's encrypted with a salt... well, that's one way, so the program would have to do a brute force every time it wanted to use that password.
There's little point to encrypting a locally stored password, as the decryption technique must be relatively simple to allow the program to access it. The idea is to secure everything around it, including the system that is being connected to. Use host based authentication, firewalls, etc.
Re: (Score:2)
Only by not knowing the salt (i.e. any password you hash isn't going to match the stored hash unless it happens to be the unknown salt+password) would you be trying the harder task of reversing hash(?????password) into salt+password.
Re: (Score:2)
I believe a random salt is just meant to make dictionary attacks unattractive, to remove the danger posed by (already existing) "md5-hash -> plain text" libraries.
You also can't tell by simply looking at a list of salted hashes that multiple people use the same password.
Real world example(s):
while deve
Re: (Score:2)
The point of salting isn't to protect an individual password. It can be as easily brute-forced/dictionaried as anything. As I understand it, the point of a salt is that t
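To make the salting discussion concrete, here is a minimal Python sketch (illustrative only, not any poster's actual scheme): each password gets its own random salt, so precomputed lookup tables are useless and two users with the same password produce different hashes.

import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed lookup tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, expected)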
Kerberos (Score:4, Informative)
Re: (Score:2)
Kerberos was built for just this situation. Read up on it. I think it's even available as Active Directory for MS.
You're right that MS provides a bastardized version of Kerberos, but wrong that it helps.
In order to get an authentication ticket from the ticket-granting server, you have to authenticate to the ticket-granting server. If the machine can start up completely unattended, that means it has the Kerberos authentication credentials stored on disk somewhere, which means the attacker can get them, and can then authenticate to the ticket-granting server and get whatever authentication tokens he needs.
Re: (Score:2)
But as you've indicated, for other situations this won't help.
Re: (Score:2)
Under exactly the right circumstances (i.e. all of your userbase always logs into the domain before running the database client app in question), this pushes the authentication problem to exactly where he wants it. Unfortunately, the original poster hasn't given nearly enough information to tell if Kerberos/AD/any-other-SSO will help his situation.
But as you've indicated, for other situations this won't help.
Yeah, it seemed to me he was talking about completely unattended startup of a server that requires access to a database (or whatever).
Permissions (Score:2)
Re: (Score:2)
It gets much more fun.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
yourserver.com/show_page.php?page=../../../../dat
The permissions will do nothing to secure the config in this case.
Assume they know the password (Score:2, Insightful)
Wrong Question (Score:5, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Goatse?
Well... (Score:2)
Re: (Score:2)
Re: (Score:2)
You must trust root (Score:1)
The only other way to do this would be to have your app retrieve the key from a trusted remote location via SSL, then use it on the remote app... which is sounding more and more like a Kerberos or mutual SSL key thing anyway.
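A rough sketch of that "fetch the key from a trusted host over SSL" idea (everything here is hypothetical: the endpoint URL, the CA bundle path, and the JSON shape are made up for illustration):

import json
import ssl
import urllib.request

# Pin trust to an internal CA and pull the key at startup; it never touches
# the local disk, though it still ends up in this process's memory.
ctx = ssl.create_default_context(cafile="/etc/myapp/internal-ca.pem")
with urllib.request.urlopen("https://secrets.example.internal/app-key",
                            context=ctx) as resp:
    key = json.load(resp)["key"]

Of course, the client still has to authenticate itself to that key server somehow, which is the same chicken-and-egg problem discussed elsewhere in the thread.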
Can't be done, no way, no how. (Score:5, Informative)
First, let me dispose of one issue:
It's much, much worse than that, because the password is also available to anybody who has access to the binary. "man strings".
Others have suggested various options, but absolutely none of them work.
The bottom line is: If the machine has all of the information needed to perform the authentication without human intervention, then an attacker who gains control of that machine has all of the information needed to perform the authentication. Period. No getting around it. The best you can do is limit the damage in the case where the attacker has only partial access.
What is that best? For a network-accessible machine, do the following:
That's a lot of work, and it's still not completely secure. Luckily, very little needs even that level of security. Oh, and there aren't any OSes available that make good use of a TPM yet, so it's not really possible.
For most systems, what I'd really recommend is: Put the auth credentials in plaintext in a config file and limit access to that file to the bare minimum. If you have Mandatory Access Controls (e.g. SELinux), configure them to allow only the server process to read that file. Then, lock the whole system down as tightly as possible (within existing constraints). Ensure that a bare minimum number of people have logins on the machine, and that they all have minimum permissions, firewall it as completely as possible, and keep it up to date on security patches. Finally, put it in a locked room and tightly control physical access to it.
Of course, even this reduced-security approach is too onerous in many cases, so you have to make compromises. That's where a good understanding of security and plenty of hard thinking about what compromises can be made come in.
There ain't no silver bullet.
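As a small illustration of that config-file recommendation (hypothetical path and section names; Python standard library), the server can at least refuse to start if the file is readable by anyone but its owner, much as OpenSSH refuses overly permissive key files:

import configparser
import os
import stat

CONF = "/etc/myapp/db.conf"  # hypothetical location, owned by the service user

mode = os.stat(CONF).st_mode
if mode & (stat.S_IRWXG | stat.S_IRWXO):
    raise SystemExit(f"{CONF} must not be readable by group or others")

cfg = configparser.ConfigParser()
cfg.read(CONF)
db_user = cfg["database"]["user"]
db_password = cfg["database"]["password"]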
Re:Can't be done, no way, no how. (Score:4, Informative)
Responding to myself... Uh oh.
It occurs to me that I may be answering the wrong question. If the assumption is that the attacker won't have access to the server, but may have access to the development team's source code, then the answer is simple: put the password in a config file that the developers don't have access to.
Re: (Score:2)
Re: (Score:2)
A very good summary of what I found out myself.
I have the same problem, and what I did was just use no password at all, but create different roles for the system.
Our programs have only a certain role in which they can insert or update only certain parts of the database. Really sensitive tasks must always be done by an operator, who has to log in manually.
Unfortunately, we are using MySQL, which is not as rigorous. For update actions the restricted role must also have query capabilities.
I think that by u
Re: (Score:2)
Re: (Score:2, Insightful)
Where I work, we have a product that needs to store a shared encryption key for communications. The interaction with customers, QA, and marketing went like this:
Them: OMG, the password is there in plain text
Us: The password is in a file readable by root only, as is the install directory. If you can read it, you already pwn the box
Them: OMG, the password is there in plain text
Us: The product has to run unattended as root. There's nothing sensible we can do about it.
Them: OMG, the password is there in plain text
End result: we changed the program to encrypt the password using a fixed key. Customers, QA, and marketing finally shut the hell up.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
What I do... (Score:2)
It's all about managing your risk. This is what I do:
Is the system 100% secure? No. Is the system secure enough? Yes! The key is risk management; the probability of our system being co
Re: (Score:2)
Re: (Score:2)
That's what I do; it's an automated system.
A simple solution. (Score:2)
I see a lot of elaborate answers, but we all seem to be forgetting something obvious. When the service comes up, have it prompt an administrator for the password, then store it in memory. Ultimately this is only obfuscation, but passwords get stored in memory all the time and I think the rate of compromise remains fairly low. At any rate, it is a lot less likely an attacker will find it there than in a plaintext file on the disk. Apache HTTPD and all the MTA services I use do this when using
Re: (Score:2)
Eventually you seem to have to trust root and file permissions so that the programs and config files can only be accessed by those you trust to do so,
Downtime requires attendance. (Score:2)
To complement the comments made by the first response, there are only two situations where you need an administrator to supply the password: once when the system is first brought online, and then every time afterwards that the system experiences a critical fault or scheduled maintenance that requires services to be restarted. In both cases, there has to be staff available. Especially if a system goes down (which it should not typically do), there is likely a problem that demands attention. Otherwise, under no
Some possibilities (Score:2)
smart cards (Score:2)
End-to-End Authentication (Score:2)
Kerberos provides a great mechanism for this. Using pkinit, you can use various credential types. Or, stick to the basics and use
Truly no easy answer (Score:2)
Q. Your coders are not to be trusted
A. Put a file containing a security token (using the generic term token here, depending on what you use - certificate, or others). Open the file, read the token, send it to your server
A2. Use SSL tunneling, using aforementioned certificate, add another file for server details
A3. Create a "mirror database" with all important information replaced
Encrypted File System and other tricks (Score:3, Interesting)
Encrypted file systems have a similar problem. They need to decrypt the filesystem for authorized boots or mounts, but need to stay encrypted otherwise. One common trick here is to only make the decryption key available once, at start up, after which it is put into memory, preferably with a small amount of obfuscation to slow down memory walkers. You could then use something like FUSE [freshmeat.net] to mount the encrypted filesystem with your plaintext password.
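A tiny sketch of that keep-the-key-in-memory-with-light-obfuscation idea (hypothetical Python; this only slows down a casual memory walker, it is not real protection):

import os

class MaskedKey:
    """Hold a secret XOR-masked across two buffers so the plaintext key only
    exists briefly while it is actually being used (obfuscation, not security)."""

    def __init__(self, key: bytes):
        self._mask = os.urandom(len(key))
        self._masked = bytes(k ^ m for k, m in zip(key, self._mask))

    def reveal(self) -> bytes:
        # Recombine on demand; callers should drop the result promptly.
        return bytes(k ^ m for k, m in zip(self._masked, self._mask))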
As other folks have wisely pointed out though, the best posture is to use mandatory access control and restrict access to the configuration file. If you have the privileges, another good practice involves removing all compilers from the machine, firewalling all FTP traffic in or out, firewalling egress (outbound) HTTP traffic (pull in files to process), restricting SSH traffic to pre-defined nodes, and enforcing that with a firewall ruleset. Preferably, you'd make all the firewall stuff occur on a separate box. What this does is restrict what tools will be available to an attacker. You can also remove fun programs like strings, ldd, od, *hexedit, and so on. "But I need to modify these tools!" you say. Leave SVN or CVS clients on the node, check your changes into SVN/CVS on your test bed machine, and then just check out the latest stable branch on your exposed machine. Then you get good protection and good configuration management all in one swoop.
Other tricks involve establishing a proxy process or strictly limiting what can be done with the compromised username/password. A proxy process might be a setuid C program that only does one thing and accepts no user input. If you must accept user input, be extremely strict (use sscanf on all inputs and limit the size of the buffer accepted) and then have an experienced C developer review your code for improper bounds handling. This proxy process might do things like move files to a read-only directory structure (static web pages in a DMZ), or it might be a CGI script that updates rows in a database. We've actually used the CGI script idea because it a) is a cross-platform way of talking to the database, b) is a good decoupler of otherwise complex code, and c) strongly limits what can be done as an attack. Be careful of the venerable SQL injection attack there, though.
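To illustrate how narrow such a proxy can be (hypothetical names and schema; sqlite3 stands in for whatever backend is really in use), it accepts exactly one strictly validated field and runs exactly one parameterized statement:

import re
import sqlite3

_ORDER_ID = re.compile(r"[0-9]{1,10}")

def mark_shipped(order_id: str) -> None:
    if not _ORDER_ID.fullmatch(order_id):
        raise ValueError("rejected input")  # be strict about what you accept
    conn = sqlite3.connect("/var/lib/myapp/orders.db")
    try:
        # Parameterized statement: no string concatenation, no SQL injection.
        conn.execute("UPDATE orders SET shipped = 1 WHERE id = ?", (order_id,))
        conn.commit()
    finally:
        conn.close()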
A good use of a proxy process might be the transparent mounting/unmounting of an external USB drive, perhaps against a hidden partition on the stick. The drive would have your key. Sure it's obfuscation, but it's complicated enough to decode that it will slow somebody down for a while.
The last trick is to limit what can be accomplished with the username/password that is obtained. We have some processes whose job is to inject data into the database for the backend to all of our tools. That database user is limited to select, insert, and update operations. With Oracle, I could even restrict which specific tables get which privileges.
The best thing to do is to write a document that some folks call the Security Design Document to define your security posture, what you are known to protect against, and where you are vulnerable. Assign a risk mitigation matrix (vulnerability, threat, countermeasure, residual risk) row to each vulnerability. Be honest and then let your manager understand the position you've left them in and try to assign a cost to each countermeasure/mitigation so they can make a decision on what to close or leave open.
You are always going to have vulnerabilities. Everyone does, even the best systems. What makes the difference is those who analyze, understand, and counter that risk in a way that is appropriate to the situation. Direct exposure to the Internet is a situation that should warrant better risk analysis, but rarely does.
The mysqlinfo file (Score:2)
http://www.suso.org/docs/databases/mysqlinfo.sdf [suso.org]
http://www.suso.org/docs/databases/saferdbpasswords.sdf [suso.org]
I've thought about trying to spread the word about it and even making an RFC, but I don't have the time for that.
No really good solutions (Score:3, Interesting)
1) You can store the credentials somewhere on machine A.
2) The service (typically a Web server) on machine A can run with an account that either has privileges to access the DB or has privileges to access credentials stored somewhere else to access the DB.
If an intruder gets access to machine A and gets root / admin privileges - then they can gain access to the DB. Obviously, your first priority is to make sure that this does not happen! Use good firewalls and firewall rules. Make proper use of a DMZ. Check your application for security problems (buffer overflow, SQL injection, etc). Keep up to date on patches. Your second line of defense is to:
1) Try to ensure that an intruder is detected.
2) Make them work for it (access to DB)
3) Have a good audit trail
4) Monitor your network and application
I'll address item #2. Assume that you put the credentials in the configuration file or a separate file on machine A. You should encrypt the credentials (using an encryption application NOT kept on machine A). The key can be hard-coded in the (web) application. If you want, you can use layers of keys (encrypted key b decodes the key in the config file, encrypted key c decodes key b, encrypted key d
Default Password? (Score:2)
This works well (all one line, of course):
PASSWORD=`head -c 8
Stick it in a configuration file with restricted permissions and mail the location of the file to root so that the admin can change it.
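Since the one-liner above got cut off, here is a rough Python equivalent of the idea (hypothetical file path; generate a random value and drop it into an owner-only file):

import os
import secrets
import string

alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(16))

# Create the config file readable and writable by its owner only (mode 0600).
fd = os.open("/etc/myapp/db.conf", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("DB_PASSWORD=%s\n" % password)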
Similar problem, no real solution (Score:2)
I'm working on software with a similar problem--I want to store the SQL database username and password in some halfway-to-secure fashion, because leaving it in cleartext in the PHP is just asking for the database to be compromised. So, the only alternatives are to encrypt it within the code or to put it in an external file. The external file makes it easier to change the username and password after the fact, so that's where I'm going.
Problem here is that the contents of the file need to be encrypted in so