Do You Code Sign?

Saqib Ali asks: "I am a regular reader of Bruce Schneier's blog, articles, and books, and I really like what he writes. However, I recently read his book Secrets and Lies, and I think he does some injustice to the security provided by code signing. On page 163, he states flatly: 'Code signing, as it is currently done, sucks.' Even though I think code signing has its flaws, it does provide a fairly good mechanism for increasing security in an organization." What are your thoughts on the current methods of code signing in existence today? If you feel like Bruce Schneier, how would you fix it? If you feel like Saqib Ali, what have you signed and how well has it worked?
"The following are the reasons that he (Bruce Schneier) gives:

Bruce's Argument #1) Users have no idea how to decide if a particular signer is trusted or not.

My comments: True. However, in an organization it is the job of the IT/security department to make that determination; it shouldn't be left up to users. The IT department should know not to trust "Snake Oil Corp.", whereas anything from "Citrix Corp" should be fairly safe. Moreover, Windows XP SP2 provides a mechanism to create a whitelist of trusted signers and reject everything else. This is a very powerful security mechanism, and it greatly increases security in a corporate environment if the workstations are properly configured. Having said that, this feature may not be that useful for home users, who cannot tell the difference between Snake Oil and Citrix Corp.
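
For illustration, here is roughly what staging such a whitelist looks like by hand; a minimal sketch, assuming an administrator who has already exported the publisher's certificate to a file (the file name is made up, and Group Policy is the usual way to deploy this at scale):

    rem Add a publisher's code-signing certificate to the Trusted Publishers
    rem store, so signed code from that publisher is accepted without prompting.
    certutil -addstore TrustedPublisher citrix-publisher.cer

    rem List the store to confirm the certificate landed where expected.
    certutil -store TrustedPublisher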

Bruce's Argument #2) Just because a component is signed doesn't mean that it is safe.

My Comments: I fully agree with this. However, code signing was never intended for this purpose. Code signing was designed to prove the authenticity and integrity of the code; it was never designed to certify that the code is also securely written.
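
To make concrete what "authenticity and integrity" buys you, here is a minimal sketch with GPG, assuming the recipient already has the signer's public key in their keyring (file names are illustrative):

    # Signer: produce a detached signature over the exact bytes of the release.
    gpg --armor --detach-sign release.tar.gz    # writes release.tar.gz.asc

    # Recipient: verification fails if a single byte of the archive changed,
    # or if the signature was made by a different key. Note that it says
    # nothing about whether the code inside is securely written.
    gpg --verify release.tar.gz.asc release.tar.gz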

Bruce's Argument #3) Just because two components are individually signed does not mean that using them together is safe; lots of accidental harmful interactions can be exploited.

My comment: Again, code signing was never designed to accomplish this.

Bruce's Argument #4) "Safe" is not an all-or-nothing thing; there are degrees of safety.

My comment: I agree with this statement.

Bruce's Argument #5) The evidence of attack (the signature on the code) is stored on the computer under attack, which makes it mostly useless: the attacker could delete or modify the signature during the attack, or simply reformat the drive where it is stored.

My comments: I am not sure what this statement means. I think this type of attack is outside the realm of code signing. It is like saying host-based IDS or anti-virus are useless because, once you can compromise the system, you can turn them off.

I would really appreciate any comments/thoughts/feedback on Bruce's arguments above and my commentary. I am planning to give a short talk about the benefits of code signing, so any feedback will really help me."
  • by DeadSea ( 69598 ) * on Wednesday August 31, 2005 @07:26PM (#13449955) Homepage Journal
    The best example of why code signing as it is currently implemented is broken is Windows Update. During the Windows Update process you are asked to accept signed code, and you may check "Always trust code from Microsoft". In the context of Windows Update, that sounds perfectly legit to most users. They want to update their computers. They don't want to be bothered by the dialog again to do so in the future.

    By agreeing to always trust Microsoft you are agreeing to several things you may not realize:

    1. You are trusting all code by Microsoft, not just Windows Update (obvious to most people)
    2. You are trusting Microsoft code that folks other than Microsoft give you to run.

    The second one is the kicker. If there is a bug in some signed Microsoft code that allows JavaScript to call it and write to any file, then anybody can give you that signed code plus some JavaScript and take over your computer. This will be done without any further notification to you as the end user.

    You are trusting Microsoft to:

    1. Write perfect code
    2. Envision every possible use of code they write

    Even if you believe that code can be bug-free, there is no way anybody who writes code really locks it down so that it can't be used for anything other than what it was intended for. There was a security vulnerability that took advantage of just this: a bug in some signed Microsoft code. I'm not sure how it was fixed.


  • Bruce is right (Score:4, Insightful)

    by Anonymous Coward on Wednesday August 31, 2005 @07:33PM (#13450011)
    Bruce is right. You mention that code signing is not designed to handle problems of security or safety. Well, what good is that? The primary reason you want to know who wrote the code is that you trust some organizations to write safe code. Yet a restricted security model (sandbox, etc.) would give you a greater level of security. It's nice to know that Friendly Company X put their seal of approval on some flunky's ActiveX, but it's much nicer to know that the system is restricting system calls and network access.
  • by imac.usr ( 58845 ) on Wednesday August 31, 2005 @07:35PM (#13450029) Homepage
    I recently installed Fedora Core 4, and after setting it all up I ran up2date and noted that it's set to require GPG signatures by default (I imported the key as well). Upon running up2date, though, practically every package it found brought up an error message stating that it couldn't recognize the signature, and asking if I wanted to install the package anyway. After about ten packages, I said "fuck it" and turned off GPG signing. (I had to do so by editing up2date's config file manually, since it only runs through its config process once, it seems.)

    If Red Hat can't be bothered to sign any of its updates (even the kernel, for pete's sake), then why as a user should I care one way or another?

  • by Homology ( 639438 ) on Wednesday August 31, 2005 @07:37PM (#13450040)
    During the windows update process you are asked to accept signed code and you may "Always trust code from Microsoft".

    For some reason there is no option to never trust certain certificates.

  • by owlstead ( 636356 ) on Wednesday August 31, 2005 @07:41PM (#13450072)
    You are trusting Microsoft code that folks other than Microsoft give you to run.

    I know this is true, and bugs have been found in libraries. What was even more wrong is that the same key was used for multiple libraries, making it hard for Microsoft to put the key out of its misery (i.e., put it on a Certificate Revocation List).

    This is an example where the technique is not so much wrong, but the system in which the technique is used is wrong (one of Bruce's main points). I do not want to give any website the ability to upload and install code on my computer, even if it is signed by someone I trust.

    In principle, the idea that MS signs code for automatic updates of their own software is great; it takes out the man-in-the-middle attack (taking over the update site, attacks on proxies, etc.). Keep the code signing, but take the snags out.
  • Re:Bruce is right (Score:5, Insightful)

    by Anonymous Coward on Wednesday August 31, 2005 @07:44PM (#13450090)
    This isn't "insightful".

    You need both a sandbox and authentication of the provider. I can give you code for your sandbox that purports to be a login client for your bank, you enter your creds and I can send them to another URL or do other nasty things.

    Code signing is designed to handle problems of the type "is this software from my bank really from my bank?" It's the same problem an SSL certificate solves. You can have a perfectly valid SSL certificate, but if it claims to be from your bank and really isn't, your data could go anywhere.

    In other news, seatbelts proven not to prevent auto-accidents!
  • by ad0gg ( 594412 ) on Wednesday August 31, 2005 @07:47PM (#13450112)
    When you sign an ActiveX control you can choose not to allow scripting calls to it. XP is pretty weak when it comes to security; Server 2003 is a lot better, since it actually forces you to whitelist a site before JavaScript and ActiveX run on it. The problem with ActiveX is that you can't fine-tune security: it's either all or nothing. Java code signing and code security are a lot better, allowing more control over what the code can do, which can be set by the programmer. From what I've read, you can replace ActiveX controls with .NET controls for finer-grained control; I've just never seen it done in the real world.
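
    For what it's worth, the signing step itself is mechanically simple. A sketch with Microsoft's signtool (the successor to the signcode.exe of that era), assuming you already hold a publisher certificate; the file names are illustrative, and note that the "safe for scripting" marking is a separate mechanism (IObjectSafety) from the signature itself:

        rem Sign a packaged control with the publisher certificate.
        signtool sign /f publisher.pfx /p MyPassword control.cab

        rem Verify the signature and certificate chain before shipping.
        signtool verify /pa control.cab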
  • by bafio ( 879076 ) on Wednesday August 31, 2005 @07:56PM (#13450161)
    > You are trusting Microsoft to: 1. Write perfect code 2. Envision every possible use of code they write

    Since I am running an MS OS, I am trusting (or risking) it already. This makes no sense! I already have some 100MB of libraries that may (and do) contain bugs. What the signature says is that the code comes from MS, and that is a lot more than "I hope I typed the URL correctly".
  • Point #5 (Score:3, Insightful)

    by rewt66 ( 738525 ) on Wednesday August 31, 2005 @07:56PM (#13450163)
    I believe that I understand Bruce's point #5. Let's say that I'm going to download the Linux kernel from RedHat. And let's say that I want to be sure that it's the real Linux kernel instead of some trojaned thing. So I check the signature (assuming that RedHat actually signed it...)

    But how would it not be the real executable? I only see two possibilities:

    1. Somebody hacked into RedHat's servers and overwrote the executable. But if they did that, why not just overwrite the signature too? (I know, it isn't that simple if the signature mechanism uses a public key, which I suspect it does. Then you would have to have access to a valid RedHat private key to sign the bad executable. But you could just delete the signature instead, making it look like RedHat didn't bother to sign the file.)

    2. Somebody is playing with you via DNS or ARP poisoning or some such, and you aren't going to RedHat at all. But the exact same argument applies - they just remove the signature, and who's to know? (Well, everybody knows who is checking signatures, but everybody assumes "they just didn't sign it" rather than "oops, hostile action!")

    So the point is that signatures don't really protect you here, unless you are really paranoid, and in practice, very few people really operate consistently in paranoid mode...
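
    The defense, if you do operate in paranoid mode, is a verifier that fails closed: a missing signature is treated exactly like a bad one, so stripping the signature buys the attacker nothing. A minimal sketch, with illustrative file names:

        #!/bin/sh
        # Fail closed: no signature is the same as a bad signature.
        if [ ! -f kernel.rpm.asc ]; then
            echo "refusing to install: no signature present" >&2
            exit 1
        fi
        gpg --verify kernel.rpm.asc kernel.rpm || {
            echo "refusing to install: signature check failed" >&2
            exit 1
        }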
  • by HishamMuhammad ( 553916 ) on Wednesday August 31, 2005 @07:59PM (#13450185) Homepage Journal
    You can sign your code using a public key scheme like GPG. No need for a middle-man like VeriSign. Users can get your public key at your project's website and verify that the code is really yours. Of course, your project's website may be compromised... but so can VeriSign's...
  • Re:Bruce is right (Score:5, Insightful)

    by vadim_t ( 324782 ) on Wednesday August 31, 2005 @08:01PM (#13450198) Homepage
    It has its value. It's just not a panacea.

    You can apply code signing to several things. For instance, you might use it while working from home. This way whoever receives your source can be quite sure it comes from you. This also assures that the source was not changed since you signed it, for instance, by a virus. The latter relies on the assumption that the source wasn't infected before it was signed, though.

    It could also be useful for distributions. Let's say somebody breaks into a Debian mirror and replaces sshd with a version with a backdoor. If code signing were in place, you could notice it quite easily. Now, you probably don't trust every developer individually, but trust them because their key was signed by the general Debian key. But still, something can be arranged. For instance:

    Debian would have a master key that signs developers' keys. Debian would also have a list of developers, and a list of their projects, also signed with a key. And then there are packages signed by each developer.

    To check trust, you check the signature, then make sure the developer who signed it belongs to that project. This way merely being a Debian developer is not enough to put a backdoor in some random package.
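
    A sketch of what that check could look like, using hypothetical file names for the master-signed developer list and the per-package detached signature:

        # 1. The developer list is itself signed by the Debian master key.
        gpg --verify developers.txt.asc developers.txt || exit 1

        # 2. The package signature must verify against some key; capture the
        #    signer's fingerprint from gpg's machine-readable status output.
        gpg --status-fd 1 --verify package.deb.asc package.deb \
            | awk '/VALIDSIG/ {print $3}' > signer.fpr
        [ -s signer.fpr ] || exit 1

        # 3. That signer must be listed as a maintainer of this package.
        grep -q "$(cat signer.fpr) package-name" developers.txt \
            || { echo "signer does not maintain this package" >&2; exit 1; }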

    Of course, none of this assures complete security. It could be a bug, the developer's key could be stolen, etc. But this gives you interesting mechanisms, such as revoking a developer's key, and it makes life much harder for random script kiddies.

    Now, I completely agree that this is not a panacea. But let's be realistic, while a web browser could run in a VM, I doubt very much this approach would work so well with sudo. Being able to make sure that the update to sudo you're about to install comes from the usual developer has some value.
  • by AlexCV ( 261412 ) on Wednesday August 31, 2005 @08:11PM (#13450252)
    You work for a corporation. You have some control over a bunch of desktops. You have a hopefully security-aware group to vet things and inspect and whatnot. You're the ONLY group of users who see real benefits from code signing. Of course it all works, FROM YOUR POINT OF VIEW. (Sorry for shouting.)

    Me, I'm just one guy. I'm not going to do due diligence for every piece of software out there. Half of what I use isn't signed. Should I just give up on it? I don't have TIME to deal with security enough to make a whitelist useful.

    Them over there, they're also a big corporation, but they have tons of sites, and while the people back at HQ are somewhat clueful, the support folks at the site in the boondocks halfway across the country suck. Or security is just not a priority, whatever. Benefits from code signing? Nil.

    So you, the poster of this article, are in the one group that actually gains anything. I would recommend you investigate the possibility of re-signing absolutely everything and accepting nothing but your own cert, not even Microsoft's. 'Cause it sure seems like you run a tight ship; I envy you.
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Wednesday August 31, 2005 @08:13PM (#13450264) Homepage Journal
    ...is that the code is from who it claims to be from; in other words, nobody tampered with it in transit or on the remote site. This doesn't tell you it is safe, but it DOES tell you that any danger is not from an unknown outsider.

    In other words, it doesn't eliminate risk, but it does quantify it, provided that the signature chain is meaningful.

    For example, a "correct" approach is to have the package maintainer sign the package as verified by the maintainer. The maintainer's key is signed by someone else, preferably someone well known, so that users can be sure that the key's owner is who they say they are.

    The RPM/DEB/whatever packager would then sign their built version of the code. For "perfect" security, the binaries would be handed back to the source package maintainer, who would countersign the binary as authentically compiled from their work.

    A distribution provider would then add their OWN signature to the binary package, to establish that the copy in their possession is genuinely from them and that they know the source of the package. They'd also countersign the developer's and binary packager's keys.

    At this point, you've got an auditable trail and enough cryptographic keys to (almost) guarantee that nobody could break into the package and add malign software to it.

    It does NOT prove that the package is "safe", but it COULD be used, in a court where digital signatures were accepted, to prove responsibility. Thus, if a package was provably untampered with but provably had spyware in it, then you could use the signature chain to establish with a high level of certainty who introduced that spyware and when: it can't have been introduced after the last signature that still matches the binary, so it must have been the first signer whose valid signature covers the file as it stands.
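
    Nothing exotic is needed for such a trail; several independent detached signatures over the same artifact will do. A rough sketch with GPG, using made-up key names:

        # Maintainer signs the source; packager and distributor each add
        # their own detached signature over the built binary.
        gpg -u maintainer@example.org --detach-sign -o pkg.src.sig pkg.tar.gz
        gpg -u packager@example.org --detach-sign -o pkg.bin.sig1 pkg.deb
        gpg -u distro@example.org --detach-sign -o pkg.bin.sig2 pkg.deb

        # An auditor later verifies every link; the earliest signature that
        # still matches the bytes brackets when any tampering happened.
        gpg --verify pkg.bin.sig1 pkg.deb && gpg --verify pkg.bin.sig2 pkg.deb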

  • by bitslinger_42 ( 598584 ) on Wednesday August 31, 2005 @08:13PM (#13450266)

    You state that several of Bruce's arguments do not apply, since code signing wasn't designed to solve problem A or problem B. Unfortunately, this isn't an issue of what signing was designed to solve, it is a question of what the end user thinks code signing is for.

    If the end user is presented with pop-ups asking "Do you want to trust code from Company X?", the user will be making a decision about that trust. They may (or may not) be concerned with questions such as "Will this code crash my computer?" or "Is this a Trojan horse?". They couldn't care less if the code was really authored by Simon P. Coder while under the employ of Company X. When they click "Always Trust", if they're thinking at all (not guaranteed), they will think that the code is safe, won't crash, and won't have extra "features" that steal their private information.

    This is Bruce's point. Because of the presentation and implementation issues, most end users are left with the impression that signed code == good code, an impression that is not always accurate. If the technology is leading end users to believe things that simply aren't true, there is a problem. In certain, limited, tightly-controlled environments, code signing can work as intended. In general, it is at best an annoyance to the end users and at worst a complete fraud.

  • Re:Point #5 (Score:4, Insightful)

    by Keeper ( 56691 ) on Wednesday August 31, 2005 @08:29PM (#13450357)
    1. You are correct, you would need access to RedHat's private key to "fake" a signature. If the file isn't signed, you know that whoever created the binary didn't have access to the private key and that you can't determine the origin of the file. If you choose to believe that the file's origin was from RedHat after RedHat told you that they sign their binaries, then you made a poor decision.

    2. Again, poor decision.

    At the end of the day, it is up to the user to determine what to trust and what not to trust. They are the only ones who can make the trust decision. Code signing is intended to give users the information they need to make that decision. If you want to take the decision out of the hands of the users, a 3rd party must decide what can and can't be safely run on a machine. That isn't an acceptable solution.

    Security is entirely about paranoia. You lock your front door because you're afraid someone is going to walk into your house and steal your stuff. You lock your car because you're afraid someone is going to steal it. You have a logon/password to your computer because you're afraid someone is going to find your porn collection.

    If you want to operate a computer in an environment that exposes you to hostile applications, you must be paranoid enough to determine where an executable came from, and whether you trust that source, before running it.
  • Re:Bruce is right (Score:5, Insightful)

    by harlows_monkeys ( 106428 ) on Wednesday August 31, 2005 @08:45PM (#13450457) Homepage
    Yet a restricted security model (sandbox, etc.) would give you a greater level of security

    However, pretty much every sandbox implementation has had exploitable bugs that allowed code running in the sandbox to get out.

    So, even with a sandbox, it is wise to also avoid running code from people that you don't trust, so signing is still useful in a sandbox environment.

    Also, a sandbox doesn't help with code that has to run outside the sandbox, such as device drivers, or new versions of whatever implements the sandbox.

    Look at it this way: for a piece of code to do something malicious on your system, two things must happen:

    1. the code has to run on your system with sufficient privilege or access to do its malicious deeds
    2. the code has to actually contain something malicious

    You can protect your system by making sure that at least one of these conditions does not hold. Sandboxes try to make sure the first condition does not hold. Code signing tries to make sure the second condition does not hold.

  • Re:Good comments (Score:5, Insightful)

    by lukewarmfusion ( 726141 ) on Wednesday August 31, 2005 @09:00PM (#13450547) Homepage Journal
    I think Schneier's criticisms often come off that way. His critique of certificates amounts to "they're not perfect, so don't bother." This "all or nothing" type of attitude may not be exactly how he feels, but his writing certainly makes one feel that way.
  • by pVoid ( 607584 ) on Wednesday August 31, 2005 @09:12PM (#13450616)
    Bruce's Argument #1) Users have no idea how to decide if a particular signer is trusted or not.

    My comments: True. [...]The IT dept should know not to trust "Snake Oil Corp." [...]

    You are missing the point entirely: what if I were to present you with "Citrix Corp." and "Citrix Corporation" and "Cirtix Inc."? Which would you *know* comes from *the* Citrix Corp.? Notice, too, that the third one has a typo. I will also remind you of the guy who obtained a cert from VeriSign in the name of a well-known company; I forget which one it was, but it was something like Microsoft or Sun.

    Bottom line: the cert only assures you that the string it corresponds to ("Citrix") is correct. It doesn't say anything else. Which raises the question: why have a signature at all?
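
    You can see just how little the certificate binds by dumping its subject; a quick sketch with OpenSSL (the file name is illustrative):

        # This string is all the relying party really gets. "Citrix Corp.",
        # "Citrix Corporation" and "Cirtix Inc." would be three different,
        # equally "valid" subjects as far as the cryptography is concerned.
        openssl x509 -in signer-cert.pem -noout -subject -issuer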

    Bruce's Argument #2) Just because a component is signed doesn't mean that it is safe.

    My Comments: [...] Code signing was designed to prove the authenticity and integrity of the code. [...]

    Again, this is beside the point: when you, for example, give shell access to students on university machines, all the binaries they run are part of a secure base. cp and ls are *the* tried-and-true binaries from every distribution. An administrator *knows* that they can trust that code.

    Now, let's say an administrator installs a signed ActiveX plugin. Let's say it's even the Flash player. What we cannot know, and what makes this mechanism extremely dangerous (by means of perceived safety), is that the player might have a security hole in it. So you might go to a web page, and an ActionScript loaded into the player could cause the player to execute arbitrary code. This is a big no-no. And not because the player is flawed, but because you've decided to integrate this piece of code into your trusted base OS.

    Bruce's Argument #3) Just because two components are individually signed does not mean that using them together is safe; lots of accidental harmful interactions can be exploited.

    My comment: Again, code signing was never designed to accomplish this.

    Bruce's Argument #4) "Safe" is not an all-or-nothing thing; there are degrees of safety.

    My comment: I agree with this statement.

    Combined with the first two points, you're basically saying that there's no point in having code signing.

    Bruce's Argument #5) The evidence of attack (the signature on the code) is stored on the computer under attack, which makes it mostly useless: the attacker could delete or modify the signature during the attack, or simply reformat the drive where it is stored.

    This is a very important feature of security: auditing. If you have a system that's been compromised, you want to know how it happened. *Especially* in a corporate environment: if you see one workstation get 0wn3d and formatted, you won't be sitting around waiting to see when the next one gets hit. You will want to know what did it.
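
    The obvious mitigation for point #5 is to keep the evidence somewhere the attacker can't reach. A minimal sketch, assuming a hardened log host (the host name is made up):

        # At install time, snapshot the hashes of the binaries you just
        # verified, sign the manifest, and ship it off the box. After a
        # compromise, compare against the copy the attacker couldn't touch.
        sha1sum /usr/bin/* > manifest.txt
        gpg --detach-sign manifest.txt
        scp manifest.txt manifest.txt.sig audit@loghost.example.com:/var/audit/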

    All in all, I agree with everything he says. Even though I'm just a mere mortal.

  • by Anonymous Coward on Wednesday August 31, 2005 @10:56PM (#13451181)
    This is why code signing doesn't work: NOBODY knows they are supposed to type
    "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*"
    (where NOBODY == normal person, i.e. the poor schmuck whose rooted box is bombarding you with spam).
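
    And even once the key is imported, the check only happens if somebody actually runs it. The verification step itself looks like this (the package name is made up):

        # With the vendor key imported, rpm checks digests and the signature.
        rpm --checksig kernel-2.6.12-1.i686.rpm
        # A good result ends in "gpg OK"; without the imported key you only
        # get the digest checks plus a missing-key warning.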

  • Re:Bruce is right (Score:1, Insightful)

    by Anonymous Coward on Thursday September 01, 2005 @04:46AM (#13452375)
    I understand the "working from home" scenario. Code signing helps in checking the origins of the code. It works because the employees know each other personally, and so it is easy to verify the validity of the signatures.

    But now take the "Linux distribution" scenario (say, Debian): how do I, as an outsider, verify that the signature (for the code that I downloaded) really comes from Debian? You would need to go to their office(s), speak to their representatives, and verify the credentials of those representatives.

    The above is overkill and unworkable for software that is intended to be widespread (at low cost). This is why the "chain of trust" was invented: the theory goes that an individual trusts a CA, the CA trusts the company, so by implication the individual would be able to trust the company. However, in practice:
    • The individual does not trust the CA; at best he/she gives it the benefit of the doubt.
    • The CA does not trust the company; at best it checks the status of the company with the chamber of commerce.
    • The rules of logical reasoning ("implication") do not apply when it comes to trust. Exempli gratia: my brother trusts me, and I trust my Peruvian friend, yet my brother does not trust my friend. I am sure you can come up with similar examples.

    In short: Bruce Schneier rightfully criticizes the way code signing is handled in practice, today.
  • by wagemonkey ( 595840 ) on Thursday September 01, 2005 @05:52AM (#13452593)
    for signed code, there is a way to track it back and make the author accountable
    Unless, as per Bruce's point #5, the code modifies or deletes its signature; then your 'way to track it back and make the author accountable' doesn't exist.
  • by Anonymous Coward on Thursday September 01, 2005 @10:01AM (#13453847)
    > In certain, limited, tightly-controlled environments, code signing can work as intended.

    I work for a company that signs firmware upgrades for our devices (with a hardware protected key), and the devices check the signature before accepting an upgrade, and in that situation, signing works.
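
    For the curious, the shape of that flow is simple; a sketch with OpenSSL, assuming the private key stays behind the hardware protection and only the public key ships baked into the device (file names are illustrative):

        # Build server: sign a digest of the firmware image.
        openssl dgst -sha256 -sign signing-key.pem -out firmware.sig firmware.bin

        # On the device, before flashing: verify against the built-in public key.
        openssl dgst -sha256 -verify signing-pub.pem -signature firmware.sig firmware.bin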

    But more generally, I agree with you and Bruce.
