
Do You Code Sign?

Saqib Ali asks: "I am a regular reader of Bruce Schneier's blog, articles, and books, and I really like what he writes. However, I recently read his book 'Secrets and Lies', and I think he has done some injustice to the security provided by code signing. On page 163 of the book, he basically states: 'Code signing, as it is currently done, sucks.' Even though I think that code signing has its flaws, it does provide a fairly good mechanism for increasing security in an organization." What are your thoughts on the current methods of code signing in existence today? If you feel like Bruce Schneier, how would you fix it? If you feel like Saqib Ali, what have you signed, and how well has it worked?
"The following are the reasons that he (Bruce Schneier) gives:

Bruce's Argument #1) Users have no idea how to decide if a particular signer is trusted or not.

My comments: True. However, in an organization it is the job of the IT/security department to make that determination; it shouldn't be left up to users. The IT department should know not to trust "Snake Oil Corp.", whereas anything from "Citrix Corp" should be fairly safe. Moreover, Windows XP SP2 provides a mechanism to create a whitelist of trusted signers and reject everything else. This is a very powerful security mechanism, and it greatly increases security in a corporate environment if the workstations are properly configured. Having said that, this feature may not be that useful for home users, who cannot tell the difference between Snake Oil and Citrix Corp.
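That whitelist policy can be sketched in a few lines. This is a hypothetical illustration (the signer names and the allow_install function are made up); real deployments express the same rule through OS certificate stores and software restriction policies rather than application code:

```python
# Hypothetical sketch of a signer-whitelist policy: run code only if it
# carries a valid signature AND the signer is on a pre-approved list.

TRUSTED_SIGNERS = {"Citrix Corp", "Microsoft Corporation"}

def allow_install(signer_name, signature_valid):
    """Reject anything unsigned, badly signed, or from an unknown signer."""
    if not signature_valid:
        return False  # tampered or unsigned code never runs
    return signer_name in TRUSTED_SIGNERS

print(allow_install("Citrix Corp", True))     # True: trusted signer, valid signature
print(allow_install("Snake Oil Corp", True))  # False: valid signature, untrusted signer
print(allow_install("Citrix Corp", False))    # False: trusted name, broken signature
```

The point of the sketch is that the decision is made once, centrally, by the IT department; end users never see a trust dialog.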

Bruce's Argument #2) Just because a component is signed doesn't mean that it is safe.

My Comments: I fully agree with this. However, code signing was never intended for this purpose. Code signing was designed to prove the authenticity and integrity of the code. It was never designed to certify that the code is also securely written.
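The integrity half of that guarantee rests on a cryptographic digest: if even one byte of the code changes, the hash that was signed no longer matches. A minimal sketch using Python's hashlib (the authenticity half additionally requires an asymmetric signature over this digest, which is omitted here):

```python
import hashlib

def digest(code: bytes) -> str:
    # SHA-256 digest of the code; in real code signing, a digest like this
    # is the value that actually gets signed with the publisher's private key.
    return hashlib.sha256(code).hexdigest()

original = b"print('hello world')"
published_digest = digest(original)

# An attacker flips a single byte in transit:
tampered = b"print('hello w0rld')"

print(digest(original) == published_digest)   # True: integrity check passes
print(digest(tampered) == published_digest)   # False: tampering is detected
```

Note what this does and does not prove: the code is byte-for-byte what the signer published, but nothing about whether that code is well written or safe.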

Bruce's Argument #3) Just because two components are individually signed does not mean that using them together is safe; lots of accidental harmful interactions can be exploited.

My comment: Again, code signing was never designed to accomplish this.

Bruce's Argument #4) "Safe" is not an all-or-nothing thing; there are degrees of safety.

My comment: I agree with this statement.

Bruce's Argument #5) The fact that the evidence of attack (the signature on the code) is stored on the computer under attack is mostly useless: the attacker could delete or modify the signature during the attack, or simply reformat the drive where the signature is stored.

My comments: I am not sure what this statement means. I think this type of attack is outside the realm of code signing. It is like saying host-based IDS or anti-virus software is useless because, if you can compromise the system, you can turn them off.

I would really appreciate any comments/thoughts/feedback on Bruce's arguments above and my commentary. I am planning to give a short talk about the benefits of code signing, so any feedback will really help me."
This discussion has been archived. No new comments can be posted.

  • by DeadSea ( 69598 ) * on Wednesday August 31, 2005 @07:26PM (#13449955) Homepage Journal
    The best example of why code signing as currently implemented is broken is Windows Update. During the Windows Update process you are asked to accept signed code, and you may choose to "Always trust code from Microsoft". In the context of Windows Update, that sounds perfectly legit to most users. They want to update their computers, and they don't want to be bothered by the dialog again in the future.

    By agreeing to always trust Microsoft you are agreeing to several things you may not realize:

    1. You are trusting all code by Microsoft, not just Windows Update (obvious to most people).
    2. You are trusting Microsoft code that folks other than Microsoft give you to run.

    The second one is the kicker. If there is a bug in some Microsoft-signed code that allows JavaScript to call it and write to any file, then anybody can give you that signed code plus some JavaScript and take over your computer. This happens without any further notification at all to you as the end user.

    You are trusting microsoft to:

    1. Write perfect code
    2. Envision every possible use of code they write

    Even if you believe that code can be bug-free, nobody who writes code really locks it down so that it can't be used for anything other than what it was intended for. There was a security vulnerability that took advantage of just this: a bug in some signed Microsoft code. I'm not sure how it was fixed.


    • by Homology ( 639438 ) on Wednesday August 31, 2005 @07:37PM (#13450040)
      During the windows update process you are asked to accept signed code and you may "Always trust code from Microsoft".

      For some reason there is no option to never trust certain certificates.

    • by owlstead ( 636356 ) on Wednesday August 31, 2005 @07:41PM (#13450072)
      You are trusting Microsoft code that folks other than Microsoft give you to run.

      I know this is true, and bugs have been found in libraries. What was even worse is that the same key was used for multiple libraries, making it hard for Microsoft to put the key out of its misery (i.e., put it in a Certificate Revocation List).

      This is an example where the technique is not so much wrong, but the system in which the technique is used is wrong (one of Bruce's main points). I do not want to give any website the ability to upload and install code on my computer, even if it is signed by someone I trust.

      In principle, the idea that MS signs code for automatic updates of its own software is great: it takes out the man-in-the-middle attack (taking over the update site, attacks on proxies, etc.). Keep the code signing, but leave the snags out.
    • by ad0gg ( 594412 ) on Wednesday August 31, 2005 @07:47PM (#13450112)
      When you sign an ActiveX control, you can choose not to allow scripting calls to it. XP is pretty weak when it comes to security; Server 2003 is a lot better, since it actually forces you to whitelist a site before JavaScript and ActiveX run on it. The problem with ActiveX is that you can't fine-tune security: it's all or nothing. Java code signing and code security are a lot better, allowing more control over what the code can do, which can be set by the programmer. From what I've read, you can replace ActiveX controls with .NET controls for finer-grained control; I've just never seen it done in the real world.
    • You are trusting microsoft to: 1. Write perfect code 2. Envision every possible use of code they write

      Since I am running an MS OS, I am trusting (or risking) it already, so this argument makes no sense! I already have some 100MB of libraries that may (and do) contain bugs. What the signature says is that the code came from MS, and that is a lot more than "I hope I typed the URL correctly".
    • by bitslinger_42 ( 598584 ) on Wednesday August 31, 2005 @08:02PM (#13450205)

      In addition to the two points on what you are trusting Microsoft to do, there is a third, even more important, thing that you are trusting. By "trusting" the signed code, you are also trusting the chain of certificates involved.

      "Huh?" you say? "WTF does that mean?" Most of the time, the certificate that was used to sign the code was also signed by another certificate. This is supposed to establish a chain of trust. In Microsoft's example, their root certificate may be signed by Verisign. The theory is that Verisign is trusted by everybody, and therefore if Verisign signs someone's key, the signed key can also be trusted.

      Unfortunately, the theory breaks down. There was a well-publicized instance where Verisign issued a code-signing certificate to someone claiming to be from Microsoft who actually wasn't. When Verisign screws up, or otherwise proves itself untrustworthy, the end user is left trying to figure out which "Microsoft" keys are good and which ones aren't. Above and beyond the fact that many users aren't equipped to make those decisions, the vast majority simply don't care.

      In a closed-form environment (i.e. inside a company with a PKI in place, physical security on the PKI servers and root key, documented procedures for establishing the identities of the cert requestors, where the apps being signed are for internal use only), code signing, and even chain of trust, mostly works. Once you get out of that tight model, the signature on the code only says "This code was signed by someone claiming to be Microsoft".

    • If you don't trust Microsoft code, what the hell are you doing running Windows? I think Microsoft should have preloaded their cert store with the trust bit set.
  • No, but... (Score:5, Funny)

    by TeknoHog ( 164938 ) on Wednesday August 31, 2005 @07:28PM (#13449970) Homepage Journal
    I co-sign. It comes in handy when your code has lots of trig math.
  • by frovingslosh ( 582462 ) on Wednesday August 31, 2005 @07:31PM (#13449995)
    Of course I code sign, I'm deaf and mute you insensitive clod!
  • by Anonymous Coward on Wednesday August 31, 2005 @07:31PM (#13449996)
    FIRST POST

    -----BEGIN PGP SIGNATURE-----
    Version: PGP for Personal Privacy 5.0
    MessageID: 5NWrD3M0/1xt+ynMPHbCYX+e3KSK9qhU

    iQCVAwUBOFV2W1FO4fmE3w/VAQHgrgP9GlNAaTdNR7DI/Mh62H aZj49496wbM1Nh
    YKlmtJIse2vcLF4LFVLJ47zQi4dK21vPlQ9XXAk4n4cype4gDn p6nWR+Rrz+3DPC
    gpTUtsdlxZyMh0PvbAmssEX8z3In+cWgs43sjw6Tf0G4ENx68K 8yCEK0oe/aX0vv
    mktgUuXP6A4=
    =3mUU
    -----END PGP SIGNATURE-----

  • Bruce is right (Score:4, Insightful)

    by Anonymous Coward on Wednesday August 31, 2005 @07:33PM (#13450011)
    Bruce is right. You mention that code signing is not designed to handle problems of security or safety. Well, what good is that? The primary reason you want to know who wrote the code is that you trust some organizations to write safe code. Yet a restricted security model (sandbox, etc.) would give you a greater level of security. It's nice to know that Friendly Company X put their seal of approval on some flunky's ActiveX, but it's much nicer to know that the system is restricting system calls and network access.
    • Re:Bruce is right (Score:5, Insightful)

      by Anonymous Coward on Wednesday August 31, 2005 @07:44PM (#13450090)
      This isn't "insightful".

      You need both a sandbox and authentication of the provider. I can give you code for your sandbox that purports to be a login client for your bank, you enter your creds and I can send them to another URL or do other nasty things.

      Code signing is designed to handle problems of the type "is this software from my bank really from my bank?". It's the same problem an SSL certificate solves. You can have a perfectly valid SSL certificate, but if it claims to be from your bank and really isn't, your data could go anywhere.

      In other news, seatbelts proven not to prevent auto-accidents!
    • This is not a good reason. Since *both* sandboxing and code signing can be used, you get a safer system by not excluding one just because the other is better.

      That's like saying you don't want cream on your cake since it already has chocolate on it.
    • You can use a sandbox and code signing together. If I wanted to write an accounting system that used plugins, say for document export, I could sandbox the plugins to disallow network access and then only accept plugins that have been signed by certs I trust. Java and .NET both support sandboxing and code signing.
    • Re:Bruce is right (Score:5, Insightful)

      by vadim_t ( 324782 ) on Wednesday August 31, 2005 @08:01PM (#13450198) Homepage
      It has its value. It's just not a panacea.

      You can apply code signing to several things. For instance, you might use it while working from home. This way, whoever receives your source can be quite sure it comes from you. It also assures that the source was not changed since you signed it, for instance by a virus. The latter relies on the code not having been infected before it was signed, though.

      It could also be useful for distributions. Let's say somebody breaks into a Debian mirror and replaces sshd with a version with a backdoor. If code signing were in place, you could notice it quite easily. Now, you probably don't trust every developer individually, but trust them because their key was signed by the general Debian key. But still, something can be arranged. For instance:

      Debian would have a master key that signs developers' keys. Debian would also have a list of developers, and a list of their projects, also signed with a key. And then there are packages signed by each developer.

      To check trust, you check the signature, then make sure the developer who signed it belongs to that project. This way merely being a Debian developer is not enough to put a backdoor in some random package.
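      The scheme described above amounts to a two-step check. A hypothetical sketch (the key names, the data structures, and trusted_package are all made up for illustration; Debian's real archive format works differently):

```python
# Hypothetical two-step trust check: the signer's key must be signed by
# the master key, AND the signer must be a maintainer of this package.

MASTER_SIGNED_KEYS = {"alice-key", "bob-key"}  # developer keys signed by the master key
PROJECT_MEMBERS = {"sshd": {"alice-key"}, "vim": {"bob-key"}}

def trusted_package(package, signer_key):
    # Step 1: is the signing key itself vouched for by the master key?
    if signer_key not in MASTER_SIGNED_KEYS:
        return False
    # Step 2: does this developer actually maintain this package?
    # This stops a valid developer from backdooring an unrelated package.
    return signer_key in PROJECT_MEMBERS.get(package, set())

print(trusted_package("sshd", "alice-key"))    # True
print(trusted_package("sshd", "bob-key"))      # False: valid developer, wrong project
print(trusted_package("sshd", "mallory-key"))  # False: key not signed by master key
```

      (Signature verification itself is elided; trusted_package assumes the package signature has already been checked against signer_key.)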

      Of course, none of this assures complete security. It could be a bug, the developer's key could be stolen, etc. But this gives you interesting mechanisms, such as revoking a developer's key, and it makes life much harder for random script kiddies.

      Now, I completely agree that this is not a panacea. But let's be realistic, while a web browser could run in a VM, I doubt very much this approach would work so well with sudo. Being able to make sure that the update to sudo you're about to install comes from the usual developer has some value.
      • It has value, especially in the situation you're describing, but used as it's mostly used (and I mean signed activex) it's not useful at all.

        In the example you're describing, the intended user is probably experienced so that the signature means something to him (admin, developer, etc). He probably knows that if he finds a piece of signed code, but has no verified public key, the signature is worthless. He knows of webs of trust and chains of certificates. Some code is in fact signed with OpenPGP in the way
      • Re:Bruce is right (Score:2, Informative)

        by Minna Kirai ( 624281 )
        Let's say, somebody breaks into a Debian mirror and replaces sshd with a version with a backdoor. If code signing was in place, you could notice it quite easily.

        You've got some typos there. The word "if" falsely implies that Debian doesn't already do this. [debian-adm...ration.org] Replace it with "because". Several other words should be changed to past tense.
    • If you can't authenticate the source of the code, you can never make any assumptions about its intentions.

      Of course, ultimate security is like what you get in Java: don't trust anybody, and only allow the operations you specifically grant. But not many turn that on in their JVM. Anyway, that too is based first on code signing.
    • Re:Bruce is right (Score:5, Insightful)

      by harlows_monkeys ( 106428 ) on Wednesday August 31, 2005 @08:45PM (#13450457) Homepage
      Yet a restricted security model (sandbox, etc.) would give you a greater level of security

      However, pretty much every sandbox implementation has had exploitable bugs that allowed code running in the sandbox to get out.

      So, even with a sandbox, it is wise to also avoid running code from people that you don't trust, so signing is still useful in a sandbox environment.

      Also, a sandbox doesn't help with code that has to run outside the sandbox, such as device drivers, or new versions of whatever implements the sandbox.

      Look at it this way: for a piece of code to do something malicious on your system, two things must happen:

      1. the code has to run on your system with sufficient privilege or access to do its malicious deeds
      2. the code has to actually contain something malicious

      You can protect your system by making sure that at least one of these conditions does not hold. Sandboxes try to make sure the first condition does not hold. Code signing tries to make sure the second condition does not hold.

    • Just because you sign a check doesn't mean there is money in the bank. It just means you endorsed it.
  • by bobalu ( 1921 )
    It IS useful in a perfect world, but most people are too clueless not to just go ahead and accept everything anyway.

    I personally never accept "always accept".

  • by imac.usr ( 58845 ) on Wednesday August 31, 2005 @07:35PM (#13450029) Homepage
    I recently installed Fedora Core 4, and after setting it all up I ran up2date and noted that it's set to require GPG signatures by default (I imported the key as well). Upon running up2date, though, practically every package it found brought up an error message stating that it couldn't recognize the signature, and asking if I wanted to install the package anyway. After about ten packages, I said "fuck it" and turned off GPG signing. (I had to do so by editing up2date's config file manually, since it only runs through its config process once, it seems.)

    If Red Hat can't be bothered to sign any of its updates (even the kernel, for pete's sake), then why as a user should I care one way or another?

    • by Anonymous Coward on Wednesday August 31, 2005 @07:49PM (#13450128)
      Then you imported the wrong key, tool. All the packages are signed. There are several different keys, you know. And why would Red Hat be signing Fedora's packages?
    • Here is some more information about this topic from the: Fedora FAQ [fedorafaq.org]

      I believe there is also an option to up2date to turn off signature checking (--nosig). I don't understand the full implications but this behavior is new in Fedora Core 4, and I thought earlier versions of Fedora Core also had signed packages but an implementation that worked...

    • by dtfinch ( 661405 ) * on Wednesday August 31, 2005 @10:12PM (#13450945) Journal
      Every newbie and their grandmother knows you just have to type "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*" before using up2date, and do the same for any other repositories you might add.
  • Good comments (Score:3, Informative)

    by Jaborandy ( 96182 ) on Wednesday August 31, 2005 @07:37PM (#13450042)
    You make some good arguments. Code signing is not a panacea, but it does add value. Saying it sucks because it doesn't solve world hunger is a worthless criticism of a good technology.

    I would add that "always trust X" is not appropriate for home users, and it is good that MS makes the unchecked state the default. I don't recall MS telling me to always trust MS, and if they do, I would want to give them feedback about that wording.

    The "always trust X" feature is best used by domain admins who can pre-approve stuff for their users. It's even better if they can re-sign the code themselves with a cert on the approved list.

    --Jaborandy
    • Re:Good comments (Score:5, Insightful)

      by lukewarmfusion ( 726141 ) on Wednesday August 31, 2005 @09:00PM (#13450547) Homepage Journal
      I think Schneier's criticisms often come off that way. His critique of certificates amounts to "they're not perfect, so don't bother." This "all or nothing" type of attitude may not be exactly how he feels, but his writing certainly makes one feel that way.
      • Re:Good comments (Score:5, Interesting)

        by bitslinger_42 ( 598584 ) on Wednesday August 31, 2005 @10:58PM (#13451194)

        I've been reading Bruce's writings for several years now. I've even met the man and had dinner with him. To be honest, I'm not entirely sure what keeps him going.

        One common comment at his blog is that most of his writings point out the flaws, but few point out solutions. A perfectly valid criticism, and quite accurate. Having worked in the computer security industry for nearly ten years now, I am coming to the conclusion that there may be no solution. We've all heard the joke about the only secure computer (no power, locked in a safe, encased in concrete, and at the bottom of the ocean), and laughingly made comments about how security would be easier if it weren't for the users, but have we really thought about that?

        I've written several comments on /. regarding security, and I'm starting to see a trend: it isn't possible to really secure a computer if the end user doesn't understand and/or care about security. Here on /. there are many, many people who care and understand. I run multiple firewalls on my systems AT HOME, plus antivirus and antispyware programs. I actually review my logs. I don't run any program that was written more recently than my AV updates. I'm what most "normal" people would consider paranoid. And I still run into issues.

        Since I work in the industry, I am really struggling with this. I believe in security, I desire security, I really, really WANT security. I also see that none of my efforts will bring it as long as people are involved. People make coding mistakes. People are greedy. People are petty. People are malicious. The same instincts at work looting in New Orleans tonight lead some people to do anything in their power to hack other people's systems. The rest of the people, the so-called good people, sit at home and want their computers to be as simple as their toasters. They don't want to have to know about viruses, spyware, phishing, and Nigerian 419 scams. They want email, smilies, and porn.

        Regardless of how despondent I feel about security in general, security theater really pisses me off. When I see a product or a process being sold as perfect security or as any kind of silver bullet, I just have to yell. People believing that one relatively good tool will fix everything is bad enough, but when they're told that a worthless tool will fix all their problems...

        In theory, code signing has the potential in some environments to limit the risks from certain vulnerabilities. In practice, code signing for the masses is worse than worthless, because Joe User sees "Do you trust Microsoft?" and honestly believes that the code will do him no harm. He will then download and run any program, regardless of where it actually came from, as long as he gets presented with another "Do you trust Microsoft?" button, because he's been conditioned to say "Yes" by Windows Update. In this case (i.e. for general use on the Internet), the "all or nothing" concept is appropriate. Joe User would be far better off treating every application with suspicion than learning that the Code Signing Fairy will bless certain bits and everything else will be covered in foul-smelling, rotten tomatoes. There is no way that the code signing theory is applicable in general use, so using it is a bad idea.

        Now that I'm sufficiently depressed, I think I hear a bottle of Jack Daniels calling me

        • I've been accused of pointing out flaws but not providing a solution. I've also been accused of suggesting unrealistic solutions after I pointed out the flaws. Major security risk, but the client doesn't want to spend money to fix it. You just can't win.

          In the "all or nothing" world, I envision my wife's grandparents clicking No to every box even when they should click Yes (Do you want to install Ad-aware, Do you want to update your antivirus definitions, Do you want to delete the virus?). They are "all or
    • "You make some good arguments. Code signing is not a panacea, but it does add value. saying it sucks because it doesn't solve world hunger is a worthless criticism of a good technology."

      Yes, but one could argue that in this case, the technology is misleading. The average user (unlike an experienced hacker who knows not to click the "always trust" box) may wrongly interpret code signing as a panacea, or at least something close to one. This can give them a false sense of security, which can be a huge ris

  • Do you code write? (Score:3, Informative)

    by Evro ( 18923 ) <evandhoffman.gmail@com> on Wednesday August 31, 2005 @07:38PM (#13450052) Homepage Journal
    When used in this context, "code sign" doesn't make sense... shouldn't it be "Do you sign your code?" Or if it's intended as a new phrase, maybe it should be "Do you code-sign?"
  • by SilentReallySilentUs ( 908879 ) on Wednesday August 31, 2005 @07:39PM (#13450062) Homepage
    I think most of the users have no idea what "This code is signed by BlahBlah corp" or "See certificate" etc. means. They simply click on something to get past the annoying window.
  • 2 things... (Score:2, Informative)

    by deviantphil ( 543645 )
    1. The book is a few years old (1999 or 2000 IIRC).
    2. I believe Bruce is referring to the fact that, yeah, you can say so-and-so created this code, but that doesn't tell you how trustworthy the person is or how well the code was made. Therefore, putting too much faith in a "seal" saying that it is signed is a mistake.

    Maybe Bruce himself reads /. and will post. I read his blog daily and I know he often posts comments in his own blog.

    • by jd ( 1658 )
      ...is that the code is from who it claims to be from - in other words, nobody tampered with it in transit or on the remote site. This doesn't tell you it is safe, it DOES tell you that any danger is not from an unknown/outsider.

      In other words, it doesn't eliminate risk, but it does quantify it - provided that the signature chain is meaningful.

      For example, a "correct" approach is to have the package maintainer sign the package as verified by the maintainer. The maintainer's key is signed by someone else - pr

  • I'd rather get a certificate and sign my code. Would you rather your application appear as "don't trust this code, it's unsigned" or as "Signed by [publisher]. Do you want to trust?"

    You can probably co-opt Thawte's freemail certificate for code signing. [dallaway.com]

  • Point #5 (Score:3, Insightful)

    by rewt66 ( 738525 ) on Wednesday August 31, 2005 @07:56PM (#13450163)
    I believe that I understand Bruce's point #5. Let's say that I'm going to download the Linux kernel from RedHat. And let's say that I want to be sure that it's the real Linux kernel instead of some trojaned thing. So I check the signature (assuming that RedHat actually signed it...)

    But how would it not be the real executable? I only see two possibilities:

    1. Somebody hacked into RedHat's servers and overwrote the executable. But if they did that, why not just overwrite the signature too? (I know, it isn't that simple if the signature mechanism uses a public key, which I suspect that it does. Then you would have to have access to a valid RedHat private key to sign the bad executable. But you could just delete the signature instead, making it look like RedHat didn't bother to sign the file.)

    2. Somebody is playing with you via DNS or ARP poisoning or some such, and you aren't going to RedHat at all. But the exact same argument applies - they just remove the signature, and who's to know? (Well, everybody knows who is checking signatures, but everybody assumes "they just didn't sign it" rather than "oops, hostile action!")

    So the point is that signatures don't really protect you here, unless you are really paranoid, and in practice, very few people really operate consistently in paranoid mode...
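    The stripping attack only works if the verifier treats a missing signature as acceptable. A fail-closed policy defeats it: once you know a publisher signs everything, an unsigned file is itself evidence of tampering. A hypothetical sketch (accept_download is made up for illustration):

```python
# Hypothetical fail-closed verification policy: if the publisher is known
# to sign every release, "no signature" is treated as an attack, not as
# "they just didn't bother to sign it".

def accept_download(signature, signature_checks_out):
    if signature is None:
        return False             # stripped signature: reject
    return signature_checks_out  # forged signature: reject; valid: accept

print(accept_download(None, False))         # False: signature was stripped
print(accept_download("sig-bytes", False))  # False: signature doesn't verify
print(accept_download("sig-bytes", True))   # True
```

    This is exactly the "really paranoid" mode described above: the protection exists, but only for users (or package managers) that refuse to fall back to unsigned files.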
    • Re:Point #5 (Score:4, Insightful)

      by Keeper ( 56691 ) on Wednesday August 31, 2005 @08:29PM (#13450357)
      1. You are correct, you would need access to RedHat's private key to "fake" a signature. If the file isn't signed, you know that whoever created the binary didn't have access to the private key and that you can't determine the origin of the file. If you choose to believe that the file's origin was from RedHat after RedHat told you that they sign their binaries, then you made a poor decision.

      2. Again, poor decision.

      At the end of the day, it is up to the user to determine what to trust and what not to trust. They are the only ones who can make the trust decision. Code signing is intended to give users the information they need to make that decision. If you want to take the decision out of the hands of the users, a 3rd party must decide what can and can't be safely run on a machine. That isn't an acceptable solution.

      Security is entirely about paranoia. You lock your front door because you're afraid someone is going to walk into your house and steal your stuff. You lock your car because you're afraid someone is going to steal it. You have a logon/password to your computer because you're afraid someone is going to find your porn collection.

      If you want to operate a computer in an environment that exposes you to hostile applications, you must be paranoid enough to determine where an executable came from, and whether you trust that source, before running it.
  • Are you and/or Bruce talking about code signing as in Microsoft pretending that signed ActiveX controls are magically safe? If so, then Bruce is right: it does nothing and is a total waste of time. Signing is just so you know that the code was written by who you think; you still have to decide for yourself whether you trust that person to write good code.

    If we're talking about people like me, who offer pgp signatures of the software they wrote along with the tarballs, then its worth doing. Based on how ma
  • I do. (Score:2, Informative)

    by stg ( 43177 )
    I sign my shareware, simply because WinXP's screen when running signed software is slightly less frightening. I think that is worth the yearly US$100 investment (I didn't do a double-blind test, though - it's just an educated guess).

    About Bruce's Argument #1, that is true. However, the idea is that whoever they got their certificate from (Comodo, Thawte, Verisign, etc.) will revoke the certificate as soon as they do something against the rules. It will show as revoked if the user is on-line when the screen
  • Argument #5 (Score:3, Informative)

    by DVega ( 211997 ) on Wednesday August 31, 2005 @08:02PM (#13450206)

    " Bruce's Argument #5) The fact that the evidence of attack (the signature on the code) is stored on the computer under attack is mostly useless: The attack could delete or modify the signature during the attack, or simple reformat the drive where the signature is stored.

    My comments: I am not sure what this statement means...."

    It is sometimes said of code signing that, even though it does not guarantee the safety of a downloaded component, at least you know who to blame if it crashes your computer. But a bad guy can sign his component, you accept the signature, and then the component can delete all traces of the signature from your computer. So even if you later realize that it was a "bad component", you have no means to review the signature.

  • by Mensa Babe ( 675349 ) on Wednesday August 31, 2005 @08:08PM (#13450234) Homepage Journal
    No But I Sign Code.

    Editors: Please correct the mistake in the headline. Thank you.
    • It's not a mistake. The article really is about combining ASL and Pig Latin into Code Sign Language.
    • It's actually not a mistake. It's talking about the act of code signing so "do you code sign" is acceptable usage. I can't think of a good example of where else this happens but as much as I enjoy pointing out editor mistakes, I don't believe this is one.
  • I see this book mostly as an opinion piece, as well as a good starting point on how to think about security. I must say I like his other books much more than this one.

    The small part about code signing is just that: a not-so-well-thought-out part of his book with little around it to back it up. It mentions ActiveX especially, which probably means that it is taking code signing to mean "this package has safe code inside it, which other (untrusted) applications may run". I agree wholeheartedly that this is littl
  • You work for a corporation. You have some control over a bunch of desktops. You have a (hopefully) security-aware group to vet things and inspect and whatnot. You're the ONLY group of users who see real benefits to code signing. Of course it all works, FROM YOUR POINT OF VIEW. (sorry for shouting).

    Me, I'm just one guy. I'm not going to do due diligence for every piece of software out there. Half of what I use isn't signed. Should I just give up on it? I don't have TIME to deal with security enough to make a whitelist.
  • by bitslinger_42 ( 598584 ) on Wednesday August 31, 2005 @08:13PM (#13450266)

    You state that several of Bruce's arguments do not apply, since code signing wasn't designed to solve problem A or problem B. Unfortunately, this isn't an issue of what signing was designed to solve; it is a question of what the end user thinks code signing is for.

    If the end user is presented with pop-ups asking "Do you want to trust code from Company X?", the user will be making a decision about that trust. They may (or may not) be concerned with questions such as "Will this code crash my computer?" or "Is this a Trojan horse?". They couldn't care less if the code was really authored by Simon P. Coder while under the employ of Company X. When they click "Always Trust", if they're thinking at all (not guaranteed), they will think that the code is safe, won't crash, and won't have extra "features" that steal their private information.

    This is Bruce's point. Because of the presentation and implementation issues, most end users are left with the impression that signed code==good code, an impression that is not always accurate. If the technology is leading the end users to believe things that simply aren't true, there is a problem. In certain, limited, tightly-controlled environments, code signing can work as intended. In general, it is at best an annoyance to the end users and at worst a complete fraud.

  • If you feel like Bruce Schneier, how would you fix it?

    This makes the assumption that it's a solvable problem; not every problem is, especially when it comes to computer security (or security in general, for that matter).

  • Comment removed based on user account deletion
  • When you are developing applications and are compiling against packaged components that maybe you didn't write, it's critical to be able to say "I am building this application against this component/library - when it gets out in the production world, don't link/load any version of the library that isn't digitally signed with the private key that matches this public key". That is what code signing is good for. Not some farcical "proof of correctness" aquatic ceremony. :)
    • it's critical to be able to say "I am building this application against this component/library - when it gets out in the production world, don't link/load any version of the library that isn't digitally signed with the private key that matches this public key".

      Interpreting "don't" as "you break the EULA if you" violates the GNU Lesser General Public License, as the license requires the author of a program that depends on a covered library to allow users to upgrade the libraries that the program depends on.
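    The load-time check the grandparent describes can be approximated in a few lines. This sketch pins a content digest recorded at build time rather than verifying a real public-key signature (a deliberate simplification; the library name and bytes are made up):

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def ok_to_load(name: str, data: bytes, pins: dict) -> bool:
    """Refuse any library whose content doesn't match the pin recorded at
    build time. A real loader would verify a signature against a pinned
    public key, so the vendor could still ship updated (re-signed) builds."""
    return pins.get(name) == digest(data)
```

    The signature-based variant is what makes the LGPL objection above interesting: digest pinning forbids all substitution, while key pinning only forbids substitution by someone without the vendor's key.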

  • by PIPBoy3000 ( 619296 ) on Wednesday August 31, 2005 @08:27PM (#13450350)
    Our organization recently bought an email encryption product called Tumbleweed that was quite expensive. One of the features is that any e-mail with "[Secure]" at the beginning of the subject would automatically be encrypted. The catch is that in order to make a button in Outlook that added this text, it would cost an extra $8,000 for a custom add-in.

    Naturally the request came to me (even though I develop web applications). I whipped up a ten line Outlook macro that did it, spending about twenty minutes on it. Easy, right? The catch is that Outlook security is incredibly tight and unless you open massive security holes, the macro wouldn't run unless it was digitally signed by a trusted provider.

    I plunked down $400 for a Verisign certificate and spent the next couple weeks working with our SMS guy to create packages for the various Outlook versions, and the desktop guys to deal with people who had custom Outlook macros.

    Basically it was a huge hassle, done only because we had to. Still, it worked, and ended up saving some money. Crazy, though.
    • The catch is that in order to make a button in Outlook that added this text, it would cost an extra $8,000 for a custom add-in.

      Naturally the request came to me (even though I develop web applications). I whipped up a ten line Outlook macro that did it, spending about twenty minutes on it. Easy, right? The catch is that Outlook security is incredibly tight and unless you open massive security holes, the macro wouldn't run unless it was digitally signed by a trusted provider.

      I plunked down $400 for a Verisign
      • If he was going to be sitting at his desk pounding his pud and reading slashdot, four weeks is actually a recovery of sunk costs.
        We certainly joked about it. Most of my time after writing the code was as an occasional advisor. I ended up talking with the SMS person a bunch, helping him make a package that would work with everything. It certainly wasn't four man weeks. It probably was along the lines of six hours spread across many weeks, as is most of the stuff I do.

        Personally I mock the people who can't juggle more than one or two projects. I typically have a dozen or two, continually ebbing and flowing, often not really ever ending.
    • Sounds like you should have just plonked down the $8000. Probably would have been cheaper in the long run.

    • USD 8000 for an auto encrypt function?!!?!?!

      I don't want to even think about the price of tumbleweed itself....

      What's so special about that software? It's certainly not more secure than PGP (I use GPG+Thunderbird+Enigmail myself). Customer service, I suppose. They appear to use S/MIME.

      At least it looks safe to use.
  • and wrote my findings here: signed archives: an evaluation of trust [monkey.org]. from the abstract:

    in 2002, a series of high profile compromises of internet software servers resulted in the alteration of software archives. this prompted an evaluation of the state of trust of the signed software distribution system. over 2800 archives representing over 1400 unique software packages were downloaded and their corresponding signatures evaluated for validity. these software packages were pulled from over 260 different sites.
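    the survey's pass/fail tally amounts to something like the following toy sketch (in-memory "archives" against a published manifest; the real study verified detached PGP signatures, which a digest comparison only approximates):

```python
import hashlib

def tally(archives: dict, manifest: dict):
    """Count archives whose digest matches the published manifest entry,
    the way the survey tallied valid vs. invalid signatures."""
    valid = invalid = 0
    for name, data in archives.items():
        if manifest.get(name) == hashlib.sha256(data).hexdigest():
            valid += 1
        else:
            invalid += 1
    return valid, invalid
```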

  • Whoops! (Score:3, Interesting)

    by Asprin ( 545477 ) <(moc.oohay) (ta) (dlonrasg)> on Wednesday August 31, 2005 @08:44PM (#13450447) Homepage Journal

    Bruce's Argument #2) Just because a component is signed doesn't mean that it is safe.

    My Comments: I fully agree with this. However Code Signing was never intended for this purpose. Code signing was designed to prove the authenticity and integrity of the code. It was never designed to certify that the piece is also securely written.


    I thought the purpose of code signing was to vouch for the integrity of the SIGNER, not the code itself. If you want to argue that code signing guarantees the code because nobody but the signer could sign it, OK, but that still leaves you having to explain to users which signers are OK and which aren't.

    The thing that's always bugged me about signing is that it relies on the root cert issuers (Verisign, Thawte, etc.) to do their jobs and verify that their customers are who they say they are, and the sense I'm getting lately is that $800 and a valid email address is enough to convince them that you are anyone you want to be. Am I wrong?

    This is a bad example, but what happens if Joe the hacker incorporates a dummy company named "Micr0soft.com", registers a domain for it, installs web & mail servers with matching certs, then buys a code-signing cert from your favorite root-cert company, then uses that cert to sign plugins as "Micr0soft.com"? It would have a valid cert path, wouldn't it? Do the root cert issuers even check for that kind of crap anymore? They used to take D&B numbers as proof of identity, or in lieu of that, notarized copies of incorporation documents on letterhead, but I don't think they even bother checking anymore. At least, they didn't when I bought an SSL cert in December.
    • If the issuers of code signing certificates still relied on proof of incorporation, they would be discriminating against purchasers that are not corporations. Because the current version of Microsoft Windows puts up scary warnings if a program is not published by someone who has been issued a code signing certificate, Microsoft is using FUD against all software published by an individual, which includes much of the available free software. This could lead to yet another antitrust case, especially in a country already scrutinizing Microsoft.
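    The "Micr0soft" trick works because trust dialogs compare display strings, not identities. A crude confusables check makes the failure mode concrete (this is a toy subset of the idea behind Unicode TR39 confusable "skeletons"; the mapping table is purely illustrative):

```python
def skeleton(name: str) -> str:
    """Lowercase a name and map a few common look-alike characters to a
    canonical form. Real confusable detection (Unicode TR39) covers far
    more substitutions than this toy table."""
    table = str.maketrans({"0": "o", "1": "l", "5": "s"})
    return name.lower().translate(table)

def spoofs(candidate: str, trusted: str) -> bool:
    """True if candidate is confusable with a trusted publisher name
    without being literally equal to it."""
    return candidate != trusted and skeleton(candidate) == skeleton(trusted)
```

    A trust UI that only shows the raw string gives the user no help making exactly this distinction, which is Bruce's argument #1 in miniature.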

  • by Jaime2 ( 824950 ) on Wednesday August 31, 2005 @08:50PM (#13450490)
    As stated numerous times, code signing is not designed to let a user decide whether code is good or bad. But, for signed code, there is a way to track it back and make the author accountable. If all of today's viruses were signed, most of the authors would be caught. Even if they were signed in a fraudulent manner, there would be a thread to trace back. Enough threads and a good investigator will catch the bad guy.

    So, code signing is a sign of software good-faith. Everyone should show that they are distributing software as something more than an Anonymous Coward. It always disappoints me that major hardware manufacturers won't even sign their device drivers.
    • So, code signing is a sign of software good-faith. Everyone should show that they are distributing software as something more than an Anonymous Coward. It always disappoints me that major hardware manufacturers won't even sign their device drives.

      Evidently you haven't seen some fine examples of C2 Media's good faith. I've seen spyware sent signed hoping that some gullible users will accept it thinking it's ok.

      After seeing that I've confirmed what I always suspected: Microsoft's authenticode is 100% pure shit.

    • for signed code, there is a way to track it back and make the author accountable
      Unless, as per Bruce's point #5, the code modifies or deletes its signature - then your 'way to track it back and make the author accountable' doesn't exist.
  • Who are you anyway? (Score:3, Informative)

    by samj ( 115984 ) * <samj@samj.net> on Wednesday August 31, 2005 @09:07PM (#13450586) Homepage
    Bruce is right, code signing (at least in its present form) sucks. In fact trust in general sucks, and will until we come up with an intelligent way to assign it. So you want a 'whitelist'? By that you presumably think that the 'whitelist' of CAs rolled out with browsers works? It doesn't. Nor will telling 'safer' to consult it before running code.
  • by pVoid ( 607584 ) on Wednesday August 31, 2005 @09:12PM (#13450616)
    Bruce's Argument #1) Users have no idea how to decide if a particular signer is trusted or not.

    My comments: True. [...]The IT dept should know not to trust "Snake Oil Corp." [...]

    You are missing the point entirely: what if I were to present you with "Citrix Corp." and "Citrix Corporation" and "Cirtix Inc."? Which would you *know* comes from *the* Citrix corp.? Also, notice how the third one had a typo. And I will remind you of some guy who obtained a cert from Verisign in the name of a well known company. I forget which one it was, but it was something like Microsoft or Sun.

    Bottom line: the cert only assures you that the string ("Citrix") it corresponds to is correct. It doesn't say anything else. Which raises the question: why have a signature at all?

    Bruce's Argument #2) Just because a component is signed doesn't mean that it is safe.

    My Comments: [...]Code signing was designed to prove the authenticity and integrity of the code.[...]

    Again, this is beside the point: when you, for example, give shell access to students at university machines, all the binaries they run are part of a secure base. cp and ls are *the* tried and true binaries from every distribution. An administrator *knows* that they can trust that code.

    Now, let's say an administrator installs a signed ActiveX plugin. Let's say it's even the Flash player. What we cannot know, and what makes this mechanism extremely dangerous (by means of perceived safety), is that the player might have a security hole in it. So you might go to a web page, and an action script loaded into the player could cause the player to execute arbitrary code. This is a big no-no. And not because the player is flawed, but rather because you've decided to integrate this piece of code into your trusted base OS.

    Bruce's Argument #3) Just because two component are individually signed does not mean that using them together is safe; lots of accidental harmful interactions can be exploited.

    My comment: Again Code Signing was never designed to accomplish this.

    Bruce's Argument #4) "safe" is not an all-or-nothing thing; there are degrees of safety.

    My comment: I agree with this statement.

    Combined with the first two points, you're basically saying that there's no point in having code signing.

    Bruce's Argument #5) The fact that the evidence of attack (the signature on the code) is stored on the computer under attack is mostly useless: The attacker could delete or modify the signature during the attack, or simply reformat the drive where the signature is stored.

    This is a very important feature of security: auditing. If you have a system that's been compromised, you want to know how it happened. *Especially* if you are in a corporate environment: you see one workstation get 0wn3d and formatted, you won't be sitting around to see when the next one hits. You will want to know what did it.

    All in all, I agree with everything he says. Even though I'm just a mere mortal.

    • I agree with most of your posting.

      Also, I will remind you of some guy who had obtained a cert from verisign for the name of a well known company.

      Actually, he got two certificates - both of which impersonated Microsoft Corporation. They aren't much of a threat anymore, since both have since been revoked.

      Bottom line: the cert only assures you that the string ("Citrix") it corresponds to is correct. It doesn't say anything else. Which begs to ask: why have a signature?

      A proper cert system will also keep track of revocations.

  • If you're even talking about this topic you are clearly not living in the real world of tight schedules and out-of-control projects. Chances are you're working in government. Who cares?
  • The IT dept should know not to trust "Snake Oil Corp.", however anything from "Citrix Corp" should be fairly safe.
    No shit. The problem never was that, but whether "1024D/40558AC9" should be trusted, or perhaps if "1024D/8E297990" is more trustworthy.

    I can label my key with "Citrix Corp" if I like, and I might even be able to convince Verisign to sign it as these guys did [globus.org].

    Code Signing was never intended for [guaranteeing code is safe].
    But users don't know that. They're being duped into thinking that it does guarantee safety.
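    The parent's point about key IDs can be made concrete: a whitelist should key off the fingerprint, with the display label treated as decoration. (The fingerprints below re-use the parent's examples as hypothetical values; which one is "really" Citrix is invented for illustration.)

```python
# Hypothetical whitelist keyed by key fingerprint, never by display name.
TRUSTED_FINGERPRINTS = {"1024D/40558AC9"}

def is_trusted(fingerprint: str, label: str) -> bool:
    """The label plays no part in the trust decision; two keys can both
    say 'Citrix Corp' while only one is the key you actually vetted."""
    return fingerprint in TRUSTED_FINGERPRINTS
```

    This is essentially what the XP SP2 whitelist mentioned in the article has to do under the hood to be worth anything: pin the certificate, not the string.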

  • Given that anything you say, do, or code may become illegal in the future, why would you want it to be so traceable back to yourself? I kind of prefer to have email or whatever where the security is such that it would be very easy to raise reasonable doubt as to whether I myself actually am responsible for it or not.
  • S/GCC (Score:3, Interesting)

    by Doc Ruby ( 173196 ) on Thursday September 01, 2005 @01:29AM (#13451871) Homepage Journal
    Make GCC sign code in a post-process by default. It can spawn a background child, so the compiled code is immediately available, while the signer completes in the background. But it's become obvious that programmers and security pros (securers?) are two distinct roleplayers. Like programmers and sysadmins, or programmers and users, or even programmers and designers/architects, or programmers and graphic artists: rarely is the same person good at both (though sometimes programmers are good users). So programmer tools should automate the process according to best practices. Leaving it voluntary is no longer an acceptable risk.
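    The compile-then-sign-in-the-background flow the parent proposes might look like this sketch (the "signature" here is just a digest standing in for a real signing step, and the API is invented for illustration):

```python
import hashlib
import threading

def sign_in_background(binary: bytes, on_done):
    """Return control to the build immediately, as the parent suggests for
    a GCC post-process, while a worker thread computes the signature and
    reports it via the on_done callback."""
    def worker():
        # Stand-in for real signing with a private key.
        sig = hashlib.sha256(binary).hexdigest()
        on_done(sig)
    t = threading.Thread(target=worker)
    t.start()
    return t  # the build system can join() before shipping the artifact
```

    The design choice matches the parent's argument: the compiled output is usable right away, and the slow cryptographic step never blocks the edit-compile-test loop.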
