Do You Code Sign? 259
Saqib Ali asks: "I am a regular reader of Bruce Schneier's blog, articles, and books, and I really like what he writes. However, I recently read his book 'Secrets and Lies,' and I think he has done some injustice to the security provided by code signing. On page 163 of his book, he basically states: 'Code signing, as it is currently done, sucks.' Even though I think that code signing has its flaws, it does provide a fairly good mechanism for increasing security in an organization." What are your thoughts on the current methods of code signing in existence today? If you agree with Bruce Schneier, how would you fix it? If you agree with Saqib Ali, what have you signed and how well has it worked?
"The following are the reasons that he (Bruce Schneier) gives:
Bruce's Argument #1) Users have no idea how to decide if a particular signer is trusted or not.
My comments: True. However, in an organization it is the job of the IT/security department to make that determination. It shouldn't be left up to users. The IT department should know not to trust "Snake Oil Corp.", whereas anything from "Citrix Corp" should be fairly safe. Moreover, Windows XP SP2 provides a mechanism to create a whitelist of certain trusted signers and reject everything else. This is a very powerful security mechanism, and it greatly increases security in a corporate environment if the workstations are properly configured. Having said that, this feature may not be that useful for home users, who cannot tell the difference between Snake Oil and Citrix Corp.
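The whitelist approach described above can be sketched in a few lines. This is a toy model in Python rather than the actual XP SP2 policy store, and the publisher names are hypothetical examples, not real policy data:

```python
# Toy sketch of a signer whitelist: accept code only from publishers the
# IT department has pre-approved, and reject everything else by default.
# Publisher names here are made up for illustration.

TRUSTED_PUBLISHERS = {"Citrix Corp", "Example Internal Tools Dept"}

def may_install(publisher: str) -> bool:
    """Default-deny: only exact matches against the approved list pass."""
    return publisher in TRUSTED_PUBLISHERS

print(may_install("Citrix Corp"))      # True: approved publisher
print(may_install("Snake Oil Corp"))   # False: anything unlisted is rejected
```

The key property is default-deny: an unknown signer is treated exactly like an untrusted one, which is what makes this workable for an IT department and useless for a home user who has no list to maintain.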
Bruce's Argument #2) Just because a component is signed doesn't mean that it is safe.
My comments: I fully agree with this. However, code signing was never intended for this purpose. Code signing was designed to prove the authenticity and integrity of the code. It was never designed to certify that the code is also securely written.
Bruce's Argument #3) Just because two components are individually signed does not mean that using them together is safe; lots of accidental harmful interactions can be exploited.
My comment: Again, code signing was never designed to accomplish this.
Bruce's Argument #4) "Safe" is not an all-or-nothing thing; there are degrees of safety.
My comment: I agree with this statement.
Bruce's Argument #5) The fact that the evidence of attack (the signature on the code) is stored on the computer under attack is mostly useless: the attacker could delete or modify the signature during the attack, or simply reformat the drive where the signature is stored.
My comments: I am not sure what this statement means. I think this type of attack is outside the realm of code signing. It is like saying host-based IDS or anti-virus are useless, because if you can compromise the system you can turn them off.
I would really appreciate any comments/thoughts/feedback on Bruce's arguments and my commentary above. I am planning to give a short talk about the benefits of code signing, so any feedback will really help me."
"Always trust code from Microsoft" (Score:5, Insightful)
By agreeing to always trust Microsoft you are agreeing to several things you may not realize:
The second one is the kicker. If there is a bug in some signed Microsoft code that allows JavaScript to call it and write to any file, then anybody can hand you that signed code plus some JavaScript and take over your computer. This will be done without any further notification at all to you as the end user.
You are trusting Microsoft to:
Even if you believe that code can be bug-free, there is no way anybody who writes code really locks it down so it can't be used for anything other than what it was intended for. There was a security vulnerability that took advantage of just this: a bug in some signed Microsoft code. I'm not sure how it was fixed.
Re:"Always trust code from Microsoft" (Score:5, Insightful)
For some reason there is no option to never trust certain certificates.
Re:"Always trust code from Microsoft" (Score:5, Informative)
In Firefox you have to remove three ticks instead of pressing one button, but those ticks are much easier to find. Not that anyone knows about it, but it is possible.
Re:"Always trust code from Microsoft" (Score:2)
Re:"Always trust code from Microsoft" (Score:2)
For some reason there is no option to never trust certain certificates.
Actually, this was finally added to XP SP2.
Re:"Always trust code from Microsoft" (Score:5, Insightful)
I know this is true, and bugs have been found in libraries. What was even more wrong is that the same key was used for multiple libraries, making it hard for Microsoft to put the key out of its misery (i.e., put it in a Certificate Revocation List).
This is an example where the technique is not so much wrong, but the system in which the technique is used is wrong (one of the spearpoints of Bruce). I do not want to give any web-site the ability to upload and install code on my computer, even if it is signed by someone I trust.
In principle, the idea that MS signs code for automatic updates of their own code is great; it takes out the man-in-the-middle attack (taking over the update site, attacks on proxies, etc.). Leave the code signing be, but leave the snags out.
Re:"Always trust code from Microsoft" (Score:5, Insightful)
Re:"Always trust code from Microsoft" (Score:2, Insightful)
I already have some 100MB of libraries that may (and do) contain bugs! What the signature says is that the code comes from MS, and that is a lot more than "I hope I typed the URL correctly".
Re:"Always trust code from Microsoft" (Score:5, Informative)
In addition to the two points on what you are trusting Microsoft to do, there is a third, even more important, thing that you are trusting. By "trusting" the signed code, you are also trusting the chain of certificates involved.
"Huh?" you say? "WTF does that mean?" Most of the time, the certificate that was used to sign the code was also signed by another certificate. This is supposed to establish a chain of trust. In Microsoft's example, their root certificate may be signed by Verisign. The theory is that Verisign is trusted by everybody, and therefore if Verisign signs someone's key, the signed key can also be trusted.
Unfortunately, the theory breaks down. There was a well-publicized instance where Verisign issued a code-signing certificate to someone claiming to be from Microsoft who actually wasn't. When Verisign screws up, or otherwise proves itself untrustworthy, the end user is left trying to figure out which "Microsoft" keys are good and which ones aren't. Above and beyond the fact that many users aren't equipped to make those decisions, the vast majority simply don't care.
In a closed-form environment (i.e. inside a company with a PKI in place, physical security on the PKI servers and root key, documented procedures for establishing the identities of the cert requestors, where the apps being signed are for internal use only), code signing, and even chain of trust, mostly works. Once you get out of that tight model, the signature on the code only says "This code was signed by someone claiming to be Microsoft".
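The chain-of-trust walk described in this comment can be modeled very roughly. Real chains use X.509 public-key signatures; in this toy sketch HMAC stands in for "signed by" purely to show the control flow, not the actual cryptography, and all names and keys are made up:

```python
# Toy model of walking a certificate chain up to a trusted root.
# HMAC is a symmetric stand-in for real public-key signatures here.
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

# Hypothetical keys; in reality only public halves would be distributed.
root_key = b"verisign-root-key"
ms_key = b"microsoft-signing-key"

# Each entry: (subject, subject's key, signature over that key, issuer name)
chain = [("Microsoft", ms_key, sign(root_key, ms_key), "Verisign")]
trusted_roots = {"Verisign": root_key}

def chain_ok(chain, trusted_roots) -> bool:
    for subject, key, sig, issuer in chain:
        issuer_key = trusted_roots.get(issuer)
        if issuer_key is None:
            return False  # issuer is not a trusted root
        if not hmac.compare_digest(sig, sign(issuer_key, key)):
            return False  # the issuer's signature on the key doesn't verify
    return True

print(chain_ok(chain, trusted_roots))  # True
```

Note what the check does and does not establish: `chain_ok` only confirms that each key was vouched for by its issuer. If the root CA issues a certificate to an impostor, as in the Verisign incident above, the chain still verifies perfectly.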
Re:"Always trust code from Microsoft" (Score:2, Funny)
Re:"Always trust code from Microsoft" (Score:2)
No, but... (Score:5, Funny)
Re:Now let's not go off on a tangent.... (Score:3, Funny)
Just a sine of the times. (Score:5, Funny)
Re:Just a sine of the times. (Score:2, Funny)
Re:No, but... (Score:2)
Do You Code Sign? (Score:5, Funny)
If... (Score:2)
Nah! (Score:2)
Yes, I sign everything (Score:5, Funny)
-----BEGIN PGP SIGNATURE-----
Version: PGP for Personal Privacy 5.0
MessageID: 5NWrD3M0/1xt+ynMPHbCYX+e3KSK9qhU
iQCVAwUBOFV2W1FO4fmE3w/VAQHgrgP9GlNAaTdNR7DI/Mh62H aZj49496wbM1Nhn p6nWR+Rrz+3DPCK 8yCEK0oe/aX0vv
YKlmtJIse2vcLF4LFVLJ47zQi4dK21vPlQ9XXAk4n4cype4gD
gpTUtsdlxZyMh0PvbAmssEX8z3In+cWgs43sjw6Tf0G4ENx68
mktgUuXP6A4=
=3mUU
-----END PGP SIGNATURE-----
Re:Yes, I sign everything (Score:5, Funny)
In case there is an imposter Anonymous Coward, finally we've got a way to detect it!
Re:Yes, I sign everything (Score:3, Informative)
You joke about that, but that's exactly what the authors of "Who wrote Sobig" did. They published anonymously, but put a public key in their text so no other "anonymous coward" could pretend to be them (or he, she or otherwise).
Re:Yes, I sign everything (Score:2, Funny)
No.
Re:Yes, I sign everything (Score:3, Interesting)
Bruce is right (Score:4, Insightful)
Re:Bruce is right (Score:5, Insightful)
You need both a sandbox and authentication of the provider. I can give you code for your sandbox that purports to be a login client for your bank, you enter your creds and I can send them to another URL or do other nasty things.
Code signing is designed to handle problems of the type "is this software from my bank really from my bank?" It's the same problem an SSL certificate solves. You can have a perfectly valid SSL certificate, but if it claims to be from your bank and really isn't, your data could go anywhere.
In other news, seatbelts proven not to prevent auto-accidents!
Re:Bruce is right (Score:2)
That's like saying you don't want cream on your cake since it already has chocolate on it.
Re:Bruce is right (Score:2)
Re:Bruce is right (Score:2)
Re:Bruce is right (Score:5, Insightful)
You can apply code signing to several things. For instance, you might use it while working from home: this way whoever receives your source can be quite sure it comes from you. It also assures that the source was not changed since you signed it, for instance by a virus. The latter relies on the code not having been infected before it was signed, though.
It could also be useful for distributions. Let's say somebody breaks into a Debian mirror and replaces sshd with a version with a backdoor. If code signing were in place, you could notice it quite easily. Now, you probably don't trust every developer individually, but trust them because their key was signed by the general Debian key. But still, something can be arranged. For instance:
Debian would have a master key that signs developers' keys. Debian would also have a list of developers, and a list of their projects, also signed with a key. And then there are packages signed by each developer.
To check trust, you check the signature, then make sure the developer who signed it belongs to that project. This way merely being a Debian developer is not enough to put a backdoor in some random package.
Of course, none of this assures complete security. It could be a bug, the developer's key could be stolen, etc. But this gives you interesting mechanisms, such as revoking a developer's key, and it makes life much harder for random script kiddies.
Now, I completely agree that this is not a panacea. But let's be realistic, while a web browser could run in a VM, I doubt very much this approach would work so well with sudo. Being able to make sure that the update to sudo you're about to install comes from the usual developer has some value.
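The policy sketched in this comment, where a signature counts only if the master key vouches for the developer and the developer maintains that particular package, can be written down as a small check. This is a sketch of the described scheme, not Debian's actual implementation, and all names are invented:

```python
# Sketch of the two-level policy described above: a package signature is
# accepted only if (a) the signer's key was signed by the master key AND
# (b) the signer is on the signed maintainer list for that package.
# All developer and package names below are hypothetical.

MASTER_SIGNED_DEVS = {"alice", "bob"}  # keys vouched for by the master key
PROJECT_MAINTAINERS = {
    "openssh": {"alice"},
    "coreutils": {"bob"},
}

def package_trusted(package: str, signer: str) -> bool:
    """Being a developer at all is not enough; project membership is checked."""
    return (signer in MASTER_SIGNED_DEVS
            and signer in PROJECT_MAINTAINERS.get(package, set()))

print(package_trusted("openssh", "alice"))  # True: the actual maintainer
print(package_trusted("openssh", "bob"))    # False: real dev, wrong project
```

This captures the point in the comment: a compromised or malicious developer key can only affect the packages that developer is listed for, not some random package, and revoking one developer's key is a contained operation.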
Re:Bruce is right (Score:3, Interesting)
In the example you're describing, the intended user is probably experienced enough that the signature means something to him (admin, developer, etc.). He probably knows that if he finds a piece of signed code but has no verified public key, the signature is worthless. He knows of webs of trust and chains of certificates. Some code is in fact signed with OpenPGP in the way
Re:Bruce is right (Score:2, Informative)
You've got some typos there. The word "if" falsely implies that Debian doesn't already do this. [debian-adm...ration.org] Replace it with "because". Several other words should be changed to past tense.
Re:Bruce is right (Score:2)
Of course, ultimate security is like what you get in Java: don't trust anybody and only allow the operations you specifically grant. But not many turn that on in their JVM. Anyway, that too is based first on code signing.
Re:Bruce is right (Score:5, Insightful)
However, pretty much every sandbox implementation has had exploitable bugs that allowed code running in the sandbox to get out.
So, even with a sandbox, it is wise to also avoid running code from people that you don't trust, so signing is still useful in a sandbox environment.
Also, a sandbox doesn't help with code that has to run outside the sandbox, such as device drivers, or new versions of whatever implements the sandbox.
Look at it this way: for a piece of code to do something malicious on your system, two things must happen: the code must be able to perform harmful operations on your system, and the code must actually be malicious.
You can protect your system by making sure that at least one of these conditions does not hold. Sandboxes try to make sure the first condition does not hold. Code signing tries to make sure the second condition does not hold.
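The complementary-defences argument above boils down to a simple conjunction, which a few lines make explicit. This is only an illustration of the model, not a security mechanism:

```python
# The model from the comment above: harm requires BOTH a malicious intent
# and the capability to act on the system. Blocking either condition is
# enough, which is why sandboxes and code signing are complementary
# rather than competing defences.

def can_cause_harm(is_malicious: bool, can_act_on_system: bool) -> bool:
    return is_malicious and can_act_on_system

# A sandbox tries to force can_act_on_system to False for untrusted code;
# code signing tries to keep is_malicious code from being run at all.
print(can_cause_harm(True, False))   # False: sandboxed malware is contained
print(can_cause_harm(False, True))   # False: trusted code with full access
print(can_cause_harm(True, True))    # True: neither defence held
```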
Re:Bruce is right (Score:2)
Re:Bruce is right (Score:2)
barf (Score:2)
I personally never accept "always accept".
Why code signing sucks. (Score:4, Insightful)
If Red Hat can't be bothered to sign any of its updates (even the kernel, for pete's sake), then why as a user should I care one way or another?
Re:Why code signing sucks. (Score:4, Informative)
Re:Why code signing sucks. (Score:2)
I believe there is also an option to up2date to turn off signature checking (--nosig). I don't understand the full implications, but this behavior is new in Fedora Core 4; I thought earlier versions of Fedora Core also had signed packages, but with an implementation that worked...
Re:Why code signing sucks. (Score:4, Funny)
Good comments (Score:3, Informative)
I would add that "always trust X" is not appropriate for home users, and it is good that MS makes the unchecked state the default. I don't recall MS telling me to always trust MS, and if they do, I would want to give them feedback about that wording.
The "always trust X" feature is best used by domain admins who can pre-approve stuff for their users. It's even better if they can re-sign the code themselves with a cert on the approved list.
--Jaborandy
Re:Good comments (Score:5, Insightful)
Re:Good comments (Score:5, Interesting)
I've been reading Bruce's writings for several years now. I've even met the man and had dinner with him. To be honest, I'm not entirely sure what keeps him going.
One common comment at his blog is that most of his writings point out the flaws, but few point out solutions. A perfectly valid criticism, and quite accurate. Having worked in the computer security industry for nearly ten years now, I am coming to the conclusion that there may be no solution. We've all heard the joke about the only secure computer (no power, locked in a safe, encased in concrete, and at the bottom of the ocean), and laughingly made comments about how security would be easier if it weren't for the users, but have we really thought about that?
I've written several comments on /. regarding security, and I'm starting to notice a trend: it isn't possible to really secure a computer if the end-user doesn't understand and/or care about security. Here on /. there are many, many people who care and understand. I run multiple firewalls on my systems AT HOME, plus antivirus and antispyware programs. I actually review my logs. I don't run any program that was written more recently than my AV updates. I'm what most "normal" people would consider paranoid. And I still run into issues.
Since I work in the industry, I am really struggling with this. I believe in security, I desire security, I really, really WANT security. I also see that none of my efforts will bring it as long as people are involved. People make coding mistakes. People are greedy. People are petty. People are malicious. The same instincts at work looting in New Orleans tonight lead some people to do anything in their power to hack other people's systems. The rest of the people, the so-called good people, sit at home and want their computers to be as simple as their toasters. They don't want to have to know about viruses, spyware, phishing, and Nigerian 419 scams. They want email, smilies, and porn.
Regardless of how despondent I feel about security in general, security theater really pisses me off. When I see a product or a process being sold as perfect security or as any kind of silver bullet, I just have to yell. People believing that one relatively good tool will fix everything is bad enough, but when they're told that a worthless tool will fix all their problems...
In theory, code signing has the potential in some environments to limit the risks from certain vulnerabilities. In practice, code signing for the masses is worse than worthless, because Joe User sees "Do you trust Microsoft?" and honestly believes that the code will do him no harm. He will then download and run any program, regardless of where it actually came from, as long as he gets presented with another "Do you trust Microsoft?" button, because he's been conditioned to say "Yes" by Windows Update. In this case (i.e. for general use on the Internet), the "all or nothing" concept is appropriate. Joe User would be far better off treating every application with suspicion than learning that the Code Signing Fairy will bless certain bits and everything else will be covered in foul-smelling, rotten tomatoes. There is no way that the code signing theory is applicable in general use, so using it is a bad idea.
Now that I'm sufficiently depressed, I think I hear a bottle of Jack Daniels calling me
Re:Good comments (Score:2)
In the "all or nothing" world, I envision my wife's grandparents clicking No to every box even when they should click Yes (Do you want to install Ad-Aware? Do you want to update your antivirus definitions? Do you want to delete the virus?). They are "all or nothing" people.
Re:Good comments (Score:2)
Yes, but one could argue that in this case the technology is misleading. The average user (unlike an experienced hacker who knows not to click the "always trust" box) may wrongly interpret code signing as a panacea, or at least something close to one. This can give them a false sense of security, which can be a huge risk.
Do you code write? (Score:3, Informative)
Onus on user doesn't help (Score:3, Interesting)
Re:Onus on user doesn't help (Score:2)
Re:Onus on user doesn't help (Score:2)
2 things... (Score:2, Informative)
Maybe Bruce himself reads /. and will post. I read his blog daily and I know he often posts comments in his own blog.
What it tells you... (Score:3, Insightful)
In other words, it doesn't eliminate risk, but it does quantify it - provided that the signature chain is meaningful.
For example, a "correct" approach is to have the package maintainer sign the package as verified by the maintainer. The maintainer's key is signed by someone else - pr
Signed is still better than unsigned (Score:2)
You can probably co-opt Thawte's freemail certificate for code signing. [dallaway.com]
Point #5 (Score:3, Insightful)
But how would it not be the real executable? I only see two possibilities:
1. Somebody hacked into RedHat's servers and overwrote the executable. But if they did that, why not just overwrite the signature too? (I know, it isn't that simple if the signature mechanism uses a public key, which I suspect that it does. Then you would have to have access to a valid RedHat private key to sign the bad executable. But you could just delete the signature instead, making it look like RedHat didn't bother to sign the file.)
2. Somebody is playing with you via DNS or ARP poisoning or some such, and you aren't going to RedHat at all. But the exact same argument applies - they just remove the signature, and who's to know? (Well, everybody knows who is checking signatures, but everybody assumes "they just didn't sign it" rather than "oops, hostile action!")
So the point is that signatures don't really protect you here, unless you are really paranoid, and in practice, very few people really operate consistently in paranoid mode...
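The "they just removed the signature" attack described above only works if clients treat a missing signature as acceptable. A stricter client treats absence exactly like an invalid signature. This sketch uses a detached SHA-256 digest as a stand-in for a real public-key signature; the package contents are made up:

```python
# Policy sketch: a missing signature is a verification FAILURE, not a shrug.
# A detached SHA-256 digest stands in for a real signature here; the point
# is the policy, not the crypto.
import hashlib
from typing import Optional

def verify_download(data: bytes, expected_sha256: Optional[str]) -> bool:
    """Reject when the signed digest is absent, not just when it mismatches."""
    if expected_sha256 is None:
        return False  # "RedHat just didn't sign it" is not an acceptable state
    return hashlib.sha256(data).hexdigest() == expected_sha256

pkg = b"pretend package bytes"
good_digest = hashlib.sha256(pkg).hexdigest()
print(verify_download(pkg, good_digest))  # True: intact and signed
print(verify_download(pkg, None))         # False: signature was stripped
```

Under this policy, the attacker who deletes the signature gains nothing over the attacker who corrupts it; both downloads are refused. The weakness the comment identifies is real precisely because most users, and some tools, do not enforce mandatory signing.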
Re:Point #5 (Score:4, Insightful)
2. Again, poor decision.
At the end of the day, it is up to the user to determine what to trust and what not to trust. They are the only ones who can make the trust decision. Code signing is intended to give users the information they need to make that decision. If you want to take the decision out of the hands of the users, a 3rd party must decide what can and can't be safely run on a machine. That isn't an acceptable solution.
Security is entirely about paranoia. You lock your front door because you're afraid someone is going to walk into your house and steal your stuff. You lock your car because you're afraid someone is going to steal it. You have a logon/password to your computer because you're afraid someone is going to find your porn collection.
If you want to operate a computer in an environment that exposes you to hostile applications, you must be paranoid enough to determine where an executable came from and whether you trust that source before running it.
It depends on the context. (Score:2)
If we're talking about people like me, who offer pgp signatures of the software they wrote along with the tarballs, then its worth doing. Based on how ma
I do. (Score:2, Informative)
About Bruce's Argument #1, that is true. However, the idea is that whoever they got their certificate from (Comodo, Thawte, Verisign, etc.) will revoke the certificate as soon as they do something against the rules. It will show as revoked if the user is on-line when the screen
Argument #5 (Score:3, Informative)
It is sometimes said of code signing that even though it does not guarantee the safety of a downloaded component, at least you know who to blame if it crashes your computer. But a bad guy can sign his component, you accept the signature, and then the component can delete all traces of the signature from your computer. So even if you later realize that it was a bad component, you have no means to review the signature.
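One mitigation for exactly this problem is to record the signer and a hash of the component somewhere the installed code cannot later rewrite, such as a remote log or write-once storage. In this toy sketch an in-memory list stands in for that off-host, append-only audit trail; the component and signer names are invented:

```python
# Sketch: keep an audit record of who signed what OUTSIDE the machine that
# runs the code. A plain list stands in for a remote append-only log here.
import hashlib

audit_log = []  # stand-in for an off-host, append-only audit trail

def record_install(signer: str, payload: bytes) -> None:
    """Called at install time, before the component ever runs."""
    audit_log.append((signer, hashlib.sha256(payload).hexdigest()))

def who_signed(payload: bytes):
    """Even if the component later deletes its on-disk signature,
    the off-host record still says who signed these exact bytes."""
    digest = hashlib.sha256(payload).hexdigest()
    return [signer for signer, d in audit_log if d == digest]

component = b"totally legitimate component"
record_install("Shady Software LLC", component)
print(who_signed(component))  # ['Shady Software LLC']
```

This is the same reasoning as remote syslog for intrusion detection: evidence kept only on the compromised host is evidence the attacker controls.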
Do I Code Sign? (Score:3)
Editors: Please correct the mistake in the headline. Thank you.
Re:Do I Code Sign? (Score:2)
Re:Do I Code Sign? (Score:2)
On the book of Bruce (Score:2)
The small part about code signing is just that: a not-so-well-thought-out part of his book with little around it to back it up. It mentions ActiveX especially, which probably means that it is taking code signing to mean "this package has safe code inside it, which other (untrusted) applications may run." I agree wholeheartedly that this is littl
Corporations vs. The World (Score:2, Insightful)
Me, I'm just one guy. I'm not going to do due diligence for every piece of software out there. Half of what I use isn't signed. Should I just give up on it? I don't have TIME to deal with security enough to make a whitelist.
Code signing for trust (Score:5, Insightful)
You state that several of Bruce's arguments do not apply, since code signing wasn't designed to solve problem A or problem B. Unfortunately, this isn't an issue of what signing was designed to solve, it is a question of what the end user thinks code signing is for.
If the end user is presented with pop-ups asking "Do you want to trust code from Company X?", the user will be making a decision about that trust. They may (or may not) be concerned with questions such as "Will this code crash my computer?" or "Is this a Trojan horse?". They couldn't care less if the code was really authored by Simon P. Coder while under the employ of Company X. When they click "Always Trust", if they're thinking at all (not guaranteed), they will think that the code is safe, won't crash, and won't have extra "features" that steal their private information.
This is Bruce's point. Because of the presentation and implementation issues, most end users are left with the impression that signed code == good code, an impression that is not always accurate. If the technology is leading end users to believe things that simply aren't true, there is a problem. In certain limited, tightly-controlled environments, code signing can work as intended. In general, it is at best an annoyance to end users and at worst a complete fraud.
Is It Even Solvable? (Score:2)
This makes the assumption that it's a solvable problem; not every problem is, especially when it comes to computer security (or security in general, for that matter).
Re: (Score:2)
What it's useful for in large enterprises (Score:2)
GNU LGPL (Score:2)
it's critical to be able to say "I am building this application against this component/library - when it gets out in the production world, don't link/load any version of the library that isn't digitally signed with the private key that matches this public key".
Interpreting "don't" as "you break the EULA if you do" violates the GNU Lesser General Public License, as the license requires the author of a program that depends on a covered library to allow users to upgrade that library. On
One man's quirky tale (Score:3, Funny)
Naturally the request came to me (even though I develop web applications). I whipped up a ten line Outlook macro that did it, spending about twenty minutes on it. Easy, right? The catch is that Outlook security is incredibly tight and unless you open massive security holes, the macro wouldn't run unless it was digitally signed by a trusted provider.
I plunked down $400 for a Verisign certificate and spent the next couple weeks working with our SMS guy to create packages for the various Outlook versions, and the desktop guys to deal with people who had custom Outlook macros.
Basically it was a huge hassle, done only because we had to. Still, it worked, and ended up saving some money. Crazy, though.
Re:One man's quirky tale (Score:2)
Naturally the request came to me (even though I develop web applications). I whipped up a ten line Outlook macro that did it, spending about twenty minutes on it. Easy, right? The catch is that Outlook security is incredibly tight and unless you open massive security holes, the macro wouldn't run unless it was digitally signed by a trusted provider.
I plunked down $400 for a Verisign
Re:One man's quirky tale (Score:2)
Re:One man's quirky tale (Score:2)
Personally I mock the people who can't juggle more than one or two projects. I typically have a dozen or two, continually ebbing and flowing, often not really eve
Re:One man's quirky tale (Score:2)
Re:One man's quirky tale (Score:2)
I don't want to even think about the price of tumbleweed itself....
What's so special about that software? It's certainly not more secure than PGP (I use GPG+Thunderbird+Enigmail myself). Customer service, I suppose. They appear to use S/MIME.
At least it looks safe to use.
i looked at this a few years ago ... (Score:2)
Whoops! (Score:3, Interesting)
Bruce's Argument #2) Just because a component is signed doesn't mean that it is safe.
My Comments: I fully agree with this. However Code Signing was never intended for this purpose. Code signing was design to prove the authenticity and integrity of the code. It was never designed to certify that the piece is also securely written.
I thought the purpose of code signing was to vouch for the integrity of the SIGNER, not the code itself. If you want to argue that code signing guarantees the code because nobody but the signer could sign it, OK, but that still leaves you having to explain to users which signers are OK and which aren't.
The thing that's always bugged me about signing is that it relies on the root cert issuers (Verisign, Thawte, etc.) to do their jobs and verify that their customers are who they say they are, and the sense I'm getting lately is that $800 and a valid email address are enough to convince them that you are anyone you want to be. Am I wrong?
This is a bad example, but what happens if Joe the hacker incorporates a dummy company named "Micr0soft.com", registers a domain for it, installs web and mail servers with matching certs, buys a code-signing cert from your favorite root-cert company, and then uses that cert to sign plugins as "Micr0soft.com"? It would have a valid cert path, wouldn't it? Do the root cert issuers even check for that kind of crap anymore? They used to take D&B numbers as proof of identity, or in lieu of that, notarized copies of incorporation documents on letterhead, but I don't think they even bother checking anymore. At least, they didn't when I bought an SSL cert in December.
"Not Signed; Don't Install" is FUD against free SW (Score:2)
If the issuers of code signing certificates still relied on proof of incorporation, they would be discriminating against purchasers that are not corporations. Because the current version of Microsoft Windows puts up scary warnings if a program is not published by someone who has been issued a code signing certificate, Microsoft is using FUD against all software published by an individual, which includes much of the available free software. This could lead to yet another antitrust case, especially in a coun
What code signing is for (Score:3, Interesting)
So, code signing is a sign of software good faith. Everyone should show that they are distributing software as something more than an Anonymous Coward. It always disappoints me that major hardware manufacturers won't even sign their device drivers.
Re:What code signing is for (Score:2)
Evidently you haven't seen some fine examples of C2 Media's good faith. I've seen spyware sent signed hoping that some gullible users will accept it thinking it's ok.
After seeing that, I've confirmed what I always suspected: Microsoft's Authenticode is 100% pure shit.
Re:What code signing is for (Score:3)
What they probably want is to get their software accepted; identification is mostly a secondary effect.
When I saw signed spyware, I first thought someone had compromised a key (it was a 512-bit RSA key, so it could have been factored). Further investigation showed that the certificate was good and they really were a spyware company.
The problem wit
Re:What code signing is for (Score:3, Insightful)
Re:What code signing is for (Score:3)
That implies that if you have enough other software installed, you may be able to derive some benefit from a signature, though it's not of much use by itself. Of course, if you have the right additional software, properly configured (along with the rest of the system), you are less likely to get into trouble in the first place.
What would a sig tell you? There are a few possibilities:
1) Who signed the code
2) If the certificate system has been cracked then who
Who are you anyway? (Score:3, Informative)
My answer to your answers:... (Score:5, Insightful)
My comments: True. [...]The IT dept should know not to trust "Snake Oil Corp." [...]
You are missing the point entirely: what if I were to present you with "Citrix Corp." and "Citrix Corporation" and "Cirtix Inc."? Which would you *know* comes from *the* Citrix corp? Also, notice how the third one had a typo. And I will remind you of some guy who obtained a cert from Verisign in the name of a well-known company. I forget which one it was, but it was something like Microsoft or Sun.
Bottom line: the cert only assures you that the string ("Citrix") it corresponds to is correct. It doesn't say anything else. Which begs the question: why have a signature at all?
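The string-matching problem here is easy to demonstrate. A certificate binds a signature to one exact string, so a whitelist accepts only that string, while a human skimming a dialog box may not notice a one-letter difference. The names below are the hypothetical examples from this thread:

```python
# The cert vouches for an exact string, nothing more. Lookalike names are
# distinct strings to the machine but near-identical to a human eye.
trusted = {"Citrix Corp."}

candidates = ["Citrix Corp.", "Citrix Corporation", "Cirtix Inc."]
for name in candidates:
    print(repr(name), name in trusted)
# Only the first exact string passes; the transposed-letter "Cirtix Inc."
# fails the machine check but could easily fool a user reading a dialog.
```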
Bruce's Argument #2) Just because a component is signed doesn't mean that it is safe.
My Comments: [...]Code signing was design to prove the authenticity and integrity of the code.[...]
Again, this is beside the point: when you, for example, give shell access to students on university machines, all the binaries they run are part of a secure base. cp and ls are *the* tried and true binaries from every distribution. An administrator *knows* that they can trust that code.
Now, let's say an administrator installs a signed ActiveX plugin. Let's say it's even the Flash player. What we cannot know, and what makes this mechanism extremely dangerous (by means of perceived safety), is that the player might have a security hole in it. So you might go to a web page, and an ActionScript loaded into the player could cause it to execute arbitrary code. This is a big no-no. And not because the player is flawed, but rather because you've decided to integrate this piece of code into your trusted base OS.
Bruce's Argument #3) Just because two component are individually signed does not mean that using them together is safe; lots of accidental harmful interactions can be exploited.
My comment: Again, Code Signing was never designed to accomplish this.
Bruce's Argument #4) "safe" is not an all-or-nothing thing; there are degrees of safety.
My comment: I agree with this statement.
Combined with the first two points, you're basically saying that there's no point in having code signing.
Bruce's Argument #5) The fact that the evidence of an attack (the signature on the code) is stored on the computer under attack makes it mostly useless: the attacker could delete or modify the signature during the attack, or simply reformat the drive where the signature is stored.
This is a very important feature of security: auditing. If you have a system that's been compromised, you want to know how it happened. *Especially* if you are in a corporate environment: if you see one workstation get 0wn3d and formatted, you won't be sitting around waiting to see when the next one gets hit. You will want to know what did it.
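One standard answer to "the attacker can erase the evidence" is to keep the baseline off the audited host, in the style of tools like Tripwire. A minimal sketch (assumed file names; the off-host storage itself is out of scope here):

```python
import hashlib
import pathlib

def manifest(paths):
    """Hash each file. The resulting manifest should be stored OFF the
    audited host (write-once media, a remote log server, ...) so that an
    attacker who wipes or modifies the disk cannot also erase the evidence."""
    return {str(p): hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in paths}

def audit(baseline, current):
    """Return the paths whose current hash no longer matches the baseline."""
    return [p for p, h in baseline.items() if current.get(p) != h]
```

The signature on the code is only useful as audit evidence if a copy of it (or at least of the hashes it covers) survives somewhere the attacker can't reach.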
All in all, I agree with everything he says. Even though I'm just a mere mortal.
Re:My answer to your answers:... (Score:2)
Actually, he got two certificates, both of which impersonated Microsoft Corporation. They aren't much of a threat anymore, since both have since been revoked.
A proper cert system will also keep t
Re:My answer to your answers:... (Score:2)
The end user downloads your software, downloads your public key from one (or more, if they're paranoid) public keyservers, and verifies the signature on the software.
If You Are Even Talking About Code-Signing (Score:2, Funny)
Rebut! (Score:2)
The IT dept should know not to trust "Snake Oil Corp.", however anything from "Citrix Corp" should be fairly safe.
No shit. The problem never was that, but whether "1024D/40558AC9" should be trusted, or perhaps whether "1024D/8E297990" is more trustworthy.
I can label my key with "Citrix Corp" if I like, and I might even be able to convince Verisign to sign it as these guys did [globus.org].
Code Signing was never intended for [guaranteeing code is safe].
But users don't know that. They're being duped into thinking that
Why take the risk (Score:2)
S/GCC (Score:3, Interesting)
Re:Anybody can get code signed if they send cash (Score:4, Insightful)
Re:Anybody can get code signed if they send cash (Score:2)
If they do something bad with their key, Verisign will put their certificate up on their Certificate Revocation List. And last but not least, where do you store your private key? I know Verisign
Is OpenCA "trusted"? (Score:2)
You can use OpenCA or any other open source CA to keep track of your certificates and keys.
But has Microsoft chosen to trust OpenCA for Windows Code Signing? Because of the Big Scary Alert Box in Windows XP SP2 when running a setup.exe file downloaded from the Internet, a lot of inexperienced users have chosen to trust only those organizations that Microsoft has chosen to trust. Are OpenCA's certificates distributed along with the home PC market's "favourite web browser or operating system (or both)"?
Re:Is OpenCA "trusted"? (Score:2)
Well, in that case, as far as I can see, the problem here is with Microsoft and its Internet Explorer.
I, for one, wouldn't trust Microsoft to be my "security advisor" telling me what's safe and what's not (Big Scary Alert
It shouldn't matter if your website is compromised (Score:2)
Typos in certificates? (Score:2)
Offering a PGP signature for your code already solves the compromised-website problem.
How can one get the Windows operating system's built-in code authentication systems to work with OpenPGP format signatures in addition to X.509 format signatures?
Your public key should not be on your website; it should be on several publicly available keyservers, so an attacker would have to hack a bunch of keyservers, plus your website, to be able to do anything.
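The "several keyservers" idea can be sketched as a quorum check (the key ID and key material below are made up, and real keyservers speak HKP rather than returning dicts): fetch the same key ID from every server and refuse the key unless they all agree, so a single compromised server or website accomplishes nothing.

```python
def fetch_key(keyid, servers):
    """Fetch `keyid` from every server (each modeled here as a dict of
    keyid -> key material) and require unanimous agreement. An attacker
    must then compromise every server, not just one, to slip in a forgery."""
    keys = {s[keyid] for s in servers if keyid in s}
    if len(keys) != 1:
        raise ValueError("keyservers disagree -- refuse the key")
    return keys.pop()

# Hypothetical responses from three independent keyservers.
honest = {"1024D/40558AC9": b"...alice's real public key..."}
servers = [dict(honest), dict(honest), dict(honest)]

key = fetch_key("1024D/40558AC9", servers)
```

If one server in the list were replaced by an attacker's copy serving different key material, `fetch_key` would raise instead of handing back the forged key.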
In this case, "hack[ing] a bunch of keyservers" wou
Windows code signing does not work with GPG (Score:2)
You can sign your code using a public key scheme like GPG. No need for a middle-man like VeriSign.
Microsoft Internet Explorer in Windows XP Service Pack 2 marks each file that you download from the Internet as "from the Internet". If an executable is "from the Internet", Windows Explorer will attempt to verify the file's digital signature. If this fails, you get a Big Scary Alert Box whose suggested action is Cancel. Problem here: Windows Explorer understands only signatures using middle-man technology,
Re:This is it (Score:2, Funny)
"Yeah, I code signs. Ever been by the I75 exit on the Ohio Turnpike? That's mine."
Re:It's Simple (Score:2)
Re:Signatures aren't useful for security (Score:2)
Anything that only permits signed code to run is fundamentally flawed
In that case, the Atari 7800, Atari Lynx, Atari Jaguar, Microsoft Xbox, and Nintendo DS wireless are fundamentally flawed, but the Xbox and NDS still sell. All of the above systems refuse to execute code that has not been signed by the maker of the device.