Are Open Standards Bad for Encryption? 18
An Anonymous Coward asks: "Open standards for most things are good -- they keep the big fish from closing the pond. For encryption, though, open standards make things easier to break. What will happen to computer security when someone finds a way to factor very large numbers? Every implementation on every computer will suddenly be obsolete. How would that affect the much heralded 'new economy' and how quickly would the encryption industry be able to reestablish a standard?"
Re:Wrong. (Score:2)
Thanks for clearing it up. I hereby stand (type?) corrected.
Re:Obscurity in Security.. (Score:3)
Presently (i.e. disregarding work in the quantum crypto/computing fields), the most secure -- and in fact the ONLY KNOWN UNBREAKABLE -- form of cryptography is the one-time pad (OTP). Its security comes from the fact that it is (if correctly keyed) invulnerable to mathematical attacks such as frequency analysis. In fact, the ENTIRE security model for OTPs rests on not letting the "bad guys" get the pads!! So by protecting the pads ("obscuring" them), you can, for now, guarantee security.
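To make this concrete, here's a quick Python sketch of a one-time pad. Note that the algorithm below (plain XOR) is completely public and trivial; every bit of the security lives in keeping the pad secret, random, and used only once. The function names and the toy message are mine, purely for illustration.

```python
import os

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding pad byte.
    The algorithm is fully public; ALL security is in the pad."""
    assert len(pad) >= len(plaintext), "pad must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Decryption is the very same operation: XOR is its own inverse.
otp_decrypt = otp_encrypt

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))   # truly random, used once, then destroyed

ciphertext = otp_encrypt(message, pad)
assert otp_decrypt(ciphertext, pad) == message
```

Publishing `otp_encrypt` costs you nothing; publishing `pad` costs you everything -- which is exactly the distinction the posts below hammer on.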
So, to answer the question in the article: in some cases, open crypto is your Achilles' heel. For systems that are algorithmically secure (RSA, Rijndael, etc.), open standards can be a good thing and allow the masses to poke holes in ways you wouldn't think to do...
Great question -- check out Simon Singh's "The Code Book", reviewed here on Slashdot, for more on this!
Electronic communication can't be made obscure. (Score:3)
Given this state of affairs, it's better to make the system open so that it can be more easily peer reviewed and audited for things like buffer overflows (yes, people who can peer-review encryption algorithms are rare, but they exist, and Joe Coder can spot stupid coding errors readily enough).
A hardware-based encryption scheme is tempting to implement, but you'll have to worry about a) your hypothetical adversaries obtaining schematics by bribery and social engineering, and b) someone gutting the box and (if necessary) stripping the chips to find out what the device is doing. Harder, yes, but not impractical by any stretch.
In summary, I don't trust obscurity to significantly hinder hostile parties' extraction of the encryption algorithm.
Obscurity in Security.. (Score:4)
Well... that all too often gets taken out of context.
What it means is: if your only security is the fact that nobody knows anything about your security measures, then you are deluding yourself. The phrase gets quoted so often because of the mass of security loopholes found in software over the last 15 years -- companies keeping it 'quiet' that there was a bug, hoping nobody would find out and that things would thereby stay secure. That's the kind of bad 'obscurity' we don't need.
On the other hand, obscurity can be an important aspect of a system's security. Take any old job -- say, the supermarket I used to work in (and my family owned). Because my family owned it, I got to observe who did what, and when, with regard to handling large amounts of cash. That's not to say we had no other security, but the fact that the average person who might want to rip us off has no idea how the money-handling process works is *part* of that security. If he knew what I knew, he'd be at an advantage.
And take system security. Why on earth would I publish my security? Certainly, if I have sensitive documents that are encrypted, I'm also going to keep it a secret which algorithms I used...that's part of the system.
To answer the other questions... (Score:2)
For those cryptographers in the audience, yes, I did handwave a little bit. Computing square roots modulo n is provably equivalent to the integer-factorization problem, so any break against RSA would also break Rabin. The general point still holds, though.
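For anyone curious why square roots mod n are tied to factoring: if you can find two square roots x and y of the same value mod n with x not congruent to ±y, then gcd(x - y, n) hands you a nontrivial factor of n. A tiny Python sketch (the numbers are a toy example of mine, not anything from Rabin's papers):

```python
from math import gcd

def factor_from_roots(x: int, y: int, n: int) -> int:
    """Given x^2 == y^2 (mod n) with x != +/-y (mod n),
    gcd(x - y, n) is a nontrivial factor of n, because
    n divides (x - y)(x + y) but neither factor alone."""
    assert (x * x - y * y) % n == 0
    f = gcd(x - y, n)
    assert 1 < f < n
    return f

# Toy example with n = 7 * 11 = 77: both 4 and 18 square to 16 mod 77
# (18*18 = 324 = 4*77 + 16), and 18 is not +/-4 mod 77.
print(factor_from_roots(18, 4, 77))  # → 7
```

So an oracle that extracts square roots mod n lets you factor n, which is the equivalence the parent post is handwaving about.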
Wrong. (Score:2)
But you're confusing the algorithm with the secret key.
In a conventional cipher, you protect a big secret (megabytes of market research data, whatever) with a much smaller secret -- say, 128 bits of a Blowfish key, or 2048 bits of an RSA key, or whatever. The entire security of your market research data rests in the secrecy of the key. If the key gets publicized, it's all over.
With a one-time pad, you perfectly protect a big secret (megabytes of market research data) with a secret of equivalent size. The entire security of your market research data rests in the secrecy of the one-time pad. If the one-time pad material gets publicized, it's all over.
A one-time pad is not hurt, in any way, by being one of the open techniques of cryptography. You're confusing the fact that the pad material must be kept secret with the algorithm for using the pad material being kept secret. That's not the case at all.
An OTP is conceptually no different from a block cipher running in output-feedback (OFB) mode. An OTP is more secure, absolutely, but from a conceptual standpoint an OFB cipher is identical: it generates a long stream of apparently-random values with which the original data is XORed. If an OTP has its Achilles' heel in its openness, then so does any block cipher running in OFB mode.
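The structural parallel is easy to see in code. Below is a sketch where a short key is stretched into a long keystream by iterated hashing -- SHA-256 is standing in for the block cipher purely to illustrate the shape of OFB-style feedback, not as a recommendation -- and the message is XORed against it exactly as with a pad:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """OFB-style sketch: each output block is fed back to produce
    the next. (Real OFB feeds a keyed block cipher back on itself;
    SHA-256 is just an illustrative stand-in here.)"""
    out = b""
    block = hashlib.sha256(key).digest()
    while len(out) < length:
        out += block
        block = hashlib.sha256(block).digest()
    return out[:length]

def xor_with_stream(data: bytes, stream: bytes) -> bytes:
    # Same operation a one-time pad uses -- only the stream's origin differs.
    return bytes(d ^ s for d, s in zip(data, stream))

msg = b"the pad and the keystream play the same structural role"
key = b"a short secret key"

ct = xor_with_stream(msg, keystream(key, len(msg)))
assert xor_with_stream(ct, keystream(key, len(msg))) == msg
```

The only difference from the OTP is where the stream comes from: truly random pad material versus a deterministic expansion of a small key. The XOR step, and the openness of that step, are identical.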
Nope. (Score:2)
Nope. I can get all the source for the login code of any Linux distro. There's no security afforded to it by means of obscurity. The nature of a login password is security through secrecy, which is a different beast from security through obscurity.
If logins were secure through obscurity, then having source code for the login would permit me to root anything I wanted. It doesn't.
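The secrecy-vs-obscurity distinction shows up plainly in how a login check can be written. Here's a sketch using PBKDF2 as a modern stand-in for the classic Unix crypt() scheme (the function names and iteration count are my own choices, not any particular distro's code): the entire algorithm can be published, and only the password stays secret.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # The algorithm (salted PBKDF2-SHA256, 100k iterations) can be
    # published in full; only the password itself is secret.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)

assert check_password("correct horse battery staple", salt, stored)
assert not check_password("guess", salt, stored)
```

Reading this source gains an attacker nothing against a well-chosen password -- which is exactly the difference between secrecy (of data) and obscurity (of method).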
Re:To answer the other questions... (Score:2)
My point still stands: there are lots of cryptographic methods available, and breaking one is very unlikely to bring the whole house of cards down.
Groan... (Score:3)
Put simply: if your system depends on the secrecy of the algorithm, it is pretty much guaranteed to be weaker than one which depends solely on the secrecy of the key. The reason is that weaknesses in a cryptosystem compound. If you have a system with two exploitable weaknesses, it's not merely half as secure as one with only one exploitable weakness -- the degradation is geometric, not linear.
Think of it this way: RSA is designed to be secure when properly used, even if your opponent is a genius in number theory. Why? Because geniuses designed and reviewed RSA, and still do so to this day, searching for any exploitable weakness. It has survived hundreds of thousands of hours of concerted, directed, extremely skilled cryptanalysis and come out on top.
Now look at a system like CSS, which was designed by some very bright folks but was never put out there for peer review. The cryptanalytic weaknesses of CSS are profound, and it seems like a new weakness is reported every other week. Had CSS been introduced in a scholarly journal, the DVD CCA would have had thousands of hours of cryptanalysis for free, and they'd have discovered that their cryptographic scheme was weak, had an exhaustible keyspace, and had so many weak keys that the already-too-short keyspace was reduced to absolute triviality.
Now, it's true that the vast majority of new ciphers are broken soon after they're released. Hell, even someone as illustrious as Rabin isn't immune--I've seen some very good cryptanalysis of Rabin's latest "provably secure" algorithm, and Rabin really stepped in it. Yes, anyone who rushes out to implement a new cipher is most probably smoking crack, because it's overwhelmingly likely that it will be broken within six months of its release.
It's just in the last year that I've begun to put a substantial amount of faith in Bruce Schneier's Blowfish algorithm, for instance, even though it's been out for almost ten years now. After nearly a decade of constant cryptanalysis, Blowfish is still surviving pretty damn well.
The alternative, using a closed-source, proprietary, non-peer-reviewed cipher, is absolutely absurd.
Even the NSA is learning this lesson the hard way. After they published the SKIPJACK [*] algorithm, a few Israeli cryptanalysts presented some theoretical attacks against it... and then, it seemed as if every single week the attacks got better and better and better as the crypto community invented totally new branches of cryptanalysis (impossible-differential cryptanalysis, mostly) with which to attack SKIPJACK.
Today, SKIPJACK is regarded as an utter failure. Of the original 32 rounds specified in the algorithm, there exist better-than-brute-force attacks against 31 rounds. That means SKIPJACK currently has no margin of safety, which means it's an absolute, utter, wretched failure.
And this is the National Security Agency we're talking about.
Short version: peer review is essential to getting good ciphers. Sure, you can posit that by having ciphers be open for peer-review it also makes attacks against them possible. That's true. But what you're overlooking, and what the crypto community keeps on shouting at the top of our lungs to anyone who'll listen, is that ALL OTHER METHODS ARE EVEN WORSE.
I'm sorry for shouting so much in this message, but really, guys. People keep on asking this question over and over again. And the answer is always the same, no matter how much you ask it.
[*] SKIPJACK, by the way, was the "ultra-secure" cipher at the heart of the Clipper Chip, if memory serves me right. Makes me feel all warm and fuzzy to think that the NSA was asking us to trust our encryption keys to government-controlled escrow while giving us a cipher that was so subtly and fatally flawed. Some people think these flaws were intentional on the NSA's part; I'm not one of them, since SKIPJACK was also supposed to be used by our own troops as part of a Defense Department initiative.
Security through Obscurity (Score:1)
Re:Security through Obscurity (Score:2)
Re:The same. (Score:1)
Open systems tend to have more bugs found and fixed but closed systems have far fewer exploits found -- they're about the same.
Think of OpenBSD and Windows NT, then say that again.
Security isn't so much a question of open vs. closed, but the underlying techniques behind the development. If you're always pushing for neat toys (ASP [a poisonous snake]) over fixing things (formatting string bugs), you'll end up with more holes, regardless of whether the end product is open or closed.
Getting back to encryption, would you trust my algorithm? I can't tell you what it is; you just have to trust me that it's "secure".
Re:Security through Obscurity (Score:1)
However, hardly anyone rearranges their keyboard keys to make it harder for an attacker to enter their root password. The data (password) is not open, but the method used to enter it (the keyboard layout) is an open standard.
intro question about cryptography. (Score:1)
Why did Kerckhoffs make such a radical statement? Because over the last, oh, roughly 500 years, history has told the sad tale of bold cryptographers who sold their systems as unbreakable and grossly underestimated the inventiveness of their enemies.
Ciphers (encryption algorithms) need to be designed to withstand the most cunning of opponents, whose main method is thinking "out of the box" to come up with differential cryptanalysis [execpc.com], timing attacks [cryptography.com] (timing how long an encryption takes), differential power analysis [cryptography.com] (measuring the power consumption), and impossible-differential cryptanalysis [nec.com] (figuring out which differentials aren't possible).
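The timing-attack idea is worth a small sketch, since it's the easiest "out of the box" attack to demonstrate. A comparison that bails out at the first mismatching byte runs slightly longer the more of the secret you've guessed correctly, letting an attacker recover it byte by byte; the fix is to always examine every byte. (This is my own illustrative example, not from any of the linked papers.)

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Returns at the FIRST mismatching byte, so running time leaks
    how long the matching prefix is -- a classic timing side channel."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so its running time does not depend on where the bytes differ.
    return hmac.compare_digest(a, b)

secret = b"s3cr3t-mac-tag!!"
assert not naive_equal(b"s3cr3t-mac-tag!?", secret)   # same answer...
assert not constant_time_equal(b"s3cr3t-mac-tag!?", secret)
assert constant_time_equal(secret, secret)            # ...different leakage
```

Both functions compute the same boolean; only the first leaks information through *how long* it takes -- exactly the kind of rule-breaking attack the cipher's designer never wrote down.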
Bruce Schneier at Counterpane Labs [counterpane.com] and Ross Anderson [cam.ac.uk] at Security Group [cam.ac.uk] at Cambridge University have several essays about how security systems fail because the enemy "breaks the rules". (Why Cryptosystems Fail [cam.ac.uk], Why Cryptography Is Harder Than It Looks [counterpane.com], etc.)
To understand more about how "security through obscurity" does more harm than good, read any one of the dozen accounts of the Enigma machine used during World War II and the Anglo-American (and Polish) effort which successfully analysed this "unbreakable" system. Try Code Breaking [slashdot.org], The Codebreakers, or The Code Book [slashdot.org].
Hardware based encryption scheme (Score:1)
The same. (Score:1)
Well, security through obscurity, however bashed here on Slashdot, is certainly a way of making it more difficult to get at an unencrypted message. Security through obscurity vs. a measurably secure system is almost an academic-vs.-commercial war, in that academics need to analyse and prove the algorithms while commercial people tend to use what has historically worked. I'm sure there are dozens more bugs yet to be unearthed in Windows that have already been patched and fixed in Linux. This doesn't make Windows bad and Linux better -- they are just different ways of looking at the same thing...
If you're a god looking into a system (the academic), immediately identifying all exploits, then you can make the system provably acceptable to yourself and other gods. But when you can't analyse the system and can only prod the black box, you certainly have a much lower chance of finding flaws. Also, the development time spent making something you would be proud to show others (fear of peer review in a closed-source world) may be better spent elsewhere. Again, this doesn't make closed encryption systems worse, just different.
Open systems tend to have more bugs found and fixed but closed systems have far fewer exploits found -- they're about the same.
Re:Obscurity in Security.. (Score:1)
Re:To answer the other questions... (Score:1)