Encryption Security

Are Open Standards Bad for Encryption?

An Anonymous Coward asks: "Open standards for most things are good -- they keep the big fish from closing the pond. For encryption, though, open standards make things easier to break. What will happen to computer security when someone finds a way to factor very large numbers? Every implementation on every computer will suddenly be obsolete. How would that affect the much heralded 'new economy' and how quickly would the encryption industry be able to reestablish a standard?"
  • ack! i realized that right after i posted... who knew i'd need to THINK on a saturday?!

    Thanks for clearing it up. I hereby stand (type?) corrected.

  • by psychosis ( 2579 ) on Saturday April 14, 2001 @08:24AM (#292333)
    Good points! In most cases, I agree that obscurity is marginal protection at best, but after reading this question an interesting thought came to mind:

    Presently (i.e. disregarding work in the quantum crypto/computing fields), the most secure, and in fact the ONLY KNOWN UNBREAKABLE, cryptography is a one-time pad (OTP). The security here is that it is (if correctly keyed) invulnerable to mathematical attacks such as frequency analysis. In fact, the ENTIRE security model for OTPs rests on the fact that you don't let the "bad guys" get the pads!! So by protecting the pads ("obscuring" them), you can, for now, guarantee security. (There's a tiny sketch of the idea at the end of this comment.)

    So, to answer the question in the article, in some cases, open crypto is your Achilles' heel. In those that are algorithmically secure (RSA, Rijndael, etc.), open standards can be a good thing and allow the masses to poke holes in ways you wouldn't think of...

    Great question - check out Simon Singh's "The Code Book" (reviewed here on Slashdot somewhere) for more on it!
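
    A minimal sketch of the idea, in Python (toy code of my own, not a vetted implementation): the algorithm is just XOR and can be completely public; the pad is the whole secret, and it must be truly random, as long as the message, and never reused.

        import secrets

        def otp_xor(message: bytes, pad: bytes) -> bytes:
            # The pad must be at least as long as the message and used only once.
            assert len(pad) >= len(message)
            return bytes(m ^ p for m, p in zip(message, pad))

        message = b"attack at dawn"
        pad = secrets.token_bytes(len(message))     # THIS is what you protect

        ciphertext = otp_xor(message, pad)          # encryption
        assert otp_xor(ciphertext, pad) == message  # decryption is the same XOR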

  • Unfortunately, the question is academic for any computer-based encryption scheme. If anyone wants to know how the encryption works, they'll run the binary through a disassembler and a debugger and find out in glorious detail. Thus, against the adversaries you're interested in (people actively trying to subvert the encryption), there is no practical difference between an open and a closed encryption scheme.

    Given this state of affairs, it's better to make the system open so that it can be more easily peer reviewed and audited for things like buffer overflows (yes, people who can peer-review encryption algorithms are rare, but they exist, and Joe Coder can spot stupid coding errors readily enough).

    A hardware-based encryption scheme is tempting to implement, but you'll have to worry about a) your hypothetical adversaries obtaining schematics by bribery and social engineering, and b) someone gutting the box and (if necessary) stripping the chips to find out what the device is doing. Harder, yes, but not impractical by any stretch.

    In summary, I don't trust obscurity to significantly hinder hostile parties' extraction of the encryption algorithm.
  • by mindstrm ( 20013 ) on Saturday April 14, 2001 @03:07AM (#292335)
    Yeah... everyone likes to blindly shout the mantra 'Security through Obscurity is no security at all!'.

    Well... that all too often gets taken out of context.

    What it means is, if your only security is the fact that nobody knows anything about your security measures, then you are deluding yourself. This is most commonly quoted because of the massive number of security loopholes in software over the last 15 years: companies keeping it 'quiet' that there was a bug, hoping nobody would find out and that things would therefore stay secure. That's the kind of bad 'obscurity' we don't need.

    On the other hand, obscurity can be an important aspect of a system's security. Take any old job: the supermarket I used to work in (and that my family owned), for instance. Because my family owned it, I got to observe who handled large amounts of cash, what they did, and when. That's not to say that we had no other security, but the fact that the average person who might want to rip us off has no idea how the money-handling process works is *part* of that security. If he knew what I knew, he'd be at an advantage.

    And take system security. Why on earth would I publish my security? Certainly, if I have sensitive documents that are encrypted, I'm also going to keep it a secret which algorithms I used...that's part of the system.
  • How would that affect the much heralded 'new economy' and how quickly would the encryption industry be able to reestablish a standard?

    1. Nobody has any clue. That's an economics question, not a CompSci question. Ask ten economics professors that question and you'll get at least twenty different contradicting answers, no two alike. Repeat: nobody has any clue.
    2. About three seconds. "A fast method of factoring large composites has been developed? Fuck, man. Time to switch to El Gamal." Or, alternately, "A fast method of computing discrete logarithms in a finite field has been developed? Fuck, man. Time to switch to Rabin." Or, alternately, "A fast method of computing square roots modulo n has been developed? Fuck, man. Time to switch to elliptic-curve systems." Or, alternately... you get the idea. There are lots of perfectly good cryptosystems out there for the using. It would take far, far, far longer for computer manufacturers to ship updated SSL plugins and whatnot than it would for the cryptographic community to find a good replacement.
    3. How long it would take the market to field replacements is an economic question, not a computer science one. See point 1.
    ...This really, really, really oughtta be in a FAQ somewhere.

    For those cryptographers in the audience, yes, I did handwave a little bit. Computing square roots modulo n is provably equivalent to the integer-factorization problem, so a fast method of taking square roots modulo n would also break RSA, not just Rabin. The general point still holds, though.
  • by rjh ( 40933 )
    Open access to the methodology behind an OTP doesn't help anyone out in breaking it. Not even quantum computation can nail an OTP; it is provably secure against quantum computing, which is a really cool thing and is why it is used in some quantum cryptographic schemes.

    But you're confusing the algorithm with the secret key.

    In a conventional cipher, you protect a big secret (megabytes of market research data, whatever) with a much smaller secret--say, 128 bits of a Blowfish key, or 2048 bits of an RSA key, or whatever. The entire security of your market research data rests in the secrecy of the key. If the key gets publicized, it's all over.

    With a one-time pad, you perfectly protect a big secret (megabytes of market research data) with a secret of equivalent size. The entire security of your market research data rests in the secrecy of the one-time pad. If the one-time pad material gets publicized, it's all over.

    A one-time pad is not hurt, in any way, by being one of the open techniques of cryptography. You're confusing keeping the pad material secret (which you must do) with keeping the algorithm for using the pad secret (which you needn't do at all).

    An OTP is conceptually no different from a block cipher running in output-feedback mode. An OTP is more secure, absolutely, but from a conceptual standpoint an OFB cipher is identical; it generates a long stream of apparently-random values with which the original data is XORed. If an OTP has its Achilles' heel in its openness, then so does any block cipher running in OFB mode.
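
    To make that conceptual identity concrete, here is a toy sketch (my own illustration; SHA-256 stands in for a real block cipher, so don't use this for anything): an OFB-style generator feeds each output block back into itself to produce a long keystream, and the data is XORed with that stream exactly as it would be XORed with a pad.

        import hashlib

        def toy_ofb_keystream(key: bytes, iv: bytes, length: int) -> bytes:
            # Feedback mode: each block is derived from the key and the
            # previous block, then the blocks are concatenated into a stream.
            out, block = b"", iv
            while len(out) < length:
                block = hashlib.sha256(key + block).digest()
                out += block
            return out[:length]

        def xor_stream(data: bytes, stream: bytes) -> bytes:
            # Same shape as the one-time pad: data XOR (stream of values).
            return bytes(d ^ s for d, s in zip(data, stream))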
  • by rjh ( 40933 )
    The nature of a login password is security through obscurity.

    Nope. I can get all the source for the login code of any Linux distro. There's no security afforded to it by means of obscurity. The nature of a login password is security through secrecy, which is a different beast from security through obscurity.

    If logins were secure through obscurity, then having source code for the login would permit me to root anything I wanted. It doesn't.
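
    A toy sketch of the distinction (my own illustration, not how any particular distro actually does it): the verification code below is completely open, and reading it still doesn't get you in, because the only secret in the system is the password itself.

        import hashlib, hmac, secrets

        def make_record(password: str):
            # Store a salted hash of the password, never the password itself.
            salt = secrets.token_bytes(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def check_password(password: str, salt: bytes, digest: bytes) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(candidate, digest)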
  • Not quite. A generalized break against the discrete-logarithm problem will break RSA, yes, but breaking a specific subset of the discrete-logarithm problem will not.

    My point still stands: there are lots of cryptographic methods available, and breaking one is very unlikely to bring the whole house of cards down.
  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Saturday April 14, 2001 @03:20AM (#292340)
    This question comes up so frequently that there oughtta be a frickin' Ask Slashdot FAQ... as in, "moderators, don't put these questions up, they've been too frequently asked."

    Put simply: if your system depends on the secrecy of the algorithm, it is pretty much guaranteed to be weaker than one which depends solely on the secrecy of the key. The reason is that the damage from attacks grows geometrically with the number of weaknesses in a cryptosystem. If you have a system with two exploitable weaknesses, it's not merely half as secure as one with only one exploitable weakness--the progression is geometric, not linear.

    Think of it this way: RSA is designed to be secure when properly used, even if your opponent is a genius in number theory. Why? Because geniuses designed and reviewed RSA, and still do to this day, searching for any exploitable weakness. It has survived hundreds of thousands of hours of concerted, directed, extremely skilled cryptanalysis and come out on top.

    Now look at a system like CSS, which was designed by some very bright folks but was never put out there for peer review. The cryptanalytic weaknesses of CSS are profound, and it seems like a new weakness is reported every other week. Had CSS been introduced in a scholarly journal, the DVD CCA would have had thousands of hours of cryptanalysis for free, and they'd have discovered that their cryptographic scheme was weak, had an exhaustible keyspace, and had so many keys that the already-too-short keyspace was reduced to absolute triviality.

    Now, it's true that the vast majority of new ciphers are broken soon after they're released. Hell, even someone as illustrious as Rabin isn't immune--I've seen some very good cryptanalysis of Rabin's latest "provably secure" algorithm, and Rabin really stepped in it. Yes, anyone who rushes out to implement a new cipher is most probably smoking crack, because it's overwhelmingly likely that it will be broken within six months of its release.

    It's just in the last year that I've begun to put a substantial amount of faith in Bruce Schneier's Blowfish algorithm, for instance, even though it's been out almost ten years now. After ten years of constant cryptanalysis, Blowfish is still surviving pretty damn well.

    The alternative, using a closed-source, proprietary, non-peer-reviewed cipher, is absolutely absurd.

    Even the NSA is learning this lesson the hard way. After they published the SKIPJACK [*] algorithm, a few Israeli cryptanalysts presented some theoretical attacks against it... and then, it seemed as if every single week the attacks got better and better and better as the crypto community invented totally new branches of cryptanalysis (impossible-differential cryptanalysis, mostly) with which to attack SKIPJACK.

    Today, SKIPJACK is regarded as an utter failure. Of the original 32 rounds specified in the algorithm, there exist better-than-brute-force attacks against 31 rounds. That means SKIPJACK currently has no margin of safety, which means it's an absolute, utter, wretched failure.

    And this is the National Security Agency we're talking about.

    Short version: peer review is essential to getting good ciphers. Sure, you can posit that by having ciphers be open for peer-review it also makes attacks against them possible. That's true. But what you're overlooking, and what the crypto community keeps on shouting at the top of our lungs to anyone who'll listen, is that ALL OTHER METHODS ARE EVEN WORSE.

    I'm sorry for shouting so much in this message, but really, guys. People keep on asking this question over and over again. And the answer is always the same, no matter how much you ask it.

    [*] SKIPJACK, by the way, was the "ultra-secure" cipher at the heart of the Clipper Chip, if memory serves me right. Makes me feel all warm and fuzzy to think that the NSA was asking us to trust our encryption keys to government-controlled escrow, and give us a cipher that was so subtly and fatally flawed. Some people think these flaws were intentional on the NSA's part; I'm not one of them, since SKIPJACK was also supposed to be used by our own troops as part of a Defense Department initiative.
  • Come on, admit it, we all rely upon security through obscurity. I mean, how many of you (security challenges notwithstanding) make your root passwords available on the 'net for anyone to see?
  • When I worked for National Defense, I saw the neatest thing. The 3 by 4 keypads for door access were all LEDs. To open a door, you'd hit the 'activate' button, and the LEDs would randomly display the numbers between 0 and 9. Then you'd punch in your access code, and away you went. Next person wants in, they'd hit the activate key, and get a randomized key layout for themselves. Sweet.
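
    Something like the following sketch would reproduce the trick (my own reconstruction, obviously not the actual hardware): every activation deals the ten digits onto the twelve pad positions at random, so watching which buttons someone presses, or looking at wear on the keys, tells you nothing about the code.

        import secrets

        def scrambled_keypad():
            # Deal the digits 0-9 plus two blanks across the 3-by-4 pad.
            pool = list("0123456789") + [" ", " "]
            layout = []
            while pool:
                layout.append(pool.pop(secrets.randbelow(len(pool))))
            return [layout[i:i + 3] for i in range(0, 12, 3)]   # four rows of three

        for row in scrambled_keypad():
            print(" ".join(row))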
  • Open systems tend to have more bugs found and fixed but closed systems have far fewer exploits found -- they're about the same.

    Think of OpenBSD and Windows NT, then say that again.

    Security isn't so much a question of open vs. closed, but of the underlying techniques behind the development. If you're always pushing for neat toys (ASP [a poisonous snake]) over fixing things (format string bugs), you'll end up with more holes, regardless of whether the end product is open or closed.

    Getting back to encryption, would you trust my algorithm? I can't tell you what it is; you just have to trust me that it's "secure".

  • Uhm...Nobody?

    However, hardly anyone rearranges their keyboard keys to make it harder for an attacker to enter their root password. The data (password) is not open, but the method used to enter it (the keyboard layout) is an open standard.

  • Kerckhoffs's Principle: The security of the crypto-system must not depend on keeping secret the crypto-algorithm. The security depends only on keeping secret the key. (written in 1883)

    Why did Kerckhoffs make such a radical statement? Because over the last, oh, roughly 500 years, history has told the sad tale of bold cryptographers who sold their systems as unbreakable and grossly underestimated the inventiveness of their enemies. (A tiny code illustration of the principle appears at the end of this comment.)

    Ciphers (encryption algorithms) need to be designed to withstand the most cunning of opponents, whose main method is thinking "out of the box" to come up with attacks like differential cryptanalysis [execpc.com], timing attacks [cryptography.com] (timing how long an encryption takes), differential power analysis [cryptography.com] (measuring the power consumption), and impossible-differential cryptanalysis [nec.com] (figuring out which differentials aren't possible).

    Bruce Schneier at Counterpane Labs [counterpane.com] and Ross Anderson [cam.ac.uk] at Security Group [cam.ac.uk] at Cambridge University have several essays about how security systems fail because the enemy "breaks the rules". (Why Cryptosystems Fail [cam.ac.uk], Why Cryptography Is Harder Than It Looks [counterpane.com], etc.)

    To understand more about how "security through obscurity" does more harm than good, read any one of the dozen accounts of the Enigma used during World War II and the Anglo-American (and Polish) effort which successfully analysed this "unbreakable" system. Like Code Breaking [slashdot.org], The Codebreakers, or The Code Book [slashdot.org].
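
    To put Kerckhoffs's Principle in concrete terms, here's a toy sketch (my own, unrelated to the books above): the algorithm below (HMAC-SHA256) is published and standardized, and an attacker who reads every line of the code still can't forge a valid tag, because the only secret is the key.

        import hashlib, hmac, secrets

        key = secrets.token_bytes(32)      # the ONLY secret in the whole system

        def tag(message: bytes) -> bytes:
            # The algorithm is public; per Kerckhoffs, security rests in the key.
            return hmac.new(key, message, hashlib.sha256).digest()

        def verify(message: bytes, candidate: bytes) -> bool:
            return hmac.compare_digest(tag(message), candidate)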

  • As I recall, Skipjack was hardware-based; it was implemented in the hardware of the Clipper chip. One can make reverse-engineering difficult, but probably not impossible, and once the schematics are out of the bottle...
  • Begin the 'security through obscurity' vs. 'measurably secure system' debate.

    Well, security through obscurity, however bashed here on Slashdot, is certainly a way of making it more difficult to get at the unencrypted message. Security through obscurity vs. a measurably secure system is almost an academic-vs-commercial war, in that academics need to analyse and prove the algorithms while commercial people tend to use what has historically worked. I'm sure there are dozens more bugs yet to be unearthed in Windows that have already been patched and fixed in Linux. This doesn't make Windows bad and Linux better; they're just different ways of looking at the same thing...

    If you're a god looking into a system (the academic), immediately identifying all exploits, then you can make the system more provable to yourself and other gods. But when you can't analyse the system and can only prod the black box, you certainly have a much smaller chance of finding flaws. Also, the development time spent making something you would be proud of showing others (fear of peer review in a closed-source world) may be better spent elsewhere. Again, this doesn't make closed encryption systems worse, just different.

    Open systems tend to have more bugs found and fixed but closed systems have far fewer exploits found -- they're about the same.

  • The nature of a login password is security through obscurity.
  • Actually, computing discrete logs in a finite field also breaks RSA. And any general solution to elliptic-curve problems is likely to solve factoring as well. Even if it didn't, the curve y = n/x could be modeled piecewise as elliptic curves within constraints, and each piece could be searched for discrete-valued points.
