Encryption Security

Distributed.Net - Why Isn't ALL Of The Source Open?

nullset writes "I was browsing distributed.net's site and noticed this article that talks about why the client isn't fully open source. Most of the source, however, is available. The article says they haven't opened the entire source because of worries about people faking packets. They even mention that this really ISN'T secure, but they're looking for a better solution. What would you do in this case?" Once again, it's the ever-present case of Security Through Obscurity. The old argument aside, are there cases where it works well enough? Is Distributed.Net's position secure enough that they can keep things closed, or would opening the rest of the code help in any way?
  • by PooF ( 85689 ) on Sunday July 02, 2000 @03:08PM (#962477)

    The problem is similar to Quake being open sourced: there were cheaters before, but more afterwards. DNet would suffer a lot of abuse if the networking code were open sourced, but they probably face a lot of abuse right now anyway.

    As they mention on the page, it is easy to fake packets and they know that, but how do you build a system that can verify the information sent without checking everything over again? If people had the code to generate packets and abused it, it could ruin years of many people's CPU cycles.

    However, I'm sure that if there were a secure way to deal with the problem that could be open sourced, they'd do it in a flash.

    Aaron "PooF" Matthews

  • Running trusted code on untrusted systems is currently an unsolved problem, which is exactly the problem dnet faces. There's no solution in sight other than security through obscurity, and the alternatives are still at the academic research stage (read: years from any possible solution).
  • I think the QuakeWorld Forever project did something useful along these lines. It had a way of using an encrypted client authentication system.

    Basically, you generated a key, then you compiled the client with it and told the server to accept clients using that key. That way, the source was entirely open. Everyone could build secure clients, just not the same secure clients that the server would accept.

    In this way, the distributed.net people could distribute a working, open-source client and server, without having to allow every hacked version of it to mess with their statistics.
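
    A minimal sketch of how such a scheme could work, assuming each client build gets its own secret key that the server also keeps a copy of (the names and packet layout here are invented for illustration, not QuakeWorld's or distributed.net's actual protocol):

        import hashlib
        import hmac
        import os

        # Hypothetical per-build secret: generated when the client is compiled
        # and registered with the server. Anyone can build a client, but only
        # registered keys are accepted.
        CLIENT_BUILD_KEY = os.urandom(32)

        def sign_submission(payload: bytes, key: bytes) -> bytes:
            # Client side: authenticate a result packet with the baked-in key.
            return hmac.new(key, payload, hashlib.sha256).digest()

        def accept_submission(payload: bytes, tag: bytes, key: bytes) -> bool:
            # Server side: accept only packets signed with a key it issued.
            expected = hmac.new(key, payload, hashlib.sha256).digest()
            return hmac.compare_digest(expected, tag)

        # Example: a client reports "block 42 checked, nothing found".
        packet = b"block=42;result=none"
        tag = sign_submission(packet, CLIENT_BUILD_KEY)
        assert accept_submission(packet, tag, CLIENT_BUILD_KEY)

    The obvious catch, as another comment below notes, is that the key can still be extracted from a trusted binary, so this raises the bar rather than closing the hole.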
    --------
    "I already have all the latest software."
  • by Anonymous Coward on Sunday July 02, 2000 @05:30PM (#962480)
    There is a quite simple solution to this problem: Have the client checksum the result of all the decryptions. If the checksum is bad then you know that the response is faked. You don't have to actually redo all the work; if the client is faking packets then they're all going to be wrong, so you only have to check one. Doing random checks on maybe 1 in 100 should be sufficient to catch the cheaters.

    As for why distributed.net is not doing this, I have no idea. It's been discussed many times.
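
    A rough sketch of that spot check, assuming the client folds every trial decryption's residue into one running checksum and the server quietly redoes about 1 in 100 blocks (the hash-of-key below is a stand-in for the real trial decryption):

        import hashlib
        import random

        def block_checksum(block_start: int, block_size: int, ciphertext: bytes) -> bytes:
            # Stand-in for the real work: fold the residue of every trial
            # decryption in the block into one running hash. A real client
            # would fold in the actual decryption output for each key.
            h = hashlib.sha256()
            for key in range(block_start, block_start + block_size):
                h.update(hashlib.sha256(key.to_bytes(8, "big") + ciphertext).digest())
            return h.digest()

        def spot_check(submissions, ciphertext, sample_rate=0.01):
            # Server side: redo roughly 1 in 100 submitted blocks and flag
            # any client whose reported checksum doesn't match.
            cheaters = []
            for client, start, size, reported in submissions:
                if random.random() < sample_rate:
                    if block_checksum(start, size, ciphertext) != reported:
                        cheaters.append(client)
            return cheaters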
  • by Zaffle ( 13798 ) on Sunday July 02, 2000 @05:49PM (#962481) Homepage Journal
    Essentially, I can create DNet packets that say I have tried these key-blocks, they don't work, and the result was 0x12345678. The problem is that I can lie and say I've done them when I haven't. There are two main ways to combat this.

    Redundancy - Give each key-block to a number of clients, and if one says the result was a different number than the others, then that one may be cheating. The problem is that if a LOT of clients are cheating, you start running into higher ratios and have difficulty determining what the right answer is.

    Block-processing analysis - If someone is going through 10 billion keys a second, they are either the NSA or they are cheating. This is the primary way to detect cheaters; however, it's not always accurate, and as people get smarter they'll start submitting their keys more slowly, from different IP addresses, and under different email addresses.

    Another way would be to make the result the client sends back computationally difficult to find but easy to verify. (Hmm, sounds like encryption to me.) I'm not totally sure what result the client sends back, but if something like this were possible, then the server could always verify that the keys had indeed been checked.

    If I recall correctly, the DNet system involves giving each client a block of keys, some cipher text, and some clear text, and telling the client to try the keys and see if any of them produces the clear text.
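
    A toy sketch of the redundancy approach, with made-up data shapes: the same key-block is handed to several clients and the reported residuals are compared by simple majority.

        from collections import Counter

        def vote_on_block(reports):
            # reports: list of (client_id, residual) pairs for one key-block,
            # where "residual" stands in for whatever per-block result the
            # client sends back to the server.
            tally = Counter(residual for _, residual in reports)
            winner, _ = tally.most_common(1)[0]
            suspects = [cid for cid, residual in reports if residual != winner]
            # With only two or three copies per block, a coordinated group of
            # cheaters can still outvote honest clients - the weakness above.
            return winner, suspects

        print(vote_on_block([("alice", 0x1234), ("bob", 0x1234), ("mallory", 0x9999)]))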

    ---

  • It is mentioned in the article, and they list a number of problems/reasons they haven't done it yet.
  • No matter what you add to the packets to check for integrity and authenticity, the first law of cracking still stands: whatever you come up with to protect your data, someone will crack it in less time than it took you to design it. You can add whatever checksums and hashes you like; it'll be dead easy for anyone but a script kiddie to reverse them. If the server is able to validate the checksums, then it's just as easy for a cracked client to use the same code.
  • The problem with d.net is that it's impossible to tell the difference between a right answer and a wrong one most of the time. For almost every block, the client just sends back a message saying it didn't find anything. So if you send the same block to N clients and they all say they didn't find anything, how do you know whether one of the clients was cheating or not?

    I think they are using redundancy along with false positives (where the server knows that the result should be "the key might be 0xdeadbeef"), but I don't know if that's good enough to prevent cheating completely.
  • The page at d.net mentions this idea, but it points out that Netrek has tried this idea and it is possible to extract the key from the trusted binary and compile it into a cheating binary. It's a lot of work, but some people will do anything to cheat.
  • The Problem

    In the case of a distributed cracking effort, the client is given a hunk of ciphertext and a block of keys to try on it. A malicious client can send back false negatives (well, 99.999% of them will be true negatives, but you know what I mean) to corrupt the search effort, or send back false positives which waste CPU time at the central server due to verification overhead.

    Proposed solution 1: Bait

    In addition to tracking which blocks have been checked, the server must also maintain a record of which host (by IP address or other provided data such as email address) checked which blocks. The server then occasionally provides the client with a known ciphertext and a block that contains the correct key. If the client provides a negative response, all blocks known to have been submitted by the client are marked as unchecked. If the server wishes the client to remain ignorant of being caught, it should continue to provide the client with blocks of ciphertext and keys, and discard the results.

    The same technique can be used for false positives: systems that give false positives can be put in a blacklist and their results ignored. The server can continue to provide blocks and ignore the results in order to deceive the client.

    Weaknesses in this solution include the fact that it is normally more convenient to have a fixed hunk of ciphertext. Cipher feedback modes may necessitate that the cracking attempt be started at the head of the ciphertext. Unless there is a range of ciphertexts to attack, it will be obvious to the client which hunks are bait, and which are real.
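
    For concreteness, the bait bookkeeping might look something like this sketch (all class and field names here are invented for illustration):

        import random

        class BaitingServer:
            def __init__(self):
                self.history = {}      # client -> blocks it has reported on
                self.unchecked = set() # blocks that must be handed out again
                self.blacklist = set()
                self.bait_issued = {}  # client -> bait blocks it was given

            def issue_block(self, client, real_block, bait_block, bait_rate=0.02):
                # Occasionally hand out a block the server has already solved.
                if random.random() < bait_rate:
                    self.bait_issued.setdefault(client, set()).add(bait_block)
                    return bait_block
                return real_block

            def report(self, client, block, found_key):
                if block in self.bait_issued.get(client, set()) and not found_key:
                    # "No key" on a block known to contain the key: distrust every
                    # block this client ever reported, and quietly blacklist it.
                    self.unchecked.update(self.history.get(client, []))
                    self.blacklist.add(client)
                    return
                if client not in self.blacklist:
                    self.history.setdefault(client, []).append(block)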

    Proposed solution 2: The Chinese Lottery

    Based on an idea found in Applied Cryptography. A cracking effort does not have to be centrally organised: this is done purely to increase efficiency by decreasing redundancy. In a super-large keyspace, however, the redundancy caused by duplicated effort isn't nearly as big an overhead as most people think.

    To be precise, the efficiency of an uncoordinated effort decreases over time, initially following a roughly linear curve, but tapering off as the area of the keyspace already searched becomes larger. For example, when 5% of the keyspace has been searched, there is a 95% chance that any random trial will try a key that has not been tried before. When 50% of the keyspace has been searched, the system tries new keys with 50% efficiency.

    The (somewhat surprising, I think) result of this is that if it takes time T to search 50% of a keyspace at 100% efficiency (no double-checking any keys), then a system that picks keys at random can check the same portion of the keyspace in time T*1.39 on average. Although the random system is operating at 50% efficiency by the time it hits the 50% mark, its history of greater efficiency means that its average efficiency is closer to 72%.
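
    The 1.39 figure can be reproduced in a couple of lines, assuming the random searcher draws keys uniformly with replacement:

        import math

        # Coordinated search covers half the keyspace after 0.5*N trials.
        # Random search with replacement expects coverage 1 - exp(-n/N) after
        # n trials, so reaching 50% coverage takes n = N*ln(2) trials.
        coordinated = 0.5                      # trials as a fraction of N
        uncoordinated = math.log(2)            # trials as a fraction of N
        print(uncoordinated / coordinated)     # ~1.386 -> the "T*1.39" figure
        print(coordinated / uncoordinated)     # ~0.721 -> ~72% average efficiency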

    This is a clear loss of efficiency relative to the coordinated approach, but with two major benefits. First, there is no communication or memory overhead in the cracking algorithm! None! The system is stateless and need only report success to someone when it actually finds something. The client need only check in with a central server to see if the lottery is over and if there is new work to do. The second benefit is a consequence of the first: because there is no need to report progress, clients can't submit false negatives. Indeed, there is no part of the protocol where negatives are reported!

    Of course, if we go for the Chinese Lottery solution, then the nature of the central organiser goes from one of coordinator to one of "site where the software is developed and discussed" and possibly "where decisions are made about what problems to feed the clients". I really like the Chinese Lottery idea: it's really distributed!
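
    A toy sketch of what such a stateless client could look like (the 16-bit "cipher" and the lambda below are placeholders for the real trial decryption):

        import random

        def chinese_lottery_client(try_key, keyspace_bits, report):
            # Stateless lottery worker: draw keys at random, stay silent on
            # failure, and only ever talk to the server when a key works.
            while True:
                key = random.getrandbits(keyspace_bits)
                if try_key(key):
                    report(key)
                    return key

        # Toy demonstration with a made-up 16-bit "cipher" whose secret is 0xBEEF.
        chinese_lottery_client(lambda k: k == 0xBEEF,
                               16,
                               lambda k: print(f"found key {k:#x}"))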

  • Hey,

    If I worked for D-net, I'd mark every submission with the time the client has been running. Then I'd work out a maximum processing speed, and compare it with the maximum processing speed for other systems on the same processor and OS. If it's above the previous record by more than, say, 1.5%, I'd start to worry.
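
    A sketch of that sanity check, with a made-up benchmark table and threshold:

        def plausible(keys_done, seconds_running, cpu_model, benchmarks, slack=1.015):
            # Flag submissions that imply a key rate more than ~1.5% above the
            # fastest rate ever benchmarked for that processor/OS combination.
            # benchmarks maps a cpu_model string to its best known keys/second.
            claimed_rate = keys_done / max(seconds_running, 1e-9)
            best_known = benchmarks.get(cpu_model)
            if best_known is None:
                return True        # unknown hardware: nothing to compare against
            return claimed_rate <= best_known * slack

        # Example with invented numbers: a client claims 9.9 million keys/s on
        # hardware whose best known rate is 2.1 million keys/s.
        print(plausible(9_900_000, 1.0, "P3-600", {"P3-600": 2_100_000}))  # False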

    To backstop this, I'd have every client encrypt their submission before sending it to the server, and I'd secure the executable before making it available for download (I don't know how easy it would be to do, but software DVD decoders (well, the ones that DIDN'T get cracked) did something similar...). This way, everything but the key by which the data is encrypted can be open-sourced. I think.

    Finally, I'd have every client return data on what the plaintext would have been if, hypothetically, the key HAD worked. I'd then tie this to the username and perform random checks, optionally by redundantly distributing blocks.

    Hey, I'm not much of a crypto expert (IANACE) but I think these could work... If anyone knows better, feel free to correct me.

    Why didn't this make the front page, where it might get some responses?

    Michael Tandy
