Distributed.Net - Why Isn't ALL Of The Source Open?
nullset writes "I was browsing on distributed.net's site, and noticed this article that talks about why the client isn't fully open source. Most of the source, however, is available. The article says that they haven't opened the entire source because of worries about people faking packets. They even mention that this really ISN'T secure, but they're looking for a better solution. What would you do in this case?" Once again, it's the ever-present case of Security Through Obscurity. The old argument aside, are there cases where it works well enough? Is Distributed.Net secure enough in their position that they can keep things closed, or would opening the rest of the code help in any way?
IMHO (Score:3)
The problem is similar to Quake being open sourced. There were cheaters before, but more afterwards. DNet would suffer a lot of abuse if the networking code was open sourced, but they probably face a lot of abuse right now anyway.
As they mention on the page, it is easy to fake packets and they know that, but how do you build a system that can verify submitted results without checking everything over again? If people had the code to generate packets and abused it, it could ruin years of many people's CPU cycles.
However, I'm sure if there was a secure way to deal with the problem, that could be open sourced they'd do it in a flash.
Aaron "PooF" Matthews
research bub. (Score:1)
Re:IMHO (Score:2)
Basically, you generated a key, then you compiled the client with it and told the server to accept clients using that key. That way, the source was entirely open. Everyone could build secure clients, just not the same secure clients that the server would accept.
In this way, the distributed.net people could distribute a working, open-source client and server, without having to allow every hacked version of it to mess with their statistics.
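The per-build-key idea above can be sketched with an HMAC: each compiled client carries its own secret and tags every result packet with it, and the server only accepts packets whose tag matches a registered build key. This is a minimal illustration, not distributed.net's actual protocol; all the names (`BUILD_KEY`, `sign_result`, `verify_result`) are made up for the sketch.

```python
import hmac
import hashlib

# Hypothetical per-build secret, baked in at compile time.  The server
# keeps a list of keys for the builds it is willing to accept.
BUILD_KEY = b"secret-baked-in-at-build-time"

def sign_result(payload):
    """Append an HMAC tag so the server can tell this packet came
    from a client built with a known key."""
    tag = hmac.new(BUILD_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_result(packet, accepted_keys):
    """Server side: accept the packet if any registered build key
    reproduces the tag."""
    payload, tag = packet[:-32], packet[-32:]
    return any(
        hmac.compare_digest(hmac.new(k, payload, hashlib.sha256).digest(), tag)
        for k in accepted_keys
    )
```

Note the obvious weakness: the secret still sits inside the binary, so anyone willing to disassemble the client can extract it, which is exactly the obscurity problem the thread is about.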
--------
"I already have all the latest software."
There is a solution... (Score:3)
As for why distributed.net is not doing this, I have no idea. It's been discussed many times.
It's a difficult problem to tackle (Score:3)
Redundancy - Give each key-block to a number of clients, and if one reports a different result than the others, that one may be cheating. The problem is that if a LOT of clients are cheating, the cheaters start to outvote the honest clients, and you have difficulty determining what the right answer is.
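The redundancy idea reduces to a majority vote per key-block. A minimal sketch (the function name and return shape are invented for illustration): given the answers several clients returned for the same block, take the majority answer and flag the dissenters; with no clear majority, trust nobody yet.

```python
from collections import Counter

def check_block(results):
    """Majority-vote over the answers clients returned for one key-block.
    Returns (majority_answer, indices_of_dissenting_clients), or
    (None, []) when no answer has a strict majority."""
    counts = Counter(results)
    answer, votes = counts.most_common(1)[0]
    if votes <= len(results) // 2:
        return None, []  # no strict majority: can't trust anyone yet
    suspects = [i for i, r in enumerate(results) if r != answer]
    return answer, suspects
```

This illustrates the failure mode in the comment directly: once cheaters outnumber honest clients for a block, the "majority" answer is the fake one.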
Block-processing analysis - If someone is going through 10 billion keys a second, they are either the NSA or cheating. This is the primary way to detect cheaters; however, it's not always accurate, and as people get smarter, they'll start submitting their keys more slowly, from different IP addresses, and under different email addresses.
Another way would be to make the result the client sends computationally difficult to find, but easy to verify. (Hmm, sounds like encryption to me.) I'm not totally sure what result the client sends back, but if something like this was possible, then the server could always verify that the key has indeed been checked.
If I recall, the DNet system involves giving each client a block of keys, some ciphertext, and some cleartext, and telling the client to try each key and see if it produces the cleartext.
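That known-plaintext scan can be sketched as follows. The real project used RC5; to keep the sketch self-contained, a single-byte XOR stands in for the cipher, and the function names (`decrypt`, `scan_block`) are invented here.

```python
def decrypt(key, ciphertext):
    """Stand-in for the real cipher (distributed.net used RC5);
    a single-byte XOR keeps this sketch dependency-free."""
    return bytes(b ^ (key & 0xFF) for b in ciphertext)

def scan_block(start, count, ciphertext, known_plain):
    """Try every key in [start, start+count) and report any key whose
    decryption of the ciphertext begins with the known plaintext."""
    hits = []
    for key in range(start, start + count):
        if decrypt(key, ciphertext)[:len(known_plain)] == known_plain:
            hits.append(key)
    return hits
```

The point the thread keeps circling: nothing in the *result* ("no key in this block") proves the loop actually ran, which is why faked packets are so cheap.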
---
Re:There is a solution... (Score:1)
Re:There is a solution... (Score:1)
Re:Its a difficult problem to tackle (Score:1)
I think they are using redundancy along with false positives (where the server knows that the result should be "the key might be 0xdeadbeef"), but I don't know if that's good enough to prevent cheating completely.
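The planted-block idea can be shown in a few lines: the server occasionally hands out a block whose result it precomputed, and a submission that contradicts a planted answer exposes a client that never really scanned the block. This is a hypothetical sketch of the scheme described above, not distributed.net's actual server code.

```python
# Server-side record of planted blocks: block_id -> the result the
# server precomputed (e.g. a known partial match).
planted = {}

def hand_out_block(block_id, known_result=None):
    """Hand out a block; occasionally one whose answer we already know."""
    if known_result is not None:
        planted[block_id] = known_result
    return block_id

def check_submission(block_id, reported):
    """If the block was planted, the report must match the precomputed
    answer exactly; otherwise we have no way to tell, so accept it."""
    expected = planted.get(block_id)
    return expected is None or expected == reported
```

The obvious limit, matching the commenter's doubt: cheaters are only caught on the fraction of blocks that happen to be planted.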
Re:IMHO (Score:1)
Two Possibilities (Score:2)
The Problem
Proposed solution 1: Bait
Proposed solution 2: The Chinese Lottery
Of course, if we go for the Chinese Lottery solution, then the nature of the central organiser goes from coordinator to "site where the software is developed and discussed" and possibly "where decisions are made about what problems to feed the clients". I really like the Chinese Lottery idea: it's really distributed!
Possible solution? (Score:1)
If I worked for D-net, I'd mark every submission with the time the client has been running. Then I'd work out a maximum processing speed, and compare it with the maximum processing speed for other systems on the same processor and OS. If it's above the previous record by more than, say, 1.5%, I'd start to worry.
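The speed-plausibility check above fits in a few lines: compute the implied key rate from the submission, compare it against the fastest rate ever seen for the same CPU/OS combination, and flag anything that beats the record by more than the 1.5% tolerance. The table contents and names here are invented for the sketch.

```python
# Hypothetical per-platform records of the fastest honest key rate
# observed so far, in keys per second.
fastest_seen = {("x86", "linux"): 2_000_000.0}

def looks_suspicious(keys_done, runtime_s, cpu, os_name, tolerance=0.015):
    """Flag a submission whose implied rate beats the best rate ever
    recorded for the same CPU/OS by more than the tolerance (1.5%)."""
    rate = keys_done / runtime_s
    record = fastest_seen.get((cpu, os_name))
    if record is None:
        return False  # no baseline for this platform yet
    return rate > record * (1 + tolerance)
```

As the thread notes elsewhere, a patient cheater defeats this simply by reporting a plausible rate, so it can only catch the greedy ones.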
To backstop this, I'd have every client encrypt their submission before sending it to the server, and I'd secure the executable before making it available for download (I don't know how easy it would be to do, but software DVD decoders (well, the ones that DIDN'T get cracked) did something similar...). This way, everything but the key by which the data is encrypted can be open-sourced. I think.
Finally, I'd get every client to return data on the plaintext it would have produced if, hypothetically, the key HAD worked. I'd then tie this to the username and perform random checks, optionally by redundantly distributing blocks.
Hey, I'm not much of a crypto expert (IANACE) but I think these could work... If anyone knows better, feel free to correct me.
Why didn't this make the front page, where it might get some responses?
Michael Tandy