Security

What is Responsible Disclosure for Security Flaws? 235

Silverdot writes "In an article on ZDNet, the author brought up a few cases of uneasy relationships between security researchers and software firms. While those who report bugs are generally expected to first notify and work with the software firm to resolve the flaw, one researcher commented: "All researchers should follow responsible disclosure guidelines, but if a vendor like Microsoft takes six months to a year to fix a flaw, a researcher has every right to release the details." Should the onus be on the software firm to manage each issue and the relationship well, or does it fall to the morally responsible user?"
  • by denissmith ( 31123 ) * on Wednesday September 07, 2005 @03:33PM (#13502642)
    The cost of secrecy is high. A reasonable response time (up to, say, 3 months) before disclosure should be allowed, even for firms that seem to be sitting on their hands; if the firm is close to a patch and is willing to communicate and work with the researcher, a longer period may be reasonable. Overall, disclosure of a problem is always in the USERS' best interest, and secrecy is always in the SOFTWARE FIRM'S best interest. The longer a known security issue exists, in secret, the more likely it is that someone else has found it - and that puts everyone at risk. The rights of users (who are victims of the software firm's bad code) should always come before the rights of the software firm. Always. So disclosure should be seen as a blessing. Those who complain about irresponsible researchers putting everyone at risk are wrong - everyone is already AT RISK. Failure to let me know what risks I face should be seen as the problem. I need to know.
    • Not necessarily (Score:5, Insightful)

      by dereference ( 875531 ) on Wednesday September 07, 2005 @03:41PM (#13502733)
      The longer a known security issue exists, in secret, the more likely it is that someone else has found it - and that puts everyone at risk.

      I mostly agree with your overall analysis, but I'm compelled to point out that this one statement seems self-contradictory. What difference does it make whether a security issue is "known...in secret" rather than simply "unknown"? I submit that a better way to say this would be that "the longer any security issue exists, the more likely it is that someone else has found it," without regard to how known or unknown it may be during the interim.

      The only way this is not true is if you consider the (perhaps non-trivial) cases where the "secret" is leaked, intentionally or otherwise.

    • I consider 1 week (5 workdays) a reasonable response time. If a company wants a little more time, say until its next "Patch Tuesday", then fine, but certainly not more.
      • by Donny Smith ( 567043 ) on Wednesday September 07, 2005 @05:21PM (#13503739)
        > I consider 1 week (5 workdays) a reasonable response time.

        Response as in "confirming a bug report"?

        What do you base your "reasonable" upon?

        Are you aware that QA alone sometimes takes weeks?

        I would prefer keeping the system exposed (in a controlled environment) to installing a non-QA'd hack on my production servers.

    • How is it in the users' best interest to disclose it when 99% of users are not capable of defending themselves? Do you really think that most non-techie people are going to read security lists to find out that there is a hole in a web browser, and then read up on or figure out alternative ways to mitigate that risk by disabling some feature in their OS or application?

      The ONLY people that have an advantage from early disclosure are security folks, sysadmins, and other IT people who care.
      • The ONLY people that have an advantage from early disclosure are security folks, sysadmins, and other IT people who care.

        And hackers, writers of viruses, and other such people. If the information is now public but the populace doesn't know/care, it's that much easier to exploit the security problem.
      • Regular users MAY be slow to move, but if businesses take action one of the main targets of an exploit vanishes and the incentive to write an actual exploit goes down.
      • by Grayputer ( 618389 ) on Wednesday September 07, 2005 @04:24PM (#13503221)
        It is in the best interests of the user community in several ways. First, it puts pressure on the software company to patch quickly. Second, it allows users to compare patch histories (quantity and response time) when choosing a product/company. Lastly, the bad guys have a communications system just like the good guys; eventually it WILL be common knowledge on the 'dark side', so it needs to be common knowledge on the 'light side' as well.

        The issue is 'what is a reasonable timeframe'. Someone said 3 months, someone else said 1 week, and someone said it can take a year. As we all know, 'reasonable' is a subjective term.

        I think 1 week is not sufficient. I work in a small software company and I deal with large companies. A large company can't even issue a memo in 1 week; it can't get the report from the department where you filed it to the department that needs to fix it in a week (notify the development, support, QA, and shipping/distribution managers and get an impact statement).

        As for the one-year-to-fix estimate, that seems very unreasonable, especially at Internet speeds. The 'dark side' will be well versed in the bug well before that much time has passed.

        Is it 1 month, 3 months, 6 months, or 9 months? It probably depends on where the bug is located. We develop a small niche-market piece of middleware. A 'full QA' run (specifically, test the new function and regression-test all other functions) can take a couple of weeks even using automated test suites (it runs on 10+ platforms, several hundred tests per platform, some testing is single-threaded due to 'expensive hardware' resource issues, and some tests have to be manual). Packaging across 10 platforms, with doco, can add a couple more weeks. All of this assumes we know the exact coding bug and do not have to analyse the reported problem.

        So if a specific coding bug is reported in a core section of the product, it can take a couple of weeks of QA, plus dev time, plus packaging, plus documentation, plus... So a core change that impacts packaging and doco can easily be a month or more. On the other hand, a 'typo' that does not impact packaging (impossible, really; we have to release it somehow, so some 'packaging' is needed) or doco can be a few days.

        Consequently, I'd guess 1 month is too short as well. I'd posit that a couple of months (3?) from the time the coding-level fix is identified is probably a reasonable starting point. Given some reports I've received, it can take anywhere from minutes to a lifetime to narrow a reported issue down to a coding-level fix.
        • Personally I'd say 2 weeks until I'd release a public notice that there IS a bug in programX and whether or not it can be exploited remotely or locally (or both). That'd be about it, no details, just a warning to system admins (and savvy users) that there's a problem with that program. Then I'd continue communicating with the person(s) responsible for patching the thing and wait until they've (a) Got a patch out or (b) are refusing to do what they should be doing (fixing it) before releasing any further det
    • "disclosure of a problem is always in the USERS' best interest"

      Unless, of course, the software publisher does not take steps to address the security flaw, and an exploit is developed.

      Sure, we may want to know the risk we are taking by using the unpatched software, but what if the software is critical to the users' operations? Better to lessen the chance of an exploit.
      At any rate, if you're using an irresponsible software firm for critical software, that's your own problem.

      I do agree with your post in en
    • http://www.eeye.com/html/research/upcoming/index.html [eeye.com]

      Looks like certain software companies sit on the issues for a long time (and are still sitting on them).

      In their defense, most of the KNOWN viruses/worms/trojans are written after the public release of the patch when the less capable people can see the exploitable code.
    • by EvanED ( 569694 )
      Overall, disclosure of a problem is always in the USERS' best interest

      I disagree to an extent. There are a couple reasons.

      First, spreading a virus or other malicious code is probably easier than patching the problem, at least most of the time. This means that exploits for a vulnerability would be out before fixes for them.

      I'd prefer the uncertainty that hackers may or may not have found a vulnerability over the certainty that they can find it, even at the cost of being unaware of the vulnerability myself.
      • Your argument basically states that using a networked machine responsibly is too hard.

        Should we make the same argument about driving or public health? When you use a networked computer, your actions or inactions affect the rest of the community just as much as your driving habits and your efforts to contain communicable diseases do.

        The bottom line is that the hardware is yours. Any software you put on it that you don't author yourself is from a vendor. Microsoft has no responsibility for the health and well-being o
    • by drgonzo59 ( 747139 ) on Wednesday September 07, 2005 @04:07PM (#13503027)
      I disagree. I think they should disclose it as soon as possible.

      First of all, they should stop calling these mistakes "bugs". They are not "bugs"; they are mistakes. If I work for Ford and I am responsible for the carburetors, and I screw up and QA never catches it and then people's cars are blowing up, it would not be called a "bug", as if something just crawled in there through nobody's fault; it would be _my_ mistake, a personal responsibility.

      The software companies are churning out code to get it out the door without adequate testing, and that is their fault. If someone exploits it, it is both the software maker's fault and the exploiter's. The company should make restitution for the costs associated with the loss. Hopefully, that would promote a culture of responsibility, and software engineering would be taken more seriously, just like mechanical, electrical, or nuclear engineering is.

      Chances are that if there is immediate disclosure, the users will have a chance to stop using the product until a patch is available. Every day until the patch is issued they should just bill the software company. That would be a great incentive to test well, code carefully and fix the problems faster.

      • by garcia ( 6573 ) on Wednesday September 07, 2005 @04:25PM (#13503236)
        Chances are that if there is immediate disclosure, the users will have a chance to stop using the product until a patch is available. Every day until the patch is issued they should just bill the software company. That would be a great incentive to test well, code carefully and fix the problems faster.

        Chances are that a) no one will stop using the unpatched software because they can't afford to or they don't care to be informed of it (how many places were pwnt by worms that had known patches months in advance?)

        b) No one will indemnify their code because it wouldn't be cost effective to do so and it wouldn't stop the issue.

        It would be a *great* incentive, but the cost of software would go up, and either people would buy the software that cost less (and was not as well coded), putting the others out of business, or they would just continue on the old path.

        Wishful thinking but it isn't going to happen.
      • Your analogy, even your way of thinking, is flawed. This isn't about someone's car not working; it's about someone being able to break into or destroy someone's car.

        Say the keypad entry on all GM vehicles had a flaw where, if you pressed a certain sequence of keys, you could get into any GM vehicle that has keyless entry.

        You go to GM and tell them they need a recall. They tell you they need to get with their suppliers, have them redo their software or hardware logic (after months of the engineers pointing the finger), or el
        • The exploit grows in popularity in the darker parts of society, and GM cars around the country are now being broken into because you thought you were doing society a "favor".

          But you ARE doing society a favour: people will know next time that they should not buy GM cars, because GM doesn't care enough to produce cars without such problems or to fix the problems in time! GM doesn't have a 'right' to retain its market share, and certainly not if it is culpably releasing and selling known defective cars. No

    • Say a new DMCA-style law is enacted that makes it illegal to disclose security flaws. Consider that companies could then fire all but a few of the people involved in security patches and boost profit. How many security flaws do you think would get fixed? How long would it take to respond after a worm is released, now that staff has been reduced?

      Or say that a new law (along the lines of collusion law) is enacted that makes it illegal to disclose only to a company and not to the public, since you are putting the public at risk by withholding information.
      • If I buy a bike lock that can be picked with an ordinary pen do I want to know about it?

        Some years ago, I heard an interesting interview on the radio with a fellow who was an ex-burglar. He had written a book explaining locks and how to test them for ease of picking. He also named a bunch of brands that were inherently weak. His argument for publishing all this was the obvious one: The pro criminals know this, or have ways of learning it. Don't you want to know whether your own locks are any good? And
  • by TripMaster Monkey ( 862126 ) * on Wednesday September 07, 2005 @03:35PM (#13502664)

    "Responsible disclosure" is a propaganda term propagated by the software firms to a) get as much time as possible to fix security holes, and b) indemnify themselves as much as possible against any public disclosure of said security holes by labeling the disclosers as 'irresponsible'.

    If a security hole exists, it exists, no matter how much public discussion about said hole is quashed. Today more than ever, there are unscrupulous people out there laboring to find and take advantage of these holes. Muzzling the virtuous hackers, who only wish to make things more secure, is counterproductive in the extreme. The only 'responsible disclosure' is full and immediate disclosure.
    • by garcia ( 6573 ) on Wednesday September 07, 2005 @03:42PM (#13502744)
      b) indemnify themselves as much as possible against any public disclosure of said security holes by labeling the disclosers as 'irresponsible'.

      And to prepare legal proceedings against those that do end up disclosing the holes against the wishes of the companies trying to patch them (here [zdnet.co.uk]).
    • I really love the full public disclosure advocates. They seem to have a romantic view of the black hats. Rather than a scattering of small groups, they seem to imagine a vast international conspiracy.

      The justification seems to be that they might already know of the vulnerability. A weak argument if ever there was one. Just because some black hats know of it doesn't mean all of them do. And there's no evidence that any of them know of the vulnerability before the flaw is revealed.

      We change from a pos
      • by grasshoppa ( 657393 ) on Wednesday September 07, 2005 @03:59PM (#13502952) Homepage
        The justification seems to be that they might already know of the vulnerability. A weak argument if ever there was one. Just because some black hats know of it doesn't mean all of them do.

        Good point. From now on, I'm only going to allow those blackhats that don't know of the vulnerability to access my services.

        And there's no evidence that any of them know of the vulnerability before the flaw is revealed.

        You have a narrow view of reality. How can you know that no one knows of something before it's officially revealed?

        The risk that they might know of it is what drives it.

        While I'm on the fence as to which I support ( full disclosure or informed disclosure ), your arguments are flawed, and I had to point that out.
        • Good point. From now on, I'm only going to allow those blackhats that don't know of the vulnerability to access my services.

          Security is about reducing risk to an acceptable level; you can never eliminate it entirely. Believe it or not you are better off with only some blackhats knowing about your vulnerability than all of them.

          • Believe it or not you are better off with only some blackhats knowing about your vulnerability than all of them.

            Not that I agree with this statement, but you only paint half the picture. Your statement implies trust that corporations will patch the vuln in a timely manner. The longer the system remains unpatched, the greater the risk to your systems.

            Again, not that I fully agree with one side or the other, but I feel a decent compromise would be in order. The default policy would be to do full disclosure, but
            • As I've said elsewhere, I think disclose to the company first but do full disclosure a week afterwards (and tell the company you're doing this). If they haven't fixed it in a week then it's their problem. But they deserve some time, and untrusted-by-default makes it too hard to get things working, as anyone who's tried to implement a secure network knows.
    • by gr8_phk ( 621180 ) on Wednesday September 07, 2005 @03:48PM (#13502826)
      "The only 'responsible disclosure' is full and immediate disclosure."

      I'd argue that giving the software company a heads up to find a fix would be more responsible than immediate disclosure. There is no fixed amount of time either. If the company is unresponsive, wait as long as you feel appropriate and go public. If the company responds and appears to be making reasonable efforts to fix it, give them time. The public isn't going to fix the problem, so blabbing to them isn't going to help. Blabbing to them that the company has known for X months and isn't doing anything will help the public form an opinion about the company and move away from their products.

      • I'd argue that giving the software company a heads up to find a fix would be more responsible than immediate disclosure. There is no fixed amount of time either. If the company is unresponsive, wait as long as you feel appropriate and go public. If the company responds and appears to be making reasonable efforts to fix it, give them time.

        That pretty well sums up my view. As long as the company is sincerely working on it, give them space. Some holes are obviously going to be harder to patch than others, so n
    • I agree to an extent, but my opinion is that if a security firm spots a hole, they should work with the company in question and keep contact on the status of any kind of patch. If a security hole exists, I think the company should be given some time to remedy the situation before information is publicized that could be used by the unvirtuous hackers.

      However, if $company['foo'] is not taking adequate measures to fix the problem (which should usually be started|completed ASAP), then the security firm owes it
      • It's important not to open yourself up to liability (e.g. "He told us about it and we were working on it, then he went ahead and told the public!"), and also important to disclose quickly in case you weren't the first one to discover the problem. Even if it's not fixed, there are often steps administrators can take to avoid being hacked (turn off a service if noncritical, switch to a backup service, limit exposure to internal users, etc.; a quick exposure-check sketch follows after this comment).

        I'm basically in favor of the approach of letting the vendor know, and
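
        A minimal Python sketch of the "limit exposure" check mentioned above, assuming an admin probes from an outside vantage point; the host address and port list are placeholders, not anything specific from this thread:

        # Quick exposure check: from an external machine, try to connect to ports that
        # should only be reachable internally. Host and port numbers are example values.
        import socket

        HOST = "203.0.113.10"                  # public address of the box being checked (placeholder)
        SHOULD_BE_CLOSED = [3306, 5432, 8080]  # services meant to be internal-only (placeholder list)

        for port in SHOULD_BE_CLOSED:
            try:
                with socket.create_connection((HOST, port), timeout=3):
                    print(f"WARNING: port {port} is reachable from the outside")
            except OSError:
                print(f"port {port} appears filtered or closed")

        If the script prints a WARNING for a service that is supposed to be internal, that is a sign the firewall or service binding needs tightening before (or while) waiting on the vendor's patch.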
    • "Responsible disclosure" is a propaganda term propagated by the software firms to a) get as much time as possible to fix security holes, and b) indemnify themselves as much as possible against any public disclosure of said security holes by labeling the disclosers as 'irresponsible'.

      I agree that a and b are true, but that doesn't mean there is no such thing as responsible disclosure.

      For example: it is irresponsible to release information about a security hole into the wild without informing the softwar

    • bull (Score:3, Insightful)

      by badriram ( 699489 )
      If "Responsible disclosure" is only a propaganda term, why would Mozilla and other popular open source projects use them. Why do they block access to security issues.

      If a hole exists, it exists; however, not everyone (including hackers) knows about it until it is published. Holes can exist for years; some flaws in NT 4.0, for instance, are only being discovered now, 10 years later. Software is way too complex nowadays, and it is a good bet that, unless they are published, most holes will go completely unnoticed, until some oth
    • This is not just a computer issue. It is a general security issue. If a newspaper knows of a place vulnerable to terrorism, should they notify the gov't before notifying the people?
      Keep in mind, all this "they knew it was a problem and did nothing" about the levees in NOLA is going to be rehashed with the borders, i.e. when something bad (terrorists, bird flu, whatever) comes over in an illegal immigrant, you are going to hear about how we knew the risks and no one did anything.
      We are a reactive, not a pr
    • by bmajik ( 96670 )
      The majority of worms, trojans, etc. that do real damage are not written by security researchers; they're written by thugs who take someone else's research and attach a payload to it.

      The goal of responsible disclosure is to reduce the aggregate damage of a security incident.

      We've seen that in the past, malware has been written by adopting the code/info/techniques in the bulletin, sometimes even the info-light ones released by MS!, and has caused considerable harm.

      Yes, it is true that _somebody_ might secretly
  • I did it! (Score:2, Funny)

    by Tachikoma ( 878191 )
    I'd be the first to tell people about my security flaws, hell i'd advertise them. I'm just going to make some half-ass excuse and blame someone else anyway. at least thats what all the k00l keds do dez d4yz.
  • by Karma_fucker_sucker ( 898393 ) on Wednesday September 07, 2005 @03:36PM (#13502687)
    What's the reasonable response time to fix the problem?

    Someone tells you that you have a security hole; you fix it - A.S.A.P!!

    • If you don't take the time to test your "fix" you can end up causing more problems. I'd say a week is fair - that should be adequate to fix the problem, even without people working overtime, and get it tested and the binary fix out to clients. After that, make it public, and any vendors who couldn't be bothered to patch yet are on their own.
  • by Anonymous Coward on Wednesday September 07, 2005 @03:38PM (#13502703)
    If you find a security hole in someone else's code you can either:

    1) use the DJB approach: reveal the hole in a public forum, preferably with a working exploit.

    2) use my preferred approach: fix your clients' copies of the program, and otherwise keep quiet. Consider it a competitive advantage when the next Apache/SSH/PHP worm hits.

    Any other approach is an utter waste of time, for everyone except the vendor.

    If you reveal the flaw only to the author, you are:

    a) Working for someone else without getting paid.

    b) Saying "it's okay" to write software with security holes, because shucks, some kind soul will fix it for you.

    c) Not telling the rest of us, the sysadmins of the world, how to protect our own systems. You see, the company or author has already demonstrated incompetence. Why help them? Of course you don't owe anybody anything (see point #a) but if you're going to tell anybody, tell the people with the most to lose!

    Of course, I'm assuming we're working with software where you have the source code. Secure software without source code is an oxymoron. And no, I don't think the license makes code more secure, since maybe 95% of the coders out there can't code properly. My ability to audit is what makes it more secure, and yes, I do as much of that as I can.

    Let me make the point clear: I don't care if the author fixes the code or not, or how quickly he can "patch". I need to know the details of the problem so I can solve it myself. That's what's important.

    People who advocate "irresponsible disclosure" (my term for any disclosure that doesn't inform the end-user first) are really secretly afraid that someone will someday find flaws in THEIR systems and embarrass them. But that's the point: embarrassment is a cost, and people will try to minimize costs any way they can. At some point they will actually try writing secure software. And maybe at some point, users will start demanding secure software.

    I think we can all agree that the security situation is getting worse, not better. Most of the software I see these days is garbage, to put it mildly. Bloated, complex, insecure.

    Constantly holding the hand of authors via irresponsible disclosure is not going to solve the problem. Do you want to wait until the government regulates software, basically punishing everybody for the sins of the incompetent? Or should we let market forces do their thing?
    • Do you REALLY audit every piece of code that you run? The entire Linux kernel, for instance? I don't believe it. And even if you make a good effort to get most of the network-exposed code audited, you can never be sure that you're actually finding vulnerabilities--can't prove a negative.

      Disclosure of exploits and fixes to the author is like any other OSS bug-fix submission: Yes, you're doing work that you're not getting paid for. But at the point where you've already done the work, your time is a sunk
    • 2) use my preferred approach: fix your clients' copies of the program, and otherwise keep quiet. Consider it a competitive advantage when the next Apache/SSH/PHP worm hits.

      That's not very neighborly of you. Many other people can benefit from that knowledge and yet you use it to your own advantage, and don't even let the rest of us know we are vulnerable?

      Why would you do this to the rest of us?

      -molo
  • Thorny Dilemma (Score:3, Interesting)

    by gbulmash ( 688770 ) * <semi_famous@ya h o o .com> on Wednesday September 07, 2005 @03:39PM (#13502719) Homepage Journal
    First off, it would be nice if software didn't have so many holes, but even the best open source ventures end up having to issue patches and revisions, so while Microsoft may have more holes than most, let's not act as if this is a "the world vs. Microsoft" issue.

    This is a very tough one because it is multi-faceted. The most common argument against researchers publishing is that it practically guarantees an exploit will surface in the wild sooner rather than later, possibly before a patch is available. OTOH, if they don't publish, it might be discovered by criminals first and exploited more quietly, gaining a foothold in the wild before anyone even knows the hole is there. Sort of a damned-if-you-do, damned-if-you-don't scenario.

    But when a vendor is sitting on the report and not issuing a patch, it can grow increasingly frustrating for the researcher. They not only have to watch people trundle along, blithely unaware of this gaping hole in their systems, but they cannot get their proper credit for the discovery. It's a bit like publishing for academics. Getting to take credit for the discovery of a security vulnerability has certain perks that the researchers are denied as they sit quietly and wait for some corporation to decide when to go public with the announcement of the hole and the patch for it.

    Probably the best solution would be to have a set of universal guidelines set at one of the major conferences. Something that takes fairness to the researcher, fairness to the software vendor, and fairness to the public into account. I know, I sound like a politician saying "let's form a committee to study this", but I doubt that any one person has a solution that makes everyone happy or could even be considered a fair compromise by all involved parties.

    - Greg

    • The way I see it, if someone wants to make software into a service or a product, it damned well ought to come with some guarantees. The EULAs don't typically offer guarantees, though thankfully some state laws supersede EULA-weaseling anyway.

      If software is free, no worries right? Got what you paid for... user assumes responsibility.

      If software is a service or a product, there had better be backing to keep it supported against bugs. It's what people are paying for, if not on paper then in their minds. Peo
  • If the software company waits until exploits are wild before they patch something, they will have screwed themselves, and you can point and laugh and get all the publicity for alerting them of the vulnerability years before it was exploited and patched. It takes patience, but being able to say "I told you so" is much better than advertising a vulnerability before it's patched. Software companies like to schedule their security updates to minimize exposure. If the vulnerabilities haven't been exploited yet (
    • by Knome_fan ( 898727 ) on Wednesday September 07, 2005 @03:44PM (#13502767)
      And what about users who would be able to do something about the security risk (not use a certain program, disable a vulnerable service, or firewall it off, for example), if only they were aware of it?
      • And what about users who would be able to do something about the security risk (not use a certain program, disable a vulnerable service, or firewall it off, for example), if only they were aware of it?

        Users who need the program/service because it's running their website or database server or whatever can't turn it off, and will probably already have it firewalled as far as possible without making it unusable. Users who don't need it should have disabled it anyway. And, unfortunately, a) crackers have

  • Tact (Score:5, Interesting)

    by jellomizer ( 103300 ) * on Wednesday September 07, 2005 @03:41PM (#13502734)
    The trick to proper security-flaw reporting is understanding the tactful way to state it vs. the tactless way.

    An example of the tactful way: first report it to the software developers and see if there is a patch. If not, then get a little more forceful and announce to the public that there is a flaw in a feature of this product and that it seems to affect this range of people.

    An example of a tactless method is to make a root kit that takes advantage of that flaw. Or tell the general public how to reproduce it.

    You need to remember that what you say publicly will be used both by people who will do good things with it and by people who will do bad things with it. So give them enough information to, say, block a port or temporarily turn off a feature, rather than handing the bad guys a ready-made way in while the good guys still have to figure out what you did in your root kit before they can find the problem.

    Be mindful when you report the flaw to the software company as well. You are telling them that they have an ugly baby, and most people don't want to hear that. Try to be friendly with them but stern about the severity of the flaw. When it comes to reporting flaws you are no longer dealing with computers but with people, and if you piss them off too much they will be less than helpful.
    • Re:Tact (Score:4, Interesting)

      by TheRealMindChild ( 743925 ) on Wednesday September 07, 2005 @04:23PM (#13503213) Homepage Journal
      Sometimes it isn't so easy. Lord knows I have found and reported my share of exploits. Of those, a few took a bit too long to fix, but the vendors communicated with me most of the way. One of them, however, told me they already knew about it, decided it was better to call me an asshole, and told me to pay them consulting fees if I wanted X security hole resolved.

      In the latter case, the only course of action (not out of spite, mind you, though it felt good) was to release a usable exploit. The creator of said software had no intention of ever fixing it. They had every intention of belittling anyone who brought such things to their attention. For me, the only way I could see this working was if, all of a sudden, the world was afraid to use software Y because a simple script kiddie could compromise them.
    • Or tell the general public how to reproduce it.

      That is funny; I had this idea that somehow it was called freedom of speech.

      Silly me.
    • by jd ( 1658 )
      I agree with everything you said, but would add just one more point. To me, "responsible disclosure" is making sure that the appropriate people know about the problem, where the "appropriate people" are those who really do need to know.

      Now, those who "need to know" will vary according to the situation, the time since discovery, etc. I don't see it as a static thing. It is also affected by who else knows. Just because someone doesn't know about the flaw does not mean they're immune to the effects. Thus, anyo

  • Plain and simple (Score:5, Insightful)

    by It doesn't come easy ( 695416 ) * on Wednesday September 07, 2005 @03:42PM (#13502743) Journal
    Responsible disclosure from Microsoft's perspective: You tell us and only us. We'll tell the rest of the world when we think it's necessary (if ever).
  • Common Sense (Score:4, Insightful)

    by MattW ( 97290 ) <matt@ender.com> on Wednesday September 07, 2005 @03:45PM (#13502784) Homepage
    Common Sense is sometimes violated egregiously by one side or another, and then this is raised as an issue. If a security researcher sends one email to some ill-checked "bugs@" address and gets no response, then just releases a couple weeks later, that's irresponsible.

    When someone emails a vendor many times at many addresses, finally gets a response saying "We're looking into it", and then the vendor proceeds to cease communicating for 45 days, that's irresponsible of the vendor, and the researcher has every right (and, some would say, a responsibility) to publish.

    Where's the middle ground? Well, it's a wide open space. Those without bad ulterior motives (ie, publicity-hungry or vendor-hating researchers, or head-in-sand or deny-first-ask-questions-later vendors) don't really have much difficulty negotiating the middle ground, because there's a lot of room. The problem, of course, is that the only time you *hear* about disclosure issues is when someone is being a muttonhead - either vendors trying to keep secrets, or researchers who feel no sense of responsibility or make the most token efforts to make contact. For the rest, there's little debate, because it's just easy to do it right.
  • by the arbiter ( 696473 ) on Wednesday September 07, 2005 @03:45PM (#13502786)
    I can't believe that this question:

    "should the onus be on the software firm to manage each issue and the relationship well, or does it fall to the morally responsible user?"

    is even being seriously asked. Of COURSE the onus is on the software company...they made the damn thing and are making the money from it, right?

    This could only be a question in the software world. Before making the jump to IT, I built acoustic guitars for years. If I BUILT it wrong, not only did I hear about it, and not only did a lot of my potential clients hear about it, but I HAD to fix it, and not next week, but ASAP.

    I'm honestly curious as to how the rules got changed for software.
  • Shouldn't it be who is responsible?

    Anyway, more on topic: I don't think it's a cut-and-dried situation. I think that disclosing a bug immediately can be good in one situation but harmful in another.

    For a company the size of Microsoft, sitting on a bug for 6 months to a year may be the time it takes to adequately test their patches. Remember, for years they strove to make their software work with everything under the sun and backwards compatible with everything under the prehistoric sun. Its co
  • The responsibility to the public is to minimise their risk.

    Full disclosure increases risk. Malicious people who would not have known about such vulnerabilities often learn of them through full disclosure.

    However, by keeping quiet about a vulnerability, you are enabling the vendor - who does not necessarily have the public's best interests as their primary concern - to put the public at further risk by not fixing these vulnerabilities promptly.

    The real question is how long a vendor should be allow

    • > The responsibility to the public is to minimise their risk.
      >
      > Full disclosure increases risk. Malicious people who would not have known about such
      > vulnerabilities often learn of them through full disclosure.

      Debatable. Before accepting this, I'd want to see the numbers. Let's face it, even a year -after- patches are released, many people are getting hit. There are too many variables in this for me to accept any answer without seeing some real analysis.

      And that kind of analysis is a job of wor
  • by Anonymous Coward
    This makes an interesting read

    http://wiretrip.net/rfp/policy.html [wiretrip.net]

    A well-thought-out document, written with input from big names in the computer security scene.

  • by jnadke ( 907188 ) on Wednesday September 07, 2005 @03:53PM (#13502882)
    But when my holes are open I close them quick before someone shoves something in there... like a Trojan.
  • Why not wait until you have a solution to the security problem before disclosing the problem?

    Granted, I understand that closed-source must make this difficult, but even so, a researcher who understands enough about a system to break it must surely understand enough to come up with a workaround.

    I know it is a lot easier to find flaws than fix them, but unless a researcher is willing to offer a fix, disclosure of security flaws doesn't do much to help:

    1. The fixing of security bugs pushes back new devel
  • by miffo.swe ( 547642 ) <daniel@hedblom.gmail@com> on Wednesday September 07, 2005 @03:56PM (#13502920) Homepage Journal
    Responsible disclosure has no real benefit to the end user. It may stop some percentage of big worm outbreaks, but it doesn't in any way make life easier for admins guarding sensitive information. From what I have understood, many exploits are used by crackers long before they are in the wild. That is, many networks and servers are broken into and mined for intel long before there is a patch, sometimes even for years.

    Responsible disclosure only lengthens the period in which crackers can use their exploits for real cracking. At best it gives software manufacturers a breather in which to drag their asses. It also doesn't promote real testing and auditing of software before it ships. As an end user I would much prefer more thoroughly tested software, and that includes OSS, thank you very much. Current release cycles are way too short to allow any time for testing.
    • Which statement describes more machines connected to the public internet:

      - computers belonging to home users or uninterested business users, with no training or expertise at securing and patching computers

      - computers that are
      -- managed by security experts
      --- that, given a full disclosure statement, would know what to do to mitigate their protected assets against the vulnerability and could do so within 24 hrs,
      -- given the above, are running machines configured such that they were actually vulnerable to the
  • by Anonymous Coward
    (Posting anonymously for career protection.)

    The problem has been posited as researchers v. software companies. The problem is, there are some researchers who work for software companies, some very dirty software companies, and their management would very much like to take market share from their competitors.

    Someone else gets hurt? That's Cisco's problem [wired.com].

    -----------

    WN: So ISS knew the seriousness of the bug.

    Lynn: Yes, they did. In fact, at one point
  • by Anonymous Coward on Wednesday September 07, 2005 @04:00PM (#13502964)
    Oddly enough, I used to work on a project for a huge company where this happened. We had a large search-engine-like project that was running much slower on a 16-proc Sun box than I thought it should. I noticed that 40% of our traffic came from the same 5 subdomains, representing 10,000 - 20,000 hits/hour. "Who uses a search engine that much?" I asked.

    Me: Something fishy is going on here.
    Boss: Report your findings to the project team.
    Project Team: Hmmm... that is fishy

    [weeks go by]

    Me: Something fishy is STILL going on here.
    Boss: Report your findings to the project team.
    Project Team: We don't have a disclaimer on our site that restricts the number of hits/hour. Contact legal.
    Legal: We'll get back to you.

    [weeks go by]

    Me: Something fishy is STILL going on here, and it's getting worse!
    Boss: Report your findings to the project team.
    Project Team: Did legal get back to you?
    Legal: We'll get back to you.

    [weeks go by]

    Me: Something fishy is STILL going on here, can I at least block them via hosts.allow or a firewall?
    Boss: Report your findings to the project team.
    Project Team: Hmmm... I don't know. Did legal get back to you?
    Legal: We'll get back to you.

    [weeks go by]

    Slashdot: "Your search engine is a known hack to alter page rankings at Google!"
    Slashdot Commenters: OH yeah, that's been a problem for a while. That damn company!
    Me: YIKES!! SLASHDOT has posted our company name in connection with fraud. AGAIN!
    Boss: FUCK! DO SOMETHING! This is a PR nightmare!
    Project Team: FUCK! DO SOMETHING! This is a PR nightmare!
    Me: Luckily, I have already written a script to do so. Give me a sec--
    Legal: We have shut down all admin access to this box, because there was this article on Slashdot, and we need to see if it's been hacked. We've opened a ticket.
    Me: GAAAAAHHH!!!

  • blame shifting (Score:5, Interesting)

    by cahiha ( 873942 ) on Wednesday September 07, 2005 @04:02PM (#13502990)
    The two groups who are responsible for security problems are software vendors and the companies that buy buggy software and use it for critical data. Those are the primary parties at fault when security problems cause loss of money or life. Unfortunately, both of those groups are increasingly successful at blaming other people and creating legal obligations for other people.

    What we really need is a market-driven solution. If MegaBank discloses 200,000 customer records to criminals due to a security bug in their Loses XP operating system, then they should be responsible for all the identity-theft-related expenses that that causes their customers, plus statutory damages (say, $1000/customer) for distress and inconvenience. If they do that sort of thing too often, they'll go out of business. That kind of financial risk will force them to demand guarantees from the creator of the Loses XP operating system, which will force that company to finally get a handle on security or go out of business itself and be replaced by companies that understand security. And if it turns out that it simply isn't possible to do something securely with software, well then only the non-computerized companies will survive in the market.

    So, what's the "responsible" way of disclosing security bugs? Any way you feel like it, as far as I'm concerned. The security problem in someone else's software is not your responsibility in any way, shape, or form.
  • It takes time ... (Score:2, Insightful)

    by newandyh-r ( 724533 )
    Let us suppose the researcher (R) reports a security hole to the software producer (let's call them M).
    First the report on the problem has to get to someone who has the knowledge to investigate and fix the problem. In a large organisation this is likely to take several days. It then has to be prioritised in his workload (Hey! you think there is only one problem?).
    Next he has to verify that the problem exists (hopefully R has provided a repeatable example).
    Then comes the first difficult part - identifyin
  • I'm a full disclosure sort of guy.

    I believe in people taking full responsibility for the software they use -- hence I use OpenBSD.

    I'm happy when I read about a worm destroying Windows and costing them time & money: if they want to be irresponsible and run that stuff, it is important that they get some negative feedback, else they are likely to persist in falling victim to keyloggers and other malware.

    I'm one of those, "it has to get worse before it gets better folks."
    • by bmajik ( 96670 ) <matt@mattevans.org> on Wednesday September 07, 2005 @04:51PM (#13503470) Homepage Journal
      I use OpenBSD too, so don't start with me.

      How can you be happy when someone else is suffering? It's not your grandmother's fault she uses Windows. OpenBSD is NOT appropriate for "home users", it's not designed to be, and it cannot be both as secure as it is and as functional as non-power-users require.

      Every operating system in use on PCs has security issues, even OpenBSD. OpenBSD is where it is because its entire focus is security/correctness.

      Security and correctness are NOT the most important aspects of general software development - if they were the only requirements, then a lead box buried in the ground would easily be more secure than OpenBSD. The issue is functionality vs. security and correctness.

      When there is something that works as well as Windows for what Windows users need to do, but has fewer problems, people will change to it in droves. For some people, that is Mac OS, although it has its own severe security problems. Do you laugh when people with Macs have to reboot their machines because of Software Update?

      In any case, 0-day full disclosure hurts the majority of computer users. No amount of pain will convince them to stop using Windows. If you want people to stop using Windows, develop a credible alternative. Don't sit and laugh at people who don't have better choices available to them, and then say things like "I support people making life harder for Windows users".

      • People who can't secure their computer shouldn't connect it to the internet.

        What little negative feedback that these people get is good for all of us -- the zombies are a threat to us all, if only due to a DDOS.

        Yeah, I'm happy that Mac users have to reboot and suffer their share of problems. They've chosen to hand their security over to Apple. It is important that they suffer some consequences now and then.

        I guess I wouldn't be such a crab if running a BSD box wasn't so irritating.
  • Microsoft publicly chastises security researchers who don't follow its rules.

    It's simple. Researchers should form an organization and make their own rules regarding disclosures. Then follow them to the letter and expect the companies to do the same.

    Both parties would fall under the umbrella of the group and have one set of procedures/rules for all, not separate procedures/rules for each company.

    Of course doing this is the hard part. It could be funded by the major players. It would save them face and stre
  • There are several issues.

    -Not disclosing the vulnerability is a bandaid. If one person found it, others can as well. Not disclosing vulnerabilities can -never- be viewed as more than a temporary delaying tactic. Once a vulnerability is known, we can picture a clock that starts a countdown, ticking off the days to an actual exploit.

    This countdown starts when the vulnerable system is -released-, not when researchers discover the vulnerability or the vulnerability is discussed openly. Absolutely no one can pre
  • An article by Mary Ann Davidson (CSO, Oracle)

    Security researchers problematic bunch? [zdnet.com.au]
  • by Geak ( 790376 ) on Wednesday September 07, 2005 @04:23PM (#13503214)
    Step 1) Find bug.
    Step 2) Write exploit
    Step 3) Write fix
    Step 4) Let vendor know about security flaw and show them the exploit. Tell vendor you want X amount of dollars for the fix within Y days or you will release said exploit publicly.
    Step 5) If vendor doesn't put up the dough or produce a publicly available patch within Y days, patent said fix and disclose exploit to the public.
  • ..., but if a vendor like Microsoft takes six months to a year to fix a flaw ...

    Now, I'm not the Microsoft apologist of Slashdot, but can we at least throw around the names of some other extremely bad abusers of this "sit on exploits/bugs and fix them when we feel like it" policy?

    Oracle and Cisco should also be admonished for their response time to fix exploits/bugs disclosed to them as well. Cisco and their Black Hat convention fiasco proves that MSFT isn't alone and really shouldn't be singled o
  • by Matt_Bennett ( 79107 ) on Wednesday September 07, 2005 @04:29PM (#13503272) Homepage Journal
    At the very least, I believe in full disclosure to the company that writes the software - but public disclosure opens up too many risks. Yes, someone else may find it, but I really don't think that Microsoft purposely leaves out fixes - they have lots of fixes to put in, and they have to prioritize them. Yes, if it is public, it gets higher priority, but they have finite resources to investigate the flaw, find the fix, implement it, and then test to make sure that they don't break anything else.

    I think that what the open source community fails to realize is the huge amount of effort that goes into testing. I used to work for one of the big computer manufacturers- (rhymes with Hell), and the software that is released on a system (my experience is with servers) is usually frozen months in advance so that the different phases of test can pound on it, not just so that they can find errors, but so that they can characterize those errors. The fix for a critical error may be done in a day, but the testing may take weeks- to test with different hardware, system software, amount of RAM, HD, different processors... the list is long- but ultimately what they are trying to make sure is that a fix for one thing doesn't cause something else to break in a worse way. This is particularly important when you are maintaining code someone else wrote- you may not fully realize why it was done that way... until it is too late.

    Unfortunately, the IP issues often force companies not to reveal what they are doing. Every single person I worked with was extremely conscientious about their work, they take flaws very seriously, but remember, their priorities are not the same as yours, and if you could see the scope of what they are working on, you might be able to better understand why their priorities are where they are. On the other hand, they don't know you from Adam, and you might just be one of those black hats- so they can't reveal the HUGE GAPING HOLE they just found, to show you that they really do care about the tiny (but significant) hole you found.
    • I have a slightly different take on it. As a user, whether or not the vendor has released a patch there are still things I can do to mitigate my own risk from the vulnerability. I can, for example, restrict or block access to the vulnerable services, or change from the vulnerable software to some other software that's not vulnerable. But I can only do that if I know the problem exists. As a user I don't particularly like a vendor deciding that their QA resources are more valuable than my network and systems

      • I see your point- but keeping the disclosure of the flaw under wraps may be more effective at keeping the black hats out- in the short term. Ultimately, it seems that the Internet as a whole would be better off if the disclosure was kept away from the black hats until it is reported in the wild- which I would guess would be the responsibility of a trusted source (CERT?). Once it is reported and there is a way to stop it- hell yes, full disclosure, ASAP.

        The problem I have with full, immediate disclosure is
  • Give too little time to companies, and you'll be helping the blackhats. Give them too much time, and we end up with a phantom enemy that is surely out there, and the public isn't aware until it's too late (i.e. credit card numbers stolen, etc).

    But I'd say a month is more than enough. Unless, of course, a new (and unpatched) version of their software is going to be released earlier. In that case, I'd give the company a week max.
  • by jehreg ( 120485 ) on Wednesday September 07, 2005 @04:49PM (#13503453) Homepage

    The Openswan project [openswan.org] is directly affected by this, this month. We were contacted by an agency and asked to sign a non-disclosure agreement, following which they would tell us of a possible vulnerability in our code. This non-disclosure would prevent us from releasing details of the vulnerability until such time as the rest of the "group" was ready for it to be announced.

    In the case of an Open Source product, we cannot even do a "stealth" fix; we have to describe what each patch does when we commit it to CVS. That would make the vulnerability public and would be a no-no to this agency.

    In essence, the agency could decide which bug we could fix and which ones we could not.

    I see this as the equivalent to blackmail: Sign our non-disclosure and we will give you a possible vulnerability; don't sign it and you will look bad when the vulnerability is made public.

    I am a CISSP, and quite willing to hold off on the patch until others can fix their code if the allowed time is reasonable, but the non-disclosure is broad and has no time limitations... So what the heck should we do?

    • by Todd Knarr ( 15451 ) on Wednesday September 07, 2005 @05:12PM (#13503665) Homepage

      Refuse to sign the NDA. Then make a public announcement: "$AGENCY has notified us that a possible vulnerability exists, but they won't tell us what the vulnerability is unless we sign an NDA. Committing a patch for the problem to CVS or releasing fixed code before they allow it would violate the NDA, and we aren't willing to agree to deliberately not fix a security bug." Let the resulting PR headache be $AGENCY's headache.

  • Quite frankly, we wouldn't have to argue about the whos or the whats. It is common sense, actually, to expect any purchaser of software to audit the systems it puts in place.

    Which is one of the reasons we audit all of the software on our network, from our ERP and accounting to our warehousing.

    Obviously, you can't do this with proprietary software.

    Too bad for you, you got ripped.

    For us, we do our own security audits because we can, and because it makes sense to do so.

    After all, the manufacturer of the software doesn't
  • are always a problem, because if one person detects a flaw, it is always possible that other more malicious persons also know about that flaw. If a flaw is detected any counter-measure against the flaw should also be publicized.

    Reasonable notification time shouldn't be more than 30 days - if a company has a longer lag than that to fix a security flaw/problem, then their business model is flawed.

  • by mmmbeer ( 9963 ) on Wednesday September 07, 2005 @05:03PM (#13503580) Homepage
    I have actually been in the middle of this before, having found a way to passively grab billing information from a large online subscription-based game. A colleague and I put together a complete technical description of the issue, theoretical "worst case" possibilities, and a working proof-of-concept exploit, making it clear that we would wait for a suitable fix to be in place before we released details. We got an email back within a few hours from one of the lead programmers, who wanted to do a conference call, but when we agreed, we never heard from him again.

    We were then contacted by a manager type who told us this 2-year-old hole was already being fixed. We offered our expertise to review their solution, but they weren't interested. Instead, two special agents from the FBI showed up at my work to interrogate me about my involvement in felony extortion.

    After a 4-hour interview session (during which I was escorted to the restroom and water cooler by an armed agent), and after I had provided all correspondence and source code, they left and never came back. The best part was the WTF? look I got when I described how this company was transmitting credit card information XORed with a cleartext seed transmitted in the previous packet (a short sketch of why that is trivially reversible follows this comment). Second place had to be the "Why are we wasting our time here?" look I got when I explained that the only credit card information I had actually seen was my own.

    The hole was silently fixed in a patch distributed weeks later, and replaced with another algorithm which was also easily exploited. Again, we went through the entire process of creating documentation and a working exploit and sending it to the company. This hole was patched months later.

    The company's response was always hostile at best, which I found odd considering we were trying to help them protect their customers. They never disclosed (publicly or to the credit corporations providing merchant accounts) that there was any issue.

    American Express has an extremely specific policy on security, including a minimum-requirements policy which all online merchants are supposed to adhere to. They also require you to notify them if security has been compromised. California law also states that card holders must be notified if their account information may have been compromised. Apparently it's not a big deal if no one abides by these rules.

    When the company is unresponsive in resolving security flaws, I feel that disclosure is imperative to allow users to take their own precautions to protect themselves. I'm not malicious, but I imagine that there are a lot of people smarter than I am that are.
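
    A hypothetical Python sketch of why the XOR scheme described above is worthless (the seed bytes, packet layout, and card number are invented for illustration, not taken from the game's actual protocol): anyone who captured both packets recovers the plaintext with a single XOR.

    # XOR "encryption" with a seed that was itself sent in the clear in the previous packet.
    def xor_bytes(data: bytes, seed: bytes) -> bytes:
        """XOR each byte of data with the seed, repeating the seed as needed."""
        return bytes(b ^ seed[i % len(seed)] for i, b in enumerate(data))

    seed_packet = b"\x13\x37\xc0\xde"      # packet 1: the "seed", visible on the wire
    card_number = b"4111111111111111"      # packet 2 payload: a test card number, not real data
    obscured = xor_bytes(card_number, seed_packet)

    # A passive eavesdropper who saw both packets undoes the scheme in one step:
    recovered = xor_bytes(obscured, seed_packet)
    assert recovered == card_number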
  • Should the onus be on the software firm to manage each issue and the relationship well, or does it fall to the morally responsible user?

    This is a bullshit question. I don't want to hear another single question like this until we have ethical accountability from the corporate execs who are running the companies that are bitching about security disclosures.

    Yesterday I read the article about Steve Ballmer ranting like a psychopath calling for the head of the competition. I sent that article to some business ex
  • Tao says : Morality is the penury of faith and trust and the beginning of confusion.
  • What if someone were to come up with a way to trivially factor the product of two very large primes? This would pretty much ruin RSA. What would be a responsible level of disclosure for something like that? How long until a public announcement without the method? How long until a public announcement with the method? How do you ensure someone doesn't try to snuff you to be sure the method never gets out?
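
    As a toy illustration of why that would be catastrophic (textbook-sized numbers, nothing like real key lengths, and plain Python rather than any real cryptographic library): once an attacker can factor the public modulus n into its primes p and q, rebuilding the RSA private exponent is a single modular inverse, and anything encrypted to that key can be read.

    # Toy RSA with tiny textbook primes; real keys use primes hundreds of digits long.
    p, q = 61, 53                 # the secret primes a fast factoring method would recover
    n = p * q                     # 3233, the public modulus
    e = 17                        # public exponent
    phi = (p - 1) * (q - 1)       # Euler's totient of n
    d = pow(e, -1, phi)           # private exponent via modular inverse (Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (n, e)
    recovered = pow(ciphertext, d, n)  # the attacker, having factored n, can now decrypt
    assert recovered == message

    The same recovered private exponent would also let the attacker forge signatures made with that key, which is part of what makes the disclosure question for such a result so fraught.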
