
How Do You Know Your Code is Secure? 349

bvc writes "Marcus Ranum notes that 'It's really hard to tell the difference between a program that works and one that just appears to work.' He explains that he just recently found a buffer overflow in Firewall Toolkit (FWTK), code that he wrote back in 1994. How do you go about making sure your code is secure? Especially if you have to write in a language like C or C++?"
This discussion has been archived. No new comments can be posted.

  • by Llywelyn ( 531070 ) on Monday January 08, 2007 @06:36AM (#17506184) Homepage

    0) Don't "roll your own" security unless absolutely necessary. Find someone else's implementations and work with those.

    1) Design the code for security, and code to that design. I've seen security bugs creep into code because it was never designed to be secure.

    2) Use static code checkers--such as Splint [splint.org] for C/C++ and FindBugs [sourceforge.net] for Java--that look for security vulnerabilities.

    3) Peer reviews/code audits. Sit down with your code (and have others who know how to look for security vulnerabilities sit down with your code) and do a full review.

    Nothing is foolproof, but every little bit helps. It should be noted that all of the above also improve the overall quality of the code and reduce the overall number of bugs: finding existing implementations of features can reduce maintenance and reduce bugs; designing the code and putting it through a proper design review can catch a lot of logic problems and ensure that the code fits the requirements list (I've seen a huge number of synchronization bugs in Java simply because the author didn't know how to use synchronization properly); static code checkers find a lot more than just security bugs; and peer reviews/code audits can help isolate a variety of problems.
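
    As a throwaway illustration (hypothetical function names, not from any particular project), the kind of defect both the checkers and the reviews are aimed at is an unchecked copy into a fixed-size buffer -- and the boring fix:

    #include <stdio.h>
    #include <string.h>

    /* the kind of defect a static checker (or a reviewer) should flag */
    void greet_risky(const char *name)
    {
        char buf[32];
        strcpy(buf, name);                       /* overflows if name is 32 bytes or longer */
        printf("hello %s\n", buf);
    }

    /* the boring fix: bound the copy and guarantee NUL termination */
    void greet_safer(const char *name)
    {
        char buf[32];
        snprintf(buf, sizeof buf, "%s", name);   /* truncates instead of overflowing */
        printf("hello %s\n", buf);
    }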

  • Valgrind (Score:5, Informative)

    by chatgris ( 735079 ) on Monday January 08, 2007 @06:50AM (#17506266) Homepage
    By using valgrind. It's a virtual machine of sorts that runs your code and checks for any memory problems at all, including use of uninitialized memory. Combine that with thorough test cases, and you can be virtually assured that you have no memory errors in your C/C++ code.

    However, security is a lot more than buffer overflows... but at least it brings you up to the relative security of Java, with speed to boot.
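
    As a concrete (made-up) example of what memcheck catches:

    #include <stdlib.h>

    int main(void)
    {
        int *p = (int *) malloc(4 * sizeof(int));
        int sum = p[0];       /* "use of uninitialised value" -- the block was never written */
        p[4] = 42;            /* "invalid write" -- one element past the end of the block */
        free(p);
        return sum != 0;      /* keeps the compiler from optimising the reads away */
    }

    Compile with -g and run it under "valgrind ./a.out"; memcheck points at the offending source lines. The caveat raised elsewhere in the thread still applies: it only sees the paths your tests actually execute.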
  • by Rogerborg ( 306625 ) on Monday January 08, 2007 @06:53AM (#17506280) Homepage
    Another issue with (manual) testing is that testers tend to pursue bugs aggressively in whatever area they first happen to find some, which means you get good depth coverage, but can end up missing out on testing whole areas of functionality.
  • by ojQj ( 657924 ) on Monday January 08, 2007 @06:54AM (#17506286)
    Unfortunately, the STL isn't binary compatible. That means you have to make sure that all the components of your program which accept and pass strings have been compiled against exactly the same version of the STL. This in turn makes it impossible to release different parts of your program separately from each other if you are using the STL at the interfaces between your components.

    There are a couple of solutions to this problem:

    1.) Pass character arrays at the interfaces between your components and immediately put those character arrays under the control of your library once they come in.
    2.) Write or find your own string library and pass that string class between program components. Be careful when doing this. Mistakes will come back to byte you.

    All of it's kind of nasty. It'd be nice if C++ standardized its binary representation, even if it's only a standard valid per platform.

    Then there's also:

    3.) Choose a language which, unlike C++, already has a standardized binary representation for strings, or a system-global interpreter for a varying binary representation. This is just an extension of the "higher-level library which does the memory management for you" option, really.

    Don't get me wrong -- I'm agreeing with the parent post. I'm just adding a caveat.
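
    For what it's worth, a minimal sketch of option 1 (hypothetical names): only plain char data crosses the component boundary, and the library copies it into a std::string the moment it arrives:

    // C-compatible boundary: no STL types in the exported interface
    #include <string>

    extern "C" void lib_set_name(const char *name);

    namespace {
        std::string g_name;   // internal only; never handed across the boundary
    }

    extern "C" void lib_set_name(const char *name)
    {
        g_name.assign(name ? name : "");   // copy on entry, as described above
    }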
  • Re:I don't. (Score:5, Informative)

    by Anonymous Coward on Monday January 08, 2007 @06:56AM (#17506298)
    Grammar tip: "Effect" is a verb. "Affect" is a noun.

    Um, how's that?

    Your poor grammar has a chilling effect on me. If I were you, I'd find a way to effect an improvement in your knowledge. Luckily it affects me only a little. But the fact that so few seem to understand that these two words are both verb and noun leaves me of sad affect.
  • String overflows (Score:3, Informative)

    by Rik Sweeney ( 471717 ) on Monday January 08, 2007 @07:08AM (#17506344) Homepage
    I think for some people, moving from a language like Java to C can cause a multitude of problems, since there's no bounds checking by default and overruns aren't caught.

    For example, I recently fixed a bug in Blob And Conquer to do with strings; the code was something like this:

    char nm[2];

    nm[0] = mission[11];
    nm[1] = mission[12];

    The code then went on to do a

    missionNum = atoi(nm);

    Most of the time this'd work OK because of the way atoi works. Other times, though, it'd stray off into adjacent memory (nm is never NUL-terminated, so atoi reads past the end of the array) and return a number with three or more digits instead.

    Obviously there's an easy way to fix it.
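
    Presumably something along these lines (a sketch, not the actual patch): give nm room for a terminator and set it, since atoi expects a NUL-terminated string.

    char nm[3];

    nm[0] = mission[11];
    nm[1] = mission[12];
    nm[2] = '\0';   /* without this, atoi keeps reading whatever happens to follow nm */

    missionNum = atoi(nm);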
  • by shutdown -p now ( 807394 ) on Monday January 08, 2007 @07:59AM (#17506630) Journal
    Uh... let's see. Open the most recent ISO C++ standard, and search for all occurrences of "undefined". Repeat for "implementation-defined". Make a note of how many of those are from the sections related to the Standard Library. Then meditate on the results.

    Yes, sure, if you use the STL, you need not worry about getting the buffer size wrong. And that's about it - container indexing is not bounds-checked (unless you use at() instead of operator[] - and that's about the only run-time safety check I remember seeing in the STL!), iterators can go outside their container without notice, or can suddenly become invalid depending on what their container is and what was done to it. Even leaving library issues aside, there are some nasty things about the language itself - it's just way too easy to end up with an uninitialized variable or class member, or to mess up the order of field initializers in a constructor.

    This is not to say that C++ is not a good language. All of the above are features in the sense that they are there for a reason - but they certainly don't make writing secure software easier.
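
    Two of those points in miniature (toy code, nothing from any real project):

    #include <stdexcept>
    #include <vector>

    struct Account {
        int balance;
        int limit;
        // Members are initialised in declaration order (balance, then limit),
        // not in the order they are written below -- so balance is copied from
        // limit while limit is still uninitialised. g++ -Wall warns via -Wreorder.
        Account(int l) : limit(l), balance(limit) {}
    };

    int tenth(const std::vector<int> &v)
    {
        // v[10] on a shorter vector is undefined behaviour with no diagnostic;
        // v.at(10) is the one bounds-checked access and throws std::out_of_range.
        try {
            return v.at(10);
        } catch (const std::out_of_range &) {
            return -1;
        }
    }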

  • by Anonymous Coward on Monday January 08, 2007 @09:27AM (#17507192)

    If you program using strictly functional programming, you can not only verify that your code is 100% secure, but you can even automate the process.
    You cannot write a program in a Turing-complete language to determine if another program in a Turing-complete language is 100% secure. It's trivially reducible to the halting problem. You can't automate it, at least not with 100% accuracy, in every case.

    Assume you have an algorithm (however complex) that can determine if a program in some Turing-complete language is secure; call it IsSecure(). IsSecure() is provably secure, because you've run it on itself.

    Now, write a program that has a security hole if and only if IsSecure() returns true:

    #Program A
    start(input)
    {
            if(IsSecure(input))
                  ExposeSecurityHoleInSelf()
            else
                  #The hole must be in the function IsSecure(), which is silly, because you've used IsSecure() to secure IsSecure()
    }


    Call program A passing itself as input.
    Q.E.D.

  • Re:Easy (Score:3, Informative)

    by swilver ( 617741 ) on Monday January 08, 2007 @10:00AM (#17507524)
    Buffer overflows may not be the only security problem, but at the moment they are by far the most common security problem. A race condition MIGHT allow some unauthorized access to certain things... more likely, however, it will just show up as a minor malfunction in the program without any security implications at all.

    It certainly won't allow you to execute arbitrary code in, for example, a Java application -- in fact, you'd have to find a bug in the JVM itself or one of the native implementations of basic classes like String to have any chance of that. That is, however, highly unlikely given the amount of use these core parts of Java see.

  • by Random Walk ( 252043 ) on Monday January 08, 2007 @10:06AM (#17507596)
    Although many (if not most) open-source apps are written in C/C++, there are no really useful open-source tools to check C/C++ code for security:
    • valgrind is very nice, but it only reports memory corruption if it actually occurs (i.e. you have to trigger the bug first). Not very useful for finding bugs you haven't already triggered.
    • splint doesn't understand the flow of control, so it needs tons of annotations to work properly. A royal PITA if you work on existing code. Also, it just shifts the problem: how do you now prove that your annotations are correct? Besides, it produces tons of spurious warnings.
    • flawfinder, rats, et al. just grep the code for suspicious functions like strcpy(). They don't understand C/C++, and thus produce warnings even in cases where it's perfectly clear that these functions are used safely.
    • some academic projects (e.g. uno, ccured, ...) look interesting, but usually don't work on nontrivial code (at least not unless you are part of the development team and know the required wizardry to make them work). Also, most academic projects go into limbo as soon as the thesis is written.
    I think one of the major problems is that commercial vendors such as Coverity offer free service at least to major open-source projects, thus stifling any initiative to produce open-source counterparts of such tools.
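
    For example (a deliberately trivial case), a grep-based scanner will flag the strcpy() below even though the copy obviously can't overflow, because the tool never looks at the sizes involved:

    #include <string.h>

    int main(void)
    {
        char buf[16];
        strcpy(buf, "ok");   /* 3 bytes into a 16-byte buffer: safe, but still reported */
        return 0;
    }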
  • by Decaff ( 42676 ) on Monday January 08, 2007 @10:18AM (#17507710)
    Why do people keep this meme that C/C++ is so insecure? Remember, deep down inside the other languages, there often is a compiler, library, interpreter, etc written in C/C++.

    Which is irrelevant. That code can be thoroughly tested and safe, even with the fundamental issues of C++. What matters is your code. You probably won't get the chance to test that code thousands or millions of times the way the compiler/library or interpreter has been.

    It's not that C/C++ is so insecure by itself, the problem is that programmers may not have used the best programming practices. There are plenty of libraries for handling strings and memory allocation in C, in C++ there are string and storage classes that do as much or as little checking as you need.

    C/C++ IS insecure by itself, because of what it allows you to do. No programmer is perfect, and we all make mistakes. Driving without a safety belt is fundamentally less safe, and you can't argue it away by talking about 'people not driving skillfully enough'.

    When you are an expert programmer there are places where you need more efficiency than the super-safe string routines can give you. It's the job of the expert to determine exactly how to balance efficiency against security, and only C/C++ can give you this balance.

    I would be very interested to know exactly when you think this is the case. If it is, it is only for the most specialised circumstances. The problem with C/C++ is that you have this division between safe (with checks) and fast. Other languages get around this problem by including safety but allowing the safety checks to be optimised away by code analysis, often at run time. For example, you could have code something like this:

    int[] array = new int[4];
    for (int i = 0; i < 4; i++) array[i] = i;

    The compiler/runtime can analyse this code, and because it is obvious that the bounds of 'array' are never exceeded here, remove any checking and optimise hugely.

    It is a myth that you need to balance efficiency against security the C/C++ way.
  • by Anonymous Coward on Monday January 08, 2007 @10:23AM (#17507780)
    I love functional programming, I'm not bashing it, but it's not enough.


    If you know lambda calculus then you also know the Halting Problem. There is an entire set of exploits based upon it. Real ones; they don't generally lead to data compromise, but they negatively impact performance and hide other things. Snort, for example, allows regular expressions to be used in signatures, and there are pathological datasets that cause Snort to spend tens of thousands of times more time processing regexes than initially expected. There are signatures with datasets that can cause a modern machine to spend minutes processing regexes while real hacker data is passing through unseen. It's a classic halting-problem-style example.
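
    As a toy illustration of the pathological-regex point (sketched with C++11's std::regex, which obviously postdates this thread; the exact behaviour depends on the regex engine):

    #include <iostream>
    #include <regex>
    #include <string>

    int main()
    {
        // Nested quantifiers plus an input that almost matches: a backtracking
        // engine tries an exponential number of ways to split up the 'a's.
        std::regex re("(a+)+$");
        std::string input(25, 'a');
        input += "b";

        try {
            std::cout << std::regex_match(input, re) << "\n";   // may take a very long time
        } catch (const std::regex_error &) {
            std::cout << "regex engine gave up\n";              // some implementations bail out instead
        }
        return 0;
    }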


    FP is about algorithm correctness.


    Another problem is that programs are attacked at their touch points with the world: users and other programs. FP nicely ignores those problems as side effects and doesn't have a clearly defined lambda calculus for dealing with them.


    I definitely think FP solves some set of problems and should be used more, but it won't make anything more secure any time soon.

  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Monday January 08, 2007 @10:34AM (#17507922) Journal

    whereas in a highly abstracted C++ program using the STL with lots of objects being copied and references flying around it can be a LOT harder to figure out what's really going on, especially since different compilers do different things under the hood.

    Those bugs aren't harder to track down than "old-style" bugs; in fact I think they're vastly easier to track down than, say, a wild pointer. The difference is that you're less experienced at dealing with the new problems, so they seem harder to you. With time and practice, you'll see through copy/reference errors quickly. In the meantime, a little discipline can cover your lack of experience -- never store raw pointers in collections, always "objects". If you don't want to create copies, then store objects of a smart pointer class. In fact, avoid ever using raw pointers at all. *Always* assign the result of a 'new' operation to a smart pointer (auto_ptr works for a surprisingly large set of cases, but you may have to get a reference-counted pointer type or similar for others -- the BOOST library has some good options if you haven't already rolled your own).

    If you really run into different behavior with different compilers, then at least one of the compilers is buggy. That does happen, but it's a lot rarer today than it was a few years ago. When you find that situation, wrap the tricky bit behind another abstraction layer and implement compiler-specific workarounds so that your application code can just use the abstraction and get consistent behavior. In most cases, someone else has already done this work for you. Again, look into BOOST.
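
    A small sketch of the "no raw pointers in collections" rule, using boost::shared_ptr (any reference-counted pointer would do; Widget is made up):

    #include <boost/shared_ptr.hpp>
    #include <vector>

    struct Widget {
        explicit Widget(int id) : id(id) {}
        int id;
    };

    typedef boost::shared_ptr<Widget> WidgetPtr;

    int main()
    {
        std::vector<WidgetPtr> widgets;
        // The result of 'new' goes straight into a smart pointer; the vector
        // copies the pointer (bumping the reference count), never the Widget,
        // and everything is freed when the vector goes out of scope.
        widgets.push_back(WidgetPtr(new Widget(1)));
        widgets.push_back(WidgetPtr(new Widget(2)));
        return widgets.back()->id;
    }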

  • Re:I don't. (Score:1, Informative)

    by Anonymous Coward on Monday January 08, 2007 @10:39AM (#17507964)
    Grammar tip: "Effect" is a verb. "Affect" is a noun.

    Perhaps your constant exposure to the poor grammar here has affected your ability to use it properly - I hear it's a common effect amongst Slashdot readers and posters.

    Oh, and as a few other people have stated, they can both be used either way, but the common usage is exactly the opposite of what you've stated.

    HTH, HAND.
  • Re:Verified (Score:2, Informative)

    by Anonymous Coward on Monday January 08, 2007 @10:57AM (#17508132)
    On one hand your comment is funny due to the chronic security risks associated with MS products.

    On the other hand, MS has some of the best code analysis technology available in Prefast, FXCop, SAL, and Application Verifier:

    http://msdn.microsoft.com/msdnmag/issues/05/11/SDL/default.aspx [microsoft.com]

    Disclaimer from the linked content:

    "Security tools will not make your software secure. They will help, but tools alone do not make code resilient to attack. There is simply no replacement for having a knowledgeable work force that will use the tools to enforce policy."

  • by asuffield ( 111848 ) <asuffield@suffields.me.uk> on Monday January 08, 2007 @10:57AM (#17508144)
    You cannot write a program in a Turing-complete language to determine if another program in a Turing-complete language is 100% secure. It's trivially reducible to the halting problem. You can't automate it, at least not with 100% accuracy, in every case.


    Congratulations, you have won today's "Ignorant undergraduate misunderstanding of the Halting problem" prize.

    You're wrong on every significant point. You can write a program in a turing-complete language to determine if another program in a turing-complete language is 100% secure. The way that you specified implementing such a program is flawed. The Halting problem says that there exist certain ways to specify a problem which admit no solution, even though a solution to the problem exists. It does not say that there exist no alternative ways to specify the problem which do admit solutions. People always seem to think that the purpose of the Halting proof was to demonstrate that the real-world problem couldn't be solved - this is wrong. The purpose of the proof was merely to demonstrate that there exist certain non-trivial, interesting mathematical problem specifications which don't have solutions. This has interesting results in computability theory. It has very little relevance to the question of what sort of software we can write. It's all about how you reduce the real-world problem into a mathematical problem specification.

    Assume you have an algorithm (however complex) that can determine if a program in some Turing-complete language is secure; call it IsSecure(). IsSecure() is provably secure, because you've run it on itself.

    Now, write a program that has a security hole if and only if IsSecure() returns true:

    #Program A
    start(input)
    {
                    if(IsSecure(input))
                                ExposeSecurityHoleInSelf()
                    else
                                #The hole must be in the function IsSecure(), which is silly, because you've used IsSecure() to secure IsSecure()
    }


    IsSecure returns false when passed this program as input. It doesn't matter that you think the answer is silly. This program is not secure because there exists a call to ExposeSecurityHoleInSelf in it and IsSecure failed to prove that this call was unreachable, or just didn't give a damn that it was unreachable. That is defined as an insecure program for the purpose of the IsSecure function. By specifying the problem in this way, we admit the possibility of a solution, and the Halting problem is avoided.

    In most cases, the Halting problem can be avoided in this manner. Nothing compels you to define your program as having no false positives.
    For the purposes of automated security validation, false positives are not a serious problem - we can easily write the program in a manner that can be proven secure by a given prover. We don't have to accept arbitrary programs as input.

    In practice, we don't do it like this. The function we use in the real world is is_proof_of_security_valid(), and it takes two inputs - a program and a proof of the program's security. The function checks that the proof is valid for this program. The proof itself is generated semi-automatically, but some parts are supplied by humans - typically via markup in the program's source (lint tags are a classic example of this sort of thing). It's much easier to write the thing this way.
  • by Curien ( 267780 ) on Monday January 08, 2007 @11:35AM (#17508630)
    And that's exactly why so many things are "implementation defined" or "undefined". Many real-world users of C++ demand that, for instance, vector::iterator be a typedef for a raw pointer for efficiency reasons. Other equally-important users would prefer an iterator type that guarantees sensible behavior in the face of real errors. The ISO standard allows for both behaviors by conforming C++ implementations.

    There's something attractive about the Java and C# languages having all constructs so well-defined. But both of those languages could afford not to support real hardware. Both target abstract machines and are happy with the results. C++ can afford no such conceit: it thrives in high-performance, customized, and otherwise exotic environments.
  • by cnettel ( 836611 ) on Monday January 08, 2007 @11:38AM (#17508674)
    Dynamic linking will also frequently create thunking tables close together, and lots of C code has other function-pointer tables in special places anyway. (In a Win32 environment, you get such a table for any COM object; it doesn't matter whether you implement it in C, for example.)
  • Re:Easy (Score:3, Informative)

    by h2g2bob ( 948006 ) on Monday January 08, 2007 @11:44AM (#17508748) Homepage
    I'd suggest:

    #define BUFSZ 1024
    char buf[BUFSZ];

    printf("Enter something: ");
    fgets(buf, BUFSZ, stdin); /* the SECURE way to do it, don't even think of using gets() or scanf()! */
    strip_newline(buf, BUFSZ); /* some function to remove trailing newline */
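
    Filled out into a complete program -- the strip_newline here is just one plausible way to write the helper mentioned above:

    #include <stdio.h>

    #define BUFSZ 1024

    /* cut the string at the first newline, if any */
    static void strip_newline(char *s, size_t n)
    {
        for (size_t i = 0; i < n && s[i] != '\0'; i++) {
            if (s[i] == '\n') {
                s[i] = '\0';
                break;
            }
        }
    }

    int main(void)
    {
        char buf[BUFSZ];

        printf("Enter something: ");
        if (fgets(buf, BUFSZ, stdin) != NULL) {   /* fgets writes at most BUFSZ-1 chars plus '\0' */
            strip_newline(buf, BUFSZ);
            printf("You entered: %s\n", buf);
        }
        return 0;
    }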
  • Re:You don't (Score:2, Informative)

    by Anonymous Coward on Monday January 08, 2007 @12:07PM (#17509104)
    The purpose of his macro isn't to fix the dangling reference problem. The purpose is to help developers find dangling reference problems in their code.
  • by Coryoth ( 254751 ) on Monday January 08, 2007 @01:29PM (#17510354) Homepage Journal
    There are a few misconceptions here which deserve comment.

    You can't prove, for example, whether a lambda program will terminate (Halting Problem), and in fact you can prove that you can't prove this.

    This simply isn't the case - there are lots of programs for which you can easily prove termination. The catch with the Halting problem is that you cannot find a procedure that will work for all programs. In other words you may find yourself in a situation where you cannot prove termination for certain programs; that does not mean, however, that you can't prove termination for others, nor that such proofs are invalid. Trying to prove termination is far from futile - the Halting problem will at worst tell you that you might not be able to do it, but if you can (and often enough you can indeed) then the proof is perfectly valid.

    If you have a sufficiently well expressed specification for your program, you can verify that the program and the specification match. Unfortunately, if you have a specification that concrete, you can just compile it and run it.

    Again, this isn't quite true. Certainly, for some problems, an accurate specification will be equivalent in complexity to an implementation. On the other hand, there are a great many problems where that isn't the case. Take a specification for finding a square root (to within a specified error tolerance epsilon): given an input number x, the function must produce a value y such that abs(x - y*y) < epsilon. That's a complete specification (and it isn't hard to formalise that into a suitable specification language) but you certainly can't compile and run it and get anything useful - the actual implementation of how to find the square root is going to be more detailed, and quite important.

    Similarly, we can specify a sort function: given a list of items (comparable by '<'), the function must return a list that is a permutation of the input list (that is, they contain the same elements), and such that for each list index i (except the index of the last element) result[i] <= result[i+1]. Again, that's a complete specification for a sort function - it ensures that the function does indeed sort the list. On the other hand it is not compilable (except, maybe, into bogosort), and any particular sort implementation will have to use a specific sorting algorithm (be it quicksort, mergesort, or otherwise) which will undoubtedly be more complex than the simple specification given.
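
    To make that concrete: the sort postcondition above is cheap to check even though it says nothing about how to sort. A sketch (C++11 for std::is_permutation):

    #include <algorithm>
    #include <vector>

    // True iff 'output' satisfies the specification: it is a permutation of
    // 'input' and is in nondecreasing order. Which algorithm produced it is
    // deliberately not part of the check.
    bool meets_sort_spec(const std::vector<int> &input,
                         const std::vector<int> &output)
    {
        return input.size() == output.size()
            && std::is_permutation(output.begin(), output.end(), input.begin())
            && std::is_sorted(output.begin(), output.end());
    }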
  • by yermoungder ( 602567 ) * on Monday January 08, 2007 @04:50PM (#17513616)
    FUD alert!!!

    C# might be appropriate for your domain, but it certainly isn't in Ada's: safety-critical and mission-critical systems.

    It's also easy to learn, as can be seen here: http://www.stsc.hill.af.mil/crosstalk/2000/08/mccormick.html [af.mil]
