Editorial

Certifying Software As Secure? 84

perikalessin asks: "It's obvious that software companies can grow into very large companies and still tend to be reactive rather than proactive about security, rarely seeking security audits, evaluations, and certifications for their products. After several weeks of on-and-off research, interwoven with other projects, I finally found the important keywords for anyone who wants to look into this topic: 'security evaluation', 'Common Criteria', 'ICSA', and the older phrases 'ITSEC/TCSEC' and 'Orange Book'." If you've ever performed security audits on software, your input would be greatly appreciated.

"It turns out that the much-touted Microsoft Windows NT 3.5 and 4.0 TCSEC C2 rating basically states that the operating system assures separation of users and data and audits user and security-related events -- capacities that are essentially standard expectations of any modern 'enterprise' operating system. That rating is essentially two (2) levels out of seven (7) from the rating for utter lack of security (D1). See the U.S. Government's Commercial Product Evaluations page and its associated Trusted Technology Assessment Program (TTAP)'s FAQ entry on TCSEC evaluation rating interpretation for more information. For now, be aware that the evaluation ratings go non-intuitively, from lowest to highest: D1, C1, C2, B1, B2, B3, A1. Microsoft's rating also only applies to very specific configurations of the Windows NT Operating System and none of its frills -- like ASP, for instance.

Still, even from the standpoint of researching evaluation and certification options, it looks like only international government evaluation (i.e. the 'Common Criteria' evaluation process) and perhaps the ICSA certification are available to any vendor who wants to be proactive and benefit from standards in the process. (Please let me know if you know better!) And I've talked with a number of hacker types who sneer at the idea that any of these certifications are worth the money and effort put into them.

At the same time, pointy-haired types eat this certification stuff up. Government contracts can be much easier to obtain if you get certified this way, and as Microsoft's spin-doctoring of its C2 TCSEC rating shows, a rating simply makes the company that holds it look more responsible all around -- or can, if your readers and customers don't know what the rating actually means.

Sure, it's possible to contract with any security auditing firm to get something or someone to say that your product is at least minimally secure, but it remains unfortunate-but-true that if you want any kind of widely recognized, standard certification, you'd better seek out some kind of formal evaluation and rating.

Do people agree or disagree, and either way, can they prove it?"
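A small illustrative aside on that rating order: because the letters run "backwards", it helps to treat the classes as a plain ordinal scale. A minimal sketch in C (the enum and names below are invented for illustration; they are not from any evaluation tool):

    #include <stdio.h>

    /* Hypothetical ordinal encoding of the TCSEC classes, lowest to highest.
     * The lettering is non-intuitive: D is minimal protection, A1 is the top. */
    enum tcsec_class { TCSEC_D, TCSEC_C1, TCSEC_C2, TCSEC_B1, TCSEC_B2, TCSEC_B3, TCSEC_A1 };

    static const char *tcsec_name[] = { "D", "C1", "C2", "B1", "B2", "B3", "A1" };

    int main(void)
    {
        enum tcsec_class nt_rating = TCSEC_C2;   /* the much-touted NT rating             */
        enum tcsec_class wanted    = TCSEC_B1;   /* mandatory access control starts here  */

        if (nt_rating < wanted)
            printf("%s sits below %s on the seven-step scale\n",
                   tcsec_name[nt_rating], tcsec_name[wanted]);
        return 0;
    }

Run as-is it simply prints that C2 sits below B1, which is the submitter's point about how little the NT rating actually claims.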



  • It seems all these certifications refer to the design of the system, and don't address implementation aspects.

    Suppose your C2-certified NT machine has a bug or backdoor that no one knew about, one that lets you overwrite system memory with anything you want if you press a magic keyboard combination. Well, all those access control lists and whatnot don't mean squat. Up, down, left, right, A, B, and poof, you can do anything you want (see the sketch following this comment).

    Of course, how can you tell an implementation doesn't have bugs or backdoors when you don't have the source?
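A minimal sketch of the kind of implementation hole being described here; the code is entirely hypothetical (not from NT or any evaluated product), but it shows how one unchecked copy can undo an otherwise sound access-control design:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical privileged service: the *design* says check_acl() decides. */
    static int check_acl(const char *user) { return strcmp(user, "admin") == 0; }

    static void handle_request(const char *user, const char *request)
    {
        char buf[64];
        int authorized = check_acl(user);

        /* Implementation bug: no bounds check.  A long enough request smashes
         * the stack (possibly clobbering 'authorized' or the return address),
         * so the evaluated design never gets a say. */
        strcpy(buf, request);

        if (authorized)
            printf("performing privileged operation for %s\n", user);
    }

    int main(void)
    {
        handle_request("guest", "harmless short request");  /* fine as designed */
        /* a request of a few hundred bytes is where it all goes wrong */
        return 0;
    }

No amount of design-level evaluation catches this if the evaluators never look at, or test, the implementation.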
  • It seems all these certifications refer to the design of the system, and don't address implementation aspects.
    Yes, implementation aspects are part of the process. At least they were when I worked on Trusted Mach [pgp.com], which we planned to have evaluated to TCSEC level B3; our code was reviewed by "trust engineers".

    IIRC, you don't have to deliver the source to the evaluators, but you have to at least have someone in-house designated to do reviews. Somewhere in the huge pile of documents there has to be a plan for ensuring that the implementation meets the design - as well as plans for testing and configuration management.

  • "Pretty Good", of course.

    It would probably be in the "B" range (access control to data, separation between public and private keys, except it doesn't keep the necessary logs), but since it's not a full system, only a single application, it doesn't get a rating. As a component it could probably help with the required authentication, but that's not enough to get a rating (it would also need to provide logs for this, probably).
  • In addition to MANDATORY authentication and auditing, a B2/A1 security rating requires that there be NO POSSIBILITY that a user can access any information or perform any operation that is not allowed by the user's security rating.

    This means that the OS must be PROVED to have no "buffer overrun" errors or any other bug by which a user can gain access to any privileges beyond what the user is entitled to.

    The final test is that even with complete source code and documentation for the entire OS, no loopholes can be found.

    This means that C++ is forbidden, as are C strings and any compiler and/or library that executes code that is on the stack or heap.

  • The thing is, you just need to get the NSA to certify you for B-7, which is just a matter of turning off FTL, BTD, and BST.

    Unless of course you want to go for full A-9 certification, in which case you'll definitely need to get a PTU and an EPI audit done on all your FDCs. Now that's a real pain in the ass. This one guy my cousin knows spent three weeks just working on his EPI's TRD, not to mention the PTU an

    Oh dear. I think too many TLAs have FRD my BRN.

    FNK
  • The NSA report of the evaluation of NT 4.0 is at http://www.radium.ncsc.mil/tpep/epl/entries/TTAP-CSC-EPL-99-001.html [ncsc.mil]

    Of course, the boxes need to be evaluated too. I can have an A1-secure OS, but if the PROM allows me to bypass it, then the box isn't secure.

  • Java doesn't 'depend' on the JVM, that's just what allows (in theory) pseudo-binaries compiled on one OS to run on another. Java can be compiled to native binaries just like any other language, and it is more secure in the sense that it's much harder to overwrite important memory, get pointers wrong etc.
  • I've been talking to the BFI [www.bfi.de], the German govt's IT security office, about certifying components of our company's infrastructure as "secure", whatever that means.

    Basically, for us it's a selling point both to customers and investors, that an office that's generally pretty respected has decided that we've taken all the common-sense measures within our power (no, you'll never be able to defend your network against H@x0r1ng by the aliens from Mars) to protect ourselves from intrusion. "Technical due diligence" is the key term in this situation.

    Our problem was to try and certify portions of our entire network (working together) as being "secure", as opposed to just single products. I've found that so far, the BFI guys were pretty helpful and technically clued. What they will do for you is to let you define "protection profiles" ("Schutzprofile") based on the Common Criteria for security, for parts of your infrastructure. They then check whether (a) your criteria and profiles make sense, and (b) whether they comply with their idea of CC.

    One of the really cool parts of this is that assuming they decide that you're kosher security-wise, you can decide to release the profiles you developed for general use, and they will then certify other companies against those same standards. Likewise, you can just get some pre-defined off-the-shelf requirements that sound usable, and have yourself judged by them...

  • No, but a paradox. Logic states:

    A) if you follow it, you must reject it.

    B) if you reject it, you are following it.

    Of course, the path to follow is: Screw logic, and ignore it.

    --

  • I've worked on high-ITSEC software... I know of only two products that have achieved an ITSEC E6 rating recently - MULTOS (a smartcard operating system) and the Mondex Purse (an electronic cash application that runs on MULTOS). See http://www.multos.com/ for some info. The only programming language that comes anywhere near meeting the ITSEC requirements is SPARK, which is well known in the safety-critical community here in Europe but actually has its roots in security research. See http://www.sparkada.com/ - Rod Chapman
  • And the NT page reads like an MS press-release: 'NT was designed from the ground up to be secure blah blah robust blah blah scalable blah blah e-commerce blah blah enterprise blah blah buzzword blah'. And lest you think I'm just slagging MS off, all the other server vendors are just as bad.
  • There are reasons why you need an administrative ability to access a file that a user doesn't want other ordinary users to see. Suppose, for instance, that an obnoxious user greatly exceeds his (soft) disk quota. An administrator for the system needs the ability to go in and archive and delete some files that are using up the common space. In general, users can be selfish and obnoxious, and somebody has to have the right to override their stupid decisions that can hurt other users. You need some kind of administrative right to step in and do that.

    The problem is not in letting an administrator play god. The system needs someone with godlike powers to do that stuff, and it's very useful to have programs that can proxy for the administrator and do tasks that ordinary users shouldn't be allowed to do, like reading the encrypted form of user passwords. The problem with Unix and the like is that there's no segmentation of those powers. You can't easily delegate to a program the right to look at /etc/shadow and nothing else. The result is that you have a lot of daemons running with full administrative privilege when they need only limited privilege, and a failure in any one of those programs can give an attacker full privilege. That means you need OpenBSD levels of auditing and care, because any single failure can result in catastrophe. Unix needs to add some kind of compartmentalization of administrative privilege in order to have real security. That way, even if you miss something, an attacker won't have absolutely free rein on your box.
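For what the parent calls compartmentalization, Linux's (still incomplete) capability support already lets a privileged helper shed most of root's power. A rough sketch, assuming the libcap library on a 2.2-or-later kernel; the choice of retained capability and the scenario are illustrative only:

    /* gcc -o shadow-helper shadow-helper.c -lcap */
    #include <sys/capability.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Hypothetical helper started as root: keep only the ability to read
         * files regardless of DAC permissions (enough for /etc/shadow) and
         * drop everything else. */
        cap_value_t keep[] = { CAP_DAC_READ_SEARCH };
        cap_t caps = cap_init();                 /* starts with all flags cleared */

        if (caps == NULL ||
            cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET) == -1 ||
            cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET) == -1 ||
            cap_set_proc(caps) == -1) {
            perror("capability drop failed");
            exit(1);
        }
        cap_free(caps);

        /* From here on, a hole in this program yields CAP_DAC_READ_SEARCH,
         * not full root: it can read /etc/shadow, but it cannot rewrite it,
         * bind privileged ports, load modules, and so on. */
        puts("running with reduced privilege");
        return 0;
    }

It is nowhere near the full delegation model the parent wants, but it shows the direction: a single failure no longer hands out everything.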

  • Internet security is not an oxymoron because 128-bit SSL can, with much effort, be broken; the easiest way to compromise a system is rarely to attack the encryption engine. There is always a much weaker link.

    The reason internet security is an oxymoron is the incomprehensible complexity of the internet itself. There are so many interconnected systems, each administered in its own fashion with a boundless variety of configurations. On top of that, everybody in the world has the opportunity to attack every system on the internet.

    How the hell do you completely secure one system on the internet against all known and unknown types of attacks that could be launched by anyone from anywhere?

    For this reason, it is impossible to honestly guarantee a system on the internet as secure.
  • So, if I read this rightly, NT has been C2 certified with Service Pack 6a and special C2 updates, for less than a year (though MS has been claiming certification forever...), and while you're allowed to have a network card, your network configuration is TCP/IP only, no AppleTalk, NetBEUI, or IPX, and no network services of any kind. (That is, I believe the list of 'specifically excluded' services is a superset of the standard install services.)
    Also, what data you transmit over the net is not evaluated. They're just saying you can have a network card, and if no ports are open you're still safe.
    Which I guess is better than the (supposed) no-network configuration of NT3.51. If that's true.
    I think I'm going to print this out and take it home. Thanks. ;)


    --Parity
  • A suggestion for a secure system, etc.: use smartcards containing a PKI certificate and a password/PIN/biometrics, etc.; implement ACLs using LDAP over SSL based on attributes of the certificate, and restrict access on that basis. Forget the idea of a UID having root permissions and make it part of a normal user's permissions (well, normal to anyone who can't see the ACL db). Make the ACL db fully encrypted and secured; only allow certain authenticated users to modify only the parts they need to. Make it so no single user could fully compromise the system.
  • Linux capabilities aren't nearly sufficient yet, but they could eventually become that complete. The main problem is that UNIX is just not designed for that sort of thing. Even just adding ACLs to a filesystem makes things not work the way people and existing programs expect. Eventually, however, the issues may get worked out, and there'd be a real capabilities system.

    Even breaking up root privileges somewhat would be useful, though; make a stack-smashing attack only give you the little privilege that the program you found a hole in had. Beyond that, you could also restrict some usual user privileges (exec comes to mind), to eliminate abilities that daemons don't really need. I.e., make it so bind can't call exec or do anything privileged except open low-numbered sockets.

    On the other hand, this adds a bunch more stuff that has to be configured correctly, as well as requiring some new way of setting all this.
  • Heh. Java needs a JVM to run. If you write the JVM in Java, that JVM would need a JVM to run on. Ie, you'd have a JVM written in Java running over a JVM written in C. :-)

    Heh, very practical. :-)

  • POI: NT has a C2 rating *including* networking.

    You will never get A1; that requires a formal proof, and doing that on an existing code base is impossible. Doing anything higher than B1 is not worth the effort.

    Most UNICES are C2 feature complete (except Linux). A few are sold in B1 versions (ie Trusted Irix)

    C2 and B1 are doable. All that Linux is lacking for C2 is an audit trail. (and of course the line-by-line analysis...:-) But we can separate feature completeness from certification.

    B1 requires MAC (as you stated), but to be useful it requires a MAC aware X server, commonly called CMW, that too is doable, it will just take longer, since you need some standard way of passing labels around the network.

    While I'm here, can we please separate TRUST from SECURITY. There is this crypto package that uses 1-million-bit keys; is it SECURE? Yes. Can I TRUST it? No.

    richard--SGI Trust Team, but not speaking for them
  • This just illustrates that the Government attitude that brought you the $300 hammer is still working well. Hey, let's continue to spend lots and lots of money on using an inefficient operating system rather than bringing a free one up to scratch. At least if it goes wrong it's someone else's problem, right?

    The day Microsoft makes something that doesn't suck will be the day they start making vacuum cleaners.

    Vik :v)
  • Never trust anyone over 90000

    Heh, I realize that this is way off topic, but the immortal John Carmack [slashdot.org] is UID 101025...
  • Well, considering how long it has taken distributed.net to crack RC5-64, I think 128 bits is pretty safe. http://www.distributed.net/rc5/
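A back-of-the-envelope sketch of why, assuming the cipher itself holds and brute force is the only attack: each extra key bit doubles the search, so 128-bit keys are 2^64 times harder than the 64-bit effort. The five-year figure below is just an illustrative stand-in for however long RC5-64 ends up taking:

    /* gcc -o keyspace keyspace.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double years_for_64_bits = 5.0;              /* illustrative assumption */
        double factor = pow(2.0, 128 - 64);          /* ~1.8e19                 */
        printf("128-bit search at the same rate: roughly %.1e years\n",
               years_for_64_bits * factor);          /* ~9e19 years             */
        return 0;
    }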
  • There is starting to be support for capabilities in Linux (a major requirement for B and higher stuff). Of course, this is a major break from the UNIX model, where root has all the capabilities, other people may be granted root-like power in restricted code, and everything else is done through the file system with simple ACLs (i.e. groups).

    It may be possible to make a B-level distribution, assuming that physical access is controlled, and programs are set up very carefully. But you probably wouldn't find it terribly useful, since nobody could become root, as that would seriously break the security model. You'd basically have to deal with not having a user-level capabilities system by lacking abilities.

    Of course, you could probably get C2 by turning off all the services you don't actually want, removing the setuid bit from programs that shouldn't have it, restricting access to some other programs, and replacing the rest of the setuid programs with versions which are simple enough to verify their security.

    Generally, many of these ratings aren't very helpful unless you're a government, because at the higher levels they're mostly concerned with making sure that your secret data can't go to untrusted places. If you're big enough that you actually talk to trusted places, this is helpful, but for most places it means the computer is unusable.

    For example... the machine can't let you cut and paste from a secret document into anything like, say, a web browser or ssh window. It can't let you accomplish this in several steps, either. It quickly becomes impossible to have anything that can send information out to anything but verifiably secure and trusted sites. Not only do your directory listings not include secret files if you're not a trusted user, they don't even if you are, if you can copy out to something untrusted. It's actually easier on the user to have a separate machine for secret data, and it's all silly unless you're also searching your employees for secret files at the door. (The label check this all boils down to is sketched after this comment.)

    Of course, in the business world, you generally don't deal with secret data on this level. Security is aimed at preventing access from users who shouldn't get it, not preventing spies from getting information out. B3 or C2 is about where it's worth getting, and beyond that what you're interested in is an entirely different scale of security.
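The rules being described boil down to a label-dominance check of the Bell-LaPadula sort: a subject may read an object only if the subject's label dominates the object's, and may write only the other way around, which is exactly why the paste-into-a-browser path gets shut off. A toy sketch; the levels and category bits are invented for illustration:

    #include <stdio.h>

    /* Toy mandatory-access-control label: a hierarchical level plus a
     * bitmask of compartments.  Purely illustrative. */
    struct label {
        int          level;        /* 0 = unclassified .. 3 = top secret */
        unsigned int categories;   /* e.g. bit 0 = CRYPTO, bit 1 = NATO  */
    };

    /* a dominates b: level at least as high, and a superset of b's categories. */
    static int dominates(struct label a, struct label b)
    {
        return a.level >= b.level && (b.categories & ~a.categories) == 0;
    }

    int main(void)
    {
        struct label secret_doc  = { 2, 0x1 };  /* secret, CRYPTO compartment */
        struct label web_browser = { 0, 0x0 };  /* unclassified session       */

        /* "No read up" / "no write down": data read at 'secret' may not flow
         * to an unclassified sink, in one step or several. */
        if (!dominates(web_browser, secret_doc))
            puts("paste from secret document into browser: denied");
        return 0;
    }

The same check, applied by the kernel at every operation rather than once at login, is what makes these systems both genuinely secure and genuinely painful to use.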
  • heh . . .

    Well, it was, of course, "Never trust anyone over 30" but then I saw my own UID and realized that this would have been a problem (though it's certainly good advice).

    Oh well . . . I wish I could still just post AC like I did before I got an account . . . but it's impossible with the noise and current moderation system. Add to this that most posters start out 2 points above an AC, and it's impossible to be heard unless you're really early.

    --

  • The problem with the Orange Book rating system is that it is old, decrepit, and irrelevant. For instance, to do C-level testing you were supposed to have assembly-language coding experience. How many people reading this note have much assembly experience?

    Also, use of ACLs or labels in today's world is pretty much irrelevant to the consumer. They're a nice add on for servers, but that's the best you could say.

    A seal of approval is only as useful as its meaning. A good seal of approval will have some abstract meaning like "hard to break in", and then concrete measures that define that abstract term. Over time, the concrete measures should be updated (i.e., one could lose one's rating).

    For instance, if I were to design a security stamp of approval meaning "reasonably secure", my stamp would mean the following:

    1. Source openly available
    Without source only the hackers can find the problems.

    1a. The system must be documented. None of this hard to write/hard to read crap.

    2. The code must have been reviewed either by a person or a program to show that common failures are avoided (such as a failure to check bounds). In short, no use of gets(), and it must pass lint (see the sketch after this list). Java has a clear advantage here over both C and perl, by the way. I wouldn't let C touch a system that really needed to be secure. Useful as pointers are, blown pointers and buffers happen (in C).

    3. BY DEFAULT the system must not allow administrative access via unencrypted or unauthenticated means: no ::s in the password field with a real shell allowed; telnet, rlogin, and most of inetd disabled. Similarly, X must not allow remote screens by default.

    4. BY DEFAULT, the only network protocol MUST be IP (no IPX, AppleTalk, DECnet, etc.), and daemons MUST be disabled or inaccessible to the outside (including routed, dhcpd, sendmail, and X). NO file system protocols, thank you very much!

    5. Any sort of applet technology requires authentication (i.e., signatures) and authorization.

    Eh.. just some musings...
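For item 2, a tiny hypothetical example of the sort of thing such a review rejects on sight, shown only to contrast the two calls:

    #include <stdio.h>

    int main(void)
    {
        char name[32];

        /* Rejected: gets() has no idea how big 'name' is, so any long input
         * line overruns the buffer.  lint and modern compilers flag it. */
        /* gets(name); */

        /* Accepted: fgets() is told the buffer size and stops there. */
        if (fgets(name, sizeof name, stdin) != NULL)
            printf("hello, %s", name);
        return 0;
    }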
  • I've often wondered why so many operating systems have a superuser account that overrides all security controls. If an ordinary user has a file that has its permissions set to read/write by user only (Unix mode 0600), why should the operating system allow any other user access to that file? Windows NT started out with some good ideas but someone screwed it up by adding the take ownership feature. Someone might ask, how do you backup the files on the system? One solution would be to give special access rights to a trusted and audited backup/restore program. I would also suggest that the backup program encrypt all data stored on tape with its own private key.
  • I couldn't agree more. I run a Certificate Authority for hire. We strive to use solutions that are not only certified, but certified to work together to some level of assurance. The next step for us is to have our operation of these pieces validated. The Common Criteria has plenty of Protection Profiles describing requirements for the components, but not much on operational requirements. BTW, this is mandated in the German Digital Signature Law, which is a good model for what a legally binding CA service needs.
  • Isn't that an oxymoron? :) (Fire Extinguisher: I run an Indy at home and I like SGI and all their tech, not to mention how generous they've been to the whole Free Software movement).
  • I had dealings with a place that wanted to install Virtual Vault.

    They wanted a secure web-server, running their in-house written CGI code. The PHBs decided that as long as the underlying OS was certified as secure, they would have no security problems! Yes, people are really that naive!

    Virtual Vault was eventually dropped when it was discovered that their systems-management software (which used the extremely insecure SNMP) wouldn't run on the proposed system, and they needed everything to report back to one central super-console.

  • It should be obvious that certifying software as being secure must include certifying that that software does not contain any bugs. For anyone who thinks that this is easy, remember that Donald Knuth once said [stanford.edu]... ``Beware of bugs in the above code; I have only proved it correct, not tried it.'' It may be possible to assure oneself that small sections of code do one thing only, but for complete systems it is impossible.

    Then, of course, there's the whole question of certifying that the way in which the software (or hardware, for that matter) is used is certifiably secure. Again, nobody can guarantee that lapses aren't possible.

    Bruce Schneier has been saying recently that he's come to the conclusion that (paraphrasing) certification isn't the answer to computer security; if you want to feel secure (and protect your business in the case that there are lapses), then get insurance instead. Manage your risks, in other words, rather than placing blind trust in a particular technology or a paper certificate.

  • I have had the honor of being the first test subject of the Common Criteria here in Canada. CSE (the Communications Security Establishment, or just "the Establishment" to the people playing the Game...) was responsible for testing out the Common Criteria, as proposed, over 4 years ago.

    We knew that we had to get our product (a transparent-proxy firewall) "certified" if we wanted to sell it to the Canadian government.

    Enter CSE.

    They told us that the Orange Book was being phased out and that we could be the first product to be evaluated under the new "worldwide" Common Criteria. We accepted. *I* was the one who was assigned to do it.

    Since this was the first product to undergo the Common Criteria "checklist", I could debate any point of the criteria if I didn't agree with it (which I did, often...).

    Of course, the dice were loaded: there were 7 CSE people, and I was alone. I often had to debate my points over and over to different people until the head techie (the Brain) agreed and put a note into the Criteria. I assume that they then reported the proposed change to the NSA (or "Headquarters", as they called it).

    Our product was evaluated under the EAL-1 checklist; that's the lowest, but it was the only real one achievable at the time.

    The next 7 months were rather tedious: I would give them all the product documentation and white papers, and they would look up each function of our product in their checklist, such as:

    • Component prevents communications between targets if critical error condition occurs: YES
    • Component outputs error message if critical error condition occurs: NO
    For EAL-1 to be attributed, we would need the YES in items 1.1,3.4,7.6,101.4.6, and so on...

    They would give this back to me, and I would have to check each and every point. If I didn't agree with a point, I had to document the product even more (more white papers, more changes to the user guide, admin guide, etc...). We could not change the code... Oh, and I am not a tech writer... but boy, did I ever have to become one then...

    Since each and every log message has to be documented and explained in the product documentation for a product to get EAL-1, I almost quit the day they told me that I had to document DEBUG log messages! Since we had rather original coders in our midst (hey San!), I could not conceive of having to explain debug messages like:

    "Jesus, I need a beer in seconds." or "Oh no, don't touch me there."

    Luckily, I was able to debate that one requirement out. Thank G0d.

    The bottom line is that this forced us to document our product way the hell more than what we had at the time, and I think that our users benefitted greatly. CSE ended up with a better, more logical Criteria, and we were then able to sell to the Canadian Government.

    Was it worth it ? Well, I could have used the sleep... But I think that it made our product and documentation much better, and it opened a market that we wanted, and then realized that we were the only player in it. That meant big bucks; way more than we originally thought. So, yes, it was worth it.

    Would it still be worth it? Hell yes, that's why I want the Black Hole Project [sourceforge.net] to get certified under the Common Criteria as soon as possible. (Disclaimer: I have left the company that did the product originally, and I am now on my own)

  • When you compile Java into native binaries, you _still_ append the JVM runtime to those binaries. What do you think does the garbage collection in Java? Where do you think those routines go when you generate the native binaries?

    You are not getting rid of the problem, you are just hiding it. All these "ugly things" prone to errors are simply _hidden_ inside the JVM. But they are _still_ needed, and _still_ used. And written in C. :-)
  • As rightly said, software accreditation has been around for a while. The Orange Book got there first. For each level the tests were mandatory, but there was no proof that the items being tested linked together to protect anything.

    The Europeans got in with a number of methods, consolidating on ITSEC, where you had to state what the security objective was and how you met it (the claim, or Target of Evaluation), and they also tested strength of mechanism. The Canadians were next and published a method. The move to the Common Criteria is an attempt to make an evaluation/accreditation in one country valid in others (handy for government and military procurement).

    The scales are not linear, and neither are the testing methods. E6 (EAL7) requires formal mathematical proof of the claim being met by the system, whilst E1 (EAL2) needs documentation and flowcharts. Unfortunately that just makes it harder to understand for the non-security propeller head.

    Bottom line is that there aren't many properly established ways of doing this work. Only governments and the military are prepared to spend the kind of money needed to decide how to do this, who can do it, what testing methods give reliable results from one evaluation to another, and how to deal with things that may be a matter of opinion (just how strong is your algorithm? does the way you are protecting the Trusted Computing Base really work?) and still be reliable.

    Industry has, to a large extent, avoided any kind of external review of the quality of its work (which is what evaluation comes down to). Should we be surprised? Maybe not. In the first two years of operation the UK ITSEC board stated that they had never had a product in for evaluation that was not found to have at least two major flaws present on initial review. Problem is that going forward we are betting our anatomies increasingly upon all this IT stuff working, when, from a security standpoint, the foundations are shaky at best. (No matter how well the manufacturer does, the customer has the ability to defeat security - see your local firewall.)

    There have been frequent calls from the security community to improve the situation, but bear in mind that until someone loses their skin there is no motivating force. In the meantime, military evaluations will continue (Orange Book, ITSEC, Common Criteria), but since industry never takes any notice of them when doing its own purchasing they will remain a specialist backwater. Commercial evaluations of crypto algorithm output are useful, but they don't tell you whether your keys are protected, which might be rather more useful!

    Keep on trying. If people don't complain that a product is not evaluated, it never will be. Once you have vendors in the circuit you can improve the quality, but if they never get in then there's not a lot to achieve.
  • So, in short: It's the Humans Stupid.
  • My C=64 - no way to get in - no services, no monitor, no drives, no power supply, no function. After I case it in cement and drop it into the abyss, it should be pretty safe indeed...
    --
  • ...is make sure that VBScript is disabled.
    :)


    ---
  • by v4mpyr ( 185039 ) on Monday September 25, 2000 @01:09PM (#755334)
    You can find all of the rainbow books here [ncsc.mil] and here [fas.org]. They're worth a look.

    --
  • by Anonymous Coward on Monday September 25, 2000 @01:13PM (#755335)
    So far, I've proven that addition on my Turing machine is secure, provided the intruder doesn't have physical access to the tape.

    I'm still working on multiplication...
  • Since any security measure can be defeated, there's no point in instituting such a thing?

    Now that you're not rushing to get that first post, would you like to reconsider your position?
  • the only secure box is an unplugged one, put in a steel box and thrown at the bottom of the sea
  • by devphil ( 51341 ) on Monday September 25, 2000 @01:16PM (#755338) Homepage

    Sun's Trusted Solaris (I'll let somebody else get a few Informative points by posting a link; I don't have it handy) lets you do some useful things in this respect. I don't recall their rating offhand; somewhere in the midrange.

    You can do some really cool things besides impress your boss with the rating, too. Like make individual directories and files simply not be there when certain users do an ls(1). I don't mean "permission denied" kind of things; I mean the kernel itself just skips over that file and doesn't even report its existence.

    It's great for situations when information at different classification levels (Top Secret, Secret, Confidential, Stuff That Used To Be Secret Before You Put It On The Damn IIS Server And Some Eleven-Year-Old Kid Got It, etc) all need to live on the same machine.

  • by SheldonYoung ( 25077 ) on Monday September 25, 2000 @01:16PM (#755339)
    To truly certify software as secure is a very huge task. It assumes you understand and have validated every single state the machine can be in, which is practically infinite. Even the state of the power button matters.

    I really hate the false sense of (ahem) security that certification gives. They're trying to assure that the software is secure, which is an almost impossible task for any non-trivial system. Anybody who says a system is secure is lying to themselves and others.

    With things this complex, security can only be approximated.

    Of course, you can certify a design as secure with much less effort but it's the implementation that matters the most.

  • wasting too much time
    forgot Option Explicit
    time for bugs galore

    J
  • by \\x/hite \\/ampire ( 185046 ) on Monday September 25, 2000 @01:19PM (#755341)

    Network Security Library [secinf.net]

    Common Vulnerabilities and Exposures [mitre.org]

    SecurityFocus [securityfocus.com]

    You can find everything you want to know (and more) at these sites.

  • what is the rating of PGP?
    how does it affect your decision to use or not to use it?
  • The only two operating system/hardware combos I've ever seen with an A1 rating under this [yes, A1 is both hardware and software security] are Trusted Xenix [where did this one go?] and Honeywell Multics [this one was flushed down the shitter in France.]
  • What's with the steel box?
  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Monday September 25, 2000 @02:55PM (#755345)
    1. THERE IS NO SUCH THING AS SECURE SOFTWARE.

    Like Yogi Berra said, "In theory, there's no difference between theory and practice. In practice, there is." No matter how stringent the testing, no matter how exacting the software development (up to and including provably-correct software), software cannot be secured. In theory, following a provably-correct software design (such as is possible in some Ada subsets) allows you to design software which is provably correct... but that's theoretical, not practical.

    Sometimes, the buggiest thing in a system is a feature which is working exactly as it's designed to do. Provably-correct software is predicated on there being a correct assessment of what the software needs to do and needs to not do, and so far, nobody's come up with a way to do provably-correct brainstorming. :)

    Auditing cannot, repeat, cannot make a piece of software secure. All it does is find errors, not all errors, and maybe even not all the major errors.

    2. TRUSTED SYSTEMS ARE JUST THAT.

    Trust. It's another way of saying "I have faith in you." Faith is the antithesis of proof. For years my Linux box was a haX0r's dream--I didn't bother to turn off services, my root password was fairly easy to guess, etc. That doesn't sound like a trusted box, does it?

    Wrongo. It was very trusted, because it wasn't connected to any network and it was in my bedroom. I trusted it a lot--I had faith that it wasn't going to be compromised.

    Whenever you see someone advertising a "trusted system", ask yourself: who trusts it? Why do they trust it? Should I trust it? "Trusted systems" are sometimes a lot of snake-oil; people who don't know beans about security buy "Trusted Solaris" because it says "Trusted", even though their incompetency as a UNIX sysadmin makes the box vulnerable.

    (Note: I have a lot of respect for Trusted Solaris, even more than I do for OpenBSD. I'm just making the point that the word "Trusted" doesn't mean much.)

    3. THE MOST IMPORTANT ELEMENTS IN SECURITY ARE THE USERS AND THE SYSADMIN, IN THAT ORDER.

    Most people will turn this around, claiming that the sysadmin is more important security-wise. There's merit to that (after all, root == God), but I reverse it. There's only one sysadmin, and an attacker more or less has to take his chances that the sysadmin is incompetent enough to fall for (a crack, a social engineering attack, a DDoS, etc.). But if there are hundreds of users, it's certain that at least one of them is going to be a complete fargin' idiot, which means attacks which involve users are more effective than those which go straight for root.

    This is an important point. No matter how secure the box, no matter how trusted it is, the weakest link is the users. When companies get on a security kick, they tend to spend lots of money on software and very little on educating their users. This has always struck me as backwards.

    4. USERS DON'T WANT SECURE SYSTEMS.

    Have you ever tried to use Trusted Solaris, or OpenBSD in a particularly bondage-and-discipline configuration? Sure, they're locked up tight against intrusion, but this comes at a steep price in usability. People want computers to be easy to use more than they want them to be secure. If you make computers too secure, your own legitimate users will circumvent security. I regularly see passwords on Post-It notes stuck to monitors--not just in Corporate America, but in government offices which routinely handle extremely sensitive data.

    5. FOR ALL THIS, SECURITY AUDITS ARE A GOOD IDEA.

    Security audits do two things: first, they tend to ensure that software works the way it ought, and second, they tend to ensure that software doesn't work the way it oughtn't. The potential problem that's spotted and corrected due to a security audit may never have resulted in an exploit, but it may well have resulted in a Blue Screen of Death at some point down the line.

    Security audits don't just make systems more secure; done properly, they make systems more reliable, which in turn makes them more usable.
  • Seriously, how many times has there been a release of an all-new certification method, security protocol, or whatever, when all of a sudden it's been broken by hackers in a matter of months, sometimes days, sometimes hours?

    These software designers really need to take their heads out from between their butt cheeks and start thinking of something decently secure. Last I heard, breaking 128-bit SSL wasn't so much of a daunting task.

  • I invented the airgap [airgap.net] and I could not get the time of day from Rockwell, Lockheed, or anyone without Common Criteria EAL4 certification. FYI, that costs about a million dollars US. Whee!
  • Never trust anyone over 9576.
  • When they said that information wants to be free, they meant free as in speech, not free as in beer.

    Really? I haven't seen it in the original context, but the quote is:

    Information wants to be free, because it has become so cheap to distribute, copy and recombine. It wants to be expensive because it can be immeasurably valuable to the recipient. -- Stewart Brand

    It sure sounds like Brand is talking about free beer. Unless there's such a thing as expensive speech.

  • If someone audits your security, and declares it to be secure, then it is secure until someone breaks in.

    The key is to get more than one group to agree that it's secure. It might be expensive, but more minds are better than one. Redundancy and stuff.

    And when someone does break in, tough shit, make sure you have plenty of logging taking place. If you don't want it secure, remove it from the internet.
    --
    Peace,
    Lord Omlette
    ICQ# 77863057
  • by Animats ( 122034 ) on Monday September 25, 2000 @03:25PM (#755351) Homepage
    • Windows NT just barely makes the lowest evaluation level. And that's after years of trying. Plus you need service pack 6A with a fix beyond that; out of the box, NT flunks. Some UNIX variants have done much better. See the Evaluated Products List. [ncsc.mil]
    • Anything below a B level isn't secure. B-level systems have mandatory security, which means users can't do insecure things even if they want to. Systems with mandatory security are a pain to use, but the security is real.
    • Security policy is a mess. UNIX doesn't have a security policy, just lots of permission bits. Systems with mandatory security have a security policy, but it's very restrictive. There have been some efforts to bolt mandatory security onto Linux, but all the administration tools need to be modified to live within the tighter rules (no more root, for example), so not much has happened in that direction.
    • What's needed is a liveable security policy that's really secure if enforced, a set of tools that can live within it, and a small microkernel that enforces the rules. Think about issues like how package install can work if it isn't trusted.
  • I noticed win2K wasn't on the list. Hmmm....
  • by pb ( 1020 )
    You're completely right, my man.

    Although I'll stick to "never trust anyone over 2^10, for obvious reasons"... :)

    But talk about a stupid issue; since we were both around before we had UIDs, it doesn't matter at all. Although I'm glad I don't have to type in my contact info every time.

    I remember when one new user thought that "BoredAtWork" was an Anonymous Coward-type account (i.e. a generic name assigned by the system if you don't give it a name) because he posted so #@*& much back then. But he's UID #36, so of course he doesn't post much now. His posts are still excellent, though.

    And yes, it's the old users who get fed up with the system and turn to trolling, because the system encourages it. But don't even get me started on that one...
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Odd, but I don't see Solaris 7 with PitBull listed as B1. Check http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html#B1 [ncsc.mil] for yourself. Looks like pure marketing weaseldom.
  • First off, a system doesn't "just barely" make a level. Either it does (as Windows NT has done for multiple releases) or it doesn't. Being certified is binary. Either you are or you aren't.

    Basically, though, Windows NT is the only OS available for PC hardware that is even listed.

    • Linux - No trusted systems at any level
    • OS/2 - No trusted systems at any level
    • BeOS - No trusted systems at any level
    • Solaris - No trusted systems at any level
    • FreeBSD - No trusted systems at any level
    • QNX - No trusted systems at any level
    • MacOS - No trusted systems at any level
    • Windows NT - Two trusted systems at level C2
    Which means that Windows NT's C2 still puts it well above any PC OS.

    If you can live with C2, you can configure a Windows NT system. If not, your only trusted system choices are very expensive proprietary hardware/software combinations.

  • The company I work for, CygnaCom Solutions [cygnacom.com], is in the business of performing security-related evaluations. We perform different evaluations, including:
    • TCSEC ("Orange Book"), a somewhat outdated U.S. Gov't standard for evaluating trusted systems to see how they comply with requirements along the lines of 4 general areas: security policy, accountability, assurance, and documentation.
    • Common Criteria, an internationally recognized grammar for stating security functionality and assurance requirements that is rapidly taking the TCSEC's place.
    • FIPS 140-1 and FIPS 140-2, a U.S. Gov't standard for testing cryptomodules (hardware and software) for a level of assurance.
    We could probably arrange some sort of more detailed discussion of what these standards are, how the testing is done, and what good it does, if there is sufficient interest.
  • The folks over at secure computing http://www.securecomputing.com are under contract to develop a "secure" version of Linux for government use. The boys in Ft. Meade are funding this effort, so it's pretty safe to assume that it will have to meet the common criteria. The big question is whether or not the results will be GPL'ed.
  • I want ACLs. Already there's a stack of Linux utilities that try to implement their own ACL system because the Unix rwx permissions are so limited. I also want to be able to modify permissions for ports simply by changing permissions for, say, /proc/ports/23 [or something similar].

    A basic tenet of Linux [and all Unixes] should be the ability to rename UID 0 [the account typically called `root']. That way potential crackers have to realize that the pretend `root' account you created doesn't actually give them the privileges they expect, and are forced to find the username of the UID 0 account.

    Unfortunately, this seems to break many things on Linux and many Unixes.
  • I am a security professional who audits government as well as commercial software and systems. The thing about these audits is that they rely almost solely on the person actually doing the audit (or sometimes the team performing the audit). The bad thing here is experience. While you may have peer reviews of your audit report, you might not have the experience to search for or detect certain vulnerabilities; this is where most of the problems lie with certifications: they are only as good as the person (or team) doing the audit. If that person or team is not familiar with certain pitfalls, you end up with a flawed certification. Certainly the best thing to do during development would be to put up a production model and let people hack away. That is the only way to determine possible holes in the system. I would never recommend putting anything with sensitive data into production until it is certain that most individuals cannot defeat your security. But of course there is usually little time for this in today's plans: too much money to lose if it doesn't get shipped immediately. I see no end to the trend of today's security any time in the next 20 years.

    --Too many holes so little time(You sick bastard!!...I was talking about software!)
  • One thing to remember about formal certifications: they are only good for the exact version that was evaluated. If you make any bug fix, even a minor one, you have to do another *expensive* evaluation.

    I've heard estimates for an initial ITSEC level 7 or FIPS level 4 certification upwards of a million US dollars. Evaluating a changed version runs in the tens to hundreds of thousands of US dollars. Sure, some of this will be cheaper if the design and code changes are open source, but the evaluation labs are not cheap.

    This is the primary reason that formally certified products (both software and hardware based) will remain pretty scarce. In the real world, you tend to get systems where some parts have been formally evaluated (for instance - the boot code for a security device). Other parts have been evaluated from time to time (the application program on the security device).

    The other thing that happens is that the formally certified part gets "stale". For really expensive certifications (like a level A1 OS), it only gets done once, and customers are forever stuck with that version. How would you like to use a 15-year-old version of an operating system? That is part of the price you pay for the highest level of security.

    Another important consideration is that true security depends upon the whole environment. I've put Hardware-Security-Module products through the German central bank certification process (ZKA). Just because customer X's environment is certified with the product, does not mean customer Y will automatically get a certification. True, the certification will probably be somewhat shorter (and less costly) if the product has been part of another certified environment.

  • To some extent, it's important to certify qualified software as secure. But security largely consists of guarding against that one exploit that hasn't been tried yet, and it's nearly impossible to prove that no such exploit exists. It's important for operating systems to be certified, but it's more important for people to understand what security is (a process) and what it is not (a product).
  • While we're naming others, DEC's OpenVMS Vax was also certified C2 (as well as many others). SGI's Trusted Irix was rated B1 [ncsc.mil].

    --

  • Basically, any OS can be made secure or insecure. It all depends on the skills of the system administrator.

    For example, a Windows NT sitting behind a firewall and doing nothing is pretty secure :)

    Default installations of Linux distributions suck from the security aspect, as every distromaker tries to include something for everybody and knowing that the average administrator is a brain-dead moron used to WinNT point-and-click UI, everything is open by default. This is a sharp contrast to OpenBSD, where only really necessary services are running by default and you have to add others.

    Linux can be made pretty secure. Most of my boxes have never had any problems, of course I took time to secure them. Default installations suck, you have to do some work yourself. No OS certification can fix your mistakes.
  • PGP/GPG don't have one. These military/governmental classifications apply to a specific configuration of an OS on a specific machine.
  • Wasn't ICSA the same bunch of jokers who called themselves NCSA (National Computer Security Association?) back when everyone used Mosaic, just so people would think they were related to the supercomputer people (aka the real NCSA)?

    And now a new breed of jokers wants to sell me firewalls that are security certified by people who willfully lied about their credentials. That's a great marketing plus for a security product.
  • I think you miss the point, in thinking anybody is talking about 100%, atomic-proof secure systems here. They ask for a "usable" guide to which system to use for which task. This is, however, quite an effort, but it's an effort we have to look into. A working rating system for how well software defends against certain kinds of attacks will help prevent stupid decisions from even more stupid suits. IMHO that's what really is the point :-)

    btw.: And yes, security can only be approximated, but you have to make damn sure your alpha error is well below 0.05 if you talk about some real application.

    sorry on the spelling ... my brain got time damaged today.
  • You can bring Solaris 7 up to B1 trusted with a little package called PitBull [argusrevolution.com]. From what I've heard it's doing pretty well for itself.

    --
  • by rgmoore ( 133276 ) <glandauer@charter.net> on Monday September 25, 2000 @01:34PM (#755368) Homepage

    I personally think that there is at least some value in getting your software audited. OpenBSD is clearly a good test case for the value of internal audits in producing secure code. OTOH, internal audits are never going to convince some people of the quality of security the way external audits do, because of the temptation to cheat.

    I think that the government standards for secure computing bases are very valuable in giving you good ideas of what to do. It's clearly the result of careful thought by some very intelligent people. I think that they're missing out on an intermediate security level between their C and B levels that includes horizontal mandatory access controls (basically capabilities) without security levels.

    That being said, I think that all flavors of Unix are always going to be inherently insecure as long as they maintain their "root is god" attitude. As it is there's no room for error. One security hole is enough to give an attacker complete control over your box, and OpenBSD levels of paranoia and auditing are necessary in order to achieve security against anything but a casual attacker. Unix isn't going to be reasonably secure until it implements some kind of mandatory controls, either capabilities or a full class B access control with security levels.

  • by scrytch ( 9198 ) <chuck@myrealbox.com> on Monday September 25, 2000 @04:31PM (#755369)
    > There is starting to be support for capabilities in Linux

    Linux's so-called "capabilities" are a joke. They are nothing of the sort, they are just more acl bits tacked onto operations. You want real capabilities, try something like EROS [eros-os.org]. A true capability manifests as a visibility thing -- you can't call a forbidden operation if you can't even get a handle on it. A true capabilities system is a "thought police" model. You can't perform a forbidden operation because you just can't have that thought. You can't delete a file you can't touch. You can't open a device you can't see. Etc.

    Capabilities can be rock-solid security, but they do have some problems, like revocation. The neat thing about EROS is that stack smash attacks can't gain any extra privileges, because they can't manufacture any extra capabilities -- you'd have to smash the kernel stack to do that.
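A toy illustration of the distinction being drawn, with nothing of EROS's actual interfaces in it: in a capability system the only way to name an object is through a handle you already hold, so forbidden operations are not so much denied as unexpressible.

    #include <stdio.h>
    #include <string.h>

    /* Toy capability: an unforgeable pairing of an object with rights.
     * A process can act only through the capabilities it holds; there is
     * no global namespace to rummage through.  Purely illustrative -- a
     * real capability kernel would not even need the runtime rights check,
     * because a write capability simply would not exist in this process. */
    struct object { char data[64]; };
    struct cap    { struct object *obj; unsigned rights; };

    #define RIGHT_READ  1u
    #define RIGHT_WRITE 2u

    static int cap_write(struct cap c, const char *s)
    {
        if (!(c.rights & RIGHT_WRITE))
            return -1;
        strncpy(c.obj->data, s, sizeof c.obj->data - 1);
        c.obj->data[sizeof c.obj->data - 1] = '\0';
        return 0;
    }

    int main(void)
    {
        struct object secret = { "top secret" };
        struct cap readonly  = { &secret, RIGHT_READ };  /* all this process was granted */

        if (cap_write(readonly, "overwritten") == -1)
            puts("no write capability held: the operation cannot be expressed");
        return 0;
    }

A smashed stack in such a process can only wield the capabilities the process already holds, which is why the parent notes that stack-smash attacks gain nothing extra.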
  • For a system to be B-level compliant, it must have mandatory access controls, something that Linux does not have. There are a few 3rd party tools that can help with this, but they are not complete, and not part of Linux. It may be possible to build a B-level box out of Linux, but Linux itself is not, and probably never will be. Believe it or not, it's for the same reason that NT will not: it makes the system very difficult to use. Mind you that NT is only C2 compliant without a network card installed, and Linux would probably fall into the same category. For certification purposes, NT and Linux are on the same playing field, because the certifications are more into the design of the system, and rarely address the implementation, or bugs.

    I guess the point is that you could have a B or A level box, but you'd never use it for anything interactive because it would be too inflexible. To answer your question, AIX is fairly secure, but the OS has to go through a number of hoops before it passes any level of certification, which, BTW, NT does also.

  • Trusted Solaris [sun.com]


    HP Virtual Vault [hp.com] Based on HP-UX CMW


    SCO CMW [sco.com]


    Of course all of these are CMW products which meet a slightly different set of criteria...

    11. What are the CMWREQs and the CMWEC?


    The criteria used by the Defense Intelligence Agency (DIA) to rate a product as a Compartmented Mode Workstation (CMW) were the Compartmented Mode Workstation Evaluation Criteria (CMWEC), which superseded the CMW Requirements (CMWREQs) in 1991. These criteria defined a minimum level of assurance equivalent to the B1 level of the TCSEC (see TCSEC Criteria Concepts FAQ, Questions 9-11). They also define a minimum set of functionality and usability features outside the scope of the TCSEC (e.g. a graphical user interface via a window system was required, along with the capability to cut and paste between windows). Neither set of requirements is currently used to evaluate products, although products that are designed to have these features may be evaluated with the Common Criteria for Information Technology Security Evaluation (CCITSE).

  • So you wouldn't let C get anywhere near a secure system, eh?

    Well, Java depends on the JVM. What did you write the JVM in?
  • Certification isn't *useless*. But its true value is questionable, and this is essentially because, while it may provide the *functionality* you need to enforce the appropriate security policy, it doesn't provide the *assurance*. The higher levels of TCSEC provide assurance that the *design* of the functionality meets the security policy, but they DO NOT provide any assurance that the implementation meets this policy. As anyone who knows anything about computer security vulnerabilities knows, the majority of security problems are implementation errors and oversights.
  • Software and operating system evaluation is a very complex subject, and by no means is it ever black and white. There are many different ideologies that pervade the field. The international security community is embracing a new system known as the Common Criteria (CC). The concept is that you define a set of objectives, and a CC testing facility checks to make sure that your software/OS/hardware meets those standards. This is much more flexible than the TCSEC (Orange Book) evaluations, but it also adds layers of complexity: which CC evaluation spec do you need or want, who defines these specs, and how do you get your software tested? Well, the International Information Systems Security Certification Consortium, (ISC)^2 for short, has many resources for finding CC specs and CC testing facilities. They also provide a comprehensive training and certification program for people interested in learning about information security. Web Site [isc2.org]. The cert is well accepted, but don't expect it to be as easy a lick as a Microsoft MCP exam - 6 hours, 250 questions. Hope this is helpful.
    Deven Phillips, CISSP
    Network Architect
    Viata Online, Inc.

  • POI: NT has a C2 rating *including* networking.

    Personally, I've never been able to find any serious evaluation of NT's rating anywhere on the web or in print. There is, of course, MS's marketing claims that 'NT is C2', period and end of statement, no details, no clarifications, no special configurations mentioned.
    Then there are the people sounding off on the 'net, who generally say, 'The guy who put together the NT 3.51 box to pass C2 certification had to do all kinds of things to make it even work (including removing networking, tweaking the registry, removing this, that, and the other program), and then when he tried to publicize what he'd done, Microsoft effectively murdered him by suing him here, there, and everywhere and bad-mouthing him, so that under high stress, unemployed, and unable to afford medical care, he died of a stress-related illness.'
    Okay. Whatever. I don't entirely believe that it took a year of intense configuration and ripping out the critical guts of NT to make it secure, and I also don't believe that every version of NT is C2 secure out of the box, which is what MS implies. The government, of course, only says, 'Only boxes are rated C2 secure, not OSes'. (Except they say it in bureaucratese...)

    In other words, your 'Point of Information' is just one more bit of noise, and there is no signal in sight. It's an unsubstantiated claim on a widely disputed and underdocumented issue.


    --Parity
  • > the only secure box is an unplugged one, put in a steel box and thrown at the bottom of the sea

    And even that's not safe with Bob Ballard around.

    --
  • The Common Criteria; its predecessor, the DoD Orange Book; and British Standard 7799 are not actually "security" certifications, per se. What I'm saying is that being "C2" certified does not mean that the system was certified as "secure". It means the system has certain features and functions and that they worked as described when tested.

    So, for example, one of the C2 criteria is that users be uniquely identified. Sure enough, any C2-certified system has user identifiers and every process on the system uses one. Does that make the system secure? No, but it helps an administrator secure the system. The certification just means that feature is there.
  • Not only that, but their current name, ICSA, is deliberately misleading too. They claim now that it is "not an acronym" [isca.net] (of course it's not an acronym, it's an abbreviation; acronyms are abbreviations that are pronounced as a word, e.g., FBI - abbreviation; COBOL - acronym. Unless they want it to be pronounced "ick-suh".), but not so long ago it stood for International Computer Security Association. This makes it sound like it is some international organization, like the Information Systems Security Association, only it isn't, wasn't, and didn't even try to be. It is a company that wants to sell computer security consulting, evaluation, and similar services. Do not mistake it for a standards body made up of experts from around the world, like the IETF. It is just a company, kind of like the Wheel Group (before Cisco bought them), only not as good.
  • The only two operating system/hardware combos I've ever seen with an A1 rating under this [yes, A1 is both hardware and software security] are Trusted Xenix [where did this one go?]
    Trusted Xenix was B2, not A1. TIS (now NAI Labs) sold a few copies, but aimed further development efforts at Trusted Mach [pgp.com], which was targeted at B3 but basically ended up going nowhere. TMach was deemed insufficiently interesting to speculators^H^H^H^H^H^H^H^H^H^H^Hinvestors (who were interested mostly in the Gauntlet firewall) and so was cancelled shortly after TIS's IPO.
  • The first step to security is to eliminate the insecure stuff.
  • Have a look at SGI Trusted Irix

