Certifying Software As Secure?
"It turns out that the much-touted Microsoft Windows NT 3.5 and 4.0 TCSEC C2 rating basically states that the operating system assures separation of users and data and audits user and security-related events -- capabilities that are essentially standard expectations of any modern 'enterprise' operating system. That rating is just two (2) levels out of seven (7) up from the rating for utter lack of security (D). See the U.S. Government's Commercial Product Evaluations page and its associated Trusted Technology Assessment Program (TTAP) FAQ entry on TCSEC evaluation rating interpretation for more information. For now, be aware that the evaluation ratings run, non-intuitively, from lowest to highest: D, C1, C2, B1, B2, B3, A1. Microsoft's rating also applies only to very specific configurations of the Windows NT operating system and none of its frills -- like ASP, for instance.
Still, even from the standpoint of researching evaluation and certification options, it looks like only international government evaluation (i.e. the 'Common Criteria' evaluation process) and perhaps ICSA certification are available to any vendor who wants to be proactive and benefit from standards in the process. (Please let me know if you know better!) And I've talked with a number of hacker types who sneer at the idea that any of these certifications are worth the money and effort put into them.
At the same time, pointy-haired types eat this certification stuff up. In point of fact, government contracts can be far more attainable if you get certified this way, and, as Microsoft's spin-doctoring of its C2 TCSEC rating shows, holding the rating just makes a company look more responsible all around -- or can, if your readers and customers don't know what the rating actually means.
Sure, it's possible to contract with any security auditing firm to get someone to say that your product is at least minimally secure, but it remains unfortunately true that if you want any kind of widely recognized, standard certification, you'd better seek out some kind of formal evaluation and rating.
Do people agree, disagree, and either way, can they prove it?"
certified design or certified implementation? (Score:1)
Suppose your C2-certified NT machine has a bug or backdoor that no one knew about, one that lets you overwrite system memory with anything you want if you press a magic keyboard combination. Well, all those access control lists and whatnot don't mean squat. Up, down, left, right, A, B -- and poof, you can do anything you want.
Of course, how can you tell an implementation doesn't have bugs or backdoors when you don't have the source?
Re:certified design or certified implementation? (Score:2)
IIRC, you don't have to deliver the source to the evaluators, but you at least have to have someone in-house designated to do reviews. Somewhere in the huge pile of documents there has to be a plan for ensuring that the implementation meets the design, as well as plans for testing and configuration management.
Re:PGP for example (Score:1)
It would probably be in the "B" range (access control to data, separation between public and private keys, except it doesn't keep the necessary logs), but, since it's not a full system, and only a single application, it doesn't get a rating. As a component it could probably be helpful in doing the required authentication, but that's not enough to get a rating (it would also need to provide logs for this, probably).
What B2/A1 security is all about (Score:1)
auditing, a B2/A1 security rating requires that there be NO POSSIBILITY that a user can access any information or perform any operation that is not allowed by the user's security rating. This means that the OS must be PROVED to have no buffer-overrun errors or any other bug by which a user can gain privileges beyond what the user is entitled to. The final test is that even with complete source code and documentation of the entire OS, no loopholes can be found.
This means that C++ is forbidden, as well as C strings and any compiler and/or library that executes code on the stack or heap.
There is such a thing as expensive speech (Score:1)
The thing is... (Score:1)
Unless of course you want to go for full A-9 certification, in which case you'll definitely need to get a PTU and an EPI audit done on all your FDCs. Now that's a real pain in the ass. This one guy my cousin knows spent three weeks just working on his EPI's TRD, not to mention the PTU an
Oh dear. I think too many TLAs have FRD my BRN.
FNK
Re:Point of Non-Information. (Score:1)
Of course whole boxes need to be evaluated: I can have an A1-secure OS, but if the PROM allows me to bypass it, then the box isn't secure.
Re:what does that rating mean? (Score:1)
German Govt. Certification (Score:1)
Basically, for us it's a selling point both to customers and investors, that an office that's generally pretty respected has decided that we've taken all the common-sense measures within our power (no, you'll never be able to defend your network against H@x0r1ng by the aliens from Mars) to protect ourselves from intrusion. "Technical due diligence" is the key term in this situation.
Our problem was to try to certify portions of our entire network (working together) as being "secure", as opposed to just single products. I've found that so far, the BSI guys were pretty helpful and technically clued. What they will do for you is let you define "protection profiles" ("Schutzprofile"), based on the Common Criteria, for parts of your infrastructure. They then check (a) whether your criteria and profiles make sense, and (b) whether they comply with their reading of the CC.
One of the really cool parts of this is that assuming they decide that you're kosher security-wise, you can decide to release the profiles you developed for general use, and they will then certify other companies against those same standards. Likewise, you can just get some pre-defined off-the-shelf requirements that sound usable, and have yourself judged by them...
Re:Trusted Irix (was Re:Trusted Solaris) (Score:1)
No, but a paradox. Logic states:
A) if you follow it, you must reject it.
B) if you reject it, you are following it.
Of course, the path to follow is: Screw logic, and ignore it.
--
Re:What B2/A1 security is all about (Score:1)
Re:The real deal on evaluation (Score:1)
Why have root? (Score:1)
There are reasons why you need an administrative ability to access a file that a user doesn't want other ordinary users to see. Suppose, for instance, that an obnoxious user greatly exceeds his (soft) disk quota. An administrator for the system needs the ability to go in and archive and delete some of the files that are using up the common space. In general, users can be selfish and obnoxious, and somebody has to have the right to override their stupid decisions when those decisions hurt other users. You need some kind of administrative right to step in and do that.
The problem is not in letting an administrator play god. The system needs someone with godlike powers to do that stuff, and it's very useful to have programs that can proxy for the administrator and do tasks that ordinary users shouldn't be allowed to do, like reading the encrypted form of user passwords. The problem with Unix and the like is that there's no segmentation of those powers. You can't easily delegate to a program the right to look at /etc/shadow and nothing else. The result is that you have a lot of daemons running with full administrative privilege when they need only limited privilege, and a failure in any one of those programs can give an attacker full privilege. That means you need OpenBSD levels of auditing and care, because any single failure can result in catastrophe. Unix needs to add some kind of compartmentalization of administrative privilege in order to have real security. That way, even if you miss something, an attacker won't have absolutely free rein on your box.
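The missing delegation described above can at least be illustrated in miniature. This is a toy sketch only (plain Python, not any real OS mechanism, and Python cannot truly enforce it against a hostile caller; the class and function names are invented): broad "admin" power stays inside a backend object, and a caller is handed a function exposing exactly one operation, so compromising the caller leaks only that one ability.

```python
# Toy sketch of attenuated delegation: hand out one narrow right, not root.
# AdminBackend, make_shadow_reader, and the file contents are invented for
# illustration; nothing here touches the real /etc/shadow.

class AdminBackend:
    """Stand-in for full administrative power: can read any 'file' here."""
    def __init__(self, files):
        self._files = files

    def read(self, path):
        return self._files[path]

def make_shadow_reader(backend):
    """Delegate only the right to read /etc/shadow, and nothing else."""
    def read_shadow():
        return backend.read("/etc/shadow")
    return read_shadow

files = {"/etc/shadow": "root:$6$salt$hash:...",
         "/etc/secret": "launch codes"}
admin = AdminBackend(files)
read_shadow = make_shadow_reader(admin)

print(read_shadow())  # the one delegated operation works
# The holder of read_shadow was never handed the backend itself, so its
# interface offers no way to ask for "/etc/secret".
```

A kernel-level version of this shape is what capability systems and per-daemon privilege separation aim for; the sketch only shows the interface idea.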
You are the moron (Score:1)
The reason Internet security is an oxymoron is because of the incomprehensible complexity of the Internet itself. There are so many interconnected systems, each administered in its own fashion with a boundless variety of configurations. On top of that, everybody in the world has the opportunity to attack each system on the Internet.
How the hell do you completely secure one system on the internet against all known and unknown types of attacks that could be launched by anyone from anywhere?
For this reason, it is impossible to honestly guarantee a system on the internet as secure.
Re:Point of Non-Information. (Score:2)
Also, what data you transmit over the net is not evaluated. They're just saying you can have a network card, and if no ports are open you're still safe.
Which I guess is better than the (supposed) no-network configuration of NT3.51. If that's true.
I think I'm going to print this out and take it home. Thanks.
--Parity
Re:I want ACLs, and the ability to rename UID 0 (Score:1)
Re:please post security ratings (Score:1)
Even breaking up root privileges somewhat would be useful, though; make a stack-smashing attack give you only the little privilege that the program you found a hole in had. Beyond that, you could also restrict some of the usual user privileges (exec comes to mind) to eliminate abilities that daemons don't really need. I.e., make it so bind can't call exec or do anything privileged except open low-numbered sockets.
On the other hand, this adds a bunch more stuff that has to be configured correctly, as well as requiring some new way of setting all this.
Re:what does that rating mean? (Score:2)
Heh, very practical.
Re:Linux is nowhere near B-level compliance (Score:1)
You will never get A1; that requires a formal proof, and doing that on an existing code base is impossible. Doing anything higher than B1 is not worth the effort.
Most UNICES are C2 feature complete (except Linux). A few are sold in B1 versions (ie Trusted Irix)
C2 and B1 are doable. All that Linux is lacking for C2 is an audit trail. (and of course the line-by-line analysis...:-) But we can separate feature completeness from certification.
B1 requires MAC (as you stated), but to be useful it requires a MAC-aware X server, commonly called a CMW. That too is doable; it will just take longer, since you need some standard way of passing labels around the network.
While I'm here, can we please separate TRUST from SECURITY. There is this crypto package that uses 1-million-bit keys; is it SECURE? Yes. Can I TRUST it? No.
richard--SGI Trust Team, but not speaking for them
Wonderful illustration of Govt Spending (Score:1)
The day Microsoft makes something that doesn't suck will be the day they start making vacuum cleaners.
Vik
Re:Trusted Solaris (Score:2)
Heh, I realize that this is way off topic, but the immortal John Carmack [slashdot.org] is UID 101025...
Re:Internet Security is an oxymoron. (Score:1)
Re:please post security ratings (Score:2)
It may be possible to make a B-level distribution, assuming that physical access is controlled and programs are set up very carefully. But you probably wouldn't find it terribly useful, since nobody could become root, as that would seriously break the security model. You'd basically have to compensate for the lack of a user-level capabilities system by doing without those abilities.
Of course, you could probably get C2 by turning off all the services you don't actually want, removing the setuid bit from programs that shouldn't have it, restricting access to some other programs, and replacing the rest of the setuid programs with versions which are simple enough to verify their security.
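The setuid sweep just mentioned can be partly automated. A minimal sketch using only the standard library; which binaries deserve to keep the bit is still a human decision, and the path in the usage comment is only an example:

```python
# Sketch of automating a setuid audit: walk a tree and report regular files
# with the setuid bit set, so an admin can decide which ones really need it.

import os
import stat

def find_setuid(root):
    """Return sorted paths of regular files under root with setuid set."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)   # lstat: don't follow symlinks
            except OSError:
                continue              # skip unreadable entries
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                hits.append(path)
    return sorted(hits)

# Example usage (path is illustrative):
#   for path in find_setuid("/usr/bin"):
#       print(path)
```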
Generally, many of these ratings aren't very helpful unless you're a government, because the higher levels are mostly concerned with making sure that your secret data can't go to untrusted places. If you're big enough that you actually talk to trusted places, this is helpful, but for most shops it means the computer is unusable.
For example, the machine can't let you cut and paste from a secret document into anything like, say, a web browser or an ssh window. It can't let you accomplish this in several steps, either. It quickly becomes impossible to have anything that can send information out to anything but verifiably secure and trusted sites. Not only do your directory listings not include secret files if you're not a trusted user, they don't even if you are, if you could copy them out to something untrusted. It's actually easier on the user to have a separate machine for secret data, and it's all silly unless you're also searching your employees for secret files at the door.
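The behavior described here is essentially the Bell-LaPadula model behind B-level mandatory access controls: "no read up" and "no write down". A minimal sketch of the two rules (the level names and their ordering are assumptions for illustration; real systems also use compartments, not just a linear lattice):

```python
# Bell-LaPadula style checks: a subject may not read above its clearance
# ("no read up") and may not write below it ("no write down") -- which is
# exactly why pasting from a secret document into a browser is forbidden.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level, object_level):
    # no read up
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # no write down: a secret-cleared process may not write to an
    # unclassified sink (a web browser window, an outbound socket, ...)
    return LEVELS[subject_level] <= LEVELS[object_level]

# A secret-cleared editor can read a secret file...
print(can_read("secret", "secret"))        # True
# ...but may not paste into an unclassified window:
print(can_write("secret", "unclassified")) # False
```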
Of course, in the business world, you generally don't deal with secret data on this level. Security is aimed at preventing access from users who shouldn't get it, not preventing spies from getting information out. B3 or C2 is about where it's worth getting, and beyond that what you're interested in is an entirely different scale of security.
OT: Sig, and even further OT (Score:1)
heh . . .
Well, it was, of course, "Never trust anyone over 30" but then I saw my own UID and realized that this would have been a problem (though it's certainly good advice).
Oh well . . . I wish I could still just post AC like I did before I got an account . . . but it's impossible with the noise and the current moderation system. Add to this that most posters start out 2 points above an AC, and it's impossible to be heard unless you're really early.
--
what does that rating mean? (Score:1)
Also, use of ACLs or labels in today's world is pretty much irrelevant to the consumer. They're a nice add on for servers, but that's the best you could say.
A seal of approval is only as useful as its meaning. A good seal of approval will have some abstract meaning like "hard to break in", and then concrete measures that define that abstract term. Over time, the concrete measures should be updated (i.e., one could lose one's rating).
For instance, if I were to design a security stamp of approval meaning "reasonably secure", my stamp would require the following:
1. Source openly available
Without source, only the hackers can find the problems.
1a. The system must be documented. None of this hard to write/hard to read crap.
2. The code must have been reviewed either by a person or a program to show that common failures are avoided (such as a failure to check bounds). In short, no use of gets() and it must pass lint. Java has a clear advantage here over both C and perl, by the way. I wouldn't let C touch a system that really needed to be secure. Useful as pointers are, blown pointers and buffers happen (in C).
3. BY DEFAULT the system must not allow administrative access via unencrypted or unauthenticated means. No
4. BY DEFAULT, the only Internet protocol MUST be IP (no IPX, Appletalk, DECNET, etc). and daemons MUST be disabled or inaccessible to the outside (including routed, dhcpd, sendmail, and X). NO file system protocols, thank you very much!
5. Any sort of applet technology requires authentication (i.e., signatures) and authorization.
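Point 2 asks for review "by a person or a program". The program half can be sketched as a trivial source scanner; the banned-call list and advice strings here are illustrative, nowhere near a real lint:

```python
# Toy scanner flagging known-dangerous C calls. The \b boundaries keep
# safe relatives (fgets, snprintf) from matching their unsafe namesakes.

import re

BANNED = {"gets": "no bounds check; use fgets",
          "strcpy": "no bounds check; use strncpy/strlcpy",
          "sprintf": "no bounds check; use snprintf"}

def scan(source):
    """Return (line_number, call, reason) for each banned call found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for call, reason in BANNED.items():
            if re.search(r"\b%s\s*\(" % call, line):
                hits.append((lineno, call, reason))
    return hits

code = 'int main(void) {\n    char buf[16];\n    gets(buf);\n    return 0;\n}\n'
for lineno, call, reason in scan(code):
    print("line %d: %s() -- %s" % (lineno, call, reason))
```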
Eh.. just some musings...
Re:There is value (Score:2)
Re:The bogusity meter is pegged! (Score:1)
Trusted Irix (was Re:Trusted Solaris) (Score:1)
Re:Trusted Solaris (Score:2)
They wanted a secure web-server, running their in-house written CGI code. The PHBs decided that as long as the underlying OS was certified as secure, they would have no security problems! Yes, people are really that naive!
Virtual Vault was eventually dropped when it was discovered that their systems-management software (which used the extremely insecure SNMP) wouldn't run on the proposed system, and they needed everything to report back to one central super-console.
certifying sw as secure, and BUG FREE (Score:1)
Then, of course, there's the whole question of certifying that the way in which the software (or hardware, for that matter) is used is certifiably secure. Again, nobody can guarantee that lapses aren't possible.
Bruce Schneier has been saying recently that he's come to the conclusion that (paraphrasing) certification isn't the answer to computer security; if you want to feel secure (and protect your business in the case that there are lapses), then get insurance instead. Manage your risks, in other words, rather than placing blind trust in a particular technology or a paper certificate.
From someone who has been through it. (Score:1)
We knew that we had to get our product (a transparent-proxy firewall) "certified" if we wanted to sell it to the Canadian government.
Enter CSE.
They told us that the Orange Book was being phased out and that we could be the first product to be evaluated under the new "Worldwide" Common Criteria. We accepted. *I* was the one who was assigned to do it.
Since this was the first product to undergo the Common Criteria "checklist", I could debate any point of the criteria if I didn't agree with it (which I did, often...).
Of course, the dice were loaded: there were 7 CSE people, and I was alone. I often had to debate my points over and over to different people until the head techie (the Brain) agreed and put a note into the Criteria. I assume that they then reported the proposed change to the NSA (or "Headquarters", as they called it).
Our product was evaluated under the EAL-1 checklist; that's the lowest, but it was the only real one achievable at the time.
The next 7 months were rather tedious: I would hand over all the product documentation and white papers, and they would look up each function of our product in their checklist, such as:
They would give this back to me, and I would have to check each and every point. If I didn't agree with a point, I had to document the product even more (more white papers, more changes to the user guide, admin guide, etc...). We could not change the code... Oh, and I am not a tech writer... but boy, did I ever have to become one then...
Since each and every log message has to be documented and explained in the product documentation for a product to get EAL-1, I almost quit the day they told me that I had to document DEBUG log messages! Since we had rather original coders in our midst (hey San!), I could not conceive of having to explain debug messages like:
"Jesus, I need a beer in seconds." or "Oh no don't touch me there." Luckily, I was able to debate that requirement out. Thank G0d.
The bottom line is that this forced us to document our product way the hell more than what we had at the time, and I think that our users benefitted greatly. CSE ended up with a better, more logical Criteria, and we were then able to sell to the Canadian Government.
Was it worth it ? Well, I could have used the sleep... But I think that it made our product and documentation much better, and it opened a market that we wanted, and then realized that we were the only player in it. That meant big bucks; way more than we originally thought. So, yes, it was worth it.
Would it still be worth it? Hell yes; that's why I want the Black Hole Project [sourceforge.net] to get certified under the Common Criteria as soon as possible. (Disclaimer: I have left the company that originally made the product, and I am now on my own.)
Re:what does that rating mean? (Score:2)
You are not getting rid of the problem, you are just hiding it. All these "ugly things" prone to errors are simply _hidden_ inside the JVM. But they are _still_ needed, and _still_ used. And written in C.
Evaluation and accreditation of software (Score:1)
Re:Problems with "secure" software... (Score:1)
I Work on an A1 rated system... (Score:2)
--
First thing you do... (Score:2)
:)
---
Orange Book (Score:4)
--
I'm working on A1 Security. (Score:4)
I'm still working on multiplication...
In other words... (Score:1)
Now that you're not rushing to get that first post, would you like to reconsider your position?
security smecurity (Score:1)
Trusted Solaris (Score:4)
Sun's Trusted Solaris (I'll let somebody else get a few Informative points by posting a link; I don't have it handy) lets you do some useful things in this respect. I don't recall their rating offhand; somewhere in the midrange.
You can do some really cool things besides impress your boss with the rating, too. Like make individual directories and files simply not be there when certain users do an ls(1). I don't mean "permission denied" kinds of things; I mean the kernel itself just skips over that file and doesn't even report its existence.
It's great for situations when information at different classification levels (Top Secret, Secret, Confidential, Stuff That Used To Be Secret Before You Put It On The Damn IIS Server And Some Eleven-Year-Old Kid Got It, etc) all need to live on the same machine.
The bogusity meter is pegged! (Score:3)
I really hate the false sense of (ahem) security that certification gives. It tries to assure that the software is secure, which to me is an almost impossible task for any non-trivial system. Anybody who says a system is secure is lying to themselves and others.
With things this complex, security can only be approximated.
Of course, you can certify a design as secure with much less effort but it's the implementation that matters the most.
A VB Haiku (Score:1)
forgot Option Explicit
time for bugs galore
J
Great Links: (Score:3)
Network Security Library [secinf.net]
Common Vulnerabilities and Exposures [mitre.org]
SecurityFocus [securityfocus.com]
You can find everything you want to know (and more) at these sites.
PGP for example (Score:1)
how does it affect your decision to use or not to use it?
the FUN in all of it (Score:1)
rating under this [yes, A1 is both hardware and software security] are Trusted Xenix [where did this one go?] and Honeywell Multics [this one was flushed down the shitter in France.]
Re:security smecurity (Score:1)
Problems with "secure" software... (Score:4)
Like Yogi Berra said, "In theory, there's no difference between theory and practice. In practice, there is." No matter how stringent the testing, no matter how exacting the software development (up to and including provably-correct software), software cannot be secured. In theory, following a provably-correct software design (such as is possible in some Ada subsets) allows you to design software which is provably correct... but that's theoretical, not practical.
Sometimes, the buggiest thing in a system is a feature which is working exactly as it's designed to do. Provably-correct software is predicated on there being a correct assessment of what the software needs to do and needs to not do, and so far, nobody's come up with a way to do provably-correct brainstorming.
Auditing cannot, repeat, cannot make a piece of software secure. All it does is find errors, not all errors, and maybe even not all the major errors.
2. TRUSTED SYSTEMS ARE JUST THAT.
Trust. It's another way of saying "I have faith in you." Faith is the antithesis of proof. For years my Linux box was a haX0r's dream--I didn't bother to turn off services, my root password was fairly easy to guess, etc. That doesn't sound like a trusted box, does it?
Wrongo. It was very trusted, because it wasn't connected to any network and it was in my bedroom. I trusted it a lot--I had faith that it wasn't going to be compromised.
Whenever you see someone advertising a "trusted system", ask yourself: who trusts it? Why do they trust it? Should I trust it? "Trusted systems" are sometimes a lot of snake-oil; people who don't know beans about security buy "Trusted Solaris" because it says "Trusted", even though their incompetency as a UNIX sysadmin makes the box vulnerable.
(Note: I have a lot of respect for Trusted Solaris, even more than I do for OpenBSD. I'm just making the point that the word "Trusted" doesn't mean much.)
3. THE MOST IMPORTANT ELEMENTS IN SECURITY ARE THE USERS AND THE SYSADMIN, IN THAT ORDER.
Most people will reverse this, claiming that the sysadmin is more important security-wise. There's merit to that (after all, root == God), but I stand by my ordering. There's only one sysadmin, and an attacker more or less has to take his chances that the sysadmin is incompetent enough to fall for something (a crack, a social-engineering attack, a DDoS, etc.). But if there are hundreds of users, it's certain that at least one of them is a complete fargin' idiot, which means attacks that involve users are more effective than those that go straight for root.
This is an important point. No matter how secure the box, no matter how trusted it is, the weakest link is the users. When companies get on a security kick, they tend to spend lots of money on software and very little on educating their users. This has always struck me as backwards.
4. USERS DON'T WANT SECURE SYSTEMS.
Have you ever tried to use Trusted Solaris, or OpenBSD in a particularly bondage-and-discipline configuration? Sure, they're locked up tight against intrusion, but this comes at a steep price in usability. People want computers to be easy to use more than they want them to be secure. If you make computers too secure, your own legitimate users will circumvent security. I regularly see passwords on Post-It notes stuck to monitors--not just in Corporate America, but in government offices which routinely handle extremely sensitive data.
5. FOR ALL THIS, SECURITY AUDITS ARE A GOOD IDEA.
Security audits do two things: first, they tend to ensure that software works the way it ought, and second, they tend to ensure that software doesn't work the way it oughtn't. The potential problem that's spotted and corrected due to a security audit may never have resulted in an exploit, but it may well have resulted in a Blue Screen of Death at some point down the line.
Security audits don't just make systems more secure; done properly, they make systems more reliable, which in turn makes them more usable.
Internet Security is an oxymoron. (Score:2)
These software designers really need to take their head out from between their butt cheeks and start thinking of something decently secure. Last I heard, breaking 128-bit SSL wasn't so much of a daunting task.
Been there, done that (Score:1)
Re:Trusted Solaris (Score:1)
Re:There is value (Score:1)
When they said that information wants to be free, they meant free as in speech, not free as in beer.
Really? I haven't seen it in the original context, but the quote is:
It sure sounds like Brand is talking about free beer. Unless there's such a thing as expensive speech.
lalazerg (Score:1)
The key is to get more than one group to agree that it's secure. It might be expensive, but more minds are better than one. Redundancy and stuff.
And when someone does break in, tough shit, make sure you have plenty of logging taking place. If you don't want it secure, remove it from the internet.
--
Peace,
Lord Omlette
ICQ# 77863057
The real deal on evaluation (Score:4)
Re:The real deal on evaluation (Score:1)
YAOTP (Score:1)
Although I'll stick to "never trust anyone over 2^10, for obvious reasons"...
But talk about a stupid issue; since we were both around before we had UIDs, it doesn't matter at all. Although I'm glad I don't have to type in my contact info every time.
I remember when one new user thought that "BoredAtWork" was an Anonymous Coward-type account (i.e. a generic name assigned by the system if you don't give it a name) because he posted so #@*& much back then. But he's UID #36, so of course he doesn't post much now. His posts are still excellent, though.
And yes, it's the old users who get fed up with the system and turn to trolling, because the system encourages it. But don't even get me started on that one...
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Pit "Bull" (Score:1)
Re:The real deal on evaluation (Score:1)
Basically, though, Windows NT is the only OS available for PC hardware that is even listed.
If you can live with C2, you can configure a Windows NT system. If not, your only trusted system choices are very expensive proprietary hardware/software combinations.
From a security evaluation laboratory... (Score:1)
Re:Linux is nowhere near B-level compliance (Score:1)
I want ACLs, and the ability to rename UID 0 (Score:1)
A basic tenet of Linux [and all Unixes] should be the ability to rename UID 0 [the account typically called `root']. That way potential crackers have to realize that the pretend `root' account you created doesn't actually give them the privileges they expect, forcing them to find the username of the UID 0 account.
Unfortunately, this seems to break many things on Linux and many Unixes.
Security Audits Bring Awareness (Score:1)
--Too many holes so little time(You sick bastard!!...I was talking about software!)
Version Nightmare (Score:1)
I've heard estimates for an initial ITSEC level 7 or FIPS level 4 certification at upwards of a million US dollars. Evaluating a changed version runs in the tens to hundreds of thousands of US dollars. Sure, some of this will be cheaper if the design and code changes are open source, but the evaluation labs are not cheap.
This is the primary reason that formally certified products (both software and hardware based) will remain pretty scarce. In the real world, you tend to get systems where some parts have been formally evaluated (for instance - the boot code for a security device). Other parts have been evaluated from time to time (the application program on the security device).
The other thing that happens is that the formally certified part gets "stale". For really expensive certifications (like a level A1 OS), it only gets done once, and customers are forever stuck with that version. How would you like to use a 15-year-old version of an operating system? That is part of the price you pay for the highest level of security.
Another important consideration is that true security depends upon the whole environment. I've put Hardware Security Module products through the German central bank's certification process (ZKA). Just because customer X's environment is certified with the product does not mean customer Y will automatically get a certification. True, the certification will probably be somewhat shorter (and less costly) if the product has been part of another certified environment.
Certifying is important, but hardly everything (Score:1)
Re:Trusted Solaris (Score:2)
While we're naming others, DEC's OpenVMS Vax was also certified C2 (as well as many others). SGI's Trusted Irix was rated B1 [ncsc.mil].
--
Re:please post security ratings (Score:1)
For example, a Windows NT box sitting behind a firewall and doing nothing is pretty secure.
Default installations of Linux distributions suck from a security standpoint: every distro maker tries to include something for everybody, and, knowing that the average administrator is a brain-dead moron used to the WinNT point-and-click UI, leaves everything open by default. This is a sharp contrast to OpenBSD, where only the really necessary services run by default and you have to add the others.
Linux can be made pretty secure. Most of my boxes have never had any problems, but of course I took the time to secure them. Default installations suck; you have to do some work yourself. No OS certification can fix your mistakes.
Re:PGP for example (Score:1)
ICSA? Would the real NCSA please stand up? (Score:2)
And now a new breed of jokers wants to sell me firewalls that are security certified by people who willfully lied about their credentials. That's a great marketing plus for a security product.
Re:The bogusity meter is pegged! (Score:1)
BTW: and yes, security can only be approximated, but you have to make damn sure your alpha error is well below 0.05 if you're talking about a real application.
sorry on the spelling
Re:Trusted Solaris (Score:1)
--
There is value (Score:3)
I personally think there is at least some value in getting your software audited. OpenBSD is clearly a good test case for the value of internal audits in producing secure code. OTOH, internal audits are never going to convince some people about security quality the way external audits do, because of the temptation to cheat.
I think that the government standards for secure computing bases are very valuable in giving you good ideas of what to do. They're clearly the result of careful thought by some very intelligent people. I think that they're missing out on an intermediate security level between their C and B levels that includes horizontal mandatory access controls (basically capabilities) without security levels.
That being said, I think that all flavors of Unix are always going to be inherently insecure as long as they maintain their "root is god" attitude. As it is, there's no room for error. One security hole is enough to give an attacker complete control over your box, and OpenBSD levels of paranoia and auditing are necessary in order to achieve security against anything but a casual attacker. Unix isn't going to be reasonably secure until it implements some kind of mandatory controls, either capabilities or a full class B access control with security levels.
Re:please post security ratings (Score:4)
Linux's so-called "capabilities" are a joke. They are nothing of the sort; they are just more ACL bits tacked onto operations. You want real capabilities, try something like EROS [eros-os.org]. A true capability manifests as visibility -- you can't call a forbidden operation if you can't even get a handle on it. A true capabilities system is a "thought police" model. You can't perform a forbidden operation because you just can't have that thought. You can't delete a file you can't touch. You can't open a device you can't see. Etc.
Capabilities can be rock-solid security, but they do have some problems, like revocation. The neat thing about EROS is that stack smash attacks can't gain any extra privileges, because they can't manufacture any extra capabilities -- you'd have to smash the kernel stack to do that.
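The ACL-bits-versus-real-capabilities distinction the parent draws can be sketched in a few lines of Python. This is a toy illustration, not how EROS actually works: in the ACL model any caller can name any file and a permission check decides; in the capability model, authority is exactly the set of unforgeable handles a process holds, so the forbidden request can't even be expressed.

```python
class FileCap:
    """An unforgeable handle; holding it IS the permission to read."""
    def __init__(self, name):
        self._name = name
    def read(self):
        return "contents of " + self._name

def acl_read(user, name, acl):
    # ACL model: anyone can *name* /etc/shadow; a bit decides access.
    if name not in acl.get(user, set()):
        raise PermissionError(name)
    return "contents of " + name

def confined(caps):
    # Capability model: this code can only touch what it was handed.
    # No FileCap for /etc/shadow was passed in, so there is nothing
    # to forge and no name to supply -- the question never arises.
    return [cap.read() for cap in caps]

print(confined([FileCap("/tmp/scratch")]))
```

The payoff, as noted above, is that a smashed stack in the confined code gains nothing: there's no system-wide namespace to aim the exploit at, only the handles already in hand.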
Linux is nowhere near B-level compliance (Score:2)
I guess the point is that you could have a B or A level box, but you'd never use it for anything interactive because it would be too inflexible. To answer your question, AIX is fairly secure, but the OS has to go through a number of hoops before it passes any level of certification, which, BTW, NT does also.
Re:Trusted Solaris (Score:2)
Trusted Solaris [sun.com]
HP Virtual Vault [hp.com] Based on HP-UX CMW
SCO CMW [sco.com]
Of course all of these are CMW products which meet a slightly different set of criteria...
11. What are the CMWREQs and the CMWEC?
The criteria used by the Defense Intelligence Agency (DIA) to rate a product as a Compartmented Mode Workstation (CMW) were the Compartmented Mode Workstation Evaluation Criteria (CMWEC), which superseded the CMW Requirements (CMWREQs) in 1991. These criteria defined a minimum level of assurance equivalent to the B1 level of the TCSEC (see TCSEC Criteria Concepts FAQ, Questions 9-11). They also defined a minimum set of functionality and usability features outside the scope of the TCSEC (e.g. a graphical user interface via a window system was required, along with the capability to cut and paste between windows). Neither set of requirements is currently used to evaluate products, although products that are designed to have these features may be evaluated with the Common Criteria for Information Technology Security Evaluation (CCITSE).
Re:what does that rating mean? (Score:2)
system, eh?
Well, Java depends on the JVM. What did you write the JVM in?
Why certification doesn't mean much (Score:1)
Evaluation standards and complexity (Score:1)
Deven Phillips, CISSP
Network Architect
Viata Online, Inc.
Point of Non-Information. (Score:2)
POI: NT has a C2 rating *including* networking.
Personally, I've never been able to find any serious evaluation of NT's rating anywhere on the web or in print. There are, of course, MS's marketing claims that 'NT is C2', period and end of statement -- no details, no clarifications, no special configurations mentioned.
Then there are people sounding off on the 'net, who generally say, 'The guy who put together the NT 3.51 box to pass C2 certification had to do all kinds of things to make it even work (including removing networking, tweaking the registry, and removing this, that, and the other program), and then when he tried to publicize what he'd done, Microsoft effectively murdered him by suing him here, there, and everywhere and bad-mouthing him, so that he suffered high stress, was unemployed, couldn't afford medical care, and died of a stress-related illness'.
Okay. Whatever. I don't entirely believe that it took a year of intense configuration and ripping out the critical guts of NT to make it secure, and I also don't believe that every version of NT is C2 secure out of the box, which is what MS implies. The government, of course, only says, 'Only boxes are rated C2 secure, not OSes'. (Except they say it in bureaucratese...)
In other words, your 'Point of Information' is just one more bit of noise and there is no signal in sight. It's an unsubstantiated claim on a widely disputed and underdocumented issue.
--Parity
Re:security smecurity (Score:2)
And even that's not safe with Bob Ballard around.
--
These are not security certifications (Score:1)
So, for example, one of the C2 criteria is that users be uniquely identified. Sure enough, any C2 certified system has user identifiers, and every process on the system uses one. Does that make the system secure? No, but it helps an administrator secure the system. The certification just means that feature is there.
Re:ICSA? Would the real NCSA please stand up? (Score:1)
Re:the FUN in all of it (Score:2)
Look for the Microsoft logo... and don't buy it! (Score:1)
Re:please post security ratings (Score:1)