Linux Software

Auditing for Linux? 118

steelwraith asks: "I'm a contractor working for a DoD agency, and there has been an on-going firefight over whether to allow Linux to be used within the agency, with a possibility of this spilling over into DoD as a whole. Does anyone know of a project to create or port auditing into any of the Linux distributions? This is a major hurdle to the widespread adoption of Linux in the government; while it has a toehold in places already, I fear it could be cut off before it has a chance to show its worth."

"A quick search of several sites (I'm under the gun, so I don't have a lot of time to do research) shows that there are no add-ons to Linux to allow C2-level auditing (a la BSM in Solaris). This is one of the primary arguments left for the side that wants to deep-six Linux in the agency (on top of the requirement for a vendor integrity statement of some kind)."

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
DoDLinux [sourceforge.net] features advanced auditing. With the -B switch DoDLinux will dispatch a B2 stealth bomber to the policy violator's location to deliver the violation message in a discreet 500 kg package.
  • by Anonymous Coward
I read all of those threads as they were posted. Curry, who was the consultant who guided NT 3.5 to C2 certification, claimed to have a contract to work on a package for securing NT4, every service pack to which broke the security modifications. In his posts he stated that there was pressure exerted on him to fraudulently vouch for NT4 as secure, which he refused. (MS was marketing NT4 as if the cert from NT 3.5 carried over!) MS broke their contract with him--refusing to pay him, I suppose--and threatened him with a smear campaign if he made noise. He made noise and sought interviews with high-level DoD officials like Cohen. I believe MS made an effort to characterize him as an unbalanced, disgruntled, insignificant (never revealing how important to them he'd been in the past) contract employee. He never collected on his contracts and died a ruined man.
  • by Anonymous Coward
I heard that with Medusa [fornax.sk] it is possible to make a secure system even at the B1 security level. It can really add security to a Linux box.
  • by Anonymous Coward
    DII COE is a nasty, nasty, terrible, ugly, abomination that should be drowned and suffocated out of existence.

    Unfortunately, the DoD loves the stuff. They think COE is the best thing since sliced bread. Especially DISA. DISA is the Defense Information Systems Agency -- And I personally think the D in DISA stands for dysfunctional.

    Here's a short description of what DII COE is:

    First, there's a web site, where you may download the DII COE kernel and toolkits for Slowlaris, or for HP.
    ftp://ftp.uccs.jpl.nasa.gov

    They also have DII COE kernels for NT.

    What is the aim of DII COE?

DII COE is the brainchild and programming effort of the Jet Propulsion Labs, and has been partially embraced by the Defense Information Systems Agency. I don't say completely embraced, because there are a few enclaves within DISA that outright refuse to go the way of DII COE. Namely, the Security Testing and Implementation Guidelines often conflict and outright disagree with some of the basic concepts of the DII COE.

The DII COE was developed to make a common operating interface, so Joe Private could install packages, er... I mean segments (that's what they're called, I don't know why), without having to type complicated commands like "ls" and "pkgadd".

The DII COE also replaces the kernel (for Slowlaris at least) so that some of the basic concepts of BSM are turned on. BSM is the Basic Security Module, which logs who logged in, who attempted to log in, what directories they changed to, what files they attempted to read, what files they successfully read, which libraries they used, which accounts they su-ed to -- too much information for any decent SA to administer.

    BSM can be so verbose that ... LITERALLY ... a whole 8 gigabyte disk got filled with one GBAF (Great Big Ass File) from two days of operation. Systems administrators are busy enough, and they never have time to read 8 GB of logfiles.
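    A quick sanity check of the rate that claim implies (8 GiB over two days; the arithmetic below is mine, not the poster's measurement):

```python
# Rough arithmetic for the audit-log volume claimed above:
# an 8 GiB file accumulated over two days of operation.
bytes_total = 8 * 2**30       # 8 GiB
seconds = 2 * 24 * 3600       # two days of operation
rate = bytes_total / seconds  # sustained bytes per second
print(f"{rate / 1024:.1f} KiB/s sustained")
```

    That works out to roughly 48 KiB/s of audit data, around the clock, from a single machine.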

So what is so bad about this? you may be thinking.

COE insists that everyone running the same operating system run the same packages, with the same permissions, with the same files in the same places, with a system of segmenting that takes forever to build new packages ^H^H^H^H I mean segments, a completely screwed-up distribution system, a distribution system without the benefit of a smart checksum system (the packages installed determine their correctness with a standard 32768 checksum -- no MD5 here, basically because Slowlaris and HP don't come with md5sum (don't get me started)).

    Safety in heterogeneity is unheard of. The philosophy is that it is safer to have everybody running exactly the same thing. When I hear of this complete fallacy, I think of the potato famine in Ireland. Everybody had the same strain of Potato, and when a specific blight liked that breed of potato, the entire country was devastated by the blight. A little heterogeneity would have saved them. Instead millions starved. Do we learn nothing from history?

General Kelly, yes I'm calling him out specifically, gave a promise to Admiral Hale through the 'good-ol-boy' network. Admiral Hale is the big COE backer -- the guy who thinks COE is the saviour from all the bad things of the Internet.

General Kelly basically said "Okay, I'll run your COE", and now that a General has made this edict -- a technical decision made on the golf course -- nobody in DISA has the balls or the smarts to tell this general that this is a terrible, terrible, terrible idea, and could be the downfall of the entire DII infrastructure.

    Making the whole of the DISA land run something stable and secure might be a good idea.

    But unfortunately, the DISA concept of secure means running NT, or Solaris right out of the box + patches, and using telnet, because it has been certified free of backdoors.

    These people at DISA don't have a clue what secure shell is. They don't have a clue that it's bad to telnet into machines. They do seem to think that the hackers will be held at-bay if you lock out root logins directly. While this is true, a good hacker with a sniffer will just wait for the password after a 'su' command. Stupid, stupid, stupid.

    These are the people who think that it's good enough to put a package, I mean segment on a website and say "It's good!" No PGP signatures, no MD5 checksums, no anything. Just compiled binaries on an FTP server. Stupid Stupid Stupid.

These are the people who think that NT is a good operating system because the vendor tells them that it is a good operating system. Stupid, Stupid, Stupid.

Along this line of reasoning, because they can't set up auditing to the level of granularity that is required by the C2 Orange Book, Linux is being shunned by those who believe in COE. Those who think that COE is a complete piece of crap usually think that Linux would be a great thing on the Military Internet.

But the latter category of people is a quiet minority. DISA and military networks are always at least three years behind the industry -- the organizations are full of bureaucrats who would rather swim in red tape than make a decision or take a risk.

    In order for the COE loving dimwits to be happy with Linux, there needs to be COE set up for Linux. I personally think COE is the worst mistake in military history, but I'm just one man. Maybe somebody else out there could improve on it.

    American Citizens! Be afraid! Be very very afraid! you have complete screw-ups in charge of the Infrastructure of the Military Internet!

    And yes, I'm posting anonymously, I don't want to lose my job.

    Then again, I hate DISA so much, because it's such a bunch of complete idiots and bumbling doodleheads, maybe I should find a new job.

    sigh

  • by Anonymous Coward
    The last time I checked, NT wasn't C2 certified unless you left the NIC out of the machine, which makes it useless as a server. I'm new to Linux myself, but I imagine somebody has had to address this somewhere before.
  • by Anonymous Coward
    Is this [securify.com] what you are looking for?

    From the man page:

    Auditd is part of the linux kernel auditing toolkit. It will capture auditing trails created by the kernel auditing facility from /proc/audit, filter them, and save them in specific log files. For the moment, auditd only supports the -t option, which enables audit trails timestamping. Other command line options will probably be implemented in the next releases to add more flexibility to the package.

  • by Anonymous Coward
    Didn't the NSA recently commission some company to build them a 'secure' distribution of Linux? Shouldn't that include auditing as part of the requirements?
  • by Anonymous Coward
    When I was a wee lad (not too long ago actually) I was an intern at Department of Information Technology Contracting Office.. (That was its new acronym, don't ask me to remember the old one... ;-) (Note: I'll try to use as few acronyms as possible for our larger audience, please adjust flame accordingly)

    DITCO handled telecommunications contracting for DOD and other government agencies at a percentage... Curious... A governmental business, I kid you not...

    Anyways, their AWESOME product ala 1992 was an online order processing system using several networked DOS Machines (ala BBS Software, Mustang Software if I remember), Banyan VINES for the NOS, FoxPRO for the Database backend, and a Sun machine to help the whole thing achieve internet connectivity via telnet sessions and null modem cables...

    FUNKY... It was built by an awesome intern (not me) who went on to greater/grander things...

    What's the point?

    Just my perception but it's all politics... Currently NT is not C2, and making a C2 product is not easy... and Linux is not C2.. yet.. Yet NT is accepted...

So don't sweat it... Do whatever your manager will allow and slowly ease Linux in... In the end it only works if he is comfortable with it, ad infinitum up the chain...

    I can give a few general security recommendations. Tripwire is a must... Beyond that you just have to turn up the security based on your experience (may you learn fast ;-)

    But I also remember attending one of my first and ONLY NT classes and seeing every single "NT Admin" agree that using NTFS was silly and counterproductive, and using FAT partitions was wiser since you had more utilities to fix FAT Partitions...

    So much for security...

In the end security is a synergy between the engineers who made it, and the admins who maintain it... Ever vigilant, amen... Especially since network access of some form is pretty much a given these days...

    Classifications like C2 and big companies like Microsoft give managers a warm safe feeling... That's it...

    Good Security isn't a destination, it's an ongoing project... Talk to your manager for advice, in the end he works for you, as much as you work for him.

  • > Atlas Shrugged, he didn't bend over and grease up.

    We don't all live in a Randite fantasy land. The vast majority of us live in a place where, as George Orwell put it, we sleep soundly in our beds at night because rough men stand ready to do violence in our name. Those "rough men" need whatever tools are best, so for our sake, I hope they get Linux rather than NT.

It speaks wonderfully well of how well those "rough men" (and women) do their job that even libertarian kooks can safely express their opinions :)

  • In England we have the Ministry of Death, same thing.
  • Is this installation-specific? I did a 'man auditd' and came up with 'No man page for auditd'. Would I have to recompile this option into the kernel?


  • "If they need auditing, try to ask them
    WHAT should be audited."

I think it ought to be US who ask the question. We, the people who are committed to Linux and other open-source projects, should be the ones who take the proactive step and ask the questions ourselves.

If we wait for the DoD or others to tell us what needs to be audited, then we are no better than Microsoft. The DoD as a USER should not be responsible for the SECURITY and ROBUSTNESS of Linux; we who have contributed to Linux and other open-source projects should make sure that what we produce will stand up to any kind of test, and we must make sure that we commit a portion of our time to devising the various test cases that prove what we are producing is up to the challenges.

    Let me reiterate -

    The initiative to make a better Linux
    (and all other open-source projects)
    should be ours.

    It shouldn't be DoD or any other users.



You wrote:

    "This kind of reeks of "WE are in control,
    not some silly USER!!" mentality. This goes
    directly against at least what IMHO is the
    whole point of Open Source."

    "Everyone can be a developer. Each USER can
    modify it as needed to meet their needs."

    Good point !

But I am afraid that it was because of my mistake that you have jumped to the wrong conclusion.

What I actually meant is that we, the ones who are responsible for the original code of open-source projects, should also be proactive in making sure that what we wrote makes sense.

    In other words, we DO have to have pride in what we wrote - and the pride cometh in the form of GOOD SOLID CODE.

So, to make sure that what we are contributing to the world is GOOD SOLID CODE, the onus is on us to find ways to BREAK OUR OWN PROGRAMS and then find ways to mend the broken parts.

You are correct in saying that EVERYBODY has a part in this open-source thing -- but we, the ones who contributed the original code, should not rely only on the users to give us feedback or to enhance our code. We should take steps to ensure that what we give to the world is something worth their while to use, or else what is the point of open source if everything that cometh from open-source projects is junk anyway?



  • You said:

    "let say i want to audit some code,
    what should i do?"

I am not an auditor, but the general guideline for auditing is first to set up test environments -- test cases -- trying to find ways to BREAK whatever you are trying to audit.

    If it's a database program, then you try your best to overload or dump whatever things you can think of to the db and see if (or when) it dies, and document how that happens.

    Then, you go down to fix whatever is wrong.

    "i'm a good c programmer, what should
    i look for? buffer overflow? useless
    suid?"

    Whatever that breaks the program or code.

Sometimes the real culprit is not that easy to spot. Sometimes it's not the code per se, but the DESIGN of the entire program.

In order to get certified, you have to turn over your source. The gov't will sign an NDA, but you are still required to turn over your source. Also, to get the higher-level security ratings they require you to provide "assurances" that your code is secure, along with additional security features. For the highest level (A1) you have to submit a formal proof that your system is secure, and have written your code in such a way that they can verify your proof easily.

    There's a good reason why there are no A1 rated OS's (commercial, at least).
  • Secure Computing also wants to sell it to financial houses, etc.

    And, by definition, unless Secure is simply an onsite "contractor" then they are distributing it to the NSA. They also plan to hold onto the copyright on the code as well.

    Either case, they must release the code.

  • Excuse my ignorance, but what is "auditing"?
  • Isn't that a Didi reference?
  • shapecfg allows traffic shaping based on IP address - is that the sense in which you mean that the bandwidth allocated to a user can be controlled ?

    Or is there some tool that will shape traffic differently for users on the same box ?

And ask them if their NT servers are C2 certified. The funny thing is that, contrary to any claim otherwise, they aren't (NT 3.5x is certified, but only if it has no disk drive and no network connection...).

    Correct me if I'm wrong... NT should not have even a hard disk drive in order to be C2 certified, right?
  • Please correct me if I'm wrong, but a long time ago I read some government manuals about C2 certification. It turns out that level of certification isn't so much based on the actual security of the product...rather, it is a measure of the types of security that it implements.

    For example, have you ever wondered why Microsoft chose Ctrl-Alt-Delete to be the log-on, log-out and general system access combination? That's because NT's kernel traps that key combination, and only notifies the log in code that the user has pressed those keys. This prevents some random program from catching Ctrl-Alt-Delete, displaying its own log in box, and recording the password to send to crackers. It also prevents trojan horses from just displaying something that looks like a log in box and catching the user's password, since if the user isn't sure it's a real log-in box, she can just press Ctrl-Alt-Delete and get a real one.

    I believe C2 also requires ACLs, which let you control access to a file at an atomic level (Linux can't really do this). And NT has a built-in advantage over Linux as a graphical operating system - since users don't normally have Telnet- or SSH-style access to the system, an attacker can't break in unless the administrator has been _really_ stupid.

I like Linux. I think that in an empirical way it is more secure than NT - when was the last time you heard of a system being compromised? I've heard of a few different break-ins, but they all boiled down to very difficult-to-exploit buffer overflows. In an inductive way, though, NT is more secure, because it implements all sorts of security policies which are simply impossible under Linux (even if they don't always work on NT either).

    It doesn't really matter, though - the only way to make a machine truly secure is to unplug the NIC.
  • by D3 ( 31029 )
    I would put money on them having internal people write their own version with special drivers, etc.
  • > Lsof is a Unix-specific diagnostic tool.

Just a little note: while there is a Linux port of lsof, the tool on Linux for investigating open files, sockets, etc. is definitely fuser, which IIRC offers every piece of functionality that lsof does (and more), and is installed by default (while lsof is not).

  • Check here [ncsc.mil].

    Basically, it's a government standard for computer security. Most free unixes don't even come close.

    This is partly because, as security increases, convenience decreases. A and B rated systems require hardware that's designed for security, and PC hardware isn't.

You're missing the point. For true security (as defined by the Orange Book) you need to test and validate all this in a very rigorous manner. This is neither fast nor cheap, and I am afraid it cannot be done in the usual open-source way by a loosely coupled team. Someone has to bite the bullet and just do it, along with all the necessary paperwork involved.
  • Just my perception but it's all politics... Currently NT is not C2, and making a C2 product is not easy... and Linux is not C2.. yet.. Yet NT is accepted...

No politics about it. NT4 is C2. http://www.radium.ncsc.mil/tpep/epl/entries/TTAP-CSC-EPL-99-001.html

    Yes, that's with network. (3.5 was the one without.)

    Eric

  • Secure Computing is working on a C2+ secure version of Linux for the NSA, which, I've been told, will be released to the public.

  • Didn't see this one mentioned: kha0s linux

    http://www.kha0s.org/goals/

    A distro made for secure computing; possibly a decent choice until C2 is available.
  • I believe that an auditing system may exist for one of the BSDs. This could be (easily?) ported over to Linux.


    Beware the SPAM in your pajamas.

  • I've been working on a design for auditing system calls within the kernel. Basic idea is that details of every system call are communicated to a listening daemon in userland using, for example, netlink sockets. Every process will have an audit mask that determines which calls are audited for that process. Control via ioctl() on fake character device. Design is still fluid, partly influenced by Digital Unix's kernel auditing. I haven't yet started coding in earnest and will look into LinuxBSM (mentioned elsewhere in this thread) before deciding whether or not my efforts are worthwhile.
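    The per-process audit-mask idea described above can be sketched in userland terms. This is a toy illustration of the filtering logic only (the event names, bit assignments, and classes are mine, not the poster's actual design; the real thing would live in the kernel and push records over netlink):

```python
# Toy sketch of a per-process audit mask: each process carries a
# bitmask selecting which classes of system call generate audit
# records. All names and bit assignments here are illustrative.
AUDIT_OPEN, AUDIT_EXEC, AUDIT_SETUID = 1 << 0, 1 << 1, 1 << 2

class Process:
    def __init__(self, pid, audit_mask):
        self.pid = pid
        self.audit_mask = audit_mask  # which call classes to audit

def maybe_audit(proc, event_bit, detail, log):
    """Emit an audit record only if the process's mask selects this event."""
    if proc.audit_mask & event_bit:
        log.append((proc.pid, event_bit, detail))

log = []
p = Process(pid=1234, audit_mask=AUDIT_OPEN | AUDIT_SETUID)
maybe_audit(p, AUDIT_OPEN, "/etc/shadow", log)  # selected -> recorded
maybe_audit(p, AUDIT_EXEC, "/bin/sh", log)      # not in mask -> dropped
```

    The interesting design decision is that filtering happens before records ever leave the kernel, so an unaudited process costs almost nothing.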
  • If I recall correctly ...
  • When C2 says 'auditing', it is talking about auditing "events". Some of the auditing has to be in the kernel. For example, you could choose to audit all failed open attempts, any suid/sgid, any login, any passwd change, etc. The last two are not in the kernel; the former would be. Generally you'd want to write the events to an audit device buffer. Outside the kernel, you have an 'auditd' that simply reads data out of the audit device (device owned by the auditor, auditd runs as the auditor) and writes it to some storage (disk, CD-ROM, tape, NFS filesystem, etc.). You have audit records that talk about "what" (subject) did "what" (object).

    Each login, cron job, at job, remote login (anything that is a user or runs on behalf of a user) has a unique *audit id* (conventionally = userid at login). An "su" during the session doesn't change the audit id. Audit ids are inherited across forks, execs, threads, etc. They can only be changed by a program with "CAP_AUDIT_CONTROL".

    The audit device is only writable by programs running with "CAP_AUDIT_WRITE". In Linux those CAPs would currently equate to 'root' or something similar. In a file-based capability system, you could have those capabilities on certain programs that needed them (maybe on a PAM module?).

    Note that on a C2-compliant system, if auditing should "stop", the desired action is to halt the system or bring it down to a maintenance mode.

    Under some circumstances, you are allowed to lose events that are in memory but not yet written to disk (say, in the event of a power failure). But this can also happen if something kills the audit daemon. Then the kernel just continues to fill up internal (non-ring) buffers with audit data until memory is exhausted, at which point the system is effectively halted (it hangs).

    Obviously it would be real good for the auditd not to die or run out of space to write to. :-)
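    The fail-stop behavior described above can be modeled in a few lines. This is a toy simulation of the rule only, not a real kernel interface; the buffer limit and names are illustrative:

```python
# Toy model of the C2 fail-stop rule: the kernel buffers audit records
# for auditd; if nothing drains them, the buffer fills and the system
# must halt rather than keep running unaudited.
from collections import deque

BUF_LIMIT = 4  # illustrative in-kernel buffer capacity

class AuditSource:
    def __init__(self):
        self.buf = deque()
        self.halted = False

    def emit(self, record):
        if self.halted:
            return                      # system is down, nothing runs
        self.buf.append(record)
        if len(self.buf) > BUF_LIMIT:   # auditd has stopped draining
            self.halted = True          # C2: halt rather than lose audit

    def drain(self):
        """What a healthy auditd does: pull records and write to storage."""
        out = list(self.buf)
        self.buf.clear()
        return out

src = AuditSource()
for i in range(3):
    src.emit(f"event-{i}")
assert src.drain() == ["event-0", "event-1", "event-2"]  # normal operation

for i in range(10):   # auditd has died; records pile up...
    src.emit(f"orphan-{i}")
assert src.halted     # ...until the system fail-stops
```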

  • As someone says later -- you don't exactly have the right idea on Mandatory Access Controls. Also, encryption is not considered security, since anything that is encrypted can *eventually* be unencrypted. That's not considered secure. They may not want to declassify a given datum for an indefinite (infinite) amount of time.

    In IRIX B1, there are ACLs, sensitivity and integrity labels (following the Bell-LaPadula and Biba models), and capabilities. BLP basically states that information can't flow downward. So all users run with a label with S/I components. A user with S=Top Secret can't write into an object (file or dir) that has a rating less than "Top Secret". Lower levels are to have append-only access to higher-sensitivity data (like an audit trail). In addition you still have the "discretionary" access of standard Unix to further control info. But an owner of a file can't downgrade the file unless they are authorized (have CAP_DOWNGRADE (or something similar)). The integrity part means users requiring high integrity can't read lower-integrity files -- so "root", let's say, only has access to pre-approved files at some minimal level of integrity. If a user writes to a file, the resulting integrity is the lower of the two (user, file). If a user reads a file (assuming permitted), then if you allow floating user integrity, the user's integrity becomes the lower of (user's, file's). Files like /etc/passwd are defined to be of "high integrity" and "low sensitivity" (anyone can read them). I think IRIX has a total of about 47 capabilities to manage its B1 system. In addition to the mandatory controls, there are also ACLs.

    The B2 level also requires covert channel analysis and higher levels of proof of correctness. B1 requires only 'features' that support the B1 security model, plus documentation to show how the given feature set implements B1. There's also a B3 level -- I forget what's in that, but at the A1 level, formal mathematical proofs are required from the hardware level on up.

    Each system certified at a security level (C2, B1) is also tied to the particular hardware configuration that it is certified on. For example, it wouldn't help to cert B1, then have the user add floppy hardware they could boot from and still expect mandatory access control (B1 cert). You can still claim B1 features in the OS, but the cert is tied to one exact box -- no part number changes.

    For a system to be B1 certified, it would likely have to be certed only standalone, or on a private network that supports the Sens/Int labels on each TCP session and each UDP packet.
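    The Bell-LaPadula "no write down" rule described above reduces to two comparisons on sensitivity levels. A minimal sketch (bare levels only; real B1 labels also carry compartments, and the Biba integrity rules run in the opposite direction):

```python
# Simplified Bell-LaPadula sensitivity checks: a subject may not read
# objects above its own level ("no read up") and may not write into
# objects below it ("no write down"). Real labels also include
# compartments; this sketch uses bare levels only.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def blp_can_read(subject, obj):
    return LEVELS[subject] >= LEVELS[obj]   # no read up

def blp_can_write(subject, obj):
    return LEVELS[subject] <= LEVELS[obj]   # no write down

assert blp_can_read("top_secret", "secret")
assert not blp_can_read("secret", "top_secret")
assert not blp_can_write("top_secret", "secret")  # would leak downward
assert blp_can_write("confidential", "secret")    # appending upward is fine
```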

  • Good question! I'm a Department of Defense (DOD) contractor as well. We use SCO OpenServer and it is C2 compliant. (VAX/VMS was as well.) I haven't gone looking for Linux info, but I haven't seen anything about it. I suspect that it doesn't exist.

    C2 certification will require some investment and a corporate commitment to the sanctity of the code. This looks like a real good opportunity for one of the major Linux flavors to snag some market share. But it won't happen in time for your project.

    Good Luck!

  • Let me reiterate -

    The initiative to make a better Linux
    (and all other open-source projects)
    should be ours.

    It shouldn't be DoD or any other users.


    This kind of reeks of "WE are in control, not some silly USER!!" mentality. This goes directly against at least what IMHO is the whole point of Open Source. Everyone can be a developer. Each USER can modify it as needed to meet their needs.

Ultimately, ONLY the users truly understand their needs. Developers are not a "self-licking ice cream cone". They need the USERS to communicate their needs and then go about meeting them. The DoD has decided that it needs certain security features that are detailed in the Orange Book (C2 certification). They've got the resources to pay developers to meet that need or, because they represent a major source of revenue, the industry will decide it's worth their money to meet that need.

    r/

    Dave
  • I'm a research scientist at Phillips Lab (Kirtland AFB, New Mexico), and I have 3 Linux boxes that I admin as well as use for very large dataset analysis. I replaced an aging RS/6000 with a dual PII machine with 70 GB of storage and a tape jukebox. Linux makes my life easier.
    I got a new Gateway for my desktop last year and the first thing I did was slick NT and put Red Hat 6.0 on it. I use StarOffice to read any Office gorp that people send me. Anything coming from my machine is either an Acrobat file or a LaTeX document.

    Sheldon
  • I think it's very cool that the government is looking into open source. In fact I think it's fantastic. But the government should not be limited to open source; the government has to be able to use whatever software gets the job done. One way to ensure that this is open source is to have open-source people work in government, to the extent that we write enough software to get the job of government done that they don't need to buy proprietary software.

    I don't like the idea of my government being run by non-GPL software, but I also don't like the idea of forcing anybody's hand.
  • If the DoD isn't gonna use the GPL, it's my understanding, having actually sat down one night and read all of gnu.org, that they can't write anything based on it. The reasoning is that if they make their own Linux distro, all components that share code with current Linux distros will be required, by law, to be GPL'ed, which the government will not be happy with. It appears that although Title 17 Section 105 is very close to GPL'ing something, it goes against the GPL of a product if its children are Title 17 Section 105'ed instead of GPL'ed.

    Although it would appear on other occasions that the government, in the end, will do what it damn well pleases, and that we can all go to hell if we think we can change that. We can't stop them from stealing from GPL'ed software, and it would be foolish of us to make them use it at the risk of them stealing it or classifying its children.

    Maybe it would be good to have the government refining the code of non-critical source. Imagine the US government doing bug fixes for you, or better yet, getting a government contract to do the bug fixes you already do!

    Well, maybe none of this is really relevant, but I think that the government would be both an asset and a danger (the kind of person you're glad to work for but always keep an eye on), on account that they could easily warp all of our effort for the use of evil (making the products of 'DoD Linux' closed source for security reasons).

    OK, I'm rambling.
  • galen is right to point out that if the DoD wants to strengthen security they should do away with shared accounts and strengthen passwords. Also they should get some of those key-logging keyboards that were on Slashdot... well, maybe not. Anyway, one way I've heard of to ensure that the user at the terminal is who they say they are goes back to another Slashdot article, which pointed me to this interview: http://www.geeknews.net/interveiws/kevinw.shtml

    For those of you who don't want to read it, it's about the cybernetics prof from Reading who put a chip in his arm (a setup similar to the mobile Speedpass) that allowed computers and rooms to identify him, open doors for him, and log into terminals for him.

    Instead of logging into terminals, it should be used like a second password:

    login: daevt
    passwd: ************

    Reading from chip ............

    authorization failed!
    your unique chip number has been reported to the proper authorities!

    or

    authorization success!
    $USER, welcome to $HOST

    Naturally, I think there are a bunch of us who wouldn't want the government to be able to track us with military-grade satellites (the chip is powered when it receives radio waves through its power coil and then spits out a string of numbers), but it would be the second most secure method I can think of.

    The first would require a direct link into my head, which nobody will ever get my permission to do: the ability to get a unique feedback for an image or sound fed into your brain. Like your brain is your own password cipher, and since no one can duplicate your automatic responses, only you could give back every detail of your response.
    However, I do believe that time will never come, and it's probably a good thing too; what happens if this thing crashes while you're logging in -- brain damage? Now I'm way off topic...
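    The mock transcript above is just a second factor checked after the password. A minimal sketch of that flow (the user registry, chip-ID format, hashing scheme, and messages here are entirely hypothetical):

```python
# Sketch of the two-factor login flow described above: password first,
# then the implanted chip's unique number as a second factor.
# The registry, chip-ID format, and messages are purely illustrative.
import hashlib

def h(s):
    return hashlib.sha256(s.encode()).hexdigest()

USERS = {"daevt": {"pw_hash": h("hunter2"), "chip_id": "0xBEEF"}}

def login(user, password, chip_id):
    rec = USERS.get(user)
    if rec is None or h(password) != rec["pw_hash"]:
        return "login failed"
    if chip_id != rec["chip_id"]:       # right password, wrong chip
        return "authorization failed! chip number reported"
    return f"{user}, welcome"

assert login("daevt", "hunter2", "0xBEEF") == "daevt, welcome"
assert login("daevt", "hunter2", "0xDEAD").startswith("authorization failed")
```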
  • C2 is not security. It is only auditing, including logging of who has read and leaked the files on certain tobacco usage in the Oval Office.
    We need to get it implemented to sell things to many governmental institutions, not only in the US.
    And the best part is, it has NOTHING to do with security. As long as you log who you think does things (like NT), then you are OK. And remember, NT is only C2 certified when disconnected from the network.
  • Free Software has nothing to do with socialism

    Actually, free software is an excellent model for socialism, but the way it was imagined, not the way it turned out.... And no, I don't know why I'm responding to obvious flame bait.

  • Just out of curiosity, what _is_ C2 Level Auditing?

    Chris
  • I think that you will find that they will want to know: 'was any code written by foreigners?' and, if so, 'has it been verified by a trusted US citizen?' Also, I believe that part of the verification is that the vendor (the source code holder) will not go out of business, letting the code vanish into thin air. The software we provided had to have 'fully documented source code - independently certified by a government approved certification authority'. As they would be getting the full source code (something which they did not envisage when the rules were set down), some of the normal rules would not apply.
  • Auditing is not only for detecting break-ins or suspicious behaviour, but also for tracing every major system change. These logs need to be provided during investigations, or can be used simply to check up on a person about whom there are suspicions.
  • Please moderate this up

    My partner and I are the people working on the Linux BSM. Our goal is to build a Solaris compatible auditing system that will be C2 compliant. Our page is at: http://soledad.cs.ucdavis.edu/ [ucdavis.edu]
    please cc any email to:
    holmlund@cs.ucdavis.edu
    banford@cs.ucdavis.edu
    Jeremy Banford
    Co-Chief BSM Monkey

  • As a Navy guy, I must disagree. You are correct that the Navy moves slow with IT. But that's why Linux would be ideal. With Linux, you don't have to upgrade hardware and software every 2-3 years. When you learn how to do something, you don't have to relearn it with each new OS update. Also the Navy already has sailors and technicians who are trained to use commercial Unix for various systems. Those skills could be easily transferred to Linux.

    The biggest obstacle to DoD adoption of Linux is mindshare. The guys in charge are not geeks, they believe every lie that ever came out of Redmond, their knowledge of technology is about five years behind the power curve, and they cling to an outdated (non-open source-centric) view of how software should be purchased and supported.

  • Guess I come a little late, but I spent some time reading through the whole thread, and was wondering if anybody has ever read the "Security Requirements for System High and Compartmented Mode Workstations" criteria (DDS 2600-5502-91)? This is the criteria to which SecureWare's original (yes, I mean it) CMW product was built. A must for those who are looking forward to actually designing and/or implementing a trusted system based on Linux that would meet the Orange Book's criteria.

    I don't think SecureWare exists anymore. I think they changed their name to SecureFirst and are in the
    online banking e-commerce business right now. I used to work at Hewlett-Packard and participated in some of the HP-UX BLS/CMW development projects (the not-so-well-known 9.09, 10.06, and 10.24 releases). I think it's currently marketed as some big e-biz bundle called Virtual Vault, but that was the older days... Anyway, if there is such a BLS project for Linux, I'd be interested in joining. Any links/URLs, please?

    BTW, you do not build a Linux B1/B2/C1/C2 system; you build a B1/B2/C1/C2 system based on Linux. A lot of the B1/B2/C1/C2 criteria defeat the original Unix implementation concepts. The higher up you go, the less Unix-feel you're going to get. Like, hey, I thought root is meant to be the god, but why'd I get a 'rm -rf /: permission denied'? You get the idea....

    A little off-topic tidbit, but it's nice to know that one of the very first CMW B1 boxes was built on a (drum roll)
    .
    .
    .
    an Apple Macintosh! Yeah, no sh!t.

    Yours,
    --Albert
  • I have not carefully studied Argus' BLS design and
    implementation, but I *seriously* doubt that their
    so-called BLS system was officially endorsed by
    the DoD. The certification process is loooooooong,
    and is extremely costly ($$$).

    Yours,
    --Albert
  • by Anonymous Coward
    The biggest hurdle to getting linux on board is getting a DII COE kernel developed for it.

    Now for those who don't already know, DII COE stands for Defense Information Infrastructure Common Operating Environment. The project is done by JPL and maintained by DISA.

    The DII COE kernel overlays the OS, and provides a common set of utilities and security measures to help administrate a box. The government is a little tired of an application taking up a whole machine, and wanted more flexibility. The DII COE provides a set of tools to do this, providing more security to boot.

    And it may one day get to the point where all DOD systems will be expected to run DII COE, and all applications are to be delivered in DII COE's package format. And if that happens, Linux will be locked out.

  • by Anonymous Coward
    NT4 is C2 certified, both standalone and networked.

    http://www.radium.ncsc.mil/tpep/epl/entries/TTAP-CSC-EPL-99-001.html

  • That's nothing... one DoD installation where I contracted was using Novell, Microsoft AND Lotus (I think the Lotus was for classified/top secret email, but since I don't have a clearance I can't say for sure). Just to make things interesting, I built a Linux DNS server from leftover parts lying around (AFAIK, it's still up).

  • DoD = Department of Defense (the department that oversees the US military).

  • Not everyone that reads Slashdot is from the US, and/or studied our branches of government. The original poster did not make the distinction.

  • The mainstream Linux community (including the developers that have brought it this far) has no real interest in military-style security. This is one of those cases where the end user (the DoD) should scratch its own itch.

    They have the means (money) and the motivation (money + security) to do this themselves or sponsor somebody else's efforts. The result would be an OS that exactly matches their requirements and that they can continue to mold to their own purposes without relying on some vendor.

    Of course even if they don't do this some companies will. SGI and IBM have a lot riding on Linux and they need to sell into big government accounts, so they'll take care of it.

  • OK, I'll accept that Linux doesn't have the file controls you describe - at least, not yet.

    The network side, though, is a tad more complex. :) It is possible, for example, using the features in the various traffic shaping tools, to specify that user X has access to 20% of the bandwidth for FTP, 0% for the web, and 5% for ssh. You can further state that the 20% FTP bandwidth is split into 15% for the local server, 5% for the corporate off-site server, and 0% for everything else.

    Whilst this does not constrain what the user DOES with the net access that's been granted, they have effectively been given access to named resources at named sites. No other access exists for that user, because they have zero bandwidth to do anything with.

    As for the assurance, I can certainly see that side of things. I imagine a group such as SuSE or Debian has the capability and resources to go through their distributions and formally prove them secure. (Red Hat, whilst good at many things, is not exactly #1 at avoiding security holes.)

    Alternatively, I can see some group springing up to produce a watertight Linux distribution, specifically designed and tailored to meet the B level classifications. (Might be fun to do that, actually....)

  • ip, from the iptools2 package (ftp.inr.ac.ru/ip-routing), allows you to shape traffic on a per-user, per-traffic-type, per-interface and/or per-IP-address basis.

    It's a really neat tool! Horrible interface, though. If someone wants to write a decent interface (even if it just takes a comprehensible /etc configuration file), I am sure that a lot of admins would worship at their feet.

  • They're calling it "Orange Linux", after the "Orange Book" that describes the requirements.
    Christopher A. Bohn
  • Yes, thanks for doing that Tom.

    A couple of clarifications... The code known as OB1 is extracted straight from TRIX 6.5.x (our Trusted B1 product).

    It's called "Sample Source" since it's not a complete B1 product, and if it were, it could only be compiled into the Irix codebase. We thought it would help those interested in trusted systems if we made some of our code available to provide a "sample implementation" of something that is known to work.

    richard.

    PS. We were all laughing (well, I was) that the first posting described my boss as a gentleman...
  • beyond all of that, HOW DO YOU KNOW THAT THE SOFTWARE FUNCTIONS AS ADVERTISED?

    You don't -- but C2 / B1,2,3 or whatever doesn't really help either, because they can't really check the code line for line to make sure there are no bugs. And one stupid bug can screw everything up.

    When it comes to classified systems, you need to consider more than the "security classification", which really only serves as a cool bullet in the feature list. You need to consider the vendor's track record, especially with regard to security and bug fixing.

  • SGI [sgi.com] is supposed to be working on C2 certification, which they're hoping to get into the mainstream so that pretty much any distro that wants it can be C2-certified, and also B1 or B2, I forget which (whichever one is legal to export). They are doing this specifically because they have reason to believe they can sell Linux to the DoD.

    Unfortunately, I can't find anything confirming this on their site -- this is information I picked up at the recent SGI "Linux University" travelling show. If I recall correctly, and they manage to keep to their projected timetable, we should see the C2 Linux become reality sometime later this year, and Bn in 2001.

  • ``Tripwire: [Description] Tripwire is a system integrity checker, a utility that compares properties of designated files and directories against information stored in a previously generated database. Any changes to these files are flagged and logged, including those that were added or deleted.''

    While tripwire is a nice tool (although some people say it's showing its age and there are newer tools now available), what it does is not going to satisfy anything close to C2-level security. Tripwire tells you that someone was messing with your system, and by then it's too late (i.e., tripwire tells you that you need to shut the barn door because the thieves have already stolen your horses).

    What our contractor friend is looking for is (I believe) something on the order of access control lists that prevent the touching of the files and directories in the first place. I would think that a kernel module could be written that sits between user-space I/O requests and the filesystem and [dis]allows I/O operations based on the security capabilities granted to the user process. Load the module and your I/O requires properly set-up ACLs; unload it and it's back to the garden-variety UNIX file permissions (unless you're root, I guess, in case your ACLs are set up incorrectly).

    Personally, I would love to see this available on Linux. ACLs and rights identifiers were one of the coolest things (IMHO) under VMS; you could really tailor security far, far better than you could using the simple Read/Write/Execute/Delete access you could set up for objects. Having something like this on Linux (or most Unices, actually) would make it easier to dole out accounts with varying levels of privilege, and it would go a long way toward assuaging those who complain about Unix having a superuser.
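
    The tripwire-style integrity checking described above can be sketched in a few lines - hash a set of files into a baseline database, then flag anything that changed or vanished. Function and path names here are purely illustrative, and a real deployment would have to keep the baseline itself out of an attacker's reach:

    ```python
    import hashlib

    def snapshot(paths):
        """Build a baseline: path -> SHA-256 digest of contents."""
        db = {}
        for p in paths:
            with open(p, "rb") as f:
                db[p] = hashlib.sha256(f.read()).hexdigest()
        return db

    def verify(db):
        """Return the paths whose contents no longer match the baseline."""
        changed = []
        for p, digest in db.items():
            try:
                with open(p, "rb") as f:
                    current = hashlib.sha256(f.read()).hexdigest()
            except FileNotFoundError:
                current = None  # a deleted file counts as changed
            if current != digest:
                changed.append(p)
        return changed
    ```

    As the comment above notes, this only detects tampering after the fact; it does nothing to prevent the write in the first place.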



    --

  • You are right, I was being a bit harsh. I'm sure there are a lot of auditors that are technically inclined; I just haven't run into that many of them. Most of the ones I've run into have an accounting background and view computers as just a tool for that purpose. It is worth noting that if your Anderson Consulting in the UK is part of the same Anderson Consulting that is one of the 'big five' in the US, they are also in the IT consulting business. Undoubtedly, people on that side of the business should have a lot more of a technical background than people on the financial/accounting side.

  • Is it because the Big Five don't know how to spell Linux or is it because they aren't really doing their job?

    I work in a big financial IT shop where we have big five auditors in occasionally. From what I've seen the answer to your 'or' question is 'yes'. Most of the auditors that come in are utterly clueless about Windows, let alone anything non-Windows. They can basically muddle about a little in MS-Office and that is about it. I've actually had to help some of them import CSV files into Excel, including helping them figure out how to load files off of a floppy disk and unzip them.

    I've also seen them sit down at a SparcStation and be rather puzzled when confronted by the login box... Then turn the power off, watch it reboot and come back up to the same point. At which time I told them 'that isn't a PC'. Blank expression. 'You can't run Outlook or MS-Office on that, it isn't a PC'. Blank expression. 'Use that PC over there'. Auditor shuffles off looking confused...

  • I would think that the vendor integrity statement problem could be gotten around by purchasing shrinkwrap box versions and then having the vendor of those distributions sign the paperwork the government wants.

    Someone should talk to the distribution vendors to see what efforts they could make to add and document C2 auditing to their distributions. I'd think if they thought they could get a significant number of sales to government agencies they would view that as a good investment in resources.

  • SGI has a site with some sample code and documentation about implementing a B1 secure system over here [sgi.com], which may be of interest to this conversation.
  • Yes, complete with lots of backdoors that are emergent properties of convoluted makefiles, or something (like that compiler that compiled backdoors into itself...)
  • I recently linked the web page for SGI's B1 sample implementation on OSS. Here is the URL http://oss.sgi.com/projects/ob1/ [sgi.com]. You can find some source for a sample implementation.
  • I'm a little bemused by the DoD's extreme concern with computer security. Granted, they have many secrets to hide and their war potential to protect. However, I would note that most security breaches are caused by human factors, whether deliberate or accidental. One can point to the example of an ex-CIA director who left incriminating files on a laptop. Also, draw the analogy that engineers have concluded that more car safety technology is reaching the point of diminishing returns, as only 10% of accidents are attributed to mechanical failure and the rest mostly to the idiot behind the wheel (alcohol, road rage, sleepiness, whatever). In the same way, I fear that network paranoia (while important, and a hard target) is blown out of proportion to the more obvious risks of human fallibility. I would like to be comforted that the military has ongoing *HUMAN* processes to keep improving the quality of their people, rather than hoping the next Y2K bug doesn't accidentally trigger the nukes. Invest in brains, not silicon bullets.

    In a broader view, what is it about technology that foster the simplistic magic pill approach? In any complex situation, after you eliminate the obvious weaknesses, there will be many vulnerable points of attack and more exotic technology in lieu of awareness training could create a false sense of security. Blind faith, whether religion, technology or dogma seems to be a point of hubris.

    LL
  • Me: What is the point of having a cheap bicycle lock that you are absolutely, positively sure will not stand up to a determined attempt to break it?

    You: Well, the point is, it stops a casual thief.

    No. That answers the question, "What is the point of having a cheap bicycle lock?"

    You said there is a place for a system with low security but high assurance. I am trying to figure out what that would be.

    In other words, why would you buy a $20 bicycle lock, and then pay $5000 to verify that, yes, it really is a cheap bicycle lock?

    Similarly, a situation where you don't do much to prevent people from coming and going, but are sure that their activities are on a security tape (for example, an ordinary bank) calls for low security, high assurance.

    I'm starting to see where the confusion is coming from. You're using terms in a different way.

    Security is a multi-part thing. Exact definitions vary, but it is generally defined to be the sum of integrity, availability, authenticity, and accountability. Accountability includes audit. In other words, keeping careful track of who comes and goes is a part of security.

    Assurance is how sure you can be that the security features of your system actually work as advertised. Not, "Do we know who took the money?" but "How sure are we that this camera we're thinking of buying will do the job?"

    I suspect that levels like A1 describe even better assurance than C2 -- but that you can't get credit for those levels of assurance without also having A1 security.

    Moving from Class B to Class A is entirely about adding assurance, and not about adding new security features. IIRC, the big thing about Class A1 is that it requires a formal mathematical proof that your security features work.
  • FWIW, it seems worthwhile to point out that "Orange Book" classifications are not all that well thought-out. The problem is that they tie increased security to increased assurance.

    That's right, it does. Think about it. Why do you increase your security? Because you have something more valuable to protect. If it is that much more valuable, wouldn't you also want to know your increased security also works?

    It is extremely difficult to establish a high-security system while simultaneously having a high assurance that it is correctly implemented.

    Yup. Life is hard.

    ... it is often useful to have a low-security, high-assurance system ...

    It is? What is the point of having a cheap bicycle lock that you are absolutely, positively sure will not stand up to a determined attempt to break it?

    I honestly can't think of a situation where you would want to pay a lot of money to be sure your security system isn't all that good. If you can think of an example or two, please, enlighten me.

    Indeed, if audit is the only really interesting property in this case, it sounds as though low-security (mostly logging) high-assurance (logging cannot be defeated even by 'creative' users) is exactly the solution that is needed.

    Congratulations, you just described Class C2 protection. :-)
  • I haven't hear much about Linux production environments getting hammered by any of the Big Five during year-end IT audits

    Wrong kind of auditing.
  • It's been a while since I was into all this security stuff, so I'm a bit foggy on what C2 requires in the way of accounting. However, the contractor I work for has had us securing our SGIs according to DoD standards. Now, I don't know whether these conform to C2 or not, but they're very very tight boxes now.

    From what I've seen in securing these SGIs, I don't believe I'd have any problems tightening down my Linux boxen at home in the same way. As far as accounting goes, the real issue is that a sysadmin can tell who the physical person is behind that user name. That means getting rid of shared accounts, strengthening passwords, etc.

    Granted, you won't be able to get a keystroke-by-keystroke log of a user's session, but that would be too much information anyway. There are, of course, commands that you'd want to monitor, and most of these provide logging themselves (such as 'su'), while it would be easy to write wrappers for other commands that don't provide logging (things like 'chmod' and 'chown').

    Again, I don't recall what C2 requires, but I wouldn't be surprised if you could get there with the tools already available.

    Hope this helps. And if I'm completely off my rocker, please let me know. I'd always love to learn more.
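
  • To illustrate the wrapper idea above, here is a hedged sketch of a logging wrapper around a mode change - the log path and record format are invented for illustration, not any real audit standard:

    ```python
    import getpass
    import os
    import time

    def audited_chmod(path, mode, log_path="/var/log/cmd_audit.log"):
        """Append a who/when/what record, then apply the mode change -
        the wrapper approach for commands like 'chmod' that don't log."""
        record = "%s user=%s chmod %o %s\n" % (
            time.strftime("%Y-%m-%dT%H:%M:%S"),
            getpass.getuser(),
            mode,
            path,
        )
        with open(log_path, "a") as log:
            log.write(record)  # log first, so even a failed attempt leaves a trace
        os.chmod(path, mode)
    ```

    A shell wrapper that logs and then execs the real binary would do the same job; the point is only that the sysadmin can later tie the action to a physical person.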
  • It is nice that OpenBSD is great and wonderful. However, AFAIK, OpenBSD doesn't have C2 compliant auditing available for it either. I love people who give an answer before reading the question.

  • And ask them if their NT servers are C2 certified.
    Using the word certified is simply wrong. That implies a certification body exists. Products are evaluated to meet the standard.
  • Argus Systems Group, Inc. [argus-systems.com] has announced its intent to produce a Linux version of its PitBull compartmentalized OS. PitBull is a B1-certified, compartmentalized version of Solaris (currently Solaris 7) which I have used to much success. While all they have announced is an intent to produce a Linux version, the company moves fast enough that we might see something as soon as a year from now. This isn't exactly ideal, I understand, but in DoD time it isn't that far away.
    --
  • Your position is known, contemptuously, as "security by assertion."

    You *assert* that the intl kernel patches (which I know and use) are adequate, complete, and correctly implemented.

    You *assert* that iptools2 are adequate, complete, and correctly implemented.

    On and on and on. You sound just like the Microsoft commercials that are equally eloquent at *asserting* my life will be much more pleasant once I toss out my moldy old Debian 2.2 system for Windows 2000.

    In fact, I've downloaded and printed out the requirements for C2 and B1 certification, and I've tried to figure out what they really mean. Linux doesn't have everything it could have (e.g., I've played with the idea of an "auditfs" that would record - in a secure manner - *all* calls to the kernel with process/user info and parameters). Linux doesn't have that, yet, but that's the only way to really know who made some sensitive calls.

    But beyond the issue of auditing, DACs, MACs, secure login prompt keys, adding security classification levels to the FS (how do you make a directory top secret/categorized? Remember that the specific category is itself classified, so everyone really uses large bit fields in practice), ensuring all external media (paper, tapes, disks, discs, modems, networks, plips, IrDA, and god knows what else somebody has written a kernel module for) properly preserve these classification tags.... beyond all of that, HOW DO YOU KNOW THAT THE SOFTWARE FUNCTIONS AS ADVERTISED?

    I agree with you that the assertions - plus my own review of the code when I feel the need - are adequate for my own uses. It's enough for my employers. But classified systems, by definition (hopefully), will be attacked by professionals with significant bankrolls, not bored teenagers or penny-ante criminals. They will contain information that, if misused, could result in the deaths of thousands or millions of people, not just a few annoying bogus credit card charges. The standards of proof must be *far* more strict, and there's no room for wishful thinking or unchallenged assumptions. That's why a formal review and rating is so important for the DoD (and DoE, among others) market.
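
    To make the classification-label point above concrete, here is a sketch of the standard MAC dominance check in the Bell-LaPadula style; the level names and category sets are illustrative (real systems pack categories into the large bit fields mentioned above):

    ```python
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def dominates(subject, obj):
        """Classic MAC dominance: the subject label dominates the object label
        iff its level is at least the object's and its category set covers
        the object's. Labels are (level_name, category_set) pairs."""
        s_level, s_cats = subject
        o_level, o_cats = obj
        return LEVELS[s_level] >= LEVELS[o_level] and set(o_cats) <= set(s_cats)

    def may_read(subject, obj):
        # Bell-LaPadula "no read up": reading requires subject dominates object.
        return dominates(subject, obj)

    def may_write(subject, obj):
        # Bell-LaPadula "no write down": writing requires object dominates subject.
        return dominates(obj, subject)
    ```

    This is only the access-decision rule; the hard part the poster describes - tagging every file, device, and wire with such labels, and proving the tags survive everywhere - is exactly what the evaluation process exists to check.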
  • Did you even look at the link? Especially where it says "The SAIC evaluation team has determined that Windows NT 4.0 with Service Pack 6a and the C2 Update as configured by the Trusted Facility Manual satisfies all the specified requirements of the criteria at class C2."

    Also, when you evaluate an OS for C2, you do it on a specified set of hardware. For any OS.

  • ... it is often useful to have a low-security, high-assurance system ...

    It is? What is the point of having a cheap bicycle lock that you are absolutely, positively sure will not stand up to a determined attempt to break it?

    Well, the point is, it stops a casual thief. A bicycle lock (cheap or not), as you describe, has low security and high assurance. Anybody willing to get some liquid nitrogen (cheap, BTW) can get past your lock no matter how great it is. But somebody walking down the street who spontaneously thinks, "that's a nice bike," is thwarted. The brute force attack (liquid nitrogen) will defeat the security, but otherwise somebody has to have the key to get the bike.

    Similarly, a situation where you don't do much to prevent people from coming and going, but are sure that their activities are on a security tape (for example, an ordinary bank) calls for low security, high assurance. That is, the bank robber can easily get the teller to hand over the money (that's how tellers are trained, in fact), but we are confident that the police will catch the robber later (and bank robbers are almost universally caught). Anyway, you say that I described class C2 protection. To be honest, I haven't read the Orange Book enough to debate you here. But, I suspect that levels like A1 describe even better assurance than C2 -- but that you can't get credit for those levels of assurance without also having A1 security.

  • FWIW, it seems worthwhile to point out that "Orange Book" classifications are not all that well thought-out. The problem is that they tie increased security to increased assurance.

    It is extremely difficult to establish a high-security system while simultaneously having a high assurance that it is correctly implemented. OTOH, it is often useful to have a low-security, high-assurance system, and the Orange Book doesn't pay much attention to this case.

    Indeed, if audit is the only really interesting property in this case, it sounds as though low-security (mostly logging) high-assurance (logging cannot be defeated even by 'creative' users) is exactly the solution that is needed.
  • AFAIK, there is nothing about Linux that would prevent it from being certified as a DII COE platform. So long as you have a POSIX-compliant system, you should be able to get it certified through the Kernel Platform Certification (KPC - why do I think of fried chicken?) program. That's not to say no new code would be required, but I gather it's mostly a documentation process ("Does the system do X?" sort of thing). The 'kernel' services -- security, system management, networking, printing, etc. -- aren't anything cosmic.

    The catch is that someone must pay the costs for it to go through the certification process, and no single program would want to foot the bill because they are probably underfunded to begin with. That being the case, they will likely choose one of the three platforms already certified (NT, Solaris, or HP-UX).

    So it is left to the vendor to put forth the money and effort to get a platform through the process -- but in the case of Linux, which vendor? That's the catch that I see. Someone will have to carry the flag -- IBM? RedHat? It just depends on who wants to sell to the DoD bad enough.

    Now, once you've been certified, there's still the matter of getting all the infrastructure and common applications running on your platform -- but if they'll run under Solaris and HP-UX it shouldn't be too hairy to port them.
  • In this context, I'm assuming auditing means *security* auditing. When you turn on auditing in Solaris, for example, you can log login attempts (successful and unsuccessful), file creations, modifications, and deletions, and probably any number of other things I'm unaware of.

    Basically, it's a tool to help you detect breakins or suspicious behavior by users.
  • Any bugfixes, additions, modifications, kernel patches, etc. produced by the DoD are probably under this also. OTOH, they can justify classifying just about anything as SECRET. Because of their ability to classify, DoD is a poor test case for Linux in government.

    I think that the GPL is incompatible with section 105 for a number of reasons. Of course, if they just add things to the stock kernel and redistribute the mods separately, there is no problem.


    Title 17 Section 105 just says that works created by the government are public domain and cannot be copyrighted. That government pamphlet that you received with your census form is public domain and cannot be copyrighted. You can print and give away or sell as many copies as you want.

    There is no problem at all with government employees maintaining their own Linux systems. There is no problem with the government making its own customizations of the stock Linux kernel. They may not be able to distribute them outside of the government, but that's not their job. So what if the DoD classifies them SECRET, if they can't be distributed legally anyway?

    Now, how is this incompatible with the GPL?

    Anomalous: inconsistent with or deviating from what is usual, normal, or expected
  • AFAIK there is no ongoing project for C2 level auditing. The best way to get something like that done would be SourceForge or SourceXchange or a place like that.

    If they need auditing, try to ask them WHAT should be audited. It would be an easy task to use PAM (= Pluggable Authentication Modules) and add logging to that.

    And ask them if their NT servers are C2 certified. The funny thing is, despite any claims to the contrary, they aren't (NT 3.5x is certified, but only if it has no disk drive and no network connection...).

    If they allow NT, they should allow the use of a more secure OS (like Linux), too. Otherwise, they should remove all of their NT machines.

    Really certifying for level C2 costs lots of money, and I'm afraid no one will do that for now (for what reason, anyway?).
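
    As a sketch of the kind of logging one might hang off PAM, each authentication event could become an audit record of who, what, when, and outcome. The record format here is invented for illustration (real audit trails such as Solaris BSM use binary token streams):

    ```python
    import time

    def audit_record(log, event, user, success):
        """Append one audit record: the who/what/when/outcome that
        C2-style auditing revolves around."""
        log.append({
            "time": time.time(),
            "event": event,  # e.g. "login", "su"
            "user": user,
            "outcome": "success" if success else "failure",
        })

    def failed_logins(log, user):
        """The sort of question an auditor asks of the trail afterwards."""
        return sum(1 for r in log
                   if r["event"] == "login"
                   and r["user"] == user
                   and r["outcome"] == "failure")
    ```

    The substance of C2 auditing is not the format but the guarantees around it: records for every security-relevant event, protected against tampering even by privileged users.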

  • by Anonymous Coward on Tuesday March 28, 2000 @07:28AM (#1165575)
    There are two projects:
    RSBAC at www.rsbac.de [rsbac.de]
    ob1 at oss.sgi.com/projects/ob1 [sgi.com]

    The RSBAC works but is hard to configure. The ob1 has good docs but does not even run.

    I have worked on the ALPHA and PPC ports.

    Shaun Savage

  • Between the patches SGI already has out, the Trustees code and/or the POSIX ACL patches, the auditing and logging code existing in the kernel, the auditing capabilities of the new IP stack and tools, projects such as OpenWall and Bastille Linux, the International Kernel Patches, FreeS/WAN and/or ENSkip and/or NIST IPSec, Kerberos 5, OpenSSH, MCrypt and MHash, OpenCA, the Linux kernel capabilities attributes, the various OpenBSD servers being ported over, the various packages for scanning for incorrect attribute settings, the various portscanners and vulnerability scanners, any additional security being added by either IBM's or SGI's journalling file systems, and the good omen of Tux himself, I'm amazed Linux isn't already mandatory in all Above Top Secret establishments in the US.

    Frankly, with the existing level of control you have in Linux, you should be able to easily walk away with a B1 for a careful installation.

    I don't know what the requirements are for a B1, but I'll guess that the four components I've listed form a part of it.

    • Mandatory security access on the FS: The International Kernel Patch, Trustees and the kernel capabilities should give you ample security control over the file system. I suspect strong encryption, fine-grained ACLs, and limits on what the kernel will permit should meet this requirement.
    • Mandatory security access on the network: I don't know if this is a requirement, but iptools2 will allow you to shape traffic on a per-user basis. If the user isn't authorised, they've zero queue. If they are authorised for web access only, simply allocate queue space for that traffic type only for that user. Problem solved.
    • Mandatory controls on remote network access: Simply use IPSec and strong host validation. This also covers any mandatory encryption on network traffic. Simply have ALL traffic routed through IPSec devices, thus making it impossible for non-authenticated, non-encrypted traffic to be transmitted or received.
    • Mandatory controls on remote users: OpenSSH and Kerberos give you some pretty strong user-level authentication. If you throw in tcp-wrappers or an enhanced inetd, you can place some fairly extensive controls on which users can use what services from what remote machines.
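    The tcp-wrappers-style gating in the last bullet is simple enough to sketch. This is a toy illustration, not the real hosts.allow syntax: the rule format, service names, and addresses are all made-up assumptions, with an ordered allow list and deny-by-default semantics.

```python
# Toy sketch of per-service, per-host access control in the spirit of
# tcp-wrappers: an ordered allow list, deny by default.
# Rule format, services, and addresses are illustrative assumptions.

ALLOW = [
    ("sshd", "10.1.2."),    # sshd allowed from the 10.1.2.0/24 prefix
    ("ftpd", "10.1.2.17"),  # ftpd only from a single admin host
]

def host_allowed(service, addr):
    """Return True only if some rule matches; everything else is denied."""
    return any(service == s and addr.startswith(prefix)
               for s, prefix in ALLOW)

print(host_allowed("sshd", "10.1.2.5"))   # True
print(host_allowed("ftpd", "10.1.2.5"))   # False: not the admin host
```

    The real tcpd adds deny rules, wildcards, and hostname lookups on top of this, but the deny-by-default shape is the same.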

    I really can't see the military, for all its paranoia, needing more extensive security than that. However, if it does, there's always Tripwire and assorted intrusion detection packages. Not to mention firewalls, honey pots, buffer overflow detectors (or preventers), security auditing packages for ensuring that trivial holes are closed off, etc. Even the quota system offers some security capability.

  • by Skip666Kent ( 4128 ) on Tuesday March 28, 2000 @07:15AM (#1165577)
    Check here [securify.com] and scroll down or search for "auditd".
  • by jnazario ( 7609 ) on Tuesday March 28, 2000 @07:32AM (#1165578) Homepage
    hi,

    looks like i may be one of the first to offer a useful post.

    SGI is working on getting C2 grade Linux out there. they hope to have it working sometime this year. B2 will follow 18 months or so from that. Orange Linux is the project's name.

    the NSA and Secure Computing are working on a C2 grade Linux as well, with source of the stuff to be made publicly available due to GPL licensing.

    some links:
    http://biz.yahoo.com/prnews/000113/ca_secure__1.html
    http://slashdot.org/articles/00/01/13/1029206.shtml
    http://lwn.net/1999/1118/a/sgilinuxuniv.html

    /me
  • by thelars ( 30688 ) on Tuesday March 28, 2000 @07:43AM (#1165579) Homepage
    Whoops, that url was goofy. Try
    this one [ncsc.mil] instead. Sorry.

    --
  • by istartedi ( 132515 ) on Tuesday March 28, 2000 @08:08AM (#1165580) Journal

    IANAL, and am only familiar with this law within the context of the VRML test suite, the license of which I will now quote:

    This software was developed at the National Institute of Standards and Technology by employees of the Federal Government in the course of their official duties. Pursuant to title 17 Section 105 of the United States Code this software is not subject to copyright protection and is in the public domain. NIST assumes no responsibility whatsoever for its use by other parties, and makes no guarantees, expressed or implied, about its quality, reliability, or any other characteristic. We would appreciate acknowledgement if the software is used.

    Any bugfixes, additions, modifications, kernel patches, etc. produced by the DoD are probably under this also. OTOH, they can justify classifying just about anything as SECRET. Because of their ability to classify, DoD is a poor test case for Linux in government.

    I think that the GPL is incompatible with section 105 for a number of reasons. Of course if they just add things to the stock kernel and redistribute the mods separately, there is no problem.

    The real problem comes from government employees doing maintenance work on Linux.

    Then what you have is the software business paying taxes so that the government can write free software and put them out of business.

    They should look at BSD. It is very close to public domain. If anybody tries to touch this section of the Federal law to make an exception for Linux, I will be marching down to see my congressman so quickly to let him know that it is wrong, Wrong, WRONG!!!

  • Yep, it's being replaced by the Common Criteria, a joint product of Europe, Canada and the US. It has just recently been standardized as an ISO standard. These sites should be public:
    Common Criteria Project at NIST [nist.gov]
    Trusted Product Evaluation Program [ncsc.mil]
  • by Anonymous Coward on Tuesday March 28, 2000 @07:39AM (#1165582)
    You can find the full specs for C2, B1, and other security levels (the "orange book") online at http://www.radium.ncsc.mil/tpep/library/rainbow/5200.28-STD.html [ncsc.mil].

    For other interesting books in the rainbow series, see http://www.radium.ncsc.mil/tpep/library/rainbow/ [ncsc.mil].

  • by DragonHawk ( 21256 ) on Tuesday March 28, 2000 @06:08PM (#1165583) Homepage Journal
    I'm not so sure you understand just what Mandatory Access Controls really are.

    Unix traditionally has Discretionary Access Controls. I, as jruser, can grant or deny permission to other users to view my files as I see fit. If I want to "chmod o+rwx ~/.rhosts", I can do that.

    Under Mandatory Access Control, however, if I don't have permission to give away a file, I can't do it, even if I want to. In other words, I may not have the right to do a "chmod o+rwx".

    AFAIK, none of the features you describe enforce MAC. True, if the user doesn't have access to the network, they won't get out, but once they are granted the network connection, you have no say in what they use it for.

    There is quite a bit of stuff regarding "security labels" in B1. Any storage object in the system (disk file, block of memory, etc.) gets assigned a label which describes its sensitivity and category in the organizational hierarchy. Mapping that into traditional Unix security mechanisms would be messy at best.
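    To make the label idea concrete, here is a toy Bell-LaPadula-style check ("no read up, no write down"). The level names and this little API are illustrative assumptions, not any real kernel's label mechanism:

```python
# Toy Bell-LaPadula-style mandatory access check: "no read up, no
# write down". Level names and this API are illustrative assumptions,
# not any real kernel's label mechanism.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    # A subject may only read objects at or below its own level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # A subject may only write objects at or above its own level,
    # so labeled information cannot flow downward.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "confidential"))   # True: reading down is fine
print(can_write("secret", "confidential"))  # False: writing down leaks
```

    The point is that these checks are enforced by the system, not by the file's owner; no chmod can override them.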

    Possibly more importantly, once you start getting into the B levels, you find as much emphasis being placed on assurance as features. In other words, it isn't enough to say that Linux provides such-and-such, you actually have to officially prove that it does, document that proof, and find someone to sign off on it.

    The Orange Book and Unix don't exactly line up one-to-one. :-)
  • by thelars ( 30688 ) on Tuesday March 28, 2000 @07:41AM (#1165584) Homepage
    There have been a few questions posted so far, and not a whole lot of answers, so here's my humble attempt.

    (1) What is auditing?

    "Auditing", in this context, is the process of keeping detailed records of system activity. This can be as simple as recording when people log in and logout, or as involved as keeping a record of every single command line run by every user.

    (2) What is C2 level auditing?

    The DoD defines a number of classifications that have to do with the security of a computer system. Each level has specific requirements that must be met (and, in fact, even if a system meets those requirements it still needs to be officially certified).

    The C2 security level is (he said unauthoritatively) the minimum classification defined by the DoD (followed by B1, B2, B3, and A1). This defines a number of specific events (and information for each event) that must be audited.

    You can find a list of auditing requirements for all the above security levels by reading
    A Guide to Understanding Audit in Trusted Systems [http], published by the National Computer Security Center.
    --
  • by Col. Panic ( 90528 ) on Tuesday March 28, 2000 @07:23AM (#1165585) Homepage Journal
  • Tripwire [tripwiresecurity.com]: [Description] Tripwire is a system integrity checker, a utility that compares properties of designated files and directories against information stored in a previously generated database. Any changes to these files are flagged and logged, including files that have been added or deleted. With Tripwire, system administrators can conclude with a high degree of certainty that a given set of files remain free of unauthorized modifications if Tripwire reports no changes.
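    The core of the Tripwire idea is simple enough to sketch: hash a set of files into a baseline database, then diff a later snapshot against it. This toy version (not Tripwire's actual format or algorithms) shows the approach:

```python
# Toy sketch of Tripwire-style integrity checking: snapshot file hashes
# into a baseline, then report anything added, removed, or modified.
import hashlib

def snapshot(paths):
    """Map each path to the SHA-256 digest of its contents."""
    db = {}
    for p in paths:
        with open(p, "rb") as f:
            db[p] = hashlib.sha256(f.read()).hexdigest()
    return db

def compare(baseline, current):
    """Return (added, removed, modified) path sets between two snapshots."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    modified = {p for p in baseline.keys() & current.keys()
                if baseline[p] != current[p]}
    return added, removed, modified
```

    The real tool also records ownership, permissions, and timestamps, and keeps the baseline database itself protected, since an attacker who can rewrite the baseline defeats the whole scheme.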

    lsof [purdue.edu]: [Description] Lsof is a Unix-specific diagnostic tool. Its name stands for LiSt Open Files, and it does just that. It lists information about any files that are open by processes currently running on the system.

    and

    CASL [nai.com] [Description] Custom Auditing Scripting Language (CASL) implements a packet shell environment for the Custom Auditing Scripting Language that is the basis for the Cybercop(tm) line of products by Network Associates. The CASL environment provides an extremely high performance environment for sending and receiving any normal and/or morbid packet stream to firewalls, networking stacks and network intrusion detection systems as well as being sufficiently rich of a language to write honeypots, virtual firewalls, surfer hotel, phantom networks and jails.

  • by bob|hm ( 139518 ) on Tuesday March 28, 2000 @08:00AM (#1165586) Homepage
    Your answer is OpenBSD [openbsd.org]. I'm not sure of the certification level, but here's a quote from a recent interview with OpenBSD's project head, Theo de Raadt:

    "OpenBSD is so secure that it even got the attention of the U.S. Department of Justice, which stores and transmits top-secret data using 260 copies of the OS."

    The full article is here [itworldcanada.com].

    --Bob
  • by fialar ( 1545 ) on Tuesday March 28, 2000 @07:21AM (#1165587)
    I attended a Linux University workshop from SGI last Friday and at the Linux Security breakout session, the gentleman from SGI who does a lot of work with the NSA and the government said that SGI is working on making Linux C2 and B1 compliant. These should be finalized sometime next year. Auditing is one of the components that still needs to be worked on just to make Linux at least C2 compliant.

    For B1 compliance, there have to be further security checks (like mandatory security access on the FS).

    A lot of this good stuff will be coming from IRIX, which has been pretty secure in and of itself.
    We should be seeing a lot of security added to Linux this year.

    Fialar
  • There are two projects you may be interested in. The first is the Linux BSM [ucdavis.edu] project at U.C. Davis (home of an excellent security research lab [ucdavis.edu] by the way). The project's goal is to provide TCSEC-compliant auditing for Linux. They appear to have made reasonable progress. The last update to the web page was Feb. 15.

    The second project you may want to consider is that SGI is building an "orange book" Linux, with a goal of C2 by October, and B1 by next spring.

    Note that this question was posted to Slashdot last year [slashdot.org] so you probably want to go check out the responses there.

    Finally, while I'm here, I'll plug my own security-hardened Linux distro: Immunix [immunix.org]. Immunix is not TCSEC compliant or anything like that. Rather, it is designed to be extremely difficult to break into, while preserving a high degree of Linux compatibility. Currently, it is just Red Hat hardened with StackGuard [immunix.org], but we will be releasing additional security technologies shortly.

    Crispin
    -------
    CTO, WireX Communications, Inc. [wirex.com]
    Immunix [immunix.org]: Free hardened Linux
