Computer Security Criteria 300
Rolf Marvin Bøe Lindgren writes: "For most human endeavors that involve some sort of risk, there are powerful, recognized public interest groups or even government-appointed organizations that investigate and analyze dangers, prescribe guidelines, determine criteria for acceptable risk, etc. This does not seem to be the case for software! I work for a ship classification company. The purpose of such companies is, very simply put, to determine how safe seagoing vessels are, for instance so that insurance companies can decide insurance premiums. There are, needless to say, numerous conventions and special interest groups to determine safety at sea. That is, as far as I know (and I would very much like to be proven wrong), except for the computer systems that the ships use. There are restrictions, laws and regulations covering just about any object that goes into a ship except the computer system. Everybody seems to know, for instance, that UNIX is safer than Windows, but there are no safety, reliability or security criteria established by any recognized authority that can be used to defend one computer system over another."
"Now, I could ask Slashdot how to go about to form a recognized body, but I have access to competence in that particular matter. What I would rather like to know, is this:
- What might a set of safety criteria look like? (Right now I am most interested in criteria for computer systems that would address such issues as vulnerability to worms, viruses and crackers.)
- How should one go about finding competent and interested people who would like to be part of a body like the one I describe, or consultants to one?
Human Life (Score:3, Insightful)
Standards in Coding (Score:2, Interesting)
If a bug occurred in a truly unusual situation, and it took 20 years for that situation to come about and it caused just one death, people would still ask, "Why wasn't anything done to prevent this?" Something was done (hopefully), and standards and regulations help, but in the end that's pretty much all you can do.
I don't think he's asking as much about viruses or hackers in this case, but those are also valid questions. I tend to believe what is stated elsewhere in this thread -- I've seen Unix systems that were wide open and Windows boxes sealed tighter than a drum; it comes down to the admin. I'm not really sure if there's a test sysadmins must pass to become licensed sysadmins, but if there isn't (and I'm not talking about certification) there should be one, at least as far as sensitive data like this is concerned.
Also, one real-life case where computer software caused multiple deaths was the London Ambulance Service, which used a very poorly constructed computer system that was directly linked to 20 or 30 deaths (due to late ambulances) in the few days it was active.
Re:Human Life (Score:4, Insightful)
Think:
1. 911 call centers
2. Industrial robotics
3. Air Traffic Control
4. Engines with embedded software controls
5. The telephone network
6. The power grid
7. Medical equipment
I'd like to point out that there are documented deaths from software failures in most of these categories.
Re:Human Life (Score:2)
Naive or troll? (Score:5, Insightful)
"Reboot the air traffic control system."
"How long has the reactor control system been down?"
"Try to get the GPS working again before we enter the harbor in this fog."
Any of these sound like non-life threatening situations? And you did notice the questioner is specifically concerned with the third type of situation I mentioned, didn't you?
Re:Naive or troll? (Score:2, Informative)
Re:Naive or troll? (Score:2)
Re:Naive or troll? (Score:4, Insightful)
Re:Naive or troll? (Score:2)
It would not be appropriate for software regulations to sweep across all industries when the uses vary so greatly. Instead, each industry should have individualized (by function) regulations. These could even be (and in many cases actually are) regulated by several authorities. For example, the Department of Safety may regulate software controlling a chemical plant to prevent human injury, while the Environmental Protection Agency regulates the same software to prevent a computer crash from releasing nasty gases.
Re:Naive or troll? (Score:3, Interesting)
http://www.cnn.com/2001/US/04/05/arms.osprey.02
Re:Human Life (Score:2)
The reason we don't require software to meet the same standards as other products, is that as a general rule, we think we can't - and THAT IS A CROCK.
This touches on another point: bad (buggy) software probably costs more than good (bug-free) software.
The difference is that bad software is cheaper in the initial acquisition, and that's where the PHBs focus. After the product is in use, the real costs of crappy software rise quickly.
I'd rather pay up-front for something that works, and then get to enjoy the lower costs of continuing to use that good software.
Until we (the ones who know - the tech community) decide that crap software has to GO - even if it doesn't threaten your very life or property.
I can't say this enough - good software is cheap. Bad software is expensive. Good, (nearly) bug-free software isn't a totally impossible task.
It may be difficult, but the results would be impressive.
The real barrier to getting bug-free software is that software manufacturers are shielded from civil suits that would make them liable for the crap they produce.
Cheers!
Re:Human Life (Score:2)
I fully agree, with most of the rest of the post as well. The problem is just that some people are willing to sell bad software and others are willing to live with it. Good software costs a lot to create yet is still cheap.
Let me compare Linux to Windows as an example:
With Linux, a lot of highly qualified people are in the process of building an infrastructure, a basic block that can survive for a long time. It is still not finished. But I guess by now Linux has cost several times more to create than all versions of Windows together. But it is a community effort of people who build to last! And it is a gradual improvement of a design that is known to work! A very good engineering practice for dependable systems. And you get a fallback if the effort fails, namely the system you improved upon!
With Windows and Office, a company is trying to make money with the lowest quality the market will accept. (I am still shocked by the unusability of Word and its document format whenever I have to use it.) These systems are not built to last. The business model depends on people having to throw away the old one and buy a new one every few years. This alone demonstrates that the old one was never intended to serve as long-lived, reliable infrastructure.
Now the problem is not that Microsoft makes bad software. The problem is that MS is in the wrong market! Their approach is well suited for games and other software that is not critical. (Mind that MS Office _is_ critical for many companies.) But MS's approach is not suitable for anything even remotely approaching "critical" or "infrastructure".
I don't want all these critical systems to be computerized fast. I want them to be computerized so that they work for a long time without major changes; 30 years would be a reasonable number. Now don't use Linux itself for that. But maybe use the Unix API as embodied by Linux! If you leave out the fancy graphical stuff that is unneeded anyway and pay attention to clean design and implementation, you can move to a different Unix-like OS without changing data, user interface or procedures!
Re:Human Life (Score:2, Informative)
hm! (Score:3, Funny)
Criteria (Score:5, Informative)
SANS has been publishing a series of "consensus" documents, asking for feedback from people on topics such as securing Windows and Unix versions. They've also put together a working group (pay to join).
If you have looked at these sources, I would be interested to hear how they do or do not fit in to what the author of the original question is looking for.
Re:Criteria (Score:2)
http://www.sans.org/SCORE/
I work for.. (Score:3, Funny)
Big ship..
Little ship..
Big ship..
Medium size ship..
common criteria? protection profiles? (Score:4, Informative)
http://www.commoncriteria.org/
http://csrc.nist.gov/cc/pp/pplist.htm
Re:common criteria? protection profiles? (Score:2)
CCITA
ISO/IEC 15408
NSA Rainbow
Which might be of note.
Re:common criteria? protection profiles? (Score:2, Informative)
Most secure (Score:5, Insightful)
Re:Most secure web server (Score:5, Interesting)
*Up go hands of Linux advocates*
Answer: Mac, because it is the least available operating system and as such fewer attacks have been created for it, even if there are hypothetically more bugs. You would therefore be less likely to suffer a problem, all else being equal.
Back to the article: would a measurement take into account this type of situation? Does Mac get a high rating for its low rate of incidents, or a low rating because it (probably) has more bugs than Linux? Open question.
In essence... (Score:2)
I would like to see that point of view competently defended in the public court of security experts.
Re:In essence... (Score:2)
I would estimate that 95% of successful hacking attempts are either internal compromises or moderately-skilled users using pre-programmed exploits.
Security through obscurity, combined with good user policies and applications is quite effective. You cannot hack what you don't know about.
Re:In essence... (Score:2)
I think that it's more important to look at factors like how often the software you are using is updated in response to security flaws and how easy it is for you to replace your software given an update.
Basically, if the software is never updated, or if the server cannot go down under any circumstance, then maybe the more obscure platforms/software may be an answer.
Re:In essence... (Score:2)
Right. And respect KISS. If you want to securely serve static pages, don't use a full web server with CGI and other stuff! Use the simplest one possible! (See e.g. D. J. Bernstein's publicfile [cr.yp.to] for such a solution.)
A lot of today's insecurity arises from feature creep and reinventing the wheel.
You *can* hack what you don't know about! (Score:2)
Yes, you can. Your Windows box can randomly throw 'sploits at a box you don't know exists until it finds one the admin didn't patch or didn't know about. Often, you don't know your Windows box is doing this, because you don't know that it's been thoroughly zombied.
One-of-a-kinds generally don't help as much as you might think, because what you gain in obscurity you lose in maturity (i.e., some script-kiddie stuff will still work because the author made the same mistakes that were found and removed from Apache years ago).
Re:Most secure web server (Score:2)
Ah yes, good old security-through-obscurity. Shouldn't you say, "there are fewer publicized attacks created for it"?
Re:Most secure web server (Score:2)
Let us suppose that someone discovers a bug in the Mac OS that is only relevant to online control. Almost no Macs are used for online control, so what is the probability your bug will ever get fixed?
The original story in this case is posted with the intention of obtaining a particular answer. The poster is not really interested in what system would be secure, he wants to have his original prejudice reinforced.
Design of ship control systems is a real-time control problem. As such it is not an application for which 'Linux' is a solution; you have to be much more precise and specify exactly which real-time-enhanced Linux you are considering. It would also kinda help to actually specify the problems to be addressed.
As for 'security', one would hope that you would not be hooking your control systems up to the Internet or running any sort of user application other than controls for the ship. The references to worms, viruses, etc. suggest to me that the poster does not understand the problem or is trolling for anti-Microsoft stories to tell his manager.
Your Lecturer is WRONG (Score:4, Insightful)
*Up go hands of Linux advocates*
Answer: Mac, because it is the least available operating system and as such fewer attacks have been created for it, even if there are hypothetically more bugs. You would therefore be less likely to suffer a problem, all else being equal.
This is short-sighted, because it does not take into account what you are securing AGAINST. If you are securing against random, non-targeted attacks from script kiddies, you might be right, because said script kiddies aren't going to spend the time to figure the system out... but if you are trying to secure against a real, concerted attack by agents of a competitor trying to steal your ideas or ruin your business, then you have made a very grave mistake.
When you say "all things being equal", you are saying that 1 defaced web page is exactly equal to 1 stolen top-secret formula, which is preposterous. A hypothetical question cannot consider all types of attacks to be equal and still produce a valid and meaningful result.
If you use that logic, then using a completely open and unsecured network would be ok if you sealed the computer in a locked metal box, since it would deter physical attacks by baseball bats (ALL attacks are of equal value, right?). Or you could say that adding the line "WWJD" to the telnet login prompt would be a valid defense since it would lower the instance of attacks by Christians by 80%.
Go set him straight.
Re:Most secure web server (Score:2)
Maybe use both with a hard-coded failover or a combined system where both OSes have to be successfully hacked in order to compromise the system. Depends on what kind of security you need.
Become the majority of a minority (Score:2)
There are at least two huge flaws in this; firstly, a generic attack (or a manual followup to a generic probe) is more likely to work, and secondly the hack-attack numbers reflect a smaller population, not necessarily a smaller proportion of a population. It's a great comfort to know that you're unique as you sit there looking at your Mac server full of zeroes.
If I wanted to take advantage of the features advocated by the lecturer, I'd use something like Roxen on Linux on a MIPS box, chrooted and as far as possible readonly (chown/chmod then chattr +i then remove the chattr binary, and if possible also mount -o ro).
Re:Most secure (Score:2, Insightful)
Trivial but emblematic example: assuming that the string you are being sent will stay below a certain size means you don't need boundary checking. Congratulations! You have just made the main loop faster by orders of magnitude and much more human-readable!
Oops
But, but, but the code was simple! It's not my fault!
Re:Most secure [TANGENT] (Score:2)
Do you not realize that the vast majority of network server software is not even developed by the 'bundler' (e.g., OpenBSD, RedHat, etc.), so the 'remote exploit' issue is quite irrelevant? As to the question of how many are enabled by default, there are practically no services enabled by default on RedHat (I'm not sure how many; I use custom installs). Even inetd isn't running on many RedHat systems.
Re:Most secure [TANGENT] (Score:2)
I am quite willing to spend the extra time to secure my Linux system in order to get better driver support. Not everybody is able to do this, though. These people should carefully consider whether to use OpenBSD, pay a Linux expert to secure their installation, or use another Unix(-like) system. If you don't need the better drivers, OpenBSD should be a very good choice.
Re:Most secure (Score:2)
Very true. I would qualify this as "... in a specific system." The reason this is far better is that people will add security measures they understand (e.g. being careful, having emergency equipment and plans, having insurance,...) to the overall approach. And they will carefully observe the system because they are aware that bad things can happen.
Risks (Score:4, Informative)
--xPhase
You should be sorry! (Score:4, Funny)
Some simple observations (Score:2)
While there may not be a body of standards regarding security, there are some de facto standards regarding redundancy of data; the breakdown of different methods of communication (connection-oriented versus connectionless protocols) is quite well defined in standards, as is the general structure of professional applications. Taking these as a starting point, one could build a list of vulnerabilities for each of these standards. For example, in a connectionless environment, one would be worried about DDoS attacks and methods for identifying the assailant. In a connection-based environment, physical issues such as allowing someone to get access to a LAN line with a laptop inside the company building would require at least some preventative measures (ID cards at the door, social policies about bringing in computers, etc.).
How should one go about finding competent and interested people who would like to be part of a body like I describe, or consultants to one?
Be very careful. You will need to find people who are trustworthy AND brilliant. Good luck.
Air Gap... (Score:3, Insightful)
First and foremost, keep the computer system closed. Do not hook it up to any outside networks. No networks, no phone lines, no serial connections. That will eliminate quite a bit of the risk of attack.
If that is not an option, then run the outside network connection through a very tight firewall.
~Sean
Re:Air Gap... (Score:2)
Re:Air Gap... (Score:2)
~Sean
Re:Air Gap... (Score:2)
wow I can't beleive u want him to take such risks (Score:2)
Air Gap (Score:2, Informative)
Then there are the navigation and communication systems. These are very important for a ship but may require limited access to the outside (GPS, etc.). They should be completely separate from the air-gapped systems above and of course implement all other possible security measures (firewall, etc.).
On a modern ship there will likely be a third level of systems used for personal communication. Web browsing, email and the like are not vital to the safety of the human beings onboard the ship and thus do not require security as stringent.
Using a multiple-system, multi-tiered security model like this may offer the best combination of security, price, and convenience, since not everything has to be secured to the highest order.
Security (Score:5, Insightful)
Re:Security (Score:2)
Protection "from the Internet" is only one part of the issue. Analyzing the security issues should include an analysis of the local issues. Let's look at the ship scenario, and come up with some potential non-Internet dangers:
These are just a few potential worries off the top of my head that do not, intrinsically, have anything to do with Internet connectivity, or even necessarily with connectivity at all.
Common Criteria is a possibility (Score:5, Informative)
Some sample standards (or "Protection Profiles") include proxy and packet filtering firewalls.
My sense is the folks overseeing the Common Criteria would like industry groups to sponsor Protection Profile development. For example, banks could come up with profiles for wire transfer components, ATMs, etc. The shipping industry could be another.
BTW, if you visit the Website, there is an interesting line of Common Criteria-branded clothing, for the geek who has everything!
Re:Common Criteria -what about NIST in the US? (Score:3, Informative)
Their audit and risk checklists are quite extensive.
Safety of computer systems... (Score:2, Informative)
Backup systems have to be in place, which is why captains have to be able to navigate manually. Just like how yachts have to have motors in case the sails break, etc., and have to be able to safely navigate in ports.
The threat of viruses should be minimal, because the physical security of the ship's navigation systems should be locked down: no Internet access, no floppy disk drives, closed systems, etc.
However, there have been failures. I remember a Navy submarine running Windows NT or something, and it crashed (the OS, not the sub). They had backup systems, of course, but they looked pretty stupid. Windows NT Crash on Navy ship [info-sec.com]
The key point here is that you can test systems anyway: running for long periods of time, checking for memory leakage, hardware failure periods, etc. And bugs that come up are usually corrected for free when you're talking about expensive navigation systems.
Sure, you can lose money by being out of action for a few hours, but that could happen due to any number of other mechanical failures too, so you just calculate some kind of percentage chance of failure based on the past history of the navigation system.
It depends on how the computers are used (Score:3, Insightful)
And I work for a railroad that moves half a million people a day. I like to think they're not too dissimilar industries - when my computers shut down, the railroad stops running. I'm guessing that when your computer stops, the ship stops moving but doesn't sink or explode (i.e. there are hardware items that relieve excess pressure, etc.).
There are some differences. My trains have low-level hardware (based around gobs of vital relays) that will stop them from running into each other. I doubt ships have anything like this.
The standards for what you or I do are drastically different from what someone writing software for an airplane's fly-by-wire system has to do. There, if the computer stops or starts doing the wrong thing, it falls out of the sky. Scary stuff.
So, it depends on what the computer controls, but you haven't given us this information.
Now we post trolls? (Score:2)
Sounds to me like the shipping industry is behind the times -- there are lots of other industries that have standards for computer systems. The FDA is becoming much more strict about computer validation and there is a great deal of documentation and testing required to implement a validated computer system. There are also many, many recognized Quality Management Systems in existence that apply equally well to a computing environment.
>Everybody seems to know, for instance, that UNIX is safer than Windows
Sorry, I couldn't ignore this... Validation of a computer system is about proving something is fit for purpose. Documenting requirements, design, performance, data integrity etc. It ain't about what OS you run. There's not a sane business person in the world who will rally behind someone masking anti-Microsoft sentiment as "computer security".
Not what he's asking.... (Score:4, Interesting)
What I'm guessing he wants to know is something more along the lines of this: Windows NT cripples US Navy Cruiser [info-sec.com]
In which case, he's really asking which software/OS is the least likely to puke and leave you up a creek without a paddle.
Re:Not what he's asking.... (Score:5, Insightful)
1 - Non Secure
This describes a public terminal (e.g. what you might see in a shopping mall or your local university computer cluster) that is running MSDOS. The keyboard and mouse aren't even locked down.
2 - Half-Assed Security
This describes a public terminal that is securely bolted to the desktop and is locked shut. A log-on prompt appears, but is easily bypassed (e.g. Windows 95, or a Linux box that is bootable via an accessible CDROM or floppy drive). [Alternative: the logon prompt appears but passwords are available by shoulder-surfing, e.g. "employee only" terminals in retail stores.]
Levels 1 and 2 are a black hat's paradise.
3 - Almost Secure(tm)
This describes probably 95% of the unwashed masses connected to the Internet. This machine has a firewall and virus scanning installed, but the virus definitions might not be up to date, and the firewall isn't what you'd describe as industrial strength. Some security patches may have been applied, but probably not all of the latest ones. This machine might present a challenge for your ordinary script kiddy, but an experienced cracker can probably find a way in. Configurations in this category include most Windows installations, default Linux installations that start up every service under the sun (older Red Hat; I don't think the newer ones start everything up), and public web servers that are "sort of" secure but have holes in CGI scripts or are missing security patches. This also describes a lot of corporate wireless networks.
The black hats enjoy level 3 probably more than 1 and 2, just because of the (slight) extra challenge.
4 - Pretty Good Security(tm)
This describes a machine that is physically locked down, but still connected to the network (generally behind an external firewall). Security patches are applied within hours of announcement. Logs are human monitored, and are written either on another machine, or on permanent media (e.g. printer or CDROM). There are no more services running on this machine than absolutely necessary (in other words, a mail server ONLY has ports 25 and 110 open).
In practice, these don't generally get cracked. When it happens, it is usually a failure of human security -- telling someone your password, sending your password via email, etc. A break-in might also be caused by an as-yet-unpublished remote exploit in one of the major services (sendmail, bind, apache, etc.). These machines are often susceptible to certain types of DoS attacks (when such attacks can't be stopped at the router/firewall).
5 - Unbreakable security
This describes a machine that is physically secure (i.e. the hdd is locked down inside a secure chassis) and has no external network connections. It is also shielded from van Eck phreaking and other eavesdropping.
You won't get into this machine without weapons, "truth serum", or monetary inducements to certain privileged individuals. Also worth noting: this machine isn't really practical for everyday use...
Re:Not what he's asking.... (Score:2)
Perhaps someone dials in via satellite, gets some virus, and later plugs into the LAN to see what is for dinner and it spreads.
Like you said - the nav systems should be secluded, but you never know. Perhaps the Captain likes to look at the info in his cabin?
Re:Not what he's asking.... (Score:5, Informative)
My father is a marine consultant, and I have been to several ships with him; they rely much more heavily on computer systems these days than the parent suggests.
One specific example:
The charts a ship navigates by were running on an NT workstation on the bridge of the vessel. It is no longer a requirement for up-to-date backup charts to be kept on board. A CD is sent to the ship each week updating the charts to the latest version, but the backup paper charts that are kept are no longer updated at these regular intervals because of the increased reliance on the NT charting software. The GPS onboard the ship updates the ship's current position in the charting software running on the NT workstation so the master can see where they are with respect to the course that has been plotted previously.
This same ship contains a small network of only 4-5 computers (it's only a coastal tanker): one for charting on the bridge, one controlling and monitoring the amount of oil flowing on/off the ship in dock, etc. But...
The ship also has access to email (and consequently attachments) at sea via Inmarsat satellite software plus (uh-oh) Microsoft Outlook. If a member of the ship's crew were to open an email attachment apparently from the office, which was in fact a virus, and the network security was not up to scratch, it might have the capacity to shut down not only the ship's main course-plotting software (sending them to the backup paper charts), but also to disturb the monitoring of oil/ballast on and off the ship in the dock.
There are also proposed improvements which would in effect link the course-plotting software to the autopilot, thus controlling the ship's movements from the PC's course-plotting software (unless, of course, any evasive action needed to be taken - the master would switch to manual).
This is only a small example of the problems that could genuinely be caused if a virus infected some of the more modern ships in today's world.
Re:Not what he's asking.... (Score:2)
Don't worry! I'm sure that Crash Override, Acid Burn and Cereal Killer will save us all by hacking into a Gibson with their iBooks!
Re:Not what he's asking.... (Score:2)
Well, this doesn't sound too horribly dangerous, although it's a little sloppy IMO. Presumably (correct me if I'm wrong) it's acceptable in this situation if the navigation system is subject to short periods of unavailability? Just how big a problem is it, however, if that NT box is totally destroyed in mid-voyage?
Well, obviously that's a huge problem just waiting to happen. I certainly would never sign off on such a system. But the question remains: just how much better would be good enough? Just how catastrophic, for instance, would it be to lose that ballast monitoring system?
If this system can be taken offline safely for, say, an hour at a time, then I would not say changing OS is necessary - a sensible program of security and reliability enhancement can easily make a Windows-based network perform at a level that's acceptable in that case. Given how much these vessels cost it would seem horribly short-sighted to scrimp, so I would recommend:
Switching Operating Systems might eliminate the need for some of that work, but much of it needs to be done regardless. Hardware failures need to be planned for, in particular.
Quit promoting that stupid myth (Score:2)
The article talks about a software problem, not an OS problem.
Rainbow Books (Score:3, Interesting)
Later the EU produced their Green book which looked at availability as well, this is kind of good for information systems but it doesn't really cover real-time control systems.
A long time ago, I worked on real-time control systems. We divided our systems into control/measurement, supervisory and, at the top, information systems. At the lowest level, we are talking hard real-time, simple enough to be very reliable. They had to be, as they were typically sitting by a man-sized chemistry set. The supervisory systems gave the pretty interfaces; they could crash, but generally they didn't. These were for control rooms, and whilst bypassing them was possible, it wasn't easy. The top-level system ran all kinds of complicated software applications that could and would occasionally crash. Apart from the crudest electrical standards for the stuff in the plant and the control room, there were no evaluation criteria.
How a defense contractor handles software (Score:4, Insightful)
When using a requirements-based system (where you write requirements for software and then the software is written from the requirements), there are multiple checkpoints. First, the requirements document for the software must meet or pass certain criteria. Second, the software must meet or pass the criteria put forth by the requirements document. Third, the software is rigorously tested.
Now, in fighter planes, the software must be incredibly robust - you don't want planes falling out of the sky - and in defense projects, bureaucracy tends to inflate the whole process.
That being said, requirements are an excellent way to control the quality of software, or an installed computer system.
And this is important! We all remember the movie Hackers, in which the Da Vinci virus was going to cause a bunch of oil tankers to tip over into the ocean. And we all know how closely that movie parallels reality.
Talk to the FAA (Score:4, Informative)
It'd be almost trivial for the shipbuilding industry to adapt them to their somewhat lower-risk environment.
--Blair
I don't know "that UNIX is safer" (Score:2, Interesting)
Insurance and UL (Score:2)
Unfortunately, insurance doesn't pay if software is defective. There has been some talk about insurance companies writing policies to cover web site break-ins; if this happened I'm sure UL (or some other company) would quickly step up to certifying software and configurations.
The real problem is maintaining the certification. What might be good at one point could have undiscovered problems that when discovered cause a once certified piece of software to become delisted.
Windows vs Unix security (Score:4, Insightful)
This is a poorly worded and completely unsupported opinion. I despise the Evil Empire's crappy software as much as the next Slashdotter, but Windows can be just as secure as anything out there; it's just that it's so poorly administered most of the time that it's often left unsecured, and therefore gets abused constantly.
Ultimately, both operating systems have superusers, so both OS's are inherently dangerous. How is Unix any safer if you only have to exploit one vulnerability to take over the whole system?
It's called engineering judgement. (Score:3, Insightful)
Design bases means that information which identifies the specific functions to be performed by a structure, system, or component of a facility, and the specific values or ranges of values chosen for controlling parameters as reference bounds for design. These values may be (1) restraints derived from generally accepted "state of the art" practices for achieving functional goals, or (2) requirements derived from analysis (based on calculation and/or experiments) of the effects of a postulated accident for which a structure, system, or component must meet its functional goals.
The same logic underlies all design. At some point you have to have engineers you trust and they should be versed in the "state of the art" and all applicable studies.
In the nuclear industry we can and do rely on vendor studies. Who else but GE is going to know the maximum power levels that are safe with their reactors? They built a full scale model and proved it.
In the software industry, as you have noticed, things are a little less clear. First, Microsoft is an unethical company. (gotta go before finishing!) You and I both know that Windows is an unstable system. It changes all the time, and those changes break programs. Some would even say that Windows is unstable without any changes, and indeed sites that use it typically see 30-day uptimes and no better. Anyone who would rely on such a thing for anything needed to protect the public safety is incompetent. How that might be worked into a ship is a matter of judgement. I would not use it except as a game platform in the rec room, or to look after some system that is superfluous.
Solution (Score:2, Funny)
-mm
obscurity mon ami
It all depends (Score:2)
My thought on this is: what levels of security are required? I've never heard of someone hacking an oil tanker, but just because I've never heard of it doesn't mean it hasn't happened, or that it's impossible.
My opinion is that the most important thing you would need software for is navigation software, in order to determine location, and software for weather reports, so you can plan ahead for adverse weather conditions. Can you get both for either OS? Sure (but I don't know names). Do they work? Well, if they didn't, we'd have a few more ships crashing into reefs.
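For a sense of what the navigation piece involves at its very simplest, here is a toy dead-reckoning calculation in Python. It uses a flat-earth approximation and invented numbers; real navigation software would use great-circle math, current and wind corrections, and GPS fixes.

```python
import math

def dead_reckon(lat, lon, heading_deg, speed_knots, hours):
    """Advance a position by heading and speed (flat-earth approximation).

    Illustrative only: accurate over short distances at moderate latitudes,
    nothing like what production navigation software actually does.
    """
    distance_nm = speed_knots * hours  # nautical miles travelled
    heading = math.radians(heading_deg)
    dlat = (distance_nm * math.cos(heading)) / 60.0  # 1 deg lat ~ 60 nm
    dlon = (distance_nm * math.sin(heading)) / (60.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Due north at 10 knots for 6 hours covers one degree of latitude:
print(dead_reckon(50.0, 0.0, 0.0, 10.0, 6.0))  # -> (51.0, 0.0)
```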
It gets away from secure systems, in my opinion, and more towards robust systems. Maybe it's just words, but I view secure and robust being different.
EFGearman
--
Shh... (Score:2)
M@
PEBCAK (Score:2)
I don't care how secure your Unix system is: if your root password is "password", or you let root telnet in, your system is insecure. Selecting "Unix" over "NT" should not save you money on insurance if it's the same moron running either machine.
Not to mention that there is some inherent risk in change. If you declare that "Unix is secure" and give a break to anyone using it, you're going to end up with a former NT administrator forced to admin a system he knows nothing about. (The same would be true in reverse.)
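A trivial sketch of the point: the OS choice is irrelevant if the credentials are this weak. The blocklist and length threshold here are arbitrary assumptions, not any real system's policy.

```python
# Assumed, illustrative blocklist; a real check would use a large
# dictionary plus rate limiting, not application code like this.
COMMON_PASSWORDS = {"password", "123456", "root", "admin", "letmein"}

def is_weak(password):
    """Flag passwords that are short or on a common-password blocklist."""
    return len(password) < 8 or password.lower() in COMMON_PASSWORDS

print(is_weak("password"))                      # -> True
print(is_weak("correct horse battery staple"))  # -> False
```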
Risks Associated with Ship Computer Systems. (Score:3, Interesting)
I recall that a while ago some navy ships were stopped dead in the water due to computer failure, so there are legitimate concerns. Most ships have a large number of fallback systems - notably crew - that can recover from most problems.
Large ships also benefit from a reasonable physical security structure - limited bridge and engine room access for crew - which helps computer security.
In light of this natural physical isolation, limiting the net access of the navigation computers is a natural and effective security boost.
Most of the 'essential' computer systems currently in use are not OS-based, but embedded. It would be silly to worry about the electronic fuel pump in your car getting a worm. These embedded systems are often virus-proof because they use ROM program space; any bugs are the result of programmer error and insufficient testing.
So, I suspect that only high-level systems like navigation are vulnerable to worms. Now, let's take a look at possible damage.
Massive failures can be caused by hardware, so there must be a backup system regardless of the software that you choose.
The same redundant systems can also be used to keep the master system honest.
In general, good policy and management is more important than what software is used.
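The "keep the master system honest" idea above is often implemented as majority voting across redundant units. A toy sketch in Python, with an invented agreement tolerance:

```python
# Hedged sketch of 2-out-of-3 redundancy: accept a sensor reading only
# if at least two independent units agree within a tolerance.
def majority_vote(readings, tolerance=0.5):
    """Return a value agreed on by at least two units, or None if no
    two readings agree within the tolerance (no quorum)."""
    for i in range(len(readings)):
        for j in range(i + 1, len(readings)):
            if abs(readings[i] - readings[j]) <= tolerance:
                return (readings[i] + readings[j]) / 2.0
    return None  # no quorum: fall back to backup/manual procedures

print(majority_vote([10.0, 10.0, 99.9]))  # -> 10.0  (faulty unit outvoted)
print(majority_vote([1.0, 50.0, 99.0]))   # -> None  (no two agree)
```

The design choice is that a single faulty unit can be outvoted, which is exactly how redundancy backs up policy rather than replacing it.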
Well, history repeats itself, right? (Score:3, Interesting)
On the other hand... when has the computer industry ever mirrored any real-world industry? We still don't have the equivalent of the Consumer Product Safety Commission, nor is there product liability, recalls, or defect-related lawsuits.
If there were, Microsoft would make the Ford/Firestone fiasco look like nothing.
- JoeShmoe
.
Re:Well, history repeats itself, right? (Score:2, Interesting)
Saying that the admin makes a difference (which is true) doesn't count for much in the eyes of an insurance underwriter. You could say the same about the driver of a car, or even the captain of a ship (what is the captain of the Exxon Valdez doing these days?). You could be the safest driver on the road, but insurance just sees an 18-year-old male with no prior accidents.
An NT box with a good admin can be made safer than a *nix box with a poor admin, but insurance looks at classifications.
Evaluation and Certification (Score:4, Informative)
Simply put: it defines a complete, scalable, tailorable and relevant process to design, test, certify and maintain a system for use.
IF: 1. Good, well informed individuals identify vulnerabilities during system design and testing,
2. The upper management commits to following the maintenance plan, and
3. The principles of good system design are followed (i.e. KISS, enforcement of least privilege), then many security issues are non-issues.
IMHO, one of the most important things in certifying a system for a critical app is to get the underlying SW from a reputable vendor, one who identifies "Day 0" exploits immediately, preferably one on the Common Criteria List, and who offers a modularized package to limit the amount of unused but potentially vulnerable code in the system. No system is going to be perfect now and for its entire lifespan, but follow a good maintenance plan and you may even be able to make a M$ system secure!
Depends on the Industry (Score:5, Interesting)
I help build software for invasive diagnostic medical devices. The FDA (and similar organizations for other nations) is very concerned about the software we use. They don't have a checklist of brands, makes and models of software, since that's not the nature of software. But they do audit our development process. ISO compliance is easy. FDA compliance is hard.
For our next project, some boneheads decided on Win2K and "embedded" Win2K. I personally think the decision is stupid. But it probably won't affect the final quality of the device. Why? Because it won't be a stock Win2K, it will be the embedded version, stripped of everything we don't need. We will be in charge of the hardware it runs on. It will be tested under rigorous protocols. Etc.
The FDA doesn't care that it will have Windows on it. But they will care that it operates safely. That means it can't crash while diagnosing a live patient.
Depends on the criteria, too (Score:3, Insightful)
Re:Depends on the Industry (Score:2)
I think the consumer product example is good, because it displays a scale of increasingly stringent requirements as the implications of a product's failure increases. This keeps the end costs proportionate to how much we care.
We're all aware of UL and CSA, but the previous poster's point was that they don't really care, compared to how much the DOT (for cars) and FAA (for aircraft) care. A bathroom lightbulb is an imprecise, low-quality, highly unpredictable piece of garbage compared to an airplane wing bulb, and UL (or whoever) doesn't really care. Nor should they!
Networked ships? (Score:2, Interesting)
If the ship needs an internal network connected to an open network, then it should be entirely physically separate from the control systems. No firewalls, no fancy security measures. Just no route between the two.
More of an issue is software reliability and stability. I won't get into the Linux/Windows argument, but generally, a more stable, stripped-down system can be more easily achieved with Linux. On Windows, you run the whole OS, no two ways about it, even if that just adds to instability and problems.
Generally, essential computer systems, such as those on planes, radar, life-support systems, and satellites, are as simple as possible and undergo rigorous testing. Development is often frozen early on because of this, resulting in reduced features but better overall performance. It can take several years for changes to propagate through the system... this can be annoying if it is as simple as a GUI change (say, one display needs to be frequently accessed but requires several button presses, where another, rarely used display has instant access off the yoke).
Hardware reliability could be a problem as well - though I should imagine these systems are built by people who know what they are doing. I wouldn't trust off-the-shelf boxes and bog-standard Cat 5 linking them.
Redundant systems are probably a very good idea - as is some form of power conditioning and UPS system, as ships power may not be the best.
There is a lot to consider, but I think you may just as well turn to someone who has experience with aviation computers as well as someone who knows a lot about closed network security.
And imagine... maybe the dodgy oil tanker plot in Hackers could come true...
Risks of www.dnv.com (Score:3, Interesting)
Click on 'classifications', then try to use any of the links on the left, register of vessels and such. The link for that is file:///registerofvessels. Needless to say, that link doesn't work too well on a public internet.
Every ship captain's nightmare (Score:4, Funny)
Careful on requirements (Score:2)
Yes, [everyone knows] UNIX is safer than Windows. BUT, that is in general, not specific.
I could write my own Unix: make it fully POSIX, even pay for legal use of the Unix name. (I don't have the money, but I could in theory.) I'm a fairly good programmer, and I've done some OS-level work. However, I know next to nothing about writing a secure system, and apart from the backdoors I intentionally put in my code, there would be many accidental security holes. Yet it would still meet the standards to be called Unix by all measures.
The point is your standards need to mandate a solution that works. Require code audits by qualified external parties if the system is net-connected. Make sure your external parties are well chosen (for example Bruce Schneier of Applied Cryptography fame, or his company), but make sure you have several different experts represented. Make sure the requirements are reviewed. Actually, you probably already have processes for reviewing the mechanical areas of the ship; extend those processes to the software. Remember, anything you can do in software I can do with gears (though in some cases I don't know if there is enough metal on earth to actually make all those gears, not to mention the reliability), so your mechanical review process should extend to software.
Do you let your suppliers buy an engine (e.g. from Cummins) off the shelf and put it in, or do you require that your mechanical engineers examine the engine design first? If they can buy any engine, then they can put in any software. If you need to see all the engine design, then you need all the software design.
FDA Examples (Score:2, Informative)
Simply do any Google search on "FDA 21 CFR" and you'll find hordes of information that you can use.
Medical hardware (Score:2, Insightful)
Ahh... (Score:2)
Another day, another "Tell Me Exactly How To Do My Job" post masquerading as an Ask Slashdot question.
:)
Not life-threatening, but... (Score:2)
Also I believe there is a similar set of standards for accountants using spreadsheets.
Most of us just assume that our software is going to work and tell horror stories when it doesn't, but for those whose very careers depend on the accuracy of their programs, software is indeed very closely monitored.
This is for SHIPS, folks... (Score:2)
* No network connections to non-trusted systems (i.e., onboard crew and passenger personal systems)
* Solid stability and reliability in operation.
Given those, your ship computers should be secure.
Re:This is for SHIPS, folks... (Score:2)
You can expect that navigation systems will at some point receive updated charts or Notices to Mariners via the Internet.
You can expect that navigators will receive up-to-the-minute, detailed reports about harbors they are about to enter.
You can expect that shipboard control systems will interface with shipboard navigation systems, which by reason of the aforementioned scenarios, will effectively have a traceable data connection to the PC whose monitor you are staring at right now.
What is necessary are firewalls: 1) a firewall between the satellite-uplink internet connection (duh, of course they have this; they'd be stupid if they didn't) and the shipboard LAN; 2) a packet-inspecting firewall between the LAN that has full internet access and the navigation system, allowing only those packets pertaining to navigation to pass; and 3) a packet-inspecting firewall between navigation and control systems.
The navigation system may be allowed limited access to the internet, perhaps only to certain sites. The control system should have NO access to the internet; rather, it should only be able to communicate with the navigation system.
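A toy version of rule 2 might look like the following Python sketch. The zone names, host name, and port are made-up assumptions, and a real deployment would use an actual packet-inspecting firewall, not application code; the point is only the default-deny allowlist structure.

```python
# Default-deny allowlist between the internet-facing LAN and the
# navigation system. All names and ports are invented for illustration.
ALLOWED = {
    ("lan", "nav-server", 443),  # e.g. chart updates over HTTPS
    ("nav-server", "lan", 443),
}

def permit(src_zone, dst_zone, dst_port):
    """Pass a packet only if its (src, dst, port) tuple matches an
    explicit allow rule; everything else is dropped."""
    return (src_zone, dst_zone, dst_port) in ALLOWED

print(permit("lan", "nav-server", 443))  # -> True
print(permit("lan", "control", 22))      # -> False (no route to control)
```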
Of course, I say all this with NO expertise and NO experience in shipboard IT infrastructure.
Accepted security criteria (Score:3, Informative)
here [commoncriteria.org] and here [nist.gov].
Which supersedes the Orange Book:
here [ncsc.mil] and here [iastate.edu].
Guidelines for writing secure programs (HOWTO) (Score:2, Informative)
the low down (Score:2)
Since the computers are in a marine environment are they resistant to (salt)water?
Is there a knowledgeable computer tech on board, and what are his additional duties? Is he/she there to make sure the computer system stays online, or is he/she also cleaning out shitters?
One computer system is much like another, just as one OS is much like another. Both Linux and Windows (pick your version) have their bugs; the question is whether those particular bugs will affect the operation.
In any operation there ideally should be enough spare parts around that you could build another complete unit if needed, but there's never an ideal situation.
The list could go on and on and on, but there are a few major points...
1) environment
2) support personnel
3) inventory
4) access
Most here will be talking from an electronic security aspect, but on a ship the major focus should be physical security.
Dead Reckoning (Score:2)
It occurred to me that this is the way software purchases are too often made: rather than determining exactly what is needed, purchases are based on what's already there and how fast development has proceeded. It seems like people buy the newest version not because they need it, but because it's available. Most users I know would be doing just fine with Word 97 (heck, most of them would do great with WordPerfect 6 for DOS), but they have upgraded to Word 2000 and then Word XP because it's there. (I used to use WP6/DOS extensively, and it NEVER crashed on me.)
If Microsoft spent more effort making Word 2000 and Windows 98 more stable rather than succumbing to feature creep, the world would be a better place.
If people wouldn't upgrade for the sake of upgrading, they could demand that future software versions be compatible with older versions: a document in Word XP should be openable in Word 1.0.
The issue is SAFETY, not SECURITY (Score:3, Insightful)
SECURITY asks, will the lock keep out intruders?
SAFETY asks, will the lock allow personnel to pass quickly in the event of an emergency?
SECURITY asks, will the window resist breaking in an intrusion attempt?
SAFETY asks, will the window resist breaking if accidently impacted? Can the window be used as an egress in an emergency? If the window breaks, will the fractured glass cause injury?
SECURITY asks, can intruders compromise the ships navigation or control systems?
SAFETY asks, will failure or compromise of the navigation or control systems have a negative impact on life or property?
SECURITY asks, does the system have permission to perform task A while being restricted from performing task B?
SAFETY asks, are the navigation or control systems able to do the specified job in the specified manner?
SECURITY asks, how will access be controlled in the event of a system failure or compromise?
SAFETY asks, how will catastrophic failure be prevented in the event of a single system failure or compromise?
Hopefully, these questions will give you an idea of the kinds of questions a computer systems safety panel would be responsible for answering. Security is concerned with authority, which is NOT the question here. Safety is concerned with protecting the life and health of personnel and the physical integrity of assets.
That being said, Michael should go back and revise the headline to read "Computer SAFETY Criteria."
safety critical systems (Score:2, Informative)
Do some actuarial analysis (Score:2)
The only way I can think of is to do some good old-fashioned actuarial analysis. It's a lot of work and a lot of time, but basically answering this question involves (1) collecting gobs of data and (2) analysing it. As well-intentioned, well-researched, and sensible as the rest of the front-end design advice might be, it's basically a lot of handwaving. This is about where the rubber hits the road, not a theoretical discussion about what chemicals should be used to make tires.
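A toy version of that actuarial loop, with entirely invented incident counts, exposure years, and claim costs; real analysis would need far more data, exposure adjustment, and confidence intervals.

```python
# Invented data: incident counts and deployment-years per platform.
incidents = {"platform_a": 12, "platform_b": 3}
exposure_years = {"platform_a": 400, "platform_b": 150}
cost_per_incident = 250_000.0  # assumed average claim

def annual_premium(platform):
    """Expected annual loss = (incidents per exposure-year) * average claim.

    A pure expected-loss figure; real premiums add loadings and margins.
    """
    return incidents[platform] * cost_per_incident / exposure_years[platform]

print(annual_premium("platform_a"))  # -> 7500.0
print(annual_premium("platform_b"))  # -> 5000.0
```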
The Basic Problem (Score:2)
Software is different in three regards.
I think this will change drastically as soon as software makers start to have real liability for the products they sell (free software is a separate issue), like other engineers do. Then it might just happen that you will not find anybody willing to write software where their feeling tells them the art is not advanced far enough. Software production will be slower and more careful. And, even more important, those who fail repeatedly will have to leave the business!
There are some safety standards (Score:3, Interesting)
This standard, which also applies to software (see 61508-3: Software requirements), defines some very stringent requirements for systems that have anything to do with safety, i.e. where a failure of the system could endanger life.
See the IEC's website [www.iec.ch] for more...
Computer Security Criteria (Score:3, Interesting)
Read this first ... (Score:3, Informative)
Bruce Schneier's Secrets and Lies : Digital Security in a Networked World [amazon.com]. Many of your questions will be answered, and you will walk away from the reading with much better questions.
If only it were that simple (Score:4, Insightful)
This probably means that critical systems on things like ships should not be running any flavor of Windows, nor maybe Linux either. There are a bunch of OS's made for embedded systems; due to their small size and simplicity they are probably faster, and certainly less vulnerable, or even completely invulnerable, to this kind of attack. If your requirements are that stringent, that's what you should be using.
Re:Thanks! (Score:2)
2. Criteria that clearly show the strengths and weaknesses of the alternatives that exist
3. The fact, if it is a fact, that it is possible to create a body that, based on objective criteria, can act as an authority that sets the standard for safety of ship's computer systems.
In reply:
1. For reliability, on cursory examination, I would suggest looking at the methods that new technology has replaced. Was it important for the previous method to be reliable? To what degree? Then presumably the new method needs to be as reliable.
2. Now we're back to how to set up a criteria creation body, which I think has been less addressed in the responses. I mentioned SANS in my early posting, why not contact these people, and people in similar industries (the person who mentioned they were in the railroad industry for example) and find how they did it, who the major players were, and talk to them.
3. I'll take a pass on this one.