darthtuttle asks:
"Hardware has a concept of meantime between failure, so how about applying a similar concept for software. Here's how it works. Cracks can be described by the level of access gained, some examples are: remote root, remote user (root if run by user root), remote group, local root, local user, local group, and so forth. Applications or services have their own measurements and descriptions as well. Most all types of cracks can be listed in an order and a higher level crack is equal to each of the lesser level cracks. For example: a remote root is also a remote user and remote group crack. Now measure the mean time between incidences! Do people find ways to break in to your system every day? Every week? Every month? Every year?"
"Rating a complex system would mean combining the ratings in some meaningful way. for example if you are measuring a RedHat install you might need to consider the name server, sendmail, and all other services running on the system on top of the kernel. Given a method to do this you could rate an entire infrastructure. I'm sure the insurance companies would love this. It would give them a way to measure the chance of you spilling the beans on your customers data.
I'm curious, do you think this would be useful if it could be done reasonably? What kind of mean times do you'd think you'd see for the various products out there?"
Re:Lot more variables to calculate than hardware (Score:1)
A break-in is something you can avoid ! (Score:1)
i think it goes something like this... (Score:1)
30 people x 30 years of life each == 900 man-years (literally, the number of 'men' multiplied by the average lifespan _so_far_; I'm just saying that they're all precisely 30 for simplicity. As long as the individual lifespans _add_up_ to 900, it's no big deal).
Now, 900 man years -- someone in that class should be dead soon...
MTBISA (Score:2)
Currently, 3 years.
Re:OpenBSD (Score:2)
I love the nice tight feel of the BSDs. I like the FreeBSD install much better than the overcomplicated or install-too-much Linux installs.
Even though I really like the BSDs, Linux has got a lot of momentum behind it at this point. Some of the coolest (for me anyway) open source projects are all too often Linux-only.
True, a default Linux install can be less secure than, say, an OpenBSD install. Part of the problem is in the install and default configurations in Linux. I fully believe Linux can be as secure as OpenBSD if set up correctly. And there are some really neat security projects going on (SELinux, LOMAC, etc.) that will really tighten up security for Linux (and offer more options and control than OpenBSD can at this time).
Linux also seems to have more choices for encrypted filesystems. I like the loopback single-file/device approach vs. the per-file encryption of CFS that the BSDs use. ppdd was/is cool too; I'm waiting for a version that works with a recent kernel, because it's awesome to encrypt your root filesystem.
Not practical (Score:2)
It would not be difficult to measure the time between cracks of default installations of $OS with no added software, exposed directly to the internet at a low-profile location. Such numbers would be useless. Almost nobody actually has true default installs of anything, and virtually every system has configuration changes, software added or removed, various local administration practices enforced, etc. It also tells you nothing about the effect of firewalls and intrusion detection, or the impact of merely *being* a higher-profile target or a site which provides certain services to the world.
In short, there are conservatively a million different inputs to this function, among the least important of which is what the system looked like after the initial OS installation. Until such magical time as OS vendors find a way to make every possible user happy with the set of software and configurations installed out of the box, customizations will remain the rule, and at least in the Unix world, so many customizations are possible, exercising so much different code from so many different sources, that no reasonable analysis of this type is possible.
In short, neat idea, but not possible to implement in any meaningful way.
He sounds right to me (Score:1)
With hardware, you can imagine a scenario where you ask it to do exactly the same thing a million times (say, eject a disk); it can do it right 999,999 times and fail on the millionth occasion. But because of the problem the original poster outlined, in order to measure the time between failures of software you have to assess the frequency of the events which tickle the bugs; in the case of the behaviour of script-kiddies, this means that what's supposed to be a very simple statistical measure of reliability incorporates a complex and controversial social model of the behaviour of an unpredictable group of people.
--
Defining security assurance levels (Score:3)
Bruce Schneier has been talking a lot about the need for monitoring and response to intrusions, based on the reasonable premise that you can't prevent 100% of all intrusions for all time (even OpenBSD has had remote root holes some years ago, and most holes are actually due to specific applications). If you accept this, it seems sensible to define goals for monitoring and response times.
Defining such assurance levels is not easy - on one project, I wrote requirements for security assurance that defined quantified goals for such things as 'minimum time to detect break-in' and 'minimum time to respond to and stop break-in'.
If you are interested in quantifying requirements for security (and other 'soft' requirements such as performance, availability, reliability, usability and flexibility), have a look at Tom Gilb's work - his site is at http://www.result-planning.com/ and his most useful book is at http://www1.fatbrain.com/asp/bookinfo/bookinfo.as
Solaris????? (Score:2)
Speaking as a part-time sysadmin in a Solaris environment, I can only hope that referring to Solaris as one of "the most secure OSes" was a grotesque joke. I would not categorically say that it is worse than Linux or Windows, but it is definitely no better - new exploits for Solaris come out all the time. Currently I know of one rootshell exploit which is just hanging over our heads, because it's in a subsystem which can't be turned off and which Sun has so far failed to patch. I can't speak to the other OSes you mention, but "how many exploits do you hear of for such-and-such" is totally irrelevant to the actual security of the system, and the fact that you include Solaris in the list makes me rather skeptical of the rest of it.
Re:mother of a website!? (Score:2)
They buy mostly dual-proc machines since, given processor obsolescence, they will last longer: it's the same reason why I bought a dual-proc at my home, and after three years it's not yet ready to be dumped.
Back to the matter at hand. Those computers (most running NT) sit idle 95% of the time, because the limitations are not CPU power but administrative (what belongs to whom, what kind of setup is needed, whether it's to be high-availability), assorted problems with the OS (load a host more than X and NT - or Win2k - will go BANG), and general reliability problems (if you listened to Microsoft's specifications, you'd have one site hosted on each server, no more).
Still, the double CPU thing somewhat limits obsolescence, and so it persists.
Re:It doesn't really work (Score:1)
A better metric might be the ratio of intrusions/attempts
For the rest of the text, one has to assume that intrusion attempts are common enough to measure. If there is less than one script-kiddie attempt per month or so, chances are very small that anyone would direct anything but a random attack at the site.
But I would also suggest adding an Average Intrusion Age, i.e., the number of days/weeks/years the methods used in the intrusion attempts have been known to the general public. In my opinion, that gives you a metric of how interesting a specific site is to hack.
The theory is that only people who are interested in hacking the specific site will bother using the newest methods. Script kiddies will be forced to use older methods, waiting for the availability of public tools, and should the hack attempt fail, they are likely to focus on other sites instead.
By combining that metric with the average time it takes you to patch a security hole from the moment the hole becomes known to the public, one gets an indication of the probability that an intrusion attempt will succeed.
That knowledge, together with the number of attempts per month, can indicate whether the system's security is good enough.
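The metrics proposed above (Average Intrusion Age, patch latency, and attempts per month) can be sketched roughly as follows. The success model - an attempt succeeds only if its method is newer than your mean time to patch - is my own simplification of the argument, and all names and numbers are invented for illustration.

```python
# Rough sketch of the parent's proposed metrics. The success model is
# an assumption: an attempt is counted as likely to succeed only if the
# exploit method is younger (days since public disclosure) than our
# mean time to patch.

def average_intrusion_age(attempt_ages_days):
    """Mean age of the exploit methods seen in attempts (days since
    the method became public). Low values suggest attackers who care
    about this site specifically; high values suggest script kiddies
    using old public tools."""
    return sum(attempt_ages_days) / len(attempt_ages_days)

def estimated_success_rate(attempt_ages_days, mean_days_to_patch):
    """Fraction of observed attempts that our patching discipline
    would not yet have closed off."""
    unpatched = [a for a in attempt_ages_days if a < mean_days_to_patch]
    return len(unpatched) / len(attempt_ages_days)

# Ages (days) of the methods seen in one month of logged attempts.
ages = [2, 45, 90, 180, 365]
print(average_intrusion_age(ages))                          # 136.4 days
print(estimated_success_rate(ages, mean_days_to_patch=7))   # 0.2
```

With a high average age and a short patch latency, most attempts hit holes that were closed long ago, which is exactly the "good enough" indication the parent describes.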
Second comment:
Hardware can be meaningfully rated with a MTBF value because the errors are random, physical (often mechanical) defects. It is a measure of how likely a given operation is to fail.
With software, usually the same operation always fails. Software errors are design errors, not random failure.
I would like to add some nuances to your statement. Yes, hardware failures due to malfunctioning or breaking components are random - provided the design itself is flawless - but design flaws may also exist.
The same exists in software as well. Some failures are caused by design flaws; others are introduced through human error (typos that make it through the compiler, etc.). The latter are by definition random, and can happen anywhere in the code.
When a software fault occurs, it produces the same error every time that piece of code is executed with the same data. But the same applies to a mechanical component as well: if it can break in one way, that's the way it breaks when it breaks.
However, mechanical objects can break in an unpredictable number of ways, since mechanical (and electrical) items are affected by a huge number of unknown and apparently random outside parameters. But that is true for software as well: software very seldom executes in exactly the same way, due to circumstances not controlled by the actual application (other applications, the actual sequence of user or I/O input, swapping, availability of resources, timing and interrupts, etc.).
Thus, the failure might appear random to an outside viewer in exactly the same way mechanical failures can appear random.
Re:Clarification on MTBF and MTTF (Score:1)
year is 9.3, so the MTBF for humans is ~108 man-years.
What would it count for? (Score:1)
I also know for a fact that nobody has ever TRIED, or even cares about trying. My computer could quite possibly have more security holes than Swiss cheese. I'm a non-target.
I try to secure my box because random script kiddies don't care if the victim is a major bank or some random user's game box. They just want to prove they are cool by destroying something.
I also avoid script kiddies.
So what would it count for?
Nobody ever hacks my computer because they don't want to. But ultra-secure computers get hit daily. The difference is the importance of the box, not the quality of its security.
Re:mother of a website!? (Score:1)
One of ten Netra t1s...
Apache Server Status for (restricted)
Server Version: Apache/1.3.12 (Unix) mod_perl/1.24
Current Time: Monday, 21-May-2001 11:35:36 BST
Restart Time: Saturday, 19-May-2001 19:14:10 BST
Parent Server Generation: 0
Server uptime: 1 day 16 hours 21 minutes 26 seconds
Total accesses: 1650360 - Total Traffic: 3.8 GB
CPU Usage: u250.07 s78.74 cu7.47 cs2.02 -
11.4 requests/sec - 27.1 kB/second - 2444 B/request
25 requests currently being processed, 7 idle servers
One of two E450s....
Apache Server Status for (restricted)
Server Version: Apache/1.3.6 (Unix)
Current Time: Monday, 21-May-2001 11:43:40 BST
Restart Time: Monday, 21-May-2001 00:00:00 BST
Parent Server Generation: 234
Server uptime: 11 hours 43 minutes 40 seconds
Total accesses: 62688 - Total Traffic: 396.9 MB
CPU Usage: u1 s1.11 cu.09 cs.04 -
1.48 requests/sec - 9.6 kB/second - 6.5 kB/request
12 requests currently being processed, 6 idle servers
Re:Could be avoided by use of a non-broken perm. s (Score:2)
I would love to have Zope-style ACL capabilities in Linux, and I'm happy that a group is working on replacing this user/group/world thing.
Check out the kind of security problems Zope has had. Almost every one is actually an admin error that got "fixed" so others could not make that mistake, i.e. making things essentially suid root. I think if Linux had full ACLs with very fine-grained control and large list sizes, the security problems in Linux would drop like a rock.
Re:Some Actual Research (Score:1)
Crispin
----
Crispin Cowan, Ph.D.
Chief Scientist, WireX Communications, Inc. [wirex.com]
Immunix: [immunix.org] Security Hardened Linux Distribution
Now available for purchase [wirex.com]
Some Actual Research (Score:5)
----
Crispin Cowan, Ph.D.
Chief Scientist, WireX Communications, Inc. [wirex.com]
Immunix: [immunix.org] Security Hardened Linux Distribution
Now available for purchase [wirex.com]
Re:lucky (Score:3)
I've been on a site where I specced out the firewall rules with the native sysadmin; between us, none of *our* boxes ever got cracked in the last 18 months. However, that didn't stop some schmuck sticking a RH6.x box with rpc.statd on a public IP# - give it a week, *boom*.
3 hrs' turnaround including a forensics copy, a custom build of RH and restoring data - I was quite proud of myself. And it's never been cracked again, either. And then I finally found a writeup of a similar incident, read `check for kernel modules', took another look at the forensics copy, and felt very small...
Maybe another statistic to add to `expected time to (between) cracks' would be `expected turnaround time' as well - part of your security strategy has to be having a spare box to replace anything with.
~Tim
--
Re:OpenBSD - Far from dead! (Score:1)
Well, I for one use OpenBSD for all daily needs at work, and that's what really counts. It doesn't matter money-wise that I use all the free Linux ISO-image distros at home, while my company buys every single OpenBSD release and I put it on n+1 servers running everything from firewalls to DNS to desktop systems.
I still use a home-rolled Linux system at home because of the much larger number of multimedia applications and the support for SB Live! and SMP in Linux, which OpenBSD currently lacks.
Other than that, I have no reason whatsoever to run Linux on production systems at work; I can't imagine putting a Linux system on the line to be cracked in next to no time once exposed to the Internet.
++ Ray
Re:OpenBSD (Score:2)
I agree that OpenBSD is probably the most secure OS out of the box, maybe even the most secure OS period. But the idea that FreeBSD is the highest performing OS is just plain silly. Pick any commercial enterprise-level Unix variant at random and it will outperform any open source OS, including FreeBSD. When FreeBSD can scale efficiently to 64+ processors like Solaris and AIX, then maybe it will be in the same ballpark as these operating systems. Until then it's small potatoes.
Of course, you could have been trying to say that it was the highest performing *BSD variant, in which case I'd say you were right.
Lee Reynolds
Re:mother of a website!? (Score:1)
For webservers, check out: http://www.rackspace.com/dedicated/recommended/se
under the enterprise section. This is the average sort of website I deal with routinely. Note that Rackspace tends to skimp on redundancy... usually you see more redundant servers on most large-company websites.
Re:OpenBSD (Score:3)
Any normal (read: not yours) company will have at least dual- or quad-CPU hardware running in a cluster for their webservers; in most cases this may be outsourced or hosted at Exodus or other NOCs. OpenBSD can support only one CPU at this time, so that blows it out of the water. FreeBSD is in a niche (not many admins can use it - companies hire from the mass market, so the skill set definitely limits FreeBSD's existence) and doesn't scale properly as far as CPUs go (yes, I know about the FreeBSD 5 improvements - they aren't here yet, and it still has the giant kernel lock).
Linux is gaining more and more, since the availability of admins is there, it's easy to set up (even if crackability exists), and it's familiar enough with Apache. It also scales well now with the 2.4 kernel and supports a lot of rackmount hardware at datacenters.
Solaris is usually what you find with Netscape iPlanet or Apache at most companies; it scales well but costs an arm and a leg.
NT/IIS is another alternative for those cheapo firms that can't afford to hire admins to run UNIX.
I tend to disagree (Score:2)
--
Not relevant was Re:Not practical (Score:2)
Widespread exploits depend on out-of-the-box insecurity. Similarly, security ratings of locks depend on their out-of-the-box characteristics, not on what happens after you've 'customized' them with a hacksaw.
However, the uncertainty of security ratings is almost certainly dependent on the install base of the software. So, e.g., the certainty (not the value) of the security level of Windows variants is much higher than anyone else's, while that of, e.g., MVS should be fairly low, as there are far fewer folks with access to mainframes.
-Baz
[1] This situation is different where there is a widely deployed insecure protocol, such that almost every implementation can be compromised by exploiting the same flaw in the protocol. However even this boils down for the most part to knowing the OS patch level.
The wrong question (Score:3)
who's improving their 'mean time between felonious drunken assaults'?
Hardware failure is inevitable and (generally) unpredictable. Gross statistical measures are one of the few meaningful ways to plan and budget for failures. Security is not the same way -- breaches can be avoided through vigilance and good management. Talking about 'mean time to exploit' is a cop-out -- it's surrendering responsibility to the whim of fate.
cheers,
mike
Calculus (Score:2)
Just read off the lowest one. The security of a machine is only as good as its least secure package.
Of course, you run into the problem of who will be the authority issuing these ratings. No software developer would be honest or forthright about their number (or the measurements would be entirely incomparable), so a trusted external body would be responsible for the ratings... and they'd be liable for all manner of litigation, from trademark infringement to libel to spoiling the market for a product by telling the truth about it.
So I predict it'll never happen. It's hard enough to get vague information about security breaches. Nobody'd dare quantify the risks.
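For what it's worth, the "read off the lowest one" calculus from the parent is trivial to express. Here is a minimal sketch; the ratings and package names are invented for illustration, not real assessments.

```python
# Weakest-link aggregation: a system's security rating is the rating
# of its least secure package. Numbers here are made up.

def system_rating(package_ratings):
    """Overall security rating = the minimum per-package rating."""
    return min(package_ratings.values())

ratings = {
    "kernel": 8,
    "named": 3,     # one bad daemon drags the whole box down
    "sendmail": 4,
    "apache": 6,
}
print(system_rating(ratings))  # 3
```

The hard part, as the parent says, is not the arithmetic but getting honest, comparable per-package numbers in the first place.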
lucky (Score:1)
Re:It doesn't really work (Score:2)
Second, cosmic rays do not actually cause memory errors. What can cause errors are alpha particles, usually given off by decay of trace radioactive elements in the ceramic casing of ICs. This is a problem for any IC, and they have to be designed to withstand it. I don't know if that is actually a significant source of errors in modern memory or not, though.
Of course, the #1 cause of memory errors is defective memory, but that is another issue entirely
It doesn't really work (Score:3)
With software, usually the same operation always fails. Software errors are design errors, not random failure.
While measuring the frequency of break-ins is perhaps a useful metric, it shouldn't be confused with something like an MTBF for hardware. Also, the frequency of break-ins by script kiddies who scan and more-or-less target systems at random and just want a shell, or to deface your webpage, is completely unrelated to the frequency of deliberate, directed attacks against you to steal or corrupt data. The latter attackers may have access to more sophisticated tools, better knowledge of your network and software, etc. Trying to apply numbers gained from random attacks to indicate your defensibility against directed attacks is severely misguided.
Also, attempting an MTBF rating doesn't take into account visibility. If I drop most incoming connections through my cable modem and run a port scan detector, most people scanning my whole ISP will not even notice I am there. This doesn't work for a public website that many people know exists, even if it does drop their traffic to port 31337. Hardware MTBF is usually given in "operating hours" or some other well-specified metric. I don't see how to do that for software. A better metric might be the ratio of intrusions/attempts, but since I would wager the majority of intrusion attempts, and even many successful ones, are never discovered, that isn't a really good metric, either.
Could be avoided by use of a non-broken perm. sys (Score:4)
No wait, I forgot - most Unixes (apart from TrustedBSD, Trusted Solaris, Trusted IRIX, etc.) are still using a completely non-secure, non-granular permission system - i.e., in terms of security, a broken one.
Every service a machine provides should run under an account with the same name, with permissions to do what the program does and no more.
Capabilities can provide some of this function, but they still can't fix some aspects of this fundamentally broken system. E.g., I have some word-processor documents stored on a server. Some users need to read and write the files, another group needs to read the files, and all other users should have no access at all. There are ways around this, but it's hacky and makes the system much harder to administer, compared to a four-line ACL.
The Linux ACL and Extended attributes program [google
And while we're on the topic: please don't ever assume UID 0 belongs to an account called root (apps, not documentation). Drone about STO all you want, but obscurity as a layer on top of real security simply does slow crackers down. Haven't you ever used a honeypot?
Re:Bad comparison (Score:1)
That might explain the lack of scans against that particular box. (Your co-workers could always begin scanning your box, if it would make you feel better about all of those ports/services that you closed)
---
Interested in the Colorado Lottery?
Bike Locks in New York City (Score:1)
Unix centric (Score:3)
If you applied such a system to current Linux, you might think it kind of silly, since there would be a fair bit of redundancy (anyone who got root could do anything). However, I hope that in a couple of years there will be several security systems that plug into Linux and make your current concept of root, user, and group privileges inadequate.
Re:Bad comparison (Score:1)
Clarification on MTBF and MTTF (Score:2)
Humans have a mean time between failures of something like 900 years. That means for every 900 man-years lived on Earth, someone dies (somebody correct me if I'm wrong on that number). Humans, however, have a mean time to failure of about 72 years: the average time a human lives before failing.
O-umlaut (Score:1)
Re:Clarification on MTBF and MTTF (Score:1)
Wouldn't the number work out to be very close to 72? I don't understand why 900 is so large, unless I'm doing something wrong and comparing apples to oranges.
Mean time between reboots (Score:1)
Of course, there are Netcraft's ratings, but those aren't really that accurate.
-Jon
Same reason MacOS doesn't get hax0red (Score:1)
No-one except for the army that is.
-Jon
Re:Some Actual Research (Score:1)
Crispin - Where have you guys been? I was wondering when you would re-release the 7.0 version.
Does this release take care of the compilation problems of RH7? Can I build a 2.4 kernel with this? These questions and many more... are not answered on your web site! I used Immunix 6.2 for a while, and I liked it a lot, but I would really like to use XF86 4.03, the 2.4.x kernel, and the latest GNOME.
Please respond in this forum or to alewis@knightsbridge.com. I have a lot of interest in seeing what my company and my customers would think of this product.
Thanks!!!
Re:The wrong question (Score:1)
=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\
Re:You think you know, but you have no idea (Score:1)
Hmm, how do you do O-umlaut? (I know Goedel is acceptable in German, but won't be in a bookshop database)
----
Bruce Schneier discusses this (Score:2)
insurance companies cover this? (Score:2)
No, because all you can do is measure the past of a piece of software, and that doesn't tell you about the current version or a modified version. The function for calculating the score of a piece of software would be unfair, because it probably couldn't take into consideration the age and popularity of the software, or whether it is open or closed.
It would be better to rate admins by their history.
Re:OpenBSD (Score:1)
Re:The wrong question (Score:1)
Of course, from a marketing point of view, that cop-out is preferable to being realistic, and I suppose that's the main reason why this is never going to work: companies don't admit security breaches, they cover them up.
Re:OpenBSD (Score:2)
Re:I tend to disagree (Score:2)
Software shouldn't worry you (Score:3)
Re:Clarification on MTBF and MTTF (Score:2)
6 billion people
900 people-years MTBF
Now just divide to find the answer:
900 people-years / 6 billion people
= 0.00000015 years between failures (deaths)
= about 4.73 seconds between failures
Make sense now?
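Working the same arithmetic in code (fleet-wide failure gap = per-unit MTBF divided by fleet size; the 365.25-day year is my assumption):

```python
# Fleet-wide failure gap = per-unit MTBF / number of units in service.
# Population and MTBF figures are from the parent comment.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

mtbf_years = 900            # 900 people-years MTBF
population = 6_000_000_000  # 6 billion people

gap_years = mtbf_years / population
gap_seconds = gap_years * SECONDS_PER_YEAR

print(gap_years)              # 1.5e-07 years between failures (deaths)
print(round(gap_seconds, 2))  # ~4.73 seconds between failures
```

This is also why 900 man-years and a 72-year MTTF don't contradict each other: the 900 is per man-year of exposure across everyone, not per person.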
Re:OpenBSD (Score:2)
Well, whether you are right or wrong, the fact is that the 'trusted' features everyone is raving about are available for OpenBSD right now (even if not part of the main system) - go read deadly.org to hear about the patches... Not to mention the 'TrustedBSD' project that aims to implement ACLs on FreeBSD, which will no doubt spread to OpenBSD and NetBSD, with security improvements in the OpenBSD code...
---=-=-=-=-=-=---
Re:OpenBSD (Score:2)
---=-=-=-=-=-=---
Re:Not for software (Score:1)
Using "Obscurity" isn't necessarily STO (Score:1)
Absolutely. But this isn't Security Through Obscurity.
The complaints about STO from the security community came from the days when fewer people understood security. Some people who thought they did (but didn't) reasoned that hiding information was the same as removing it.
This led to all sorts of strange things, including "proprietary" protocols and algorithms. Many of these secret encryption algorithms were easily broken because the algorithm was flawed. One of the first things you learn about crypto is that keeping an algorithm secret does not increase its strength. Similar things have happened to many proprietary protocols.
The concern about obscurity in any form has been that it's often used as a crutch, and that's led to a community backlash.
Re:OpenBSD (Score:1)
I would be inclined to agree with you. I can't talk for OpenBSD because I've not tried it, but I definitely appreciate the craftsmanship that seems to have gone into FreeBSD (a nice summary of which is presented here [cons.org]).
The rag-tag "throw a zillion monkeys at the problem" chaotic nature of the Linux evolution doesn't help for things like consistency - something that I appreciate in FreeBSD. Then again, I'm a Win32 coder at heart... :op
This sounds like an old Dilbert... (Score:1)
The problem is that you don't know the bug is there until it is exploited. So the question becomes: how do you estimate the number of bugs in a program? There are several statistical rules of thumb, but those aren't reliable enough to list as a spec.
The basic issue is that hardware manufacturers can test a statistical sample of their product and use those results to estimate MTBF for that product. With software, each program is unique, so it's difficult to say with any certainty that tests done on past software will extrapolate to new software, even if the statistical analysis is sound.
And when a program "breaks" (i.e., a flaw is discovered), all copies of that software (or configuration) are affected. If a company buys 1000 copies of the same hardware, they can be confident that, on average, only a certain percentage of that HW will fail before the MTBF point.
With software, does it matter if the average time-to-exploit is high, if *your* current software package is hacked two weeks after installation? All one thousand software copies in the organization are now vulnerable; all have to be fixed/replaced. So that pretty MTTF spec isn't really very useful anymore.
Re:The wrong question (Score:1)
"KLOC" not "KLOCK" (Score:1)
Re:mother of a website!? (Score:1)
netcraft results (Score:1)
The site 140.183.234.14 is running Phantom/2.2.1 on MacOS.
The site www.goarmy.com is running Netscape-Enterprise/4.0 on Solaris.
The site www.cia.gov is running Netscape-Enterprise/4.1 on Solaris.
Re:Same reason MacOS doesn't get hax0red (Score:2)
The reason Mac OS (classic) doesn't get hax0red has to do with the OS's architecture: a circa-1982 design with no command-line interface, no Unix or DOS roots, and no real non-GUI way to control the beast. The closest I've seen was a background-only daemon that listens on a certain TCP port for AppleScript commands which it will then execute. Not too useful.
mother of a website!? (Score:5)
Jippity! If "any normal company" has clusters of dual- and quad-CPU machines to run their websites, I'd hate to see the hardware that runs their databases! And by the same token, I guess I haven't experienced these websites from "any normal company".
I agree that it's a bit of a shame that oBSD isn't an SMP monster, but that fact alone really isn't much of a problem these days, especially with 1.0+ GHz processors being the norm. Of the websites I help maintain, one handles an average of 1.2 million requests per day (average of about 14 requests per second, and about 8 GB/day). Granted over 95% of that is static content, but it's all hosted through a Pentium 233 running a heavily-patched version of Red Hat 5.2 and the load average rarely goes above 0.15. Another website handles the registration and accounts of a regional academic competition program and gets an average of 5 CGI hits per second. Using MySQL and Apache+ModPerl on a PII-266 atop Red Hat, the whole works chugs along fine with a load average around 0.10.
oBSD on just one modern CPU may have its limitations, but it could easily saturate a pair of 45 Mbit DS3/T3 links with dynamic (PHP/Perl/etc.) content without much CPU load at all.
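As a sanity check on the traffic figures above (assuming decimal kB/GB and an 86,400-second day; the DS3 comparison is my own addition):

```python
# Back-of-the-envelope check of the parent's numbers.
SECONDS_PER_DAY = 86_400

requests_per_day = 1_200_000
req_per_sec = requests_per_day / SECONDS_PER_DAY
print(round(req_per_sec, 1))   # ~13.9, matching the "about 14" above

bytes_per_day = 8e9            # 8 GB/day of mostly static content
kb_per_sec = bytes_per_day / SECONDS_PER_DAY / 1000
print(round(kb_per_sec, 1))    # ~92.6 kB/s sustained

# Two 45 Mbit/s DS3 links, for comparison:
ds3_pair_kb_per_sec = 2 * 45e6 / 8 / 1000
print(ds3_pair_kb_per_sec)     # 11250.0 kB/s of capacity
```

So the P233 in the parent is pushing well under 1% of what a pair of DS3s can carry, which supports the point that single-CPU boxes are rarely the bottleneck.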
Re:what the fuck is this? (Score:1)
Re:what the fuck is this? (Score:1)
I doubt you could apply this to security (Score:1)
Re:The wrong question (Score:2)
software fails also... ever get a blue screen?
Yeah, but the point is that hardware is a physical thing that suffers from the weaknesses of being a physical thing. It has certain tolerances you can't exceed, otherwise it'll break, and it's going to wear out eventually anyway.
Software, OTOH, is an idea, and doesn't suffer from any of the "weaknesses of the flesh"; you can't wear out a concept, can you? The old adage saying you can't build bug-free software isn't saying anything about the nature of software, it's a statement about the weaknesses of the humans who write software. Finding a way to write bug-free software is the Holy Grail of software verification research.
And who says you can't write bug-free software, anyway? I've done a few decent "Hello, World" implementations in my time...
Re:OpenBSD (Score:1)
Re:I don't believe it can be predicted. (Score:1)
OT: Changing colors on the BSOD (Score:1)
BSOD Properties and Other Customizations [pla-netx.com]. This page has a little VB3 app to easily let you make the changes. Or, if you don't want to bother with that, it tells you what to add to SYSTEM.INI.
I made my BSOD red for a while too. But I found it induced too much anger in me, so I switched it back.
I don't believe it can be predicted. (Score:1)
Lot more variables to calculate than hardware (Score:3)
Re:Software metrics does not a good product make. (Score:1)
They actually employ people who specialise in CMM, and a great deal of planning goes into it all. But then, when it has a military application, I guess you can never be too careful.
Re:The wrong question (Score:1)
Re:what the fuck is this? (Score:1)
Re:Not for software (Score:1)
In that respect, then, your argument winds up supporting a "mean time between rootshell" calculation; for reasons discussed elsewhere, though, such an idea is pretty daft..
Flawed Premise (Score:2)
Secondly, I don't think there's any way of coming up with an objective standard, a base line by which such a meantime could be judged. No two servers are alike, and even within broadly comparable applications it's more complex than just measuring traffic to the server, average load, etc. Some servers are more visible, or more desirable (from a cracker's perspective) -- either from a kudos point of view, or in terms of the potential use that can be obtained from a cracked server. It's pretty meaningless to say "my server's been up for months without being cracked" if it's some no-name machine that nobody knows or cares about.
Security's also not just to do with the software -- the computing power of the machine also needs to be taken into account. You're not going to be able to crack a password list as quickly on a 486 as on the latest Pentium, for example.
So: no, it can't be done, reasonably. And asking what kinds of mean times we'd expect to see is, frankly, trolling for a platform flamewar.
You think you know, but you have no idea (Score:1)
Art At Home [artathome.org]
Re:You think you know, but you have no idea (Score:1)
On a somewhat related note, GEB is indeed Gödel, Escher, Bach.
Check it: HTML Character Entities [http]
Art At Home [artathome.org]
Re:You think you know, but you have no idea (Score:1)
It'll definitely break; you just can't know when or how until it happens.
Art At Home [artathome.org]
Re:He sounds right to me (Score:1)
Art At Home [artathome.org]
Like a vault, only not... (Score:2)
The reason why you can do this with a safe is because there are so many known quantities. You know what the safe is made of, and how it is built, and the properties of both, with no hidden surprises. The vulnerabilities of both stay constant over time as well; you don't ever hear about a previously undiscovered buffer overflow in carbon steel :) And finally, the methods of attacking safes and vaults do not change quite so quickly as hacking methods evolve, so you can know authoritatively what would be done by an attacker, and account for all of it.
Re:You think you know, but you have no idea (Score:1)
Re:It doesn't really work (Score:1)
But if it is the RAM that was affected directly by the cosmic particle, wasn't it the hardware that was at fault here? You can have software measures to counter it, such as checksumming the memory, but the failure itself did not originate in the software.
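A hedged sketch of the "checksumming the memory" countermeasure mentioned above: a simple XOR checksum that can *detect* a cosmic-ray bit flip in software, even though the fault itself is in the hardware. The function and variable names are illustrative, not from any real ECC implementation.

```python
# Detecting (not preventing) a hardware bit flip in software.
from functools import reduce

def checksum(data) -> int:
    """XOR all bytes together; any single flipped bit changes the result."""
    return reduce(lambda a, b: a ^ b, data, 0)

mem = bytearray(b"important data")
saved = checksum(mem)

mem[3] ^= 0x04                  # simulate one bit flipped by a cosmic ray
corrupted = checksum(mem) != saved
print(corrupted)                # True: the flip is detected, not prevented
```

Real hardware ECC goes further and corrects single-bit errors, but the point stands: the failure originates in the hardware, and software can only notice it.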
Re:You think you know, but you have no idea (Score:1)
Thanks.
BTW, here's how to type a letter that doesn't exist on your keyboard. In DOS, you could type <Alt>-ddd, where ddd is the decimal entry from the current code page. In Windows there is the "Character Map" utility for that. I guess other OSs have their own means to that end.
Re:You think you know, but you have no idea (Score:1)
Apology accepted :-) I guess my lack of familiarity with American culture causes me to sometimes accept things at face value when they're supposed to be subtle references.
Extrapolating the future from the current situation will get you in trouble. Reality changes. No software can take into account all future input. Look up "misfeature" in the jargon file.
I think you understood the opposite of what I was trying to say - exactly because you cannot predict all the future uses a software unit will have, you cannot perform a statistical analysis of when, if at all, it'll break.
Not for software (Score:5)
Software, on the other hand, is a digital entity, so its function doesn't change with time. If it was broken on the 10,000th time around, it was broken all along. Whether anyone noticed it was broken is another issue entirely.
it does (Score:1)
Re:meantime? (Score:1)
(Time1 + Time2 + Time3) / 3 = mean time
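That formula can be sketched in Python, generalized from three fixed intervals to a list of incident timestamps (the dates are made up for illustration):

```python
# Mean time between incidents: average the gaps between consecutive break-ins.
from datetime import datetime

def mean_time_between_incidents(timestamps):
    """Average gap between consecutive incidents, as a timedelta."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return sum(gaps, ts[0] - ts[0]) / len(gaps)

incidents = [
    datetime(2001, 1, 1),
    datetime(2001, 2, 1),
    datetime(2001, 4, 1),
]
print(mean_time_between_incidents(incidents).days)  # 45
```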
Re:The wrong question (Score:1)
Re:it does (Score:2)
It is "Mean Time To Failure" (MTTF), and it's defined as "the average time interval between two failures."
Sorry, no good links, but MTTF is what you're looking for.
Re:OpenBSD (Score:1)
While a little fanatical, if you qualify it as "oBSD is the most secure general-purpose OS in existence," it rings closer to the truth. Of course, even the most secure OS in the world is worthless in the hands of an inept administrator.
She feels me I can taste her breath when she speaks.
Re:OpenBSD (Score:1)
In five years of running an ISP I never had any problems finding highly qualified FreeBSD admins. Linux is gaining more and more now that the availability of admins is there.
If that's your metric then perhaps Windows is the way to go. There's a ton of MCSEs out there!
OpenBSD (Score:3)
Our company changed over to OpenBSD from Red Hat because we were fed up with all the root exploits, all the patches all the time, and the incoherent way in which Linux as a whole tends to be organised -- i.e., Linux is the kernel, not the OS. OpenBSD is the entire OS, and is much more sane, IMHO.
The problem with Linux is the general chaos. Great for hackers, and definitely much better for the desktop user, but oBSD and the *BSDs generally are much better in production environments.
Put simply, oBSD is the single most secure OS in existence, and fBSD the highest performing. Take your pick; it's no contest as far as BSD vs. Linux is concerned for my company.
Re:You think you know, but you have no idea (Score:1)
Re:The wrong question (Score:1)
Re:not useful (Score:2)
Bad comparison (Score:2)
During my first 5 minutes of looking at BlackICE output, I got hit from about 4 different directions as people scanned through the net...
Bruce Schneier covers this in detail. (Score:2)
Cryptogram March 2001 [counterpane.com] has an article about it, for example.
--
Clarify (Score:2)
The reason I find this meaningful is that it helps measure workload. Am I going to take the machine down once a week to apply a patch, or once a year? The shorter the time between new exploits, the more work I have to do to maintain the system.
--
Darthtuttle
Thought Architect
Software metrics does not a good product make. (Score:2)
Long ago, companies measured programmer productivity using KLOCs, thousand-line blocks of code. The more KLOCs you could kick out on a task in a given time period, the greater the perception that you were working harder. We all now know how easy it is to manipulate that perception: 500 lines to add two integers, or thousands of lines of useless looping code documented to look like it's crucial to the system.
Proper measurement of failure is further compounded by the complex nature of most products written in OOP. Underlying components, physical devices, and operating system issues can be mistaken for problems with the software application, when in fact the application itself requires no modification to fix the problem. Metrics also rely on fixed points in time as references, which makes matters worse, since some problems are beyond the scope of the project (e.g., the product works fine, but the customer later upgrades video drivers that cause the app to break).
Carnegie Mellon University [cmu.edu] has pioneered the software maturity analysis area with its Capability Maturity Model [cmu.edu] for software shops (think ISO 9000). If I were a large customer (say, Boeing), I would probably base my purchase decision on the CMM rating of the software team that created the product rather than some silly arbitrary metric that most suits probably wouldn't comprehend anyway.