






Ask Slashdot: How Are So Many Security Vulnerabilities Possible? 354
dryriver writes: It seems like not a day goes by on Slashdot and elsewhere on the intertubes that you don't see a story headline reading "Company_Name Product_Name Has Critical Vulnerability That Allows Hackers To Description_Of_Bad_Things_Vulnerability_Allows_To_Happen." A lot of it is big brand products as well. How, in the 21st century, is this possible, and with such frequency? Is software running on electronic hardware invariably open to hacking if someone just tries long and hard enough? Or are the product manufacturers simply careless or cutting corners in their product designs? If you create something that communicates with other things electronically, is there no way at all to ensure that the device is practically unhackable?
10/90 (Score:5, Informative)
Is software running on electronic hardware invariably open to hacking if someone just tries long and hard enough?
This is 10% of the problem
Or are the product manufacturers simply careless or cutting corners in their product designs?
This is 90% of the problem.
And 90% of the 90% are the biggest boys (Score:5, Informative)
That can be simply listed as (in the order that I see them):
- Microsoft as an OS vendor (I know I'll get attacks from various ACs that think any criticism of MS is unfair, but they are putting way more energy into sucking users' personal data into their servers than into protecting said personal data)
- Large service companies with poor security for customer databases (I just saw Uber had a big hack last year that they've been trying to keep quiet).
- The 10% or so of the user population at large who don't have the intelligence to question email/text/phone/Facebook/etc. requests for their personal information.
The remaining 10% would be poorly defined standards (for example IoT) where the possible vectors and impact of security intrusions have not been thought through.
Re:And 90% of the 90% are the biggest boys (Score:4, Informative)
Software is complex. Any non-trivial piece of software probably contains a bunch of libraries that are themselves complex, and built on a best-effort/time basis. At almost every stage there is the potential for abuse. Where a library appears to be secure in and of itself, it may contribute to poor security when it is used in ways that were not anticipated.
Operating systems are complex. They themselves are made up of many components which are made up of many libraries, which themselves may be made up of many libraries. Security vulnerabilities may be anywhere in there, or emergent from the collective. Combining a relatively secure OS with a poorly written application which runs with user privileges may expose other issues.
Operating systems and programs often support old interfaces. That is not automatically bad, but it is yet more attack area you have to cover and eventually deprecate.
Operating systems and programs may be written in inherently less secure languages such as C/C++, and it may even be for good reasons, but buffer overruns may allow control of the program flow. Just get the overflow to reach your tailored jump instruction and your tailored data (which is really more CPU instructions), and now you have control.
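To make that concrete, here's a minimal sketch of the unchecked-copy mistake; I've written it as unsafe Rust so the hazard is explicit (in C the equivalent memcpy compiles without a murmur). The buffer size and names are made up:

```rust
// Sketch of an unchecked copy into a fixed-size buffer. Safe Rust either
// rejects this or panics at runtime; C happily compiles the unchecked form.
fn copy_input(input: &[u8]) {
    let mut buf = [0u8; 16]; // fixed-size stack buffer

    // Bounds-checked version: panics instead of corrupting memory.
    // buf[..input.len()].copy_from_slice(input);

    // C-style version: no bounds check. If input.len() > 16 this writes past
    // the end of buf, and a crafted input can eventually overwrite a return
    // address with an attacker-chosen jump target.
    unsafe {
        std::ptr::copy_nonoverlapping(input.as_ptr(), buf.as_mut_ptr(), input.len());
    }
    println!("{:?}", &buf[..4]); // keep buf live so the copy isn't elided
}
```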
People are stupid. People are lazy. Even smart people may not do an exhaustive search for vulnerabilities in everything they download, since, well, the job needs doing, so they do the quick check and some additional reasonableness checks and move on with life. Still, clicking on what you should not is a bad thing. If you see a web page that just looks scary, power off your PC manually and don't reload your old tabs on startup. That will prevent some bad things, but not all.
Hackers exist, be it a nation state such as Russia or even the United States, or simple criminals. Some of them may even be working on obscure OSS libraries that other famous packages use. I'd almost bet some vulnerabilities are deliberately introduced. I'd pretty much bet some work for Microsoft, likely from every major nation state in the world. It is hard to secure against unintentional backdoors, and even if they don't have them now, you can expect some deliberate backdoors in future updates. Hell, how do you know that phone update is of the good, and not some clandestine agency downloading a special edition?
Hardware exists with backdoors, some intentional, some possibly not. Some of the management interfaces are scary as hell. Are you really sure no one is taking advantage of any of that? Silicon is usually not manufactured with, say, complete control of the process. Are you sure the gates AMD specified are the only gates in the CPU? (Or Intel, for that matter.)
COTS hardware may not get updated, and even if it is, updates and such are likely a low priority. Have you plugged your smart TV in? Does it have a camera? A microphone? Do you really trust it? Is it in your bedroom? What about the gadget that monitors your kid? How can an Ooma really still be in business at that price? (I admit I have an Ooma device.)
Re: (Score:2, Troll)
Operating systems and programs may be written in inherently less secure languages such as C/C++, and it may even be for good reasons
Or bad reasons like Comp Sci elitist "I'm so smart, I'm not going to fuck up" snobbery.
Ada FTW!!!
Re: And 90% of the 90% are the biggest boys (Score:2)
Honest question: why do so few companies use Ada? It looks like a pretty nice language... But I've literally never seen it in use at a private company.
Re: (Score:3)
Insufficient critical mass is probably reason #1. Why insufficient critical mass you ask? Java won users over by offering fairly easy tooling, good documentation, and an ecosystem of developers unified across Unix/Windows. It helped that Java also had a major corporate backer.
Which of these does Ada have? As far as I know, Ada's limited success is mostly due to its use by the DoD. Not the best at spreading the love.
In the *browser*, vs in the server (Score:3)
As you know, CGI programs run on the server. To do something in the *browser*, you had to use Java, or maybe Flash.
> And the Applet never caught on for too long.
Only for five years or so, but long enough to achieve critical mass since it was the only real option. ActiveX (formerly known as COM, formerly known as OLE) was very much not designed for the browser in the first place, and only worked (kinda) in IE, so it wasn't a real option for internet sites. It was used a bit for intranet.
Re: (Score:3)
When choosing which language to write code in there are the following questions.
1. Which language is popular enough to find new employees.
2. Is the language industry respected, so we don’t look like an amateur because we picked a joke of a language.
3. Does the language meet our business requirements.
4. Can I hide my code from others and my competition.
5. How quickly can it be coded in.
6. Does it support modern features.
7. Can it be deployed easily.
8. How forward compatible is it.
9. Is it cross platform.
Re:And 90% of the 90% are the biggest boys (Score:4, Insightful)
Microsoft as an OS vendor (I know I'll get attacks from various ACs that think any criticism of MS is unfair
If you take a minute to look at the bulk of major incidents in the last year, it's mostly poorly configured MongoDB and S3 buckets. No SQL Server, MS Exchange or IIS in the list. There's the occasional ransomware, but given the market share of Microsoft products, it's not bad at all.
Re: (Score:3)
Re: And 90% of the 90% are the biggest boys (Score:5, Funny)
People think agile is about "Getting shit done!".
It turns out to be "Getting shit, done!"
Re:10/90 (Score:5, Insightful)
Yes, the big issue here is that it's common knowledge that consumers by and large refuse to be bothered to get educated, and that the bulk of the major software development companies out there don't have leadership ethical enough to resist taking maximum possible advantage of that naivety. Unfortunately this knowledge gap is also being turned against our own government, even as our own government uses the very same knowledge gap on the general population. It's a huge ugly mess, really, and it says a lot about the spiritual deficiencies of humans as a whole, and I still completely, in all seriousness, blame Microsoft for starting it.
Re: (Score:3)
Kinda, but also, people have embraced the technology so fast that we can now do things we did not imagine -- and therein lies the rub, because whilst we didn't imagine what positives would be possible, we also didn't imagine what negatives would be possible. It is something of a blind process.
So, now we start to get experience of the negatives, and like anything else, we have to start trying to remedy them, just like with cars, where everyone who could, got one, and then we were horrified at how badly they
Re: (Score:3, Insightful)
Or are the product manufacturers simply careless or cutting corners in their product designs?
This is 90% of the problem.
This, so much this. Companies still view security as something that costs too much money to implement properly. It's cheaper to deal with the financial loss of a hack than it is to have decent security policies implemented by properly trained personnel who are responsible for patching security vulnerabilities and testing the network constantly. Security is a constantly changing state of being, but this last statement shouldn't really be news for the crowd that's drawn to reading /.
Re: (Score:2)
It's cheaper to deal with the financial loss of a hack, than it is to have decent security policies implemented ... ./
Actually in many (if not most) cases the people making the decisions simply don't think it will really happen to them.
Insurance would be great. That's how we got fire s (Score:5, Informative)
The fire code is written by the National Fire Protection Association, a group formed by insurance companies, in order to reduce their losses from fires. Underwriters Laboratories (UL Listed) who check products for fire and electrical safety - same thing. "Underwriters" means insurance companies. Insurance companies are professionals at analyzing and reducing risk and they do a VERY good job of it. They use very advanced methods to determine risk. I'd LOVE to see insurance companies get involved in IT security, the same way they are involved in fire safety. Ever noticed car commercials advertising their high IIHS safety rating? IIHS is Insurance Institute for Highway Safety, insurance companies testing cars to make them safer.
> Insurance can pay out on the promises, and the insurers themselves are borrowing against still future promises to pay, which when they come due can be rolled over or hedged and thus the cycle continues ...
That's not how insurance works. The insurance company uses mathematical models to determine that if they insure 10,000 customers with a given risk profile, about 1% of those customers will have a claim. Suppose the average claim is about $3,000. That's $300,000 the insurance company will have to pay out this year. Divided by the 10,000 customers, that's $30 per customer in claims. Each customer also costs $3 for mailing invoices and such, so the average cost per customer this year is $33. Therefore the premium they charge is $43: $10 gross profit per customer.
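If it helps, here's the same back-of-envelope arithmetic as a runnable sketch (all numbers are the illustrative ones above, not real actuarial data):

```rust
// Toy premium model: expected claims + overhead + target margin = premium.
fn main() {
    let customers = 10_000.0_f64;
    let claim_rate = 0.01;            // ~1% of customers file a claim
    let avg_claim = 3_000.0;          // average payout per claim
    let overhead_per_customer = 3.0;  // invoices, mailing, etc.
    let target_profit = 10.0;         // gross profit per customer

    let total_claims = customers * claim_rate * avg_claim;  // $300,000
    let claims_per_customer = total_claims / customers;     // $30
    let premium = claims_per_customer + overhead_per_customer + target_profit;
    println!("premium per customer: ${premium}");           // $43
}
```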
Insurance companies aren't betting hoping they don't have claims. They have a million customers, of course they'll have claims. With a million customers, the law of averages kicks in and they can predict rather accurately how much the total claims will be this year. So then they set the premiums (their prices) for the year a bit higher than their costs.
The one big thing that can screw that up is a major flood. A major flood could have a million people making claims all at once. That's why insurance companies don't sell flood insurance. Only the government sells flood insurance. (In the US at least).
Healthcare plans are not insurance (Score:4, Interesting)
Insurance is something that pays to cover risks: things that probably won't happen to you this year, where the expense would be more than the customer can afford to cover out of their own pocket.
For example, home insurance will replace your house if it burns to the ground. You buy insurance because you couldn't afford to buy a new house out of your own pocket. You don't insure against needing to replace a toilet paper holder, or paint the walls, or weed the garden. These are ordinary, expected expenses that you just pay.
Car insurance will replace your car if it gets totaled. The average driver doesn't expect for their car to get totaled, and can't afford to pay for a new one with their own cash. Car insurance does NOT cover gas, oil, tires, spark plugs - ordinary, expected expenses.
Modern US health care plans get involved in every little $30-$60 doctor visit, and all the bureaucracy and red tape doubles the total cost of simple things like a checkup or vaccine. That's NOT insurance. Insurance is for unexpected events that you can't cover from your own bank account. An annual check-up, or flu vaccine, is both expected and affordable; it's not an insurable risk.
We used to be able to buy medical INSURANCE, coverage for *unexpected* events too costly to pay from your own checking account (ie major surgery or catastrophic illness). That was fairly affordable. For the ordinary, expected health care expenses you kept a few dollars in the bank, and later in a specific bank account called a Health Savings Account. Over the years various things have forced more and more crap to be covered by "health care plans" - you can't just buy medical INSURANCE anymore. That's added a lot of paperwork expense to what used to be a $25 visit for a sinus infection. Now you have $25 worth of doctor time and $30 spent on paperwork with the healthcare plan and government, so it costs $55.
Re: 10/90 (Score:2)
Re: (Score:2)
...any time you have humans involved trouble and hilarity are sure to ensue.
You forgot needless, senseless, totally-avoidable, self-inflicted tragedy and suffering. Lots and lots of tragedy and suffering. The ratio of tragedy and suffering to trouble and hilarity roughly resembles the ratio of spam to anything else on the diner menu in the Monty Python "Spam" sketch.
Well, there's spam egg sausage and spam, that's not got much spam in it.
Strat
They don't know how to cost-effectively. Locksmith (Score:5, Informative)
I think most companies don't know how to produce reasonably secure software cost-effectively. They aren't motivated enough to spend a ton of money on security. So they give up on trying all that hard, to varying degrees.
Some companies try educating programmers a bit about security. That's good, but not sufficient. Programmers are constantly learning new frameworks, new libraries, new languages, new systems they have to integrate with ... They aren't going to be security experts too.
In my experience, the main cost-effective way to improve security is to have a security professional consult with developers at three points in the process of a software project. Then integrate part of what's learned into automated parts of the DevOps build and release process. One hour from a security person at each of these three points can really make a difference, not only in the current project, but in future projects. Have the security person join a meeting and be part of the discussion at these three points:
The initial overall design / architecture
This will allow the security professional to point out spots where security issues commonly occur, "be sure to use TLS (ssl) for this connection". It will also catch major architectural decisions that lead to big security problems that are very hard to fix later (such as an ISP planning on managing customer modems over their public IPs).
Finalizing the design details
Similar to the above, but at a finer-grained level
Pre-release testing and approval
Around the time you're starting integration testing, your security person can review the implementation based on notes they took in the two earlier stages. Some of these code-level checks can be added to your existing pipeline, so from then on Git will warn you immediately when you try to commit code that follows a dangerous pattern, such as use of std::process::Command with variables influenced by user input, or improper reuse of mutable buffers. (Here I use Rust terminology; the same errors can be made in most languages. Few bugs are language-specific.)
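For the curious, here's a sketch of the std::process::Command pattern such a pipeline check might flag, next to the safer form. The function names and the `wc -l` task are invented for illustration:

```rust
use std::io;
use std::process::{Command, ExitStatus};

// Dangerous: user input is spliced into a shell string, so input like
// "foo; rm -rf ~" executes a second command.
fn count_lines_risky(user_path: &str) -> io::Result<ExitStatus> {
    Command::new("sh")
        .arg("-c")
        .arg(format!("wc -l {user_path}"))
        .status()
}

// Safer: the input is passed as a single argv entry; no shell ever parses it.
fn count_lines_safer(user_path: &str) -> io::Result<ExitStatus> {
    Command::new("wc").arg("-l").arg(user_path).status()
}
```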
Not only will this catch issues in the current project, but everybody learns from the interaction in order to avoid creating similar problems in the next project. Instead of studying 2,000 pages about security, the developers are being made aware of the specific issues that they tend to create in the specific domain the company is writing software for.
This process allows one security professional to effectively serve many programmers on many projects, much like your database expert might work with developers on many projects. You can get a lot of security improvement for not much money.
* Before somebody says "2,000 pages is ridiculous. Security is easy, all you need is the OWASP Top 10", I'm a member of OWASP. I know very well the quick "rules of thumb" we publish. I've personally read over 10,000 pages about security and I don't know anywhere NEAR all that there is to know.
Re:They don't know how to cost-effectively. Locksm (Score:4, Funny)
Re: (Score:2)
True. And saves a LOT of time fixing bugs, scalabi (Score:2)
That's true. In fact we can SAVE developers tons of time by working with them to be more secure. Security isn't just confidentiality. It's making sure the software works correctly, even when someone is trying to make it break. Fixing bugs takes a lot of time. Following security best practices means the software won't mess up even when someone is trying to make it mess up. That implies it won't mess up when people are using it normally: far fewer bugs to investigate and fix.
It's also availability - avo
I forgot the other part, the locksmith part (Score:5, Interesting)
Another thing to think about to understand it is that for thousands of years, people tried to make secure locks; every time locksmiths figured out how to open them - pretty easily. Security is very hard. Offline, it's okay that Pop-A-Lock can open your lock for $20. That's the accepted level of security.
Online, people thousands of miles away can use computers to try to crack the security of tens of thousands of victims while the attacker is sleeping. They don't need to be skilled attackers; they just get hacking tools (software) from the relatively few people who are skilled. Popular web sites can be attacked a thousand times per day or more. Not even Chuck Norris can fight off a thousand attackers every day and never lose. On the WEB, security is very hard. You MUST have layers of security, because somebody will break through the first layer, and you must have well-disciplined operational security.
* Medeco has finally done a reasonably good job of making physical locks that are hard for a locksmith to open. Not impossible, but hard. Breaking a window is still as easy as ever, though.
Re: (Score:2)
Eh, I'd say it's not guaranteed that anything is invariably open to hacking.
I'd say it's 15% vendors being crap, 85% users/admins picking the password 'password' to secure things.
In the world of IoT of course, it goes to 100% crappy vendors.
Re: (Score:2)
Murphy's Computer Laws [imgur.com]
Meskimen's Law
There is never time to do it right, but there is always time to do it over.
Note: Murphy was an optimist.
Re: (Score:3)
This is 90% of the problem.
And 90% of those 90% are due to cheap, inexperienced, or incompetent people being in charge of implementing security.
Re: (Score:3)
Indeed. Developer incompetence is 90% of the problem. Languages, coding styles, etc. do not really matter. Incompetent coders demonstrate time and again that they can make it insecure, no matter what.
Re: (Score:3)
Actually it is far more complex.
1. Legacy systems and backwards compatibility: A lot of software we still use has roots in the pre-internet days, back when having a UI password, stored with weak encryption, was considered strong security. Most hacking back then would have been finding a service that didn't need a password and exploiting it by just using it. By the mid-1990s more systems were getting on the internet, allowing data to be sent without the UI, so buffer overflows were a big thing and
99/1 realistically (Score:3)
I would assert that it is a 99/1 ratio. Security is a solvable problem, and we have had rigorous, solid, time-tested methods of security since the 1960s and 1970s, be it physical security, network security, or security of a computer.
I did an "Ask Slashdot" about a similar topic a few weeks ago, but it was more inclined to why companies have no interest in security, because (to them) security has no returns.
The problem is that we already know how to do segmented operating systems, windowing systems that hav
Re: (Score:2)
Re: (Score:2)
Android doesn't have an "update overnight" option like iOS does?
Most of my non-techie friends with iPhones just hit "update overnight" and the update is done by morning; no interruption to their routine.
Re: (Score:2)
Re: 10/90 (Score:3)
Re: (Score:2)
Re: (Score:2)
We aren't using Rust enough. (Score:5, Funny)
The problem is that we aren't using safe-by-design programming languages like Rust enough. If we used Rust more, then many types of bugs and security flaws wouldn't even be possible. As more and more software developers follow Mozilla's lead and start to use Rust to build their software systems, we will see many common types of security flaws vanish.
Re: (Score:3, Informative)
Just in case the uninitiated might confuse this for a serious statement: to be clear, he's completely trolling.
Re: We aren't using Rust enough. (Score:5, Funny)
Re: We aren't using Rust enough. (Score:4, Insightful)
There are no panaceas in programming languages, but working with a framework that is carefully designed sure does cut down on human error down the road, even in the hands of a skilled programmer.
Ada is the de facto standard for onboard systems in airplanes for a reason. Language constructs for design-by-contract matter when it's important, and we're learning from the masses of botnets and hackery that there's a lot that matters, not just hospital systems and jet planes.
Rust is in fact building important features into the core that C++ is just trying to bolt on. We need less error-prone, more validated and tested code, and the frameworks to support that. We're designing systems that society relies on, and it's irresponsible to society to assume that every programmer is a rock star 100% of the time.
Re: We aren't using Rust enough. (Score:4, Insightful)
It does mitigate certain families of security flaws. However, most C programmers have had it beaten into their heads to generally do the right thing, so these are rarer than they used to be, though still real enough that removing them at the language level is valuable, and implementations like Rust deserve credit for taking measures that help here.
However, it simply cannot magically fix most modern vulnerabilities that get announced, as they are generally oversights in logic flows. So it's a bit worrisome to see people putting a bit *too* much faith in a language to provide 'automagic' security, when the design is more often the vulnerability than bungled pointers/mallocs/bounds.
Re: (Score:3, Informative)
>It does mitigate certain families of security flaws.
Crypto types like Rust because it deals with a particular class of problem that is impossible to mitigate in C: namely, knowing what the compiler will do, for sure. Will it erase that buffer, or will it optimize the erasure away? In C there is a wealth of examples where code that compiles to secure object code on one compiler manages to get broken by another compiler, or when the optimization level is changed. There is no "Best Practice" to make this
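For anyone who hasn't hit this: here's a sketch of the erase-the-buffer problem in Rust terms. A plain zeroing loop is a dead store once the compiler sees the buffer is never read again; a volatile write says the stores are observable. Real code should reach for a vetted crate (e.g. zeroize) rather than hand-rolling this:

```rust
use std::sync::atomic::{compiler_fence, Ordering};

// Wipe a secret so it doesn't linger in memory. A plain `for b in buf { *b = 0 }`
// can legally be removed by dead-store elimination; write_volatile cannot.
fn wipe(secret: &mut [u8]) {
    for byte in secret.iter_mut() {
        // Volatile write: the optimizer must assume the store is observable.
        unsafe { std::ptr::write_volatile(byte, 0) };
    }
    // Keep the compiler from reordering later code ahead of the wipe.
    compiler_fence(Ordering::SeqCst);
}
```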
Re: (Score:3)
Re: (Score:3)
Complete and utter bullshit. Language has little to no influence on code security. You can be insecure on many different levels and incompetent coders universally manage to make it insecure.
Incidentally, the claim that Rust is safe-by-design is a shameless lie. It is not. It just makes some specific security issues more difficult (but not impossible) to introduce. It does not do anything at all for most security problems.
Re: (Score:2)
But how does it compare to MongoDB?
Security costs money? (Score:4, Interesting)
Good security usually means re-architecting whatever legacy garbage fire has been burning off in the corner for the last 12 years, and that costs money. The insecure software is still generating revenue in its current state, and there are no consequences for poor software security. #Equifax
Security also not fixed by money... (Score:2)
Often companies make 'security teams' that go in and tackle the security problems so the other developers don't have to. This helps with things like, say, catching bundled third-party libraries with known CVEs, and answering security concerns *when the developers bother to think to ask*. However, so long as your rank and file developers don't think about security and how an attacker would go at their code pretty much all the time, there's no way a security team is going to be able to keep up with the 'organic' co
Re: (Score:2)
However, so long as your rank and file developers don't think about security and how an attacker would go at their code pretty much all the time, there's no way a security team is going to be able to keep up with the 'organic' code,
You can say that again. As long as programmers have the power to write Turing-complete code, they have the power to write security vulnerabilities.
Re: (Score:2)
Also bugs (Score:3)
How are bugs still possible? That's how security holes are still possible too.
Re: (Score:2)
A week or two goes by? (Score:2)
Git-r-done (Score:5, Informative)
The good news is that, much like Charlie Rose got embarrassed off the national stage, companies that don't take security seriously will hopefully be forced into bankruptcy.
Re:Git-r-done (Score:4, Insightful)
Engineering software typically involves confirming that everything that is supposed to happen, happens. Making software secure involves testing that everything that shouldn't happen, doesn't.
Testing for *every* possible failure case is hard.
Re: (Score:3)
>Testing for *every* possible failure case is hard.
But a little spot of formal methods can go a long way.
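A small taste of that, assuming the third-party proptest crate: a property-based test asserts an invariant over thousands of generated inputs rather than a handful of hand-picked cases. The truncation helper is a made-up example:

```rust
use proptest::prelude::*;

/// Truncate to at most `max` bytes without splitting a UTF-8 character.
fn truncate_utf8(s: &str, max: usize) -> &str {
    let mut end = max.min(s.len());
    while !s.is_char_boundary(end) {
        end -= 1; // back up to the nearest char boundary
    }
    &s[..end]
}

proptest! {
    #[test]
    fn never_panics_never_exceeds_max(s in ".*", max in 0usize..64) {
        let t = truncate_utf8(&s, max);
        // The "shouldn't happen" cases: overflow and a non-prefix result.
        prop_assert!(t.len() <= max);
        prop_assert!(s.starts_with(t));
    }
}
```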
Yes. (Score:5, Informative)
Yes.
I've been a software security guru for more than ten years, and none of the companies I worked for, whether Fortune 100 or commercial companies shipping commercial software, fixed all the vulnerabilities we found before shipping. (Some set the bar at "high" and some at "critical", but no one halted the presses for "medium".) For all I know, most of the vulnerabilities we found perished on a disbanded team's backlog years ago, to the delight of hackers everywhere.
But the bigger problem would be the code that shipped that we never saw, whether it was an intern's "hackathon" project shat onto the web, something that crawled out of a pool of H1Bs, or a third-party app grafted in to fake reporting enough to get past the demo with the big client. I have more horror stories than I can relate involving things like this.
Re: (Score:2)
Of course a lot of the 'medium/lows' become debates about whether they are really vulnerabilities or not. A lower severity is frequently a compromise between some security guy being surprised at a design point and a developer who intends the behavior.
But yes, the whole "security team over here to fix everything, most of the software developers over there to do the work, whew, they don't have to think about security because we have a team for that" attitude is a pervasive problem in the industry.
Re: (Score:2)
Re: (Score:2)
You really think security is allowed in the production process before it's too late to cause deadlines to fall?
Found the guy who never worked for large corporations.
Liability is separated from ownership (Score:5, Interesting)
Re: (Score:2)
Re: Liability is separated from ownership (Score:2)
Re: (Score:2)
Once you issue stock, everything changes. The public corporation could knowingly kill and maim hundreds of customers [caranddriver.com], but nothing [google.com] will happen to the owners of the company.
If, say, a dentist knowingly killed 124 and maimed 274 people, that dentist would go to jail, and his/her assets would be taken and wages would be garnished for the rest of his/her life.
Do you live in a house or apartment? (Score:5, Insightful)
How Are So Many Security Vulnerabilities Possible?
Do you live in a house or apartment? Go around and look very closely at every aspect of the structure. As you go, make note of every flaw you find, however tiny, paying special attention to things that could be avenues for entering the dwelling from the outside even if everything is locked up. Now imagine 1,000,000 people all working constantly to find ways through those vulnerabilities without you realizing it is going on. Now imagine everybody in your city has an identical dwelling, so that when one avenue is compromised, they all are.
That is how.
Re: (Score:2)
>I certify products for sale to the government. People that come to me to get certified expect to be scrutinized.
If this is in the FIPS 140-ish area, then this:
>They also know in advance what the spec is, what they must do and how.
is a fantasy.
I've commented extensively to NIST on ambiguities and architectural impossibilities in the specs and they fix some of them, but it's slow going and in the interim, a heck of a lot is left to interpretation by the cert houses.
Two reasons (Score:3)
Simple (Score:2)
Re: Simple (Score:3)
Security is also "unfair". You, as a conscientious developer, can do everything "right", and get totally pwn3d *anyway* because of some widespread, system/platform/framework/library vulnerability (perfect example: Heartbleed).
The only way to improve the odds is for a development team to have one or more members whose ONLY job is to be aware of every thirdparty library/platform/os used by the project & literally research every single one, every single day, to become aware of vulnerabilities as they're di
Re: (Score:2)
>The only way to improve the odds is for a development team to have one or more members whose ONLY job is to be aware of every thirdparty library/platform/os used by the project & literally research every single one, every single day, to become aware of vulnerabilities as they're discovered...
It's enough to say "Is it ok to use this library" and then tear that one to pieces. The effort involved makes the cost of writing (well) the subset of features you want from the library seem small.
Which is why I
Re: (Score:3)
Nope. Sorry, but nope. If that were the case, the OWASP Top 10 wouldn't exist: a collection of 10 commonly made and reliably present flaws in any piece of online application.
Anyone trying to get in would just have to try that top 10 and reliably get in. It's already bad enough with standard libs that are hardened against exactly those common problems; with everyone reinventing the wheel instead, we could get into any place with the standard toolbox.
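Case in point, the classic injection entry from that top 10. A sketch of what the hardened standard toolbox buys you, assuming the third-party rusqlite crate; the table and names are made up:

```rust
use rusqlite::{params, Connection, Result};

fn find_user(conn: &Connection, name: &str) -> Result<Option<i64>> {
    // Reinvented wheel: string concatenation. Input like
    //   ' OR '1'='1
    // changes the meaning of the query.
    // let sql = format!("SELECT id FROM users WHERE name = '{name}'");

    // Standard toolbox: a parameterized query. The driver sends `name` as
    // data, never as SQL, so the top-10 injection simply doesn't apply.
    let mut stmt = conn.prepare("SELECT id FROM users WHERE name = ?1")?;
    let mut rows = stmt.query(params![name])?;
    Ok(match rows.next()? {
        Some(row) => Some(row.get(0)?),
        None => None,
    })
}
```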
Nobody cares (Score:5, Insightful)
Companies do not care about security, because they see no value in it. They rush their own developers to release software, and never ask them to focus on security.
Developers do not care about security. They never face the consequences of their negligence.
Consumers do not care about security. They shop for the cheaper or the most hyped product, not for the one that was correctly engineered. How could they know it really was, anyway?
Security is the cost of "hitting the window" (Score:5, Interesting)
Companies do not care about security, because they see no value in it. They rush their own developers to release software, and never ask them to focus on security.
It's not that they don't care about security (although they often don't). It's because, in the competitive environment, the "invisible hand" separates the companies into "The Quick" (pun intended) and "The Dead".
For each new computer-based market opportunity there are typically far more companies trying to get to product than there are niches for them. The first one, two, or three will get through the "window of opportunity" and take the market, and the rest will be left out when the window closes: perhaps to die, perhaps to move on to some other opportunity, rinse, and repeat.
To get through the window before it closes, development has to be fast. Something has to give, and practically EVERYTHING that gives makes security holes. So the Pointy Haired Bosses tell the workers to get the product to market and THEN worry about fixing the security holes.
Some of the developers make things secure anyhow. Most of them find the window closed when they're ready to ship, because the ones that did what management told them already got to market with the features working and the infrastructure made of swiss cheese. They took the whole market - before the bad guys discovered the holes, exploited them, and the media finally noticed.
How are so many vulnerabilities possible? (Score:2)
All complex software has bugs. But not all software respects your freedom to run, inspect, share, and modify the software so you can decide how to handle whatever problems arise with the software. You ought to be allowed to fully control the computers you own. Free software (software that respects your software freedom) is a means to grant people that control and treat people ethically with regard to computer software. Nonfree (or proprietary) software denies users the freedoms of free software. Nonfree sof
Who asks stuff like this? No one who's seen code. (Score:3, Interesting)
Please, before you post on Slashdot about code vulnerabilities, make sure you have at least programmed a "Hello, World" before. This post reminds me of the time a frustrated boss demanded to know why the game AI I was programming didn't "just use common sense."
More vulnerabilities are happening because there is a *massive* increase in software in consumer products. A bazillion products now have codebases that didn't before - ovens, toys, even my damn Christmas tree. Combine that with professional and social media that's always looking to dredge up outrage, and an increase in bad actors who realize that public outrage can work in their favor, and boom! You have a constant stream of stories about security holes. Why is that hard to understand?
Engineers are increasingly educated about new security threats - we evolve much faster in dealing with new challenges than almost any other type of worker. But yes, things get through, because this shit is hard - much harder than clueless internet whining. Also, because we're on the front lines of two wars - against those who tear down what others build, and against those who squelch innovation to preserve their own fat-cat positions - that is exponentially more intense than it was even a few years ago.
Expect the future to get *much* bumpier than this.
Unavailable: Principle of least privilege (Score:5, Interesting)
Almost all security problems boil down to the absolute lack of support for the principle of least privilege [wikipedia.org]. None of the commonly used systems have anything approaching this concept. The crude approximation available is to put each resource in a virtual machine and tightly limit its connections to other virtual machines that need to access it for a specific resource... then watch those like a hawk for traffic spikes etc.
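For a feel of what least privilege looks like at the code level (a capability-style sketch; all names are illustrative), hand code a narrow token for one resource instead of ambient authority over the whole filesystem:

```rust
use std::fs::File;
use std::io::{self, Read};

/// The capability: an already-opened handle that exposes only reading.
struct ReadOnlyLog(File);

impl ReadOnlyLog {
    fn open(path: &str) -> io::Result<Self> {
        Ok(ReadOnlyLog(File::open(path)?))
    }

    fn read_all(&mut self) -> io::Result<String> {
        let mut s = String::new();
        self.0.read_to_string(&mut s)?;
        Ok(s)
    }
}

// This function can read exactly one log and nothing else; its signature
// grants no authority to touch the rest of the system.
fn count_errors(log: &mut ReadOnlyLog) -> io::Result<usize> {
    Ok(log.read_all()?.lines().filter(|l| l.contains("ERROR")).count())
}
```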
The other thing that could help immensely is to install data diodes, which are gateways specifically designed to NEVER let data flow in the non-desired direction, guaranteed by physics. They come in pairs: each has a normal network connection on one side, and one of the pair can only transmit while the other can only receive, usually over a single fiber.
This stuff can be fixed. I've been saying so for at least a decade now (go ahead, search my comment history here and elsewhere)... y'all are slow on the uptake. I figure another 5 years before it starts sinking in, and at least 10 more to get it done.
Re: Unavailable: Principle of least privilege (Score:3)
Re: (Score:2)
You've taken a shallow view of this... like someone who thinks that circuit breakers aren't necessary, it's just the users of electricity who aren't careful enough. Limiting the scope of change that an instruction can execute is the primary job of an operating system, and Linux, Windows, and all the others just can't do it. Capability-based systems provide safety in a user-friendly and transparent way... just like the breaker box in your house.
What we have now, is electrical equivalent of a power grid
Re: (Score:3, Insightful)
LOL at the guy that thinks most security problems are technical problems instead of the result of perverse risk prioritization in response to market demands.
Because it is hard, and sometimes not possible (Score:5, Insightful)
Security is not free. It is not free in that it requires lots of man-hours to develop and code, and not free in its impact on the user experience.
You can do end-to-end encryption of all traffic, encrypt data in all states, require multi-factor auth, require physical devices, require secure portal software. But all of these have operational costs as well, both in the cost of compute and in the usability of the software.
If you had to access gmail through a specific secure application, with 3+ factor authentication, and it was really really slow, would you use it?
Customers won't pay for security (Score:2)
Company B builds product Y that directly competes with product X. They give security a passing interest. They throw together the code and ship it as soon as it works in 90% of the cases. They don't bother testing. If a bug
Everything is [inherently] broken (Score:5, Interesting)
Most programmers think code can be made secure if they only have better compilers, debuggers, or follow better practices. They are fundamentally mistaken about the nature of the problem.
This article lays out the nature of the error far better than I can. Please read it and then THINK:
https://medium.com/message/eve... [medium.com]
And then consider: "It is difficult to get a man to understand something when his salary depends on his not understanding it." (Upton Sinclair)
Make a list (Score:2)
The security services want an easy way in that they can use under the cover of average malware.
The police want a way in, so they have contractors hide theirs as malware.
The security services and police need a way out of a system or network with the data they find.
The method used by police, contractors, security services finds its way into the hands of cult, faith groups, criminals, the media, ex,
Priorities (Score:3)
I know quite a few CEOs and VP-level execs. They are very focused on revenue. They are spending all their time trying to land new customers and grow the business. Security is a cost: generally you have to pay money for it, either to security vendors or in headcount, and there is no corresponding revenue that you get. So it goes to the bottom of the priority list. Somewhere in their heads they know that deferring it is a bad idea, and the cost of a security breach is something they don't even want to think about, but there is always a new revenue target to hit, another customer to land, etc., and so the security stuff gets put in the "maybe later" pile.
Backdoors (Score:2)
Two reasons (Score:2)
Laziness and Usability.
Usability really comes back to laziness most of the time.
The other obvious issue is that the chaining of independent "design compromises" is often what leads to full blown compromises.
There are no stupid questions... I guess (Score:2)
... but the one posed by the article's title comes close.
Given that most widely-used OS'es start from a base of languages that are not memory-safe, with manual memory management, that most OS and application programmers are not (and I'm being charitable here) security experts by any means, and that software processes to mitigate these issues are still often pushed aside for "business" reasons when not dropped in the name of agility, a better question is "How do we turn out software that stands up so
Cost vs. revenue (Score:3)
Because security is a cost center while new features are a revenue center. The executives who ultimately decide priorities, budgets and schedules are taught to minimize costs while maximizing revenue. The results are, obviously, highly predictable if supremely disappointing.
Buffer Overflows - program testing cost too much (Score:2)
This was Microsoft's stance for the longest time: it cost too much for the extra time it took to test.
"It all comes back to one programmer being careless," Paller said. "You wrote a program, asked someone for input, gave them space for a certain amount of characters, and didn't check to see if the program could take more. You are incompetent, and you are the problem. One guy making that mistake is creating all the work for the rest of us." https://www.cnet.com/news/stud... [cnet.com]
Because nobody knows how to code (Score:3)
Instead you rely upon languages to handle the safety and optimizations your lazy ass couldn't be bothered learning in the first place.
And you wonder why someone else guts your shit - they understand the basics which you failed to learn.
Poor education (Score:2)
And I'm fairly confident that this is typical across the tertiary sector.
Now, I'm not claiming for a moment that university is the only way to learn IT skills, or that learning ends the moment you get your degree (despite what many of our students seem to think sometimes). But a lot of our students come
agile scr(ot)um (Score:2)
Maybe because also every company is doing Agile Scrotum, and they're Not Doing Agile Right(tm).
because... it's complicated (Score:2)
The primary reason is that software quality is down the drain. We actually know how to write software two orders of magnitude better than what we have now (the metric being bug count). There's a single-digit number of companies in the world doing it. They are successful, and the business case is OK (less maintenance cost and issue-related costs down the road), but only if you look at it long-term; with the dominance of short-term thinking, well, we have the mess we have today. Startup culture especially is
It's normal (Score:2)
If you hire a music major as CIO, like Equifax, everything is possible.
Re: (Score:3)
I agree that better programming languages with safety features would make a huge difference, if someone can make one that is easy to understand by average or below-average programmers, who write a lot of the software out there. Rust is quite safe, but has a lot of really weird-ass new concepts that many programmers can't be bothered to try to grok. Go is half-decent, but also a moderately weird and finicky programming language.
Safer web-app templating and db access libraries would also help a lot.
Company ma
Re: (Score:2)
>They just the the job done.
Without necessarily needing to use all the words that would be normally be needed in a sentence.
Re: (Score:2)
Re: (Score:2)
In general, when a downloadable application needs to access a service that requires an API key, how is the application's developer supposed to make the operations controlled by the key available to the application without making the key available to rogue developers who could use the API key to impersonate the application? This is the case for the "consumer secret" in a Twitter app.
Re: (Score:2)
In general, when a downloadable application needs to access a service that requires an API key, how is the application's developer supposed to make the operations controlled by the key available to the application without making the key available to rogue developers who could use the API key to impersonate the application? This is the case for the "consumer secret" in a Twitter app.
Welcome to Out of Band provisioning. The land of random key exchange protocols and horrendous complexity.
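A hand-wavy sketch of the usual workaround, just to make the shape of it visible: the consumer secret never ships inside the app; a backend you control holds it and issues short-lived per-install tokens. Everything below is illustrative, and tag() is a toy stand-in for a real HMAC from a crypto crate:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

struct InstallToken {
    value: String,
    expires_at: u64,
}

fn now() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

// NOT cryptographic; a real service would compute HMAC-SHA256(secret, msg).
fn tag(secret: &str, msg: &str) -> String {
    format!("{:08x}", (secret.len() as u64).wrapping_mul(31).wrapping_add(msg.len() as u64))
}

// Runs on the developer's server: the only place the secret ever lives.
fn issue_token(consumer_secret: &str, install_id: &str) -> InstallToken {
    let expires_at = now() + 900; // token is good for 15 minutes
    let value = tag(consumer_secret, &format!("{install_id}:{expires_at}"));
    InstallToken { value, expires_at }
}
```

The app then presents the short-lived token to the API (or to your proxy), so a rogue developer who unpacks the binary finds nothing worth stealing.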
Re: (Score:2)
I've seen both.
In production code.
Of companies that handle very sensitive user data.
There are 3 places you should not work at if you want to sleep well at night: The sausage factory, government and security.
Re: (Score:2)
Simply not true.
I'm a white hat. I used to write code for a living, but that was a fairly long time ago. The code I write today is more something I whip together quickly to get shit done I need for my work. Secure? Please. It works. It probably fails as soon as any edge case comes along, let alone someone who wants it to fail.