
Ask Slashdot: How Are So Many Security Vulnerabilities Possible?

dryriver writes: It seems like not a day goes by on Slashdot and elsewhere on the intertubes that you don't read a story headline reading "Company_Name Product_Name Has Critical Vulnerability That Allows Hackers To Description_Of_Bad_Things_Vulnerability_Allows_To_Happen." A lot of it is big brand products as well. How, in the 21st century, is this possible, and with such frequency? Is software running on electronic hardware invariably open to hacking if someone just tries long and hard enough? Or are the product manufacturers simply careless or cutting corners in their product designs? If you create something that communicates with other things electronically, is there no way at all to ensure that the device is practically unhackable?
  • 10/90 (Score:5, Informative)

    by rudy_wayne ( 414635 ) on Tuesday November 21, 2017 @09:33PM (#55599931)

    Is software running on electronic hardware invariably open to hacking if someone just tries long and hard enough?

    This is 10% of the problem

    Or are the product manufacturers simply careless or cutting corners in their product designs?

    This is 90% of the problem.

    • by mykepredko ( 40154 ) on Tuesday November 21, 2017 @09:50PM (#55600009) Homepage

      That can be simply listed as (in the order that I see them):
      - Microsoft as an OS vendor (I know I'll get attacks from various ACs who think any criticism of MS is unfair, but they are putting way more energy into sucking users' personal data into their servers than into protecting said personal data)
      - Large service companies with poor security for customer databases (I just saw that Uber had a big hack last year that they've been trying to keep quiet).
      - The 10% or so of the user population at large who don't have the intelligence to question email/text/phone/Facebook/etc. requests for their personal information.

      The remaining 10% would be poorly defined standards (for example IoT) where the possible vectors and impact of security intrusions have not been thought through.

      • by Anonymous Coward on Tuesday November 21, 2017 @10:32PM (#55600221)

        Software is complex. Any non-trivial piece of software probably contains a bunch of libraries that are themselves complex, and built on a best-effort/time basis. At almost every stage there is the potential for abuse. Where a library appears to be secure in and of itself, it may contribute to poor security when it is used in ways that were not anticipated.

        Operating systems are complex. They themselves are made up of many components which are made up of many libraries, which themselves may be made up of many libraries. Security vulnerabilities may be anywhere in there, or emergent from the collective. Combining a relatively secure OS with a poorly written application which runs with user privileges may expose other issues.

        Operating systems and programs often support old interfaces. That is not automatically bad, but it is yet more attack area you have to cover and eventually deprecate.

        Operating systems and programs may be written in inherently less secure languages such as C/C++, and it may even be for good reasons, but buffer overruns may allow control of the program flow. Just get the overflow to reach your tailored jump instruction and your tailored data, which is really more CPU instructions, and now you have control.
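        As a toy illustration of the overrun described above (written in Python purely to model the memory layout; real overflows happen in unchecked languages like C/C++):

```python
# Toy model of a stack buffer overflow, in Python only to illustrate the
# memory layout; real overflows occur in languages like C/C++ where writes
# past a buffer are not bounds-checked.
# Layout: an 8-byte buffer sits directly below a saved "return address".
memory = bytearray(8) + (0x4000).to_bytes(8, "little")

def unsafe_copy(dest, src):
    """Copy with no check against the buffer's logical 8-byte size (like strcpy)."""
    for i, b in enumerate(src):
        dest[i] = b

# 8 bytes fill the buffer; the next 8 silently overwrite the return address.
payload = b"A" * 8 + (0x1337).to_bytes(8, "little")
unsafe_copy(memory, payload)

return_addr = int.from_bytes(memory[8:], "little")
print(hex(return_addr))  # prints 0x1337: the attacker chose where to "return"
```

        The copy itself never fails; the damage is that adjacent memory the attacker shaped is now treated as control data.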

        People are stupid. People are lazy. Even smart people may not do an exhaustive search for vulnerabilities for everything they download, since well, the job needs doing, so they do the quick check and some additional reasonableness checks and move on with life. Still, clicking on what you should not is a bad thing. If you see a web page that just looks scary, power off your PC manually and don't reload your old tabs on startup. That will prevent some bad things, but not all.

        Hackers exist, be it a nation state such as Russia or even the United States, or simple criminals. Some of them may even be working on obscure OSS libraries that other famous packages use. I'd almost bet some vulnerabilities are deliberately introduced. I'd pretty much bet some work for Microsoft, likely from every major nation state in the world. It is hard to secure against unintentional backdoors, but even if they don't have them now, you can expect some deliberate backdoors in future updates. Hell, how do you know that phone update is of the good, and not some clandestine agency downloading a special edition?

        Hardware exists with back doors, some intentional, some possibly not. Some of the management interfaces are scary as hell. Are you really sure no one is taking advantage of any of that? Silicon is usually not manufactured with say complete control of the process. Are you sure the gates AMD specified are the only gates in the CPU? (or Intel for that matter.)

        COTS hardware may not get updated, and even if it is, updates and such are likely a low priority. Have you plugged your smart TV in? Does it have a camera? A microphone? Do you really trust it? Is it in your bedroom? What about the gadget that monitors your kid? How can an Ooma really still be in business at that price? (I admit I have an Ooma device.)

        • Re: (Score:2, Troll)

          by Nutria ( 679911 )

          Operating systems and programs may be written in inherently less secure languages such as C/C++, and it may even be for good reasons

          Or bad reasons like Comp Sci elitist "I'm so smart, I'm not going to fuck up" snobbery.

          Ada FTW!!!

          • Honest question: why do so few companies use Ada? It looks like a pretty nice language... But I've literally never seen it in use at a private company.

            • by damm0 ( 14229 )

              Insufficient critical mass is probably reason #1. Why insufficient critical mass you ask? Java won users over by offering fairly easy tooling, good documentation, and an ecosystem of developers unified across Unix/Windows. It helped that Java also had a major corporate backer.

              Which of these does Ada have? As far as I know, Ada's limited success is mostly due to its use by the DoD. Not the best at spreading the love.

            • When choosing which language to write code in, there are the following questions:

              1. Which language is popular enough to find new employees?
              2. Is the language industry-respected, so we don't look like amateurs because we picked a joke of a language?
              3. Does the language meet our business requirements?
              4. Can I hide my code from others and my competition?
              5. How quickly can it be coded in?
              6. Does it support modern features?
              7. Can it be deployed easily?
              8. How forward-compatible is it?
              9. Is it cross-platform?

      • by lucm ( 889690 ) on Tuesday November 21, 2017 @11:20PM (#55600483)

        Microsoft as an OS vendor (I know I'll get attacks from various ACs that think any criticism of MS is unfair

        If you take a minute to look at the bulk of major incidents in the last year, it's mostly poorly configured Mongodb and S3 buckets. No SQL Server, MS Exchange or IIS in the list. There's the occasional ransomware but given the market share of Microsoft products, it's not bad at all.

      • "The 10% or so of the user population at large who don't have the intelligence to question email/text/phone/Facebook/etc. requests for their personal information."

        Only 10%? You must work in a very security-conscious organization, then. From what I've seen over the past 25 years, I would peg that at a minimum of 50%... unless you do constant end-user education, in-person reminders, etc. I've seen some pretty decent whaling attacks, complete with proper graphics, reply-to addresses (at least at first look),
    • Re:10/90 (Score:5, Insightful)

      by Narcocide ( 102829 ) on Tuesday November 21, 2017 @09:51PM (#55600011) Homepage

      Yes, the big issue here is that it's common knowledge that consumers by and large refuse to be bothered to get educated, and the bulk of the major software development companies out there don't have leadership ethical enough to resist taking maximum possible advantage of their naivety. Unfortunately this knowledge gap is also being turned against our own government, even as our own government uses the very same knowledge gap on the general population. It's a huge ugly mess, really, and it says a lot about the spiritual deficiencies of humans as a whole, and I still completely, in all seriousness, blame Microsoft for starting it.

      • by Bongo ( 13261 )

        Kinda, but also, people have embraced the technology so fast that we can now do things we did not imagine -- and therein lies the rub, because whilst we didn't imagine what positives would be possible, we also didn't imagine what negatives would be possible. It is something of a blind process.

        So, now we start to get experience of the negatives, and like anything else, we have to start trying to remedy them, just like with cars, where everyone who could, got one, and then we were horrified at how badly they

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Or are the product manufacturers simply careless or cutting corners in their product designs?

      This is 90% of the problem.

      This, so much this. Companies still view security as something that costs too much money to implement properly. It's cheaper to deal with the financial loss of a hack than it is to have decent security policies implemented, with properly trained personnel who are responsible for patching security vulnerabilities and testing the network constantly. Security is a constantly changing state of being, but this last statement shouldn't really be news for the crowd that's drawn to reading /.

      • It's cheaper to deal with the financial loss of a hack, than it is to have decent security policies implemented ... ./

        Actually in many (if not most) cases the people making the decisions simply don't think it will really happen to them.

    • You left out the other 99% of the problem... software that ISN'T running on computer hardware; That's right... wetware. Humans are involved, and any time you have humans involved trouble and hilarity are sure to ensue.
      • ...any time you have humans involved trouble and hilarity are sure to ensue.

        You forgot needless, senseless, totally-avoidable, self-inflicted tragedy and suffering. Lots and lots of tragedy and suffering. The ratio of tragedy and suffering to trouble and hilarity roughly resembles the ratio of spam to anything else on the diner menu in the Monty Python "Spam" sketch.

        Well, there's spam egg sausage and spam, that's not got much spam in it.


    • by raymorris ( 2726007 ) on Tuesday November 21, 2017 @10:47PM (#55600311) Journal

      I think most companies don't know how to produce reasonably secure software cost-effectively. They aren't motivated enough to spend a ton of money on security. So they give up on trying all that hard, to varying degrees.

      Some companies try educating programmers a bit about security. That's good, but not sufficient. Programmers are constantly learning new frameworks, new libraries, new languages, new systems they have to integrate with ... They aren't going to be security experts too.

      In my experience, the main cost-effective way to improve security is to have a security professional consult with developers at three points in the process of a software project. Then integrate part of what's learned into automated parts of the DevOps build and release process. One hour from a security person at each of these three points can really make a difference, not only in the current project, but in future projects. Have the security person join a meeting and be part of the discussion at these three points:

      The initial overall design / architecture
          This will allow the security professional to point out spots where security issues commonly occur: "be sure to use TLS (SSL) for this connection". It will also catch major architectural decisions that lead to big security problems that are very hard to fix later (such as an ISP planning on managing customer modems over their public IPs).

      Finalizing the design details
          Similar to the above, but at a finer-grained level.

      Pre-release testing and approval
          Around the time you're starting integration testing, your security person can review the implementation based on notes they took in the two earlier stages. For some of these code-level things they can add to your existing pipeline, so from then on Git will warn you immediately when you try to commit code that follows a dangerous pattern, such as use of std::process::Command with variables influenced by user input, or improper reuse of mutable buffers. (Here I use Rust terminology; the same errors can be made in most languages. Few bugs are language-specific.)
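      A minimal sketch of the kind of automated pipeline check described above (the patterns and the sample snippet are hypothetical; a real setup would run something like this from a Git pre-commit hook over staged files):

```python
import re

# Hypothetical patterns a security reviewer might flag for one codebase;
# a real setup would run this from a Git pre-commit hook over staged files.
DANGEROUS_PATTERNS = [
    (re.compile(r"Command::new\([^)]*\)"),
     "process::Command possibly built from user-influenced input"),
    (re.compile(r"os\.system\(.*(\+|%|format)"),
     "shell command assembled from string interpolation"),
]

def scan(source):
    """Return one warning per line that matches a flagged pattern."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in DANGEROUS_PATTERNS:
            if pattern.search(line):
                warnings.append("line %d: %s" % (lineno, reason))
    return warnings

snippet = 'os.system("convert " + upload_name)  # attacker controls upload_name'
print(scan(snippet))  # flags line 1
```

      The point is not that regexes catch everything; it is that the lessons from the security reviews get encoded once and then warn every developer on every future commit.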

      Not only will this catch issues in the current project, but everybody learns from the interaction in order to avoid creating similar problems in the next project. Instead of studying 2,000 pages about security, the developers are being made aware of the specific issues that they tend to create in the specific domain the company is writing software for.

      This process allows one security professional to effectively serve many programmers on many projects, much like your database expert might work with developers on many projects. You can get a lot of security improvement for not much money.

      * Before somebody says "2,000 pages is ridiculous. Security is easy, all you need is the OWASP Top 10", I'm a member of OWASP. I know very well the quick "rules of thumb" we publish. I've personally read over 10,000 pages about security and I don't know anywhere NEAR all that there is to know.

      • by l0n3s0m3phr34k ( 2613107 ) on Tuesday November 21, 2017 @11:23PM (#55600499)
        "Does it compile? Then it ships!" Per-quarter profit margins demand it!
      • The sad thing is "doing it right" doesn't take longer and can even go faster. An easy example is SQL injection attacks: use parameterized queries, and you have no problem with them. It takes no extra time to do that, and it eliminates the attack vector.
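        A quick sketch of that difference, using Python's built-in sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"  # classic injection attempt

# Vulnerable: string concatenation lets the input rewrite the query.
injected = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as a value, never as SQL.
parameterized = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(injected)        # leaks every row's secret
print(parameterized)   # returns nothing: no user is literally named that
```

        The parameterized form never splices user text into the SQL itself, so quote characters in the input have no special meaning, and it is no more work to write.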
        • That's true. In fact, we can SAVE developers tons of time by working with them to be more secure. Security isn't just confidentiality. It's making sure the software works correctly - even when someone is trying to make it break. Fixing bugs takes a lot of time. Following security best practices means the software won't mess up even when someone is trying to make it mess up. That implies it won't mess up when people are using it normally - far fewer bugs to investigate and fix.

          It's also availability - avo

      • by raymorris ( 2726007 ) on Wednesday November 22, 2017 @12:57AM (#55600847) Journal

        Another thing to think about to understand it is that for thousands of years, people tried to make secure locks; every time locksmiths figured out how to open them - pretty easily. Security is very hard. Offline, it's okay that Pop-A-Lock can open your lock for $20. That's the accepted level of security.

        Online, people thousands of miles away can use computers to try to crack the security of tens of thousands of victims while the attacker is sleeping. They don't need to be skilled attackers; they just get hacking tools (software) from the relatively few people who are skilled. Popular web sites can be attacked a thousand times per day or more. Not even Chuck Norris can fight off a thousand attackers every day and never lose. On the WEB, security is very hard. You MUST have layers of security, because somebody will break through the first layer, and you must have well-disciplined operational security.

        * Medeco has finally done a reasonably good job of making physical locks that are hard for a locksmith to open. Not impossible, but hard. Breaking a window is still as easy as ever, though.

    • by Junta ( 36770 )

      Eh, I'd say it's not guaranteed that anything is invariably open to hacking.

      I'd say 15% vendors are crap, 85% users/admins picking the password 'password' to secure things.

      In the world of IoT of course, it goes to 100% crappy vendors.

    • Murphy's Computer Laws [imgur.com]

      Meskimen's Law
      There is never time to do it right, but there is always time to do it over.

      Note: Murphy was an optimist.

    • This is 90% of the problem.

      And 90% of those 90% are due to cheaper inexperienced or incompetent people in charge of implementing security.

      • by gweihir ( 88907 )

        Indeed. Developer incompetence is 90% of the problem. Languages, coding styles, etc. do not really matter. Incompetent coders demonstrate time and again that they can make it insecure, no matter what.

    • Actually it is far more complex.
      1. Legacy systems and backwards compatibility: A lot of software we still use has roots in the pre-internet days, when having a UI password with a weakly encrypted password file was considered strong security. Most hacking back then would have been finding a service that didn't need a password and exploiting it by just using it. By the mid-1990s more systems were getting on the internet, allowing data to be sent without the UI, so buffer overflows were a big thing and

    • I would assert that it is a 99/1 ratio. Security is a solvable problem, and we have had rigorous, solid, time-tested methods of security since the 1960s and 1970s, be it physical security, network security, or security of a computer.

      I did an "Ask Slashdot" about a similar topic a few weeks ago, but it was more inclined to why companies have no interest in security, because (to them) security has no returns.

      The problem is that we already know how to do segmented operating systems, windowing systems that hav

  • by Anonymous Coward on Tuesday November 21, 2017 @09:36PM (#55599937)

    The problem is that we aren't using safe-by-design programming languages like Rust enough. If we used Rust more, then many types of bugs and security flaws wouldn't even be possible. As more and more software developers follow Mozilla's lead and start to use Rust to build their software systems, we will see many common types of security flaws vanish.

    • Re: (Score:3, Informative)

      by Narcocide ( 102829 )

      Just in case the uninitiated might confuse this for a serious statement; to be clear he's completely trolling.

    • by gweihir ( 88907 )

      Complete and utter bullshit. Language has little to no influence on code security. You can be insecure on many different levels and incompetent coders universally manage to make it insecure.

      Incidentally, the claim that Rust is safe-by-design is a shameless lie. It is not. It just makes some specific security issues more difficult (but not impossible) to introduce. It does not do anything at all for most security problems.

  • by Anonymous Coward on Tuesday November 21, 2017 @09:37PM (#55599943)

    Good security usually means re-architecting whatever legacy garbage fire has been burning off in the corner for the last 12 years, and that costs money. The insecure software is still generating revenue in its current state, and there are no consequences for poor software security. #Equifax

    • Often companies make 'security teams' that go in and tackle the security problems so the other developers don't have to. This helps with things like, say, bundling third-party libraries with known CVEs, and answering security concerns *when the developers bother to think to ask*. However, so long as your rank and file developers don't think about security and how an attacker would go at their code pretty much all the time, there's no way a security team is going to be able to keep up with the 'organic' co

      • However, so long as your rank and file developers don't think about security and how an attacker would go at their code pretty much all the time, there's no way a security team is going to be able to keep up with the 'organic' code,

        You can say that again. As long as programmers have the power to write Turing-complete code, they have the power to write security vulnerabilities.

    • by tgeek ( 941867 )
      You have three choices when developing your product. Cheap, fast and/or secure. You get to pick two of those.
  • by Kohath ( 38547 ) on Tuesday November 21, 2017 @09:41PM (#55599963)

    How are bugs still possible? That's how security holes are still possible too.

    • Sure. But maybe, also, not enough workforce and money is put on the table when it comes to security ( == they don't care enough )
  • A day goes by when Slashdot originally misses the story they end up posting a week later.
  • Git-r-done (Score:5, Informative)

    by Snotnose ( 212196 ) on Tuesday November 21, 2017 @09:46PM (#55599979)
    Security issues? Um, have you met the requirements? Yeah? Does it work? Yeah? The security issues aren't in the spec, release it.

    The good news is much like Charlie Rose gets embarrassed off the national stage, hopefully companies that don't take security seriously will be forced into bankruptcy.
  • Yes. (Score:5, Informative)

    by xxxJonBoyxxx ( 565205 ) on Tuesday November 21, 2017 @09:46PM (#55599983)
    >> are the product manufacturers simply careless or cutting corners in their product designs?


    I've been a software security guru for more than ten years, and none of the companies I worked for, whether Fortune 100 or commercial companies shipping commercial software, fixed all the vulnerabilities we found before shipping. (Some set the bar at "high" and some as "critical", but no one halted the presses for "medium".) For all I know, most of the vulnerabilities we found perished on a disbanded team's backlog years ago to the delight of hackers everywhere.

    But the bigger problem would be the code that shipped that we never saw, whether it was an intern's "hackathon" project shat onto the web, something that crawled out of a pool of H1Bs, or a third-party app grafted in to fake reporting enough to get past the demo with the big client. I have more horror stories than I can relate involving things like this.
    • by Junta ( 36770 )

      Of course a lot of the 'medium/lows' become debates about whether they are really vulnerabilities or not. A lower severity is frequently a compromise between some security guy being surprised at a design point and a developer who intends the behavior.

      But yes, the whole 'security team over here to 'fix everything', most of the software developers over there to do the work, whew they don't have to think about security because we have a team for that' is a pervasive problem in the industry.

  • by DogDude ( 805747 ) on Tuesday November 21, 2017 @09:47PM (#55599985)
    The root is that our corporate laws allow liability (for defective products, in this case) to be completely separated from ownership (stockholders). US companies can fuck customers up the ass with barbed wire, and nothing happens to anybody within the company management or ownership as a result.
    • by sinij ( 911942 )
      I am surprised laws still treat software as 'magic'. If my new toaster catches fire and melts the counter, I can count on getting compensation from the manufacturer. If my new IoT device gets pwned by a canned exploit, leaks my private conversations and pictures of me dressed as a pony (don't ask), then there is absolutely nothing I can do to get damages. What the f*&k?
    • Sorry, but you may as well say that people keep dying of old age because there are no laws on the books against it.
      • by DogDude ( 805747 )
        A sole proprietor in the US is liable for what they do. Small businesses are liable.

        Once you issue stock, everything changes. The public corporation could knowingly kill and maim hundreds of customers [caranddriver.com], but nothing [google.com] will happen to the owners of the company.

        If, say, a dentist knowingly killed 124 and maimed 274 people, that dentist would go to jail, and his/her assets would be taken and wages would be garnished for the rest of his/her life.
  • by El Cubano ( 631386 ) on Tuesday November 21, 2017 @09:51PM (#55600015)

    How Are So Many Security Vulnerabilities Possible?

    Do you live in a house or apartment? Go around and look very closely at every aspect of the structure. As you go, make note of every flaw you find, however tiny, paying special attention to things that could be avenues for entering the dwelling from the outside even if everything is locked up. Now imagine 1,000,000 people all working constantly to find ways through those vulnerabilities without you realizing that it is going on. Now imagine everybody in your city has an identical dwelling, so that when one avenue is compromised, they all are.

    That is how.

  • by Luthair ( 847766 ) on Tuesday November 21, 2017 @09:55PM (#55600023)
    1. People aren't perfect
    2. Companies chase features
  • Security is really hard. It's especially hard when so many insist on trying to reinvent the wheel, which countless developers do all the time.
    • Security is also "unfair". You, as a conscientious developer, can do everything "right", and get totally pwn3d *anyway* because of some widespread, system/platform/framework/library vulnerability (perfect example: Heartbleed).

      The only way to improve the odds is for a development team to have one or more members whose ONLY job is to be aware of every thirdparty library/platform/os used by the project & literally research every single one, every single day, to become aware of vulnerabilities as they're di

      • >The only way to improve the odds is for a development team to have one or more members whose ONLY job is to be aware of every thirdparty library/platform/os used by the project & literally research every single one, every single day, to become aware of vulnerabilities as they're discovered...

        It's enough to say "Is it ok to use this library" and then tear that one to pieces. The effort involved makes the cost of writing (well) the subset of features you want from the library seem small.

        Which is why I

  • Nobody cares (Score:5, Insightful)

    by manu0601 ( 2221348 ) on Tuesday November 21, 2017 @10:00PM (#55600061)

    Companies do not care about security, because they see no value in it. They rush their own developers to release software, and never ask them to focus on security.

    Developers do not care about security. They never face the consequences of their negligence.

    Consumers do not care about security. They shop for the cheaper or the most hyped product, not for the one that was correctly engineered. How could they know it really was, anyway?

    • by Ungrounded Lightning ( 62228 ) on Tuesday November 21, 2017 @11:49PM (#55600613) Journal

      Companies do not care about security, because they see no value in it. They rush their own developers to release software, and never ask them to focus on security.

      It's not that they don't care about security (although they often don't). It's because, in the competitive environment, the "invisible hand" separates the companies into "The Quick" (pun intended) and "The Dead".

      For each new computer-based market opportunity there are typically far more companies trying to get to product than there are niches for them. The first one, two, or three will get through the "window of opportunity" and take the market, and the rest will be left out when the window closes - perhaps to die, perhaps to move on to some other opportunity, rinse, and repeat.

      To get through the window before it closes, development has to be fast. Something has to give, and practically EVERYTHING that gives makes security holes. So the Pointy Haired Bosses tell the workers to get the product to market and THEN worry about fixing the security holes.

      Some of the developers make things secure anyhow. Most of them find the window closed when they're ready to ship, because the ones that did what management told them already got to market with the features working and the infrastructure made of swiss cheese. They took the whole market - before the bad guys discovered the holes, exploited them, and the media finally noticed.

  • All complex software has bugs. But not all software respects your freedom to run, inspect, share, and modify the software so you can decide how to handle whatever problems arise with the software. You ought to be allowed to fully control the computers you own. Free software (software that respects your software freedom) is a means to grant people that control and treat people ethically with regard to computer software. Nonfree (or proprietary) software denies users the freedoms of free software. Nonfree sof

  • by engineerErrant ( 759650 ) on Tuesday November 21, 2017 @10:16PM (#55600123)

    Please, before you post on Slashdot about code vulnerabilities, make sure you have at least programmed a "Hello, World" before. This post reminds me of the time a frustrated boss demanded to know why the game AI I was programming didn't "just use common sense."

    More vulnerabilities are happening because there is a *massive* increase in software in consumer products. A bazillion products now have codebases that didn't before - ovens, toys, even my damn Christmas tree. Combine that with professional and social media that's always looking to dredge up outrage, and an increase in bad actors who realize that public outrage can work in their favor, and boom! You have a constant stream of stories about security holes. Why is that hard to understand?

    Engineers are increasingly educated about new security threats - we evolve much faster in dealing with new challenges than almost any other type of worker. But yes, things get through, because this shit is hard - much harder than clueless internet whining. Also, because we're on the front lines of two wars - against those who tear down what others build, and against those who squelch innovation to preserve their own fat-cat positions - that is exponentially more intense than it was even a few years ago.

    Expect the future to get *much* bumpier than this.

  • by ka9dgx ( 72702 ) on Tuesday November 21, 2017 @10:16PM (#55600129) Homepage Journal

    Almost all security problems boil down to the absolute lack of support for the principle of least privilege [wikipedia.org]. None of the commonly used systems have anything approaching this concept. The crude approximation available is to put each resource in a virtual machine and tightly limit its connections to other virtual machines that need to access it for a specific resource... then watch those like a hawk for traffic spikes etc.
    The other thing that could help immensely is to install data diodes, which are gateways specifically designed to NEVER let data flow in the non-desired direction, guaranteed by physics. They come in pairs: they have a normal network connection on one side, and one of the pair can only transmit while the other can only receive, usually via a single fiber.
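    A toy sketch of the least-privilege idea in Python (names invented for illustration; real capability systems enforce this in the OS or hardware): a component is handed an object granting exactly one permission on exactly one resource, instead of ambient access to everything.

```python
# Toy sketch of least privilege via capabilities (invented names; real
# capability systems enforce this in the OS or hardware, not in Python).
class ReadCapability:
    """Grants read access to exactly one entry in a store, and nothing else."""
    def __init__(self, store, key):
        self._store = store
        self._key = key

    def read(self):
        return self._store[self._key]

# The "system" holds every resource...
resources = {"/etc/passwd": "root:x:0:0", "/home/user/notes": "buy milk"}

# ...but each component is handed only the narrow authority it needs,
# instead of ambient access to the whole store.
notes_cap = ReadCapability(resources, "/home/user/notes")

print(notes_cap.read())             # allowed: the one granted resource
print(hasattr(notes_cap, "write"))  # False: no write authority was ever granted
```

    A component that is compromised can then only misuse the few capabilities it was ever handed, which is the breaker-box effect described above.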

    This stuff can be fixed, I've been saying so for at least a decade now (go ahead, search my comment history here and elsewhere)... y'all are slow on the uptake. I figure another 5 years before it starts sinking in, and at least 10 more to get it done.

    • Wow, that was some seriously stupid shit. I can only hope that was supposed to be a joke, but assuming it wasn't: Behold! People like this are often involved in the development process. That alone is enough to show you why.
      • by ka9dgx ( 72702 )

        You've taken a shallow view of this... like someone who thinks that circuit breakers aren't necessary, that it's just the users of electricity who aren't careful enough. Limiting the scope of change that an instruction can execute is the primary job of an operating system, and Linux, Windows, and all the others just can't do it. Capability-based systems provide safety in a user-friendly and transparent way... just like the breaker box in your house.

        What we have now is the electrical equivalent of a power grid

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      LOL at the guy that thinks most security problems are technical problems instead of the result of perverse risk prioritization in response to market demands.

  • by Harlequin80 ( 1671040 ) on Tuesday November 21, 2017 @10:40PM (#55600275)

    Security is not free. It is not free in that it requires many man-hours to design and code, and it is not free in that it degrades the user experience.

    You can do end-to-end encryption of all traffic, encrypt data in all states, require multi-factor auth, require physical devices, require secure portal software. But all of these have operational costs as well, both in compute and in the usability of the software.

    If you had to access gmail through a specific secure application, with 3+ factor authentication, and it was really really slow, would you use it?

  • Company A builds product X. They design it with security in mind. They code it with security in mind. They test it a dozen ways from Sunday. They are constantly trying to break it and fix it every time they can. They want to charge $20 for using their software to recoup their investment.

    Company B builds product Y that directly competes with product X. They give security a passing interest. They throw together the code and ship it as soon as it works in 90% of the cases. They don't bother testing. If a bug
  • by tanstaaf1 ( 770797 ) on Tuesday November 21, 2017 @11:09PM (#55600427)

    Most programmers think code can be made secure if they only have better compilers, debuggers, or follow better practices. They are fundamentally mistaken about the nature of the problem.

    This article lays out the nature of the error far better than I can. Please read it and then THINK:

    https://medium.com/message/eve... [medium.com]

    And then consider: "It is difficult to get a man to understand something when his salary depends on his not understanding it." - Upton Sinclair

  • Management. The cost of security takes away from their pay and shareholder value. That's less jets, holidays, gambling, yacht time.
    The security services want an easy way in that they can disguise as ordinary malware.
    The police want a way in, so they have contractors hide theirs as malware.
    The security services and police then need a way out of a system or network with the data they find.
    The methods used by police, contractors, and security services find their way into the hands of cults, faith groups, criminals, the media, ex,
  • by durdur ( 252098 ) on Tuesday November 21, 2017 @11:13PM (#55600443)

    I know quite a few CEOs and VP level execs. They are very focused on revenue. They are spending all their time trying to land new customers and grow the business. Security is a cost: generally you have to pay money for it, either to security vendors or in headcount, and there is no corresponding revenue that you get. So it goes to the bottom of the priority list. Somewhere in their heads they know that deferring it is a bad idea, and the cost of a security breach is something they don't even want to think about, but there is always a new revenue target to hit, another customer to land, etc., and so the security stuff gets put in the "maybe later" pile.

  • IMO they are back-doors they don't want closed. Look at Adobe Flash: how many holes have been patched over the years? Windows too, and many more. So yeah, back-doors they don't want to shut. Too much data they want to take that has nothing to do with making their products better, but will make them plenty of money.
  • Laziness and Usability.

    Usability really comes back to laziness most of the time.

    The other obvious issue is that the chaining of independent "design compromises" is often what leads to full blown compromises.

  • ... but the one posed by the article's title comes close.

    Given that most widely-used OS'es start from a base of languages that are not memory-safe and rely on manual memory management, that most OS and application programmers are not (and I'm being charitable here) security experts by any means, and that software processes to mitigate these issues are still often pushed aside for "business" reasons when not deprecated in the name of agility, a better question is "How do we turn out software that stands up so

  • by Todd Knarr ( 15451 ) on Wednesday November 22, 2017 @12:04AM (#55600647) Homepage

    Because security is a cost center while new features are a revenue center. The executives who ultimately decide priorities, budgets and schedules are taught to minimize costs while maximizing revenue. The results are, obviously, highly predictable if supremely disappointing.

  • This was Microsoft's stance for the longest time: the extra time it took to test cost too much.

    "It all comes back to one programmer being careless," Paller said. "You wrote a program, asked someone for input, gave them space for a certain amount of characters, and didn't check to see if the program could take more. You are incompetent, and you are the problem. One guy making that mistake is creating all the work for the rest of us." https://www.cnet.com/news/stud... [cnet.com]

  • Instead you rely upon languages to handle the safety and optimizations your lazy ass couldn't be bothered learning in the first place.

    And you wonder why someone else guts your shit - they understand the basics which you failed to learn.

  • I teach IT at a top 100 global university. Until a couple of years ago, security was not covered at all in any of the compulsory units for any of the IT degrees. It's still not mandatory in every degree.

    And I'm fairly confident that this is typical across the tertiary sector.

    Now, I'm not claiming for a moment that university is the only way to learn IT skills, or that learning ends the moment you get your degree (despite what many of our students seem to think sometimes). But a lot of our students come

  • Maybe because also every company is doing Agile Scrotum, and they're Not Doing Agile Right(tm).

  • The primary reason is that software quality is down the drain. We actually know how to write software two orders of magnitude better than what we have now (the metric being bug count). There's a single-digit number of companies in the world doing it. They are successful, and the business case is OK (less maintenance cost and fewer issue-related costs down the road), but only if you look at it long-term; with the dominance of short-term thinking, well, we have the mess we have today. Startup culture especially is

  • If you hire a music major as CIO, like Equifax, everything is possible.
