
Ask Slashdot: What Are Ways To Get Companies To Actually Focus On Security? 158

New submitter ctilsie242 writes: Many years ago, it was said that we would have a "cyber 9/11," a security event so drastic that it would fundamentally change how companies and people thought about security. However, this has not happened yet (mainly because the bad guys know that this would get organizations to shut their barn doors, stopping the gravy train). With the perception that security has no financial returns, coupled with the opinion that "nobody can stop the hackers, so why even bother," what can actually be done to get businesses to have an actual focus on security? The only "security" I see is mainly protection from "jailbreaking," so legal owners of a product can't use or upgrade their devices. True security from other attack vectors is all but ignored. In fact, I have seen some development environments where doing anything about security would likely get a developer fired because it took time away from coding features dictated by marketing. I've seen environments where all code ran as root or System, because if the developers gave thought to any permission model at all, they would be tossed and replaced by other developers who didn't care to "waste" their time on stuff like that.

One idea would be something similar to Underwriters Labs, except it would grade products, perhaps with expanded standards above the "pass/fail" mark, such as Europe's "Sold Secure," or the "insurance lock" certification (which means that a security device is good enough for insurance companies to insure stuff secured by it). There are always calls for regulation, but with regulatory capture at a high point, and previous regulations having few teeth, this may not be a real solution in the U.S. Is our main hope the new data privacy laws being enacted in Europe, China, and Russia, which actually carry heavy fines as well as criminal prosecutions (i.e., execs going to jail)? This especially applies to IoT devices, where it is in the makers' financial interest to build un-upgradable devices, forcing people to toss their 1.0 lightbulbs and buy 1.0.1 lightbulbs to fix a security issue, as opposed to making them secure in the first place or providing an upgrade mechanism. Is there something that can actually be done about the general disinterest by companies in making secure products, or is this just the way life is now?
  • Hack them.

    Just kidding. I am not advocating unlawful access. But it seems like many companies don't do a damn thing until they have a breach.

    • Re:Hack them. (Score:5, Insightful)

      by Anonymous Coward on Wednesday October 18, 2017 @07:25PM (#55393207)

      Actually many of them don't do much after a breach either.

      • Not true. The executives make a point of selling their stock before the news gets out.

    • by flopsquad ( 3518045 ) on Wednesday October 18, 2017 @09:15PM (#55393735)
      "So, Randolph Q. Chairman — can I call you Randy? — Randy, every time a customer's data is stolen from your company's database, Boris here is going to cut you in half with his machete. Is that what you want, Randy? Hm?"
  • After all it is not like they are judged by any other metric besides spending money or anything like that.

    Also go to India or get some college kid to run it for cheap. That is what any MBA will tell you and it is not like it is hard or anything to do.

    • Just get someone competent to run IT. And it's not just IT, security covers all departments. R&D that makes products that should have security, operations with external facing servers that should have security, servers that retain customer data, and so forth.

      The problem in IT is that it's usually run by someone who just repeats Microsoft marketing and industry buzzwords. There's no real leadership except to pass along the same cookie-cutter solutions to their cookie-cutter employees. That often makes

      • Designating IT to focus on security is designating an opponent within the organization, somebody whose agenda opposes the corporation's. The head of IT should be an advisor on IT security, due to how intertwined it is in the daily operations of a business. The advisor should be respected and heard.
        • Everyone's got a different definition of IT. To me, IT isn't intertwined in the daily operations, it is only intertwined in keeping the corporate network and computers running. Most of that type of IT is getting sourced, overseas or to specialized companies or cloud services. They don't have the corporation's best interests in mind; they won't lose their job for very long if there's a breach, their stock options won't vanish, etc. That's not the group you want protecting the company's family jewels.

          Of cou

          • The IT staff isn't supposed to be actively intertwined in the daily operations. The IT infrastructure is usually intertwined with the daily operations, and the security of that IT infrastructure must be as seamlessly as possible intertwined into the daily operations of any IT resources.

            I agree that there is nothing inherent in IT that makes them advisors. However, I would propose that one who is not capable of advising should not be head of IT. I maintain that the ability to assess risk and advise strateg
            • No, that is the problem: IT is supposed to be part of the organization. Only in the last decade has this changed, as IT became involved with business processing and critical operations. If IT is not qualified to handle security, then who is?

              IT NEEDS to be consulted and be part of the process, or you end up with a nightmare like this []. How much money do you think that airhead marketing manager makes in that video, and how successful do you think the new website in the video linked above will be?

              Hell the poor IT web developer

      • Whoosh.

        My point is not about MS vs non MS people in I.T. making decisions.

        Rather, it is the moronic, nightmarish scenario of non-IT folks writing requirements [] with zero input from IT on a shoestring budget. In the link above I loved the phrase "...if you email the Vice President on the requirements your contract will be reviewed .."

    • After all it is not like they are judged by any other metric besides spending money or anything like that.

      Also go to India or get some college kid to run it for cheap. That is what any MBA will tell you and it is not like it is hard or anything to do.

      This is painful because it is true.

  • Insurance (Score:5, Insightful)

    by Krishnoid ( 984597 ) on Wednesday October 18, 2017 @07:29PM (#55393223) Journal

    Insurance translates risk into dollars into quarterly financials.

    • Investors who don't understand computer security can ask what's being done to mitigate risk ("You have fire insurance, why not cybersecurity insurance?")
    • The CEO/CFO/board sees that if they buy insurance, they can better risk-manage cybersecurity breakins, and can provide an answer to the institutional investors.
    • The insurance actuaries can insist on audits to make sure the software/server/network infrastructure is secured well enough to be insurable.
    • The rank-and-file IT get stuck with implementing it, and employees get stuck suffering with increased security.

    Moral of the story: start training for a job as an actuary [].

    • by rtb61 ( 674572 )

      With the current state of software warranties, I could not imagine how insurance against hacking events could possibly exist. The initial assessment actively threatens the employment of IT staff; they are being judged and, make no mistake, first on the audit list is fire and hire. That also plays out to the rest of the staff, as any employee with access to at-risk hardware can trigger a security breach.

      Sure I could imagine fly by night insurance who take premiums and never make payouts, using lawyers to fend them

      • Your cynical take on the insurance industry kind of makes me want to say "fuck it, hack all of the companies, all of the time, and let God sort it out."

      • Such insurance exists here in the UK. I think the business model is to take in high premiums, pay out to as few people as possible, and only pay a relatively small amount (although I may be wrong; the number quoted to me for my contracting company was too high to be worth doing, and let's face it, I'm not much of an IT pro if I need such insurance ;-). They pretty much just gave me a price, and didn't ask any real questions about my competence in such matters by the way - I guess they just looked at my

    • Exactly this. Just like we require liability insurance to drive a car, if we required PCI insurance to accept credit cards then there would be a dollar amount associated with it. Currently, PCI compliance is required (and in some cases just recommended), but failure to be PCI compliant is only a problem if you get caught. As much as I hate insurance companies sometimes, getting them involved would make it so that if a company wanted lower premiums, they would have to actively try to mitigate the risks.

      • by plover ( 150551 )

        Cyber insurance rates are already risk based. The insurance company will set your rate based on the level of competence in security you demonstrate.

        • Cyber insurance rates are already risk based. The insurance company will set your rate based on the level of competence in security you demonstrate.

          Yes, but Cyber Insurance is not required and most businesses don't have it.
          Requiring all businesses to carry it would make "level of competence you demonstrate" a number on the balance sheet
          where currently cyber risk is a vague potential future cost that most companies ignore.

    • Re:Insurance (Score:5, Informative)

      by TubeSteak ( 669689 ) on Wednesday October 18, 2017 @08:13PM (#55393409) Journal

      The insurance actuaries can insist on audits

      Target was certified as PCI compliant a few months before they were hacked.
      The only problem is that the PCI audit would never have caught the memory scrapers that were used to infect Target's point-of-sale systems.

      Most of the major credit card hacks in recent memory involve companies who've been certified as PCI compliant.

      I'm not against audits, but it should be nakedly obvious that the audits we have are not the audits we need.

      All of which is to say that having insurance companies cook up security standards doesn't mean anything will become more secure. /The PCI standard has a section on vulnerability scanning and penetration testing. It should be considered the bare minimum, not a reasonable security goal.

      • I worked for a company that falsified their PCI compliance. All you have to do is lie. Most of the auditors are simply box checkers and there's never any real test. Until people start going to prison for these offenses, the biggest punishment to the companies will be paying for shitty "identity theft protection" for a year.
      • Say it with me, now, "compliance is not security."

    • It is just one small department. They're not R&D; you don't want the IT help desk guys designing the next physical product that gets sold in stores and needs to have security. There are operations with servers that need security, and that's very often not IT, and even when it is IT you will see IT split into several sub-departments. IT tends to be focused on how quickly they can outsource their workers, put all the data into someone else's cloud, and cash in on a big bonus after saving all that money

      • #And for security, you never want "rank and file" implementing it. The rank and file don't understand security.#


        Jolly good joke. It's the 'rank and file' that are going to get tricked by the phishers and 'social hackers'. It's the 'rank and file' who are going to set their password to 'password', or put their password on a post-it at the bottom of their monitor. Or save it to their GoogleDocs document named 'security' where they store ALL of their passwords...

        Security is EVERYONE's job. Top to bott

        • Ok, we're talking about two things. First is individual responsibility for security; i.e., training and following the training. I was talking about the people actually creating and implementing a security policy.

          Going back to the original topic, you need the company to focus on security. Without that happening, the rank and file won't be coordinated and will be doing their own thing. If you've got only 75% of your rank and file following good security practices, then that's not very good.

    • The problem with this is that the costs will probably not be large enough to motivate a significant change in behaviour, because hackers go after the details of customers, not the money of the company being hacked. A faster and better way to do this would be to have legislated statutory minimum damages for each individual's details which are hacked. Say $10k for sensitive data like a credit card number, with lower amounts for just an email address or name, etc.

      This will immediately establish the financial co
  • Software isn't this new thing that nobody really understands, so "as-is, use at your own risk" should no longer be applicable. If you sell insecure crap and it gets hacked, your company should be responsible. Just like releasing food that poisons, electronics that electrocute, or clothing that lets it all hang out. Even Lululemon had to recall yoga pants because the fabric showed too much when stretched...
  • by aberglas ( 991072 ) on Wednesday October 18, 2017 @07:38PM (#55393247)

    Features are what counts. The more features software has, the better it is. And add more layers, because abstraction and indirection are good. And most importantly, make it bigger and more complex because everyone knows that code is good so the more code the better.

    Eventually not even the hackers will understand it and we will all be safe.

  • by sexconker ( 1179573 ) on Wednesday October 18, 2017 @07:39PM (#55393253)

    Everyone at the top (CXO, board members, top paid employees based on cash plus stock options plus etc.) serves 1 day in prison for every instance of leaked info.
    Chase it down through subsidiaries, contractors, shell corporations, spouses, etc.

    The other option is mob justice. (Which is fine by me.)

    • Re:Easy (Score:5, Insightful)

      by Actually, I do RTFA ( 1058596 ) on Wednesday October 18, 2017 @08:54PM (#55393623)

      We shouldn't punish leaks, we should punish bad security. Heartbleed was unpredictable. There's a difference between unpatched WPA2 today and one week ago.

      • by emil ( 695 )

        I am really hoping that Shibby brings out a new Tomato [] sometime soon, but if anybody is going to be punished, it should be the authors of the WPA2 standard.

    • Sure feed the prison industrial complex. That has served America so well in the past.

      I mean, seriously, have you not realised that prison doesn't seem to solve anything? Your recidivism numbers alone should show that.

  • Haul some C-level execs away in handcuffs. And don't put them in some white-collar resort prison either.

    • Haul some C-level execs away in handcuffs.

      Then we will have to pay executives a lot more money to accept that risk.

      And don't put them in some white-collar resort prison either.

      America already imprisons four times as many people as any other 1st world country. If we are going to start putting people in prison for being stupid, we are going to need a lot more prisons.

      • We're just putting the wrong people in prison. We imprison people for using drugs who are hurting no one but themselves, but if a CEO screws up people's lives, they often get a bonus for it.

        There will always be someone who accepts the risk. You should not raise the pay; it is better to get someone who accepts the CEO job at lower pay and is good at it than someone who demands huge compensation and then plays golf all day.

  • The CIO wants an evolving, always up-to-date black box of security that will never get between him and quarterly stock option rewards. It would also be great if it allowed him to lay off everyone but the sales force and that design guy with the retro eyewear who knows all the girls at the club.
  • With the perception that security has no financial returns

    So make it. Your company released data on 32 million people due to shoddy security? Your company will have to contact each one directly, individually, and cut them a check for $1000[1] on top of whatever monitoring services they might need now. Same thing if it's only 32 people.

    This won't fix IoT issues, of course, but there's a different mechanism that could: cost internalizing. Require companies to pay into a fund for proper disposal of their produ

    • Your company will have to contact each one directly, individually, and cut them a check for $1000

      Get a grip on reality. Equifax had a profit of $488M last year. That is $3 per individual leak. Since their profits are likely to drastically decline this year, even expecting them to be able to pay $3 is unrealistic.
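      For scale, the parent's figure works out roughly as follows (a sketch, not from the comment itself; the 145.5 million affected-consumer count is an assumed, widely reported number for the 2017 Equifax breach):

      ```python
      profit = 488_000_000      # Equifax annual profit cited above, in USD
      affected = 145_500_000    # assumed count of affected individuals (widely reported figure)

      # Profit available per leaked record, if the whole year's profit were paid out
      per_person = profit / affected
      print(f"${per_person:.2f} of annual profit per leaked individual")
      ```

      So even confiscating an entire year of profit yields only a few dollars per victim, which is the commenter's point about a $1000-per-person payout being unpayable.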

      • Re:The Pocket Book (Score:4, Insightful)

        by RyoShin ( 610051 ) <(tukaro) (at) (> on Wednesday October 18, 2017 @08:23PM (#55393475) Homepage Journal

        Get a grip on reality.

        Them, first. The amount I gave is quite high and could be lower, but that's sort of how fines (should) work: If you set a fine that is under the profit margin for the complicit activity, then the fine is just accepted as a part of the business (because the underhanded tactic still pays out more overall than compliance would.) Equifax is not losing corporate business AFAIK and is even getting some returns from credit monitoring services that have seen a spike of enrollments, so unless the fallout lands on them they'll happily ignore the reality many people are now in, in deference to the next quarter.

        I would be satisfied with Equifax completely shutting down, so let's agree to lower it to only $100/person and they can implode slightly less. I don't believe we have any sort of "execution" laws for corporate charters in this country, but more than a few really should have been and Equifax joins this prestigious group.

  • If they aren't already interested in paying attention to security, pointing out where their security is flawed won't change anything. At best, they'll just think you're acting like some kind of know-it-all, and at worst, they might make your life thereafter somewhat unpleasant.

    If a company doesn't pay attention to security, run in the other direction. Get as far away from them as you can.

    • by ShanghaiBill ( 739463 ) on Wednesday October 18, 2017 @08:21PM (#55393467)

      If a company doesn't pay attention to security, run in the other direction.

      How do you know which companies are paying attention? Also, how does one "run" from Equifax? You are in their DB, whether you choose to be or not.

      • by mark-t ( 151149 )
        Please note, I said "as you can"... obviously it would not apply if one has no choice in their affiliation, but it can still often be the case that one will have such a choice.
  • Arguable statement (Score:5, Insightful)

    by war4peace ( 1628283 ) on Wednesday October 18, 2017 @07:49PM (#55393307)

    "This especially applies to IoT devices where it is in their financial interest to make un-upgradable devices, forcing people to toss their 1.0 lightbulbs and buy 1.0.1 lightbulbs to fix a security issue, as opposed to making them secure in the first place, or having an upgrade mechanism."

    It's actually more complicated than this. You need to factor in the customer.
    The vast majority of customers for above-mentioned devices are "IT security-impaired". In layman's terms, they have no fucking clue (I don't blame them by saying this, it's just the way things are). So they vote with their wallet.

    If company A is very security-focused and produces aLightbulb with upgradeable firmware and active development for said firmware, but company B doesn't give a shit, you will end up with bLightbulb which costs 10 times less than aLightbulb. Guess which company would go out of business?
    IoT is filled to the brim with customers looking for the cheaper alternative, and security isn't a driving factor to motivate them to buy the more expensive product. Getting companies to agree on a security standard? Good luck with that, there's always going to be the profit-oriented company willing to sell their lightbulbs 15% cheaper, and have them cost 4 times less, undercutting and eventually buying off competition.

    Not saying I agree with how things are, but then again, it's how they are.

    • by arth1 ( 260657 ) on Wednesday October 18, 2017 @08:49PM (#55393585) Homepage Journal

      Getting companies to agree on a security standard? Good luck with that,

      Blackhats love security standards. That's documentation that makes life much simpler.
      It's like a HOA that mandates that all front doors must have locks of one particular brand, and that audible alarms must be tested every 30 days.

    • Metrics (Score:5, Interesting)

      by Cassini2 ( 956052 ) on Wednesday October 18, 2017 @08:54PM (#55393617)

      A key problem is that the IT industry lacks useful metrics. For instance:
      - We have Big O notation, but the compiler doesn't automatically detect algorithmic complexity. As such, no one can easily tell if you have written a program (algorithm) that scales well or scales poorly. This is a big problem for non-trivial pieces of code, because it is very easy to call an O(n) library function inside a "tight" O(n) loop, making the whole thing O(n^2).
      - Memory management is so well hidden in modern environments, that it is often impossible to tell how efficiently memory is used. It's a variation on the Big O notation problem. Thus, memory usage in a large framework (C# or Java) can obscure memory leaks and O(n^2) memory usage problems, until n becomes sufficiently large (in full production).
      - What metric measures security? Security doesn't even have the benefit of Big O notation.
      - In a big program, it is often not even possible to tell what code paths are actually being used. Run-time profiling helps a great deal, however there are privacy issues.
      - There is an entire minefield around programs that include interpreters (compilers) to execute user-generated code blocks. For a program of sufficient size, it is necessary to do this. However, it is a security nightmare. How do you even tell, in the context of a large application, whether it is possible for someone with normal use rights to execute malicious code?
      - Almost every programming resume claims that the person is proficient in HTML, Java, and C++. How do you tell which programmers are good? In the context of a given project, what does good mean anyway?

      Some metrics are present in software, but they are often ridiculed:
      - # of lines of code
      - Execution time. Specifically, execution time does not matter if the task is sufficiently fast that no one cares. If you have a Big O scaling problem, it is often possible to ship software and have no one notice until it is in production.

      Many other industries have methods of measuring quality and suitability. In software, such measurement exists, but not in an easy-to-use, obvious, and mature form.
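      The hidden-complexity point above can be sketched in a few lines of Python (a hypothetical dedupe routine, names invented for illustration): both functions look equally innocent, but the list-based membership test hides an O(n) scan inside the loop.

      ```python
      import timeit

      def dedupe_list(items):
          """Accidentally O(n^2): 'x not in seen' scans the whole list each iteration."""
          seen, out = [], []
          for x in items:
              if x not in seen:  # hidden O(n) linear scan
                  seen.append(x)
                  out.append(x)
          return out

      def dedupe_set(items):
          """O(n): the same loop, but membership is an O(1) hash lookup on average."""
          seen, out = set(), []
          for x in items:
              if x not in seen:  # O(1) average-case lookup
                  seen.add(x)
                  out.append(x)
          return out

      data = list(range(5000))
      t_list = timeit.timeit(lambda: dedupe_list(data), number=3)
      t_set = timeit.timeit(lambda: dedupe_set(data), number=3)
      print(f"list-based: {t_list:.3f}s  set-based: {t_set:.3f}s")
      ```

      Nothing in the compiler or a code review diff flags the difference; only measurement at realistic scale does, which is exactly the missing-metrics complaint.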

    • by plover ( 150551 )

      Getting companies to agree on a security standard? Good luck with that, there's always going to be the profit-oriented company willing to sell their lightbulbs 15% cheaper, and have them cost 4 times less, undercutting and eventually buying off competition.

      Right now, the designers of WiFi light bulbs throw a SoC in the socket and a few LEDs on the heatsink, and because there's no standard, each company makes up their own bare-bones data connection for "on/off", and supplies a clunky iOS and Android app. Nobody reviews the protocols, they shove whatever no-name distro and web server they can think of into the SoC, and ship it.

      So the way to improve on this is to have an externally defined standard for IoT devices. The standards need to address all of the secu

      • A nifty idea. But what consumer will buy the STIG-certified product when the cheap, security-less knock-off is available for 60% of the price (having been made at 10% of the cost)? Does anyone really check for the UL label any more?
      • "Security certified by The STIG products sold here" I would buy any product that claimed to be protected by a dude in a white racing uniform and opaque helmet.
    • by AmiMoJo ( 196126 )

      We should treat IoT security like we treat safety. In most places it doesn't matter how old the device is: if it isn't safe, the manufacturer has to do something about it. Recall those 10 year old cars; fix those 15 year old washing machines that occasionally catch fire. Of course, in the latter case they would likely just offer you a discount on a new one, but at least you were warned it might catch fire and got a few bucks.

      • How many of the 10-year old cars or 15-year old washing machines are actually recalled in practice?
        How often does Average Joe check whether their $DEVICE is secure? I never see regular people scouring the Internet to verify whether their phone, smart watch, TV, router, you-name-it is secure or has an available firmware update.

  • Devices need to have OS and app code split into their own updates so it's easier to push out updates.

  • Headline kinda sez it all, even though it will never happen.

    If 1-2 companies become memories because they got breached, CxOs might give IT departments the resources they need to prevent breaches.
  • "this has not happened yet (mainly because the bad guys know that this would get organizations to shut their barn doors, stopping the gravy train.)"

    So companies could do it if they knew it was a problem, but they don't because they're blissfully unaware, and the only people that would tell them won't?

    • Not only your point, but this also assumes "the bad guys" are a monolithic group, which is clearly not the case. Criminals are just as susceptible to game theory as anyone else. If Group B can grab a fortune before Group A, then they will.

  • Buildings don't collapse, trains don't crash and planes don't fall out of the sky because there are strict government standards on how to make one. These standards cover the software used in them as well, and we now actually have some reasonably good standard practices on how to make software reliable. Unfortunately, reliability and security are not the same, so what's needed is a set of standards that describe how to make secure networks. I fully understand that's not an easy job, but I'm pretty sure that

  • Require insurance (Score:5, Interesting)

    by GuB-42 ( 2483988 ) on Wednesday October 18, 2017 @08:05PM (#55393375)

    When you drive a car, the law requires you to have insurance, because you can do a lot of damage to others you won't be able to pay for if it happens.
    The idea here is to impose heavy damages in case of a breach and require companies to be insured to some amount. The insurance requirement is a way to prevent companies from just taking chances and getting away with bankruptcy if bad things happen.
    Another advantage is that insurance companies don't want their customers to get hacked, so that they can offer attractive prices and make a profit. As a result, they will make sure that security best practices are implemented, in the same way that theft insurers require certain locks.

    To sum up, with mandatory insurance:
    - Hacked users will be compensated
    - Insurance companies will have real financial incentives to find ways of making things more secure
    - Insured companies will do their best to implement best practice as it will most likely lower their premiums. The worst may not be able to get insured at all and risk legal sanctions even before the inevitable hack happens

  • by CaptainDork ( 3678879 ) on Wednesday October 18, 2017 @08:06PM (#55393379)

    ... and, you're welcome.

    • Yeah, because America, the world's capital of litigation, is such a shining example of companies who go out of their way to care about the interests of their customers.

      Your comment would be laughable if it didn't make so many people cry while assuming the fetal position.

      • Easy on the trigger, OK?

        Where the answer is, "litigation," why do businesses have fire codes that include extinguishers, sprinklers, exits, occupancy limits, construction firewalls, material codes, and regular inspections?

        Where the answer is, "litigation," why is asbestos no longer allowed in buildings and why do asbestos companies continue to pay for health care?

        Where the answer is, "litigation," why is silicosis a declining disease because of OSHA regulations and why are industries continuing to pay for h

        • Wait, we're talking about litigation and you repeatedly cite examples of government intervention. At least you agree with me.

          But then at the end you go back to litigation being good. Oooh yay, a class action suit. I'm sooo looking forward to my $5 gift card I can redeem the next time someone opens up a credit card in my name.

          Litigation works well for the ambulance chaser looking for a get rich quick windfall. It does fuck all to companies who resolve the issue by putting up a warning label.

          • We do agree, mostly.

            I spent most of my career working for law firms.

            While I mostly stayed away from case details, and I certainly agree that class action is a cluster fuck, my examples were intended to illustrate what happens AFTER (or instead of) class action.

            In the early days when businesses were transitioning to fire code adoption, those who failed to comply were sued out the ass and punitive damages kicked in.

            Those legal matters were one-on-one where juries decided that damage multipliers would be effec

    • by AmiMoJo ( 196126 )

      The problem with litigation is that it's often hard to prove actual harm and financial loss. You could join a class action but then all you get is a $2.50 voucher after a decade or so.

      • You're talking about compensation to individuals, which I agree is only a small irritant.

        I'm talking about litigation that leads to mandatory compliance on a much larger scale.

        I've posted this before, but think of fire codes at businesses:

        Via litigation, the families of those who died were individually compensated. The injured were provided with health care.

        As it became clear that businesses didn't really give a shit, the litigation moved away from individual incidents toward more general solutions aimed at

  • Unconditional and immediate forfeiture of $10000 to every customer who got their data stolen as a result of poor security practices.

  • That's the *only* way to get corporations to do anything.

    One idea would be something similar to Underwriters Labs, except would grade products, perhaps with expanded standards above the "pass/fail" mark

    And let me guess, the compliance and governing bodies would be staffed by the participating corporations?

    $1 per name.
    $5 per address.
    $5 per phone number.
    $10 per SSN.
    $20 per CC number.

    Anything else is lip service. And the fines go to the offended parties.

    • So if Ubuntu or Linux Mint release software with a 'bug', those are the costs that will be imposed on them? Or is it just websites? So any person or organization that has a public presence on a website that obtains any information whatsoever from the public needs an expensive liability insurance policy.

      Slashdot would be a significantly more expensive site to operate. It and most of the web would shut down. There would be a few big conglomerates like Amazon, Facebook, Microsoft and Yahoo that could affor

      • So if Ubuntu or Linux Mint release software with a 'bug', those are the costs that will be imposed on them?

        First, are they using a supported Linux dist, with a support license? If they are, then it's back on the support company.

        If they are using free and open source, then yes, absolutely. They saved millions of dollars in development by running OSS over a commercial, supported solution. They chose to cost cut that corner. It's their fault.

  • The problem with the idea of certifying security is that security is a constantly moving target. Two weeks ago, WPA was thought to be secure and is part of PCI-DSS (basically one of the main security "certifications" out there). Today, that's not so anymore. And while some might want to argue about this particular incident and how much it really matters, it's more the idea than the single example. The list of CVEs being published every year is freaggin massive. Think of that first MD5 collision. We don't c

  • Hold them accountable. Those C level assholes at Equifax should be facing serious jail time. But we all know they won't.

  • Feedback loops define behavior, so the answer is simple: create feedback loops for bad security. There are many ways to do this. One way would be to turn every ill-secured IoT device against its maker and perform a periodic DoS attack on the company website and/or the sites that sell them. This would result in a rising level of traffic that will cost the company money, which is the exact reason why they didn't bother to secure the devices. However, if you wish to force government regulation then you need o

  • by LazLong ( 757 ) on Wednesday October 18, 2017 @09:16PM (#55393739) Homepage

    Create regulations that provide for large fines. Companies rarely care about anything unless it costs them money.

    • Regulations and large fines would be leveraged against 'Free Software' and 'Open Source.'

      Do you want a regulatory agency to be required to rubberstamp all software that is released to the public?

      A new version of Linux could probably come out every five years under such a system.

  • I think it's really simple. Money is what motivates pretty much everything. So when a company's negligence results in criminal activity adversely affecting a person, that company will need to pay to make it right. Make you whole again, plain and simple, whatever it takes. They pay for it all.

    Also, I think making security a marketing bullet point would help. Companies that get hacked get a reputation for getting hacked and die off. Companies that exemplify good security by not getting hacked get a sort of 'y

  • Another simple helping hand: Bug-bounties need to be hefty. They need to pay more than crime does. Until they do, people who find this stuff will sell it to criminals instead of you. You gotta pay more than the criminals for your sloppiness.

    • Also, treat bug hunters a little better, eh? When Timmy emails you about your stupid php mistakes, instead of calling the FBI, having him arrested, dragged into court, prosecuted, jailed, while your bug is still there, how about instead patting Timmy on the head and giving him a 5 or 6 digit payoff for telling you and tell him to keep looking for more mistakes.

  • by mentil ( 1748130 ) on Wednesday October 18, 2017 @09:54PM (#55393927)

    Not to worry, the perfectly-informed consumer* will choose not to buy insecure products, causing only perfectly secure devices to survive in the marketplace.

    *Spherical, and in a vacuum

  • It will change when IT is an actual profession and regulations demand it.

  • by Cyberpunk Reality ( 4231325 ) on Wednesday October 18, 2017 @11:36PM (#55394263)
    It culminated on Nov 8, 2016. And it is so well done that most Americans don't even realize we're under attack.
  • by Tom ( 822 ) on Thursday October 19, 2017 @01:29AM (#55394529) Homepage Journal

    You can see right now in Europe how to do it. We've tried it the hard way for 30 years; it didn't work very well. For about the same time we tried to convince politicians that this is a danger; not much happened. Oh yeah, one day SOX happened and that brought a tiny benefit, but mostly on the paperwork and consulting-hours side.

    In Europe, right now massive investments into information security are being made, because of two laws that politicians have finally passed, both at the EU level. One is the General Data Protection Regulation and the other is the Council Directive "on the identification and designation of European critical infrastructures and the assessment of the need to improve their protection". You have an equivalent (referenced in the EU) from the NIST.

    The fundamental change, and that answers your question, is that violations of these laws, and especially data breaches or other infosec events that could have been prevented with proper security, now carry massive fines. Let me quantify "massive":

    €20 million or 4% of global annual turnover for the preceding financial year, whichever is the greater

    The magic bullet is the 4% rule. It refers to global revenue, and it refers to corporate revenue - no more reducing risk by separating your corporation into tiny "independent" companies. If a five-person subsidiary of Facebook suffers a severe data breach, the fine can be $345 million.
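The fine ceiling quoted above can be sketched as a one-line formula, assuming the figure is already converted into euros (this is just the "whichever is greater" rule, not legal advice):

```python
# Sketch of the GDPR administrative-fine ceiling described above:
# the greater of EUR 20 million or 4% of global annual turnover.
def gdpr_fine_cap(global_annual_turnover_eur):
    return max(20_000_000, 0.04 * global_annual_turnover_eur)
```

So a company with €100 million in turnover is capped at the €20 million floor, while one with €10 billion faces a €400 million ceiling; the 4% branch is what makes the rule bite for large corporations.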

    Also, the law puts the legal liability on top-level management. That is the second magic bullet. Put CEOs and directors on the front line. Unless they can demonstrate that they took steps to comply with the technical and organisational requirements, they could go to jail. Now that gets top-level management moving.

    So the simple answer is: Hit them where it hurts. Money and personal liability. Take away the corporate shield and diffusion.

    Disclaimer: I do this stuff for a living. We are currently being drowned in projects to implement ISMSs and the GDPR is a main driver behind that.


    Addendum: This gets you basic security levels. As soon as the risk management labels the residual risk as acceptable, that's it. My personal opinion is that our security is still shoddy at those levels, and the main reason we're not all dead is that most hackers are imbeciles and the only reason they can make a living with their laughable hacking skills is that security is such a joke. For illustration, look at the typical spam / phishing mails you get. Who would fall for that shit full of spelling errors, grammar mistakes and my-blind-grandma-could-spot-this forgery? The answer is: If you send it to enough people, you will find enough idiots who do.

    Once we have a basic security level across the board, the game will change. Lots of "hackers" will have to go back serving burgers and fries, but those with any actual skills will step up their game. And then we'll be in a world of hurt. There'll be an Equifax every month. My daily rate will probably skyrocket because supply and demand, but I'm still not looking forward to that.

    If you are serious about security, as the saying goes you don't have to run faster than the bear, only faster than your friends. But don't walk just because they do. Start running now, because once they are eaten, you have to run faster than the bear.

    • GDPR and the-other-one-with-a-crappy-name are certainly getting a lot of places shitting the bed. It'll be interesting to see who gets prosecuted under these new rules first - and when that is.

  • The only "security" I see is mainly protection from "jailbreaking," so legal owners of a product can't use or upgrade their devices.

    That's not what it's there for.

    It's there for two reasons:

    1. To keep you from F-ing around with the baseband firmware for the SDR.

    This prevents you and a bunch of your Jihadi buddies staging a terrorist attack, and then interfering with the ability of emergency responders to actually react effectively to the attack in order to mitigate damages.

    People do not want you dicking with the SDR, because preventing you from doing that keeps you off the emergency responder and military frequencies with commodity devi

  • Make them financially responsible. Your lack of security cost your customers x amount of money, pay 1.5x as a fine. Customers get their money, government gets the .5, companies know what will happen if they get careless, or stop paying attention. Companies want the same rights as individuals, make them take the same responsibilities.
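The 1.5x split proposed above works out to a simple bit of arithmetic, assuming "x" is the total customer losses (the function name and split are just an illustration of the comment's idea):

```python
# Illustrative split of the proposed 1.5x penalty: customers are made
# whole (1.0x of their losses), the government keeps the remaining 0.5x.
def breach_penalty(customer_losses):
    total = 1.5 * customer_losses
    restitution = customer_losses           # returned to customers in full
    government_share = total - restitution  # the 0.5x kept by the government
    return restitution, government_share
```

For example, $10 million in customer losses would mean a $15 million penalty: $10 million back to customers and $5 million to the government.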
  • Really impressive results with Kaspersky.

  • (0) Make generic "consumer waivers" And "compulsory arbitration of disputes involving company mishandling of customer information" illegal. Consumers may NOT be required to waive rights to the privacy of their personal information from dissemination by potential criminals and unauthorized individuals in a generic manner or by a "click through" or "default" agreement, Just to use or purchase a product or service.

    (1) Shift the burden of proof so companies cannot imply non-breach by saying "We found no

  • Pass a couple of laws making it clear that companies are liable for any costs resulting from security failures of their products, and making it easy for consumers to file and collect on such claims.

    Even more important: make it easier to nail company executives personally, if one can show that executives were negligent. Equifax is the perfect example: There is plenty of evidence that the CxOs were informed of failures in their processes as much as a year in advance of the first breach. Yet they did nothing.

  • What a man can make, a man can break.
  • by sad_ ( 7868 )

    I think the EU is moving towards a law where a company must at least provide X years of support for security issues (not sure on this, though). Unless you put these things into law and include hefty fines for not following said laws, companies will just keep on ignoring making secure devices.

  • Accountability: It not only works for security, it works for many other things as well. It starts with a kid taking a cookie and goes as far as you take it.

    If there is no accountability, there was no wrongdoing in the first place.

  • There is nothing that will work in the foreseeable future. The public does not care enough, and the politicians have a vested interest in not caring.

    Laws will not be passed because both parties are owned by corporate interests. Sometimes the corporate interest is split on an issue, and something can happen. But virtually all corporations will oppose regulations that require security---as well as laws that establish greater liability for poor security.

    The Equifax breach is the largest compromise of public da

"If it's not loud, it doesn't work!" -- Blank Reg, from "Max Headroom"