
Ask Slashdot: How Can We Create a Culture of Secure Behavior? 169

An anonymous reader writes "Despite the high news coverage that large breaches receive, and despite tales told by their friends about losing their laptops for a few days while a malware infection is cleared up, employees generally believe they are immune to security risks. They think those types of things happen to other, less careful people. Training users how to properly create and store strong passwords, and putting measures in place that tell individuals the password they've created is 'weak' can help change behavior. But how do we embed this training in our culture?"
  • by Anrego ( 830717 ) * on Tuesday April 22, 2014 @02:27PM (#46816973)

    Users are gonna do stupid things when it comes to security. Trying to fix that is a noble goal, but good luck.

    The direction we need to keep going towards is idiot proofing. Assume the user will screw up and mitigate or eliminate the impact.

    • by Anonymous Coward

      Amen. And it is not just about idiot users either. It is basic human psychology. We are all wired to do insecure things at times. We need to engineer around this vulnerability.

      • In general, this is because IT departments are dictatorial about forcing users to do "security" requirements that do little or nothing to improve security.

      • by lgw ( 121541 ) on Tuesday April 22, 2014 @04:09PM (#46817937) Journal

        Preach it! You can't fix a software problem by fixing the users. Requirements for strong passwords have no place in modern security. A 4-digit PIN works great for my ATM card, because of the combination of:
        * Two-factor auth
        * Good, fast system for repudiation and reclamation
        * Many, many back-end processes in place to limit harm

        Is your IT system set up this way? Why not? Two-factor auth is easy, off-the-shelf stuff these days. Sharply limit password tries before account lockout, and abandon any thought of strong passwords, changing passwords, and so on - all of that is accomplished by the certs (and rotation thereof) on the second factor. The user's password is just there to make it OK if the second factor is stolen, during the time before the user reports it.

        Everyone's "real" password is crypto-strong, because there's a properly-generated cert involved, rotated at IT's discretion with no burden on the user. But people only need to remember something easy, just something that would take more than 3 tries to guess.
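A minimal sketch of the scheme described above, assuming a strong IT-provisioned device secret combined with a short user password, with lockout applied to the device rather than the person (all class and variable names are illustrative, not from any real product):

```python
import hmac
import hashlib
import secrets

MAX_TRIES = 3

class Device:
    """Simulates the second factor: a strong secret provisioned and rotated by IT."""
    def __init__(self):
        self.secret = secrets.token_bytes(32)  # never seen by the user
        self.failed_tries = 0
        self.locked = False

class AuthServer:
    def __init__(self):
        self.users = {}  # username -> (enrolled device, derived PIN hash)

    def enroll(self, username, device, pin):
        # The weak PIN is only ever useful combined with the device secret.
        pin_hash = hashlib.pbkdf2_hmac("sha256", pin.encode(), device.secret, 100_000)
        self.users[username] = (device, pin_hash)

    def login(self, username, device, pin):
        enrolled_device, pin_hash = self.users[username]
        # Both factors must be present: the right device AND the PIN.
        if device is not enrolled_device or device.locked:
            return False
        candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), device.secret, 100_000)
        if hmac.compare_digest(candidate, pin_hash):
            device.failed_tries = 0
            return True
        device.failed_tries += 1
        if device.failed_tries >= MAX_TRIES:
            device.locked = True  # lock the device, not the account
        return False

server = AuthServer()
dev = Device()
server.enroll("alice", dev, "1234")
print(server.login("alice", dev, "1234"))  # True
print(server.login("alice", dev, "0000"))  # False
```

Without the device (and its 256-bit secret), guessing the weak PIN is useless; with the device stolen, the attacker gets only three PIN guesses before it locks.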

        • by PRMan ( 959735 ) on Tuesday April 22, 2014 @04:41PM (#46818169)
          How many ATM heists and skimmers have there been over the past 10 years? I'd hardly say it's working WELL.
          • by lgw ( 121541 ) on Tuesday April 22, 2014 @05:01PM (#46818317) Journal

            It's working quite well. The cost of all that is very low on the scale of the banks and that's what matters. It's simply not about "0 incidents", it's about limiting the damage to little enough that it's not important.

            Partly that depends on the bank, of course, as some are total dicks about it if your card gets skimmed, but that's a customer service problem. Detecting the problem, limiting the cost, and so on are all important systems that banks take seriously. And the banks are gradually making systemic, low cost changes to reduce the ease of skimming, or of hacking an ATM, but they're not in a hurry as it's just not that expensive of a problem (how many ATM heists to equal a single mortgage default?). More importantly, they're not trying to fix their customers!

        • by Lotana ( 842533 ) on Tuesday April 22, 2014 @06:26PM (#46818829)

          Sharply limit password tries before account lockout

          Let me introduce you to a very simple business plan:

          1. Get the usernames of some company that is making good money. Not too hard; the majority of them should be first/last names concatenated.

          2. Keep logging in with those usernames and the password "password". Watch as IT is brought to its knees trying to deal with hundreds of employees being constantly locked out.

          3. Contact the company asking for a good sum for you to stop.

          4. PROFIT!!!

          In essence this is a very trivial DoS attack. This is the reason why login attempts get long pauses before letting you try again and why accounts don't get locked down.
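The throttling approach mentioned in the last paragraph is often implemented as an exponentially growing delay per account instead of a hard lockout. A rough sketch (the thresholds are arbitrary; this is an illustration, not production code):

```python
import time

class ThrottledLogin:
    """Impose an exponentially growing delay per account instead of locking it."""
    def __init__(self):
        self.failures = {}  # username -> (failure_count, earliest_next_attempt)

    def attempt_allowed(self, username, now=None):
        now = time.monotonic() if now is None else now
        _, next_ok = self.failures.get(username, (0, 0.0))
        return now >= next_ok

    def record_failure(self, username, now=None):
        now = time.monotonic() if now is None else now
        count, _ = self.failures.get(username, (0, 0.0))
        count += 1
        delay = min(2 ** count, 300)  # 2s, 4s, 8s... capped at 5 minutes
        self.failures[username] = (count, now + delay)

    def record_success(self, username):
        self.failures.pop(username, None)

t = ThrottledLogin()
assert t.attempt_allowed("bob", now=0.0)
t.record_failure("bob", now=0.0)
assert not t.attempt_allowed("bob", now=1.0)  # still inside the 2 s penalty
assert t.attempt_allowed("bob", now=3.0)      # penalty expired
```

A legitimate user who fat-fingers a password waits a few seconds; the DoS scheme above slows every attacker to a crawl but never leaves an account permanently unusable.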

          • by lgw ( 121541 )

            As the AC asked - how are you making the attempts without the second factor? And you lock out the device, not the person, of course.

            • by Lotana ( 842533 )

              Good point. I misread the original post.

              I was under the impression that the second level is accessed only after the initial weak password passes.

              My bad.

        • Is your IT system set up this way? Why not? Two-factor auth is easy, off-the-shelf stuff these days

          How do you do 2 factor auth with SSH and is it more secure than a decent password requirement?

          I ask because I had this argument a few years back and realised that password-protected private keys are not really 2 factor auth: if someone gets the private key, they can brute-force the passphrase out of it client-side, since they have unlimited attempts with no possibility of lockout (the passphrase is only used to unlock the private key; it is never exchanged with the server as part of the auth process).
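The offline-guessing problem can be shown with a toy model. Here PBKDF2 stands in for the real OpenSSH key-wrapping KDF (modern OpenSSH keys actually use bcrypt); the point is only that an attacker holding the key file contacts no server, so no lockout or throttle ever applies:

```python
import hashlib
import secrets

def wrap_key(passphrase: bytes, salt: bytes) -> bytes:
    # Stand-in for the KDF that derives the key-encryption key from the passphrase.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 10_000)

# The "stolen" key file: salt plus the wrapping key derived from a weak passphrase.
salt = secrets.token_bytes(16)
stolen_wrap = wrap_key(b"dragon", salt)

# Offline attack: the server is never contacted, so nothing can lock out or throttle.
wordlist = [b"password", b"letmein", b"dragon", b"monkey"]
cracked = next((w for w in wordlist if wrap_key(w, salt) == stolen_wrap), None)
print(cracked)  # b'dragon'
```

The KDF's iteration count slows each guess down, but against a short dictionary passphrase that only buys time, not safety.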

          • by lgw ( 121541 )

            I'm not sure I understand your question. The part of SSH that the user is unaware of is therefore not bound by the need for user-memorable passwords. Plenty of issues with CAs and all, but not really what I was talking about.

            I'm talking about a device (smart card or company-issued computer) with a very strong password (randomly generated by IT, rotated by IT, etc) that the user never sees, which must be combined with the user's weak password to do anything. As long as the attacker can't test whether a gi

        • by dgatwood ( 11270 )

          Sharply limit password tries before account lockout

          No, don't. Besides the DoS problem that other folks have already mentioned (which can be solved with per-IP bans), there's also the "Your site isn't as important as you think it is" problem.

          Most folks have a handful of low-security passwords that they use for sites that they don't care about. If you limit the number of login requests to anything less than about ten, a user who hasn't logged in for a while won't remember which of those old passwords h
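A per-IP ban of the sort mentioned above is commonly built as a sliding-window counter; a rough sketch (the limit and window here are arbitrary):

```python
from collections import defaultdict, deque

class PerIPRateLimiter:
    """Allow at most `limit` failed logins per source IP within `window` seconds."""
    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.failures = defaultdict(deque)  # ip -> timestamps of recent failures

    def allowed(self, ip, now):
        q = self.failures[ip]
        while q and now - q[0] > self.window:
            q.popleft()  # drop failures that have aged out of the window
        return len(q) < self.limit

    def record_failure(self, ip, now):
        self.failures[ip].append(now)

rl = PerIPRateLimiter(limit=3, window=60.0)
for t in (0, 1, 2):
    rl.record_failure("203.0.113.5", t)
print(rl.allowed("203.0.113.5", 3))    # False: 3 failures inside the window
print(rl.allowed("203.0.113.5", 70))   # True: the old failures have aged out
```

This punishes the attacking source rather than the victim account, so a forgetful user trying ten old passwords from their own machine is barely affected.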

    • by drakaan ( 688386 )

      Seconded. The people that understand the risks generally don't represent a problem, but the people that don't understand them often also don't benefit from an explanation in a way that would change their behavior. Computers are not magic, but many people believe that they are. They also believe that antivirus software catches every single bad thing before it happens.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        It's not that. Most people know that data breaches happen, like the Target one that was all over the news a bit ago.

        The problem is that the security advocates make (seemingly) random behavioral demands that awkwardly often do not actually enhance security if followed. (I'm thinking of the entropy-neutral "strong password" dogmas)

        When you make a system change that affects other employees, let them know why. When you propose a policy change for security purposes, defend it in front of a crowd of those affe
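The "entropy-neutral" complaint is easy to quantify: composition rules add far less search space than plain length does, assuming uniformly random choice (real human choices are much worse than uniform, which only strengthens the point):

```python
import math

def bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random string over the given alphabet."""
    return length * math.log2(alphabet_size)

print(round(bits(26, 8), 1))   # 37.6 -- 8 lowercase letters
print(round(bits(94, 8), 1))   # 52.4 -- 8 chars drawn from all printable ASCII
print(round(bits(26, 12), 1))  # 56.4 -- 12 lowercase letters beat the complex policy
```

Four extra lowercase characters buy more than forcing symbols and digits into an 8-character password, and are far easier to remember.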

      • The people that understand the risks generally don't represent a problem, but the people that don't understand them often also don't benefit from an explanation in a way that would change their behavior.

        And in the corporate world there is the problem of status. People higher on the hierarchy do not like being told that they cannot do something by people lower on the hierarchy.

        And if something goes wrong then it is YOUR fault because "security" was YOUR responsibility.

        Computers are not magic, but many people

        • by dbIII ( 701233 )
          It's really a wonder (and a result of corporate privateering) that a quick-and-nasty CAD program such as AutoCAD, with all its faults over the years, became the default. These days I can only run the version that people like in a WinXP virtual machine or on Linux via WINE.
    • by jovius ( 974690 )

      Exactly. What helps is a step by step process which doesn't allow any missteps, and which guides on the way. Encryption is perceived as sorcery; something summoned by the high priests. Even a shortcut key combination and a password is too much. Strong passwords are hideous monsters from the netherworld anyway. The concepts are too complicated. They need to be hidden away or in some way built in. Maybe a key analogy would work, something like the final key [cyberstalker.dk] or similar setup.

      Anyway, the process should function

    • by whit3 ( 318913 )

      Truly, it is foolish to think millions of 'users' can be handed the security problem and advised to take action individually.

      We should all cringe in horror when we hear that millions of nontechnical users are being encouraged to 'take the problem seriously'. It's like asking all the residents of an apartment building to safety-check the steam boiler (probably only one or two will want to tighten the relief valve spring).

      There have been attempts to 'take the problem seriously' with draconian legal sentences: th

    • Training. Mandatory security training every six months, presented well, covering all aspects of security.

      Accountability. If a user does a stupid thing, make them personally liable for it. Warnings and firings work well.

      Usability. My workplace offers 5 free licenses of a well known antivirus/firewall package for every employee for home use. That extends the circle of safety one more ring.

      Security. Lock it down. Lock it down. Lock it down. What are the minimum rights that should be given for a user to do the

  • by Krishnoid ( 984597 ) on Tuesday April 22, 2014 @02:28PM (#46816977) Journal

    Perhaps we could take the lead from government departments already tasked with maintaining security. Hold on, let me google this ... I'm finding 'Transportation Security Administration' and 'National Security Agency'. That should be a good start.

  • by Anonymous Coward

    In my experience, a company with high employee morale has people who will tend to listen and follow security procedures, even when it might be time consuming. Even small things like stopping someone who slips past a door without badging in, or asking for ID from an unfamiliar person in the building.

    With poor morale, people won't bother much with security. I've seen companies try to save money by offshoring... then lose a lot more due to breaches than they would have spent by keeping ex

    • by bhcompy ( 1877290 ) on Tuesday April 22, 2014 @03:23PM (#46817511)
      Time consuming = won't do it. I've got enough things to worry about with all the bullshit administrative tasks I have to do to accomplish my non-administrative job. Give me security that doesn't force me to do more work, like encrypting my drive, single badge identification (no separate key fobs for doors I should have access to anyway), automatically encrypting my attachments, forcing me to change my password every 30 days, forcing me to have different passwords for different resources because password requirements are different (some requiring special characters, some not allowing special characters), forcing me to change my passwords for different resources at different intervals, etc.
    • Offer a bonus and recognition to any employee whose computer doesn't get hacked by the hired pen tester. Publish tips on how to avoid being hacked. Compliance rates will soar. Also, knowing they are being targeted by an actual human translates an abstract notion of why security practice is important into something concrete.
      • by dbIII ( 701233 )

        doesn't get hacked by the hired pen tester

        Pen testers are often reformed script kiddies without enough understanding to comprehend how the networks they are attempting to get into actually work. They fail people on criteria such as having ssh on the "correct" port. The way to do things properly is to be able to see what is going on from both the inside and the outside and examine any holes that become apparent. If they are not given full access from the inside, how are they going to find any problems that have nothing b

        • Not to mention that most pen tests stop the very second even a single vulnerability is found. Some tester might drop a bunch of flash drives in the parking lot, wait for an employee to take one inside, and then conclude that they've penetrated the building and that the test is finished. They never find the fact that you could clone someone's badge from 50 feet away, or that the network ports in the public lobby aren't VLANed separately from the network ports in the high-security areas, or...
          • Not to mention that most pen tests stop the very second even a single vulnerability is found. Some tester might drop a bunch of flash drives in the parking lot, wait for an employee to take one inside, and then conclude that they've penetrated the building and that the test is finished. They never find the fact that you could clone someone's badge from 50 feet away, or that the network ports in the public lobby aren't VLANed separately from the network ports in the high-security areas, or...

            Have you ever been through a real pen test?

            I have (twice, with two different security companies several years apart, on the same web application) and they certainly did not stop after the first thing they found. They kept trying and trying and provided a report detailing every single issue they found, coded on a 1-to-5 scale based on importance. The reports both came to pages and pages for probably fewer than 20 issues, fewer than 5 critical. Obviously in both cases they only sent us the reports in encrypted fo

            • by dbIII ( 701233 )
              Yes, I've written nice long technical reports too - but if they can't be let in, given root, and allowed to take a proper look around on systems for stuff that's not on their checklist, then they are not going to be able to be sure that they have found everything. A decent security audit and people playing Mission Impossible games are two different things if all they do is the games. Especially if the games mean that they mark things being on the correct ports as a fatal flaw (e.g. a joke of a pen testing out
  • Strong passwords are useless - well, they're useful only against a brute-force attack, and that's not the big threat anymore. A 64-character password is worth nothing against a phishing attack, and is worse than nothing if you have to write it down.

    Maybe the cure is to have the incoming mail server destroy all clickable links (or point them at an internal "you will need to navigate to that URL manually" warning page), and simply delete anything executable.
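A crude sketch of the link-destroying idea: rewrite the scheme of every URL in an inbound message body so mail clients no longer render it as clickable (the `hxxp` spelling is a common defanging convention; a real gateway would operate on MIME parts, not plain strings):

```python
import re

URL_RE = re.compile(r"https?://\S+")

def defang(body: str) -> str:
    # Break the scheme so the URL is no longer clickable; the reader must
    # retype it deliberately if they really want to visit it.
    return URL_RE.sub(lambda m: m.group(0).replace("http", "hxxp", 1), body)

print(defang("Please verify your account at http://evil.example/login now."))
# Please verify your account at hxxp://evil.example/login now.
```

The one-keystroke inconvenience is the point: it converts a reflexive click into a deliberate act.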

    • by jythie ( 914043 )
      After that you need to cure customer support too since that is a common social engineering target. In fact you might have to wipe out tech support in general...
    • by mlts ( 1038732 )

      I've wondered about wider adoption of CAC-like cards for logging in, where the card reader (or, even better, access tokens that work with a USB port) is standard on all new computers. This way, a host has a list of public keys for authorized users rather than sensitive passwords (even if stored as salted hashes). The only way malware could work would be to generate bogus signatures/decryptions with the user's access token, and that is a lot more intrusive than just slurping a password typed in.

      Of course, this i

  • by Anonymous Coward

    While it may seem draconian, the best way I've found is to start from the ground up with recurring training. Make the training mandatory, but unobstructive, and ensure you get people to sign that they understand the rules. You'd be surprised just how much of a difference you get when you have a piece of paper with their signature on it; there just isn't the same value in an emailed "ok, I got it".

    There is a delicate balance between security and convenience, so you need to make sure that whatev

    • >Make the training mandatory, but unobstructive

      That's not possible. If it's mandatory then it's obstructing something, period.
  • by blue trane ( 110704 ) on Tuesday April 22, 2014 @02:36PM (#46817067) Homepage Journal

    How can we create a culture where there is no incentive to hack or steal?

    • You are right, this is the better question. Why do we have a world where a few pieces of information that are effectively public have any sort of value? I have to tell my address, phone number, SSN, and so on to every bank, doctor, potential employer, landlord, and so on. Yet we continue to delude ourselves that somehow the information is going to remain secret. Well, 30+ years of "the bad guys are winning" shows that keeping (essentially) public information secret just isn't going to happen.

      Look at it f
      • I don't think the banks paid much price, or any. They borrow short at 0% and lend long at 10% or 18% or whatever. All they have to do is make the payments by borrowing short, and give people more credit, and they've created the money they supposedly "lost".

    • Step 1: Wipe out humanity.

      Step 2: Find an intelligent life-form that isn't tempted to hack or steal for some reason.

      Step 3: ????

      Step 4: Profit. (Probably by exploiting all those overly trusting tripedal beings.)

  • by Sight Training ( 3626165 ) on Tuesday April 22, 2014 @02:38PM (#46817089)
    This is a great question, and one that plagues businesses of all sizes. Based on our experience writing security training and consulting companies on the best ways to plug the security holes in their organizations, it comes down to three things:

    1) Spelling it out: A proactive approach to security awareness includes open lines of communication, telling employees exactly what sorts of things to look out for. One major mistake that corporations often make is assuming too much -- mainly, assuming that their employees know how to identify malicious situations over the phone or through email. Instead, spell out the situations that may trip them up, either through policies or training.

    2) Repeat, repeat, repeat: Even in companies that make a concerted effort to raise security awareness among workers, there is a tendency to backslide into comfortable complacency unless the danger is kept at the forefront of their minds. This doesn't have to be onerous for management or irritating to employees, since there are so many effective ways to make security awareness a part of a worker's daily experience. E-newsletters, security briefs, and clever, eye-catching security awareness campaigns are a few ideas.

    3) Create a culture of teamwork: Often, corporate environments in large companies use impersonal policies to "teach," hoping to generate desirable behaviors with a "Don't think, just do" mentality. This approach makes employees feel like a tiny cog in a huge machine, a piece not worthy of more than minimal information. Smart employers give employees more credit. An attitude of inclusion should permeate every policy, every training campaign, and every common area. A real "good guys vs. bad guys" attitude makes everyone feel like part of a team that is working toward the common goal of security.
  • Good luck with that 40-year-old secretary who still holds old behavior at heart. Computers have good memories; people have crappy memories. That's why they tend to use words or something similar to what they know instead of a gibberish random password generator for their security. I've seen people in high places who hold sensitive info that could easily get a person killed if leaked, and they still used weak passwords... I've tried to tell them everything I can to use good behavior and it's a diffi
  • Then the people who don't deeply care about using computers properly won't use them except for boring business stuff, and then we can replace Windows with z/OS or OpenVMS and all those PCs with terminals.

  • Security is a pain. It slows you down. It gets in the way. It makes you jump through hoops and it is inconvenient. If I had to spend as much time unlocking my front door as I do logging into some websites, ones that don't even contain any information I value, I'd probably leave it open a lot more often.

    So until the software (or hardware) necessary to make systems more secure improves a great deal, people won't use it. I can't say what the benchmark is for user tolerance / acceptance, but if I had to guess I'

    • by mlts ( 1038732 )

      Sometimes good security isn't a pain. Had client certificates been used more often, or just having a website ask the user to PGP/gpg sign a blob of text for logging in, passwords would be less critical.

      With a client cert, almost all authentication troubles go away. However, client certs are troublesome for users to manage (they have to remember the key's password as well as copy the private key to every device in advance), so it comes at a cost, although if people got as used to it as they are used to the like

  • People can't be bothered to take moderate, reasonable precautions with their own LIFE-PRESERVING behaviors, you think that they're going to be motivated to change their behaviors because some tech has to fart around with their laptop for 3 days re-imaging it?

    Seriously, people need to stop assuming that humans aren't just hairless primates with a knack for tools and language.

  • by DaveV1.0 ( 203135 ) on Tuesday April 22, 2014 @02:47PM (#46817169) Journal
    People still drink and drive, smoke, do drugs, and have unsafe sex despite years and sometimes decades of having admonitions against all of those things embedded in our culture. Why? Because people still "think those types of things happen to other, less careful people." It is human nature, hubris, and magical thinking all rolled into one.
    • Re:You don't. (Score:5, Insightful)

      by Bonker ( 243350 ) on Tuesday April 22, 2014 @04:52PM (#46818241)

      An important caveat to this line of thought is that GOOD education DOES work to prevent risk behaviors.

      A blanket 'Just Say No' campaign like the one run by Nancy Reagan in the 1980s did more harm than good because, when a lot of the kids who had it force-fed to them for a decade grew up and discovered that marijuana didn't immediately kill you or turn you into a junkie, many of them threw out the entirety of 'Drugs are bad, m'kay?' and went on their merry way destroying their bodies with harsher and harsher drugs.

      However, kids who had it explained to them what drugs really do to a person's body, and which drugs are more addictive and which less, were, and are, less likely to actually do those drugs.

      The same is true of sex education. It's been shown with frequently tragic consequences that 'Abstinence Only' education usually makes the teen pregnancy and STD situation worse in places where it's taught. However, more complete sex education that explains pregnancy, STDs, and all the other associated risks that go along with sex causes a notable decline in teen pregnancy and STDs, and an actual increase in the average age at which teens start having sex.

      I have found the same line of logic to be true with IT security. If you make a point of explaining the whys and wherefores, perhaps going so far as to make an interesting, engaging education program, the people who are your 'risk vectors' decrease, as do the number of security incidents you have to deal with.

      No, you never can completely eliminate the problem. However, by offering education that is interesting, complete, and that doesn't treat the recipient as an idiot, you can dramatically reduce the problem.

  • You can't. (Score:5, Insightful)

    by bravecanadian ( 638315 ) on Tuesday April 22, 2014 @02:47PM (#46817175)

    As long as there is incentive to skip security and get things done.

    ie. let the nerds in IT worry about security - I'll worry about selling/making/doing and getting my bonus.

    So technically I guess you could do something to foster this sort of secure behaviour but it won't happen because the powers that be don't give a shit.

    So yeah, you can't.

  • Despite the high news coverage that large breaches receive, and despite tales told by their friends about losing their laptops for a few days while a malware infection is cleared up, employees generally believe they are immune to security risks. They think those types of things happen to other, less careful people.

    Untrained users are not the cause of large breaches. Malware infections happen to even the most careful users. In other words, training users and trying to change your company's culture won't make a significant difference.

    Encrypt the laptop before a user can touch it. Make sure a decent virus scanner is running (and keep your fingers crossed). Get well trained sysadmins who see their job as keeping your network and servers as secure as reasonably possible.

    • by mlts ( 1038732 ) on Tuesday April 22, 2014 @03:21PM (#46817495)

      If I had to give five general things a company could do, they would be similar to what the parent stated:

      1: First and foremost... separate and isolate. Finance should be isolated from everything else, with a Citrix or TS server so people working there can browse the web with the browsing well separated from critical assets. If a breach does occur, it will be limited in scope.

      2: Laptop encryption is trivial. BitLocker [1] and the AD infrastructure for recovery is a must-have. Depending on the level of paranoia, AD policy can be set to auto-encrypt USB drives, so a dropped thumbdrive doesn't mean a massive data breach. In fact, it would be wise to have BitLocker on all desktops as well, so repurposing of the machines is easy -- just a simple format or clean command in diskpart.exe.

      3: Backups. Often overlooked, but a humble tape drive can mean the difference between a quick restore versus paying some guy out of Russia a lot of Bitcoins. Disk arrays != backup, because one command (blkdiscard, for example) can render all backed-up data gone in seconds.

      4: A clear chain of command. This way, someone can't hack a VoIP connection, browbeat some lackey to get some critical access or knowledge about internal networking.

      5: Active pen-testing from a guy running a script on boxes to actual blackhats using everything at their disposal including sending people on site in coveralls and fake badges to get in.

      [1]: Yes, TrueCrypt is a good utility, but this is the enterprise where recoverability is as important as security.

      • by Xaedalus ( 1192463 ) on Tuesday April 22, 2014 @04:09PM (#46817929)
        I work in Tape, and I can tell you that I've run into sysadmins and CTOs who have overlooked #3 (particularly with their belief in cheap disk arrays) to their sorrow. Tape is boring old tech, but it's damn near bulletproof at saving the bacon every damn time something goes wrong and a restore needs to occur. Ethernet with NAS boxes my ass, you need a tape library in there somewhere to completely ensure that your company doesn't go down permanently after the inevitable rogue wave of human stupidity hits your network.
      • by dbIII ( 701233 )

        Active pen-testing from a guy running a script on boxes to actual blackhats using everything at their disposal including sending people on site in coveralls and fake badges to get in

        Pure theatre.
        Instead of embarrassing the helpful who will still be just as helpful to the next guy with a fake badge it's better to have a proper audit with full access instead of port scanning games or whatever.
        That obscure and insecure thing listening to the net that can't be seen from the outside by a pet script kiddie becaus

  • Same way as every other behavior: reward desired behavior and/or punish undesired behavior.

    • by bill_mcgonigle ( 4333 ) * on Tuesday April 22, 2014 @03:36PM (#46817641) Homepage Journal

      Or more succinctly: incentives matter. What incentive does an employee have to keep data secret? Will he be demoted in rank and lose pay if he does something stupid?

      What incentives do companies have to maintain a secure infrastructure? Will their insurance policy hold them liable if they do not?

      I'm just in the middle of polishing up a puppet module to deploy a bunch of new certs on my infrastructure. My incentive is that my reputation looks pretty bad if I advise clients to be secure but my own infrastructure is not up to snuff. That's really an incentive to avoid lost opportunities, I suppose.

      Google is talking about scoring up pages that are secure. Another very wise incentive.

      Let's keep this ball rolling: what other incentives can we offer or explain?

  • People are used to guarding against security threats, but are always defending against old ones. By the time you get everyone trained in defending against one threat, the attackers have already moved on to a new one. The only way to defend yourself is to have a small group of people who can anticipate or react to the ever-changing threat and have them defend everyone else. Unless people are primarily interested in security, they will never focus on preventing new attack avenues.
  • by Animats ( 122034 ) on Tuesday April 22, 2014 @02:59PM (#46817299) Homepage

    Users are not the problem any more. Crap code is the problem.

    C is the source of buffer overflows. Microsoft is the source of autorun problems, or "if it's executable, run it". PHP is the source of most SQL injection problems. Vendor-installed backdoors are the source of most router vulnerabilities. None of these are end-user problems.
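SQL injection, at least, has a language-independent code-level fix: parameterized queries instead of string concatenation. A sketch in Python with sqlite3 (the same idea applies to PHP's prepared statements):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "x' OR '1'='1"

# Vulnerable: attacker input is spliced into the SQL text itself.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % attacker_input
).fetchall()

# Safe: the driver passes the value out-of-band, so quotes are just data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # [('hunter2',)] -- injection succeeded
print(safe)        # [] -- no user is literally named "x' OR '1'='1"
```

The fix costs nothing at runtime and removes the entire bug class, which is exactly the "fix the code, not the user" point being made here.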

  • by Tony Isaac ( 1301187 ) on Tuesday April 22, 2014 @03:17PM (#46817459) Homepage

    In my 25 years working in IT, none of my passwords, weak or strong, have ever been hacked. Even my teenage sons, who have no idea about password strength, or site security, have never been hacked. And I doubt YOU can point to a single instance of someone hacking YOUR password.

    Does password hacking happen? Yes, of course. Should we be careful? Yes. But there are much greater dangers, such as malware (which you no doubt HAVE had a personal brush with).

    So if we need to put up with annoying security measures, let's at least focus on the more relevant dangers, rather than forcing us all to write down our passwords and stick them to the bottom of our keyboards!

  • I've recently learned a new definition of security, one that's a little bit different from what I'd thought about before.

    A secure system is a system that continues to work as expected, even in the face of unexpected events.

    Users like a system that works the way they expect. They don't like crashes, endless popups, and systems slowed to a halt by malware.
    So teach them the benefits they can expect. You can have a fast, trouble-free computer by doing x, y, and z. Clicking on "virus alerts" makes your comput

    • by Imagix ( 695350 )
      That's not security (well, not the security that the rest of this thread is posting about). That's resiliency.
      • That definition absolutely includes what this thread is about. TFA talked mostly about malicious email attachments. When you open one, things stop working right. The discussion has talked about poor passwords. When your poorly chosen password is cracked, things stop working right. Using a good passphrase helps keep things working the way you expect them to work.

  • For a company of decent size, having some sort of mandatory training may be in the realm of possibility, but good luck with all of the small business (20 employees) out there. My company provides IT services to these types of businesses, mostly medical practices. There is no way to do anything other than individual, one-on-one training, and then only after something has already gone wrong. The owners don't want to pay for our time, and the staff are simply too damn busy to deal with it. This could just
  • by Charliemopps ( 1157495 ) on Tuesday April 22, 2014 @03:22PM (#46817507)

    A number of years ago I worked for a large (global) company that wanted to make their new ticketing system secure. So they implemented a new password standard for the system: a 35-character password that reset every 30 days and required 5 non-alphanumeric characters. The result? Within a week, everyone in my department had their password written on a post-it note stuck to their monitor. The biggest problem with network security is usually the network security department.

    Use common-sense two-factor authentication that's not too difficult for your users to comply with, and they WILL comply. Make it overly complex and hard for the average non-tech person to understand, and your own people will undermine all of your security efforts. Publicly fire any employee who violates your simple rules, and it will quickly become apparent that adhering to those easy-to-follow rules is worth the effort.
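    The "common-sense two-factor" point can be made concrete: the time-based one-time passwords (TOTP) generated by most authenticator apps are a short, standard algorithm (RFC 6238). A stdlib-only Python sketch, checked against the RFC's published SHA-1 test vector (the secret and timestamp below come from the RFC, not from any real system):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 -> "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # prints 94287082
```

    The server side only has to store one shared secret per user and compare codes within a small window of time steps; there is nothing for the user to memorize or write on a post-it.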

  • Unless people have some training or background, they will proceed blindly along until something actually makes them pay attention.

    Start with such basics in high school, or even earlier. Explain (and test their understanding of) things like strong vs. weak passwords, simple security procedures, e-mail safety tips, and good file-management practices. Even basics like how to take care of a keyboard and/or pointing device would go fairly well in such a course.

    Oh. Almost forgot: MAKE IT MANDATORY! Nobody
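    One way to make the strong-vs.-weak password lesson concrete in such a course is a naive entropy estimate. This charset-size heuristic is illustrative only; it badly overestimates dictionary words, and real strength meters (zxcvbn, for example) also check wordlists and keyboard patterns:

```python
import math
import string

def estimate_entropy_bits(password):
    """Naive estimate: bits = length * log2(size of smallest charset pool used).
    Overestimates for dictionary words; for teaching the concept only."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)  # 32 printable symbols
    if pool == 0:
        return 0.0
    return len(password) * math.log2(pool)

print(round(estimate_entropy_bits("password"), 1))     # 8 * log2(26)  -> 37.6
print(round(estimate_entropy_bits("Tr0ub4dor&3"), 1))  # 11 * log2(94) -> 72.1
```

    Even this crude measure is enough to show students why length helps more than sprinkling in a single digit.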

    • by Anonymous Coward

      MAKE IT MANDATORY! Nobody gets to use the school computers/labs (even Office Staff) if they don't show proficiency.

      I agree. They should apply these same rules to all parts of life. Did you wash your hands when you came into the restaurant? No? Slap the food out of their hands and throw them out!

      Did you buckle your seatbelt in the taxi? No? Throw them off into the gutter!

      Did your dog just shit on the grass where kids play? Looks like poochie is getting Ol' Yeller'd!

      It's only when every aspect of our lives is subject to draconian absolutism imposed by every other person's personal bugaboos that we can really be safe from

  • It's not known exactly how to instill a culture of paranoia, but one idea is to subject employees to traumatic experiences involving police and/or gangsters.

  • Not passwords (Score:5, Insightful)

    by Todd Knarr ( 15451 ) on Tuesday April 22, 2014 @03:46PM (#46817719) Homepage

    First off, stop worrying about passwords. Most malware doesn't get into systems by way of an attacker cracking passwords. It comes in through ways that bypass passwords entirely, either by getting a user to run it or by getting the user to give the attacker their password.

    Second, look at your management culture. Do you expect your employees to routinely click on links in e-mail? Look for things like HR or IT sending e-mails that instruct people to follow links they've provided, or "secure" or "encrypted" e-mail systems that store the messages on Web servers and expect your employees to use a link to get at the contents of the "secure" or "encrypted" message. If you find such things, realize that you're training your employees to be insecure, because you're training them to expect to do as a normal part of their job exactly what the malware will need them to do to infect their systems. Start by removing such things from your management culture. If you need encrypted e-mail, do it within your own e-mail system so that users never need to follow links to read encrypted or secured e-mail. Outlook and Exchange offer this directly. If you need to give employees links to internal web applications or documents, create a Web page or site with a directory of links and train your employees to use a bookmark in their browser to access that site and navigate to the appropriate section where you'll put all the new links they need.

    Third, look at your IT policies. Not the ones you wrote, the ones you expect employees to follow. If your policy manuals say "No user-installed software." but your actual policies require users to get and install software from outside, you have a problem. It can be as innocuous as sending zipped archives while not having a program to handle them pre-installed on user computers. It can be as pervasive as not having your IT able to support the myriad of tools your developers need, most of which will by definition not be the kind of thing most desktops would need. But every time you have a situation where what you expect of your employees requires software you didn't pre-install on their systems and where it'd negatively impact an employee's job performance and more importantly their performance evaluations if they refused to install that needed software themselves, you're creating security problems. Sit down and decide how you're going to address this, then address it. It can be as simple as a page of "approved" links to sites you know are safe and where employees can get all that useful software that gets used every day.

    Fourth, evaluate your software update policies and IT budget and staffing. If your IT department doesn't have the staff or the budget to monitor the vendors of all the software in use in your organization, test changes and push updates out to your desktops and servers, you need to re-evaluate your IT budget and staffing levels. You need to get most updates installed within 30 days of their release, and you need to be able to get major critical security updates analyzed, tested and deployed within 24 hours. Your IT staff can't do that if security updates are a side item they're expected to handle in between doing everything else. If management wants security to be a priority, they need to back up their words with the resources and budget departments need to make it a priority.

    Yes, a lot of that comes back to management. Attitudes towards security come from the top. More importantly, they come from what those at the top do and expect rather than from what they say.

    • Fourth, evaluate your software update policies and IT budget and staffing.

      LOLS! What is this "IT budget" of which you speak? Staffing?!

      I worked for a series of startups, and at the last place the CEO was like "Wtf am I paying $10k a year for with this IT management company?" Hilarity ensued. :-|

  • Well, you have to start somewhere, right?
  • The first question is not actually how you can create such a culture, but whether it's actually a good thing in the first place. You seriously need to evaluate this. One of the primary means of being secure is not trusting others. But trusting others is an incredibly useful tool to get things done, and it may be worth taking the security hit. Stand on a crowded railway platform, and you're trusting so many people, each of whom could push you off and kill you so easily, without even thinking about it. Withou

  • 1. It's annoying.
    2. Most people don't think like that.

    People are not built for that kind of caution.

  • I'll probably be modded down for this but the most effective way is to pwn the users to show them that they are merely bitches that any moderately skilled geek can defraud completely. Since they only learn from being fucked over, being fucked over is the only way they learn - otherwise you are just considered to be paranoid.

    Repeat this for every user you meet and add the strange looks you get from them when you do things a secure way.

  • Requiring users to change their password often, and requiring long and "strong" passwords that are difficult to memorize, is not the answer to better security. This results in people having to write down their password someplace convenient for them (and for any nefarious people around). This is well demonstrated by the movie "Ferris Bueller's Day Off," where the main character finds the school's passwords taped inside a desk and alters his and his friends' grades. It also trains users, and the help desk, that they
  • As bender would put it "kill all humans"

    because if any of us remain the likelihood of us being careless and stupid is guaranteed

  • The first step would be to reduce the number of separate passwords that have to be used. That means minimizing/eliminating the use of outside vendors that interact with your users via the web. If there's some vital human resource service that is needed (testing, training, employee reviews, whatever), bring it in house rather than contracting it out to an outside vendor. Because every single outside vendor you use means another set of credentials to be maintained.

    The second step would be to eliminate pass

  • The first problem is the security-through-stupidity you see all over the place: requiring users to change their password every x months or days. It has been found that the maximum number of password changes per year that users can handle without writing the password down is two. That is the maximum. It is still recommended to have people change their password, but the current recommendation is, if you do, to require it once a year. I think Microsoft sets this to 3 months by default on their server products.

    Low maximum password
    • You're missing something here. You mention affording to store 50-character passwords.

      Your password should never be stored. That's an insecure practice right there. The system should have some sort of hash or other transform to put your password through and then store that (we'll leave the details to the security people).

      The biggest worry I have with short password length limits is that it suggests that some incompetent has designed a system that stores my password in the clear.
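      The "hash, don't store" point above looks roughly like this in practice. A sketch using stdlib PBKDF2; the iteration count is illustrative, and current guidance increasingly favors memory-hard functions like Argon2, which is not in the Python standard library:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_password(password, salt=None, iterations=ITERATIONS):
    """Derive and store only a salted hash; the password itself is never kept."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored, iterations=ITERATIONS):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("Tr0ub4dor&3", salt, stored))                   # False
```

      Note that nothing here constrains the password's length, which is exactly the parent's point: a short maximum length is a red flag that the system is storing something it shouldn't.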

  • Passwords need only be as secure as the effective aggregate retry policy of whatever is accepting credential inputs.

    Half the problem is all these 'hashes' stored in the clear on disk, where administrators incorrectly assume users are responsible for selecting a password big enough to make up for the lack of effective protections. This of course is a complete failure: it has never worked, and it grows more laughably absurd over time as computing power per unit cost increases.

    Next we have security standards activ
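    The parent's claim about retry policy can be sketched: if consecutive failures per account trigger exponentially growing delays, the effective online guess rate is capped regardless of how weak the password is. A toy in-memory sketch (a real deployment would persist this state and typically also throttle per source address):

```python
class RetryLimiter:
    """Impose exponential per-account delays after failed logins,
    bounding the rate at which passwords can be guessed online."""

    def __init__(self, base_delay=1.0, cap=3600.0):
        self.failures = {}        # account -> (consecutive failures, last attempt time)
        self.base_delay = base_delay
        self.cap = cap

    def allowed(self, account, now):
        count, last = self.failures.get(account, (0, 0.0))
        delay = min(self.base_delay * (2 ** count), self.cap) if count else 0.0
        return now - last >= delay

    def record(self, account, success, now):
        if success:
            self.failures.pop(account, None)    # reset on successful login
        else:
            count, _ = self.failures.get(account, (0, 0.0))
            self.failures[account] = (count + 1, now)

limiter = RetryLimiter()
limiter.record("alice", success=False, now=0.0)
limiter.record("alice", success=False, now=1.0)
print(limiter.allowed("alice", now=2.0))  # False: 2 failures -> 4s delay since t=1.0
print(limiter.allowed("alice", now=5.0))  # True: the 4s delay has elapsed
```

    With a cap of an hour, even a 4-digit PIN survives online guessing for years, which is why the ATM example upthread works.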
