Creating a Security Test Environment?

Enderandrew writes "Our IT department has been tasked with creating a list of authorized software, and only allowing software to be added to such a list after it has been thoroughly tested. In theory that sounds like a great idea — but how should we test apps to make sure they are secure? We have tools to scan internal websites, and we use MBSA for our Windows servers. However, I'm turning to Slashdot to ask what are the best methods for creating a test environment where I can analyze apps for security vulnerabilities. We're a multi-platform shop, but my main concern is with Windows apps."
  • by seanonymous ( 964897 ) on Friday August 01, 2008 @11:47AM (#24434709)
    You can't be certain unless you have the source code, so tell the folks who are requiring this list to be created that they can say goodbye to Word and Excel.
    • by Sir_Real ( 179104 ) on Friday August 01, 2008 @12:03PM (#24435007)

      You can't even be sure when you have the source code. [bell-labs.com]

      Tell the folks who want this list that you must trust someone at some point and that will always leave you vulnerable.

      • Ok, sure you can't even be sure unless you have the microcode running your hardware. But for 99.999% of people, the source is good enough.
        • Ok, sure you can't even be sure unless you have the microcode running your hardware. But for 99.999% of people, the source is good enough.

          Even then you can't be sure, because the hole could be built into the hardware. The best you can really do is get a stack of identical processors and chip sets and destructively slice all but one of them up and run them through an electron microscope to verify the circuits - but even then you'd only have a statistical guarantee, and you still might miss some clever analogue...

          • by Fred_A ( 10934 )

            Ok, sure you can't even be sure unless you have the microcode running your hardware. But for 99.999% of people, the source is good enough.

            Even then you can't be sure, because the hole could be built into the hardware. The best you can really do is get a stack of identical processors and chip sets and destructively slice all but one of them up and run them through an electron microscope to verify the circuits -

            Of course that's assuming you have the code for the microscope controller... After all it's only showing stuff on a screen... Who knows if it's real?

      • by SgtChaireBourne ( 457691 ) on Friday August 01, 2008 @01:39PM (#24436883) Homepage

        You can't even be sure when you have the source code. [bell-labs.com]

        The point there is that you have to have the code for the whole stack, not just an isolated application. For an application to be secure, you have to be able to do a valid code audit. For the code audit to be worth anything, it has to be done all the way down to the core: compiler, libraries, utilities and operating system. So you can be sure when you have the source code, but you do have to have all of the source code.

        So without even touching on the quality issues with MS, lack of code access rules out all MS products from the system on up to user applications. "Shared" source might be fine for specific, limited, platform-specific development contexts, but is basically the same as "escrow". And "escrow" is just another name for closed source, namely, as Ken Thompson points out, insecure. Ditching MS products won't automagically make your site secure, but it is a necessary first step.

        Now there are some shortcuts one can take in Ken Thompson's example. However, as he points out, they all boil down to having the code for the whole system, not just parts. Even diverse double compiling [dwheeler.com] requires, at the end of the day, a system that has been vetted top to bottom to use as the baseline.

        Now the next step is to deal with smaller, more modular units of code. They're not only good design but also easier to manage. Again, that rules out a certain party ...

        FWIW it's interesting that the ACM recently pulled the Thompson article [acm.org]. It had been available for over a decade. One wonders how much longer the ACM will be a useful source of technical information.

    • by Thelasko ( 1196535 ) on Friday August 01, 2008 @12:10PM (#24435151) Journal
      What? A post that begins with "The only way to be sure..." and doesn't end with "nuke it from orbit"?

      You must be new here.
      • He did mention "say goodbye to Word and Excel", which _is_ a tactical nuke as far as most companies are concerned. And it is probably safest to suggest it from some far away place, like a stable orbit...

        Completely off-topic, I know, but at work I'm about to start a major project that will use OpenOffice for all its documentation needs. And it will not be a trial or something like that: we will be using it because it is the best tool for the job.

      • I say we take off and mod down the post from orbit, it's the only way to be sure.

        Well, not really, but I always wanted to post from the ISS, so it's a win-win.

    • by interiot ( 50685 ) on Friday August 01, 2008 @12:37PM (#24435675) Homepage

      And the best way to protect a computer is to remove its network and power cables. In the real world though, security isn't the exclusive goal.

      I agree that open source apps give a stronger guarantee of security, but going from "we want things to be more secure" to "we want absolute security" to "closed-source apps must necessarily be removed" seems like a stretch, even if it makes for good open source advocacy.

    • Re: (Score:3, Insightful)

      by BobMcD ( 601576 )

      You can't be certain unless you have the source code, so tell the folks who are requiring this list to be created that they can say goodbye to Word and Excel.

      The Principle of 'Never Look': When not empowered to act upon the information gathered, DO NOT GATHER THAT INFORMATION.

      Suppose you have to de-certify an app due to something your research discovers: what then?

      You may not be in a position to challenge this directive, but if you are, INSIST on the power/backing/support to enforce it. The political collateral involved in this will, at a minimum, delay it for a good long while.

    • There hasn't been quite enough information provided to help guide you down to specifics, but let's take a shot. It sounds like you're still discussing thin-client web-based applications and hardening your intranet environment. One of the questions to ask is if you're a "buy vs build" company - meaning, do you buy an application to fill a business need, or do you develop an application to fill a business need? Penetration testers can check the applications for classic vulnerabilities - buffer overflows, c...
      • by Fred_A ( 10934 )

        And also, use an app that supports paragraphs. They'll do wonders for your TPS reports (security analysts might not tell you that but it's still important).

  • by janeuner ( 815461 )
    If yes, then the product is insecure. If no, then the product is secure.
  • by Zosden ( 1303873 ) on Friday August 01, 2008 @11:48AM (#24434719)
    Unplug the network cable. It's so easy even a caveman can do it.
  • Government... (Score:4, Insightful)

    by Hyppy ( 74366 ) on Friday August 01, 2008 @11:48AM (#24434725)
    I know that for most federal government institutions, the NSA is involved in testing a product for its security. I've never heard of a place doing it in-house.

    Is your boss ex-military or something to that effect? If so, he may not understand the complexity of the task he is assigning.
    • Re: (Score:3, Informative)

      by Hyppy ( 74366 )
      Sorry for the self-reply, I just want to clarify something. I mentioned ex-military because for the most part military admins are provided software which has the "Seal of Approval," and are forbidden from installing any products which have not been "security tested."
      • Re:Government... (Score:5, Interesting)

        by Foofoobar ( 318279 ) on Friday August 01, 2008 @12:13PM (#24435201)
        My brother is a higher-up in the military and complains about this 'seal of approval' constantly. Microsoft salespeople and others will constantly send their products to get 'evaluated' and receive the seal of approval the next day, as if someone can evaluate their product in 24 hours. Whereas other products that are open source or actually supply the source code can take MONTHS!

        It's totally arbitrary and has very little to do with security.
        • Re: (Score:3, Insightful)

          by JCSoRocks ( 1142053 )
          No, the MSFT products get "approved" in 24 hours because they've been working with the NSA throughout the dev cycle adding backdoors for them. The NSA already knows where the security holes are because they put them there. :P
        • What am I bid? (Score:2, Interesting)

          My brother is a higher-up in the military and complains about this 'seal of approval' constantly. Microsoft salespeople and others will constantly send their products to get 'evaluated' and receive the seal of approval the next day, as if someone can evaluate their product in 24 hours. Whereas other products that are open source or actually supply the source code can take MONTHS!

          It's totally arbitrary and has very little to do with security.

          I also spoke with someone who does application and system security certification for a national government. Basically, companies pay, the team plays around and tries to guess what's in the black box and what it's doing in there, and then after a while rubber-stamps the approval. The tools audited that way are not ones selected by the government, for that matter. The vendors present something and then provide the money for the certification. No money == no review == no certification.

          The dude was ra...

  • If only the Internet had some kind of search engine where one could easily access the experiences of thousands of sys admins and/or developers.

    • Re: (Score:3, Insightful)

      by Hyppy ( 74366 )
      The plural of "anecdote" is not "data." Also, usability and security aren't really related. If someone complains about a product, for the most part, it will not be because there is a buffer overflow vulnerability for the 4th input field.
      • Ah, ya got me.

        I forgot that everything on the Internet is anecdotal, including this post.

        • by Hyppy ( 74366 )
          Of course, if it's security related, even an anecdote can disprove that a product is secure.
      • And the plural of datum is not proof!

        I love this one... it's like Marco Polo for nerds.
    • Yeah, that will happen about the same time that someone takes Amazon's idea of selling books and changes it to "renting" or "borrowing" books. Next thing you know, the government will get in on the game and subsidize these ideas, and us taxpayers will all have places where we can go and read books for free! Think of the chaos, think of the publishers' right to profit!

  • I swear I saw another story like this one, about VMWare being used to test botnets, and being the perfect test environment for debugging etc., etc.

    Do we really need duplicate stories just because someone changes the headline?

  • Thoroughly tested (Score:5, Insightful)

    by Ngarrang ( 1023425 ) on Friday August 01, 2008 @11:53AM (#24434821) Journal

    Just how thorough is this testing supposed to be? Some security flaws escape eyes for years (see the DNS flaw). Some flaws are obvious. But, in general, you can never be 100% certain a program is 'secure' unless you know it has passed some milspec cert.

    Optimistic as it is, I think the new policy is a bit misguided in its wording.

    • Re: (Score:3, Insightful)

      by Enderandrew ( 866215 )

      Well, I'm asking what is the best way to set up a test environment for testing. I make my best effort while explaining to my bosses that it is near impossible to declare anything void of vulnerabilities.

      All I can do is make my best effort.

  • by HaloZero ( 610207 ) <protodeka@@@gmail...com> on Friday August 01, 2008 @11:56AM (#24434863) Homepage
    Security at what level? You need to draw a line where your security is 'good enough', because some things are simply too far outside your scope.

    VMware is your best friend in this case. When dealing with client/server software, I'd install it in a VM and then nmap it to see what effect it had on the machine, with or without a firewall - just to see what sort of ports were open, to characterize the software.
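
    Not the parent's own tooling, just a minimal sketch of that before/after port check in plain Python (the VM address is a placeholder; real nmap will do this far better):

      import socket

      TARGET = "192.168.56.101"   # placeholder address of the VM under test
      PORTS = range(1, 1025)      # well-known ports; widen as needed

      def scan(host, ports, timeout=0.5):
          open_ports = []
          for port in ports:
              with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                  s.settimeout(timeout)
                  # connect_ex returns 0 if the port accepted the connection
                  if s.connect_ex((host, port)) == 0:
                      open_ports.append(port)
          return open_ports

      # run once before installing the app, once after, and diff the lists
      print("open ports:", scan(TARGET, PORTS))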

    You can also use a lot of the great tools from SysInternals to poke around a bit more in the software's workings. But using only software that is 100% security certified means you're going to have a bunch of people with blank hard disks. If you're using Windows and are paranoid to that point about security, I wouldn't look too far under the hood of that operating system if I were you.

    There is the 'Good Enough' line. The point of systems security is not necessarily to maintain a paranoid, logged level of diligence against every packet (though DPI isn't a -horrible- idea - it's ALL situational, tho ;), but instead to secure yourself, your customers, your employees, and your infrastructure against a broad swath of threats. You can't tighten the screws down on one aspect alone and proclaim being bulletproof.
    • by RNLockwood ( 224353 ) on Friday August 01, 2008 @12:19PM (#24435351) Homepage

      This was to have been implemented today in my organization, but in three stages to minimize disruption. We must conform to FDCC, dictated by DHS. I received a USB drive with some instructions and files that I can use to download and install VMWare to create a sandbox for testing. The instructions are lengthy so I've just skimmed them, but it appears that a software package is installed that, when run, establishes the baseline security of the virtual machine. Then the software to be tested is run in the virtual machine, and if the baseline changes, it fails.

      I think that DHS or some Federal Agencies have lists of software that is FDCC compliant and this should ease the burden of testing if the lists can be easily accessed. I'll probably test one of my applications this weekend.

      At any rate, search on FDCC for more information.

    • by nine-times ( 778537 ) <nine.times@gmail.com> on Friday August 01, 2008 @12:31PM (#24435571) Homepage

      I think most answers here (at least the reasonable ones) are going to have more questions than answers. Questions like:

      • Secure against what?
      • How secure do you need it to be?
      • How thorough do your tests need to be?

      Part of the problem is that there isn't any such thing as completely thorough testing that makes sure everything is completely secure from any kind of attack whatsoever. It doesn't exist. So when you're talking about testing security, you have to know what you're trying to achieve.

      Are you just trying to meet some kind of regulation placed on your company from an outside body? Because that outside body should be providing you with information, then, that will tell you how to meet their standards. Are you just trying to satisfy someone's paranoia? Then you just need to come up with a bullshit standard that satisfies that person's paranoia.

      For most situations, it's enough to protect against generic hacking over the network, worms, etc. In those cases, as far as the software installed, it's probably enough to buy from major vendors or use respectable FOSS distributors, and keep things patched. If you're using Debian/Redhat/SuSE, then you're probably good. Even Microsoft's stuff (despite its bad reputation on Slashdot), if kept up-to-date with patches, is fine. A decent firewall goes a long way, and you can look into IDS stuff.

      Another valid concern is employees, so make sure they're locked down as much as possible, not given access to resources they don't need. Ideally you'll audit what you can, keep backups going back so if some data goes missing (or is mysteriously changed), you can recover it.

      Or you might just be asking about making sure a given piece of software isn't malware? So you do some research online and see if anyone is complaining about it. Maybe you install it, scan for opened ports, and look at the network traffic coming off it for anything anomalous.

      But if you're talking about auditing the security of Firefox or MS Office for bugs that allow for privilege escalation or something, and you're unwilling to rely on the analysis of other experts.... well, good luck with that.

    • Just because you haven't discovered a vulnerability yet doesn't mean it doesn't exist. I can't really prove that software is secure. Really, I'm testing more for software that I can discover to be a liability.

      Again, nothing is perfect here. I just need to come up with the best solution within my means.

  • Nessus (Score:5, Informative)

    by Gazzonyx ( 982402 ) <scott.lovenberg@gm a i l.com> on Friday August 01, 2008 @11:58AM (#24434921)
    IIRC, Nessus does network security scans that check for holes in software on the network (missing patches, etc.). You could do a pen test using a live CD like Arudius, INSERT, PHLAK, etc. Check out the security live CDs at Frozentech's Live CD site. Many have the Nessus package on board. [livecdlist.com]
  • I would say, have a lab of machines.

    * install the new apps on your standard images, and see what security or config changes they require.
    * also, look at how the app installs and how it appears to work.
    * maybe do some network sniffs and see what the app is doing (rough sketch below).
    * also, check how the server-side pieces work and how they talk to other servers and to the clients.

    etc.
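
    A rough sketch of the sniffing bullet above, assuming the third-party scapy library (pip install scapy; capturing needs root/admin rights) and a placeholder lab address:

      from scapy.all import sniff, wrpcap

      APP_HOST = "192.168.1.50"   # placeholder: machine running the new app

      # log a one-line summary of 200 packets to/from the test machine,
      # so you can see where the app phones home
      pkts = sniff(filter="host " + APP_HOST,
                   prn=lambda p: print(p.summary()), count=200)

      wrpcap("app_under_test.pcap", pkts)   # keep the capture for Wireshark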

    -Nex6

  • Veracode (Score:4, Informative)

    by spacerog ( 692065 ) <spacerog AT spacerogue DOT net> on Friday August 01, 2008 @12:02PM (#24435005) Homepage Journal
    You can buy a service to test your apps for you.

    Veracode [veracode.com]

    Based on its breakthrough binary analysis technology, Veracode offers the world's first subscription-based security testing service that provides organizations with the only automated and independent assessment of security risks in applications, whether those applications are built in house, purchased as commercial off-the-shelf software or developed offshore.

    Disclaimer: I know the founders but I am not involved in the company at all.

    - SR

    • "Based on its breakthrough binary analysis technology"

      Why don't they put this 'technology' in Windows? Or how about designing a compiler that doesn't allow insecure code?
    • Re: (Score:3, Interesting)

      by Polarism ( 736984 )
      We had a couple guys from their management team on our SecuraBit podcast. http://www.securabit.com/ [securabit.com]. Very bright folks, and if they weren't up in MA I might throw my resume their way.
  • Why reinvent the wheel when you can just use the Common Vulnerabilities & Exposures (CVE [mitre.org]) list? This list provides common names for publicly known information security vulnerabilities. Any software with entries on the CVE list gets removed from your list of approved software. People already did the work, why not leverage it?
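
    A quick sketch of that cross-check, assuming you've downloaded MITRE's CVE dump as CSV (the column layout below is an assumption, so verify it against the header row of the file you actually fetch; the product names are examples). Substring matching is crude and will throw false positives, but it shows the "leverage existing work" idea:

      import csv

      APPROVED = ["WinZip", "RealPlayer", "Adobe Reader"]   # example names

      def cves_mentioning(product, path="allitems.csv"):
          """Return IDs of CVE entries whose description mentions product."""
          hits = []
          with open(path, newline="", encoding="latin-1") as f:
              for row in csv.reader(f):
                  # assumed layout: row[0] = CVE ID, row[2] = description
                  if len(row) >= 3 and product.lower() in row[2].lower():
                      hits.append(row[0])
          return hits

      for app in APPROVED:
          ids = cves_mentioning(app)
          if ids:
              print(app, "-", len(ids), "known CVEs, e.g.", ids[:3])
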
  • by rs232 ( 849320 ) on Friday August 01, 2008 @12:04PM (#24435029)
    'what are the best methods for creating a test environment where I can analyze apps for security vulnerabilities. We're a multi-platform shop, but my main concern is with Windows apps..'

    You can't test for security vulns, especially in a Windows environment, as there are so many interlocking components that behave differently depending on the configuration. Introduce a service pack into a previously 'secure' environment and all bets are off. All you can do is patch, patch, patch ...

    You could also install a second intrusion detection [wikipedia.org] system that monitors the first for unauthorized access and keeps a full audit of access alterations, etc. You could also compartmentalize your system so that a breach in a web service doesn't automatically lead to a total system compromise.
  • no rootkits (Score:5, Funny)

    by eille-la ( 600064 ) on Friday August 01, 2008 @12:05PM (#24435057)

    You should deny the installation of rootkits, they cause maintenance and security problems

  • Seriously. All you need to do is install a user-friendly Linux distro on the workers' machines, and install Windows using VirtualBox.

    That's the only way to be sure.

    If you're talking about installing software on the Windows Servers, I can only say this: ARE YOU OUT OF YOUR MIND!?!?

    • Yeah. Then just chain everyone to their chair and weld the office doors shut too so that no one can come in and launch any kind of physical attack either. That + Linux = better than gov't security.
  • by cjonslashdot ( 904508 ) on Friday August 01, 2008 @12:05PM (#24435063)
    You are apparently talking about black-box testing. For starters, you need a security team to perform penetration testing on the apps in a production-like environment. But if you have home-grown software, you need to address the problem of insecure systems being built by your programmers. The programmers need to understand application security. For a somewhat theoretical but still practical treatment, I recommend my own book, High-Assurance Design [amazon.com] (Addison-Wesley, 2005). You should also check out Michael Howard's book and his blog [msdn.com]. And then there are Gary McGraw's books which address process. - Cliff
  • Evironment -> Environment
  • by Anonymous Coward on Friday August 01, 2008 @12:08PM (#24435125)

    and refuse to give them hot pockets until they crack the program.

  • by mpapet ( 761907 ) on Friday August 01, 2008 @12:09PM (#24435147) Homepage

    This is a no-win situation for the persons assigned to certifying an application. I personally have a very hard time communicating with managers who believe, with unshakable faith, that this is a reasonable solution to a perceived problem. When it blows up, MY head rolls, not theirs. The age-old "Get IT in my office *now*" blame-shifting game.

    The right way to handle this would be to push back hard and explain why this is an epic failure in the making and a resource quagmire. That isn't typically a good political solution, though.

    I would love some advice as to how to change such foolish thinking.

    Maybe you've got a good thing going there, but if there have been IT organizational changes recently, this may be a harbinger of things to come.

  • by grandbastard ( 1312837 ) on Friday August 01, 2008 @12:10PM (#24435157)

    If a group from sales can't break an app, it's secure.

    You might also use a bunch of chimps. The only difference there is all of the poo flinging, screaming and downright annoyance factor, but it's hard to find good chimps, so it's easier to just put up with it and use folks from sales.

  • Many organizations have tried this approach, but I don't know of any that have been really successful. [For one example, look for user reactions to NMCI -- the Navy Marine Corps Intranet that has become a 4-letter word to any half-intelligent user].

    One of the problems is that there are lots of specialized applications. Users are often used to downloading open-source applications (like, say, an ssh client) when they need it -- and they don't want to wait three months to a year for it to undergo testing...
  • Fools Errand. (Score:5, Insightful)

    by Anonymous Coward on Friday August 01, 2008 @12:17PM (#24435305)

    1) There is no secure software. Just software that doesn't have any known vulnerabilities.
    2) Define 'thoroughly tested'.
    3) White-listing applications is going to cause your company pain as people wait for required tools to become available.
    4) You can't afford to do this yourself.

    Look at it this way. When PHB is shouting for his new shiny piece of software to work, you can bet that 'thoroughly tested' gets severely watered down. You'll be left with some software on the list that has vulnerabilities whilst other people can't get their jobs done because the software hasn't been listed yet. Even if you avoid those pitfalls and manage to 'thoroughly test' each application, how many applications does your company use? Say you can test an application in 2 weeks - one full-time employee will white-list 26 applications in a year. The fact that you are asking /. for help suggests that you don't have much knowledge or experience in this field, so you're going to have to pay for training or for an employee with that training.

    This model is doomed to failure, and whoever suggested it should be shown the door and replaced with a CISSP qualified guy (or gal). Security is a process, and what you need are tools to help carry out that process. Someone with a CISSP qualification will know where to find those tools.

    Some of those tools might include auditing (to find what applications are in use), firewalls, IDS systems (to detect suspicious traffic), patch management (to mitigate vulnerable software), an intelligence provider (to tell you about new vulnerabilities and patches in a timely manner), a Security Event Management system (to manage security data and drive processes) etc...

    For the record, I am a QA lead engineer with experience in enterprise and industrial security. I do this kind of stuff for a (good) living.

  • by maestro371 ( 762740 ) on Friday August 01, 2008 @12:20PM (#24435357)

    The likely goal (from a management perspective) is to establish an authoritative list to allow for efficient management of vulnerabilities. If you know what's out there and "authorized" you can respond to threats far more quickly than otherwise.

    Your management would be silly to expect you (unless you are an application analysis company) to thoroughly vet every application that comes through the door. It would be a terrible waste of time.

    Do some basic analysis (what ports does it open, does it connect out to other systems, that sort of thing). Beyond that, I believe the value is simply in knowing about the application and being able to track any potential issues.
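
    For the "what ports does it open, does it connect out" step, a small sketch using the third-party psutil library (pip install psutil; seeing other users' processes may need admin rights). Run it before and after launching the candidate app and diff the output:

      import psutil

      def listening_ports():
          rows = set()
          for c in psutil.net_connections(kind="inet"):
              if c.status == psutil.CONN_LISTEN and c.pid:
                  try:
                      name = psutil.Process(c.pid).name()
                  except psutil.NoSuchProcess:
                      continue   # process exited between the two calls
                  rows.add((name, c.laddr.ip, c.laddr.port))
          return sorted(rows)

      for name, ip, port in listening_ports():
          print(f"{name:25s} {ip}:{port}")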

  • Well... (Score:2, Informative)

    by CapnStank ( 1283176 )

    The workplace I work for is a rather large multi-billion (possibly trillion) dollar business. When testing application compatibility, they use development servers which mirror the production ones but are completely disconnected from the external network.
    1) Apply new software to the test environment
    2) Distribute access to a test group
    3) Gather reports, determine impact
    4) Distribute into production if deemed appropriate

    It isn't the most cost effective solution but it works when you're trying to roll out an update...

  • Look beyond apps (Score:3, Interesting)

    by Darkness404 ( 1287218 ) on Friday August 01, 2008 @12:25PM (#24435439)

    but my main concern is with Windows apps."

    First, secure your OS somehow. A Windows install will almost certainly be less secure than a comparable OS X/Linux/BSD install, not only because of the openness of code but also because of security through obscurity. Your real trouble isn't skilled hackers - they can get through almost anything if it isn't the nightly build - but rather script kiddies who use 1337hax0rtool.exe to attack.

    And there is no such thing as a secure system, only a more secure system and a less secure system.

  • by Anonymous Coward on Friday August 01, 2008 @12:27PM (#24435479)

    Boss: create me a secure test environment.

    guy: OK, my first step is to ask the people of the internet.

    types: dear slashdot, how can I create a secure test environment?

    slashdot responses:
    -do not use any microsoft products. they are the borg.
    -the important thing is whether you will use vi or emacs.
    -use a ham radio instead
    -who's going to "helm" the next LOTR "vehicle"

  • You shouldn't even start testing for security until you know what you're trying to achieve.
    • Are you worried about insiders stealing customer data?
    • Or outsiders hacking in from the InterWeb?
    • Or a nutter changing your admin passwords and blackmailing you?
    • Or someone in the next office logging your network traffic?

    Start with "Security Engineering" by Ross Anderson [amazon.com]. The first edition is online [cam.ac.uk].

  • by gunnk ( 463227 ) <gunnk.mail@fpg@unc@edu> on Friday August 01, 2008 @12:29PM (#24435511) Homepage
    Security is about mitigating risk, not eliminating it.

    There is no such thing as an app that is "known secure", only apps that are "unknown risk" and "definite risk".

    With that in mind, you can mitigate your risk by:

    1 - Close down any ports that don't absolutely need to be talking to the world. Nmap is your friend here (rough sketch after this list).

    2 - Scan for as many known attack vectors as you can. A good start? Metasploit. Get it. Use it. The bad guys are already probing you with it.

    3 - Personally, I also like to run a different server OS than desktop (i.e.: you probably have Windows on the desktop, so use Linux in the server room). Exploiting shared vulnerabilities between client and server makes life so much easier for the bad guy that REALLY wants to spoil your week.

    4 - Beware of trust. In this case, beware of trust relationships between machines. You don't want one compromised server leading to a bunch more.

    5 - Containment. You CANNOT guarantee every system is secure, so design your network to allow for the eventuality that some portion WILL be compromised. Limit the damage before it happens.

    Oh, and after you use the black hat tools to test your network, scrub those systems you used to bare metal. Don't trust that those systems are still trustworthy.
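
    Not the parent's own tooling, but a sketch of point 1: drive nmap from Python and pull out what's listening (assumes nmap is installed and on PATH; the subnet is a placeholder):

      import subprocess

      # -oG - : nmap's "grepable" output to stdout, one line per host
      out = subprocess.run(
          ["nmap", "-p", "1-1024", "-oG", "-", "192.168.0.0/24"],
          capture_output=True, text=True, check=True).stdout

      for line in out.splitlines():
          if "Ports:" in line:
              host = line.split()[1]
              # each entry looks like "22/open/tcp//ssh///"
              entries = line.split("Ports:")[1].split(",")
              open_ports = [e.split("/")[0].strip() for e in entries
                            if "/open/" in e]
              print(host, "open:", ", ".join(open_ports))
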
    • please use different terminology for your subject header.

      I feel dirty just reading that..

      first image - sex toys demanding answers to calculus questions....

    • All good tips, thanks!

    • Re: (Score:2, Insightful)

      by k8to ( 9046 )

      Certainly agreed.

      You should certainly do all these things. However, some amount of focused, possibly manual, application-specific investigation can also be worthwhile. I *think* this is what the original poster was referring to.

      Investigate the tool conceptually. Identify how it works, what it trusts, how it safeguards against problems. Essentially do your own black box security review from what you know about the program.

      Consider asking the vendor to comment on steps they take to ensure security. You m...

  • A list of Authorized Software: Good. This is good because you need to have a list for those new employees that think LimeWire is a corporate application. (A rough audit sketch follows below.)

    Compatibility testing between different software suites: Good. This is the second reason to have a list, because this is what you're going to run into a lot more than security issues. Some apps may require Java 1.2.3, another Java 4.5.1. Or having multiple Oracle apps on the same workstation. These are the issues your application packagers should be documenting...
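
    The audit sketch mentioned above: one hypothetical way to compare what's actually installed on a Windows box against the authorized list, by walking the Uninstall registry key (Python 3 standard library; the names in AUTHORIZED are placeholders, and matching on DisplayName substrings is crude):

      import winreg

      AUTHORIZED = ["microsoft office", "7-zip", "mozilla firefox"]
      UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

      def installed_names():
          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
              for i in range(winreg.QueryInfoKey(root)[0]):  # subkey count
                  try:
                      with winreg.OpenKey(root, winreg.EnumKey(root, i)) as k:
                          yield winreg.QueryValueEx(k, "DisplayName")[0]
                  except OSError:
                      continue   # subkey without a DisplayName value

      for name in installed_names():
          if not any(ok in name.lower() for ok in AUTHORIZED):
              print("not on the authorized list:", name)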

  • Waiting to thoroughly test app and system updates can be as bad for security as not testing apps at all. Will you wait weeks or months to test a Windows update, just to get hacked by someone who uses what the update fixed to get in?

  • Our IT department has been tasked with creating a list of authorized software, and only allowing software to be added to such a list after it has been thoroughly tested

    That is an incredibly bad idea which will make you the target of user hatred. Your staff is in business to do the business. This is like telling a carpenter that he has to get a new brand of hammer approved by the corporate tool testers instead of just going down the street and buying whatever hammer the hardware store has.

    It's also a profound...

    • Unlike a piece of software, a bad hammer can't compromise your information security.

      Your best bet is to simply forbid specific pieces of software that are known to be a problem (certain browser extensions) and specific categories of software that are usually a problem (peer-to-peer). Then, add to the list as folks make poor choices.

      Enumerating badness, I see. Well, good luck with that.

      • by Fastolfe ( 1470 )

        In this case, it could still be the right approach. You need to estimate the costs of doing nothing (software free-for-all) and doing everything (total lock-down). Those costs should include the risks of evil/buggy software and practices, and the costs to productivity and employee morale. If your business is a minimum-wage bureaucracy (e.g. a call center), it makes sense to lean to the left and stamp out standardized systems with heavily-tested software. If your business is a high-paid software engineering...

    • You can take measures to get some degree of confidence regarding any application.

      You put forward a bloody analogy that is frankly pointless. Doing what you are suggesting (letting people install whatever they think they need) is, to use another better-known analogy, letting the fools run the asylum.

  • Useless Red Tape (Score:2, Insightful)

    by elnyka ( 803306 )
    That task is purely red tape. Sounds good in theory, but if you dissect it, you find that this is merely a list of "authorized" software. What does "authorized" mean? Authorized for what? For whom? By whom?

    It certainly cannot be secure, because all apps have different functions, outputs and installation requirements. You would have to come up with a "lowest common denominator" for the security requirements that apply to all. And if so, it would be of such a low common denomination that the term "secure" be...

  • No software (Score:3, Funny)

    by nategoose ( 1004564 ) on Friday August 01, 2008 @01:13PM (#24436387)
    I'm pretty sure if you do away with software completely you'll be pretty safe.
  • I read the whole thread thinking SOMEONE would mention this, but I forgot, this is Slashdot.....

    You should set up a small ISA Server lab, and use its monitoring tools to tell you EXACTLY what the apps you are testing are doing, when they are doing it, and from where.

    Another advantage of ISA is that you can control apps via Active Directory. Combine that with packet inspection, and you have something where you can know the exact behavior of an app on your network. One DC, one ISA box and a couple of workstations...

  • Secunia (Score:2, Informative)

    Well, if you are using all commercial applications and not homebrew stuff, I highly recommend checking these guys out.

    Secunia [secunia.com]

    They will run a scan on all software on the system and tell you what is there, what has vulnerabilities, what has patches available, what is secure, and what has reached its end of life. For software with missing patches or vulnerabilities, it will rate them by criticality. Plus you can scan remote machines as well.

    I find this very useful for tracking down those bugger programs that are...

  • If you have a kickass appsec team you can put them on the job (few companies do these days). If not, hire out a contracting company like ioactive, leviathan, immunity, xforce, etc.

    Pass them a list of software you want looked at, and they should be able to send it back to you with ratings on whatever scales interest you (long/short term threat to a corporate network, etc).

    I'm not sure if one or any of the companies I mentioned offer services like that. They mostly do product security for the companies that...

  • 2 words (Score:2, Informative)

    I have 2 words for you... Virtual Infrastructure. VMWare or other similar software can save your bottom and offers easy rebuilds. Someone else may have said this already, but I don't have time to read ALL the comments! :o)
  • waste of time (Score:2, Insightful)

    by prennix ( 1069734 )
    So how long will you test? What are the criteria to make it out of the test environment?

    It seems a more reasonable approach would be to:

    1. Pick software that is in wide enough distribution that exploits are published and patched quickly.

    2. Set up the production box on a private LAN. Scan the hell out of it. Lock it down. Scan it again.

    If you think your team is going to find zero-day exploits... then you are either lost or secretly have the crack team (in which case you wouldn't be posing this question).
  • As so many have no doubt said above, you CANNOT determine which apps are secure, and there really is no existing 'test suite' for application security in general. It is far too broad a ground to cover.

    What you need to do is identify 'known good' versions of applications. Essentially that is going to involve finding a version of each app, or maybe several versions, which have no known unmitigatable security issues. You can do that by looking for CVEs, etc.

    Those become your good baseline. From there you'll just...

  • An information security department. It's all well and good for a small business to have security-minded IT pros, but there is something that everyone should realize... management, IT, janitors... everyone. Ready? MANAGEMENT SHOULD NEVER BE IN CHARGE OF THE SECURITY DECISIONS!!! They will always be involved, but the moment you let them take charge of something they know next to nothing about, you get screwed. This is why businesses have weak passwords (or have them written on post-it notes, stuck to monitors)...
  • by diggitzz ( 615742 ) <diggitzNO@SPAMgmail.com> on Friday August 01, 2008 @05:13PM (#24440919) Homepage
    Seriously. You really, really cannot possibly test for every possible vulnerability in every possible app, especially without the source available! The best you can do, IMHO, is to structure your network and systems primarily to isolate any possible bugs from spreading or from compromising integral data. In this way, if bugs or other breaches of security really *are* arising due to users installing insecure apps, you'll know which users did it, because only their own systems will be f*ckd.

    Further, the users, being employees, need their computers to do their jobs and are not trying to maliciously break them (hard as that may be to believe). Indeed, the guy whose computer is broken all the time obviously can't get his job done, so why would someone sabotage their own job by breaking their own machine? Since I'm sure they really want to *keep* their jobs, many users will be happy to attend more training about good security practices, possibly even taught by you.

    This has the added bonus that you get to help close the human security loopholes that exist despite your courageous efforts to shelter the users from their own demise: you can remind them not to share their passwords, to check the ID of visitors, to keep documents off their desks and screensavers locked when they're away, and all those other *ordinary security* measures without which a very malicious or opportunistic person could get direct access to the hardware on your precious network - a way bigger risk than any software bugs could present...

    Give your employer a cost-benefit analysis comparing the migration to either a Linux or Mac server backbone that won't let users f*ck it up so easily, versus the zillions of hours of overtime they'd have to pay a whole crew of code-monkeys and network goons to moonlight scrutinizing all possible apps for all possible vulnerabilities, and let them decide ;)
  • Try this:

    Set up any box, make it OpenBSD, hell, SELinux.

    http://www.openbsd.org/ [openbsd.org]

    http://www.nsa.gov/selinux/ [nsa.gov]

    Lock that bad boy down, complete with Honeypot.
    Then go here - piss these guys off (not hard - be patient):

    http://www.wolfware.dk/intro/welcome.asp [wolfware.dk]

    Did I remind you to watch from a disk-loaded Linux box?
    A good time will be had by all.

    You'll have to toss the hardware on both machines, but eh, if you grab the Honeypot traffic (if they don't catch it) you could write a book.

    PROFIT !!!

  • by rwa2 ( 4391 ) * on Friday August 01, 2008 @10:51PM (#24444193) Homepage Journal

    Well, for a security test environment, I'd find it quick and easy to set up a bunch of virtual machines on an isolated network (that is similar to your production network, with proxies and firewalls to the big bad internet where appropriate, etc.). This will make your test environment easy to clone & reset to a known configuration.

    Then you want to place a sniffer (wireshark) where you can see all traffic between the virtual machines. This gives you some idea of what a piece of software is "doing"... what remote servers it's trying to connect to, whether it bothers to use any type of encryption or at least obfuscation when it sends data around in the network, etc. Might want to run portscans (nmapfe) as well to see what vulnerabilities the software opens up on your host, and whether you can exploit them using commonly available hacker tools.

    Finally, it'd also be informative to have an intrusion detection system (such as snort + acidlab) on that network (as well as on your production network) to help catch and interpret suspicious network activity.

    So there are some basic things you can do to easily assess what risks your applications pose from the outside. It will help you catch basic hacker attacks such as IRC / VNC backdoors and stuff. More sophisticated attacks (which might conceal traffic to the outside via sneaky TCP-over-DNS or the like, or hide backdoors using port-knocking) would be harder to detect... for those you'd just have to have an accountability trail back to your suppliers of that software, especially if you have no way to inspect the source code for that kind of malicious embedded trojan.
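
    For the sneaky TCP-over-DNS case, one crude heuristic sketch (not the parent's setup): sniff DNS queries on the test network and flag unusually long names. Assumes the third-party scapy library and root/admin rights; the 60-character threshold is a guess:

      from scapy.all import sniff, DNSQR

      def check(pkt):
          if pkt.haslayer(DNSQR):
              qname = pkt[DNSQR].qname.decode(errors="replace")
              if len(qname) > 60:   # tunnels pack data into long labels
                  print("suspicious DNS query:", qname)

      sniff(filter="udp port 53", prn=check, store=False)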

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...