Security

Changes in the Network Security Model?

Kaliban asks: "As a sysadmin, understanding network security is clearly an important part of my skill set, so I wanted to get some thoughts on a few things I've seen recently after discussions with co-workers. Are network services becoming so complicated that application-level firewalls (such as ISA Server) are absolutely necessary? Is the simple concept of opening and closing ports insufficient for networking services that require the client and server to open multiple simultaneous connections (both incoming and outgoing)? This leads me to my next question: has the paradigm of 'if you offer external services to the Internet, then place those machines on a perimeter network' been eroded? Are application-level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet? When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."
This discussion has been archived. No new comments can be posted.

  • by Eponymous Cowboy ( 706996 ) on Tuesday September 30, 2003 @12:45AM (#7091270)
    There are three disparate levels of security you need to consider, and it is advisable to take a three-tiered approach to the problem.

    First, for employees and others who have trusted access to your network, the answer is not to poke holes in your firewall. Rather, the answer is simple: three letters. VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network. From your point of view and theirs, it is as if their machines were physically located on the other side of your firewall--just like having the machines right in your building.

    Second, for business partners and contractors who need limited access to a subset of services, but whom you do not trust fully, the answer is quite likely also a VPN, but not directly into your network. For services provided to these people, you want everything from your end first going through application-level firewalls, and then through the VPN, over the Internet, to them.

    Using a VPN in these cases prevents random hackers from entering your network on these levels.

    Finally, for the general public who simply need access to your web site, the ideal situation is to simply host the web site on a network entirely separate from yours--possibly not even in the same city. Use an application-level firewall to help prevent things like buffer overflows. Then, if your web server needs to retrieve information from other systems on your network, have it communicate over a VPN, just like the second-level users mentioned above--that is, through additional levels of firewalls to machines not directly on your primary network. (Basically, you shouldn't consider your web servers as trusted machines, since they are out there, "in the wild.")

    By following this approach, you expose nothing more than is necessary to the world, and greatly mitigate the risk of intrusion.
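
    To make the segmentation concrete, here is a minimal iptables sketch of the forwarding policy for a three-legged firewall. The interface names and addresses (eth0 = Internet, eth1 = DMZ with a web server at 192.0.2.10, eth2 = internal LAN) are assumptions for illustration, not a drop-in configuration:

        # Default-deny on all forwarded traffic
        iptables -P FORWARD DROP
        # Allow replies to connections that were legitimately established
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # The Internet may reach the DMZ web server, and nothing else
        iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 192.0.2.10 --dport 80 -j ACCEPT
        # The internal LAN may initiate connections outward
        iptables -A FORWARD -i eth2 -j ACCEPT
        # The DMZ must never initiate connections into the internal LAN
        # (redundant under default-deny, but it documents the intent)
        iptables -A FORWARD -i eth1 -o eth2 -j DROP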
  • Immature Technology (Score:5, Informative)

    by John Paul Jones ( 151355 ) on Tuesday September 30, 2003 @01:04AM (#7091300)
    Are application level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet?

    Nope. That should never happen.

    The problem here is that application-level firewalling is fraught with problems. The lack of intuitive management for this type of firewalling is a problem that quite a few companies are trying to solve -- with limited success, so far. The problem is that as you move up the OSI layers, the variables increase exponentially. If you think that 65,536 is a big number, try writing an application-level script that permits "acceptable" MAPI requests while denying "unacceptable" MAPI requests. How do you determine that this NFS packet is good, and this one is bad? From the same host to the same server? How about X11? SSH? Oh, and don't break anything while you're at it. Lions and tigers and bears, Oh my!

    These are the problems of an immature technology. As time passes, these issues might be somewhat mitigated, but there are plenty of "network administrators" that haven't fully grasped the concept of IP, and struggle with L3/L4 firewalling, to say nothing of moving up the stack.

    Here's a tip, though: look for Bayesian filters in firewalls in a few years. That will be a trip.

  • Re:Keep it simple. (Score:4, Informative)

    by dzym ( 544085 ) on Tuesday September 30, 2003 @01:24AM (#7091347) Homepage Journal
    Just as a point of comparison:

    As part of a security test, we placed an NT4 SP4 box running an unpatched Option Pack install of IIS (note that this is perhaps the most easily exploitable Windows configuration on the face of the planet) behind an ISA SP1 firewall running on Windows 2000 SP3. Using readily available exploit code for IIS and for either operating system, we were unable to compromise or even DoS either of the two Windows servers.

    Now, it may still be possible to exploit the aforementioned boxes, but clearly it would take a great deal more effort than just pointing a NIMDA-alike at the NT4 box.

  • Some add'l tidbits (Score:5, Informative)

    by Anonymous Coward on Tuesday September 30, 2003 @01:31AM (#7091391)
    First off, remember - you won't be able to think of everything. No security model is complete without behind-the-wall systems, whether that's basic monitoring or a more sophisticated custom Snort setup or proprietary IDS. It all depends on your paranoia level.

    There are a few ways to handle the bane of netadmins - "I wanna get to my files!" VPN, as suggested, is one solution - but not without problems: witness the recent issues with X.509, OpenSSH hacks for IP-over-SSH, etc. You can mitigate the danger by applying a consistent set of criteria to each of your requirements, like a checklist. For example:

    1) Is the service mission-critical? (BOFH them if no!)
    2) Can the service be offered through a less-vulnerable channel? NFS mounts moved to SFS, perhaps, or encrypted AFS as mentioned above.
    3) Is there a way to move the service into a perimeter network (or outside entirely)? Even if this means synchronizing a set of data to an outside machine via cron (see the sketch after this list), if the data on the machine is less important than the internal network security, this can help.
    4) Once the user is connected, authenticated, and has access, *THEN* what can go wrong? What could they do maliciously? What could they do accidentally?
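
    As a sketch of the cron-synchronization idea in item 3 (host names, paths, and the account are made up for illustration), have the internal box push the data outward, so the perimeter host holds no credentials that reach back into the LAN:

        # /etc/crontab on the INTERNAL host: push, never pull.
        # (Assumes an SSH key restricted to this one job.)
        # Sync the shared folder out to the DMZ box every half hour.
        */30 * * * * syncuser rsync -az -e ssh /srv/public/ dmz-host.example.com:/srv/public/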

    Personally (and this is just me talkin', no creds here) I tend to reflexively say "NO!" until convinced otherwise. I know that there are services which *must* be available through the wall, but I want the requestors to have to work to convince me. Closed systems are more secure.

    Also, don't be afraid to investigate low-tech but simple and effective means of circumventing problems. First thing I ask users who want to get an occasional file home: "Can you mail it to yourself?" Second thing: "Would you be able to use a 'public folder' that I sync to an accessible box, say, every half hour?"

    I second the opinion of iptables. It's a sharp tool, so be careful - but correctly applied, it kicks the pants off most application or appliance firewalls. Invest the time to learn the sharp tool, and you'll realize that most of what you pay for on big expensive firewalls is manageability (i.e., Java GUIs, wizards, databases, multiple preconfigured systems - IDS, firewall, proxy, etc). Do the work.
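
    For anyone picking up the sharp tool, a minimal default-deny host policy looks something like the sketch below (which ports you open is, of course, your call; SSH is just an example):

        # Drop everything inbound unless explicitly allowed
        iptables -P INPUT DROP
        # Loopback traffic and replies to our own connections are fine
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allow new inbound SSH, and nothing else
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT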

    Good luck. Don't listen to people who berate you for 'not knowing things.' Attempting to learn them in advance - due diligence - is a sign of a good admin. Be thorough. And above all, find a friend who does the same kind of work, and check each other. Probe each other's networks. Try exploits posted on the net.

    Final, and most important: software updates. The boring part, but the most critical.

    Cheers.
  • by sid crimson ( 46823 ) on Tuesday September 30, 2003 @01:38AM (#7091421)
    Sadly, you are better off than the majority (?) of people. Ironically, it's possible you're more likely to fall prey to a bad MS Patch than anything else.

    If your virus software is kept up to date then your Linksys will serve you well. Keep a good backup of your data for the times that your antivirus update comes after the virus/trojan/worm infection.

    I might suggest your worst enemy is a coworker, or a family member of said coworker.

    -sid
  • by canning ( 228134 ) on Tuesday September 30, 2003 @01:47AM (#7091461) Homepage
    When firewalls don't do the job
    by Mike Fratto, Sep 29, 2003

    Battle lines have been drawn, and volleys are being lobbed between the analyst and vendor camps. In dispute: Whether intrusion prevention is out of commission or the next network security salvation.
    On one side, Gartner has cast intrusion detection into its "Trough of Disillusionment", saying the tech has stalled and calling for these functions to be moved into firewalls. Meanwhile, intrusion-prevention product vendor ForeScout Technologies vows to identify and block attackers "with 100% accuracy".
    Call us Switzerland, but we say neither group has a lock on the truth.
    Network intrusion prevention (NIP) systems probably will not protect your network from the next zero-day exploit or troublesome worm, but they are not a waste of time or money, either.
    Our position puts us in the minority: Though we think NIP systems can enhance an existing security infrastructure, we do not consider integrating intrusion prevention and firewalls into a single unit a desirable goal.

    Firewalls vs. NID
    Firewalls have a largely static configuration: firewall administrators define what is acceptable traffic and use the features of the firewall to instantiate this policy.
    Some firewalls provide better protection features than others. For example, an HTTP application-level proxy is far superior to an HTTP stateful packet-filtering firewall at blocking malicious attacks, but the basic idea is the same: Your firewall administrator can be confident that only allowable traffic will pass through.
    If you have doubts about your firewall, get a new one from a different vendor, send your firewall administrator to Firewall Admin 101, or get a new administrator.
    Not surprisingly, when we asked you why you are not blocking traffic using network-based intrusion detection (NID) systems, 63% of you said you use a firewall to determine legitimate traffic.
    But people make mistakes, so misconfigured firewalls are a common source of network insecurity.
    This simple fact has been used as a selling point for both intrusion detection and prevention systems, with vendors claiming their products will alert you to, or block, attacks that do get through.
    The answer: Instead of layering on more hardware, solve the fundamental problem of misconfiguration.

    Think configuring is easy?
    Unfortunately, it is not that simple. If you are enforcing traffic policy on your network using a stateful packet-filter firewall--such as Cisco Systems' PIX, Check Point Software Technologies' FireWall-1 or NetScreen's eponymous product--without security servers or kernel-mode features enabled, you should know that application-layer exploits, such as server-buffer overflows or directory-traversal attacks, will zoom right through. Stateful packet filters stop at Layer 4.
    Application-proxy firewalls can block some attacks that violate specific protocols, but face the facts: protection is limited to a handful of common protocols.
    The rest are not supported through a proxy, or are supported through a generic proxy, which is no better than a stateful packet filter.
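
    (To make the article's Layer-4 point concrete: the sketch below shows a typical stateful rule and two requests it treats identically; the address is a placeholder.)

        # A stateful filter's entire view: "TCP to port 80 on the web server" - accept.
        iptables -A FORWARD -p tcp -d 192.0.2.10 --dport 80 -m state --state NEW -j ACCEPT
        # Both of these requests arrive as valid TCP to port 80 and pass the
        # rule above; only something that reads the HTTP payload can tell
        # them apart:
        #   GET /index.html HTTP/1.0
        #   GET /scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir HTTP/1.0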
    Still, NIP is not a replacement for firewalls and will not be in the foreseeable future. Why? The fundamental problem is false positives--the potential to block legitimate traffic.
    Before you can prevent attacks, you have to detect them, but NIP systems rely on intrusion detection, which is hardly an exact science.
    A properly configured firewall will allow in only the traffic you want. We need to feel this same confidence in IDSs before we can believe in NIP systems, but IDS vendors have employed lots of talented brain cells trying to raise detection accuracy, and they are nowhere close to 100%.

    Incoming!
    Despite these caveats, we believe a properly tuned NIP device can be instrumental in warding off most malicious traffic that gets past your firewall.
    There are several ways to block malicious traffic: If the NIP device i
  • by JebusIsLord ( 566856 ) on Tuesday September 30, 2003 @01:57AM (#7091513)
    You can tunnel VNC through SSH though, making it quite secure (and, as an added bonus, faster through compression). The old VNC site even used to recommend it.
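
    For reference, the usual recipe is an SSH local port forward (display 0 is port 5900; the host name is a placeholder):

        # Forward local port 5900 to the VNC server, encrypted, with compression
        ssh -C -L 5900:localhost:5900 user@vnc-host.example.com
        # Then point the viewer at the local end of the tunnel
        vncviewer localhost:0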
  • by altamira ( 639298 ) on Tuesday September 30, 2003 @02:34AM (#7091635) Journal

    There are a few very sophisticated application-level firewalls on the market, but each covers a very specific set of protocols. NFS and MAPI are not among them: they are far too complex, and it's too hard to distinguish bad traffic from good. HTTPS, on the other hand, is well suited to full application-layer inspection, and that can make it practical to allow access to an application on your INTERNAL network from the outside. On the firewall side, however, this requires very sophisticated rulesets that must be modified whenever the application changes, and a very skilled administrator. Whale Communications makes one such product (e-Gap Application Firewall), which may well be the most sophisticated application-level firewall for HTTPS. Other vendors offer authenticating reverse proxies that do session management and forward only traffic belonging to live, authenticated sessions, which can likewise make it practical to run the application on your internal network.

    Just think about it - in an ideal world, you could connect your internal database directly to the web: no replication out to the insecure area (DMZ), no trust relationship (not in the Windows meaning of the word!) with the DMZ, no poking holes in your firewall for DB/RPC/other proprietary communication protocols, no rolling out and maintaining the same set of hardware and software twice...

    BUT this comes at a price - secure application layer proxies require skill and money.

    Disclaimer: I work for a company that has been implementing the Whale solution in Germany for two years. However, I chose the Whale solution solely for its technical merit.

  • by sid crimson ( 46823 ) on Tuesday September 30, 2003 @03:19AM (#7091758)
    I'm working on something similar... Exchange/OWA on the net.

    There are a couple of people who just need to POP their email while away. Perdition's POP3 proxy over SSL is a decent solution: set up the POP3 proxy box on a network separate from the Exchange server (i.e., a DMZ) and you're set.

    There are a few who must have OWA access. For them, set up a reverse proxy with Apache or Squid, and get a certificate for that server to communicate with your Exchange/OWA/IIS box.
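
    A minimal Apache sketch of such a reverse proxy, assuming mod_ssl and mod_proxy are loaded (the host names, paths, and certificate are placeholders, not a hardened configuration):

        # httpd.conf fragment: SSL reverse proxy in the DMZ for OWA
        <VirtualHost *:443>
            ServerName mail.example.com
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/mail.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/mail.example.com.key
            # Needed because the backend connection is itself SSL
            SSLProxyEngine on
            ProxyRequests Off
            # Forward only the OWA path to the internal Exchange/IIS box
            ProxyPass /exchange/ https://owa.internal.example.com/exchange/
            ProxyPassReverse /exchange/ https://owa.internal.example.com/exchange/
        </VirtualHost>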

    And for goodness' sake, relay all your email through something before it hits your virus-protected Exchange box. I suggest a Postfix [postfix.org] / SpamAssassin [spamassassin.org] / ClamAV [elektrapro.com] setup.
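
    A sketch of the relay in Postfix's main.cf (domain and host names are placeholders, and the content_filter line assumes an amavisd-new-style glue service listening on port 10024):

        # main.cf on the relay: accept mail for the domain, scan it,
        # then hand it to the internal Exchange server.
        relay_domains = example.com
        transport_maps = hash:/etc/postfix/transport
        content_filter = smtp-amavis:[127.0.0.1]:10024

        # /etc/postfix/transport (run postmap on it afterwards):
        example.com    smtp:[exchange.internal.example.com]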

    -sid
  • by Dagmar d'Surreal ( 5939 ) on Tuesday September 30, 2003 @03:34AM (#7091799) Journal
    "[...] has the paradigm of 'if you offer external services to the Internet then place those machines onto a perimeter network' been eroded?"

    The simple answer to this question is "Definitely not." The use of a DMZ segment to keep production machines on their own physical network segment is unlikely ever to become obsolete, because the benefits of this simple step are so great.

    "Are application level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet?"

    Whether they are or not is irrelevant. Only the barest minimum of your network should be exposed to another network (especially the Internet), and the hosts that _are_ exposed should be unable to initiate connections to the rest of your network, to limit the loss of confidentiality if one of them is compromised. While this may seem rather anal-retentive, a proper application-level firewall can't just casually filter by generic service type. It _has_ to be able to distinguish a kosher query from a malicious one, and that requires a LOT of detailed work in the firewall rules to ensure that only the queries you want passed through actually are. If you have a lot of custom CGIs with input parsing, this can turn into a nightmare of man-hours to maintain.

    "When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."

    I mainly agree with you and feel that the answer is really "Almost never", with "never" requiring some support from the developers maintaining your site. If they're on board with the concept of a DMZ, they'll help you by designing the production system so that connections are made _to_ it from the intranet to extract information from the production hosts, instead of having the production hosts initiate connections to the intranet, which increases the chance an intruder could do the same. If you can't control the access because it's some wacky proprietary protocol, institute a second DMZ (network cable is cheap, and so are extra NICs). No other network should ever be allowed to reach inside your intranet.
  • by snotty ( 19670 ) on Tuesday September 30, 2003 @03:35AM (#7091801) Homepage
    Actually, having ISA Server publish your Exchange server (using RPC) or Outlook Web Access (OWA) is a great alternative to hosting yet another server you're going to have to patch and lock down. Configuring a firewall that is meant to be secure is much easier than trying to lock down a web server. Web servers on the edge don't even have the monitoring and reporting capability you'll need to know whether things are running smoothly. If all you want to let out is webmail, just publish OWA. ISA Server can add a layer of protection that a web server can't, including URLScan filtering, SecurID two-factor authentication, and pre-authentication. On top of that, if you want, you can install a Symantec virus-filtering agent on the ISA Server and simultaneously filter viruses out of your webmail. There are hundreds of users who use ISA to protect their Exchange and webmail. Don't take my word for it though. Check out:

    Serverwatch [serverwatch.com]
    Microsoft's own site [microsoft.com]
    ISAServer.org [isaserver.org]

    The best answer is always defense in depth. Having a firewall in front of your web and email servers is good; having an application-aware firewall in front of them is better; having both, with a secure policy on them, AV software, and machines kept patched, is best.

  • by cowbutt ( 21077 ) on Tuesday September 30, 2003 @04:40AM (#7091967) Journal
    That's one of the few features I actually do like about Check Point's FireWall-1 suite: their SecureClient VPN client software allows the firewall administrator to push firewall policies that are enforced locally on hosts intended to be VPN clients.

    It's not perfect of course, as a host could be compromised before SecureClient is installed, but in a controlled environment, that should never really be the case.

    --

  • by Phroggy ( 441 ) * <slashdot3@ p h roggy.com> on Tuesday September 30, 2003 @04:52AM (#7091990) Homepage
    You could run something like SquirrelMail, which is a webmail package that uses IMAP to talk to your mail server. I think the idea of using Apache as a proxy server to connect to an internal server with OWA is also a good one (as opposed to port forwarding or "poking a hole", which would look the same to the user but be significantly less secure). Either of these ideas should work fine with whatever OS you want.
  • by graf0z ( 464763 ) on Tuesday September 30, 2003 @06:23AM (#7092252)
    The problem with application-level firewalls and NIDS is that all current systems ready for production use are based on pattern matching, just like virus scanners. They detect a "bad packet" (like one containing a standard rootshell) if there is a matching signature in the database. Additionally, they can enforce protocol correctness (e.g., by dropping evil overlapping IP fragments). Both come at a high cost, since IP fragments and TCP segments have to be reassembled for inspection.

    These systems may filter standard attacks (e.g., exploits you find on Bugtraq or Packetstorm) quite well, but you can imagine how easy it is to get past such a firewall by varying an attack. They know many standard variations (like "/cgi-bin/../cgi-bin/" instead of "/cgi-bin/", or inserting NOPs into a rootshell), but there are a thousand and one ways of doing the same thing, and most won't get detected.
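
    To see why variation defeats signatures, consider a simplified Snort-style rule of the sort these systems ship with (a sketch for illustration, not an actual shipped rule):

        # Matches the literal string only: "/cgi-bin/../cgi-bin/phf" or an
        # encoded "%2fcgi-bin%2fphf" sails right past unless the sensor
        # normalizes the URL first.
        alert tcp any any -> $HTTP_SERVERS 80 (msg:"phf CGI access"; content:"/cgi-bin/phf"; nocase; sid:1000001;)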

    So: do NOT think your $XXXXXX oversecure paranoia application-level firewall absolves you from secure network design or from patching your systems! Instead, do the usual:

    • use separate subnets of different security levels (like a DMZ)
    • keep your systems at a recent patch level
    • tighten your configs, and review them with more than two eyes
    • use proxies (maybe with authorization), and build virus scanning into your HTTP and SMTP proxies
    • do NOT consider your internal network "safe": don't use telnet to administer your internal *NIX servers
    The last point is due to the fact that it's far too easy to inject hostile code into a browser. Most scripting attacks are NOT detected by state-of-the-art virus scanners if they are slightly modified, so consider the desktop workstations on your network a bunch of trojans.

    To summarize: you have an excellent chance of averting 99% of all attacks (since those are the known attacks of script kiddies, zombies, and the like) with standard techniques like those mentioned above. You have a good chance of making a random attacker move on to an easier target. You have almost no chance of stopping a skilled attacker with time on his hands who wants to get into YOUR machines.

    /graf0z.

  • by julesh ( 229690 ) on Tuesday September 30, 2003 @06:47AM (#7092310)
    SSH allows port forwarding, even backward (i.e., you can run SSH sessions into the company by contacting an outside server and connecting back over that very SSH connection).

    There's a very simple solution to that.

    Put "AllowTcpForwarding no" in /etc/ssh_config

    Simple.

    (Aside: there is a note in the openssh manual that reads "Note that disabling tcp forwarding does not improve security in any way, as users can always install their own forwarders." I think this only applies if you give them unrestricted shell access. See another post in this thread for information about a restricted shell that allows scp to work but prevents other stuff from executing).
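
    For reference, a server-side sketch (the global settings below are standard sshd_config options; the per-user Match block only exists in newer OpenSSH releases, so treat that part as illustrative):

        # /etc/ssh/sshd_config - enforced by the server, not the client
        AllowTcpForwarding no
        X11Forwarding no

        # Newer OpenSSH only: a file-drop account that can't forward anything
        Match User filedrop
            AllowTcpForwarding no
            ForceCommand internal-sftp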
