Security

Changes in the Network Security Model? 261

Kaliban asks: "As a sysadmin, understanding network security is clearly an important part of my skill set, so I wanted to get thoughts on a few things I've seen recently after some discussions with co-workers. Are network services becoming so complicated that application-level firewalls (such as ISA Server) are absolutely necessary? Is the simple concept of opening and closing ports insufficient for networking services that require the client and server to open multiple simultaneous connections (both incoming and outgoing)? This leads me to my next question: has the paradigm of 'if you offer external services to the Internet, then place those machines on a perimeter network' been eroded? Are application-level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet? When is it all right to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!', but perhaps I'm out of touch with current network security models."
This discussion has been archived. No new comments can be posted.

  • Multiple Firewalls (Score:3, Interesting)

    by Renraku ( 518261 ) on Tuesday September 30, 2003 @01:01AM (#7091291) Homepage
    I can see why the desire for more than one firewall is going to go up. Here's an example. At the border, you might have a hardware firewall set up before data can even get to the machines. Then you might have a per-cluster firewall, so each department or cluster of computers can set its own policies for what gets in and what doesn't. Then there would be the firewall on each machine, which could be set according to the uses of the machine. So there would be three layers of shielding before you even get to the security features of the OS itself. Or you could just go VPN like someone suggested. Another good idea would be to have some kind of username/password setup so that some people could bypass the first firewall; the issue of 'trust' wouldn't be as big as allowing someone to zip through all the firewalls.
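    As a toy illustration of that layering: a connection is admitted only if the border, the cluster and the host policies all allow it. The names and port lists below are invented for the example, not a real product configuration.

        # Toy model of three firewall layers: every layer gets a veto.
        BORDER  = {"allow_inbound_tcp": {22, 25, 80, 443}}           # edge firewall
        CLUSTER = {"engineering": {"allow_inbound_tcp": {22, 80}}}   # per-department policy
        HOST    = {"buildbox": {"allow_inbound_tcp": {22}}}          # per-machine policy

        def allowed(dst_port, cluster, host):
            layers = [BORDER, CLUSTER[cluster], HOST[host]]
            # Defence in depth: the connection must be allowed at every layer.
            return all(dst_port in layer["allow_inbound_tcp"] for layer in layers)

        print(allowed(22, "engineering", "buildbox"))   # True  - all three layers agree
        print(allowed(80, "engineering", "buildbox"))   # False - the host policy blocks it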
  • by redhog ( 15207 ) on Tuesday September 30, 2003 @01:06AM (#7091306) Homepage
    One thing that I need to consider at my current job is that you can NOT trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors, and need to be shielded off, so a simple VPN solution is no solution.

    We've solved the most immediate problem by allowing only ssh, and giving employees with Windows a copy of WinSCP (an excellent, two-pane, Windows-FTP-client-lookalike front-end to scp), which they have had no problems using (they did not have any opportunity to work from home before, so they don't complain :).

    We also plan to later on introduce AFS and allow remote AFS mounts, and VNC remote-desktops.

    Locally, we have a simple port-based firewall, basically walling off all inbound traffic except ssh and http (and allowing nearly all outbound traffic), and we keep our OpenSSH and Apache servers updated (have you patched the two ssh bugs reported on /. on your machines yet?). A sketch of that kind of inbound policy follows this comment.

    So, my advice is - keep it simple. Do not trust an overly complicated system. And keep your software patched against the latest bugs - keep an eye on the security update service for your distro/OS, and on BugTraq.
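    A minimal sketch of that default-deny inbound policy, assuming a Linux gateway with iptables and root privileges; a real rule set would also need loopback, ICMP and any site-specific services.

        # Sketch only: default-deny inbound, allow outbound, permit ssh and http.
        import subprocess

        def iptables(*args):
            # Thin wrapper so each rule reads as one line; needs root to apply.
            subprocess.run(["iptables", *args], check=True)

        iptables("-P", "INPUT", "DROP")      # drop anything inbound by default
        iptables("-P", "OUTPUT", "ACCEPT")   # allow (nearly) all outbound traffic

        # Let replies to our own outbound connections back in.
        iptables("-A", "INPUT", "-m", "state",
                 "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT")

        # The only services offered to the outside: ssh (22) and http (80).
        iptables("-A", "INPUT", "-p", "tcp", "--dport", "22", "-j", "ACCEPT")
        iptables("-A", "INPUT", "-p", "tcp", "--dport", "80", "-j", "ACCEPT")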
  • by kennyj449 ( 151268 ) on Tuesday September 30, 2003 @01:16AM (#7091319)
    In my opinion, between the danger of worms transmitted above the application level, the existence of uneducated (in many cases, uneducable) users, and the whole physical security issue, even an internal network is not to be trusted (though few are actually worse than the Internet, except for pervasive wireless networks that don't use a strong, non-WEP encryption solution). VPNs can definitely be very useful, but using them only at the outer edges of your network (e.g. Internet-based links) leaves you wide open to any form of attack that originates from inside, which is always a danger no matter how good your external defenses are.

    Personally I don't think physical separation is necessary if you're going to be using a strong VPN, because you can make it so that the only traffic that passes back and forth goes through the VPN and is then no less secure (if anything, more secure, except for purposes of physical security) than if it were being passed over the Internet. You also get the advantage of increased throughput, a single physical site (or fewer sites) to manage, and lower bandwidth costs. Every little bit helps...

    In any case, it is my opinion that any computer which can communicate with others on the internet, no matter how well-restricted such communications are, should itself be considered non-trustworthy. It might be safer for being behind a firewall, but it can still grab a trojan or worm either through accidental or intentional means and become a staging point for internal attacks. It is for this reason that I personally believe that it is imperative to ensure that every computer on a network is secure and has personal firewalling of some form installed (if you're dealing with *nix workstations this is a no-brainer for a competent admin; Windows boxen will benefit greatly from simple solutions such as Tiny Personal Firewall.)

    This all goes double for boxen which are physically located outside the network and which VPN inside (this is the reason for the last paragraph's worth of rambling). A certain amount of distrust should be exercised toward computers which can at times find themselves poorly protected from the dangers of the Internet, so it is necessary not only to keep such boxes under close scrutiny and send their traffic through a decent firewall, but also to either educate users (as well as possible) on good security or require as a matter of policy that they use certain security measures (a personal firewall combined with a regularly updated antivirus application is a potent combination that goes a long way towards keeping a computer clean). Assuming that a VPN is a safe connection is a recipe for disaster; it prevents others from listening in, but otherwise it is no better than any old TCP/IP connection.

    VPNs, of course, can be quite useful on an internal network. Packet sniffers tend to have difficulty picking up on SSH as it is, but put that through a 1024-bit encrypted tunnel and it becomes exponentially more difficult to crack apart (and such layering protects you from vulnerability, as there are now *two* effective locks which must be picked in order to gain entry). It isn't going to make a difference between two servers connected with a crossover cable that enjoy strict physical security, but when traffic is being passed over a network with old Windows 95 boxen running Outlook, it pays to be prudent. Such encrypted separation, when used intelligently, can often eliminate the need to physically separate network segments when connectivity can be useful.

    Oh, one last point: if you're using a WLAN, it's only logical that unless it's strictly for visitors doing web surfing and chatting on AIM, a VPN is useful there as well. WEP is both less useful and far less effective.

    As for a good VPN technology to use for any application, IPSEC is always handy (and enjoys excellent and robust out-of-the-box support in the more recent revisions of... almost everything.)

    Sorry if this seems a bit unclear, but I've had a long day. :)
  • Bayesian filters (Score:3, Interesting)

    by SuperKendall ( 25149 ) * on Tuesday September 30, 2003 @01:32AM (#7091394)
    A general question - Bayesian filters are great for email because a user trains them. But do you think it will ever be practical to "train" a firewall as to what is good and bad traffic? I guess to some extent you could use regression tools to generate the sorts of traffic you like... but it seems like such a thing would have to have a pretty high threshold in order not to drop any real traffic. I'm not sure such a device is practical.
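    For what it's worth, "training" such a filter might look something like the toy sketch below: a naive-Bayes-style score over a few categorical connection features (destination port, protocol, payload size bucket), trained on labelled examples. The features, labels and smoothing are invented for illustration; a real device would need far richer features and the very high accept threshold mentioned above.

        # Toy naive-Bayes-style traffic scorer; purely illustrative.
        from collections import defaultdict
        import math

        class TrafficBayes:
            def __init__(self):
                self.class_counts = defaultdict(int)                          # label -> examples seen
                self.feature_counts = defaultdict(lambda: defaultdict(int))   # label -> feature -> count

            def train(self, features, label):
                self.class_counts[label] += 1
                for f in features:
                    self.feature_counts[label][f] += 1

            def score(self, features, label):
                total = sum(self.class_counts.values())
                logp = math.log(self.class_counts[label] / total)   # prior
                denom = self.class_counts[label] + 1                # crude add-one smoothing
                for f in features:
                    logp += math.log((self.feature_counts[label][f] + 1) / denom)
                return logp

            def classify(self, features):
                return max(self.class_counts, key=lambda label: self.score(features, label))

        fw = TrafficBayes()
        fw.train({"dport:80", "proto:tcp", "size:small"}, "good")
        fw.train({"dport:22", "proto:tcp", "size:small"}, "good")
        fw.train({"dport:135", "proto:tcp", "size:small"}, "bad")     # worm-era RPC probe
        print(fw.classify({"dport:135", "proto:tcp", "size:small"}))  # -> "bad"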
  • by m4dh4tter ( 712011 ) on Tuesday September 30, 2003 @01:40AM (#7091431)
    Face it folks. Provisioning security services at network perimeters is just wishful thinking, and this is not a new insight. Traditional packet filtering firewalls are absolutely necessary (do you walk around your neighborhood naked?) but they must become much more widely distributed *inside* large networks in order to be effective. The same applies to application filtering technologies (some of which are very promising) and all the other stuff people think of as perimeter defenses. Any attempt to set up large networks as controlled domains with known security characteristics is a losing battle. The world needs to go to endpoint-driven security. A lot of companies are working on making this manageable and cost-effective. And while we're at it, that's also the place to incorporate highly granular access-control services. As long as you have machines on your network that can hit external web sites or have floppy drives or unauthorized wireless access points, your internal network *is* the internet.
  • by Anonymous Coward on Tuesday September 30, 2003 @01:46AM (#7091454)
    The firewalls are there in Linux distros; the question is what the default setup is on your distro of choice. Micro$oft has firewalls in its more recent releases too, but again the question of defaults comes up. The ultimate issue comes down to what the default settings should be and how much should be cut off. Of course the most secure setting is complete isolation, so it becomes a matter of tradeoffs. Do we want features X, Y & Z plus their inherent weaknesses, or do you default to no X, Y & Z, giving more security but the hassle of enabling X, Y or Z as needed? It's been mentioned many times here on Slashdot that security is all about tradeoffs, and that's where the ultimate question comes in, especially when looking at it from a commercial point of view: what is acceptable to customers (in convenience, features & ease of use)? As for exploits, they will always be there; the question is how desirable they are to find. To err is human, and no matter what we do there will always be a way to find that error. If you're that worried, the best firewall is to be disconnected (again, the tradeoffs).
  • by egarland ( 120202 ) on Tuesday September 30, 2003 @01:47AM (#7091459)
    There is no one answer. If security is your only concern you should have as many layers of security as possible with firewalls between each layer locked down as tight as possible. That said, security is never your only concern. Cost, ease of maintenance, performance, and flexibility are all important in a network design. After all, the purpose of your company is probably to get something accomplished, not to avoid getting hacked. There are times when every different network configuration is appropriate from super secure to a cable modem router to a windows box right on the internet. There is no one answer.

    Application layer firewalls are another layer above port filtering. They can increase security and could, in theory, make it worthwhile to share a service hosted on a machine that is inside your network. I would only do that if you trust the security of your internal network. Most network designs assume that once you get into the "internal network" there is no more security and all your deepest company secrets are available to anyone browsing around. If this is true, you've probably made some bad decisions somewhere along the way, and you should address those before you open any holes. If you are willing to maintain strict security on your internal network, then the added simplicity of allowing Internet access to machines on it can be worth the risk. This can be a lot easier than setting up a DMZ.

    Usually layers do make sense, though, even if one of the layers is just a Linux box doing firewalling, routing and serving some services. One thing I like to do is mix operating systems at different layers. That way, if a worm of some kind gets into one layer, it won't penetrate to the layer behind it. For example, Internet-facing servers are Linux based, desktops are Windows based.

    Another thing I have done, when I absolutely needed a Windows-based web server, is to set up Apache as a reverse proxy, forwarding only requests for a particular subdirectory to the Windows server. This filtered out all the standard buffer overflow attacks, since none of them referred to that subdirectory name. It also made sure the requests were relatively well behaved and buffered outgoing data for the Windows box, reducing connection counts when it was under high load. This is an easy way to do an application-layer firewall, and if you are firewalling with a Linux box you can do it right on the firewall.
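    The poster used Apache's reverse-proxy support for this; purely as an illustration of the same filtering idea, here is a minimal Python sketch that forwards only one path prefix and drops everything else. The backend address and prefix are hypothetical, and a production setup would forward headers, other methods and errors far more carefully.

        # Minimal path-filtering reverse proxy (illustration; the poster used Apache).
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import Request, urlopen
        from urllib.error import HTTPError

        BACKEND = "http://10.0.0.5:8080"   # hypothetical internal Windows web server
        ALLOWED_PREFIX = "/app/"           # only this subdirectory is ever forwarded

        class FilteringProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # Requests outside the allowed subdirectory never reach the backend,
                # which is what filtered out the standard worm probes.
                if not self.path.startswith(ALLOWED_PREFIX):
                    self.send_error(404)
                    return
                try:
                    with urlopen(Request(BACKEND + self.path)) as resp:
                        body = resp.read()
                        status = resp.status
                    self.send_response(status)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
                except HTTPError as err:
                    self.send_error(err.code)

        if __name__ == "__main__":
            # Binding to port 80 needs root; any high port works for testing.
            HTTPServer(("0.0.0.0", 80), FilteringProxy).serve_forever()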
  • by rc.loco ( 172893 ) on Tuesday September 30, 2003 @02:15AM (#7091571)

    Firewalls are great at slowing down intrusions. However, without proper application security architecture and host-level security hardening, you cannot really protect a network-accessible resource. Oftentimes, the only resource (network, application, host) that we can control 100% of the time, so that it can be trusted, is the host.

    Besides, the bulk of compromise situations occur INTERNALLY. Is that PIX on your WAN router really going to stop disgruntled Gary down in QA from trying out the latest script kiddie tool his roommate hooked him up with across 5 subnets? If you spend quality time hardening your hosts, chances are you won't lose more than a few hosts at a time during a significant application-layer compromise (e.g., a remote root sendmail hole, a bug in BIND). I think we need to revive the popularity of security "tuning" on the host side - a lot of people forgo it in favor of strong network security, but I think the latter is a much more difficult perimeter to maintain.

    I've seen others post about the dangers of VPNs. I totally agree, they are conduits for information loss, but are likely to be mostly self-generated (internal). Example: Disgruntled Gary in QA sucks down the product roadmap details off the Intranet before giving his 2 weeks notice and starting to work for a competitor.

    Apologies to Garys everywhere. ;-)

  • by marbike ( 35297 ) on Tuesday September 30, 2003 @02:20AM (#7091588)
    I have been a firewall engineer for nearly four years. In that time I have come to the conclusion that there is a major trade-off between the ultimate security of a system and its usability. An example is the explosion of VoIP and video conferencing in the last two years.

    H.323, SIP, SKINNY, etc. all require many ports to be used, which is a nightmare for a firewall admin. As a result, firewalls are evolving to include support for these systems, but my fear is that the (in my opinion) overly permissive nature of firewalls which allow these connections is ripe for exploitation by future crackers/hackers.

    While I was supporting firewalls, my mantra was to close every damned thing I could and let the users suffer. But I also realize that in a modern network, usability is a major concern. Companies are deploying VoIP networks in record numbers while saving thousands of dollars each month. Companies need to reduce overhead to remain profitable, so they are looking at new technologies to help them. If the firewall industry cannot keep ahead of these technologies, it will ultimately fail.

    I think the time of using access lists to control traffic is nearing an end. This will result in slower overall performance of firewall solutions as application-level firewalling becomes mandatory, rather than the transport-layer firewalling of the past.

    I am afraid that I have no easy solutions, but I hope that the industry will be able to remain both secure *and* usable.

    Hell, perhaps in the future security will be built into operating systems and network resources, rather than the reactive nature we enjoy today.
  • by ObligatoryUserName ( 126027 ) on Tuesday September 30, 2003 @02:53AM (#7091667) Journal
    Sad to say, but in the future, the only reliable port will be 80. All clients will have all ports except 80 blocked by default (right now this seems like wishful thinking!) and no one will open any other port (it will give them a scary security warning!), and even if they wanted to, they might be blocked from doing so by their ISP.

    We're already seeing shades of this, but it hasn't reached the majority of Internet users yet. Back in the late '90s my company rolled out a product for schools that had to be retooled when it was realized that many schools were firewalling everything except port 80. (They added a mini proxy server to the product that sent everything over 80 - a sketch of that idea follows this comment.)

    I have a friend who's a sysadmin for a medium-sized insurance company - and they had all their internal applications break a couple of weeks ago when an MS worm started bouncing around the Internet. However, the problem wasn't that they were using Windows machines (I think all their servers were AIX) - the problem was that their ISP (the regional phone company) had blocked off the port that all their applications used, because it was the same port the worm used to get into systems. Last I heard, the phone company was refusing to ever re-open the port. (The phone company made the change without even informing anyone at the insurance company; everything just stopped working, and from what I heard it took them a day to figure out why their data wasn't getting through. I believe they were resigned to changing all their programs to work on a different port.)

    So, we've already come to the point where connections on other ports seem strongly subject to the winds of fate, and I see no reason the situation won't get worse. In most environments, 80 is the only port that people would notice if it were blocked, and there are too many sysadmins out there who don't know any better. Right now, if I were developing an application that needed to communicate on the Internet, I would only trust that it could use port 80, and I wouldn't even bother looking at anything else. You can even see application environments starting to spring up now (Flash Central) where it's assumed that most applications will just share a port 80 connection.

    It sure is a sub-optimal situation, but I don't know what can be done to stop the trend. Ironically, such a situation makes simple port-blocking firewalls useless because all applications will be running on port 80 anyway.
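    The "mini proxy that sent everything over 80" approach mentioned above can be as simple as wrapping the application's own messages in ordinary HTTP requests. Below is a rough sketch of the client side, with a hypothetical relay URL; a matching unwrapper would sit on the server end.

        # Client-side sketch: tunnel an application protocol over plain HTTP POSTs.
        from urllib.request import Request, urlopen

        RELAY = "http://gateway.example.com/tunnel"   # hypothetical relay listening on port 80

        def send_app_message(payload: bytes) -> bytes:
            req = Request(
                RELAY,
                data=payload,                                   # the real protocol, as the POST body
                headers={"Content-Type": "application/octet-stream"},
            )
            with urlopen(req) as resp:
                return resp.read()                              # backend's reply, unwrapped

        # The rest of the application calls send_app_message() instead of opening
        # its own socket on a port the school or ISP firewall would block.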
  • by Kaliban923 ( 712025 ) on Tuesday September 30, 2003 @02:57AM (#7091678)
    The varied answers did indicate that there is general ambivalence about the idea of allowing a machine on the internal network to advertise services, even if protected by an application-level FW (such as ISA Server protecting an Exchange server). That's good, because I thought I had missed something in the past 2 years since my last sysadmin job (I tried my own non-IT business for a while, for those who are curious).

    For those who did comment, thank you kindly. I appreciate the ideas, and just so folks better understand, this question was spurred by the fact that my current workplace has determined a need for webmail, because our VPN solution is apparently too complicated and we don't trust our users to have secure machines (I don't make those rules, I just live with them). There is one voice in my organization who wants us to open up an Exchange server that's on our internal network, because it will be protected by an ISA server, and that just seems nuts. I'd rather just place a front-end web server on our DMZ/perimeter network with IMAP access to our Exchange server (we only need email, not calendaring and other features) and use secure protocols to transmit authentication information. From this discussion I've concluded that there is no decisive answer, and that I'd rather stick with our current network security model (screened subnet) than "poke a hole" in the firewall for the Exchange server.
  • by segment ( 695309 ) <sil&politrix,org> on Tuesday September 30, 2003 @03:02AM (#7091695) Homepage Journal
    First, for employees and others who have trusted access to your network, the answer is not to poke holes in your firewall.

    While this is simple to state, how many companies will follow this rule? Companies are not going to jail their users, so the first one who wants to listen to MP3s or streaming music, up goes Real or Windows Media. What? You want to see the stock ticker from Bloomberg? Sure, now you have multicasting crap. Get real - and that's not even counting someone who knows about things like datapipe.c.

    Rather, the answer is simple, just three letters. VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network.

    You're either blind or too trusting of people. Remember, the biggest security hole often comes directly from the inside. For instance, I know someone who has a VPN through IBM for her work. Lo and behold, she wanted to take that same machine and hook DSL up to it. Say goodbye to security over the VPN.

    I won't get too deep into this since I'm tired, but a VPN isn't always the answer. The answer is actually education. Instead of spending on a Cisco PIX or a Nokia VPN box, try holding monthly meetings with employees and make them aware of issues. It doesn't have to be a full-blown Harvard presentation; even a quick PowerPoint presentation will teach them things they can carry on to their home or future place of employment. VPNs are like security through obscurity, in a way. If someone wants in, a VPN will do nothing to stop them.

  • by cheros ( 223479 ) on Tuesday September 30, 2003 @03:03AM (#7091697)
    If you want to do it right you'll always end up with a tiered model. Your basic stance should be not to trust anything or anybody, and open up from there (a bit like getting a mortgage ;-). Second stance is to always try to have two layers of defence in place instead of one (i.e. defence in depth), like NAT + proxy, just as an example. Third stance is to NEVER allow direct interaction with internal hosts. This means that inbound services (SMTP, hosting web pages) should be served from a separate interface 'between' the Net and your internal network, called a demilitarised zone or DMZ (apologies if this is old news, just trying to keep it clear). That's IMO also where VPN users come in: they can be given proxied equivalents of internal services, which keeps the network clear of oinks who have just managed to fiddle their VPN so they end up as routers between the Net and the internal network (yes, I know your policies should prevent them doing this, but see the second stance ;-). Any supplier feeds come in on the same type of facility; you could even use a separate interface for them. And last but certainly not least, describe what you're actually trying to protect, as that will give you some idea of the value lost if you end up with a breach - much easier to develop some defendable idea about budget requirements. For extra bonus points you can let senior management decide to put a value on those assets (i.e. give them enough rope ;-).

    But this is not where it ends, because you still haven't dealt with (a) inside abuse and (b) the possibility of failure. Good security design takes failure modes into account. Plan for when your defenses are somehow breached. Tripwire your firewalls and core systems and check them, lob the odd honeypot into the internal network to give you early alerts that someone is scanning the place or a virus has entered (last year I caught one very early because of a rather suspicious Apache log), and make sure you have a patch strategy with a short cycle time (this depends on your risk tolerance, but your firewalls especially will need attention). Where possible, segregate the more critical facilities so you can protect them more accurately (just consider your users hostile - don't answer the support phone for half a day if you want a more realistic version of that feeling ;-).
    Oh, and think about what platform you run your security services on. I don't prefer Unix over Windows because it's more or less safe (that's actually more complex than it appears at first glance - donning asbestos jacket ;-); I prefer Unix-based facilities because I end up with less patching downtime, as it rarely needs a complete restart. But that's just me. And READ those logs (a small log-scanning sketch follows this comment).

    Hope this helps. =C=
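    In the spirit of "READ those logs", here is a small sketch that scans an Apache access log for a few worm-era probe signatures and lists the noisiest clients. The log path and the signature list are illustrative assumptions, not a complete intrusion-detection setup.

        # Flag clients whose requests match a few well-known probe signatures.
        import re
        from collections import Counter

        ACCESS_LOG = "/var/log/apache/access.log"   # assumed location; adjust per site
        SUSPICIOUS = re.compile(r"(cmd\.exe|root\.exe|default\.ida|\.\./\.\.)", re.IGNORECASE)

        hits = Counter()
        with open(ACCESS_LOG) as log:
            for line in log:
                if SUSPICIOUS.search(line):
                    client = line.split(" ", 1)[0]   # first field of the common log format
                    hits[client] += 1

        for client, count in hits.most_common(10):
            print(f"{client}: {count} suspicious requests")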
  • by harikiri ( 211017 ) on Tuesday September 30, 2003 @03:33AM (#7091797)
    We are a big Checkpoint shop (stateful inspection firewall). With regard to which is better, the issue seems to be more:
    1. What is the industry standard?
    2. What can we get support for locally?

    Application firewalls have really done poorly here in Australia. I speak from experience - used to be a security 'engineer' (read, install Gauntlet), and have since moved on to network security administration.

    The main vendors I've seen in the marketplace are (or were) Gauntlet, Sidewinder, and Cyberguard.

    NAI dropped the ball with Gauntlet both here and abroad. The technology behind it is excellent, but the support really, really sucked. In addition, the administration was performed via a highly unintuitive Java-based application that everyone I knew *hated* to use. You often ended up simply going back to the command line to configure the beasts.

    Sidewinder I have no formal experience with, but I have heard good reviews. Secure Computing's presence in Australia was limited to international firms that required its use. There was no "storefront" for quite some time.

    Cyberguard I have seen at a handful of places, mainly banks (and apparently also at various .gov.au sites).

    All of these are technically good products. But due to their lack of popularity and market presence, they don't get used.

    So it's a glorified packet filter I go to add a rule to now.. ;-)

  • by Ckwop ( 707653 ) on Tuesday September 30, 2003 @03:57AM (#7091858) Homepage

    You can trust your employees [bbc.co.uk]?

    Don't ever believe that your employees won't attack you. Some will attack you by accident (bringing infected machines into the office or something); some will even attack you out of spite.

    You should only give trust to entities you have to trust in order to get the job done. You have to trust (some of) your servers or IT staff, but you shouldn't have to trust most of the internal network.

    Where possible, you should treat your network machines the same way you'd treat an Internet machine. Obviously, you're going to have to give your network machines more access than an Internet machine, but treat them with the same suspicion.

    Simon

  • by rainer_d ( 115765 ) * on Tuesday September 30, 2003 @05:01AM (#7092005) Homepage
    One thing that I need to consider at my current job is that you can NOT trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors, and need to be shielded off, so a simple VPN solution is no solution.

    We've solved the most immediate problem by allowing only ssh, and giving employees with Windows a copy of WinSCP (an excellent, two-pane, Windows-FTP-client-lookalike front-end to scp), which they have had no problems using (they did not have any opportunity to work from home before, so they don't complain :).

    Do they have a shell at the back end? Do you allow port forwarding? Once you allow SSH inbound or outbound, all security is basically gone.
    SSH allows port forwarding, even in reverse (i.e. you can run SSH sessions into the company by contacting an outside server and connecting back over that very SSH connection) - a sketch follows this comment.

    At my former employer, once they opened up port 443 outbound for the VPN clients, I just moved sshd to port 443 on my DSL box at home and could start and watch edonkey downloads from work via ssh+vnc.
    I knew what I was doing and I didn't over-abuse it, but the potential for a security nightmare is there.
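    To make that "backward" forwarding concrete, here is a sketch of what a user on the inside can do once any outbound connection is allowed. The hostnames are hypothetical; -p, -N and -R are standard OpenSSH options.

        # Sketch of the reverse-forwarding risk described above (hypothetical hosts).
        import subprocess

        subprocess.run([
            "ssh",
            "-p", "443",                  # outbound 443 is frequently left open
            "-N",                         # no remote command; forwarding only
            "-R", "2222:localhost:22",    # outside box's port 2222 -> this desktop's sshd
            "user@home.example.org",      # a machine the user controls on the outside
        ])
        # Afterwards, a login on home.example.org can "ssh -p 2222 localhost" and land
        # on the desktop inside the company, whatever the perimeter allows inbound
        # (and with sshd's GatewayPorts enabled, it could be reached from anywhere).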

  • by gbjbaanb ( 229885 ) on Tuesday September 30, 2003 @05:13AM (#7092034)
    First off, remember - you won't be able to think of everything.

    Thank you - you reminded me of the number one rule of security planning. All over /., everyone is going on about VPNs, SSH, etc. - all technological solutions - and forgetting the real situation.

    Security is all about risk planning. There is no way you can plug all the holes, restrict all the access properly, and manage all the resources. So the question becomes not 'how do I stop it?' but 'what will I do when it goes tits up?'. Also, as someone has undoubtedly said, the only perfect security is a concrete box sunk to the bottom of the ocean. Well, yes... but you always have to trade off security for usability. What's the point of being networked if no one can access their files? People can access their files: dangerous security hole!

    You see - it's OK having all the security products in place and setting them up perfectly, but then an employee logs on to the database and walks away with a backup of all the credit cards...

    ...and employee #2 gets a new toy, a wireless LAN thing, and a passing hacker (there's always one) doesn't even have to break a sweat listing off those same credit card numbers.

    Think *all* your employees are trustworthy (haha)? Well, what happens if someone walks into your offices (for a meeting, for instance), surreptitiously plugs a wireless laptop into a network port, tucks it under a chair and walks off? It doesn't even have to be a spare port - they can plug in a little hub.

    You might as well ignore the technological security measures; sure, you'd get hacked more often, but that just means you'd have to do a lot more work recovering the system. Having the security products in place does *not* mean you'll never have to perform that recovery process, so you still need a recovery plan.

    So: accept that it may go wrong at any time, and figure out what you'd do when it happens. You should also have a disaster recovery plan - for when the server room floods and is hit by lightning, or two hard discs go pop at the same time.

    Security - all about how much risk you'll accept, little to do with securing systems.
  • Re:Are you NUTS?! (Score:3, Interesting)

    by Courageous ( 228506 ) on Tuesday September 30, 2003 @06:01AM (#7092177)
    They're useless! Any competent hacker knows that there are hundreds (thousands?) of ways to get around being caught by an IDS.

    Knowledge that LIDS is present on a system being accessed - indeed, simply being able to determine that LIDS is present - will send even the best hackers fleeing the moment they discover it. Anything built around a MAC (Mandatory Access Control) file system is bad mojo. You'd have to be working for a first-world intelligence agency to even dream of sticking around.

    C//
  • by Anonymous Coward on Tuesday September 30, 2003 @07:20AM (#7092423)
    It's simple: apply the VPN to protect your data between you and your remote client, then absolutely firewall your remote client to protect your internal network.

    Rather than monitoring firewalls/antivirus software on remote machines, simply firewall all but essential connections.

    We use Citrix, so we have one port open to our internal network. I've not seen a network-aware virus that can spread via SSL (only SSL logins are allowed), let alone log in to a remote RDP/ICA server (similar to only allowing ssh for Unix, but considerably more complicated due to the GUI in Citrix-based applications).

    On the plus side, you can check your firewall logs to see if a client is indeed attempting to make all sorts of nasty connections.

    This saves us time (money) because the remote client, even if it has a bunch of viruses, is not a threat to our internal network.
  • by booch ( 4157 ) <slashdot2010NO@SPAMcraigbuchek.com> on Tuesday September 30, 2003 @08:40PM (#7099653) Homepage
    You can tunnel (and back-tunnel) any protocol through any other. Which kind of leads back to the original question: yes, you do need to be looking inside the packets. But there will always be ways around/through. People have tunneled through ICMP pings and DNS lookups, and GoToMyPC even reverse-tunnels a VNC/PCAnywhere-type application through HTTP. Those are all payloads (layer 7). Inspecting at layer 3 or 4 (IP/TCP) doesn't help, and even application-layer proxies (actually closer to layer 6) aren't likely to detect most of these tunnels.

    Saying that allowing SSH in eliminates all security is missing the forest for the trees. What would you suggest for file transfers - FTP? How about command sessions - Telnet? Security is not about making things bullet-proof; it's about mitigating exposure to risks. But I agree: if you're allowing SSH inbound, you should turn off port-forwarding and shell access (a sketch of checking for that follows this comment). It'd be best to put the system in your DMZ as well. And if possible, use an SSH application-layer proxy, although I'm not sure how feasible that is.
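    A rough sketch of auditing an sshd_config for that lockdown - SSH allowed in, but TCP forwarding, X11 forwarding and tunnelling switched off. The directive names are standard OpenSSH; the wanted values reflect the suggestion above, and restricting shell access (e.g. via a forced command) would still have to be handled separately.

        # Check a few sshd_config directives against the suggested lockdown.
        WANTED = {
            "allowtcpforwarding": "no",
            "x11forwarding": "no",
            "permittunnel": "no",
        }

        def audit(path="/etc/ssh/sshd_config"):
            seen = {}
            with open(path) as cfg:
                for line in cfg:
                    line = line.split("#", 1)[0].strip()   # drop comments and blank lines
                    if not line:
                        continue
                    parts = line.split(None, 1)
                    if len(parts) == 2:
                        seen[parts[0].lower()] = parts[1].strip().lower()
            for key, want in WANTED.items():
                have = seen.get(key, "<unset - check the compiled-in default>")
                flag = "OK   " if have == want else "CHECK"
                print(f"{flag} {key} = {have} (want {want})")

        if __name__ == "__main__":
            audit()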
