Security

Web Services - More Secure or Less? 313

visibleman asks: "I have recently moved onto a project which is based around web services and SOAP and have, therefore, been doing some reading on those subjects. One thing which keeps coming up is that web services are claimed to be more secure than CORBA and RMI because it means drilling fewer holes through firewalls. If I were a firewall administrator (I am not, I am a developer) I would want to know that if I open up a port (port 80 for instance) I know what kind of requests are coming through it. Since SOAP is essentially a mechanism for sending functional requests over a port specified for web page requests, this would make me nervous. My preference would be that requests for web pages go over one port and requests to run services go over another - favouring an IIOP solution. Am I off my trolley or would other Slashdotters have similar fears?"
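
For readers who haven't seen one, a SOAP call really is just an ordinary HTTP POST carrying an XML envelope, which is exactly why a port-level filter can't tell it from a page fetch. A rough sketch (the host, namespace and getStockQuote service below are invented for illustration):

    # Minimal sketch: a SOAP "function call" riding an ordinary HTTP POST on port 80.
    # The service name, namespace and host are hypothetical.
    import http.client

    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <getStockQuote xmlns="urn:example-quotes">
          <symbol>MSFT</symbol>
        </getStockQuote>
      </soap:Body>
    </soap:Envelope>"""

    conn = http.client.HTTPConnection("soap.example.com", 80)
    conn.request("POST", "/services/quotes", body=envelope,
                 headers={"Content-Type": "text/xml; charset=utf-8",
                          "SOAPAction": "urn:example-quotes#getStockQuote"})
    response = conn.getresponse()
    print(response.status, response.read())   # the reply is just another XML document

To a firewall filtering purely on port numbers, nothing distinguishes this from a browser submitting a form.
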
  • by mosch ( 204 ) on Friday November 16, 2001 @11:29AM (#2574566) Homepage
    It's a new trend, run everything on port 80 so your network admin has less to worry about, but that whole concept is a steaming pile of shit.

    The security or insecurity of a service has nothing to do with whether or not the request can be brokered by a webserver. All this really accomplishes is setting up the webserver as a massive single point of failure, and making it harder to audit what services a particular box is running.

    When you use the paradigm that each service has an associated port, you can be sure that nobody is running any unknown services merely by blocking ports. When everything is on port 80, the firewall becomes much less useful.

    • by xyzzy ( 10685 ) on Friday November 16, 2001 @11:34AM (#2574601) Homepage
      Exactly, there has been much gnashing of teeth on the xml-dist-app list about this (a SOAP standardization list).

      Although SOAP is bound to HTTP, there is no requirement that you use port 80 -- it's just a well-known HTTP port. As long as the people who need to use your service agree to it, you can use port 12345 if you want. If you are really paranoid, you should be running HTTP over something more secure, like a VPN between you and the service requestor, and not the public (great unwashed) internet.
    • As the original poster said as well, this means that now you don't know that port 80 traffic is web traffic anymore and application-specific proxies are required more and more often instead of simple port openings.
    • The real reason HTTP and port 80 is seen as neat is that it is probably already open, so you don't have to deal with that mean old network admin who just wants to spoil your fun.

      You don't have to answer difficult questions about how your service is secured, how it might be exploited to reach other resources within the firewall, etc. You ride the coattails of the "harmless" web server traffic.
      • by ostiguy ( 63618 ) on Friday November 16, 2001 @11:56AM (#2574727)
        I am a mean old network admin for a software consultancy company. I can therefore understand mean old network admins.

        The problem with big companies who give us big bucks to develop web apps is that the firewall/security teams are totally unresponsive to requests from development teams. A lot of firewall teams act as if nothing is ever up for discussion, and 80 and 443 are all that will ever be. System security would be a lot stronger if the security teams worked along with development teams, but instead a ton of security teams have a fortress mentality, for both system security and their own interactions - locking themselves away from contact. As a result, everything and anything will eventually be pushed through 80 and 443.

        ostiguy
        • So you are willing to demolish the TCP/IP protocol to spite a few paranoid admins.

          If you are developing an application that Company X needs to use and it has to communicate through the firewall, a simple phone call to PA's manager would solve the problem. But the real reason you want to run SOAP on port 80 is because you don't have a legitimate reason for your application to be running inside the corporate firewall. By legitimate, I mean in the minds of management. No, a "harmless" "e-commerce" "web-app" for music/pr0n/gambling/shopping/whatever is not likely to be considered a legitimate reason.

          Any admin would much rather have a program using port 12345, because then, if there is an exploit or malevolent user, it can be blocked at the packet level, instead of putting together every response and parsing the entire message using an infinite symbol list and AI that can detect the potential threat in any file sent through the only port on the only address on the only host in the network, if you can call it that anymore.
          • Yes, I, a single netadmin, am well on my way to destroying TCP/IP.

            SOAP is being pushed as an alternative to EDI, Corba, etc, etc (this isn't my area, remember, I am the netadmin trying to destroy tcp/ip). This is because firewall/security teams are not interested in working with (their company's) vendors to establish IPSec tunnels, or SSL tunnels for various apps. Instead of quicker binary transfer within a ssl or ipsec tunnel, stuff will be kludged into https, lest the firewall team's sensibilities be offended.

            There will be a huge market for near (as near as one can get) wirespeed HTTP proxies soon as a result. Pretty soon someone will build some hack with some beta of .net that is vulnerable, and all hell will break loose as HTTP becomes the big threat (as seen on the front of Infoweek/world/land, etc). A big market will result as companies throw proxies in front of their webservers, and in front of their end users internally, to protect against this self-generated menace.

            ostiguy
        • Same goes for third-party web-based applications and services. It is VERY difficult to convince an IT group to open new ports - even if they are for established, standardized protocols.

          Running on ports other than 80 is frequently a deal-breaker when trying to sell network applications into highly security-conscious environments. Most network admins equate more open ports with less security, whether it is justified or not. HTTP is something they know and understand and already have their network set up to support - SOAP just makes sense.

      • So, you're saying the security 'problem' has more to do with the people-ware than the software.

        Security people win if absolutely nothing happens. Greater traffic == greater tendency for things to go awry.

        Management, if it can be awakened, needs to step in and restore balance between security and operational concerns.
      • There's always the honor system...

        oooooooooooooooooooooooooooooo
        o __________Notice _________ o
        o If you are a cracker or o
        o terrorist please use port o
        o port 80 as it is secure. o
        o Otherwise you may use the o
        o non-secure port 2000. o
        o Thanks and have a nice day o
        oooooooooooooooooooooooooooooo

    • by mikeee ( 137160 ) on Friday November 16, 2001 @11:51AM (#2574688)
      When you use the paradigm that each service has an associated port, you can be sure that nobody is running any unknown services merely by blocking ports

      Ah ha! Pronoun trouble!

      Unfortunately, you can only be sure that nobody is running any unknown services if they use the paradigm that each service has an associated port. Some fool rigs up a VPN layered over HTTPS or DNS, and what good is your firewall then?

      In some ways, SOAP's obvious security problem is better; at least it's clear you're screwed.

      Solving this correctly is a very hard problem.
    • by KyleCordes ( 10679 ) on Friday November 16, 2001 @11:57AM (#2574734) Homepage
      It's not about security, it's about the administrative effort of getting a firewall configuration change made in a large organization. In short, it's really hard to do.

      Here's a purposely oversimplified and perhaps harsh explanation:

      Simplistic firewall administrators don't care what you send over the network, as long as it's on port 80. More sophisticated administrators also insist that it's HTTP. Even more sophisticated ones inspect in more detail, such as checking to see if files transferred have viruses in them.

      It's only a matter of time before firewall administrators notice that SOAP requests are really RPC calls, and block them by default - then we will all be back to having to get specific configuration changes done to let apps work over the firewall. We won't want to do that for the same reasons we don't want to try to convince someone that it's OK to open a port.

      I predict that there will therefore be a way developed to wrap SOAP not only in HTTP, but in HTML. The XML SOAP request could sit inside of simple HTML wrapper tags; this would let it go through the likely block-by-default of SOAP traffic.
      • It's not about security, it's about the administrative effort of getting a firewall configuration change made in a large organization.

        You mean it's all about avoiding security. Security policies are getting in your way, so you're finding a way to quietly violate their spirit while conforming to their letter. That's a great way to completely compromise the security of your company's IT infrastructure. Good job.

    • by melquiades ( 314628 ) on Friday November 16, 2001 @11:58AM (#2574741) Homepage
      It's a new trend, run everything on port 80 so your network admin has less to worry about, but that whole concept is a steaming pile of shit.

      So true.

      It's taken many years to build up the many layers of network security we have. One of the main reasons SOAP is so easy to use is that it drills a hole right through all those layers. In other words, SOAP is easy because it encourages you to ignore everything that makes remote applications hard -- like security.

      As an example of just how wacky the everything-on-port-80 idea is, and how dangerous, consider this idea I heard from Bruce Schneier: implement IP over SOAP: have a SOAP service listening at two endpoints for IP packets, and forward those packets over SOAP to the other endpoint. Then make one of those endpoints the default gateway for packets into the otherwise-secure network at the other end....

      Just ponder that.
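
      To make the thought experiment concrete, here is a deliberately toy sketch of the encapsulation half (the host, URL and service name are invented; actually re-injecting packets at the far end would need a TUN device and root, which is beside the point):

        # Toy sketch of the "IP over SOAP" idea: wrap an arbitrary packet in a
        # SOAP envelope and POST it through port 80. Hostnames are made up.
        import base64, http.client

        def soap_wrap(packet):
            return ("<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>"
                    "<soap:Body><forwardPacket xmlns='urn:example-tunnel'><data>"
                    + base64.b64encode(packet).decode()
                    + "</data></forwardPacket></soap:Body></soap:Envelope>")

        def send_packet(packet):
            conn = http.client.HTTPConnection("tunnel.example.com", 80)
            conn.request("POST", "/soap/tunnel", body=soap_wrap(packet),
                         headers={"Content-Type": "text/xml"})
            conn.getresponse()   # far end unwraps and forwards toward its default gateway

        # To a firewall doing port filtering, every one of these is "just web traffic".
        send_packet(b"\x45\x00...")   # any datagram at all
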
      • Actually, there is already an implementation of something similar to this in httptunnel. I forget the web address of it, but you can find it via freshmeat.net. Basically, it tunnels either IP socket data over HTTP, or, as an even MORE evil thing, raw ethernet data.. 8-O
      • One problem with firewalls (especially packet filters) is that it's hard to know exactly what data is flowing through. You can really tunnel any protocol over any other - you just need to know how to encapsulate and decapsulate it. Distinguishing whether data is regular data or encapsulated data of another type is hard to do. So I suspect that security people are going to have a hard time, unless we can convince the developers that they *need* firewalls and to stop tunneling holes through.
      • SOAP does not drill any holes that POST would not have.

        And you are most likely going to find that SOAP engines perform a lot more error checking and parsing of an XML message before handing it to you than your average cgi-bin script that gets the POST request does.
    • I don't know about you, but this thing seems much more like-- Firewall Enhancement Protocol [ietf.org]. The writers of this rfc seem to think that this is the best thing for the internet since OSPF....

      Seriously-- allowing ANY sort of RPC through a firewall has some serious risks.
    • I was contracting at company X a few years ago. I needed to use some piece of software for my project, and the demo site for said software ran on port 8081.

      Which was blocked by the company firewall.

      So I go ask the admin why it's blocked. I mean, WTF, blocking random _incoming_ ports I can understand, but outgoing ports? When there's already port 80 and 21 wide open? Not to mention DNS[1].

      "We don't open it because it's more secure that way", he said. "But it's not more secure!" "YES it is!" "Why?" ".... BECAUSE!"

      "Ok but look I could just make an SSH tunnel on port 80 with pppd and I can bypass all that stuff ..."

      He replied: "well if you do that, you'll get in a LOT of trouble, and the company is going to sue you! And I warn you because I have logs of everything!"

      Now what's really interesting is that I had been running this particular setup (pppd through ssh on port 80) for a couple months already, and nobody noticed.

      [1] You think you've secured everything and that no info can get through your highly secure firewall? But have you thought about the DNS?

      $ host the.root.password.is.iluvmom.crackersite.net
  • by Omnifarious ( 11933 ) <eric-slash@nOSpam.omnifarious.org> on Friday November 16, 2001 @11:30AM (#2574572) Homepage Journal

    I don't think it matters which you use. Allowing people to make functional requests to programs inside your firewall is just as much of a security risk either way. I actually think [omnifarious.org] the function call model is an evil, misleading, broken way of thinking about messages over networks, but like several other practices, people seem bound and determined that this is the way to do things. If you must do this evil thing, it probably doesn't matter (from a security standpoint) how you do it.

    The only thing you really gain by not going through port 80 is that the attacker theoretically won't be able to break into your web server software by breaking into your RPC software, but I wouldn't count on that being the case. Besides, either way they've gotten onto your box; does it really matter how?

    Holes in firewalls aren't intrinsically bad things. It's what they lead to that's the problem.

    • The document you reference has no explanation about why the function call model is evil, misleading, or broken. All you do is put forward a short argument that it more tightly couples endpoints than exchanging XML documents and that it is lower performance than dumping raw memory.

      That is hardly reason for calling it evil.

      To call it misleading you would need to provide an argument for why the function call paradigm makes sense when both program components are on the local machine but not when they are distributed. Why should that make a difference? Why should I care where the other end of my transaction is located?

      It seems your arguments have more to do with current implementations than any morality inherent in the function call paradigm. I notice that your own alternative is written in C++, which uses the function call paradigm rather than the obviously more efficient, correct, and aesthetically pleasing message passing paradigm.
      • The document you reference has no explanation about why the function call model is evil, misleading, or broken. All you do is put forward a short argument that it more tightly couples endpoints than exchanging XML documents and that it is lower performance than dumping raw memory.

        I also complain a lot about latency, and I should complain about the lack of a shared address space, which leads to even more latency. That's the biggest issue. The function call model encourages you to ignore unavoidable latency in network messages. It's a fine model for things within the same process, but latency makes it misleading and evil for network messages.

        To call it misleading you would need to provide an argument for why the function call paradigm makes sense when both program components are on the local machine but not when they are distributed. Why should that make a difference? Why should I care where the other end of my transaction be located?

        Because, when it's located far away, you have several issues to contend with. First, a function call normally has a latency in the nanosecond range. A network message over the Internet normally has a latency in the millisecond range. Even LAN messages have a latency in the 100s-of-microseconds range. That's at least 4-5 orders of magnitude (base 10) difference. That's gigantically huge. Yet, it sits there in your program looking like an innocent function call that you'd normally expect to exact an overhead of a few nanoseconds.

        The other problem is that you don't have very much control over the state of the other program, especially if someone else wrote it. You're essentially introducing close state dependencies between programs that are largely unrelated, and several milliseconds apart. There is nothing _inherent_ about the function call model that forces you to introduce these dependencies, but everything that everybody knows about programming causes you to make certain assumptions about function calls that just aren't true for network messages, especially between largely unrelated programs.

        RPC is evil because it hides those essential differences from you.
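
        A rough way to see the gap is to time a local call against a socket round trip; even loopback, with no real network in the way, is already orders of magnitude slower, and a WAN adds milliseconds on top. A small sketch (numbers will vary by machine; each "remote" iteration also pays TCP connection setup, which real RPC stacks amortize):

          # Compare a local function call with a loopback socket round trip.
          import socket, socketserver, threading, time

          def local_add(a, b):
              return a + b

          class Echo(socketserver.BaseRequestHandler):
              def handle(self):
                  self.request.sendall(self.request.recv(64))

          server = socketserver.TCPServer(("127.0.0.1", 0), Echo)   # port 0: pick any free port
          threading.Thread(target=server.serve_forever, daemon=True).start()

          t0 = time.perf_counter()
          for _ in range(100000):
              local_add(2, 2)
          local_ns = (time.perf_counter() - t0) / 100000 * 1e9

          t0 = time.perf_counter()
          for _ in range(100):
              s = socket.create_connection(server.server_address)
              s.sendall(b"ping")
              s.recv(64)
              s.close()
          remote_us = (time.perf_counter() - t0) / 100 * 1e6

          print("local call: ~%.0f ns each; loopback round trip: ~%.0f us each" % (local_ns, remote_us))
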

        • If I make a function call that's because I want it to be synchronous (well, or because I'm too lazy or the language I'm using makes it far too hard to do asynchronous properly). I expect it to take however long it takes. Some of that time will be function call overhead. Some will be disk overhead. Some will be processing overhead. Some will be network overhead. All of those things are always there. I think you're making too much of an issue out of the increased function call overhead when there are plenty of other reasons a function might take many milliseconds to respond.

          If I move my program from a flash device to an old MFM drive the drive latency would increase substantially but I don't think that would be reason for calling all synchronous disk I/O broken, evil, and misleading.
        • I find it amusing that what you call "evil" is the reason why RPC and Corba/IIOP are the way they are. They're hiding the fact that the method call you make might be in-process or might be on a remote machine.

          "The other problem is that you don't have very much control over the state of the other program, especially if someone else wrote it."

          And with SOAP and XML, you have total control over the remote program? Having "loosely coupled" "general" SOAP messages won't solve incorrect implementations of the remote service.
          • I don't like SOAP either. When I talked about XML in my 'paper', I meant XML documents that aren't designed to encapsulate a function call, but merely carry information.

            • XML documents that aren't designed to encapsulate a function call, but merely carry information.

              in SOAP, XML is used to encapsulate function call ids and arguments - it's just data

              • Does a call that results in a bunch of SOAP formatted XML being put on the wire look like a function call or not? Does that function call have arguments that mirror the data stuck on the wire or not? If the answer is yes, then SOAP is designed to encapsulate a function call, and is evil.

        • Mod Omnifarious' post up, he's got a seriously valid and important point here.
        • Anybody who uses SOAP requests for doing small and frequent function calls over the net is foolish and deserves to suffer the consequences he will face for his foolishness.

          On the other hand, however, those who use SOAP calls correctly will reap the benefits SOAP has to offer.

          An example:

          I work for a mortgage company. We had an online rate search engine (till the market went sour anyway). One of our clients ran a few XML queries to our engine (they were located in California, we are in Chicago). Here are the two most important queries.

          1. The Loan Search: they would ask a few questions, and then do a Loan Search with one XML query and cache the results.

          2. Apply for a loan: they would acquire all information related to applying for a loan, and then send one XML query at the end containing all that information.

          Had SOAP and WebServices been available at the time to use in place of custom built XML handling methods (not to even mention production ready XML libraries) our jobs would have been a lot easier.

          Had we used SOAP to handle every request (including sending every individual piece of information separately, or sending an XML document back and forth for every step of the loan application process) our application would have been a hideous failure. Instead, it worked just fine. The biggest problems were that my XML parsing techniques were lackluster (this was the first work with XML I had done, so it was a little slow) and the market just wasn't there for online mortgaging (to this day, only about half a percent of people who get mortgages actually buy their mortgages entirely online).

          SOAP has its uses, but like any good technology, people can abuse it. As always, time will separate the good programmers from the bad programmers. Therefore, my conclusion is that SOAP is not inherently evil; rather, a lot of crappy programmers are! :)

          Bryan
          • Anybody who uses SOAP requests for doing small and frequent function calls over the net is foolish and deserves to suffer the consequences he will face for his foolishness.

            You can't make blanket statements like that. If I am writing an ecommerce app that interfaces with a FedEx web service to get prices for shipping to various destinations for certain services at a certain weight, and I make a call every time a customer wants to know how much shipping would cost, I can get that aspect of the system up and running in less than half the time it would take to develop the in-house data structures, load them, and continue to load them daily (or maybe several times a day, depending on how often the data changes). Yes, I know that last sentence was a run-on, but you get the point. If time-to-market is a high priority (and it should be), then frequent small calls to internet web services could be the answer to a lot of problems.
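
            Something like the following sketch, for instance (the carrier endpoint, method and fields are invented for illustration); a small cache at least keeps repeat lookups from paying the network latency twice:

              # Hedged sketch of a "small, frequent" rate-quote call with a trivial cache.
              import http.client

              _rate_cache = {}

              def shipping_rate(dest_zip, weight_lbs):
                  key = (dest_zip, weight_lbs)
                  if key in _rate_cache:
                      return _rate_cache[key]
                  body = ("<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>"
                          "<soap:Body><getRate xmlns='urn:example-shipping'>"
                          "<zip>%s</zip><weight>%s</weight>"
                          "</getRate></soap:Body></soap:Envelope>" % (dest_zip, weight_lbs))
                  conn = http.client.HTTPSConnection("rates.example-carrier.com")
                  conn.request("POST", "/soap/rates", body=body,
                               headers={"Content-Type": "text/xml"})
                  _rate_cache[key] = conn.getresponse().read()   # raw XML reply, parse as needed
                  return _rate_cache[key]
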
        • You write a lot of good reasons to not use RPC calls like normal function calls.

          How fortunate that I don't know anyone who does.

          Just because there's a performance difference between RPC and LPC doesn't mean that we HAVE to have a different interface for the two. In fact, it's downright stupid to deny yourself the simplicity of function-like RPC calling when it's what the situation calls for.

          If a program in question has other needs, then do something else.

          You are making a mountain out of a molehill. Show me a program that has the problems you are so worried about. I don't see one. I see programmers using RPC and other methods correctly for the situation.

          For a change.
          • I don't know of any programs outside of a proprietary environment that do that. I know of some older research grade programs that do that. I haven't investigated GNOME carefully.

            I think that the simplicity is as false as the simplicity of starting up a new thread for every connection when you have to talk to 5000 people at once.

            • There you go again, proving something wrong by pulling an example of a dumb thing you can do with a technology. Who the hell starts up 5000 threads on a routine basis? If that's happening, you're probably out of the tested and claimed domain of the program.

              You seem to want programs to prevent you from using them wrongly. Can't happen. Mathematically provable for sufficiently precise definitions of "wrong". If you insist on living in that world, expect to pay the usual price for a disconnection from reality, whereas the rest of us will continue to have tools that allow us to do dumb things like open 15000 GUI windows, but somehow, we manage to avoid it, without people like you holding our hands.

              Except on porn sites & typo sites.
              • Umm, I've seen plenty of programs that have a threading model just as I described. If you listened to people 2-3 years ago, that idea was all the rage.

                I don't want things that prevent you from doing stupid stuff. I just want things that don't encourage you to.

    • Actually I'm more concerned about somebody executing illegal RPC requests because of a flaw in the web server. A lot of this bullshit tends to be implemented on IIS after all.
  • by -brazil- ( 111867 ) on Friday November 16, 2001 @11:31AM (#2574578) Homepage
    Off the trolley, I'd say. It's a fundamental and unavoidable weakness of packet firewalls that they filter ports, not services. It's completely naive to believe that port 80 will always be harmless HTTP traffic. ANYTHING can run on port 80, and there's nothing you can do against it unless you have absolute control over all machines behind the firewall.
    • There are firewalls which filter both ports & services at the same time, those using application layer proxies. If you configure one of those firewalls to run HTTP on a port, and instead send SMTP messages to that port, then the firewall will block those SMTP messages.
    • Off the trolley, I'd say. It's a fundamental and unavoidable weakness of packet firewalls that they filter ports, not services. It's completely naive to believe that port 80 will always be harmless HTTP traffic. ANYTHING can run on port 80, and there's nothing you can do against it unless you have absolute control over all machines behind the firewall.

      Hmmm.... Not only can XML-RPC and SOAP also run on port 80, but that HTTP traffic can be mighty harmful... Thinking of Nimda, Code Red, and CRII.... The problem is that any "protocol" is fundamentally used to exchange instructions and these instructions can be used for all sorts of stuff.... So filter based on services, but please keep the services in your DMZ ;)

      Basically, this means--

      filter based on IP address and port number. Only allow those things to pass through the firewall that you absolutely need (possible exception of outgoing TCP connections, at your discretion) and keep it all inventoried.
  • by BlackSol ( 26036 ) on Friday November 16, 2001 @11:32AM (#2574588)
    I agree with you: the separation of ports is more secure, because you need to do less filtering to monitor the incoming requests. However, this assumes a competent administrator setting up the firewall, and that your code is secure.

    Forcing requests to utilize web services is an easier security model. Only a single port needs monitoring, and DDoS, proper request structure, overflows and the like are handled by the web server, thus abstracted from your application layer and upgradable with less effect on your development. Also it's assumed you are using a professional-level web server (Apache, Iplanet, NES, or even IIS), meaning a greater user base, resulting in problems getting found quicker and fixed faster.
  • by digital_freedom ( 453387 ) on Friday November 16, 2001 @11:33AM (#2574593)
    I totally agree with the idea that separate services receive separate ports. This makes a lot of sense for security, in that you can track exactly what SOAP requests are being made to your servers, and it allows you to shut them off if necessary. Going over Port 80 makes it virtually impossible for a company to disable a SOAP service from the firewall without expensive packet inspection at the firewall. The drawback that I can see with not going over port 80 is trying to get the Networking group to punch a hole in the firewall for that port. A separate port also makes things more secure in that if you want to use SOAP internally to your network, you don't allow other people to easily send SOAP requests from the external network. We use CORBA at my company and we don't open the ports to the open internet, but we do keep them open on internal firewalls. If hackers knew that we had CORBA servers, they could inspect what services we had and possibly do malicious harm.

    Separate but equal is what I say.
  • Having software that talks on a specific port is not too hard to deal with -- port 80, 8080, 1234123, whatever...

    I've worked with stuff that required a range of ports (like thousands of them), which is what makes your IP people freak. Far more common than one would think.
  • SOAP (Score:5, Interesting)

    by Jon Peterson ( 1443 ) <jon@@@snowdrift...org> on Friday November 16, 2001 @11:34AM (#2574599) Homepage
    Hi,

    SOAP is transport independent. That's one of its (theoretical) virtues. You can implement SOAP over SMTP, HTTP, whatever.

    Practically, it does seem fair to say that HTTP is what an awful lot of SOAP tools are going to be expecting, and given that SOAP is still quite bleeding edge, I wouldn't want to try using another transport protocol unless I could afford time and skill to do a lot of fixing up.

    However, HTTP doesn't have to run on port 80. Furthermore, most SOAP implementations will be (well, claim to be) happy on HTTPS too, so that's an easy way to do encryption.

    As for the 'web page vs functional' thing, well that's not so simple. A request for a page produced by a CGI script is a functional request coming from strangers over the web. SOAP need not be different.

    At the moment, if I want to make an XML version of my content available to folks, I might tell them to use HTTP GET with a URL that invokes a CGI program that returns some XML.

    In the future, I might want to make the same XML available via the getXML method of my Website class, and then SOAP-enable my Website class.

    The difference isn't that great.
  • SOAP security (Score:2, Interesting)

    We are currently using SOAP-like mechanisms, and there are a number of security precautions that can be implemented that in my opinion balance the threat of accepting such messages.

    Possibly the most secure precaution is using SSL for the requests. You can require a client certificate to access the service, and your site certificate will reassure your partners that they have connected to the correct server. In addition, you can build custom username/password fields into the app, or have each message PGP signed.

    Another option is to move your application to a different IP address and use the firewall to restrict access to it. This method is good if your partners are known ahead of time.

    Hope this helps.
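
    A minimal sketch of the client-certificate precaution described above (certificate file names and the port are placeholders); callers who cannot present a cert issued by your partner CA never get past the TLS handshake:

      # HTTPS endpoint that requires a client certificate signed by a CA you control.
      import http.server, ssl

      class SoapHandler(http.server.BaseHTTPRequestHandler):
          def do_POST(self):
              body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
              # ... hand the SOAP envelope in `body` to your dispatcher here ...
              self.send_response(200)
              self.end_headers()

      httpd = http.server.HTTPServer(("0.0.0.0", 8443), SoapHandler)
      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      ctx.load_cert_chain("server.pem", "server.key")   # proves your identity to partners
      ctx.load_verify_locations("partner-ca.pem")       # CA that issued your partners' client certs
      ctx.verify_mode = ssl.CERT_REQUIRED               # no recognized client cert, no connection
      httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
      httpd.serve_forever()
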
  • It shouldn't matter what ports you open up on your firewall - what you are interested in is what will be receiving these requests.

    We've all seen that access to port 80 can cause problems with incorrectly configured IIS machines anyway.

    Basically, as a person responsible for security and firewall configuration, you don't just enable access on a port because someone asks for it - you check out what is going to be used, make a decision AND give warnings to those involved.
  • In theory ... (Score:5, Informative)

    by King Of Chat ( 469438 ) <fecking_address@hotmail.com> on Friday November 16, 2001 @11:42AM (#2574643) Homepage Journal
    It can be really quite secure because:

    You can use any port you choose. A bit "security through obscurity" this one, but no harm there.

    You don't really need a full web server. All you're going to get is an HTTP request with a SOAP envelope thingy inside. If it doesn't match the WSDL (or whatever) schema thingy you've published, then just ignore it. You only need give the information to people who are going to be legitimately calling your service. Of course you're still vulnerable to normal DoS, but then isn't everyone.

    It is quite possible to digitally sign SOAP requests. Just ignore anything not signed/not signed by a recognized customer.

    If you are only expecting SOAP requests from a few other servers, then consider client-side SSL. Since only a few servers will be calling you, you'll only need a few client certs.

    Like everything, it's as secure as you make it. If you expose "FuckMyOS" as a SOAP method and publish it through UDDI or something then ... well ... you get what you ask for. Signatures on SOAP requests aren't (easily) supported by everything yet - but then SOAP implementations differ (eg MS SOAP has no types, IBM SOAP does). This isn't a major issue as it's pretty easy to roll your own request - it's only XML after all.

    PS I have no opinion on Vladinator's website.
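
    As a rough illustration of the "ignore anything not signed by a recognized customer" idea above: a real deployment would use XML digital signatures, but a shared-secret HMAC carried in a request header (the X-Envelope-HMAC name is invented here) shows the shape of the check:

      # Verify that an incoming envelope was signed by a known customer's shared key.
      import hmac, hashlib

      CUSTOMER_KEYS = {"acme": b"shared-secret-issued-out-of-band"}

      def request_is_trusted(customer_id, signature_hex, envelope_bytes):
          key = CUSTOMER_KEYS.get(customer_id)
          if key is None:
              return False                      # unknown caller: silently ignore
          expected = hmac.new(key, envelope_bytes, hashlib.sha256).hexdigest()
          return hmac.compare_digest(expected, signature_hex)
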

    • I have to agree, because letting in "dangerous" function requests in over HTTP is fairly close to the same risks as running CGI scripts.

      CGI programmers have had to be trained to code safely, not doing stuff like:

      system "$unchecked_input";

      SOAP programmers have to be trained similarly. Give 'em some rope, but make sure they don't hang you with it!
  • by TheSHAD0W ( 258774 ) on Friday November 16, 2001 @11:48AM (#2574676) Homepage
    IMO you should run separate functions on separate ports. I don't think this increases or decreases security much, but it greatly improves scalability.

    I could, for instance, run my setup on a single box; and then, when traffic went up and the service got popular, replace the box with a Linux firewall to an intranet. The functions could then be divided among several machines on that intranet, and having the firewall box route different ports to their dedicated machines would be a trivial task.

    Hell, you could even have redundant machines for critical operations, and if a failure occurred you need only change the routing on the firewall box to get things back up.
  • My real concern about tunneling everything through a single port or protocol is that it makes network auditing much more difficult. If there is a security problem, or just a general network problem, the fact that everything looks like HTTP doesn't help track down the problem.

    However, there is a flip side to this. I have been in the position of trying to convince large companies to change their firewall configurations. It would be easier to make lead in to gold than to get a large company to allow communications through a new port on their firewall.

    This basically means that putting everything through port 80 serves two purposes. It gives people the perception of security, and it lets the project actually happen. It is the case that not having to change your network configuration is a powerful marketing tool, but it doesn't make anything more secure. All of these issues are addressed in just about every networking book out there.
  • Portals are nice, from a security perspective. You can run all your applications behind a front-end webserver, only accessible via port 80. Some nice firewalls, like Checkpoint, have an HTTP security server which does bounds checking and similar on HTTP requests. Couple this with a good, reliable webserver (apache or netscape), and any applications running behind the portal are less susceptible to an overflow attack, since the only machine that can access these applications is the webserver, which means an attacker would have to compromise the web server first.

    Also by doing portals in this way, you can force users to authenticate an HTTPS session before accessing the portal site, and the services behind the portal. Of course, how you do authentication can be anything from login/pass to securid or X.509 certificates. Once the users authenticate themselves, then accessing the applications "through port 80" is more secure.

    However, setting up multiple DMZs is the way to go. In my example above, where the webserver accesses the services behind the portal, you'd set up those applications in their own DMZ (separate from the webserver DMZ). Access to this DMZ wouldn't be allowed directly from the outside (restricted by FW), which again would require a compromise of the web server. The other advantage is, if an attacker were to compromise the application *somehow* without a webserver compromise, then this would restrict them to only boxes in this second DMZ and therefore would not compromise the webserver ALSO. Setting up a DMZ correctly means a lot. You can set up a DMZ to accept incoming connections but not to allow anything outbound (except for state traffic). This would prevent an attacker who has compromised services in the DMZ from attacking anything else from that point into the rest of your network.
    • Some nice firewalls, like Checkpoint, have an HTTP security server which does bounds checking and similar to HTTP requests.


      That's all very well, but the inspection performed by the FW-1 HTTP Security Server is quite expensive in performance terms (effectively it turns FW-1 from a stateful packet filter into a proxy).


      Not only that, but historically, there have been plenty of problems with the Security Servers to the extent that I wouldn't be happy deploying it on a production, high traffic network (and certainly not without extensive validation).


      To be fair, I haven't worked with FW-1 recently and haven't looked at NG (aka v5) at all, so things may well have gotten better.



      • There is a solution for high-traffic networks: Firewall sandwich.

        And you're right, the HTTP security service is resource intensive, but that's why you get a few boxes in a sandwich and the load is lessened.

        It all has to do with costs on whether you want to implement such a solution.
  • This is definitely a valid question, and I think, personally, the answer would be yes, the entire notion of web services has some serious security repercussions. In the past, web traffic was web traffic. Now that HTTP is being used to essentially tunnel an RPC call into your servers, it means that the same servers that have, time after time, been compromised are now the same servers providing vital access to critical data systems.

    Now, this does NOT mean that web services are bad, simply that web services have to be written with the understanding that they ARE more open than normal simple RPC calls. Greater use of this design means greater risk in general, since now functions that may be susceptible to buffer overflows, denial of service attacks, etc., are basically sitting out there in the open. I've never heard of a denial of service attack targeted at an RPC mechanism, but with little or NO modification, this type of attack could be deployed 'out of the box'.

    New security measures will have to be created in order to thwart this greater risk that will now be exposed.
  • Building analogy (Score:3, Insightful)

    by 1984 ( 56406 ) on Friday November 16, 2001 @11:52AM (#2574697)
    This isn't a perfect analogy, but think of it like a building, where port 80 is the front door that comes into the foyer. The windows are miscellaneous ports, and the loading dock is some port you use for something else (maybe 22).

    Let's say you have a security system hooked up to the front door, the windows, loading dock doors etc. Normally pretty much anyone is allowed to walk through the front door. You do hope nobody manages to climb in through a window, and you have strictly controlled access via the loading dock.

    Now if your reception is poorly designed your only hope is that nobody who walks through the front door hacks off the head of your receptionist and proceeds to go walkabout through the building screwing with things. If your reception is well designed this will be hard to do.

    You could even have it so that there's some hazard to those right there in reception but breaking out of reception is as hard as breaking in any other way. But you don't just assume it's secure because it's nicely decorated or (in this case) because so many people walk through receptions it *must* be secure.

    It's just a security model. If you alter the constraints and facilities of the environment, then you've also changed the range of threats to that environment. And you tailor the prophylactic security, intrusion detection and response to the potential threats and damage of compromise.

    Overall, if you want to have any security, you have to think about security. However the hell you set up your systems.
  • A positive note.. (Score:4, Interesting)

    by Thomas Charron ( 1485 ) <twaffle@@@gmail...com> on Friday November 16, 2001 @12:01PM (#2574761) Homepage
    After posting my last reply, I thought of something that is a GOOD thing regarding SOAP over HTTP that deserves mentioning. By directing and detecting all web traffic, you now have a transactional log of all RPC calls being made into your system. So while yes, you are possibly exposing things, you have a much better logging mechanism in a central location than you would have by having any given application tunneling through its own socket, making calls to its heart's content. All calls can now be logged, filtered, redirected, etc.

    Now of course, this does apply only to SOAP over HTTP, and possibly not SMTP/POP3, Raw socket, MSMQ, etc..etc..
  • Bruce Schneier covered this more than a year ago in the 15.06.2000 cryptogram [counterpane.com]. Anyone who has read Schneier's newsletter long enough begins to realize that he is the Cassandra of the Internet...
  • by dacron ( 467082 ) on Friday November 16, 2001 @12:06PM (#2574786)
    SOAP has actually gone well out of its way to allow server admins to filter requests. It makes use of the "Mandatory Header" aspects of the HTTP protocol such that every SOAP request must come with an HTTP header specifying which function is being called. Since it's in the header, a server doesn't need to know SOAP to filter; it just needs to know HTTP, and the server can simply turn away anything that doesn't provide such a header.

    I agree there is still a major lack of support for this type of filtering, and even the standard leaves something to be desired in this respect, but the SOAP designers definitely did think that this was a big enough problem to provide facilities for future closing of these holes.
    It's a bit of a pain to administer, but it definitely *can* be done.
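
    As a sketch of that kind of filtering: a front end only has to read the SOAPAction HTTP header, never the XML body, to decide whether a request may pass (the allowed values below are of course site-specific inventions):

      # Header-level SOAP filtering: allow only calls whose SOAPAction is on the list.
      ALLOWED_ACTIONS = {
          "urn:example-quotes#getStockQuote",
          "urn:example-quotes#getHistory",
      }

      def soap_request_allowed(headers):
          action = headers.get("SOAPAction", "").strip('"')
          if not action:
              return False          # no SOAPAction header: turn the request away
          return action in ALLOWED_ACTIONS
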
  • This is a non-issue. You can run any protocol over any port. If you thwart your own firewalls by running all services through the same port that's your own damn fault (or your clients' fault).
  • Yes, another port would be simpler to secure. Without that, firewall administrators will simply go higher in the stack and look at layer 7. In other words, the firewall will have to pick out the URL and apply rules to that. Of course, this also implies the firewall is tracking connections, etc. It can no longer be just a dumb packet filter, but no serious firewall is.

    In the end, the lack of a port as a service differentiator isn't a big deal. What is important is that you have something which differentiates the service. A URL can do that, it just costs a little more CPU.

  • by Ars-Fartsica ( 166957 ) on Friday November 16, 2001 @12:33PM (#2574904)
    At some point it will be simply impossible to effectively firewall by port, protocol or syntax. There are so many ways of piping functionality across the network that firewalls are simply going to have to become intelligent about what represents a hostile bytestream.

    I wonder if anyone is working on this?

  • To me the more appropriate question is what is quicker/easier to develop? Configuring the network firewall, servers and router is the job of the Network and system administrator, so do you really have much influence over those factors?

    Having done sys admin work, it's much easier and less work to go through port 80. That's one less port to keep track of, and it allows me to build expertise on securing HTTP. Learning to secure a lot of different ports isn't hard, though it is time consuming. Teaching it to new staff and making sure they understand all of it is another matter. That's one reason for the adoption of SOAP and other XML/HTTP protocols.

    From a developer perspective, would you rather build in IPSec to your IIOP, CORBA application, or setup HTTPS and go through a well tested system? Rolling your own security on top of IIOP and CORBA isn't a trivial task. You could build your own encryption wrapper for IIOP or CORBA, but you would have to handle all the key storage, key management, encryp/decrypt, secure sessions, and authentication to create robust, reliable security.

    If your application really needs greater than 128-bit SSL, then going through a web server on port 80 doesn't do anything for you. To my knowledge RMI can make HTTP connections via java.rmi.server.RMISocketFactory. There are existing Java libs to handle both SSL and key management, so going with port 80 is really an administration choice.

  • I asked pretty much the same thing of Microsoft when they first announced .NET (which is closely tied to SOAP). For anyone who's curious: I asked a couple of people, so I don't really remember WHO I talked to, but I do know that Scott Gu was one of the people.

    Their response?

    Developers are tired of being hampered by netadmins, trying to open up insecure ports just so that DCOM will work. Basically, SOAP is a way to do it where you don't have to open up esoteric and undocumented ports and protocols...

    As far as security goes... it's up to the implementors. SOAP does have one advantage over some other forms of RPC, in that it has a few built in forms of authentication and is explicit as opposed to implicit. That means you can't just randomly activate bits of code just because you can log onto a server.

    Another advantage of SOAP is that a decent XML coder can write his own parser for the protocol, so you don't have to use the vendor's, and you can customize your parser to only pass safe requests.

    Of course, some of the MS people indicated that they felt I should use the MS parser at this point. I haven't seen anything bad with it, but I wouldn't have any qualms about writing my own if the business needs dictated it...
  • by ahde ( 95143 ) on Friday November 16, 2001 @12:50PM (#2574993) Homepage
    SOAP was developed specifically so companies (such as Microsoft) can execute arbitrary code through otherwise secure firewalls, where all they have to do is get the user to download a simple client program that wraps the commands in an XML format and sends it as an innocent-looking HTTP response. It was designed to *solve* the problem of corporate users wanting to run network applications that are verboten or otherwise blocked by their network administrators.

    SOAP is designed with security in mind. Security circumvention.
  • Steven Deering from the IETF had an interesting point about running a bunch of services on top of port 80. If you run a bunch of services on top of port 80, all you've done is build a protocol stack on top of things running on port 80, and you've turned TCP into a layer 2 protocol. You haven't solved anything; in fact, you've moved your problem up a level. This is ridiculous. We need to get back to running separate services on separate ports, just as the Internet was designed to do.
  • by heilbron ( 122789 ) <heilbron@nm.ifi.lm u . de> on Friday November 16, 2001 @12:52PM (#2575004)
    Bruce Schneier had an interesting statement on security and SOAP:

    Crypto-Gram Newsletter, June 15, 2000: SOAP [counterpane.com]
  • by AugstWest ( 79042 ) on Friday November 16, 2001 @12:56PM (#2575017)
    I work for an ASP, and we basically have to build full web applications that function like Office tools, and believe me, port 80 is a necessity.

    We need to fire up a java applet on the client machine that maintains a session with the server. We also need to allow chat.

    I can't begin to tell you how many millions of dollars we've lost as a company because of large corporations that refuse to adjust their firewall settings to accommodate web applications.

    Some of them don't want .jar files being executed. Some of them just don't want to allow anything but port 80.

    If we're only allowed traffic on port 80, which is the case when dealing with 90% of corporate environments, your choice is either a) get the services running over port 80 or b) give up on maintaining your business.
  • To date, there have been a large number of tools dedicated to the creation and deployment of web services, but relatively little thought has been given to relationship management between services (a subset of which is security). Only a handful of companies (e.g., the deftly-named Grand Central [grandcentral.com] and Flamenco [flamenconetworks.com]) have started to broach this issue.

    I think we can expect to see a large amount of activity in the area of what it takes to connect web services in the real world (i.e., with sensitive data, in business-critical operations, etc.) in the near future. One certainly would not want one's web services to be abused [langa.com]/cracked [wired.com] as easily as Microsoft's Passport "technology". It will be interesting to see how this new market evolves.

  • Don't forget that there are a lot of customers out there that can only contact sites on ports 80 and 443. I have run into this time and again. You want to use a port other than 80 for admin or security reasons, only to find out that your customers' security practices don't allow communication to other ports.

    This is true for both consumers and business customers.

    So while you might want to run a service or application on another port, you might be locked into port 80.

    Just something to keep in mind.

    Besides, you shouldn't rely on obscure ports for your security. You should build security into your application from the start. And you should NEVER trust any data that comes from "outside" your applications.

    Cheers!
  • by rice_burners_suck ( 243660 ) on Friday November 16, 2001 @01:13PM (#2575122)

    I would say that drilling open a bunch of ports on a firewall is probably safer than opening port 80 and nothing else and running all services through this one port. Why do you suppose we have ports in the first place? If everything is supposed to run on just one port, then we should have just an IP address and no ports at all! But we do have ports, 64K of them.

    In my opinion, every "server" program running on a computer should have its own dedicated ports which it listens on and performs operations through. For secure operation, you decide which services you need and enable only those services. Since all ports not used by these services are, well, not used, then you should block those ports in your firewall.

    Want more security? Most non-computer people simply don't understand the concept of good computer maintenance. I keep telling people that just like any machine, computers need to be well maintained or their operation degrades over time. (And that means that security vulnerabilities become more likely as time goes by without proper maintenance.) This includes software and hardware maintenance. Once you have a well-functioning system, you can search for big security vulnerabilities, like unnecessary programs or whatever. Once those are gone, you look for smaller things, like software configuration that might allow an intruder to get increased privileges. Once those are gone, you can go deeper, by getting some h4x0r programs and torture testing your system (being careful not to mess up other peoples' systems in the process). Once you can't get into your own system, you can go deeper yet by examining and auditing the source code of programs you're running (if the source is available to you). I'm sure there are about 30 other steps in between these, but these four are the big tick-marks I can think of right now. Oh well.

  • I wrote "Web RPCs Considered Harmful" [monkeyfist.com] that briefly addresses the security issue.

    Summary (and using more recent terminology): Web services that expose more new and unique code are more likely to expose bugs. RPCs, SOAP, and CGIs all encourage developers to write more exposed code by making that style easier to do.

    One better alternative is to be more data-driven (some would say "functional", as in "functional programming"), so that you only expose data (via a standard server which would typically be more mature, heavily reviewed code).

    Alas, that's an entirely different way of thinking that most people are not used to, since it flies in the face of "normal procedural or OO programming" that happens on the desktop. Some examples, though, are Linda Systems (TupleSpaces), REST [conveyor.com] (the traditional WWW architecture), and even P2P to a large extent.
  • I will now summarize this entire article into two opposing viewpoints:

    • SOAP is great - it lets you work around those annoying firewalls and get stuff done on port 80!
    • SOAP is wrong - firewalls are there for a reason, and just running everything on port 80 doesn't make it any more secure.

    I tend to agree with the second argument, but until we have powerful stateful protocol filters for all protocols that could go through port 80 or wherever, there's no real difference between opening 50 separate holes or one big one. Even then, bad stuff can get in and out over https, etc. So SOAP doesn't really make things much worse, it just points out security issues that we've been ignoring all along.

  • by StormyMonday ( 163372 ) on Friday November 16, 2001 @02:19PM (#2575526) Homepage
    ... for the SOAP protocol is that Microsoft's ActiveX services use a portmapper to get dynamic port numbers for their services. Needless to say, this is absolute hell to try to run through a firewall with anything resembling security.

    Hence SOAP. You piggyback your ActiveX control onto another service (HTTP) that uses a single port. Smart admins will use something other than port 80; we know how many of *those* there are.

    There is also the problem that firewall admins tend to take their job seriously -- they know that if anything nasty gets into the network, they'll get blamed for it. They tend to be *very* conservative. Web admins don't -- most of them think that the worst that can happen if they get hacked is that they'll get pitchers of nekkid wimmen on the corporate homepage. They don't care. *Much* easier to deal with web admins than firewall admins. Lotsa places will even let you have your own web server if you promise to be nice.

    As to what it can lead to, check out RFC 3093, Firewall Enhancement Protocol (FEP) [isi.edu]
  • A firewall is the wrong approach anyway. It presumes that you can declare a sure perimeter behind which things can be "trusted."

    There are so many ways around most firewalls (modems, wireless networks, unscrupulous visitors, viruses on removable media and whatnot) that the firewall is really just the "front door."

    End-to-end security -- defense in depth -- is the only way to be sure. Each machine has to be "strong enough" -- just like most office desks and doors are equipped with locks, though most of us don't use 'em.

    Clearly we live in a world where most desktops are _completely_ insecure, so firewalls aren't completely worthless. But perhaps SOAP and the like will have some benefit through clueing in some of the clueless that there's more to security than throwing up a firewall.
  • Fact is that running SOAP over port 80 or not doesn't make much difference. Someone once said that IT security is 20% technology and 80% policy and practice. These numbers are debatable, but I agree with the premise.

    The problem is that certain things have to be open on a networked computer in order to benefit from the networking in the first place. You need layered security. You can't just secure your physical, network and transport layers and expect everything to be okay. You need to know what's going on all the way up to the application layer.

    You need to use DMZs, staggered firewalls, SSL, SSH, applications that force you to login, appropriate file/directory/service security permissions. You need to know at any time what software your boxes are running, and make an effort to understand how that software works and what issues it presents. You need to patch commercial software, read the bug lists and do penetration testing.

    There's obviously more that can be added to this list, but the point is that security is a process, not a technical specification, a device... or a choice of port.

    Most organizations don't invest enough in this process because those controlling expenditure tend not to understand the importance. Also, security is one of those things you only notice when it doesn't work, so it is assumed you are doing it, and you'll never shine for doing a great job at it.

    I think it will take a much more hostile Internet security environment to wake people up to the need to invest in the most critical security capital of all: talented, educated and dedicated human beings.
  • The issue with SOAP is not one of security - what port you run on is neither here nor there - but the fact that most technologies based on XML are a load of old rubbish.

    XML may be a "standard" but so are technologies such as Java serialisation and they work just fine over HTTP. This works automatically and leads to fewer programming errors due to "impedance mismatch", surely the chief source of any security holes and other bugs.

    I don't buy the argument that an XML schema is any more future-proof than a Java class spec. Java handles class changes etc. quite elegantly. And I don't buy the "XML is language-independent" line either - it's just hard to read XML in any language. So you have to use that awful Xerces stuff that changes every 2 months, with little backward compatibility between versions.

    Don't be fooled - there is simply nothing that uses XML that can't be done more elegantly some other way. XML is not a technology - it, along with SOAP, is a completely pointless standard.
  • by ipoverscsi ( 523760 ) on Friday November 16, 2001 @03:55PM (#2576064)

    A couple of rebuttals if I may.

    Many people claim that one can run services on any port they choose, so port filtering is not the same thing as service filtering. True, but if people ran anything on any port we would have no concept of well-known services at specific ports. Moving web traffic from port 80 makes almost no sense because that's where everyone is going to look for it by default. There is a high probability, then, that filtering on specific ports will filter specific services.

    Network administrators, by default, are highly suspicious and paranoid people. They don't even trust the people they work with, and for good reason. If they could force everyone to use pine or mutt for e-mail reading, I'm sure they would, since it is less susceptible to Outlook-borne viruses. If development teams would communicate with and seek advice from the security team when developing applications, I'm sure there wouldn't be as much hostility to opening a port as there is when approached with "We just wrote an application. Can we have a free port?"[1]. In the latter case, the security team has no idea what the application does or how it was developed and is certainly not inclined to open a port to untrusted software.

    Finally, on to the subject of my article, Apache (or whatever server you're running) is the inetd of the future. Look at the facts:

    • both listen on one or more ports for requests
    • when a request comes in it is dispatched to the correct subsystem
    • most security (ssl, https, tcpwrappers) is handled by the daemon before it gets to the service handler
    • the service handler can perform further accounting or security checks
    • the daemon handles all the networking details on behalf of the subsystem
    Add to this the fact that this is all multiplexed on a single port, and configuring your firewall should be a breeze. Virtually anything you can do with inetd you can do with a good web server.

    Paradoxically, network admins appear less paranoid about their web servers than other inetd-based or standalone services. Some guy codes up a web app and, with little fuss, gets it deployed on the server. No code review, no hassle, no problem! There are only two reasons I can think of for this behavior: 1) The administrator inherently trusts the web server, or 2) the web server box is in a DMZ. I would be suspicious of administrators in the former case.

    Despite the security advantages of a DMZ, it is still necessary for application developers to communicate with security people. Say, for example, that a web application is deployed on a server in a DMZ and that the machine is later compromised. If the application had a configuration file with passwords for a database, the database should now be considered compromised. Damage can be reduced or prevented by correct configuration of the database (providing write access only to a specific table rather than the whole database), but you should check with the security people before actual deployment.[2]

    [1] The standard answer to this question is "No". Note that the administrator only answers the question asked. If you want to be more successful in the future, present a full document detailing what the software does, how it works, and maybe provide the admin with a code review, THEN ask for a port. I know this is a lot of work, but it is necessary to maintain the security of the network. You may not take security seriously, but your administrator does.

    [2] Yes, I know that there are moron security people out there. My comment assumes you have good to excellent security people working in your company.

  • by ajv ( 4061 ) on Friday November 16, 2001 @10:14PM (#2577392) Homepage

    It's not about the connection method, it's the content that traverses the corporate boundary that is the issue.

    If the content shouldn't be going over the boundary, then it doesn't matter how you achieved it - you're still in the wrong. You could do it in CORBA, you could do it in simple HTTP GET and POSTs, it doesn't matter.

    As a developer, I can make SOAP invisible to all firewall administrators by using HTTPS or by abusing their firewall's limitations (most firewalls are incredibly stupid - they don't and can't parse even basic protocols like HTTP, and thus let anything out on port X if port X is allowed outbound).

    As a person responsible for security, your use of any services not explicitly allowed is probably against security policy. But security policy is there to enable business, not inhibit it. This is the single biggest failing of most security people: they lose sight of why they are there!

    If it takes too long to get a content-flow approved, then that is a failing of the content-flow negotiation process, and it's not about technology at all.

"It's the best thing since professional golfers on 'ludes." -- Rick Obidiah

Working...