Ask Slashdot: Writing Hardened Web Applications? 333

rhartness writes "I am a long-time software engineer, but almost all of my work has been developing server-side intranet applications or applications for the Windows desktop environment. That said, I have recently come up with an idea for a new website which would require extremely high levels of security (i.e., I need my servers to be as rock-solid and unhackable as possible). I am an experienced developer with a general understanding of web security; however, I am clueless about what is required to create a web server that is as secure as, say, a bank's account management system. Can the Slashdot community recommend good websites, books, or any other resources that thoroughly discuss setting up a small web server or network for hosting a site that is as secure as possible?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday January 02, 2012 @08:21PM (#38567582)

    Also read The Web Application Hacker's Handbook. (google: wahh)

  • by dwheeler ( 321049 ) on Monday January 02, 2012 @09:46PM (#38568416) Homepage Journal
    Take a look at my book on secure programming: http://www.dwheeler.com/secure-programs/ [dwheeler.com]. I wrote it after I saw software getting broken into, again and again, for the same old reasons.
  • by naasking ( 94116 ) <naasking@gmaEULERil.com minus math_god> on Monday January 02, 2012 @11:46PM (#38569140) Homepage

    However hard you harden your web app, if it's running anything important, it WILL get hacked. There's nothing on this planet that cannot get hacked if it is software.

    I disagree. You can mitigate many risks by using a proper (memory-safe) language or a web framework that handles user input and session/CSRF protection for you. You can go further and use a theorem prover to verify that your implementation has the properties needed to guarantee safety.
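To make the framework point concrete, here is a minimal sketch of the kind of session-bound CSRF token a decent web framework generates and checks for you, using only the Python standard library. All names here are illustrative, not from any particular framework:

```python
import hashlib
import hmac
import secrets

# Illustrative sketch of per-session CSRF tokens, the sort of protection
# a web framework provides out of the box. A real SECRET_KEY would be
# loaded from configuration, never generated fresh at import time.
SECRET_KEY = secrets.token_bytes(32)

def csrf_token(session_id: str) -> str:
    """Derive a token bound to one session so it cannot be replayed elsewhere."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf(session_id: str, submitted: str) -> bool:
    """Constant-time comparison to avoid leaking the token via timing."""
    return hmac.compare_digest(csrf_token(session_id), submitted)
```

The token is embedded in each rendered form and checked on every state-changing request; because it is keyed to the session, a token stolen from one user is useless against another.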

  • by IBitOBear ( 410965 ) on Tuesday January 03, 2012 @03:32AM (#38569920) Homepage Journal

    [A:] Never accept form data via GET, always require POST.

    I have never understood why any web page, other than something like Google search, would want to accept data as part of the URL for any meaningful interaction.

    Sure, it's bookmark-friendly, but:

    (1) GET contents are logged by default, and are pen-trap eligible in post-facto and blind ("fishing") legal actions; POST contents are not.

    (2) GET is the verb used in things like IMG SRC="" requests; if no GET request can perform an incidental write operation, then that whole category of cross-site request forgery attack is rendered moot.

    (3) Because of item 1, the contents of your web server logs are, by default, promoted from a stream of tidbits to a first-tier security risk in need of secure archiving. [If you have followed good practice and separated your database machine and your web server onto separate platforms, for instance, then a classic compromise of your web server will net very little if all your logs say is "IP X.X.X.X GET http://site.tld/someform [site.tld]" and "IP X.X.X.X POST http://site.tld/somerequest [site.tld]". If action and identification information are passed around in your GETs, then an attacker can learn that IP address A.B.C.D is the home of USER=Bob, and so forth.]

    Basically, if people had _honored_ the designation of everything after the question mark ('?') as a _query_ _string_ in the HTTP specification, without carrying the SQL-burdened definition of "query" into the issue, a lot of web pain could have been avoided.

    Yeah, it might not be as bookmark-friendly, but when is it ever smart to bookmark the POST of a filled-in form?
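The POST-only rule above is easy to enforce at the entry point of a handler. A hypothetical WSGI-style sketch (names invented, no framework assumed) that refuses any non-POST method and any URL that smuggles in a query string:

```python
# Hypothetical sketch of rule [A]: state-changing form handlers accept
# POST only, and any request carrying a query string is refused outright
# so nothing sensitive ever lands in the URL or the access logs.
def handle_form(environ, start_response):
    if environ["REQUEST_METHOD"] != "POST":
        start_response("405 Method Not Allowed", [("Allow", "POST")])
        return [b"state-changing requests must use POST"]
    if environ.get("QUERY_STRING"):
        start_response("400 Bad Request", [("Content-Type", "text/plain")])
        return [b"no parameters in the URL, please"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```

Parameters arrive only in the request body, so the access log records nothing beyond the path, as argued in item (3).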

    [B:] Learn what a _real_ DMZ is (e.g. two routers with the public machines between them, the internet behind one end and the intranet behind the other, with very intense restrictions on what traffic can pass from the DMZ into and through the internet end, and _both_ routers configured to _distrust_ _all_ connection attempts originating from the DMZ machines). Then implement this arrangement correctly. There are a bunch of rules for doing this right, and if you follow them your web service machine will be "stuck" in a deep warm hole of safety: what it can do will be greatly limited, which is at least as important as limiting what can be done to it. Most exploits require more than one path to the machine, for instance tricking the web server into "calling you back" with a telnet session or an FTP or SCP of bulk data. If the web server can only pass traffic on the one port (port 80, etc.) off of the DMZ, then even a successful compromise of the machine may be stopped from having any net effect.

    [C:] Every machine in the DMZ is allowed to do exactly one thing. E.g. don't build a LAMP machine; build a LAP machine and a separate LM machine and place them very close together. This sort of separation can even be done with virtual machines, just so long as the machines cannot peek at one another's storage, etc.

    This is not mainstream wisdom, but it is out there if you look for it. (e.g. I didn't make all this stuff up myself. 8-)

    There are lots of things that are easy, but not always cheap, to do that could make the world much safer.

    They just aren't in the five-days-to-your-web-presence quick-start guides to web servers.

  • by TheLink ( 130905 ) on Tuesday January 03, 2012 @09:37AM (#38571288) Journal

    and monitor all traffic to detect anyone *attempting* to connect over any other port, and immediately greylist their IP address for an hour. If they repeatedly do it, then blacklist them permanently.

    From what I see in real-world firewall logs, there are often tons of IPs trying to connect to your nonlistening ports. And those can be from dynamic IP users. Blacklisting these permanently would cause more problems and not really help much (assuming your system is hardened and has upstream DoS/DDoS protections in place).
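For what it's worth, the greylist-then-blacklist escalation being discussed is only a few lines of bookkeeping. A sketch with invented thresholds (real deployments would push this into the firewall itself, and, per the objection above, permanent blacklisting of dynamic IPs is dubious):

```python
import time

# Sketch of greylist/blacklist escalation. Thresholds are illustrative.
GREYLIST_SECONDS = 3600          # one-hour greylist per the proposal
STRIKES_BEFORE_BLACKLIST = 3     # invented cutoff for "repeatedly"

class PortScanTracker:
    def __init__(self):
        self.grey = {}        # ip -> greylist expiry timestamp
        self.strikes = {}     # ip -> count of bad connection attempts
        self.blacklist = set()

    def record_bad_connect(self, ip, now=None):
        """Call whenever an IP probes a non-listening port."""
        now = time.time() if now is None else now
        self.strikes[ip] = self.strikes.get(ip, 0) + 1
        if self.strikes[ip] >= STRIKES_BEFORE_BLACKLIST:
            self.blacklist.add(ip)
        else:
            self.grey[ip] = now + GREYLIST_SECONDS

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        return ip in self.blacklist or self.grey.get(ip, 0) > now
```

Note the state this accumulates: against the background scanning described above, the `strikes` table grows without bound, which is part of why permanent per-IP blacklists scale poorly.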

    and kill the session if a single out-of-order packet is received.

    If you're worried about that sort of thing, you should solve it by using TLS/HTTPS (correctly ;) ) rather than killing sessions just because an out-of-order packet is received. If the attacker already has the ability to pwn a user's TLS/HTTPS connections, the attacker has no need to inject out-of-order packets to pwn that user.

    If you're that paranoid, what you could do is set up "honey data" and "honey rows" in database tables. For example, you could create customer records for nonexistent people or items that don't appear anywhere else in the world. If those records are ever accessed, it means something has gone wrong. And if that data ever appears "outside" (on the internet or elsewhere), it may mean something has gone very, very wrong...
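The honey-row tripwire amounts to auditing query results against a set of planted IDs that no legitimate code path should ever touch. A minimal sketch, with invented IDs and row shape:

```python
# Sketch of the "honey row" tripwire: IDs of planted records that no
# legitimate query should ever return. IDs and row layout are invented.
HONEY_CUSTOMER_IDS = {90001, 90002, 90003}

def audit_result_rows(rows, alert):
    """Run over each query result; rows are (customer_id, name, ...) tuples.

    Calls `alert` with a message for every honey row that was read.
    """
    for row in rows:
        if row[0] in HONEY_CUSTOMER_IDS:
            alert(f"honey row {row[0]} was read -- possible breach")
```

The same planted values can be watched for "outside" with search alerts or breach-monitoring feeds, since by construction they exist nowhere else.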

    Another way for attackers to access the data would be via the backups and the systems that do backups. So even if your web apps and servers are super-hardened, it may not matter if the attacker can get the data via the backups.
