Programming / Security

Ask Slashdot: Writing Hardened Web Applications? 333

rhartness writes "I am a long-time software engineer; however, almost all of my work has been developing server-side intranet applications or applications for the Windows desktop environment. With that said, I have recently come up with an idea for a new website which would require extremely high levels of security (i.e. I need to be sure that my servers are as rock-solid and unhackable as possible). I am an experienced developer, and I have a general understanding of web security; however, I am clueless about what is required to create a web server that is as secure as, say, a banking account management system. Can the Slashdot community recommend good websites, books, or any other resources that thoroughly discuss the topic of setting up a small web server or network for hosting a site that is as absolutely secure as possible?"
  • by Elgonn ( 921934 ) on Monday January 02, 2012 @08:16PM (#38567502)
    I've seen many a question or thought like this and I don't understand the underlying wonderment. Web applications aren't different from any other networked applications. You just have a larger selection of clients that could be communicating with you. But you'd never trust ANY client, would you?
    • by rsilvergun ( 571051 ) on Monday January 02, 2012 @08:27PM (#38567650)
      Aren't web apps very different? Inside my intranet I can make certain assumptions that I can't on the Web. If those assumptions prove false, it's because another layer above me isn't doing its job. You might balk at this, but the fact is that as programmers we're constantly relying on some layer above us; whether it's network (TCP/IP, SSL, TLS), software (the OS, the API) or hardware (is the memory on this board bad?).
      • by b4dc0d3r ( 1268512 ) on Monday January 02, 2012 @08:40PM (#38567818)

        Sounds like you're making bad assumptions. One unexpected breach and your network is no longer secure. On a secure network, you still close off all of the ports except the ones you use. You don't make assumptions that something is safe, you add IP filtering and passwords.

        Web apps are exactly the same as any other intranet app, and should be just as secure. The only difference is, you also have a web server and a framework adding potential bugs and holes. And then your code most likely has to protect against common browser-based attacks and handle user authentication/authorization on a stateless connection.

        Don't trust anything on any network, or you'll end up like Sony. Breach after breach.

        • by nahdude812 ( 88157 ) * on Monday January 02, 2012 @09:22PM (#38568244) Homepage

          You fail to actually address any of the technologies he mentioned as a layer above. You're talking about closing ports and other pretty standard, intro-level security. Sure, there's overlap, but what he's saying is that a lot of common Internet problems are reliably and intelligently pre-solved for you if you control enough of the technology stack.

          I'll pick his example of TLS since that's a good example of the sort of technology stack you can rely on in an intranet application which is prohibitive to implement in an Internet application.

          If your web server has validated a TLS certificate, unless your signing authority has been compromised (and for internal purposes, that's owned by your own company's security team), you can trust the subject of the TLS cert. It is not only considered safe to assume that TLS is valid, it's widely regarded as one of the most secure possible means of authentication you can have since it includes endpoint verification on both ends. It's excellent practice, but if your CA is compromised it falls apart. Of course you're probably also relying on other proven technologies like LDAP for identification, but if someone ends up with write access to parts of LDAP they shouldn't have, this falls apart too.

          In internal applications you can make these sorts of assumptions that aren't really available on the public Internet, since you don't control enough of the technology stack outside your own network to do so without substantial inconvenience for your customers. That doesn't make you a bad developer. In fact the opposite is likely true. If you're building an intranet web application and you think you can do a better job of managing user credentials than LDAP or a better job of securing communications than TLS, you're deluding yourself and very likely introducing security bugs into your application.

          • by b4dc0d3r ( 1268512 ) on Tuesday January 03, 2012 @12:51AM (#38569368)

            a lot of common Internet problems are reliably and intelligently pre-solved for you if you control enough of the technology stack.

            Another bad assumption, that you control enough of the technology stack. I did address the technologies a layer above. You can't control the IP implementation, nor the TLS, nor LDAP, nor anything else outside of what your framework allows you to control.

            Therefore, you have to distrust everything. Assume your LDAP is compromised, that TLS is broken, that your framework and web server and host OS are all broken. Write as defensively as possible.

            This isn't about just writing an application; there is OS-level hardening, web server hardening, framework hardening, and more. You can't assume it's all in place and just write the application. Especially if you are "clueless about what is required to create a web server that is as secure as, say, a banking account management system."

            That is one of the reasons Sony got hacked repeatedly within a few weeks. Don't assume anything. The web is hostile, everyone is an enemy, and no matter what your assumptions, unless you assume that everyone is an enemy, you are going to be wrong. Just once out of a million page views, or a trillion, or a trillion squared, once is all it takes.

      • by gwolf ( 26339 ) <gwolf AT gwolf DOT org> on Monday January 02, 2012 @09:20PM (#38568228) Homepage

        Most attacks come from trusted machines, either from people wanting to use their rightful access level to get more data than they should (or modify data they should not be allowed to), or by bots crafted to infect internal users' workstations and steal their credentials. No, you should not trust them just because they are internal any more than you should trust me.

    • by tysonedwards ( 969693 ) on Monday January 02, 2012 @08:35PM (#38567756)
      Exactly. The idea is that you should assume every piece of data you are receiving is likely malicious, so you should sanitize every variable, never execute *anything* sent to you, mandate bidirectional encryption in which you verify certificates at both sides, and kill the session if a single out-of-order packet is received.

      As well, block *every* port except the one that you intend to use within your application, and monitor all traffic to detect anyone *attempting* to connect over any other port, and immediately greylist their IP address for an hour. If they repeatedly do it, then blacklist them permanently.
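
      For illustration only, here is a minimal in-memory sketch of the greylist/blacklist bookkeeping that would involve (the thresholds, duration, and function names are assumptions, not part of the suggestion):

        import time

        # Hypothetical thresholds: a one-hour greylist, permanent blacklist
        # after repeated probes of non-listening ports.
        GREYLIST_SECONDS = 3600
        STRIKES_BEFORE_BLACKLIST = 3

        greylist = {}    # ip -> (expiry timestamp, strike count)
        blacklist = set()

        def register_probe(ip):
            """Record a connection attempt on a port we don't listen on."""
            if ip in blacklist:
                return "blacklisted"
            _, strikes = greylist.get(ip, (0, 0))
            strikes += 1
            if strikes >= STRIKES_BEFORE_BLACKLIST:
                blacklist.add(ip)
                greylist.pop(ip, None)
                return "blacklisted"
            greylist[ip] = (time.time() + GREYLIST_SECONDS, strikes)
            return "greylisted"

        def is_blocked(ip):
            """True if the address is blacklisted or still greylisted."""
            if ip in blacklist:
                return True
            expiry, _ = greylist.get(ip, (0, 0))
            return time.time() < expiry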

      As well, requesting a non-existent resource should be treated just like an attempt to SSH into your box as root!

      Anyone who legitimately runs into your security protections would need to call to get their account reinstated.

      You should also ensure that any functions that will only be *reading* data do not have privileges to *write* data under any circumstances.

      Only writing functions should be capable of writing to your data stores / databases.
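
      As a sketch of that read/write split, one common arrangement is two database roles and two connection factories; the role names, DSNs, and the psycopg2 driver here are assumptions for illustration:

        import psycopg2

        # Two roles: "app_reader" would be granted SELECT only, "app_writer"
        # additionally INSERT/UPDATE. A compromised read-only code path then
        # has no way to modify data. DSNs and role names are illustrative.
        READ_DSN = "dbname=app user=app_reader password=..."
        WRITE_DSN = "dbname=app user=app_writer password=..."

        def fetch_profile(user_id):
            # Read paths use the SELECT-only role.
            with psycopg2.connect(READ_DSN) as conn, conn.cursor() as cur:
                cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
                return cur.fetchone()

        def update_email(user_id, new_email):
            # Only explicit write paths get the role with write grants.
            with psycopg2.connect(WRITE_DSN) as conn, conn.cursor() as cur:
                cur.execute("UPDATE users SET email = %s WHERE id = %s",
                            (new_email, user_id))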

      Any malformed entries stored within your database should be immediately flagged as "bad data" and *not* presented back to the user. The record should simply be gone. Any one user who has more than 3 pieces of "bad data" associated with their account should be immediately blocked pending review.

      The best course of action when designing any hardened application is to assume that even data coming from your own, non-internet-accessible servers is suspect; do that and you will do well at limiting risk.
      • by Thiez ( 1281866 ) on Monday January 02, 2012 @09:33PM (#38568316)

        > and kill the session if a single out-of-order packet is received.

        Isn't that a relatively common and normal occurrence with TCP/IP? I fail to see how this would help as the packets will be presented in the right order to the application anyway.

      • by TheLink ( 130905 ) on Tuesday January 03, 2012 @09:37AM (#38571288) Journal

        and monitor all traffic to detect anyone *attempting* to connect over any other port, and immediately greylist their IP address for an hour. If they repeatedly do it, then blacklist them permanently.

        From what I see in real-world firewall logs, there are often tons of IPs trying to connect to your nonlistening ports. And those can be from dynamic IP users. Blacklisting these permanently would cause more problems and not really help much (assuming your system is hardened and has upstream DoS/DDoS protections in place).

        and kill the session if a single out-of-order packet is received.

        If you're worried about that sort of thing, you should solve it by using TLS/HTTPS (correctly ;) ) rather than killing sessions just because an out-of-order packet is received. If the attacker already has the ability to pwn a user's TLS/HTTPS connections, the attacker has no need to inject out-of-order packets to pwn that user.

        If you're that paranoid what you could do is set up "honey data" and "honey rows" in database tables. For example, you could create customer records of nonexistent people/items who/that don't appear anywhere else in the world. If those data/records are ever accessed, it means something has gone wrong. And if that data ever appears "outside" (internet or elsewhere), it may mean something has gone very very wrong...
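
        If that approach appeals, here is a minimal sketch of the "honey row" idea (SQLite-flavoured SQL; the table, id, and alerting hook are invented for illustration):

          # Seed a fabricated customer that no legitimate code path should
          # ever ask for, then alert if it is ever read. Hypothetical names.
          HONEY_CUSTOMER_ID = 999999901

          def seed_honey_row(conn):
              conn.execute(
                  "INSERT OR IGNORE INTO customers (id, name, email) VALUES (?, ?, ?)",
                  (HONEY_CUSTOMER_ID, "Nonexistent Person", "trap@example.invalid"),
              )

          def fetch_customer(conn, customer_id):
              if customer_id == HONEY_CUSTOMER_ID:
                  alert_security_team(customer_id)   # hypothetical paging hook
              return conn.execute(
                  "SELECT id, name, email FROM customers WHERE id = ?",
                  (customer_id,),
              ).fetchone()

          def alert_security_team(customer_id):
              # In practice this would page someone; here it just logs.
              print(f"ALERT: honey record {customer_id} was accessed")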

        Another way for attackers to access the data would be via the backups and the systems that do backups. So even if your web apps and servers are super-hardened, it may not matter if the attacker can get the data via the backups.

    • by KevMar ( 471257 )

      There is a huge difference though. It is true that you should not trust any clients. But many people make incorrect assumptions.

      They think that when you are working internally, there is a very small number of clients that can possibly connect to it. The odds of a hacker getting onto your network are small. So of course it's secure, it's on a server behind a firewall. Opening an application to the internet strips those security blankets away.

      To be honest, I think we all do a little of that too. We do wha

  • by Anonymous Coward on Monday January 02, 2012 @08:17PM (#38567506)

    For some reason, every bank we deal with (for large business types) is internet explorer only. I guess you'll have to start there.

  • by ka9dgx ( 72702 ) on Monday January 02, 2012 @08:17PM (#38567510) Homepage Journal

    It will get hacked, it's just a matter of time. If you have data that is getting uploaded, then needs to be secure after that, consider using a unidirectional network, also known as a "data diode", which can only send data in one direction.

    If you can't hand the administrator account passwords to someone and rest easy, you shouldn't be counting on it to be secure.

  • EULA baby! (Score:5, Funny)

    by cultiv8 ( 1660093 ) on Monday January 02, 2012 @08:17PM (#38567518) Homepage
    Why harden your web app when you can just write in your EULA that end users can't sue you [slashdot.org]? Profit!
  • and turn off the computer... and hopefully that will keep your data secure :)

    • If you really want to secure it, you'd weld the case shut and fill the jack you plug the power cable into with epoxy. Of course the computer won't be useful afterwards, but it will be secure.

    • by mrmeval ( 662166 )

      It's:
      disconnect it from the net/phone
      wipe the harddrive
      cast it in concrete
      wire it with explosives
      bury it in a vault
      post armed guards outside
      then nuke it from orbit

      I added the nuke just to be sure.

  • by TheEmperorOfSlashdot ( 1830272 ) on Monday January 02, 2012 @08:17PM (#38567526)
    http://www.w3.org/Security/faq/wwwsf4.html [w3.org]

    Once you understand the things they recommend and WHY they recommend them, you won't need to ask this question anymore.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Also read The Web Application Hacker's Handbook. (google: wahh)

    • http://www.w3.org/Security/faq/wwwsf4.html [w3.org] Once you understand the things they recommend and WHY they recommend them, you won't need to ask this question anymore.

      You can also spread your application out into layers. From your request I assume you will be collecting and/or publishing sensitive data. It may be possible to divide that process into sections, and spread the sections over three different machines, with custom-written interfaces between them. That way, when (not if, but when) your world-facing server gets pwned, the pwners will probably be unable to immediately pull anything useful out of the second section (on the second machine), since it isn't using any ord

    • by Chuck Chunder ( 21021 ) on Monday January 02, 2012 @08:53PM (#38567964) Journal

      you won't need to ask this question anymore.

      Pretty bad advice. Unfortunately this is an area where you will continually need to keep asking the question. While there are certainly basics that should be covered there are also subtleties and interactions and new exploits in software you will depend on.

      The OWASP top 10 [owasp.org] is a pretty good starting point.

      • by TheEmperorOfSlashdot ( 1830272 ) on Monday January 02, 2012 @11:09PM (#38568930)
        Not, "How can I write flawless code?," but, "What should I be reading?" The submitter showed no prior knowledge of exploits, so it seemed reasonable to provide him with a simple introduction to the kinds of exploits he may encounter and how they can be prevented.

        Interestingly, the 2010 "OWASP top 10 vulnerabilities" have all existed for over a decade - a competent developer flash-frozen in 1998 and thawed out today would be able to guard against all of those flaws. That's not good evidence for your position that the question "continually needs to be asked."
    • by nahdude812 ( 88157 ) * on Monday January 02, 2012 @09:34PM (#38568332) Homepage

      Wow, I can't believe this is still around. It's pretty dated. Let me demonstrate:

      Q3: Are compiled languages such as C safer than interpreted languages like Perl and shell scripts?

      The answer is "yes", but with many qualifications and explanations.

      Really? C is a safer web language than Perl? Buffer overflows and all? Their example that you might accidentally be editing a file (in production?) in Emacs and leave a backup file sitting around that someone can request, and therefore have access to its source code is so weak it's pathetic. Isn't every major modern web server already configured to refuse to serve files whose mime type it does not recognize from the file extension? "Foo.cgi~" won't be downloadable because the web server doesn't understand what a ".cgi~" file is. Never mind that this example assumes that you're engaging in the egregious sin of editing a file on a production system.

      If you're not editing directly on a production system, you almost certainly have a .gitignore (or .cvsignore or .svnignore or whatever) set up to ignore backup files, so they'll never go through your build system or become part of your deployed package anyway. And STILL, if you're relying on the obscurity of your source code as a security measure, you're doing something wrong. It doesn't hurt to keep the source secret, but by no means should you be compromisable because someone was able to get a peek at one of your source files. If someone wants your source code badly enough, they just need to pay off one of your engineers and they get the entire stack source, maybe even revision history. With corporate espionage it's all but impossible to track down the perpetrator unless he's very stupid, and it leaves a lot less evidence behind than traditional brute-force attempts (like guessing script file names and looking for backup copies somehow left around in production).

  • by msobkow ( 48369 ) on Monday January 02, 2012 @08:18PM (#38567538) Homepage Journal

    I am clueless about what is required to create a web server that is as secure as, say, a banking account management system

    You can't.

    There is no way to provide the same level of security as an in-house application running on dedicated terminals and a dedicated network, like the banks' teller terminals and ATMs.

    And that's because you have no control over the browser and its plugins, so you can't stop them from mismanaging or misrepresenting the data, or custom code in modified copies of open source browsers from saving pieces of secure pages that you never meant to reach a hard drive, etc.

    • I suspect the OP was hoping to make his app as secure as a bank's Web-based account management system

      • Re:Easy answer (Score:4, Informative)

        by msobkow ( 48369 ) on Monday January 02, 2012 @08:38PM (#38567792) Homepage Journal

        Well, then he's picking a poor example of web security given the banking industry's track record on break-ins and id theft.

        If you want to see guidelines about what you have to provide for a secure system, check out Saskatchewan Health Information Protection Act [gov.sk.ca] for one region's take on what data protection means.

        As to the technology of how to deploy that, there are no easy answers and checklist standards. New attack vectors and design oversights come out all the time, so web security is an ongoing battle, not something you just design for and "finish".

        • by msobkow ( 48369 )

          Aside from that, there is no mention of what kind of web server technologies are to be used. I seriously doubt anyone can make many useful suggestions for how to secure an application when you don't even know what tools and languages are to be used to develop it.

          Itemize your security requirements, then filter your tool options based on whether they have features to enable or support those requirements. Find out if it's possible to address any gaps through custom code.

          Only then can you seriously think

  • by unity100 ( 970058 ) on Monday January 02, 2012 @08:20PM (#38567550) Homepage Journal
    And do blanket filtering. Never trust input. Always filter to the extreme, as long as you can get away with it, as much as you can get away with it.
    • And do blanket filtering. Never trust input. Always filter to the extreme, as long as you can get away with it, as much as you can get away with it.

      And remember, filtering input is only half the story. The other half is to harden the server itself by using a good, solid, external hardware firewall, and being careful to run only those services which are absolutely essential. And make sure all those services are patched and up to date. It does no good to harden your web app in the extreme if you're running an old and buggy ssh server on the same system.

      • by swalve ( 1980968 )
        I would agree with that 100%. In addition, use different machines for different filtering tasks. You might have a firewall that keeps bad stuff out of your network, but then have another firewall-type machine that protects and sanitizes only your app server. And have different physical networks. Nobody, not even system admins, should be able to get to the machines from the outside-facing network.
    • by Corbets ( 169101 )

      And do blanket filtering. Never trust input. Always filter to the extreme, as long as you can get away with it, as much as you can get away with it.

      A lot of the posters on this story talk about filtering, and that's absolutely right - but filter on a whitelist, never a blacklist! Think about what inputs or input forms are acceptable rather than trying to identify all possible bad inputs. If you go the blacklist approach, it's almost certain that someone more cracking-minded than you will identify something you missed.
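
      To illustrate the whitelist approach, here is a small sketch where each field has an explicit pattern or set of allowed values and everything else is rejected (the field names and patterns are made up):

        import re

        # Whitelist validation: describe exactly what is acceptable per field.
        FIELD_RULES = {
            "username": re.compile(r"^[a-z][a-z0-9_]{2,31}$"),
            "year": re.compile(r"^(19|20)\d{2}$"),
            "country": {"US", "CA", "GB", "DE"},
        }

        def validate(field, value):
            rule = FIELD_RULES.get(field)
            if rule is None:
                return False          # unknown fields are rejected outright
            if isinstance(rule, set):
                return value in rule
            return rule.fullmatch(value) is not None

        assert validate("username", "alice_01")
        assert not validate("username", "alice'; DROP TABLE users;--")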

  • Slashdot is a good forum to ask this type of question. I'm sure you'll find a few individuals who work securing financial systems, but you are probably better off having a one-on-one with someone with some real experience. Data security methodology will likely be a hot topic this year among management and security researchers, so you should check for conferences in your area, or budget some time to take a weekend or two off. I doubt a book will teach you everything (often year-old information), so I strongl
  • by unity100 ( 970058 ) on Monday January 02, 2012 @08:22PM (#38567584) Homepage Journal
    However hard you write your web app, if it's running anything important, it WILL get hacked. There's nothing on this planet that cannot get hacked if it is software. Even hardware can get hacked, even if it is running read-only software. So assume it will get hacked, and design so that you minimize the damage when the app is hacked.
    • Re: (Score:3, Interesting)

      by naasking ( 94116 )

      However hard you write your web app, if it's running anything important, it WILL get hacked. There's nothing on this planet that cannot get hacked if it is software.

      I disagree. You can mitigate many risks by using a proper (memory-safe) language or a web framework which handles user input and session/CSRF protection. You can mitigate all risks by properly using a theorem prover to verify that your implementation has all the properties needed to guarantee safety.

  • Vulnerabilty Scanner (Score:2, Informative)

    by Anonymous Coward

    You'll never be 100% secure, but take a look at something like http://www.rapid7.com/products/vulnerability-management.jsp. It's the company that bought Metasploit (http://en.wikipedia.org/wiki/Metasploit_Project), which is a common penetration testing framework. They have a free community version you can run against the server if you own it; if you don't own the server, you can check with the hosting company and see if it's OK to run it to verify everything is fine. If it's for banking you should look at http://en.w

  • Get IIS 4 (Score:5, Funny)

    by Billly Gates ( 198444 ) on Monday January 02, 2012 @08:26PM (#38567642) Journal

    And use VBScript with ActiveX controls mixed with SQL Server 6.0, and make sure the clients all have to use IE 6.

    Throw in a little ASP, not ASP.NET or anything bloated that checks the SQL against injections, and you will have one rock-solid platform where nothing will get hacked or intercepted. Just ask any MCSE to secure it and you are good to go.

  • OWASP.org (Score:5, Informative)

    by LouTheTroll ( 1093917 ) on Monday January 02, 2012 @08:27PM (#38567654)
    Be sure to check out all of the fine resources at http://www.owasp.org/ [owasp.org] It's the Open Web Application Security Project. All materials, training, libraries, and content are free. There are numerous local chapters, so be sure to search for one in your area.
  • by Anonymous Coward

    Can the Slashdot community recommend good websites?

    Check OWASP: http://www.owasp.com/index.php/Main_Page [owasp.com]

    • Read the Top 10
    • Join a local chapter

    Also, budget for an engagement with professional penetration testers. Best they find the holes before the black hats do.

  • I recommend taking a look at The Open Web Application Security Project [owasp.org]. There are a significant number of resources listed on this topic.

    Best,
    Z

  • by gman003 ( 1693318 ) on Monday January 02, 2012 @08:28PM (#38567680)

    Trust no inputs. Check your inputs. Validate cookies. Validate parameters. Validate your validation. Encrypt whatever you can, whenever you can.

    SQL injection is the most common vulnerability. Learn how to make it impossible with prepared statements.
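
    For illustration, a minimal parameterized-query sketch (Python's sqlite3 here; every mainstream database API offers the same facility):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

      def find_user(name):
          # The ? placeholder keeps user input out of the SQL text entirely,
          # so a classic injection string is just a literal, non-matching name.
          return conn.execute(
              "SELECT name, email FROM users WHERE name = ?", (name,)
          ).fetchall()

      print(find_user("alice"))                # [('alice', 'alice@example.com')]
      print(find_user("alice' OR '1'='1"))     # [] - no rows leak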

    If possible, hire some white-hat hackers to try to break into the site, and see if they find anything.

    Above all, trust nothing.

    • by KevMar ( 471257 ) on Monday January 02, 2012 @09:42PM (#38568384) Homepage Journal

      Above all, trust nothing.

      That's the most important rule of thumb. Don't even trust your own client code.

      Make definite security boundaries. Draw a circle, label it data. Draw a circle around that circle, label it prepared statements. Keep drawing circles, adding layers for each security boundary, so you have something like this.

      Data-> prepared statements -> firewall -> web server -> business logic -> user state management -> browser -> client side code -> user input

      Each layer needs to validate everything. Let each layer assume that the protected layer in front of it is missing. It just does not exist. One common issue is having only the client-side code validate the user input. I love to modify client-side code to bypass validation just to see what breaks. If it's HTML, there are so many ways to do that.

  • by OleMoudi ( 624829 ) on Monday January 02, 2012 @08:31PM (#38567714) Homepage

    While one can arguably say everything can be hacked (unless air-gapped), in certain scenarios you can at least mitigate the impact of a breach to make it almost irrelevant.

    The easiest example is password storage. Some SQLi may get through and provide someone with a dump of your user passwords, but if you follow up-to-date recommended security practices [codahale.com], the data will be nearly useless.
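
    A minimal sketch of that practice, using a deliberately slow, salted KDF (the standard library's scrypt here; bcrypt, which the linked article recommends, is used the same way - the parameters are reasonable defaults, not a tuned recommendation):

      import hashlib, hmac, os

      def hash_password(password):
          salt = os.urandom(16)
          digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
          return salt, digest               # store both columns per user

      def check_password(password, salt, digest):
          candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
          return hmac.compare_digest(candidate, digest)

      salt, digest = hash_password("correct horse battery staple")
      assert check_password("correct horse battery staple", salt, digest)
      assert not check_password("hunter2", salt, digest)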

    That being said, just by reading the Web Application Hacker's Handbook [mdsec.net] and following all of its recommendations you will have a pretty secure app.

  • Layers (Score:5, Insightful)

    by jbolden ( 176878 ) on Monday January 02, 2012 @08:32PM (#38567728) Homepage

    A few things:

    1) Multiple layers. Consider your application and the entire framework it exists in. Assume that each part is completely under the control of a hostile party. Now design the system so that the hostile party still can't do much harm. So, for example, start with the web server and assume it is hostile: how are you protecting the data? Go through the entire architecture this way and make sure you can contain any part under hostile control even if it goes undetected.

    2) You probably want to be using capabilities, not permissions, i.e. X has permission to do Y to Z, not X has permission to do Y. That takes a ton of time to set up, and it is as much a jump in security as going from no passwords to passwords.
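
    To make the distinction concrete, a tiny sketch where every grant names the object as well as the action (the users, actions, and object ids are invented):

      # "X may do Y to Z", not just "X may do Y": a user with generic write
      # rights still cannot touch an object that was never granted to them.
      grants = {
          ("alice", "read", "invoice:1041"),
          ("alice", "write", "invoice:1041"),
          ("bob", "read", "invoice:1041"),
      }

      def authorize(user, action, obj):
          if (user, action, obj) not in grants:
              raise PermissionError(f"{user} may not {action} {obj}")

      authorize("alice", "write", "invoice:1041")       # allowed
      try:
          authorize("bob", "write", "invoice:1041")     # bob may only read it
      except PermissionError as e:
          print(e)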

    3) You want to use languages, servers, software that are security aware and designed. So for an obvious example you want to use web frameworks that taint check everything as a matter of course. You want a database that does the same thing (remember multiple layers).

    4) You are going to want a full security implementation: a segmented network, the server in a DMZ with monitoring, behind a firewall. You are going to want intrusion detection and vulnerability assessment.

    5) If you are really serious, hire a white hat team to audit you and do multiple cycles.

    And if your boss is serious I'd be happy to start discussing this professionally.

    • As someone in this field for far too many years, it's great to find someone who pretty much outlined my strategy for secure web apps.

      The rules are: All input is suspect, all users are suspect, deny access to everything and go from there.

      On a sad note, I've yet to find a boss willing to do it right, there's always a shortcut to save time or money.

      • by jbolden ( 176878 )

        All input is suspect, all users are suspect, deny access to everything and go from there.

        Nicely put. I agree. And possibly -- internal users are suspect as well.

        I agree with you on money saving. Very few people want to pay for security, except for security companies and the military. Web companies aren't bad either because they have to deal with so many attacks. But in general most people want to claim security without the spending.

        • by Joe U ( 443617 )

          Nicely put. I agree. And possibly -- internal users are suspect as well.

          I'm actually in the mindset that all users, including the administrators, are suspect. I get a lot of flak on that, but personally, I have yet to see a reason to give anyone complete direct access to raw data without making it difficult.

          Also, for some reason, many devs don't like the whitelist method for validating input. Never really understood why.

          • by jbolden ( 176878 )

            Whitelisting is a pain in most languages and a lot of work. That's why I'd go with languages and/or frameworks that make it mandatory and easy. But fundamentally most developers don't like doing security, that's why it is often a good idea to have a security development team do work like validation of input functions and then the more application oriented stuff is done by developers.

    • by Enleth ( 947766 )

      This. Especially the layers.

      If you can, split the application into two parts - the front end running on a world-facing web server and a back end on a private network. Use a well-defined, high-level protocol for communication between the two. If you can afford (literally, it's just a matter of throwing more hardware at the problem) some overhead, use a text-based serialization format with a solid, well-tested parser. The simpler, the better. Check every single request at the backend in every possible way, data

      • by jbolden ( 176878 )

        I wonder if any web applications that properly implement all those things and more even exist

        Yes, this sort of layered approach is common in most of the internet application services. I've worked with Yahoo and Myspace on parts of their infrastructure. They get an amazing amount of attacks.

  • by DarwinSurvivor ( 1752106 ) on Monday January 02, 2012 @08:42PM (#38567846)

    Software engineering is fairly similar to structural engineering. Just as an architect does not truly understand how to create an indestructible building without first learning how buildings are destroyed, you can't possibly hope to create a secure software system without understanding how software is broken.

    If you are serious about securing your software (without having a security expert on hand), you need to spend some time *breaking* software. http://www.hackthissite.org/ [hackthissite.org] has some fairly good tutorials, but you're also going to need to learn about buffer overruns, binary magic (such as never-ending zip files and over-sized jpegs), sql injection, malformed packets, firewalling, fail2ban, encryption (certificates at the very least), intranet isolation, air-gapping, client-securing, hardware securing (disabling USB ports), etc.

    Basically, there is a reason security experts spend so much time in school and charge so much per hour. If this project is already in the blueprint stage and has a deadline, you should be looking to hire a security expert at the planning stage and at least a few audit stages along the way. If this is more of a pet project, it could be a very good way to get yourself motivated to learn these subjects.

    • by Jaime2 ( 824950 )

      Software engineering is fairly similar to structural engineering. Just as an architect does not truly understand how to create an indestructible building without first learning how buildings are destroyed, you can't possibly hope to create a secure software system without understanding how software is broken.

      Earthquakes don't adapt their attack strategies as well as hackers. Learning to hack will help you harden against the lamest attacks of five years ago. Every single buffer-overflow can be fixed by simply keeping up on patches. Every SQL Injection vulnerability can be fixed by using a proper database access layer.

      I find it much more effective to first apply basic coding standards based on OWASP, then to think of every web page from a request-response perspective instead of from a user perspective. 90% of

    • I've already commented, otherwise I'd mod this up. I sometimes program or modify in assembly language, and I have to say, cracking an app here or there has made me absolutely paranoid when writing. I still think in machine execution terms when writing VB.NET. I consider what the CPU has to do to get the results I want.

      Side effect, I was able to make a web service 30 plus times faster by changing maybe 4 lines of code. Because I know what the VM/Interpreter has to do, and I tell it to do the same thing a

  • I'm not very practiced in developing "hardened" web apps (mostly I've just worked with already written code that is secure), but:
    Use as little javascript as possible (if you're planning to use web2.0 AJAX type stuff). It's almost laughably easy to change javascript after the webpage has loaded (Greasemonkey for example). If you're super good at programming clean secure C/C++ you might want to program your own webserver (servers like Apache are easier to use, yes, but they release security patches to them
    • by mortonda ( 5175 )

      Use as little javascript as possible (if you're planning to use web2.0 AJAX type stuff). It's almost laughably easy to change javascript after the webpage has loaded (Greasemonkey for example).

      That really has no impact, as long as you make sure the server side validates all actions to be sure they are correct and allowed. Javascript is great to enhance the experience, but it does nothing for security.

      • A friend of mine once had a router that used javascript based authentication that could be hacked using Greasemonkey. So don't do that, is sorta what I was trying to say. I suppose I could have said it better though....
        • by mortonda ( 5175 )

          Right; the key is, javascript can do nothing for security, only for interface enhancement. The security must be maintained only by the server side.

    • by Jaime2 ( 824950 )

      If you're super good at programming clean secure C/C++ you might want to program your own webserver (servers like Apache are easier to use, yes, but they release security patches to them all the time, so they aren't THAT secure. A dedicated single program that only does a few things is likely to have less vulnerabilities).

      That's the worst idea I've ever heard. Apache and IIS both average less than ten discovered meaningful vulnerabilities per year. Making a secure HTTP server is way harder than you think it is.

      Use SSL or some other form of secure transport (https). This will ensure (well, not ensure, but make it more difficult) that even if someone is able to snatch your user's packets (like if they are in Starbucks or something), they will have to decrypt them before they get a token (by which time it will have expired).

      Look up Cross-Site Request Forgery, it's a whole class of attack that makes the browser do the hard work for you. This is a great example of a case where using a framework instead of rolling your own is best. Most authentication frameworks had some CSRF protections in them before most web developers even knew it was
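
      For reference, here is a minimal sketch of the synchronizer-token check such frameworks typically perform; the session handling is hand-waved and the names are hypothetical:

        import hmac
        import secrets

        def issue_csrf_token(session):
            # Issued alongside the session and rendered into a hidden form field.
            token = secrets.token_urlsafe(32)
            session["csrf_token"] = token
            return token

        def verify_csrf_token(session, submitted):
            # Reject any state-changing request whose token doesn't match.
            expected = session.get("csrf_token", "")
            return bool(expected) and hmac.compare_digest(expected, submitted)

        session = {}
        form_token = issue_csrf_token(session)
        assert verify_csrf_token(session, form_token)
        assert not verify_csrf_token(session, "forged-or-missing-token")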

  • Yay for the OWASP recommendations. Also, read this book by Ross Anderson.

    Buy a couple days from a security vendor like GDS or Cigital for a security architecture review. Good luck!

    I do this for my day job.

  • by Tony Isaac ( 1301187 ) on Monday January 02, 2012 @08:46PM (#38567910) Homepage

    Citibank had a security hole that let people just change the credit card number in the URL! http://yro.slashdot.org/story/11/06/26/1334209/citi-hackers-got-away-with-27-million [slashdot.org]. AND they passed security audits!

    I can also speak from personal experience. A company I worked for had to pass a security audit in order to do business with the City of Houston government. It was a joke. We programmers all knew of glaring security holes, but the audit missed everything, and we passed with flying colors.

    The moral of the story? Use common sense. Do the things that you know make a site more secure. Don't store plain-text passwords. Use stored procedures. Use SSL. Use the latest development tools. Somebody will still find a way around your security controls. But to keep your customers happy, get a security audit done. That will give them the peace of mind they want, and you the cover you need.

    Nobody has created real rock-solid security--physical or digital--without spending truckloads of money.

  • 1950s computer science used a model of "input/output/processing/storage", and it worked well for most projects, but it also kept programmers' minds on data flow. Find out how that data flow can be abused and prevent it. The simpler a system is, the fewer bugs it will tend to have.

    Also don't use systems that want to load up hundreds of packages to do something simple. Software complexity is the root of all security issues.

  • You can start by reading up on what's frequently been the vector for system break-ins elsewhere, and avoiding those mistakes. For instance:

    - Don't write in php, and especially don't rely on a php-driven framework. While no language is perfect, serious php exploits still appear with alarming regularity (as compared to, say, perl or python). Also be sure to disable php on your server if possible (e.g. just redefine all php-related extensions to be treated as text/html).

    - If you need an SQL back end, learn how

  • Anyone who has actually worked for a time in web development will tell you that tight schedules, shoestring budgets and large sites, with many moving parts, all conspire to ensure that security holes exist in just about every non-trivial web app of any meaningful size. It's not that we're unaware of SQL injection, validation of inputs or even cross site scripting, we just don't have time to check and test everything and all possible interactions before we have to move on to something else (thank you MBAs).
  • ... then just set the password for root to "rewt" and you're done.

    Seriously, the way banks do things should absolutely never be a model for security. Run BSD (not Windows, not Mac, and not Linux). Find the smallest open source web server that can do what you need (but absolutely not Apache), and review the source and history of bugs and exploits. Or just write your own. Avoid languages with lots of modules if you can, and certainly avoid those modules. As much as you can you need to be writing all the co

  • Coming from a windows environment, you've probably already lost.

    But here is a book to show you how much you don't know (in a platform agnostic way):

    The Tangled Web [amazon.com] by Michal Zalewski.

    Also, forget all of the advice above about starting by writing your own webserver. That's a fool's errand.

  • First, make sure your software is written correctly. That is, use Java with a lot of unit testing and good coverage using something like Cobertura.

    Now, given a correct program, the only way it can fail is if it accepts bad input from the user. This means you need to write your program so that you validate your inputs as early as possible. If the input is clean your program will not fail except for something out of its control - such as a resource failure (bad network, hardware failure, etc.). The hard thing is th

  • by dwheeler ( 321049 ) on Monday January 02, 2012 @09:46PM (#38568416) Homepage Journal
    Take a look at my book on secure programming: http://www.dwheeler.com/secure-programs/ [dwheeler.com]. I wrote it after I saw software getting broken into, again and again, for the same old reasons.
  • In order to understand security, you must first understand how to hack or abuse a website. I recommend spending at least a week as a hacker. Here are some things to get you started:
    1. Install Firefox or Chrome, I like Firefox for webdev.
    2. Install GreaseMonkey https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/ [mozilla.org]
    3. Install TamperData https://addons.mozilla.org/en-US/firefox/addon/tamper-data/ [mozilla.org]
    4. Learn about backend resource management and data validation, specifically SQL Injection. http:// [hackthissite.org]
  • by Stultsinator ( 160564 ) on Monday January 02, 2012 @10:49PM (#38568806)

    ModSecurity (or any other WAF) can greatly decrease the number and kinds of attacks that actually make it through to your application. And like a good firewall it can alert you when you're under attack. If you do nothing else, put this in place.

    You also want to make sure your app is solid, so head on over to DISA and see what the military recommends. They have Security Technical Implementation Guides (STIGs) for just about everything in your architecture: http://iase.disa.mil/stigs/app_security/index.html [disa.mil]

    Once you have things built, test! Use some of the open source penetration testing tools to see if there are any known vulnerabilities in your stack. Try it with and without your WAF in place.

    Finally, if you really need to go the extra mile, it's time to shell out some cash for professional penetration testers. They'll have a tool belt full of open source and proprietary tools and the good ones will even do a static analysis of your code.

  • by LostMyBeaver ( 1226054 ) on Tuesday January 03, 2012 @05:01AM (#38570196)
    Sure, there are a gazillion lines of COBOL code out there in banking; it would take FOREVER to replace, though there are more modern banking systems which provide a solid starting point, and it could be done, and it might cost less than running IBM mainframes. But there's another great reason for COBOL.

    COBOL is a crippled language. Unlike most other languages, where all the code is written in that one language (or maybe with the help of one more), COBOL is generally only used for processing. RPG-IV or something similar is used for terminal user interfaces; web-based systems can communicate with the COBOL back end, and then COBOL performs the transactions required. COBOL itself doesn't even store the data, even though it often has a crippled internal ISAM or can possibly use SQL. More often than not, COBOL is linked to a DB/2 database... which is more of a large-scale ISAM as opposed to a SQL server. Think of it as the data store used by an SQL server. Records are searched for using archaic search methods based on classic database structures... more like how it was done with dBase, FoxPro or Clipper.

    While I don't recommend writing code in COBOL... you should consider the mainframe model of development.

    1) The web interface DOES NOT query data directly from the data store. Instead, it requests data from a broker on another server.

    2) The other server is connected using a non-IP protocol. The only method of requesting data from the other server is through a simpler interface which is fully known. You can't ask for credit card numbers for example. This can be accomplished using Ethernet, but avoid using IP. It's far too big and hard to understand. Using Infiniband as a link is great because you can use MPI over Infiniband to send a message asking for data and then wait for a result. This sounds more expensive than it is... but Infiniband is pretty damn cheap until you need fabric switches. Point to point is pretty easy on the wallet. The key is, any machine accessing data on the database should not under any circumstance be able to define its own query. Add a new function with parameters for each query. It must be explicit.

    2a) Ultimate paranoia. Critical information such as credit cards and social security numbers exists on a separate payment processing machine hidden behind that machine. All communication with that machine is performed over a dedicated data link such as a high speed serial port (you can get them in megabit+ speeds these days) and all queries performed on that machine will be 100% explicit and will guarantee that there is no possible means of requesting anything other than the last 4 digits of either number. Transactions are sent to that machine, it performs the transaction and responds back with "Yes or no!". If a single user has multiple credit cards... there can be a query function which returns the card type and the last 4 digits only. (See the sketch after this list.)

    3) Just like an IBM mainframe. NO natively compiled C/C++ code. This is simply because C and C++ lack proper memory management for secure systems. People still use pointers when they should use classes and there's no run-time checking on these things. Instead, run inside a VM-type environment. I recommend using a web server running on top of Mono or on top of Java, not a web server running Mono or Java inside of them. This way, there's a much higher amount of security involved. Yes, this can still go wrong... but the chances are much lower. Make it a requirement that all classes and functions which are used in the system are compiled for the virtual machine and do not call out of the VM... unless there is no alternative.

    4) Port 443 is the only open port on the systems... no exception. If you run netstat -a there should be absolutely no ports listening other than HTTPS.

    5) Hardware-based load-balancing proxy server. These are expensive-ish... but if you get a hardware-based proxy server that load balances by proxying HTTPS requests across your web servers, it'll m
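
    As promised under points 2 and 2a, here is a sketch of the "explicit function per query" idea: the web tier cannot compose its own queries, only call fixed, parameterized entry points, and card data never leaves the inner service unmasked. The broker class, names, and in-memory storage are placeholders; the transports the post suggests (MPI, a serial link) are out of scope here.

      class PaymentBroker:
          def __init__(self, records):
              self._records = records              # {customer_id: [card numbers]}

          def cards_for_customer(self, customer_id):
              """Explicit query: only the card type and last four digits."""
              return [
                  {"type": "VISA" if c.startswith("4") else "OTHER", "last4": c[-4:]}
                  for c in self._records.get(customer_id, [])
              ]

          def charge(self, customer_id, card_last4, amount_cents):
              """Explicit transaction: the broker answers only yes or no."""
              cards = self._records.get(customer_id, [])
              ok = any(c.endswith(card_last4) for c in cards)
              return ok and amount_cents > 0

      broker = PaymentBroker({42: ["4111111111111111"]})
      print(broker.cards_for_customer(42))    # [{'type': 'VISA', 'last4': '1111'}]
      print(broker.charge(42, "1111", 1999))  # True
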
  • by kikito ( 971480 ) on Tuesday January 03, 2012 @05:18AM (#38570258) Homepage

    Reading up on the subject will not hurt, but I think you must hire an expert to have this done correctly. Computer security is a field complex enough to warrant more than reading a couple of books.

    If your budget doesn't allow for this, then probably your client doesn't really need the kind of security they are asking for.
