Ask Slashdot: Writing Hardened Web Applications? 333
rhartness writes "I am a long-time software engineer; however, almost all of my work has been developing server-side intranet applications or applications for the Windows desktop environment. With that said, I have recently come up with an idea for a new website which would require extremely high levels of security (i.e. I need to be sure that my servers are as 100% rock-solid and unhackable as possible). I am an experienced developer and I have a general understanding of web security, but I am clueless as to what is required to create a web server that is as secure as, say, a banking account management system. Can the Slashdot community recommend good websites, books, or any other resources that thoroughly discuss the topic of setting up a small web server or network for hosting a site that is as absolutely secure as possible?"
Start with the W3 guide to secure CGI programming (Score:5, Informative)
Once you understand the things they recommend and WHY they recommend them, you won't need to ask this question anymore.
Filter EVERY input right at the start. (Score:5, Informative)
Vulnerability Scanner (Score:2, Informative)
You'll never be 100% secure, but take a look at something like http://www.rapid7.com/products/vulnerability-management.jsp. Rapid7 is the company that bought Metasploit (http://en.wikipedia.org/wiki/Metasploit_Project), a common penetration testing framework. They have a free community version you can run against the server if you own it; if you don't own the server, check with the hosting company to see if it's OK to run before verifying everything is fine. Since it's for banking, you should also look at http://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard, as it sets the standards for handling financial data.
Re:Web Applications aren't different (Score:5, Informative)
OWASP.org (Score:5, Informative)
Re:Web Applications aren't different (Score:5, Informative)
As well, block *every* port except the one that you intend to use within your application, and monitor all traffic to detect anyone *attempting* to connect over any other port, immediately greylisting their IP address for an hour. If they repeatedly do it, then blacklist them permanently.
As well, requesting a non-existent resource should be treated just like an attempt to SSH into your box as root!
Anyone who legitimately runs into your security protections would need to call to get their account reinstated.
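The greylist-then-blacklist policy above can be sketched as a small in-memory tracker. This is a minimal illustration with hypothetical names and thresholds (an hour-long greylist, a permanent ban after three offenses), not a production rate limiter:

```python
import time

GREYLIST_SECONDS = 3600  # greylist an offending IP for an hour
BLACKLIST_AFTER = 3      # offenses before a permanent ban (illustrative)

offenses = {}    # ip -> list of offense timestamps
blacklist = set()

def record_offense(ip, now=None):
    """Log a suspicious event (blocked port probe, bogus URL, etc.)."""
    now = now if now is not None else time.time()
    offenses.setdefault(ip, []).append(now)
    if len(offenses[ip]) >= BLACKLIST_AFTER:
        blacklist.add(ip)  # repeat offender: ban permanently

def is_blocked(ip, now=None):
    """True if the IP is blacklisted or greylisted within the last hour."""
    now = now if now is not None else time.time()
    if ip in blacklist:
        return True
    return any(now - t < GREYLIST_SECONDS for t in offenses.get(ip, []))
```

In practice you would do this at the firewall (e.g. with fail2ban-style tooling) rather than in application code, but the logic is the same.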
You should also ensure that any functions that will only be *reading* data do not have privileges to *write* data under any circumstances.
Only writing functions should be capable of writing to your data stores / databases.
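One way to picture the read/write split described above: give reading code a handle that physically cannot write. In a real database you would enforce this with separate roles and GRANTs; this sketch approximates it with SQLite's read-only URI mode, and all names are illustrative:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")

# Writer connection: used only by functions that are allowed to write.
conn_rw = sqlite3.connect(db_path)
conn_rw.execute("CREATE TABLE users (name TEXT)")
conn_rw.execute("INSERT INTO users VALUES ('alice')")
conn_rw.commit()

# Read-only connection: any write attempt through it raises an error.
conn_ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)

def list_users():
    # Reading functions only ever see the read-only handle.
    return [row[0] for row in conn_ro.execute("SELECT name FROM users")]
```

With PostgreSQL or MySQL you would instead connect the read path as a role granted only SELECT, so the guarantee holds even if the application code has a bug.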
Any malformed entries stored within your database should be immediately flagged as "bad data" and *not* presented back to the user. The record should simply be gone. Any one user who has more than 3 pieces of "bad data" associated with their account should be immediately blocked pending review.
The best course of action when designing any hardened application is to treat even data coming from your own, non-internet-accessible servers as suspect; do that and you will do well in limiting risk.
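"Treat everything as suspect" usually means whitelist validation at every boundary, even for data that arrived from an internal service. A minimal sketch, with a hypothetical field and an illustrative pattern:

```python
import re

# Whitelist pattern: only letters, digits, and underscore, max 32 chars.
# The field name and limits here are illustrative assumptions.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def validate_username(value):
    """Reject anything that isn't a plainly well-formed username,
    regardless of which server or layer it came from."""
    if not isinstance(value, str) or not USERNAME_RE.fullmatch(value):
        raise ValueError("rejected suspect input")
    return value
```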
Re:Easy answer (Score:4, Informative)
Well, then he's picked a poor example of web security, given the banking industry's track record on break-ins and ID theft.
If you want to see guidelines about what you have to provide for a secure system, check out Saskatchewan Health Information Protection Act [gov.sk.ca] for one region's take on what data protection means.
As to the technology of how to deploy that, there are no easy answers and checklist standards. New attack vectors and design oversights come out all the time, so web security is an ongoing battle, not something you just design for and "finish".
You give banks too much credit (Score:5, Informative)
Citibank had a security hole that let people just change the credit card number in the URL! http://yro.slashdot.org/story/11/06/26/1334209/citi-hackers-got-away-with-27-million [slashdot.org]. AND they passed security audits!
I can also speak from personal experience. A company I worked for had to pass a security audit in order to do business with the City of Houston government. It was a joke. We programmers all knew of glaring security holes, but the audit missed everything, and we passed with flying colors.
The moral of the story? Use common sense. Do the things that you know make a site more secure. Don't store plain-text passwords. Use stored procedures. Use SSL. Use the latest development tools. Somebody will still find a way around your security controls. But to keep your customers happy, get a security audit done. That will give them the peace of mind they want, and you the cover you need.
Nobody has created real rock-solid security--physical or digital--without spending truckloads of money.
Re:Be paranoid (trustno1) (Score:4, Informative)
Above all, trust nothing.
That's the most important rule of thumb. Don't even trust your own client code.
Make definite security boundaries. Draw a circle and label it "data". Draw a circle around that circle and label it "prepared statements". Keep drawing circles, adding a layer for each security boundary, so you have something like this:
Data-> prepared statements -> firewall -> web server -> business logic -> user state management -> browser -> client side code -> user input
Each layer needs to validate everything. Let each layer assume that the layer in front of it is missing. It simply does not exist. One common issue is having only the client-side code validate the user input. I love to modify client-side code to bypass validation just to see what breaks. If it's HTML, there are so many ways to do that.
Re:If you don't know, you can't do it (Score:5, Informative)
That's fairly naive in web terms. For example, the application may carefully check that an incoming string is valid for what it expects, but fail to correctly encode it on output, creating a cross-site-scripting vulnerability (for example, if the input contained a <script> element). There's also a lot to check. For a number it's not too hard: you check that the input is an integer/decimal as appropriate, and do a range check if relevant. For a string it gets harder; a length check is obvious, but what about checking the character set? It turns out that just finding out what the character set of an incoming string _is_ is difficult (blame IE): http://www.crazysquirrel.com/computing/general/form-encoding.jspx [crazysquirrel.com]
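The input-valid-but-output-unencoded trap above is why output encoding matters on its own: a string that passed every input check must still be escaped when rendered into HTML. A minimal sketch using the standard library:

```python
import html

def render_comment(user_text):
    """Escape user text at the point it is written into HTML, so a
    stored <script> payload renders as harmless text."""
    return "<p>" + html.escape(user_text) + "</p>"
```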
Then you get cases such as CSRF (cross-site request forgery) attacks ( http://en.wikipedia.org/wiki/Cross-site_request_forgery [wikipedia.org] ), where the user is fooled into clicking a link that sends a request to the web site. If they're logged in, the browser will typically send the appropriate cookies, meaning that from the server's point of view the user has sent an entirely valid request.
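The standard defense against CSRF is a per-session token: the server embeds a random token in its own forms and rejects any state-changing request that doesn't echo it back. A forged cross-site request carries the victim's cookies but cannot know this token. A minimal sketch (function names are illustrative):

```python
import hmac
import secrets

def issue_token():
    """Generate an unguessable CSRF token to store in the session
    and embed in the server's own forms."""
    return secrets.token_urlsafe(32)

def request_is_legitimate(session_token, submitted_token):
    """Accept the request only if it echoes the session's token back;
    compare in constant time to avoid timing leaks."""
    return hmac.compare_digest(session_token, submitted_token)
```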
OTOH, to say "If you don't know, you can't do it", is hopelessly defeatist. I would not start with a security-critical web application any more than I would start with any other security-critical application, but you can learn this stuff. Alas, it does take time...
Re:If you don't know, you can't do it (Score:1, Informative)
Re:If you don't know, you can't do it (Score:4, Informative)
You don't know how many times I've told people that. They're usually the same people who say "How could they have done it?", and then I have to break out years-old writeups of the exploit.
Case in point: SQL injection. I was talking to some web programmers who had apparently worked in a bubble, learned everything from books, and glossed over the part about "never trust user input". They didn't get it. I demonstrated a SQL injection against their code. Then they were willing to listen.
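That kind of demonstration is easy to reproduce against an in-memory SQLite database. The vulnerable version below splices user input straight into the query; the safe version uses a parameterized query, which is the fix the post is arguing for (table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-secret"), ("bob", "b-secret")])

def lookup_vulnerable(name):
    # DON'T do this: user input is spliced straight into the SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
```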
Too many programmers see user input as trustworthy. Back in the day, it was as simple as "don't allow ` or ; in strings you send to system calls". People even screwed that up. Then it was "don't do system calls, do everything natively in your code". People ignored that. Then it became "never trust user input" and "sanitize any user-provided data". It's sad but true: they still don't care.
I've introduced people to hacking tools and methodologies. It's not so they can hack. I "encourage" them to try to hack their own code. Code it right the first time. Then attack it to prove that it is right. And keep trying to break into it. Learn better techniques, and teach me something. I don't mind in the least if a coworker can show me that I'm wrong. It's worse if a malicious 3rd party does.
There's no excuse for someone not to know and use the same tools that attackers use, to defend themselves.