Networking

Ask Slashdot: Experience Handling DDoS Attacks On a Mid-Tier Site?

Posted by Unknown Lamer
from the is-that-likes-windows-3.1 dept.
New submitter caboosesw writes "A customer of mine was recently hit by a quick and massive DDoS attack. While we were in the middle of it, we learned that there are proxy services of varying maturity for dealing with these kinds of outbreaks, from the small and mysterious (DOSArrest, ServerOrigin, BlackLotus, DDOSProtection, CloudFlare, etc.) to the large and mature (Prolexic, Verisign, etc.). Have you guys used any of these services, especially at the lower price point that a small e-commerce (not pr0n or gambling) company could afford? Is a DDoS service really mandatory, now that Gartner puts this type of service in the same tier as SIEM, firewalls, IPS, etc.?"
  • by microTodd (240390) on Monday April 09, 2012 @07:19PM (#39625509) Homepage Journal

    Remember this really cool slashdot story about a sysadmin on the receiving end of a DDOS?

    http://slashdot.org/story/01/05/31/1330202/post-mortem-of-a-dos-attack [slashdot.org]

    The original writeup link is dead but I found it here (warning: PDF). This was a really cool story.

    http://www.stanford.edu/class/msande91si/www-spr04/readings/week1/grcdos.pdf [stanford.edu]

  • by SethJohnson (112166) on Monday April 09, 2012 @07:23PM (#39625541) Homepage Journal

    watch the attack and start blacklisting IP ranges.

    In most cases, your customers are going to exist in one or a few countries. It would be valuable ahead of time to add redirect rules to your iptables for entire ranges of IP addresses located in countries that don't host your customers. Redirect these IP ranges to a sacrificial server on a different pipe to the backbone. That way, when some of your customers are abroad and need access to your services, they can still get some amount of response.

    Additionally, you can proactively parse your user accounts for IP addresses and build a whitelist ruleset for your iptables to implement in a defcon 0 situation. Don't use this as a normal operations mode, just when the shit has really hit the fan and you need to block everyone except your known-good account holders.
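    A minimal sketch of the two rule sets described above. All addresses are placeholders from the documentation ranges (198.51.100.0/24 as a hypothetical hostile range, 192.0.2.10 as the sacrificial server, 203.0.113.0/24 as a known-good customer range); substitute your own.

```shell
# Redirect an entire foreign range to a sacrificial box on a different pipe
# (DNAT lives in the nat table, so it must go in PREROUTING).
iptables -t nat -A PREROUTING -s 198.51.100.0/24 -p tcp --dport 80 \
    -j DNAT --to-destination 192.0.2.10

# "Defcon 0" whitelist: allow known-good account-holder ranges, drop the rest.
iptables -N CUSTOMERS
iptables -A CUSTOMERS -s 203.0.113.0/24 -j ACCEPT   # ranges parsed from user accounts
iptables -A CUSTOMERS -j DROP
iptables -I INPUT -p tcp --dport 80 -j CUSTOMERS
```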

    Seth

  • Misunderstanding (Score:5, Informative)

    by lanner (107308) on Monday April 09, 2012 @07:26PM (#39625581)

    The mere question of how to mitigate a DDOS indicates a fundamental lack of understanding of how IP networking and DDOS works.

    You (the ISP customer) have no ability to control what packets are sent to you over your uplink circuits. You can control what you send, but you have no ability to control what you receive.

    Read the sentence above. Repeat as necessary.

    Even if you knew with 100% certainty which packets were "bad" packets and which were "good" packets, if your uplink is saturated, dropping them on your edge router/firewall/whatever is 100% ineffective.

    The best mitigating strategy is that you need to have an agreement with your ISP and plan in place prior to an attack. Identify the hostile addresses, give them to your ISP, and they will null-route those sources either within their core or even at the edges of their networks to prevent entry. Your ISP has the capacity to mitigate a DDOS attack, you as the little customer do not.
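    On your own edge, the closest equivalent to what the ISP does is a blackhole route; note that this only stops your replies, which is exactly why the null-routing has to happen upstream to save your uplink. A sketch, again using a placeholder documentation range:

```shell
# Locally null-route a hostile source range. The inbound flood still fills
# your uplink, so hand the same prefix list to your ISP to blackhole in
# their core or at their network edges.
ip route add blackhole 198.51.100.0/24
```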

  • by rev0lt (1950662) on Monday April 09, 2012 @07:27PM (#39625585)
So, are you saying nginx will work when you receive more TCP requests than your server can handle? Or the upstream router? Or when every page render is a database hit? Nginx is a lot faster than Apache when serving static content, at the cost of some flexibility. Guess what? Most of the web isn't static content, even if it appears so. Do you think sessions and agent info are logged into the ether? Get real.
    And yes, I DO use nginx, and it rocks. It's just not the silver bullet you're talking about.
  • by Anonymous Coward on Monday April 09, 2012 @07:49PM (#39625775)

    Even if you knew with 100% certainty which packets were "bad" packets and which were "good" packets, if your uplink is saturated, dropping them on your edge router/firewall/whatever is 100% ineffective.

    Your "best strategy" advice is very good, but it is not the "only strategy."

As others have said, you can also have multiple entry points all sharing the same back end. Each of these entry points can be on its own hosting provider. In principle, you can arrange for the front-end/back-end connection at your front-end provider to NOT share a physical wire with the "public" side of your front-end, so if the public side gets hit hard, the attack won't crowd out traffic going to/from the back end.

    Here's an example:

I run poormeddosvictim.com. I have servers at 3 sites around the country: 1.666.3.4, 1.2.666.4, and 1.2.3.666.

    For some reason, some mining company on Mars thinks I am evil so they keep DDOSing me.

    Hosting provider A is widely connected. I advertise 1.666.3.4 so all but one of A's pipes see direct connections. I use A's remaining pipe to connect back to my back-end. I work with A so the traffic to the back-end never shares a wire or router with incoming traffic. Bang on A's incoming pipes all you want, I'll still be able to talk to my back end unless you crash me entirely.

    I have similar arrangements with hosting providers B and C.

    I put my back end at hosting provider D and, just for grins, have a backup back end on hosting provider E that syncs up regularly with the back-end on D.

  • by Shoten (260439) on Monday April 09, 2012 @07:59PM (#39625839)

    It doesn't help against DDoS attacks. Not even remotely, not even a little bit. To put the advice to a metaphor, a DDoS attack is where there are so many people loitering in the front lobby of a business that people can't even get into the front door of a building. Using a different web server is like having a receptionist who speaks faster; it doesn't address the nature of the attack in the slightest way possible. These attacks are either driven by saturation of network links or by leveraging vulnerabilities in underlying database-driven applications (hint: a little-known SQL command called WAITFOR is often to blame); using nginx won't help in the slightest bit.

    Christ...these attacks are over a decade old; read up or be quiet.

  • by stanlyb (1839382) on Monday April 09, 2012 @08:39PM (#39626149)
Actually it does, and that's one of the reasons for using nginx as a proxy and cache server in front of the Apache server.
  • Re:Misunderstanding (Score:5, Informative)

    by Liquid-Gecka (319494) on Monday April 09, 2012 @08:50PM (#39626229)

    This is a bit of a naive explanation.

Let me explain how a DDoS mitigation strategy works for many of the companies listed in the summary. They set up datacenters in 10, 15, or more places, each hosting a proxy. Some of these solutions use DNS to route traffic around problems (GSLB), while others like CloudFlare use Anycast, which is awesome and super hard to get right. Each of these services is typically set up with tons of bandwidth capacity, well over 10Gb/s, oftentimes into the 100Gb/s range. They also often have deals with upstream providers that can filter traffic at the edges, meaning it never makes it onto the internet in the first place.

Since your servers are not exposed to the internet, and the ones that are have far, far more horsepower than a DDoS will ever manage from the client side, they can easily churn through the attack, discarding connections and never letting them hit your limited servers. This is how they can easily survive Anonymous-style DDoS attacks.

The other thing is to ensure you have turned off every "feature" your load balancer is giving you: SSL termination at the LB, full session management, etc. All of these cost load balancer CPU, which is easy for an attacker to take advantage of, even with a DDoS mitigation system in front of your site. And you can't just add a few more servers; adding capacity to a load balancer is nearly impossible to do mid-attack.

Even more interesting is that you can often trick crappy DDoS software by doing things like sending excessively slow responses (tarpitting), making its loop take ages before it tries again. This is pretty much using the tactics of a DDoS directly against the attackers.

    Another common tactic is to add attackers to a view in your bind config that resolves your hostname to 127.0.0.1 just for them. This works if you do not have long TTLs and they are using hostnames. If they are using direct IPs then you simply move your traffic to a second IP and drop the one they are attacking. Best case is if you can do this via BGP announcements so the traffic simply will fail to route and everybody wins.
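    The BIND trick described above can be sketched with a view; the ACL contents and zone file names here are placeholders, and the "blackhole" zone file is one whose A records all point at 127.0.0.1:

```
// named.conf sketch: clients matching the attacker ACL get a zone file
// that resolves www.example.com to 127.0.0.1; everyone else gets the
// real zone. Views are matched in order, so the blackhole view goes first.
acl "attackers" { 198.51.100.0/24; };

view "blackhole" {
    match-clients { attackers; };
    zone "example.com" { type master; file "db.example.com.blackhole"; };
};

view "normal" {
    match-clients { any; };
    zone "example.com" { type master; file "db.example.com"; };
};
```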

    And yes, I do this professionally but not for any commercial product.

  • by CoderExpert (2613949) on Monday April 09, 2012 @08:54PM (#39626261)
You do understand that there are different kinds of DDoS attacks, and that flooding the available bandwidth is just one of them?

In fact, most DDoS attacks rely on causing heavy load on the server. Bringing a server down like that requires far fewer resources of your own than flooding it with pure traffic.

    Geez, slashdot, this is one of the fundamentals of DDOS attacks.
  • by ScentCone (795499) on Monday April 09, 2012 @08:57PM (#39626285)

    How do you determine if the third party proxy has sufficient bandwidth to handle the DDoS + regular traffic?

    They have a performance guarantee, and don't get paid if they can't keep up at the promised level. Any of the ones you'll want to use will have a dashboard that shows you a more-or-less-real-time view of the blocks/passes, and how much of the purchased throughput you're using.

  • by ScentCone (795499) on Monday April 09, 2012 @09:00PM (#39626315)
    For that event, we used Zen Networks. They're at www.zenprotection.net, which describes their services pretty well. Not affiliated in any way, but they did solve the problem for us over the short stretch it was required. Honestly, we didn't shop around much ... the site in question was very much on fire. Not like a slashdotting, of course, but some fairly determined Eastern European punks looking for cash. They made my clients angry enough to have them asking, "Is there something we can do back to these guys?" We didn't, of course. Would have been a waste of time.
  • by Bengie (1121981) on Monday April 09, 2012 @09:27PM (#39626509)
1) A properly configured FreeBSD router/firewall will handle 200k+ connections per second
2) Configure the firewall to proxy TCP handshakes, so your web servers don't get flooded with SYN packets unless the handshake actually finishes
3) A mid-grade nginx web server will handle 70k+ requests/sec
4) Set up your DNS to round-robin across several web servers
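    Step 2 above, on a FreeBSD box running pf, looks roughly like this; the interface name and address are placeholders:

```
# /etc/pf.conf sketch: with "synproxy state", pf completes the TCP
# three-way handshake itself and only opens a connection to the web
# server once the handshake succeeds, so spoofed SYN floods never
# reach the backend.
ext_if = "em0"
web_ip = "192.0.2.10"

pass in on $ext_if proto tcp to $web_ip port { 80 443 } \
    flags S/SA synproxy state
```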

Between your firewall and your webserver rules, you should be able to filter the most obvious DDoSes. Whatever you can't filter, you'll just have to brute-force through and suck up.

Your web servers can handle more requests than you have bandwidth; the next bottleneck is your database.

There is no "silver bullet," like you said, but a properly designed system should be robust enough that bandwidth is your bottleneck. Most web apps I see aren't designed to properly make use of SQL; it's like someone trying to shoe-horn procedural logic into a database. You've got to get your DB architect to work with the programmers.

    A properly architected web app with a properly architected DB should be able to handle more requests than your bandwidth can handle.

The only real DDoS to worry about is a flood, and you can't really stop that unless it's a simple upstream change. Enough machines ping-flooding you will take you down; filter all you want at your router, you won't have the bandwidth, even though a flood like that would be simple to filter upstream. A bunch of random forged TCP packets will suck up your bandwidth too, and if the attack is well distributed, there ain't nothing you can do about it.

  • by PiSkyHi (1049584) on Monday April 09, 2012 @09:52PM (#39626645)
It's quite normal on Slashdot for one person to rant, another to rebut everything cruelly, and then another and another...

My take on this is that nginx is cool for static pages; we all should know that by now. New optimisations in Apache 2.4 aim to address some of these gaps, and Apache is easier for me to configure for dynamic sites with controllers.

Regarding DDoS: neither of these will help. There are different types of DDoS attacks, sure, but any site that is dynamic in nature is screwed by a DDoS before it even saturates the entrance, because an inability to disseminate requests in time causes the webserver to effectively stall. There are mitigations; one of the best is an iptables rate limit for DoS attacks. Defending against DDoS attacks, of course, requires enough horsepower behind the scenes that when the entrance is saturated, requests can still be distributed, usually by a load balancer that places the bottleneck at the entrance alone. Placing the site in the cloud with auto-scaling will solve this, at a cost. Any type of DDoS attack that relies on an exploit, though, requires a fix, removal, or workaround before any horsepower mitigation can take place.
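    A sketch of the iptables rate limit mentioned above, using the hashlimit match so the limit is tracked per source address rather than globally; the numbers are illustrative only:

```shell
# Allow each source IP roughly 25 new HTTP connections/sec with a burst
# of 100; new connections beyond that rate are dropped. A distributed
# attack spreads load across many sources, so this only blunts the
# per-bot contribution.
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
    -m hashlimit --hashlimit-name http --hashlimit-mode srcip \
    --hashlimit-above 25/sec --hashlimit-burst 100 -j DROP
```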

  • by CoderExpert (2613949) on Monday April 09, 2012 @10:09PM (#39626733)
Actually, any website that is properly optimized is already serving most of its content as static. That is what caches are for. And yes, you can (and should) cache even parts of a page. Even with dynamic content, though, there is a very clear difference between serving with Apache or nginx. Sure, someone who really knows Apache can maybe hack it to be as fast, but how many people actually know how? Let's be realistic here.

    Most of the time just switching to nginx and properly caching your content can mitigate DDOS attacks. Sometimes you may need more, but the point is that you should fix these bottlenecks first anyway.
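    A minimal nginx sketch of that combination, pairing a proxy cache with per-IP request limiting in front of an Apache/app backend; paths, rates, and the backend address are placeholders:

```nginx
# http-context sketch: cache upstream responses and throttle per client IP.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m
                 max_size=1g inactive=10m;
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=20 nodelay;       # excess requests get 503
        proxy_cache pagecache;
        proxy_cache_valid 200 1m;                    # serve cached pages for 60s
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://127.0.0.1:8080;            # your Apache/app backend
    }
}
```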
  • by tuomasb (981596) on Tuesday April 10, 2012 @05:37AM (#39628699)
    This was changed last year. AWS doesn't charge for inbound traffic. Amazon Web Services Pricing Changes Effective July 1, 2011 [amazon.com]
