DNS-Based Website Failover Solutions?
Chase asks: "I run a couple of websites (including some for my work). I'd like to have a backup web server that people would hit when my primary server goes down. My primary host is on my company's T1 line, and although my server has died once, the most common reason for my sites to be offline is that the T1 goes down. I've looked at the High-Availability Linux Project, but almost everything there does failover via IP takeover, which isn't an option when my network link dies and my backup server is on a different network. ZoneEdit seems to offer what I'm looking for, but I want a do-it-yourself solution. The only software I've found is Eddie, and its development seems to have stopped around 2000. I know DNS-based failover doesn't give 100% uptime, but with a low cache time and decent monitoring it seems like the best solution for having my backup server at a different location and on a different network. Does anyone know of a good solution? (Using Linux and/or Solaris hosts)"
Depends whether you want to pay for it . . . (Score:5, Informative)
However, if you do actually have a budget to spend, have a look at the 3DNS product from F5 Networks [f5.com]. It does the failover you describe, and although it works better when interacting with F5's server load-balancing product, it can still monitor standard web servers and react when they become unavailable.
Re:Dyndns (Score:2, Informative)
They will let you configure custom TTL values on A (host) records. I set mine to 5 minutes and it works just fine.
There are automated clients out there that will update the DynDNS service for you, but I haven't seen any that will automatically set the offline URL when the primary Internet connection is unavailable.
DynDNS is oriented more toward people who want to host from an address that changes frequently, whether for black-hat, white-hat, or ISP DHCP reasons. Reliability has never been a problem with their service in my experience, but it may not suit the needs of a more commercial customer.
Just my two cents as a happy user.
uhhhh (Score:3, Informative)
A few ways.. (Score:5, Informative)
2. You will need a second line. Mandatory. If you really want insane uptime, you'll need dynamic routing à la BGP from both ISPs. If you don't need that much, you could work with an automated probe-and-DNS-update script running outside your network, which switches the primary DNS record to and from the backup IP address on the isolated network (see the sketch after this list).
3. Publish both IP addresses under the same name (round-robin DNS). That gives a client only a 50% chance of connecting once one server is dead, but it's better than nothing.
4. Tell site visitors to connect to www1.mysite.com if they're having trouble reaching your site, and have www1 point to your backup IP. Make sure your DNS servers are network-redundant as well, or the whole exercise is pretty pointless.
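For point 2, here is a minimal sketch of such a probe script, meant to run from a host outside both networks. The addresses are placeholders, and switch_dns is a hypothetical stand-in for whatever actually changes your DNS (an RFC 2136 dynamic update, or your provider's update API):

#!/usr/bin/perl
# Sketch: probe the primary web server from a third network; if it is
# unreachable, repoint DNS at the backup. Addresses are placeholders.
use strict;
use warnings;
use IO::Socket::INET;

my $primary = '198.51.100.10';    # server on the T1 (hypothetical)
my $backup  = '203.0.113.10';     # server on the other network (hypothetical)

# A plain TCP connect to port 80 with a short timeout is probe enough.
sub reachable {
    my ($ip) = @_;
    my $sock = IO::Socket::INET->new(
        PeerAddr => $ip,
        PeerPort => 80,
        Proto    => 'tcp',
        Timeout  => 5,
    );
    return defined $sock;
}

# Stand-in for the real work: issue a dynamic DNS update (RFC 2136)
# or call your DNS provider's update API here.
sub switch_dns {
    my ($ip) = @_;
    print "would point the www record at $ip\n";
}

switch_dns( reachable($primary) ? $primary : $backup );

Run it from cron every few minutes; with a low TTL on the record, clients converge on the backup within minutes of a failure.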
Re:uhhhh (Score:3, Informative)
Backhoe cuts are easy to fix. I remember when I worked at MindSpring (pre-EarthLink) there was a major outage in NY (a hurricane, I think) that not only broke the T1 (there was exposed fiber) but also left it under 30 feet of water. It took seven days to drain the water before the cables could be repaired.
You could always use IPv4 Anycasting. (Score:3, Informative)
RFC 2136 + Net::DNS + your monitoring software (Score:4, Informative)
First, you need a monitoring system somewhere on the Internet, not behind your T1, because if the T1 goes down the monitor won't be able to update your DNS. You have that already, I'm sure, to test your web site's accessibility from the Internet. Of course, at least one of your name servers must also be reachable when the T1 goes down, so it too will have to live somewhere on the Internet other than your T1.
On this name server, enable dynamic updates. Then modify the monitoring system that checks your site's availability so that it uses Net::DNS to update your web server's A record when the check fails.
Going all open source, I'd use Net::DNS plus Nagios for the monitoring software, BIND for the name server (which supports dynamic updates), with Linux as the OS.
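To make the Net::DNS step concrete, here is a minimal sketch of an RFC 2136 dynamic update, assuming BIND is configured to allow signed updates for the zone. The zone, record names, addresses, and TSIG key are all placeholders:

#!/usr/bin/perl
# Sketch: swing www's A record to the backup address via RFC 2136.
# Zone, names, addresses, and TSIG key below are all placeholders.
use strict;
use warnings;
use Net::DNS;

my $update = Net::DNS::Update->new('example.com');

# Delete the old A record(s) for www, then add the backup address
# with a short TTL so clients re-resolve quickly.
$update->push( update => rr_del('www.example.com. A') );
$update->push( update => rr_add('www.example.com. 300 A 203.0.113.10') );

# Sign the update with a TSIG key shared with the name server.
$update->sign_tsig( 'failover-key', 'BASE64SECRET==' );

# Send it straight to the master name server for the zone.
my $res = Net::DNS::Resolver->new;
$res->nameservers('ns1.example.com');

my $reply = $res->send($update);
if ( $reply && $reply->header->rcode eq 'NOERROR' ) {
    print "update applied\n";
} else {
    die 'update failed: ', ( $reply ? $reply->header->rcode : $res->errorstring ), "\n";
}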
Re:RFC 2136 + Net::DNS + your monitoring software (Score:4, Informative)
with Linux as the OS
Kernel! And anyway, does the fact that you're using GNU/Linux or *BSD actually make a difference to this?
Re:RFC 2136 + Net::DNS + your monitoring software (Score:3, Informative)
The key is that I use tcpclient from DJB's ucspi-tcp package:
http://cr.yp.to/ucspi-tcp.html
Don't hurt yourself with BIND, either; parsing its zone files is going to hurt your brain. I use grep -v to manage my data file for tinydns:
http://cr.yp.to/djbdns.html
Maybe I'll get around to publishing my work. A brief synopsis:
I make a TCP connection to port 80 on each of my web servers with a 5-second timeout. If the connection fails, the script pulls all IPs associated with that server out of my DNS. This determines not only whether the server is up, but also whether it needs less load, because it couldn't get to my request in time.
There's a state file for each web server, e.g. webserver.up or webserver.down. That's easy to check later to determine whether I need to change the DNS tables.
I run the check every 60 seconds. I only have two servers, so it's not too tough.
I also check the availability of www.yahoo.com and www.google.com over each ISP to determine whether the ISP itself is up, and I update DNS based on the ISP conditions as well.
I say it again: try to avoid BIND if you can. I can't think of a sane way to process its zone files with shell scripting, whereas tinydns's one-record-per-line data file is trivial to filter (see the sketch below).
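A minimal sketch of that check-and-rebuild cycle, assuming the stock tinydns layout where a data file is compiled into data.cdb by tinydns-data. The paths, hostnames, addresses, and the data.master file are placeholders:

#!/usr/bin/perl
# Sketch: probe each web server, record its state, and rebuild the
# tinydns data file with only the live servers. Paths and addresses
# are placeholders.
use strict;
use warnings;
use IO::Socket::INET;

my %servers = (
    'web1' => '198.51.100.10',
    'web2' => '203.0.113.10',
);
my $statedir = '/var/run/failover';
my $datadir  = '/service/tinydns/root';

for my $name ( sort keys %servers ) {
    my $up = IO::Socket::INET->new(
        PeerAddr => $servers{$name},
        PeerPort => 80,
        Proto    => 'tcp',
        Timeout  => 5,                  # the 5-second timeout from the post
    );
    # Touch webserver.up or webserver.down, like the state files above.
    unlink "$statedir/$name.up", "$statedir/$name.down";
    open my $fh, '>', "$statedir/$name." . ( $up ? 'up' : 'down' ) or die $!;
    close $fh;
}

# Rebuild the data file: keep every line except A records ('+' lines)
# that point at a server currently marked down. This is the grep -v idea.
open my $in,  '<', "$datadir/data.master" or die $!;
open my $out, '>', "$datadir/data"        or die $!;
while ( my $line = <$in> ) {
    my $dead = 0;
    for my $name ( keys %servers ) {
        $dead = 1
            if -e "$statedir/$name.down"
            && $line =~ /^\+.*:\Q$servers{$name}\E:/;
    }
    print {$out} $line unless $dead;
}
close $in;
close $out;

# Compile data -> data.cdb so tinydns picks up the change.
chdir $datadir or die $!;
system('tinydns-data') == 0 or die "tinydns-data failed\n";

tinydns-data replaces data.cdb atomically, so tinydns never serves a half-written table.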
We tried it, and it didn't work. (Score:1, Informative)
Don't use DNS failover. (Score:4, Informative)
Then again, if it doesn't matter that much to you, don't worry about it. Just do RR-DNS and manually cut out the failed IP; "most" people will get the still-working servers.
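For the record, RR-DNS is just multiple A records for the same name; a minimal BIND-style zone fragment (names, TTL, and addresses are made up):

; Two A records for www -- resolvers hand them out in rotating order.
; When one server dies, delete its line by hand and reload the zone.
www   60   IN   A   198.51.100.10
www   60   IN   A   203.0.113.10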