The Internet Bug

Providers Ignoring DNS TTL? 445

cluge asks: "It seems that several large providers give their users DNS servers that simply ignore DNS time to live (TTL). Over the past decade I've seen this from time to time. Recently it seems to be a pandemic, affecting very large cable/broadband and dial-up networks. A few tests against our broadband cable provider showed that only one of the three provided DNS servers picked up a change within seven days. After I filed a trouble ticket with that provider, two of the three DNS servers were responding correctly, while the third was still serving bad information more than two weeks after the change. Which DNS caches ignore TTL by default? Is there a valid technical reason to ignore TTL?"
"This struck me as odd, so I ran a few tests using my own domain: I lowered the TTL to twenty-four hours, made changes, and then checked to see when each change was picked up. I queried twelve outside DNS servers/caches that I had access to (thanks to my friends and relatives with dial-up and DSL who put up with my requests to reboot their machines daily!). Checks against these outside DNS servers indicate that it may take as long as four to five weeks before a DNS change is picked up! Most DNS servers picked up the change within 48 hours. A small number did not (three out of twelve - that's a quarter of them!)

This merits more study and prompts a few questions. So, before I begin a more serious broad study, I'd like to get some feedback on the problem as I've seen it. I know the tin-foil-hat crowd will see the failure to propagate DNS correctly as censorship, and the OS/BIND/djb/whatever zealots will simply see this as an argument for their particular religion.

Based on the responses I get, I will then set up and test a couple of domains with different DNS servers for six weeks and report back the findings. [Volunteers welcome!]"
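For what it's worth, the polling the submitter describes is easy to script. A minimal Python sketch, with the actual DNS query abstracted behind a callable so the logic can be shown (and tested) without a resolver library; all server names and addresses below are made up:

```python
def check_propagation(servers, lookup, new_ip):
    """Return the resolvers that already serve the new A record.

    servers: list of resolver identifiers (e.g. IP addresses)
    lookup:  callable (server, name) -> IP string, e.g. a dig wrapper
    new_ip:  the address the zone was just changed to
    """
    return [s for s in servers if lookup(s, "example.com") == new_ip]

# Stub standing in for a real query: two resolvers updated, one stale.
answers = {"ns-a": "10.0.0.2", "ns-b": "10.0.0.2", "ns-c": "10.0.0.1"}

def fake_lookup(server, name):
    return answers[server]

updated = check_propagation(["ns-a", "ns-b", "ns-c"], fake_lookup, "10.0.0.2")
```

Run against real resolvers daily (swapping `fake_lookup` for an actual query), this records exactly when each cache finally honors the change.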
This discussion has been archived. No new comments can be posted.

  • 24 hours ? (Score:4, Informative)

    by Anonymous Coward on Tuesday April 19, 2005 @11:34AM (#12282089)
    in VOIP networks TTLs can be as low as 10 minutes
  • DNS practices (Score:5, Informative)

    by LynXmaN ( 4317 ) * on Tuesday April 19, 2005 @11:34AM (#12282090) Homepage
Overriding the zone's TTL is common practice at big providers, for sure; I do it myself at the ISP I work for (it's mid-sized).

But I don't think they're setting a TTL longer than 24 hours; that would be kind of insane, wouldn't it? At least in my experience, when I did a big DNS server change (changed all the serials), the delay was less than 24 hours for almost all of them.
  • nscd (Score:4, Informative)

    by epiphani ( 254981 ) <epiphani@daYEATSl.net minus poet> on Tuesday April 19, 2005 @11:34AM (#12282091)
    nscd does not obey TTL by default. It uses gethostbyname(), which does not return the TTL.

    We use nscd quite a bit, as I'm sure many other providers do. We only cache positive answers for 30 minutes, so we don't end up ignoring the TTL for too long.

  • by cluge ( 114877 ) on Tuesday April 19, 2005 @11:39AM (#12282143) Homepage
    Send a plain text email to
    dns-subscribe@angrypeoplerule.com

    This is a moderated list, and is only for letting people who are interested know when the study will begin, how to participate and the final results.
  • Re:For non geeks (Score:5, Informative)

    by Anonymous Coward on Tuesday April 19, 2005 @11:39AM (#12282153)
    They're referring to DNS TTL, not IP TTL.
  • old data (Score:5, Informative)

    by b1t r0t ( 216468 ) on Tuesday April 19, 2005 @11:39AM (#12282158)
    I've had problems before, but it usually turned out to be my stupid secondary server, which somehow didn't take the slave update (see below); randomly, that would be the one that gets queried and cached.

    And then there's the times when I just plain forgot to bump the serial number field. Works great on my master server after I restart it, but nothing else (especially my secondary) notices the change.
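The forgotten-serial failure mode above is exactly how zone transfers work: a secondary only pulls a new zone when the master's SOA serial compares as newer. A sketch of that check, using RFC 1982 serial-number arithmetic (serials wrap around 2^32):

```python
def serial_gt(s1, s2, bits=32):
    """True if serial s1 is 'greater than' s2 under RFC 1982 arithmetic."""
    half = 2 ** (bits - 1)
    return s1 != s2 and (((s1 > s2) and (s1 - s2 < half)) or
                         ((s1 < s2) and (s2 - s1 > half)))

def secondary_should_transfer(master_serial, my_serial):
    # The secondary only initiates a zone transfer when the master's
    # serial is newer -- a forgotten serial bump means no transfer at all.
    return serial_gt(master_serial, my_serial)
```

So a change on the master with an unchanged serial is invisible to every secondary, no matter what the record TTLs say.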

  • Re:Dumb question (Score:5, Informative)

    by Alioth ( 221270 ) <no@spam> on Tuesday April 19, 2005 @11:40AM (#12282170) Journal
    Use the 'dig' and 'host' commands.

    For example

    dig @your-isps-nameserver.net -t A www.example.com

    For example:
    $ dig @192.168.0.1 -t A www.slashdot.org

    ; <<>> DiG 9.2.4 <<>> @192.168.0.1 -t A www.slashdot.org
    ;; global options: printcmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54561
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;www.slashdot.org. IN A

    ;; ANSWER SECTION:
    www.slashdot.org. 7184 IN A 66.35.250.151

    ;; AUTHORITY SECTION:
    slashdot.org. 7184 IN NS ns2.vasoftware.com.
    slashdot.org. 7184 IN NS ns3.vasoftware.com.
    slashdot.org. 7184 IN NS ns1.osdn.com.
    slashdot.org. 7184 IN NS ns1.vasoftware.com.
    slashdot.org. 7184 IN NS ns2.osdn.com.

    ;; Query time: 3 msec
    ;; SERVER: 192.168.0.1#53(192.168.0.1)
    ;; WHEN: Tue Apr 19 11:38:58 2005
    ;; MSG SIZE rcvd: 159
    Note the TTL of 7184 seconds (this is how long the nameserver at 192.168.0.1 will continue to use the cached record before fetching it again from slashdot.org's authoritative nameservers).
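If you're scripting this check, the remaining TTL can be pulled straight out of a dig ANSWER line. A small Python sketch over dig's standard presentation format:

```python
def answer_ttl(line):
    """Extract the TTL (in seconds) from a dig answer line such as
    'www.slashdot.org. 7184 IN A 66.35.250.151'."""
    name, ttl, klass, rtype, rdata = line.split()
    return int(ttl)

remaining = answer_ttl("www.slashdot.org. 7184 IN A 66.35.250.151")
```

Querying the same cache twice a few seconds apart should show this number counting down; a cache that ignores TTL will keep serving the record after it hits zero.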
  • Re:nscd (Score:3, Informative)

    by photon317 ( 208409 ) on Tuesday April 19, 2005 @11:41AM (#12282183)

    Yeah, but nscd just caches for your local server box; it doesn't re-serve the cached results to your remote clients. He's describing actual DNS cache/forward servers ignoring TTL and handing outdated/bad data to client machines.
  • Re:For non geeks (Score:3, Informative)

    by eyegor ( 148503 ) on Tuesday April 19, 2005 @11:42AM (#12282195)
    From this site [menandmice.com]: Time To Live, the number of seconds remaining on a cached record before it is purged. For authoritative records the TTL is fixed at a specific length. If a record is cached, the server providing the record will report the time remaining on the TTL rather than the original length it was given.
  • by Anonymous Coward on Tuesday April 19, 2005 @11:43AM (#12282201)
    In relation to DNS, Time to Live (TTL) is the maximum amount of time in seconds that a caching nameserver should cache an answer before checking with an authoritative source again.
  • So close... (Score:5, Informative)

    by Derek Pomery ( 2028 ) on Tuesday April 19, 2005 @11:43AM (#12282207)
    Pity you didn't paste the appropriate part of the wikipedia article.
    "TTLs also occur in the Domain Name System (DNS), where they are set by an authoritative nameserver for a particular Resource Record. When a Caching (recursive) nameserver queries the authoritative nameserver for a Resource Record, it will cache that record for the time specified by the TTL."

    http://en.wikipedia.org/wiki/Time_to_live
  • reboot? (Score:5, Informative)

    by grazzy ( 56382 ) <grazzy@@@quake...swe...net> on Tuesday April 19, 2005 @11:44AM (#12282221) Homepage Journal
    (Thanks to my friends and relatives with dial ups and DSL who put up with me and my requests to reboot their machine daily!).

    ipconfig /flushdns
  • Re:Dumb question (Score:5, Informative)

    by Dtyst ( 790737 ) on Tuesday April 19, 2005 @11:49AM (#12282278)
    There are many online DNS tools: DNS report [dnsreport.com] is one of the best, and DNS stuff [dnsstuff.com] is very powerful but harder to use. I also like Dig it Man! [menandmice.com] for simple DNS checks. Many large Internet providers also have all kinds of network tools available on their web pages.
  • comcast (Score:2, Informative)

    by gelwood ( 852074 ) on Tuesday April 19, 2005 @11:51AM (#12282298)
    Last week, two nights in a row, Comcast's DNS was down nationwide (USA).
  • by jkujath ( 587282 ) on Tuesday April 19, 2005 @11:51AM (#12282301)
    I queried twelve outside DNS servers/caches that I had access to (Thanks to my friends and relatives with dial ups and DSL who put up with me and my requests to reboot their machine daily!).

    Why did you need to contact your friends/relatives to check whether or not your domain gets propagated?
    Couldn't you just query DNS servers directly using nslookup and/or dig?
    Querying them directly would eliminate wondering whether the machine you are checking from has the record cached, and you wouldn't need to flush it (why would you need your friends/relatives to reboot their machines?). Not to mention the amount of time you would spend coordinating this type of testing.
    Even if you don't want to use nslookup and/or dig from your Windows/Linux/Mac/whatever, there are tools available via the web that can help as well.
    This certainly is not a list of all the tools, or even the best ones... they're just ones that I have used in the past:

    dig [kloth.net] Web-based "dig" tool
    nslookup [kloth.net] Web-based "nslookup" tool
    DNS Report [dnsreport.com] Checks for DNS errors and provides nicely formatted information on a given domain
    DNS Stuff [dnsstuff.com] Various web-based DNS tools

  • by MadRocketScientist ( 792254 ) on Tuesday April 19, 2005 @11:52AM (#12282318)
    Has anyone done any measurement stats on DNS queries

    According to my DNS hosting company's FAQ [zoneedit.com]:

    "...or 200MB of usage is used (1 million DNS queries)"
  • Re:Let me guess... (Score:3, Informative)

    by sqlrob ( 173498 ) on Tuesday April 19, 2005 @11:52AM (#12282321)
    I got a "Your machine is trojaned" e-mail with few details. A thorough scan of my network showed diddley-squat. I finally got to a reasonable level of support, and the issue was my negative lookups poisoning their cache. I was testing the mail, and the URLs within the mail as well; I think there was an average of 20 lookups per mail.

    People running MailWasher on Windows also got the same warning from RR. All this was probably about a year ago.
  • Re:For non geeks (Score:3, Informative)

    by Neil ( 7455 ) on Tuesday April 19, 2005 @11:57AM (#12282365) Homepage

    Actually, the "TTL" in an IP header is different from the "TTL" in a DNS response (though in both cases the acronym means "time to live" and is intended as a limit on how long data hangs around).

    IP header TTL is basically a hop count, to stop IP packets from going round in circles indefinitely in the event of routing loops in the network.

    Typically, when you look up a name like "www.example.com" your workstation consults a caching DNS server (on the local LAN, or offered by your ISP, or something). This DNS server goes off and talks to the root name servers, which refer it to the "com" name servers, which in turn refer it to the "example.com" name servers, from where it gets an IP address to go with the name. A couple of seconds later you ask for another page from "www.example.com". Your workstation asks the local DNS server for the information again, but the DNS server doesn't figure out the answer from scratch - it remembers the answer it provided last time, and just repeats it. Time-To-Live is an "expiry date" that the authoritative name servers (like the "example.com" name servers) can put on their answers, so that the caching name servers know how long the answer is good for without rechecking with an authoritative source.

  • Re:DNS practices (Score:2, Informative)

    by toddbu ( 748790 ) on Tuesday April 19, 2005 @11:57AM (#12282374)
    Usually on big providers overriding the TTL of the zone is a usual practice for sure, I do that myself in the ISP I'm working for (it's middle sized).

    The problem with someone else deciding on the TTL for my zone (whether they're big or small) is that they'll probably get it wrong. How do you know the "right" value to pick for me when you don't know why I picked the value in the first place? Granted, some people pick a low TTL "just because", but in our case we round-robin servers, and if one goes down we want to be able to take it out of the loop in a timely fashion. We're OK with a 15-minute TTL, but not with a day.

    Part of the problem with short TTL is that there is no really good mechanism in the Internet today for failing over a cluster of web servers short of buying expensive routing hardware. If you want to run a web server with a backup then having a short TTL is probably the best option around. What we need is a better DNS failover strategy and then many short-lived TTLs will probably go away. The current solution is crummy anyway. When Internap died here in Seattle and we were down for 45 minutes (along with LiveJournal [slashdot.org]), a high priced router/load balancer wouldn't have done us a bit of good.

  • by wo1verin3 ( 473094 ) on Tuesday April 19, 2005 @11:59AM (#12282395) Homepage
    Our company made a DNS change for a download server accessed by customers; over a month passed, with multiple tickets opened with several large ISPs (Road Runner being the biggest) and no action taken. We finally had to set up a new server name so customers could reach the download server...

    In all, there were three large US ISPs that were major offenders...
  • by GNUALMAFUERTE ( 697061 ) <{moc.liamg} {ta} {etreufamla}> on Tuesday April 19, 2005 @12:00PM (#12282407)
    Here in Argentina, we shouldn't have bandwidth problems; bandwidth should be cheap considering the kinds of connections we have. But all the bandwidth belongs to a few players who are not so interested in letting others grow, so they resell it at really high prices. Since bandwidth therefore _is_ a problem, many ISPs run proxies, transparent proxies, etc. The dirtiest thing they are doing now is transparent proxies that never clean their caches - content seems to never expire. The other is DNS servers that update all their records at once, every X days, without taking TTLs into account. I worked for about two years as a sysadmin for a hosting company, and this was a nightmare. Once, a customer's website was defaced; we cleaned up and restored a backup for him, but many people were still seeing the old website... for more than a WEEK.
    A solution to this problem would be a law creating a set of standard services that a communications company may offer, with well-defined names and categories, and it should be MANDATORY for companies to market their services using these names, in their commercials too. So, for example, we would have categories such as "Full Duplex Symmetric DSL Connection" or "ADSL, With Proxy, Blocked Ports".
  • Re:Faulty system (Score:3, Informative)

    by Anonymous Coward on Tuesday April 19, 2005 @12:05PM (#12282462)
    It's irresponsible tampering, it's that simple.

    It's completely within the spec, and as a fundamental principle I can do whatever I want with my server. So get with the program and understand there are other ways of dealing with the issue. Two weeks before the change, add the new IP address of the mail server as a lower-priority (higher-number) server, so that if the info is cached, clients will fall back to the new address when the old one fails. When you make the change, you can purge the old address entirely.

    This is DNS maintenance 101, and should not surprise anyone who works on DNS.

  • Re:For non geeks (Score:3, Informative)

    by Shopko ( 872100 ) on Tuesday April 19, 2005 @12:05PM (#12282469)
    Actually, that's the IP header's time to live. For DNS TTL, here's the scenario (and yes, this is simplified; more actually happens, but it's not important for this discussion):

    Background
    ----------
    Domain name servers (DNS) are usually configured in a hierarchy, such that each server has a parent. This fact will be important below.

    Every domain (i.e. slashdot.org) has one or more "authoritative" name servers. These name servers know what web host slashdot.org is hosted on and how to get there.

    Other DNS servers on the Internet do not know how to get to slashdot.org, because they are not "authoritative" for that particular domain. So they send a request to their parent asking how to get to slashdot.org. Eventually, one of the parents will know the address of slashdot.org's authoritative name server, and will return this address.

    How This Relates To TTL
    -----------------------
    Here is what happens once the address of the authoritative name server is returned:

    A = The name server trying to figure out how to get to slashdot.org
    B = The authoritative name server for slashdot.org

    A asks B how to get to slashdot.org
    B responds to A with an address (66.35.250.150) and a TTL saying how long that answer is valid (e.g. 24 hours)

    So now name server A will not have to ask for slashdot.org's address again for 24 hours, since it was told by the authoritative name server that it can keep the address for 24 hours.

    This "keeping of addresses" is called caching, and name servers that do this are called caching name servers.

    I hope this helps. :-)
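The caching behavior described above can be modeled in a few lines. A toy Python sketch of a TTL-honoring cache, with the clock injected so expiry can be simulated; all names and addresses are illustrative:

```python
class CachingResolver:
    """Toy cache that honors the TTL the authoritative server returned."""

    def __init__(self, authoritative_lookup, clock):
        self.lookup = authoritative_lookup  # (name) -> (address, ttl)
        self.clock = clock                  # () -> current time in seconds
        self.cache = {}                     # name -> (address, expires_at)

    def resolve(self, name):
        now = self.clock()
        hit = self.cache.get(name)
        if hit and hit[1] > now:            # still within TTL: serve cache
            return hit[0]
        address, ttl = self.lookup(name)    # expired or missing: re-query
        self.cache[name] = (address, now + ttl)
        return address

# Simulated time and a fake authoritative server with a 24-hour TTL.
t = [0]
records = {"slashdot.org": ("66.35.250.150", 86400)}
resolver = CachingResolver(lambda n: records[n], lambda: t[0])

first = resolver.resolve("slashdot.org")            # queries the authority
records["slashdot.org"] = ("66.35.250.151", 86400)  # the zone changes
t[0] = 3600
stale = resolver.resolve("slashdot.org")            # within TTL: old answer
t[0] = 90000
fresh = resolver.resolve("slashdot.org")            # TTL expired: new answer
```

The providers in the story behave as if `now + ttl` were replaced with `now + some_much_larger_value` of their own choosing, which is why changes linger for weeks.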
  • by iainl ( 136759 ) on Tuesday April 19, 2005 @12:10PM (#12282514)
    I'm just guessing here, but the ISPs are probably keeping their DNS servers on their clients' side of the wider net, accepting queries only from their own users to avoid being DoSed.
  • Re:Bypass their DNS (Score:3, Informative)

    by tchuladdiass ( 174342 ) on Tuesday April 19, 2005 @12:14PM (#12282554) Homepage
    I've always run my own DNS server on my home network, but lately I've noticed several DNS servers refusing connections from my cable modem IP. Apparently there are a bunch of service providers that blacklist direct connections from dynamic IP addresses, probably in response to DDoS attacks. So, in order to have reliable DNS, I will need to configure my server to forward to my ISP's DNS server whenever it fails to make a direct connection itself.
  • Re:Bypass their DNS (Score:5, Informative)

    by petermgreen ( 876956 ) <plugwash.p10link@net> on Tuesday April 19, 2005 @12:17PM (#12282578) Homepage
    The root servers aren't recursive resolvers, so you aren't really pulling from them in any meaningful sense; you just hit them very occasionally when you use a new TLD. Most of your data comes straight to your resolver from the authoritative nameservers. Also, the root nameservers are things that ABSOLUTELY MUST STAY UP, and measures would be taken to spread the load further if needed (this has already been done with BGP anycast for k-root).
  • by 7zark7 ( 97365 ) on Tuesday April 19, 2005 @12:20PM (#12282618)
    I run a DNS server for around 470 domains. I have this problem with our telco/DSL provider (a large Canadian monopoly).
    What I found is that if the TTL is set to less than 3 hours, it is automatically reset to 3 weeks.
    As a result, I have set all of our TTLs to at least the 3-hour minimum.
  • Re:DNS practices (Score:2, Informative)

    by Name Anonymous ( 850635 ) on Tuesday April 19, 2005 @12:21PM (#12282628)
    However, a day or two before a major update to their domain information, some people will temporarily lower the TTL where needed. After the change is done and checked to be correct, they raise the TTL back to its normal value.

    Therefore, overriding the TTL can break things for your customers.

    I can see raising any TTL of less than an hour to an hour; I can't see raising it to 24 hours or more. That would limit what breaks for your customers.
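The lower-the-TTL-first procedure reduces to simple arithmetic: the TTL has to be dropped at least one old TTL before the change, because a cache that fetched just beforehand may hold the old answer that long. A hypothetical Python sketch:

```python
def migration_schedule(change_time, old_ttl, low_ttl):
    """When to lower the TTL, and when every TTL-honoring cache is
    guaranteed to see the new data.

    change_time: epoch seconds of the planned record change
    old_ttl:     the zone's normal TTL (seconds)
    low_ttl:     the temporary TTL used during the migration
    """
    lower_ttl_at = change_time - old_ttl    # old-TTL answers all expire by change time
    all_updated_by = change_time + low_ttl  # worst case: cached just before the change
    return lower_ttl_at, all_updated_by

# Normal TTL of a day, temporarily dropped to 15 minutes for the change.
lower_at, done_by = migration_schedule(change_time=100000,
                                       old_ttl=86400, low_ttl=900)
```

Of course, this guarantee only holds for caches that honor TTL, which is exactly what's in question in this thread.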
  • Re:Bypass their DNS (Score:3, Informative)

    by slavemowgli ( 585321 ) * on Tuesday April 19, 2005 @12:28PM (#12282725) Homepage
    It's generally a good idea to not rely on your ISP's DNS servers too much. Personally, I just use an OpenNIC [unrated.net] server, with one of my ISP's servers as a fallback - that way, I don't get any of the occasional timeouts, failures of new records to propagate properly and all that. Really, ISPs should focus on providing the connection, nothing more. I don't use my ISP's mail servers for email, and I don't use their nntp servers for Usenet; why should I use their name servers for DNS requests?
  • Re:24 hours ? (Score:4, Informative)

    by gclef ( 96311 ) on Tuesday April 19, 2005 @12:55PM (#12283052)
    heh. Have a look at www.yahoo.com...they're at 60 seconds. Yay Akamai.

    (For those that haven't messed with Akamai, they're intentionally setting the TTL insanely low to force clients to re-request often...Akamai uses the response they give as a way of doing path optimization to clients. It's ugly, but it kinda works.)
  • by Buran ( 150348 ) on Tuesday April 19, 2005 @12:57PM (#12283077)
    I'm on Charter and I'm not so sure it's filtered... I've switched to backup DNS servers before. Just as a test, I removed Charter's servers from my list (I have like 8 more servers behind their two as fallbacks), applied the change (I use Mac OS X 10.3) and then went to example.net. My machine successfully looked up the domain and went to the associated website.

    What's the easiest way to check to see if your machine does indeed fetch records from another server? dig?
  • Re:Bypass their DNS (Score:5, Informative)

    by Transcendent ( 204992 ) on Tuesday April 19, 2005 @01:12PM (#12283268)
    You could avoid this by pulling from THE root DNS servers, but if everyone did that it would put undue strain on the root servers

    That's not how DNS works.

    The root servers simply point you in the direction of the authoritative DNS server for a given domain name. That is why you have to register who is going to be the DNS server for any given domain, so the root servers can point people to it. Your own DNS server then caches the response from the authoritative server (not the root) locally, only updating it after the TTL has expired (which isn't always happening with the providers' DNS servers, hence the problem).

    The root servers are reliable... they have to be. Sure, there have been DoS attacks and the like on them before, but they only need to update themselves for new domain name server registrations (which, last I heard, happens every 5 minutes? So that's a much better "TTL").
  • by wayne ( 1579 ) <wayne@schlitt.net> on Tuesday April 19, 2005 @01:13PM (#12283288) Homepage Journal
    Listen -- We are SOA for around 11,000 domains. Both myself and the other uber-admins get tickets like this "escalated" when some clueless newbie wet behind the ears freaking junior admin DOESN'T RTFM and doesn't realize that if the serial #'s don't change then TTL is ignored.

    OK, can you point me anywhere in RFC 1034/RFC 1035/RFC 2308/etc. that says the SOA record has anything to do with the TTL? The negative-caching TTL, yes, but not the TTL. Yeah, if they don't change the serial number, their secondary name servers will take a long time to expire (could be weeks), but again, that has nothing to do with your claim that if the serial number doesn't change, the TTL is ignored.

    Have I just been trolled?
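For reference, the SOA field that does affect caching is MINIMUM, which RFC 2308 redefined to govern negative answers: a resolver caches an NXDOMAIN for the lesser of the SOA record's own TTL and its MINIMUM value. In code:

```python
def negative_cache_ttl(soa_ttl, soa_minimum):
    """RFC 2308: negative answers are cached for min(SOA TTL, SOA MINIMUM)."""
    return min(soa_ttl, soa_minimum)

# A zone with a day-long SOA TTL but a 1-hour MINIMUM caps
# NXDOMAIN caching at one hour.
neg_ttl = negative_cache_ttl(soa_ttl=86400, soa_minimum=3600)
```

That's the only caching knob the SOA carries; the serial governs zone transfers, not cache expiry.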

  • Re:Bypass their DNS (Score:3, Informative)

    by Cat_Byte ( 621676 ) on Tuesday April 19, 2005 @01:21PM (#12283402) Journal
    Some IP addresses host multiple sites and have to be resolved by hostname rather than IP. It probably isn't all of them giving you this problem, but it may be part of it.
  • Re:reboot? (Score:1, Informative)

    by Anonymous Coward on Tuesday April 19, 2005 @01:25PM (#12283454)
    And to do the same thing on Windows XP:
    sc stop dnscache
    sc start dnscache

    or, just use
    ipconfig /flushdns
    like the GP suggested and save yourself some typing
  • quick fix (Score:3, Informative)

    by ap0 ( 587424 ) on Tuesday April 19, 2005 @01:42PM (#12283688)
    My dad uses Comcast and he kept calling me to "make the Internet work" during their recent DNS outages. I just SSH'd in to his router and added a Verizon DNS server (4.2.2.1) to his DHCP info, and his Internet worked right away. His neighbors were complaining they couldn't use the 'Net but he was surfing away just fine.
  • by jafiwam ( 310805 ) on Tuesday April 19, 2005 @01:43PM (#12283708) Homepage Journal
    Grandparent is full of shit or doesn't understand what this thread is about.

    Serials are primarily for keeping two servers' data in sync (primary/secondary): when the secondary is done waiting, it checks the serial on the primary and grabs a new zone transfer if the serial is higher.

    The TTL on an A record is just a recommendation (a specific setting that overrides the default TTL for the zone, up near the SOA).

    If a server has cached an A record with a TTL of 6000 seconds (1 hour 40 minutes), it should hold and serve that data for a maximum of 6000 seconds, and after that time dump the data and get fresh data from the authoritative name servers.

    If you do a DIG against them, they'll tell you how much time is left on a cached record.

    The serial doesn't come into the "when to drop cached data" transaction at all.

    Sure, not incrementing the serial can cause all sorts of problems. But that's not what the article is on about.

    AOL et al. are ignoring the specific A record TTL and putting their OWN TTL on cached information, overriding mine. (I know this because the tool I use makes it so I CAN'T forget to increment the serial, and I still run into TTL problems. What about that, smartypants?) So when I set a domain from the default down to 3600 seconds a day before an MX record (email server) change and they ignore it, email migration from one server to another stays messed up for days rather than the hour my TTL would allow. A good admin doesn't abuse TTL (like Yahoo apparently does...) and sets it back up higher when finished moving stuff; most of the time I am perfectly happy with a nice long standard cache time. But sometimes you NEED a low TTL.

    I've got the O'Reilly grasshopper book right here in front of me, and none of the TTL sections mentions the SOA serial needing an increment for TTL caching. If someone wants to point out a page number that says I am wrong, I'd be happy to shut up. But self-righteous indignation had better be fact-checked... seriously.
  • by eram ( 245251 ) on Tuesday April 19, 2005 @02:34PM (#12284315)
    There was an article called On the Responsiveness of DNS-based Network Control [imconf.net] presented at the Internet Measurement Conference [imconf.net] last year. It is based on data from the Akamai content distribution network and shows that some DNS servers, and even more client applications, do not honor DNS TTL information.
  • Re:Bypass their DNS (Score:5, Informative)

    by geniusj ( 140174 ) on Tuesday April 19, 2005 @03:01PM (#12284662) Homepage
    To be even more specific, here is how a typical lookup happens (assuming NO cached data):

    Specifics per implementation might be off, but either way it ends in the same result:

    Recursive -> Root Server: "ANY? www.google.com"
    Root Server -> Recursive: "com NS a.gtld-servers.net ....."
    Recursive -> a.gtld-servers.net: "ANY? www.google.com"
    a.gtld-servers.net -> Recursive: "google.com NS ns1.google.com ...."
    Recursive -> ns1.google.com: "ANY? www.google.com"
    ns1.google.com -> Recursive: "www.google.com A 1.2.3.4 ... google.com NS ns1.google.com"

    As you can see, the root server only provides information for the top level domains. Those being com, org, us, uk, au, etc.

    It's commonly thought that they handle things like 'google.com', which isn't true. google.com, in this case, would be known by {a,b,c,d,etc}.gtld-servers.net. Each TLD has its own nameservers, obviously; com and net use those.

    As for the TTL issue: I offer dynamic DNS with a default TTL of 180 seconds, but I have not run into this personally. Or maybe my users and I just haven't noticed it.

    Regards,
    -JD-
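The referral chain above can be simulated over a toy delegation table (all servers and addresses here are hypothetical, and a real resolver also handles caching, glue, and retries):

```python
# server -> {name suffix: either a referral ("NS", next_server)
#            or an authoritative answer ("A", address)}
zones = {
    "root":               {"com": ("NS", "a.gtld-servers.net")},
    "a.gtld-servers.net": {"google.com": ("NS", "ns1.google.com")},
    "ns1.google.com":     {"www.google.com": ("A", "1.2.3.4")},
}

def resolve(name, server="root"):
    """Follow referrals until an authoritative answer turns up."""
    for suffix, (rtype, value) in zones[server].items():
        if name.endswith(suffix):
            if rtype == "A":
                return value
            return resolve(name, value)  # chase the referral
    raise LookupError(name)

addr = resolve("www.google.com")
```

Note the root entry only knows about "com"; it never answers for google.com itself, matching the exchange sketched above.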
  • A French lesson. (Score:2, Informative)

    by UseTheSource ( 66510 ) on Tuesday April 19, 2005 @05:03PM (#12286056) Homepage Journal

    Actually, the word du does mean "of the". It's the equivalent of de and le together. It's le jour because it's masculine.

  • Re:Bypass their DNS (Score:3, Informative)

    by racermd ( 314140 ) on Tuesday April 19, 2005 @08:15PM (#12287721)
    Ok, I haven't seen a reply to your post, so I think I'll chime in for DNS n00bs.

    Setting up a proper DNS server isn't too hard (as indicated by the number of posters who have done just that). However, it does take a bit of knowledge about how DNS really works. To that end, I suggest you read some books about networking, and DNS in particular.

    I've found the O'Reilly books fairly easy to read while providing a great starting point for those who have a broad, basic understanding of how networks (and computers) operate. Specifically, I'd recommend DNS and BIND [amazon.com]; if you have some LAN experience, it's a great place to start. It does tend to focus a bit on BIND (Berkeley Internet Name Domain), but most DNS servers are based on its general feature set and configuration anyway.

    By itself, this book won't let you set up your own DNS server, but it will help you build a core understanding of HOW DNS actually works and what you can get it to do for you. You'll have a choice of software on a number of different platforms, but the general operation will be pretty much the same across them all.

    And there are plenty of other books and publications on DNS, so don't limit yourself if O'Reilly doesn't do it for you.

    This probably wasn't the answer you were looking for, but it really is what you needed.

  • Re:Bypass their DNS (Score:3, Informative)

    by Michael Hunt ( 585391 ) on Tuesday April 19, 2005 @08:19PM (#12287751) Homepage
    DNS server A records will only be provided in the 'additional' section of the response (so-called 'glue' records) if the nameservers' names fall inside the zone being delegated.

    That is to say, if I query a.gtld-servers.net for www.bob.com IN A, and bob.com is delegated to ns1.bob.com and ns2.bob.com, the registry knows the IP addresses of bob.com's two nameservers and returns them in the 'additional' section (otherwise nobody would ever be able to find anything under bob.com, including ns1.bob.com and ns2.bob.com).

    If bob.com is delegated to ns1.foo.com and ns2.foo.com, but foo.com is delegated somewhere else, then ns1/ns2.foo.com's A records WILL NOT be returned as glue, and your resolver will have to recursively query for those, as well.

    It's amazing that DNS works at all.
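The glue decision described here boils down to a bailiwick check: glue is required exactly when a nameserver's own name falls inside the zone being delegated, since resolution could never bootstrap otherwise. A minimal sketch:

```python
def needs_glue(zone, nameserver):
    """A delegation to a nameserver inside the delegated zone itself
    must carry glue A records; otherwise you'd need to resolve a name
    under the zone in order to resolve names under the zone."""
    return nameserver == zone or nameserver.endswith("." + zone)

# bob.com delegated to ns1.bob.com -> the registry must supply glue.
# bob.com delegated to ns1.foo.com -> the resolver looks the address
#                                     up itself, as described above.
in_zone = needs_glue("bob.com", "ns1.bob.com")
out_of_zone = needs_glue("bob.com", "ns1.foo.com")
```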
