Understanding DNS propagation
published · 2026-04-22

Why a DNS change takes hours to roll out — and what's actually happening in the global resolver cache during that time. The real mechanics of TTL, recursive resolvers, and how to verify your records have landed.

DNS
Trace Warrior Team
7 min read

Every web professional has lived through it. You change an A record at 09:00 in your registrar. By 09:01 your laptop sees the new address, the CDN dashboard says everything looks right, and you think you're done. At 09:30 a colleague in another office still hits the old server. By lunch you've got two support tickets and a slight panic.

This is DNS propagation. It is one of the most consistently misunderstood parts of running anything on the internet, even by experienced engineers. Here is what is actually happening — and how to control it.

DNS is a cache, not a directory

The Domain Name System is often described as "the phonebook of the internet," which makes it sound like a single global lookup table. It is not. DNS is a hierarchical, distributed cache. When your browser asks for example.com, the query usually passes through several layers of caching before anything reaches the authoritative source:

  1. The OS resolver cache on your machine.
  2. Sometimes a forwarding resolver in between: a home router or corporate DNS appliance that caches answers and passes queries on to an upstream resolver.
  3. The recursive resolver run by your ISP (or Cloudflare, Google, Quad9 — whoever your /etc/resolv.conf or DHCP lease points at).
  4. The authoritative nameservers — the ones you actually control through your DNS provider.

Only the authoritative servers reflect changes immediately. Every cache in front of them is holding a copy of the previous answer, and they get to decide when to refresh it.
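
If you want to watch the hierarchy at work, dig's +trace option performs the walk itself, starting at the root servers and following each delegation down to the zone's authoritative nameservers, skipping every recursive cache along the way. A rough sketch is below; the provider hostname and address are placeholders, and real output is considerably more verbose.

    # Walk the delegation chain yourself: root -> .com -> the zone's own servers.
    $ dig +trace example.com A

    # Heavily trimmed, the output shows one hop per delegation:
    #   .             NS  a.root-servers.net.      <- the root zone
    #   com.          NS  a.gtld-servers.net.      <- the .com registry servers
    #   example.com.  NS  ns1.dns-provider.example <- the zone's authoritative servers
    #   example.com.  A   203.0.113.10             <- the answer they hand out right now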

What controls propagation time

The single biggest knob is the TTL (time to live) value on each record. TTL is set in seconds, written into every DNS response, and tells every cache between you and the user "you may keep this answer for this many seconds before asking again."

A TTL of 3600 means "any resolver that fetched me is free to keep this answer for an hour." Resolvers mostly respect this, with some slack: they aren't required to expire a record on the exact second, they may evict it early under cache pressure, and many will serve a stale answer past its TTL if the authoritative servers stop responding (the "serve stale" behaviour described in RFC 8767).
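
You can watch the TTL doing its job by asking the same recursive resolver twice. The number in the second column of dig's answer section is the seconds remaining in that resolver's cache, so it counts down between queries; a query sent straight to an authoritative server always shows the full configured value. The addresses and exact numbers below are illustrative.

    # First query: the resolver fetches the record and caches it for its full TTL.
    $ dig example.com A +noall +answer @1.1.1.1
    example.com.    3600    IN    A    203.0.113.10

    # Same query a minute later: the cached copy has roughly a minute less to live.
    $ dig example.com A +noall +answer @1.1.1.1
    example.com.    3538    IN    A    203.0.113.10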

Common TTL defaults:

  • 3600 (1 hour) — the typical CDN default
  • 86400 (24 hours) — common on stable registrar-managed records
  • 300 (5 minutes) — fast-changing services, blue-green deployments
  • 60 (1 minute) — emergency or active migration

The shorter your TTL, the faster your changes propagate. The longer your TTL, the less load you put on your authoritative servers and the more often resolvers everywhere can answer straight from cache. There's no universal right answer — it's a trade-off you tune.

The 48-hour myth

You will sometimes hear that DNS changes take "up to 48 hours." This is a holdover from a time when authoritative servers ran on much longer refresh intervals and recursive resolvers were less consistent. In practice, on the modern internet, if your TTL is 3600 then 95% of users see the change within an hour and 99% within a few hours.

The 48-hour figure is only meaningful if you're changing the nameservers themselves rather than records within a zone. NS changes flow through the registry (Verisign for .com, for instance), and the delegation records the .com zone publishes carry a two-day TTL, which is where the 48-hour number comes from. Even those usually settle well within 24 hours in practice.
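
When it is a nameserver change you're waiting on, you can compare what the registry currently delegates to against what your new provider publishes. In the sketch below, a.gtld-servers.net is one of the real .com registry servers; the provider hostname is a placeholder.

    # Parent side: the delegation the .com registry currently hands out.
    # (It arrives in the AUTHORITY section, hence +noall +authority.)
    $ dig NS example.com @a.gtld-servers.net +noall +authority

    # Child side: the NS records your own zone publishes.
    $ dig NS example.com @ns1.new-provider.example +noall +answer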

How to make a planned change fast

There is a standard sequence for any DNS change you want to settle quickly:

  1. Drop the TTL a day before. If your current TTL is 3600, change it to 60 at least one TTL cycle (so, at least one hour) before the actual change. Now every cache that fetches the record before your cutover will hold it for only one minute.
  2. Make the change. Update the A record, the CNAME, whatever it is. Because every cached copy expires within a minute, the new value is live globally within ~60 seconds for anyone whose resolver refetches.
  3. Verify from multiple locations. Don't trust your own laptop or your ISP's resolver — both might be lying due to local cache. Query authoritative nameservers directly using the DNS Lookup tool, and check from at least three geographies.
  4. Restore TTL. Once you're confident the change is settled, raise the TTL back to its previous value to reduce load on authoritative servers and improve resolver performance.

This is the standard playbook used by everyone running production DNS — registrars, CDNs, hosting providers. If you skip step 1 and just change a record with TTL=86400, you are guaranteeing a stretched-out propagation window where some users see the old value for the rest of the day.
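
Sketched as commands, with placeholder hostnames and documentation addresses standing in for real ones, the verification side of that playbook looks roughly like this:

    # Day before: after dropping the TTL in your provider's dashboard, confirm the
    # authoritative servers are really handing out the short TTL on the old value.
    $ dig www.example.com A @ns1.dns-provider.example +noall +answer
    www.example.com.   60   IN   A   198.51.100.10

    # Cutover: change the record, then confirm the authoritative answer has flipped.
    $ dig www.example.com A @ns1.dns-provider.example +noall +answer
    www.example.com.   60   IN   A   203.0.113.25

    # Independent perspective: a public recursive resolver should follow within ~60s.
    $ dig www.example.com A @1.1.1.1 +short
    203.0.113.25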

Authoritative vs recursive queries

When you debug DNS issues, the most useful distinction you can make is between authoritative and recursive queries.

A recursive query asks your local resolver: "what's the address for example.com?" The resolver may answer from cache, or it may walk the chain and find out for you. Either way, you get an answer that may or may not be fresh.

An authoritative query goes directly to the nameservers listed on the zone, bypassing all caches in between. The answer is the ground truth — what the zone owner has actually configured right now.

The DNS Lookup tool lets you query authoritative servers directly. When you suspect a propagation issue, this is the first check to run. If the authoritative server returns the new value but a regular dig example.com from your laptop returns the old one, the change is good — you're just looking at a stale cache.
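
With plain dig, the same comparison is three commands: find the zone's nameservers, ask one of them directly, then compare against whatever resolver your machine uses by default. The provider hostnames below are placeholders; an authoritative response carries the aa flag in its header.

    # Which servers are authoritative for the zone?
    $ dig NS example.com +short
    ns1.dns-provider.example.
    ns2.dns-provider.example.

    # Ground truth: ask one of them directly (look for "flags: ... aa" in the header).
    $ dig example.com A @ns1.dns-provider.example

    # Possibly-stale view: whatever resolver your laptop happens to use.
    $ dig example.com A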

Why some users see changes slower than others

Even with a short TTL, you'll occasionally see a user — usually a corporate user — who still hits the old IP an hour after everyone else. The usual culprits:

  • Corporate DNS resolvers sometimes override TTLs and cache aggressively to reduce egress traffic. Some appliances enforce a minimum TTL of four hours regardless of what the record says.
  • DNSSEC validation failures can make a resolver return SERVFAIL for the fresh lookup, and resolvers with serve-stale enabled may keep handing out the previously validated answer instead.
  • ISP-level DNS interception can silently redirect queries to the ISP's own resolvers or regional caches, which refresh on their own schedule rather than yours.
  • Browser DNS caches are separate from the OS resolver. Chrome holds its own DNS cache, and so does Firefox; HTTP clients that hold keep-alive connections (the agent inside your serverless function, for instance) will keep talking to an already-resolved address no matter what DNS says.

The first one is the most common in enterprise contexts. There is nothing you can do about a Cisco Umbrella appliance that's configured to cache everything for 4 hours. You can only document the constraint and plan around it.
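
You can usually prove a TTL floor is in play by comparing the TTL an authoritative server publishes with the TTL the suspect resolver serves back. Hostnames, addresses, and the resolver IP below are illustrative.

    # What the zone actually publishes: a 60-second TTL on the new address.
    $ dig app.example.com A @ns1.dns-provider.example +noall +answer
    app.example.com.    60      IN    A    203.0.113.25

    # What the corporate resolver serves: the old address, clamped to a 4-hour TTL.
    $ dig app.example.com A @10.0.0.53 +noall +answer
    app.example.com.    14400   IN    A    192.0.2.80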

Diagnosing a "stuck" propagation

If a change really seems stuck after 24 hours, here is the order to investigate:

  1. Verify authoritative. Run a DNS lookup against the zone's nameservers. If the authoritative answer is wrong, the change never landed at your provider — go fix that first.
  2. Check propagation across geographies. Use a multi-location DNS lookup. If most regions are fine but one is stuck, the problem is a downstream resolver, not your zone.
  3. Bypass caches. From a clean test environment (a fresh VM, a phone on cellular, a coffee shop wifi), query the hostname. If that resolves correctly, the issue is local caching, not propagation.
  4. Check DNSSEC. Validation failures can cause silent or slow propagation. Tools like Verisign's DNSSEC analyzer or dig +dnssec show whether the chain is intact.
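
For step 4, dig can do a quick sanity check before you reach for a full analyzer. A validating resolver sets the ad (authenticated data) flag on answers that pass DNSSEC validation, and a SERVFAIL that turns into an answer once you disable checking with +cd is a strong hint that the chain is broken.

    # Ask a validating resolver and look for "flags: ... ad" in the header.
    $ dig example.com A +dnssec @8.8.8.8

    # If that returns SERVFAIL, retry with validation disabled.
    # Getting an answer here but not above points at a DNSSEC problem in the zone.
    $ dig example.com A +cd @8.8.8.8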

Don't trust a single source

The mistake most people make when debugging DNS issues is trusting a single perspective. Your laptop's view of DNS is not authoritative. Your monitoring dashboard's view of DNS is not authoritative either — it's running from a specific cloud provider's resolver, which may have its own caching behaviour.

The only authoritative perspective is a direct query against the zone's nameservers. Build that habit. Every time you change a record, query the authoritative nameservers directly to confirm the change is live there, then check propagation from at least one independent perspective. The whole process takes thirty seconds and saves the afternoon of confusion that comes from arguing about what DNS is doing.

The takeaway

DNS propagation is not magic and it is not slow. It is a cache invalidation problem with a well-documented mechanism. Lower your TTLs before planned changes, query authoritative servers when you need ground truth, accept that a small percentage of corporate users will lag for hours regardless of what you do, and stop blaming "DNS propagation" for things that are actually local cache issues.

Run a DNS lookup against authoritative nameservers the next time a colleague says "DNS is broken." Nine times out of ten, you'll find that DNS is fine — it's something else.