Networking Foundations

How the Internet Works

The full journey of a single click — from your fingertip in Bangalore to a server 13,000 km away in Virginia and back, through 14 routers, 2 undersea cables, and 4 protocol layers, in under half a second.

Section 1

TL;DR — The One-Minute Version

Mental Model — The Postal System: The internet is a global postal service for data. Your message gets split into small envelopes (packets), each stamped with a destination address (IP), routed through a chain of sorting offices (routers), and reassembled at the other end. If any envelope gets lost in transit, it gets resent automatically. DNS (Domain Name System) is the internet's phone book: it translates human-friendly names like "google.com" into numeric IP addresses like 142.250.195.68 that routers can actually use.

Here is the full journey of what happens when you type maps.google.com in a cafe in Bangalore. Not the textbook version — the real one, with real cities, real distances, and real time.

[Diagram] The Life of a Web Request — Bangalore to Virginia, ~380ms:

1. You type maps.google.com (Bangalore cafe)
2. DNS lookup → 142.250.195.68 (~20ms)
3. TCP + TLS: 4 round trips (~200ms — physics!)
4. HTTP GET: "Send me /maps" (~50ms)
5. Render: HTML to pixels (~110ms)

The physical journey underneath: your phone (192.168.1.5) → cafe Wi-Fi router → NIXI Mumbai IXP → AAE-1 undersea cable (13,000 km) → Google DC (Virginia, USA). 14 hops, 26,000 km round trip; light in fiber travels 200,000 km/s, so the minimum latency is 130ms (pure physics). Total: ~380 milliseconds — roughly the time it takes to blink.

The entire internet runs on four layers stacked on top of each other, like a postal system. You write a letter: that is your HTTP request (HyperText Transfer Protocol, the language browsers and servers speak — when your browser says "GET /maps," it is speaking HTTP, and every webpage you have ever loaded used it). You put it in an envelope with a reliable delivery guarantee: that is TCP (Transmission Control Protocol, which guarantees your data arrives complete and in order; if a packet gets lost, TCP detects it and resends, like registered mail with delivery confirmation). The envelope gets stamped with a destination address: that is IP (Internet Protocol, the addressing system; every device on the internet has an IP address, like 142.250.195.68, and routers read these addresses to forward packets toward their destination). And the postal truck physically carries it on roads and across oceans: that is the physical layer — fiber optic lines, copper Ethernet cables, Wi-Fi radio waves, undersea cables — where your data becomes light pulses, electrical signals, or radio waves. Each layer only worries about its one job.

One-line takeaway: The internet is layers of trust. Each layer solves exactly one problem and hands off to the next, so no single piece has to understand the whole system.
Section 2

The Scenario — 380 Milliseconds That Change Everything

It is a Tuesday afternoon. You are sitting in a Third Wave Coffee outlet in Koramangala, Bangalore. Your phone is connected to the cafe's Wi-Fi. You open Chrome and type maps.google.com. You press Enter.

380 milliseconds later, a fully interactive map of Bangalore loads on your screen. You pinch to zoom. You search for "pizza near me." Results appear in 120ms. You have done this ten thousand times without thinking about it.

But here is what actually happened in those 380 milliseconds:

[Diagram] The physical path of your request:

  • Your phone (192.168.1.5, private IP) — Koramangala, Bangalore — Wi-Fi, 5 GHz radio, 3 m to the router
  • Cafe router (TP-Link Archer C7) — NAT to public IP 49.37.x.x — fiber uplink
  • ACT Fibernet (ISP, Bangalore) — BRAS router
  • NIXI Mumbai — Internet Exchange Point where ISPs meet — Juniper MX960 core
  • AAE-1 undersea cable — Mumbai to Marseille, 20,000 km, 40 Tbps capacity — light pulses through glass fiber thinner than a hair, on the ocean floor
  • MAREA cable — Bilbao to Virginia Beach, 6,600 km, 224 Tbps, owned by Microsoft + Meta (see submarinecablemap.com)
  • Google data center — Ashburn, Virginia

The math: 13,000 km one-way; light in fiber at 200,000 km/s gives 13,000 / 200,000 = 65ms one-way, 130ms round trip (physics), plus 4 round trips for TLS + TCP + HTTP = ~380ms total.

Your request left your phone as a radio wave, traveled 3 meters to the cafe's router, got converted to light pulses in a fiber optic cable, shot through ACT Fibernet's network (ACT, Atria Convergence Technologies, is one of India's largest fiber-to-the-home ISPs; its Bangalore network connects to the rest of the internet through peering points like NIXI) to an exchange point in Mumbai, dove into an undersea cable on the floor of the Indian Ocean (there are only ~450 active undersea cables in the world, carrying 99% of intercontinental internet traffic — not satellites, cables; you can see every single one at submarinecablemap.com), crossed through Marseille, hopped across the Atlantic on another cable, and arrived at a Google server rack in Ashburn, Virginia — a suburb outside Washington, D.C. through which roughly 70% of the world's internet traffic passes.

Google's server processed the request, built a response containing map tiles and JavaScript, and sent it all back — through the same chain of cables and routers, in reverse. 26,000 km round trip. 14 network hops. Under 400 milliseconds.

Think First

Before reading on — what do you think happens between pressing Enter and seeing the page? Try listing every step. Do not worry about being wrong. The point is to notice what you don't know yet.

Hint: You need to solve at least 4 problems — finding the server's address, establishing a connection, requesting the data, and getting the data back reliably. Each of these is a separate challenge.

Most developers, even experienced ones, have a fuzzy picture here. "DNS, something, TCP, something, HTTP, something." They could not trace the actual physical path their data takes, or explain why the round trip is 380ms and not 130ms, or say what happens when a packet gets lost in the Indian Ocean. That is exactly what we are going to fix.

Try it right now. Open your terminal and run traceroute google.com (macOS/Linux) or tracert google.com (Windows). You will see every router your data passes through — with city names, ISP names, and the millisecond latency at each hop. That is the real internet laid bare.
Why 380ms and not 130ms?

Physics floor: Light in fiber travels at ~200,000 km/s (not 300,000 — glass is slower than vacuum).

Bangalore to Virginia = ~13,000 km. One-way = 13,000 / 200,000 = 65ms. Round trip = 130ms.

But a single round trip is not enough. You need:

  • 1 round trip for the TCP handshake (SYN, SYN-ACK, ACK) — before sending data, TCP requires a three-way handshake: your computer says SYN, the server says SYN-ACK, you say ACK. This establishes a reliable connection but costs one full round trip.
  • 2 round trips for the TLS handshake (certificate exchange, key agreement) — TLS (Transport Layer Security) is the encryption setup for HTTPS, where client and server exchange certificates and agree on encryption keys. TLS 1.2 needs 2 round trips; TLS 1.3 cut it to 1.
  • 1 round trip for the actual HTTP request and response
4 round trips x 130ms = 520ms of pure physics — before any processing time. The real number (380ms) is lower because Google has edge servers closer to Bangalore: Google runs Points of Presence (PoPs) in Mumbai and other Indian cities, your DNS query resolves to the nearest PoP, and your data might travel only 1,000 km instead of 13,000 km. That is why real latency is lower than the Virginia calculation.
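You can check this arithmetic yourself. A quick Python back-of-envelope sketch, using the round numbers from this section (13,000 km one-way, ~200,000 km/s in fiber, TLS 1.2-style handshakes):

```python
# Back-of-envelope latency math for the Bangalore -> Virginia round trip.
DISTANCE_KM = 13_000          # one-way distance used in this section
FIBER_SPEED_KM_S = 200_000    # light in glass: ~2/3 the speed in vacuum

one_way_ms = DISTANCE_KM / FIBER_SPEED_KM_S * 1000
round_trip_ms = 2 * one_way_ms

# TCP handshake (1 RTT) + TLS 1.2 (2 RTTs) + HTTP request/response (1 RTT)
round_trips = 1 + 2 + 1
physics_floor_ms = round_trips * round_trip_ms

print(one_way_ms)        # 65.0
print(round_trip_ms)     # 130.0
print(physics_floor_ms)  # 520.0
```

Swap in 1,000 km (a Mumbai edge server) and the physics floor drops to 40ms, which is why PoPs matter so much.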
Section 3

First Attempt — What If You Just Sent Raw Bytes?

Let us go back to 1969. The ARPANET (Advanced Research Projects Agency Network — the U.S. military-funded project that became the ancestor of the internet) has just connected four computers: one at UCLA, one at Stanford Research Institute (SRI), one at UC Santa Barbara, and one at the University of Utah. Four machines. Six direct connections. Simple enough.

The approach was straightforward: each computer has a direct line to the others. If UCLA wants to send a message to SRI, it puts bytes on the wire. SRI reads them. Done.

[Diagram] ARPANET, October 1969 — 4 computers: UCLA (Los Angeles, CA), SRI (Menlo Park, CA), UCSB (Santa Barbara, CA), Utah (Salt Lake City, UT). Six connections for four machines — n(n-1)/2 = 4(3)/2 = 6. This worked: direct connections are fine when you have 4 machines. Manageable.

With 4 computers, you need 6 direct connections. Manageable. The first ARPANET message was sent on October 29, 1969 — UCLA tried to send "LOGIN" to SRI. The system crashed after "LO." So the first message ever sent on the internet was "LO" — which, honestly, is pretty fitting.

But this direct-connection approach has a fatal flaw that only shows up when you scale.

Think First

If you have 100 computers and each one needs a direct connection to every other one, how many connections do you need? What about 1 million computers? What about 4 billion (the number of devices on the internet today)?

Hint: The formula is n(n-1)/2. For 100 computers, that is 4,950 connections. For 1 million... well, try the math.
[Diagram] The scaling disaster of direct connections:

  • 4 machines → 6 connections — totally fine
  • 100 machines → 4,950 connections — getting expensive
  • 1 million machines → ~500 billion connections — physically impossible
  • 4 billion (today) → ~8 x 10^18 connections — 8 quintillion. LOL no.

This is an O(n^2) problem. It does not scale. Period. Even if you could wire 8 quintillion connections, every machine would need to understand how to talk to every other machine — different hardware, different OS, different protocols.

Direct connections worked for 4 university computers in 1969. But by 1973, ARPANET had 40 nodes. By 1983, it had 500. Today the internet has roughly 30 billion connected devices — phones, laptops, smart TVs, refrigerators, industrial sensors. The direct-connection approach does not just fail at scale. It fails spectacularly.
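The full-mesh formula is worth running yourself. A short Python sketch:

```python
# n(n-1)/2: links needed if every machine connects directly to every other.
def direct_connections(n: int) -> int:
    """Number of links in a full mesh of n machines."""
    return n * (n - 1) // 2

print(direct_connections(4))              # 6 (ARPANET, 1969)
print(direct_connections(100))            # 4950
print(direct_connections(1_000_000))      # 499999500000 (~500 billion)
print(direct_connections(4_000_000_000))  # 7999999998000000000 (~8 x 10^18)
```

Doubling the machines roughly quadruples the links — the signature of an O(n^2) blowup.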

But it is not just a numbers problem. Even if you could magically wire everything together, you have three more problems that are just as lethal:

Section 4

Where It Breaks — The Monolith Attempt

OK, so direct connections are out. What if we tried a different approach: one giant protocol that handles everything? One mega-specification that knows how to address machines, route data, ensure reliable delivery, encrypt traffic, and format messages — all in one monolithic system.

This is what the telecom companies wanted in the 1970s. The ISO (International Organization for Standardization) tried to build it: the OSI model (Open Systems Interconnection), a 7-layer framework that defined everything from physical signals to application data. It was elegant and thorough on paper, but each layer was so tightly specified that the entire thing behaved like a monolith — and it proved too complex to implement, so the simpler 4-layer TCP/IP model won in practice. Let us see why that approach — even with layers — fails when the layers are too rigid.

Think First

Imagine a country's postal system where a single government agency controls everything: writing rules, envelope sizes, truck routes, mailbox designs, and stamp prices. What happens when a new technology arrives — say, drones for delivery? How fast can that system adapt?

Hint: Compare this to a system where the envelope format is separate from the truck routes, which are separate from the delivery method. Each piece can change independently.
[Diagram] The monolith vs. modular layers:

The monolith approach — one mega-protocol: addressing + routing + reliability + encryption + formatting + error handling + flow control + ... Every device must implement ALL of it. Change one thing? Rewrite everything.

The modular approach — Application: "what to say" (HTTP); Transport: "how to deliver" (TCP); Network: "where to send" (IP); Link: "how to transmit" (Wi-Fi, Ethernet). Swap Wi-Fi for 5G? Only change the bottom layer. Upgrade HTTP/2 to HTTP/3? Only change the top layer. Each layer evolves independently.

Real-world monolith failures: OSI (by 1992, too complex to implement), France's Minitel (proprietary = dead end), IBM SNA (locked to IBM hardware).

The monolith approach had three fatal problems:

Problem 1: It could not evolve. In the 1980s, everyone used Ethernet (copper cables). In the 1990s, Wi-Fi arrived. In the 2000s, 3G mobile. In the 2010s, 4G LTE. In the 2020s, 5G. If your protocol is monolithic, every new physical technology means rewriting the entire stack from scratch. With layers, you just swap out the bottom layer. HTTP does not care whether the bits travel by Wi-Fi, fiber, or carrier pigeon.

Problem 2: It could not mix networks. Right now, your data might start on Wi-Fi in the cafe, switch to fiber at the ISP, cross an undersea cable as light pulses, and arrive as electrical signals in a data center's Ethernet. Four different physical technologies in one request. A monolithic protocol would need to understand all of them simultaneously. With layers, each hop only needs to understand its own physical medium.

Problem 3: It could not be built incrementally. The OSI model was a 7-layer specification that took a committee years to finalize. By the time it was done, a scrappier alternative — TCP/IP, the protocol suite that actually powers the internet, designed by Vint Cerf and Bob Kahn in 1974 — had already been running on ARPANET for a decade. Unlike OSI, TCP/IP was implemented first and standardized later, with 4 layers instead of 7. It won not because it was more elegant, but because it existed and worked while OSI was still being designed by committee.

The lesson for system design: A working simple system beats a perfect complex system every time. This is true for the internet's architecture, and it is true for the systems you will design in your career. Ship the 4-layer solution, not the 7-layer masterpiece.
[Diagram] The race: OSI vs TCP/IP —

  • 1974: Cerf & Kahn publish TCP/IP
  • 1977: OSI committee formed by ISO
  • 1983: ARPANET switches to TCP/IP ("Flag Day")
  • 1984: OSI spec finally published
  • 1990: Tim Berners-Lee builds the WWW on TCP/IP
  • ~1995: OSI effectively dead. TCP/IP wins.

TCP/IP was running in production the entire time OSI was being designed; OSI, designed by committee, was never widely deployed. "The best is the enemy of the good." — Voltaire (and every shipping engineer)
Section 5

The Breakthrough — Layers That Talk Only to Their Neighbors

The insight that makes the internet possible is deceptively simple: split the problem into layers, and make each layer only talk to the layer directly above and below it.

Think about how the postal system works. When you mail a letter, you interact with exactly one layer: you write your message, put it in an envelope, write the address, and drop it in a mailbox. You do not care how the post office sorts it, which truck carries it, or which route the truck takes. That is the postal system's problem, not yours. And the truck driver does not care what is inside the envelope. They just drive it from sorting center A to sorting center B. Each layer has one job. Each layer trusts the layers around it.

The internet works exactly the same way, with four layers:

[Diagram] The TCP/IP stack as a postal system:

  • Layer 4 — Application (HTTP, HTTPS, DNS, FTP, SMTP): "What message do you want to send?" Postal analogy: the letter. You write "Dear Google, send me /maps" and do not care how it gets there.
  • Layer 3 — Transport (TCP for reliability, UDP for speed): "How do we guarantee it arrives intact?" Postal analogy: the envelope — registered mail with a tracking number; if it gets lost, it is resent automatically.
  • Layer 2 — Network / Internet (IP, v4 and v6 — addressing and routing): "What is the destination address? Which path?" Postal analogy: the address and sorting — the zip code tells sorting offices where to route, and each office only knows the next hop, not the full path.
  • Layer 1 — Link / Physical (Ethernet, Wi-Fi, fiber, 5G, undersea cable): "What physical medium carries the bits?" Postal analogy: the truck and the road — truck on highway, boat across ocean, drone in air; different vehicles, same envelope format.

Here is the key insight that makes this work: each layer only talks to its immediate neighbors. HTTP (Layer 4) does not know or care about IP addresses — it just says "send this data reliably" and hands it to TCP (Layer 3). TCP does not know or care about Wi-Fi vs. fiber — it just says "deliver this packet to this IP address" and hands it to IP (Layer 2). IP does not know or care about electrical signals — it just says "put these bits on the wire" and hands it to the physical layer (Layer 1). Each layer adds its own header (like writing on the outside of the envelope), and the receiving side peels them off in reverse order.

Think First

When your phone sends data over Wi-Fi at the cafe, and then the cafe's router forwards it over fiber to the ISP — what happens to the TCP and HTTP layers? Do they change at each hop?

Hint: Think about the postal analogy. If a letter goes by truck from the local post office, then by plane to another city, does the letter inside the envelope change? Does the address on the envelope change?

The answer is: only the bottom layer changes at each hop. Your HTTP request stays the same from browser to server. Your TCP segments stay the same. Your IP packet stays the same (with the same source and destination addresses). But the physical encoding changes at every hop — radio waves become electrical signals become light pulses become radio waves again. The beauty of layers is that the upper layers are completely oblivious to this constant physical transformation happening below them.

[Diagram] Encapsulation — Russian dolls of headers. Each layer wraps the data from the layer above, like putting a letter in an envelope, then that envelope in a box:

  • Application (HTTP): GET /maps HTTP/1.1
  • Transport (TCP): [TCP header | HTTP data]
  • Network (IP): [IP header | TCP + HTTP data]
  • Link (Ethernet): [ETH header | IP + TCP + HTTP data]

On the wire, this entire thing becomes a stream of 0s and 1s — electrical pulses, light, or radio.

This wrapping process is called encapsulation: each layer adds its own header in front of the data it receives from the layer above, and at the receiver, each layer strips its header and passes the rest up — literally like opening nested envelopes. Your HTTP request ("GET /maps") gets wrapped in a TCP header (with port numbers and sequence numbers), which gets wrapped in an IP header (with source and destination IP addresses), which gets wrapped in an Ethernet frame header (with MAC addresses for the local network hop). On the wire, the whole thing is just a stream of electrical pulses, light, or radio — ones and zeros.
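The wrap-then-unwrap dance can be sketched in a few lines. This is a toy Python model, not real wire formats — the bracketed header strings are illustrative stand-ins:

```python
# Toy model of encapsulation: each layer prepends its own header,
# and the receiver peels them off in reverse order.
def encapsulate(http_request: str) -> str:
    tcp_segment = "[TCP src=54832 dst=443]" + http_request
    ip_packet   = "[IP src=49.37.0.1 dst=142.250.195.68]" + tcp_segment
    eth_frame   = "[ETH dst=router-mac]" + ip_packet
    return eth_frame

def decapsulate(frame: str) -> str:
    # Each of the three lower layers strips exactly one header
    # and hands the remainder up to the layer above.
    for _ in range(3):
        frame = frame[frame.index("]") + 1:]
    return frame

wire = encapsulate("GET /maps HTTP/1.1")
print(wire)
print(decapsulate(wire))  # back to "GET /maps HTTP/1.1"
```

Note that HTTP never sees the TCP, IP, or Ethernet headers — each layer only handles its own envelope.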

And you can see this happening in real time. Right now, on your machine:

See every router your data passes through:

Terminal
# macOS / Linux
traceroute google.com

# Windows
tracert google.com

# Sample output (from Bangalore):
 1  192.168.1.1      1ms    # Your cafe's TP-Link router
 2  49.37.0.1        5ms    # ACT Fibernet, Bangalore
 3  49.37.128.1     12ms    # ACT Fibernet backbone
 4  72.14.194.34    15ms    # Google peering point, Mumbai
 5  108.170.248.1   18ms    # Google backbone, Mumbai
 6  142.250.195.68  22ms    # Google server (edge, Mumbai!)

Each line is a hop — one router-to-router jump. Each router reads the destination IP address on your packet, checks its routing table, and forwards the packet to the next router; the number of hops accounts for part of your latency. Notice how the latency increases at each hop. If Google has an edge server in Mumbai, you might only see 6 hops and 22ms. If your request has to cross the ocean, you will see 14+ hops and 130ms+ on the final hops.

See DNS resolution — the phone book lookup:

Terminal
# Find the IP address behind a domain name
dig google.com

# Output (simplified):
;; QUESTION SECTION:
;google.com.              IN    A

;; ANSWER SECTION:
google.com.        300    IN    A    142.250.195.68

;; Query time: 12 msec
;; SERVER: 192.168.1.1#53   (your router, forwarding to ISP's DNS)

The dig command asks a DNS server "What IP address does google.com point to?" DNS servers form a worldwide hierarchy: your router's cache, your ISP's DNS (like ACT Fibernet's or Jio's), the root servers (13 of them), and authoritative servers (Google runs its own at ns1.google.com). Notice the query time — 12ms. The 300 in the answer is the TTL (time-to-live) in seconds, meaning your computer can cache this answer for 5 minutes before asking again. On Windows, use nslookup google.com instead.

See the full HTTP conversation — request AND response headers:

Terminal
curl -v https://google.com 2>&1 | head -30

# Output:
*   Trying 142.250.195.68:443...
* Connected to google.com (142.250.195.68) port 443
* TLS handshake...
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
> GET / HTTP/2                    # YOUR REQUEST
> Host: google.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/2 301                      # GOOGLE'S RESPONSE
< Location: https://www.google.com/
< Content-Type: text/html
< Content-Length: 220

The -v flag (verbose) shows you every layer in action: the IP address it connected to (Layer 2), the TCP port 443 (Layer 3), the TLS encryption version (between Layers 3 and 4), and the HTTP headers (Layer 4). Lines starting with > are what you sent. Lines starting with < are what Google sent back. The 301 means "this page moved to www.google.com" — even URLs have forwarding addresses.

See every active connection on your machine right now:

Terminal
# See all active TCP connections
netstat -an | head -20

# Sample output:
Proto  Local Address         Foreign Address        State
tcp    192.168.1.5:54832     142.250.195.68:443     ESTABLISHED
tcp    192.168.1.5:54833     142.250.195.68:443     ESTABLISHED
tcp    192.168.1.5:54900     13.107.42.14:443       ESTABLISHED
tcp    192.168.1.5:55010     52.96.166.178:443      ESTABLISHED

Every line is a live TCP connection. The local address (192.168.1.5) is your machine's private IP. The foreign address is the server you are connected to. Port 443 means HTTPS. You can look up what 13.107.42.14 is by running nslookup 13.107.42.14 — that one is Microsoft (probably OneDrive or Teams). Right now, your computer probably has 50-100 active connections, even if you are only looking at one website.

The layers are not just a theoretical model drawn in textbooks. They are running right now, on your machine, and you can inspect every single one with commands you just saw. traceroute shows you the network layer (IP routing). dig shows you the application layer (DNS). curl -v shows you the transport layer (TCP/TLS) and application layer (HTTP). netstat shows you the transport layer (active TCP connections).

[Diagram] Your maps.google.com request, layer by layer:

  • Application: GET /maps HTTP/2 | Host: maps.google.com | Accept: text/html
  • Transport: TCP | Src Port: 54832 | Dst Port: 443 | Seq: 1 | Ack: 1
  • Network: IPv4 | Src: 49.37.xx.xx | Dst: 142.250.195.68 | TTL: 64
  • Link: Wi-Fi 802.11ac | Src MAC: aa:bb:cc:dd:ee:ff | Dst MAC: router's MAC

Every packet on the internet carries ALL of these headers. Wireshark lets you inspect each one.
Why this matters for system design: The layered architecture of the internet is the same pattern you will use when building systems. Microservices are layers. API gateways are layers. CDNs are layers. Load balancers are layers. The principle is universal: split a complex problem into independent layers, let each layer evolve independently, and define clear contracts between them.
Anti-lesson: When does this NOT matter? If you are building a typical web app — a CRUD API, a React frontend, a database — you will almost never think below the HTTP layer. TCP, IP, and the physical layer are handled by your OS, your cloud provider, and your ISP. Understanding the layers helps you debug latency issues, design global systems, and ace system design interviews. But for day-to-day feature work, HTTP is the floor you stand on, and you rarely need to look underneath.
Section 6

How It Works: The Four Layers (Deep Dive)

In Section 5 we introduced the four layers as a postal analogy. Now we are going to rip each one open and look at the actual bytes, actual commands, and actual hardware. Each layer below is something you can inspect on your own machine, right now, today.

Layer 4 — Application: The Language You Speak (HTTP, DNS, SMTP)

The application layer is where your code lives. When your browser says "give me the page at maps.google.com," it is speaking HTTP (HyperText Transfer Protocol) — a plain-text request-response protocol that has been around since 1991. Your browser sends a request (GET, POST, PUT, DELETE), and the server sends back a status code (200 OK, 404 Not Found, 500 Server Error) plus the data; every API you have ever called uses it. Let us see the actual conversation. Open your terminal and run this:

Terminal — curl verbose
curl -v https://maps.google.com 2>&1 | head -25

# Here's what you'll see (annotated):
* Trying 142.250.183.206:443...           # IP address resolved by DNS
* Connected to maps.google.com port 443   # TCP connection established
* TLSv1.3 (OUT), TLS handshake, Client hello (1):  # Encryption begins
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* Server certificate: CN=*.google.com     # Google proves its identity
> GET / HTTP/2                  # YOUR REQUEST (lines with > are you)
> Host: maps.google.com         # Which website on this server
> User-Agent: curl/8.4.0        # What software you're using
> Accept: */*                   # "I'll take any content type"
>
< HTTP/2 200                    # GOOGLE'S RESPONSE (lines with < are them)
< content-type: text/html; charset=UTF-8
< date: Tue, 18 Mar 2026 10:23:47 GMT
< server: scaffolding on HTTPServer2
< content-length: 146847        # 143 KB of HTML coming your way
< x-xss-protection: 0
< x-frame-options: SAMEORIGIN

Every line is a real protocol field. The Host: header is how one server can host multiple websites — Google's server at 142.250.183.206 handles maps.google.com, drive.google.com, and dozens more. The 200 status code means "here is your page." The content-length: 146847 header tells your browser to expect exactly 146,847 bytes (about 143 KB) of HTML. If even one byte is missing, TCP (the layer below) will detect it and ask for a retransmit.
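Because HTTP/1.1 is plain text, you can assemble a request by hand. A Python sketch — the header values mirror the curl output above, and actually sending the bytes over a socket is omitted to keep it self-contained:

```python
# Build the exact bytes an HTTP/1.1 client would put on the wire.
def build_get_request(host: str, path: str = "/") -> bytes:
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",           # lets one IP address serve many sites
        "User-Agent: curl/8.4.0",
        "Accept: */*",
        "",                        # blank line terminates the headers
        "",
    ]
    # HTTP mandates CRLF ("\r\n") line endings, not bare "\n".
    return "\r\n".join(lines).encode("ascii")

request = build_get_request("maps.google.com", "/maps")
print(request.decode())
```

The trailing blank line is not decoration: the empty line is how the server knows the headers have ended and the request is complete.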

Now let us trace a DNS resolution from scratch. DNS is the other heavyweight at this layer — it is how "maps.google.com" becomes "142.250.183.206":

Terminal — dig +trace
dig maps.google.com +trace

# The resolution chain (simplified):
.                  IN NS  f.root-servers.net.    # Step 1: Ask a root server
                                                  #   (f.root is run by ISC, San Francisco)
com.               IN NS  a.gtld-servers.net.    # Step 2: Root says "ask .com TLD"
                                                  #   (run by Verisign in Reston, Virginia)
google.com.        IN NS  ns1.google.com.        # Step 3: .com says "ask Google's nameserver"
                                                  #   (ns1.google.com is at 216.239.32.10)
maps.google.com.   IN A   142.250.183.206        # Step 4: Google says "here's the IP"
                                                  #   Query time: 87 msec total

Four steps: root server, TLD server, authoritative server, answer. But you rarely see the full chain because results get cached. Your browser caches it (chrome://net-internals/#dns to see), your OS caches it (Windows: ipconfig /displaydns), your router caches it, and your ISP's resolver caches it. Most DNS lookups resolve in under 5ms because someone nearby already asked the same question.

Try it right now: On Windows, run ipconfig /displaydns | findstr "maps.google" to see if maps.google.com is in your OS DNS cache. On macOS/Linux: dig maps.google.com and look at the "Query time" — if it says 0 msec, it was cached.
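The TTL caching idea is simple enough to sketch in a few lines of Python. This is an illustration, not a real resolver — the class and its methods are invented here:

```python
# A minimal DNS-style cache: answers are reusable until their TTL expires.
import time

class DnsCache:
    def __init__(self):
        self._store = {}  # name -> (ip, expires_at)

    def put(self, name: str, ip: str, ttl: int) -> None:
        self._store[name] = (ip, time.monotonic() + ttl)

    def get(self, name: str):
        entry = self._store.get(name)
        if entry is None:
            return None            # miss: ask the upstream resolver
        ip, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[name]  # TTL expired: treat as a miss
            return None
        return ip

cache = DnsCache()
cache.put("google.com", "142.250.195.68", ttl=300)  # TTL from the dig output
print(cache.get("google.com"))  # hit: ~0ms instead of a network round trip
```

Every layer of the real hierarchy — browser, OS, router, ISP resolver — is essentially this structure with different eviction details.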

Layer 3 — Transport: The Delivery Guarantee (TCP and UDP)

The transport layer answers one question: do you need every byte to arrive, or is speed more important? That question splits the entire internet into two camps.

TCP (Transmission Control Protocol) is registered mail with tracking: it guarantees complete, in-order delivery. Before sending any data, both sides do a handshake — three packets just to say "hello." Then every chunk of data gets a sequence number, and the receiver confirms receipt. If anything goes missing, it gets resent. It is slower than UDP because it waits for confirmation that each segment arrived before sending more. This is how every webpage, email, file download, and API call works.

Here is what the handshake actually looks like. Open Wireshark (a free, open-source packet capture tool — the X-ray machine for network debugging. It shows every packet entering and leaving your computer, with headers, payload, and timing; download it at wireshark.org), start a capture, then visit google.com. Filter by tcp.flags.syn==1 and you will see:

Wireshark — TCP Three-Way Handshake
No.  Time     Source          Destination       Info
1    0.000    192.168.1.5     142.250.195.68    SYN      Seq=0       Win=65535  Len=0
2    0.022    142.250.195.68  192.168.1.5       SYN,ACK  Seq=0 Ack=1 Win=65535  Len=0
3    0.022    192.168.1.5     142.250.195.68    ACK      Seq=1 Ack=1 Win=65535  Len=0

# Handshake complete in 22ms. Now data can flow:
4    0.023    192.168.1.5     142.250.195.68    [TCP] Seq=1     Len=1460   # First 1460 bytes
5    0.023    192.168.1.5     142.250.195.68    [TCP] Seq=1461  Len=1460   # Next 1460 bytes
6    0.045    142.250.195.68  192.168.1.5       ACK    Ack=2921            # "Got bytes 0-2920"

Let us break down the math. Each packet is a segment. The maximum data in one TCP segment is typically 1460 bytes — the Maximum Segment Size (MSS): an Ethernet frame carries 1500 bytes (the MTU), minus 20 bytes for the IP header and 20 bytes for the TCP header. So to send a 143 KB Google Maps page: 146,847 bytes / 1,460 bytes per segment = 101 segments. Each one gets a sequence number. The receiver's ACK says "I have received everything up to byte X, send me the next chunk." If segment #47 gets lost in the Indian Ocean, the receiver's ACK will keep saying "I need byte 67,161" until the sender resends it.
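That segment math checks out in a few lines of Python (sizes taken from the numbers above; segments and bytes are 1-indexed, matching the Seq=1 numbering in the Wireshark trace):

```python
# How many TCP segments does the 143 KB page need, and where does
# segment #47 start?
import math

MSS = 1460             # 1500-byte MTU minus 20 (IP header) minus 20 (TCP header)
PAGE_BYTES = 146_847   # content-length from the curl output

segments = math.ceil(PAGE_BYTES / MSS)
print(segments)        # 101

# First byte carried by segment #47 (1-indexed segments and bytes)
seg47_first_byte = (47 - 1) * MSS + 1
print(seg47_first_byte)  # 67161
```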

UDP (User Datagram Protocol) is the opposite: fire and forget. No handshake, no tracking, no retransmission — each packet (datagram) is independent; if it arrives, great, and if not, too bad. It is used for DNS queries, video streaming, gaming, and VoIP — anything where speed matters more than perfection. DNS uses UDP by default — why wait for a three-way handshake when you just need a single question and answer?

Terminal — UDP DNS query
# DNS uses UDP — one packet out, one back. No handshake.
dig @8.8.8.8 google.com

# In Wireshark, filter: udp.port==53
# You'll see exactly TWO packets:
# 1. Your query:  192.168.1.5 → 8.8.8.8    "What is google.com?"  (44 bytes)
# 2. The answer:  8.8.8.8 → 192.168.1.5    "142.250.195.68"      (60 bytes)
# Total: 104 bytes, ~12ms. No SYN, no ACK, no overhead.
Think First

A Zoom video call sends 30 frames per second. If frame #17 arrives late (say, 200ms late), should Zoom wait for it and freeze the video? Or skip it and show frame #18 instead?

Hint: TCP would wait for it. UDP would skip it. Which one does Zoom actually use, and why?

The answer: Zoom uses UDP. A late video frame is useless — by the time it arrives, you are already 6 frames ahead. Showing a slightly glitchy video is better than freezing for 200ms. But a bank transfer missing one byte? That is TCP territory — you must have every byte in order. The choice between TCP and UDP is the choice between "complete" and "fast."

TCP vs UDP — The Decision

TCP — "Every byte matters":
  • Web pages (HTTP/HTTPS)
  • APIs (REST, GraphQL, gRPC)
  • Email (SMTP), file transfer (FTP/SFTP)
  • Database connections (MySQL, Postgres)
  • Overhead: 3-way handshake + ACKs + retransmits

UDP — "Speed over perfection":
  • DNS queries (single question + answer)
  • Video calls (Zoom, Meet, Teams)
  • Live streaming (Twitch, YouTube Live)
  • Online gaming (Fortnite, Valorant)
  • Overhead: zero. Fire and forget.

Layer 2 — Network: The Addressing and Routing System (IP)

The network layer is the postal service's sorting system. Every device on the internet has an IP addressInternet Protocol address — a unique number assigned to every device. IPv4 uses 32-bit addresses (like 142.250.195.68 — four numbers from 0 to 255). IPv6 uses 128-bit addresses (like 2404:6800:4007:820::200e) because we ran out of IPv4 addresses in 2011., and every router between you and your destination reads that address and decides where to forward your packet next. No single router knows the full path — each one only knows "for this destination, send it to my neighbor X."

Run this on your machine and watch the hops in real time:

Terminal — traceroute with city names
# Windows:
tracert maps.google.com

# macOS/Linux:
traceroute maps.google.com

# Real output from a Bangalore connection:
 1   1 ms    192.168.1.1          # Your router (TP-Link, on your desk)
 2   5 ms    49.207.47.1          # ACT Fibernet local node, Koramangala
 3  12 ms    49.207.128.1         # ACT Fibernet aggregation, Bangalore
 4  18 ms    72.14.194.18         # Google peering point, Chennai
 5  20 ms    108.170.248.65       # Google backbone router
 6  22 ms    142.250.183.206      # Google edge server, Mumbai

# Only 6 hops, 22ms! Google's edge server is in India.
# Without an edge server, this would be 14 hops and 130ms+

Notice hop #4: 72.14.194.18. That is a peering pointA peering point is where two networks connect directly to exchange traffic without going through a third party. Google peers with major ISPs at internet exchange points (IXPs) around the world. This keeps traffic local instead of routing across oceans. — the place where ACT Fibernet's network physically connects to Google's network. From there, your packet enters Google's private backbone and reaches an edge server in Mumbai. That is why the ping is 22ms instead of 130ms.

Each router along the way has a routing table — a lookup table that says "for this range of IP addresses, forward packets out this port." Here is a simplified version of what a router's table looks like:

Inside a Router: The Routing Table (ACT Fibernet aggregation router, Bangalore)

  Destination Range      Next Hop                          Interface
  142.250.0.0/16         72.14.194.18 (Google peering)     port 3
  13.107.0.0/16          157.49.12.1 (Tata Comm transit)   port 7
  49.207.0.0/16          local (ACT Fibernet internal)     port 1
  0.0.0.0/0 (default)    115.249.100.1 (Tata upstream)     port 5

The "default" row catches everything the router doesn't have a specific rule for — like a catch-all mailbox.

When your packet (destined for 142.250.183.206) hits this router, it matches the first row (142.250.0.0/16 means "any IP starting with 142.250"). The router sends it out port 3, toward Google's peering point. This lookup happens in nanoseconds using specialized hardware called a TCAMTernary Content-Addressable Memory — a special chip in routers that can search the entire routing table in a single clock cycle (about 3 nanoseconds). A Juniper MX960 has TCAM that holds 2 million routes. Without TCAM, a router would need to scan every row — impossibly slow at line rate..
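That lookup is small enough to sketch. Here is the same longest-prefix match in Python using the standard ipaddress module; the prefixes are the illustrative rows from the table above, and a real router does this in TCAM hardware rather than a linear scan:

```python
import ipaddress

# The simplified routing table above, as (prefix, next_hop, interface) rows.
ROUTES = [
    (ipaddress.ip_network("142.250.0.0/16"), "72.14.194.18", "port 3"),
    (ipaddress.ip_network("13.107.0.0/16"), "157.49.12.1", "port 7"),
    (ipaddress.ip_network("49.207.0.0/16"), "local", "port 1"),
    (ipaddress.ip_network("0.0.0.0/0"), "115.249.100.1", "port 5"),  # default
]

def lookup(dst: str):
    """Longest-prefix match: of all rows containing dst, pick the most specific."""
    addr = ipaddress.ip_address(dst)
    matches = [row for row in ROUTES if addr in row[0]]
    return max(matches, key=lambda row: row[0].prefixlen)

print(lookup("142.250.183.206")[1:])  # ('72.14.194.18', 'port 3')
print(lookup("8.8.8.8")[1:])          # ('115.249.100.1', 'port 5') -- default row
```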

Now, about those IP addresses. Your phone has a private address like 192.168.1.5, but the rest of the internet only sees your router's public IP (something like 49.37.xx.xx). How? NAT — Network Address Translation:

NAT — How 3 Devices Share One Public IP

Your router (public IP 49.37.42.100) rewrites the source IP and port of every outgoing packet and records the mapping in its NAT translation table:

  Phone     192.168.1.5:54832  ↔  49.37.42.100:10001
  Laptop    192.168.1.6:49201  ↔  49.37.42.100:10002
  Smart TV  192.168.1.7:60100  ↔  49.37.42.100:10003

When a response comes back to 49.37.42.100:10001, the router knows to forward it to 192.168.1.5:54832 (your phone).

NAT is why we have not completely run out of IPv4 addressesIPv4 uses 32 bits — only 4.3 billion possible addresses. IANA (the global allocator) gave out the last block on February 3, 2011. We have 8 billion people and 30 billion devices. NAT lets thousands of devices share one public IP, buying time until IPv6 (128 bits, 340 undecillion addresses) takes over. even though we technically ran out in 2011. Your whole household — phone, laptop, TV, thermostat — shares one public IP. Your ISP probably does another layer of NAT on top of that (CGNATCarrier-Grade NAT — when your ISP puts thousands of customers behind a single public IP. This means your home router's "public" IP might itself be private (like 100.64.x.x). It works, but it breaks things like hosting a game server or setting up a VPN at home.), meaning thousands of households share one IP. It is NAT all the way down.
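At its core, the translation table is just a bidirectional map keyed by ports. A toy sketch of the idea (the port numbering starting at 10001 is an illustration from the diagram above, not how real routers assign ports):

```python
import itertools

PUBLIC_IP = "49.37.42.100"
out_map = {}                  # (private ip, private port) -> public port
in_map = {}                   # public port -> (private ip, private port)
ports = itertools.count(10001)

def outbound(src_ip, src_port):
    """An outgoing packet: rewrite its source to the router's public IP,
    remembering the mapping so replies can find their way back."""
    key = (src_ip, src_port)
    if key not in out_map:
        public_port = next(ports)
        out_map[key] = public_port
        in_map[public_port] = key
    return PUBLIC_IP, out_map[key]

def inbound(public_port):
    """A response arrives at the router: look up which device it belongs to."""
    return in_map[public_port]

print(outbound("192.168.1.5", 54832))   # ('49.37.42.100', 10001)
print(outbound("192.168.1.6", 49201))   # ('49.37.42.100', 10002)
print(inbound(10001))                   # ('192.168.1.5', 54832)
```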

Layer 1 — Link/Physical: The Actual Wires and Radio Waves

Everything above is software. This layer is physics. Actual photons in glass fibers. Actual electrons in copper cables. Actual radio waves bouncing off walls.

Your data starts its life as a radio wave. Your phone's Wi-Fi antenna transmits on the 5 GHz bandWi-Fi uses two frequency bands: 2.4 GHz (longer range, slower, more interference from microwaves and Bluetooth) and 5 GHz (shorter range, faster, less congested). Modern routers use both — 802.11ac and 802.11ax (Wi-Fi 6) on 5 GHz can hit 1-2 Gbps in ideal conditions, but real-world speeds are typically 200-400 Mbps. (802.11ac/ax), reaching the cafe's TP-Link router 3 meters away. The router converts that radio signal into electrical pulses on a short Ethernet cable (Cat 6, 1 Gbps, max 100 meters), which feeds into a fiber optic converter. From there, your data becomes laser pulses in glass — traveling at roughly 200,000 km/s through fiber thinner than a human hair.

The Physical Media Your Data Travels Through

  • Wi-Fi (radio): 802.11ac, 5 GHz. Speed: 200-400 Mbps real. Range: ~50m (through walls). Latency: 2-5ms. Shared medium — neighbors interfere.
  • Ethernet (copper): Cat 6, electrical signals. Speed: 1-10 Gbps. Range: 100m max. Latency: <1ms. Dedicated — no interference.
  • Fiber optic (light): laser pulses in glass. Speed: 100 Gbps per strand. Range: 80+ km per segment. Latency: ~5μs per km. Immune to electrical interference.
  • Submarine cable: fiber bundles on the ocean floor. Capacity: 250+ Tbps per cable. Length: up to 20,000 km. Cost: $300M+ per cable. 552 active cables worldwide (2024).

The key insight: 99% of intercontinental internet traffic travels through submarine cables, NOT satellites.

Submarine cables deserve a special mention. There are 552 active cables crisscrossing the ocean floor as of 2024. Google alone owns or co-owns Dunant (US to France, 250 Tbps), Grace Hopper (US to UK to Spain, 340 Tbps), Equiano (Portugal to South Africa, 144 Tbps), and more. These are physical objects — 17mm thick, armored with steel wire near shore, Kevlar-wrapped in deep water (because sharks bite them — this is real, Google confirmed it). You can explore every single cable at submarinecablemap.com.

Try it right now: Go to submarinecablemap.com and click on India. You will see about 17 cables landing on India's coast — in Mumbai, Chennai, Kochi, and Trivandrum. Click the AAE-1 cable (Asia-Africa-Europe 1) and trace it from Mumbai through the Red Sea to Marseille. That is the cable your data likely crosses when you visit European websites.
Section 7

Going Deeper: The Complete Journey of a Single Request

Let us trace the entire path of your maps.google.com request — every single step, from the moment you press Enter to the moment the map renders. Not the simplified version. The real one.

Think First

Your request traveled from Bangalore to a Google data center and back in 380ms. The round-trip physical distance through undersea fiber is roughly 26,000 km. Light in fiber travels at about 200,000 km/s. So the speed-of-light minimum round-trip time is 130ms. Where did the other 250ms go?

Hint: Count the stops your packet has to make — DNS lookup, TCP handshake, TLS negotiation, server processing, and multiple router hops with queuing delay at each one.
The Complete Journey — 14 Steps, 380ms

  1. Browser DNS cache (chrome://net-internals/#dns)
  2. OS DNS cache (ipconfig /displaydns on Windows)
  3. Router DNS → ISP resolver (ACT: 49.207.47.136, ~8ms)
  4. Got IP: 142.250.183.206. Total DNS: ~20ms
  5. TCP 3-way handshake (SYN/SYN-ACK/ACK): 1 RTT = ~22ms (to Mumbai edge)
  6. TLS 1.3 handshake (1 RTT): ClientHello + ServerHello+Cert = ~22ms
  7. HTTP/2 GET /maps sent: ~200 bytes of headers, encrypted

The physical journey underneath (steps 8-11):

  8. Wi-Fi → router: 5 GHz radio, 3m, <1ms (TP-Link Archer C7)
  9. Router → ISP over fiber: NAT rewrites the source IP, 5ms (ACT Fibernet, Bangalore)
  10. ISP → IXP → Google peering: NIXI/peering, 12ms total (Juniper MX960 at the IXP)
  11. Google edge server, Mumbai: 22ms from your phone (Maglev load balancer)
  12. Inside Google's DC: Maglev LB → app server → Bigtable (map tiles) → assemble response → compress with Brotli. Processing: ~50ms. The 143 KB response compresses to roughly 40 KB, split into ~28 TCP segments for the return trip.
  13. Response travels back along the same path in reverse: Google DC → peering → ISP → router → Wi-Fi → your phone (~22ms)
  14. Browser renders: parse HTML → execute JS → paint the map. ~110ms of rendering. You see the map of Bangalore.

Total: DNS 20ms + TCP/TLS 44ms + HTTP RTT 22ms + processing 50ms + transfer 22ms + render 110ms ≈ 268ms to a Mumbai edge — under 380ms even to Virginia. A human blink takes 300-400ms. Your entire Google Maps load happens in less than one blink.

Why the Mumbai edge beats Virginia: Mumbai is 1,000 km away, so 1 RTT = 22ms and the 3 handshake RTTs (TCP+TLS+HTTP) cost 66ms; add 50ms processing and 110ms render for ~226ms total. Virginia is 13,000 km away, so 1 RTT = 130ms, 3 RTTs cost 390ms, and the total is ~550ms. CDNs cut latency by roughly 60%.

That timeline is not a simplification — it is what actually happens, with real latencies at each step. The single biggest variable is distance to the nearest edge server. Google operates edge Points of Presence (PoPs) in Mumbai, Chennai, Delhi, and other Indian cities. If your DNS resolves to a Mumbai PoP, your round trip is 22ms. If it resolves to Virginia, it is 130ms. That is the difference between "instant" and "noticeable."
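You can put numbers on that difference with nothing but the speed of light in fiber. A physics-only sketch: real RTTs run higher than this floor because of queuing and per-hop processing, which is why the article's measured Mumbai RTT is 22ms rather than the ~10ms computed here.

```python
FIBER_KM_PER_MS = 200.0   # light in fiber: ~200,000 km/s = 200 km per ms

def first_paint_ms(distance_km, rtts=3, processing_ms=50, render_ms=110):
    """Rough page-load floor: handshake round trips + server work + render.
    rtts=3 covers TCP (1) + TLS 1.3 (1) + the HTTP request itself (1)."""
    rtt = 2 * distance_km / FIBER_KM_PER_MS   # round trip, propagation only
    return rtts * rtt + processing_ms + render_ms

print(round(first_paint_ms(1_000)))     # Mumbai edge: 190 (ms)
print(round(first_paint_ms(13_000)))    # Virginia:    550 (ms)
```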

DNS has a hierarchy, and every lookup walks that hierarchy until it finds the answer. Here is the full chain for maps.google.com, with real server names and locations:

Step 1 — Browser cache: Chrome checks its internal DNS cache. You can see it at chrome://net-internals/#dns. If you visited maps.google.com in the last 5 minutes (the TTL), the IP is already cached. Cost: 0ms.

Step 2 — OS cache: If the browser cache misses, the OS stub resolver checks. On Windows, run ipconfig /displaydns to see everything cached. On Linux, resolvectl statistics (formerly systemd-resolve --statistics) shows cache hit rates. Cost: <1ms.

Step 3 — Router: Many home routers run a tiny DNS cache (dnsmasq). Your router at 192.168.1.1 might already know the answer from another device in your household. Cost: <1ms.

Step 4 — ISP's recursive resolver: ACT Fibernet runs DNS resolvers at 49.207.47.136. This server handles millions of queries per day — there is a very high chance someone else in Bangalore already asked for maps.google.com recently. Cost: 5-10ms (network hop to ISP).

Step 5 — If the ISP cache misses, the resolver walks the hierarchy: root server → .com TLD → google.com authoritative → answer. Each step is a UDP query. The root servers (13 logical servers, run by organizations like ICANN, Verisign, and the US Army) are anycast, so they exist in hundreds of physical locations — the "closest" root server to India is typically a mirror in Mumbai or Singapore. Cost: 50-100ms for the full walk, but this almost never happens for popular domains.

The math: Google's DNS TTL is 300 seconds (5 minutes). ACT Fibernet has ~10 million customers in India. The probability that nobody in all of ACT Fibernet asked for google.com in the last 5 minutes is essentially zero. ISP cache hit rate for popular domains: 99%+. This is why most DNS lookups are 5-10ms, not 50-100ms.
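The "essentially zero" claim holds under almost any assumption about query volume. A back-of-the-envelope Poisson sketch — the 16.7 queries/second rate is a deliberately low guess for illustration, not a measured number:

```python
import math

TTL = 300       # seconds: Google's DNS record lifetime (5 minutes)
RATE = 16.7     # assumed google.com queries/sec at the resolver -- a LOW
                # guess for an ISP with ~10 million customers

# Poisson model: P(zero queries in one TTL window) = e^(-RATE * TTL).
# e^-5010 underflows a float, so report the base-10 exponent instead.
exponent = -RATE * TTL / math.log(10)
print(f"P(cache cold) ~ 10^{exponent:.0f}")   # P(cache cold) ~ 10^-2176
```

In other words: the chance that the resolver's cache is cold when you ask is not just small, it is astronomically small.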

After TCP connects, your browser and Google's server need to set up encryption. This is the TLS handshakeTransport Layer Security handshake — the process where client and server agree on encryption algorithms and exchange keys. TLS 1.3 does this in 1 round trip (1-RTT), down from TLS 1.2's 2 round trips. If you have connected recently, 0-RTT resumption skips the handshake entirely.. TLS 1.3 does it in a single round trip:

Your browser sends ClientHello: "I support these cipher suites: TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256. Here is my key share (a public key for key exchange). I want to talk using TLS 1.3."

Google sends ServerHello + Certificate + Finished: "I picked TLS_AES_256_GCM_SHA384. Here is my certificate (signed by Google Trust Services, a Certificate Authority). Here is my key share. We are now encrypted."

All of this happens in one round trip. TLS 1.2 needed two round trips because the client could not begin the key exchange until it had seen the server's certificate and parameters; TLS 1.3, finalized in 2018, has both sides send their key shares in the very first messages. That saves you 22ms on a Mumbai edge connection, or 130ms on a Virginia connection. For returning visitors, TLS 1.3 even supports 0-RTT resumption — your browser remembers the previous session key and starts sending encrypted data immediately, without any handshake at all.

Terminal — see TLS details
# See the exact TLS version and cipher suite used:
curl -v https://maps.google.com 2>&1 | grep -E "SSL|TLS"

# Output:
# * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
# * Server certificate: CN=*.google.com
# * SSL certificate verify ok.

# On Windows, open Chrome DevTools → Security tab
# to see the same information for any website.

When your packet leaves ACT Fibernet heading for Google, how does ACT's router know which direction to send it? The answer is BGPBorder Gateway Protocol — the internet's routing protocol. Every ISP, cloud provider, and large network is an "Autonomous System" (AS) with a unique number. BGP is how these ASes tell each other "I can reach these IP ranges." Google is AS15169. ACT Fibernet is AS45609. There are 70,000+ ASes worldwide. — the Border Gateway Protocol. It is the internet's GPS system.

The internet is made up of over 70,000 Autonomous Systems (ASes) — each one is a network run by a single organization. ACT Fibernet is AS45609. Google is AS15169. Jio is AS55836. Each AS announces to its neighbors: "I can reach these IP ranges." Those announcements propagate across the entire internet, so eventually every AS knows a path to every other AS.

You can look this up yourself. Go to bgp.he.net/AS15169 (Google's AS) and you will see every IP prefix Google announces, every AS they peer with, and the paths traffic takes to reach them. On the same site, check bgp.he.net/AS45609 for ACT Fibernet.

BGP is powerful but fragile. In 2008, Pakistan Telecom accidentally announced that it owned YouTube's IP range (208.65.153.0/24). Routers around the world believed it and started sending YouTube traffic to Pakistan, causing a global outage that lasted hours. In 2021, Facebook's engineers withdrew their own BGP routes during a maintenance error — Facebook, Instagram, and WhatsApp went offline for 6 hours, and because Facebook's own internal tools ran on the same infrastructure, their engineers could not even log in to fix it. They had to physically drive to the data center.

Your request arrives at Google's edge server in Mumbai. But what happens inside?

Step 1 — Maglev load balancer: Google's custom Layer 4 load balancer, running on commodity Linux servers (not expensive hardware LBs). Maglev uses consistent hashing to route your request to a specific backend server. It can handle 10+ million packets per second per machine. It picks a backend based on a hash of your connection's 5-tuple (src IP, src port, dst IP, dst port, protocol), ensuring all packets from the same connection hit the same server.

Step 2 — Application server (GFE): Google Front End terminates your TLS connection and parses the HTTP/2 request. It determines you want map tiles for Bangalore and makes internal RPCs to the map tile service.

Step 3 — Map tile service queries Bigtable: Google Maps pre-renders map tiles at different zoom levels and stores them in BigtableGoogle's distributed NoSQL database, designed for petabyte-scale. A single Bigtable cluster can serve millions of reads per second with single-digit millisecond latency. Google Maps, Gmail, YouTube, and Google Search all use Bigtable.. Your request for zoom level 14 of Bangalore triggers a Bigtable read — typically 2-5ms because the data is in RAM (SSD-backed, but frequently accessed tiles live in memory).

Step 4 — Response assembly and compression: The application server assembles the HTML, JavaScript, CSS, and map tile URLs. It compresses everything with Brotli (Google's compression algorithm, 15-25% smaller than gzip). The 143 KB response compresses to roughly 40 KB. It gets split into ~28 TCP segments and starts flowing back to you.

Total processing time inside Google's DC: approximately 50ms. Most of that is network latency between internal services, not computation. The actual CPU time is probably under 5ms.
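The heart of step 1 — hashing a connection's 5-tuple to pick a backend — fits in a few lines. This sketch captures only the flavor of the idea; real Maglev builds a consistent-hashing lookup table so that adding or removing a backend reshuffles as few connections as possible. The backend names are hypothetical:

```python
import hashlib

BACKENDS = ["gfe-1", "gfe-2", "gfe-3"]   # hypothetical backend names

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto):
    """Hash the 5-tuple so every packet of one connection lands on the
    same backend, with no per-connection state on the load balancer."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return BACKENDS[h % len(BACKENDS)]

# Every packet of the same connection maps to the same backend:
a = pick_backend("192.168.1.5", 54832, "142.250.183.206", 443, "tcp")
b = pick_backend("192.168.1.5", 54832, "142.250.183.206", 443, "tcp")
print(a == b)   # True
```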

Section 8

Variations — Different Flavors of the Same Internet

The four-layer model stays the same everywhere. But the specific technologies at each layer vary wildly depending on who you are, where you are, and what year it is. Let us look at the major variations.

IPv4 vs IPv6 — The Address Space Crisis

IPv4 addresses are 32 bits long: 4 numbers from 0 to 255, giving us 4.3 billion possible addresses. IANA (the global allocator) handed out the last block on February 3, 2011. There are 8 billion people and roughly 30 billion connected devices. The math does not work.

IPv6 fixes this with 128-bit addresses: 340 undecillion possible addresses (that is 340 followed by 36 zeros). Enough to give every atom on the surface of the Earth its own IP address. Twice.
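The scale gap is hard to picture, but plain arithmetic makes it concrete:

```python
ipv4 = 2 ** 32      # IPv4 address space
ipv6 = 2 ** 128     # IPv6 address space

print(f"{ipv4:,}")                       # 4,294,967,296 (the "4.3 billion")
print(f"{ipv6:.3e}")                     # 3.403e+38 (the "340 undecillion")

# IPv6 addresses per person on an 8-billion-person planet:
print(f"{ipv6 // 8_000_000_000:.2e}")    # 4.25e+28
```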

Think First

IPv6 was standardized in 1998 — over 25 years ago. Why have we not switched yet? If IPv4 ran out in 2011, why is most of the internet still on IPv4 in 2026?

Hint: Think about what happens when you have billions of devices running IPv4 and you need them all to upgrade. Who pays for it? Who goes first?

The answer: NAT made IPv4 "good enough." When ISPs discovered they could put thousands of customers behind one public IP, the urgency to switch evaporated. Upgrading to IPv6 requires new router firmware, new ISP infrastructure, and application code that handles both protocols. It is expensive, and the old thing still works (sort of). As of 2024, Google measures that about 45% of traffic reaching their servers uses IPv6 (check it at google.com/intl/en/ipv6/statistics.html). India is surprisingly ahead at roughly 67% IPv6, largely because Jio rolled out IPv6-only to all their mobile customers.

Terminal — check your IPv6 support
# Check if you have an IPv6 address:
# Windows:
ipconfig | findstr "IPv6"
#   IPv6 Address. . . . : 2405:201:c00b:a0c2::1a3f

# macOS/Linux:
ifconfig | grep inet6
#   inet6 2405:201:c00b:a0c2::1a3f

# Test if your connection supports IPv6:
curl -6 https://google.com -s -o /dev/null -w "%{http_code}\n"
# 200 = IPv6 works. Connection refused = your ISP doesn't support it.

IPv4 vs IPv6 — By the Numbers

  • IPv4 (1981): 142.250.195.68. 32 bits → 4.3 billion addresses. Ran out: Feb 3, 2011. Kept alive by NAT (a bandaid, not a fix).
  • IPv6 (1998): 2404:6800:4007:820::200e. 128 bits → 340 undecillion addresses. Adoption: ~45% globally (2024). No NAT needed — every device gets a public IP.

Wired vs Wireless vs Cellular — Latency by the Numbers

Not all connections are equal. The difference between fiber and satellite can be the difference between "feels instant" and "feels broken." Here are real-world numbers, not theoretical maximums:

Real-World Latency: How Long for a Packet to Come Back?

  Connection Type           Round-Trip Latency   Bandwidth            Best For
  Fiber (FTTH)              1-2ms                100 Mbps - 10 Gbps   Everything
  Ethernet (Cat 6)          <1ms                 1-10 Gbps            LAN / data centers
  Wi-Fi 6 (802.11ax)        2-5ms                200-600 Mbps real    Home / office
  4G LTE                    30-50ms              10-50 Mbps           Mobile browsing
  5G (mmWave)               5-10ms               100 Mbps - 1 Gbps    Dense urban areas
  Starlink (LEO satellite)  20-40ms              25-200 Mbps          Rural / remote areas
  GEO satellite (VSAT)      600-800ms            1-20 Mbps            Ships, airplanes

Notice the GEO satellite number: 600-800ms. That is because traditional communication satellites orbit at 35,786 km altitude. Light has to travel 35,786 km up, 35,786 km back down to the ground station, then back up and down again for the return trip — 143,144 km total. At 300,000 km/s, that is 477ms of pure physics, before any processing. Starlink fixes this by orbiting at just 550 km (Low Earth Orbit), cutting the physics delay to about 7ms.
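The satellite numbers are pure geometry. A quick check, assuming signals travel at the vacuum speed of light and ignoring ground-station processing:

```python
C = 300_000.0   # km/s: radio waves in vacuum

def satellite_rtt_ms(altitude_km):
    """Minimum round trip via a relay satellite: up, down, then up and
    down again for the reply -- four legs of pure propagation delay."""
    return 4 * altitude_km / C * 1000

print(round(satellite_rtt_ms(35_786)))   # GEO: 477 (ms, before any processing)
print(round(satellite_rtt_ms(550)))      # Starlink LEO: 7
```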

Public vs Private IPs — NAT, CGNAT, and Port Forwarding

There are two kinds of IP addresses, and it matters which one you have:

Private IPs (192.168.x.x, 10.x.x.x, 172.16-31.x.x) are only valid inside your local network. Your phone might be 192.168.1.5, and someone else's phone in another country could also be 192.168.1.5. These addresses are not unique globally. Your router translates them to a single public IP using NAT (we covered that in Section 6).

But here is the twist: your ISP might do NAT again. This is called CGNAT (Carrier-Grade NAT). Your router's "public" IP might actually be 100.64.x.x — a special range reserved for ISP-level NAT. So you are behind two layers of NAT: your router's NAT and your ISP's NAT. This is why hosting a game server or running a VPN from home often does not work — incoming connections cannot find you through two layers of address translation.

Terminal — check if you're behind CGNAT
# Step 1: Find your router's "public" IP
# Windows: open cmd and run
curl ifconfig.me
# Output: 49.37.42.100  ← your public IP as the internet sees it

# Step 2: Check your router's WAN IP (log in to 192.168.1.1)
# If router WAN shows: 100.64.15.83 (a 100.64.x.x address)
# but curl shows: 49.37.42.100
# → You are behind CGNAT. Your ISP is doing NAT on top of your NAT.

# If both match → you have a true public IP. Port forwarding will work.
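The range logic behind that check is small enough to script. A sketch with the standard ipaddress module — RFC 6598 reserves 100.64.0.0/10 for carrier-grade NAT, and RFC 1918 reserves the familiar private ranges:

```python
import ipaddress

CGNAT = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 shared space

def classify(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    if addr in CGNAT:
        return "CGNAT"            # your ISP is NATing you
    if addr.is_private:
        return "private"          # RFC 1918: valid only on your LAN
    return "public"

print(classify("100.64.15.83"))   # CGNAT
print(classify("192.168.1.5"))    # private
print(classify("49.37.42.100"))   # public
```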

Internet Exchanges (IXPs) — Where Networks Meet

An Internet Exchange PointA physical building where ISPs and content providers connect their networks directly. Instead of routing traffic halfway around the world, they exchange it locally. India has NIXI (National Internet Exchange of India) with nodes in Mumbai, Delhi, Chennai, Kolkata, and other cities. The world's largest IXP is DE-CIX in Frankfurt. is a physical building where ISPs plug into each other's networks. Without IXPs, if you (on ACT Fibernet) visit a website hosted on Jio, your traffic might have to leave India, bounce through Singapore or the US, and come back — adding 200ms of unnecessary latency. With an IXP, ACT and Jio exchange traffic directly in Mumbai. Your request stays local. Latency stays under 10ms.

The world's major IXPs and their traffic:

  • DE-CIX Frankfurt — world's largest. Peak traffic: 14+ Tbps. Over 1,000 networks connected.
  • AMS-IX Amsterdam — Europe's oldest, 10+ Tbps peak. Where much of European traffic exchanges.
  • NIXI (India) — nodes in Mumbai, Delhi, Chennai, Kolkata, Bangalore. Keeps Indian traffic inside India.
  • Equinix exchanges — Ashburn (VA), Silicon Valley, Singapore. Ashburn alone handles an estimated 70% of the world's internet traffic because major cloud providers and content networks all have a presence there.

Why does this matter for system design? Because when you deploy an application, where you host it relative to the nearest IXP determines your latency. A server in Ashburn, Virginia can reach any major network in under 1ms because all the networks are right there in the same building. A server in a random data center might be 50ms away from the nearest IXP.

Section 9

At Scale: How the Real Internet Works

Everything so far has been about how one request works. But the internet handles 5 billion users and 500 exabytes of traffic per month. At that scale, new problems appear — and the solutions are some of the most impressive engineering projects humans have ever built.

Think First

Google handles roughly 100,000 search queries per second. Each query needs to reach a data center and return in under 200ms. If Google had a single data center in Virginia, users in Mumbai (150ms RTT), Tokyo (90ms RTT), and Sydney (180ms RTT) would all experience different latencies. What is the cheapest way to get every user under 50ms? How many data centers would you need, and where would you put them?

Hint: Think about the speed-of-light constraint — you cannot make light go faster, but you can move the data closer.

Submarine Cables — The Internet's Backbone Is on the Ocean Floor

Forget satellites. 99% of intercontinental internet traffic travels through cables on the ocean floor. There are 552 active submarine cables as of 2024, spanning over 1.4 million km. Each one costs $300 million to $500 million to build and has a lifespan of about 25 years.

Submarine Cables: India's Connections to the World

India's main landing points are Mumbai and Chennai. The major systems:

  • AAE-1: India → Europe. Mumbai → Marseille, 25,000 km.
  • i2i: India → Singapore. Chennai → Singapore, 3,200 km.
  • SEA-ME-WE 6 (2025): Singapore → Marseille, 19,200 km.
  • 2Africa: India → Africa. Meta-backed, 45,000 km total.

Each cable: ~17mm diameter (garden hose), fiber strands thinner than hair, Kevlar armor, 25-year lifespan. Yes, sharks bite them — Google wraps cables in Kevlar. Yes, anchors cut them — 100+ faults per year worldwide.

In January 2008, a ship's anchor near Alexandria, Egypt, severed the SEA-ME-WE 4 and FLAG cables almost simultaneously. The result: 70% of Egypt's internet went down, India lost 50-60% of its international bandwidth, and countries from Pakistan to Saudi Arabia saw massive disruptions. Two cables. One anchor. Millions of people offline. Repairs took weeks — specialized cable ships had to sail to the fault locations, grapple the cable from the ocean floor (sometimes 3,000+ meters deep), splice in new fiber, and re-lay it. The internet is more fragile than people realize.

CDNs — Bringing Content Closer to You

If every request for google.com had to travel to Virginia, the internet would be unbearably slow. Instead, companies like Google, Cloudflare, and Akamai place copies of their content in hundreds of cities worldwide. When you request a webpage, a CDN (Content Delivery NetworkA global network of edge servers that cache and serve content from the nearest location to the user. Cloudflare has 300+ data centers in 100+ countries. Akamai has 365,000+ servers in 135 countries. Instead of fetching content from a faraway origin server, you get it from a server that is 10-50ms away.) serves it from the nearest location.

Terminal — which edge served you?
# See which Cloudflare edge served you:
curl -sI https://cloudflare.com | grep -E "cf-ray|server"
# server: cloudflare
# cf-ray: 8a1b2c3d4e-BOM    ← "BOM" = Mumbai airport code = Mumbai edge

# See which Google edge served you:
curl -sI https://google.com | grep server
# server: gws   ← Google Web Server

# Check your latency to Cloudflare's nearest edge:
curl -o /dev/null -s -w "Time to connect: %{time_connect}s\n" https://cloudflare.com
# Time to connect: 0.012s  ← 12ms = the edge is in your city

The cf-ray header ending in BOM tells you the Cloudflare edge in Mumbai served your request. Airport codes are used as data center identifiers: BOM (Mumbai), DEL (Delhi), MAA (Chennai), SIN (Singapore), IAD (Ashburn/Virginia). If you are in Bangalore, your Google request might hit the BLR or MAA edge — never crossing an ocean.

Google's Private Network — The Internet Within the Internet

Google does not rely on the public internet to move data between its 33+ data centers. It operates its own private backbone — a global fiber network that connects its data centers directly, bypassing the public internet entirely. This is called a private WANA Wide Area Network owned and operated by a single company. Google's B4 network connects their data centers for internal traffic (replication, backups, machine learning training). Their B2 network handles user-facing traffic. Both use custom-built switches (Jupiter) that handle 1 Pbps (1 petabit per second) per cluster..

Google owns or leases capacity on 18+ submarine cables. Their custom-built Jupiter switches handle 1 Pbps (1 petabit per second) per cluster. Their Andromeda virtual network stack can move 100+ Gbps between two VMs in the same data center. The numbers are absurd because the scale is absurd — Google handles over 8.5 billion searches per day, 500 hours of YouTube video uploaded per minute, and 2 billion Gmail users.

Amazon (AWS), Microsoft (Azure), and Meta have built similar private backbones. The irony: the "public internet" increasingly rides on infrastructure owned by a handful of companies. More on this in the Anti-Lesson section below.

Famous Internet Outages — When the Unbreakable Broke

The internet was designed to survive nuclear war (ARPANET's original motivation). But it still breaks, and when it does, it breaks spectacularly:

October 4, 2021 — Facebook BGP Withdrawal (6 hours): During a routine maintenance operation, a Facebook engineer ran a command that accidentally withdrew all of Facebook's BGP routes. Every router on the internet promptly forgot how to reach Facebook. Facebook, Instagram, WhatsApp, and Oculus all went dark. The worst part: Facebook's internal tools (the ones engineers needed to fix the problem) were hosted on the same infrastructure. Their badge readers stopped working, and engineers reportedly needed physical tools to get through locked doors and reach the routers. The outage lasted 6 hours and cost an estimated $100 million in revenue.

October 21, 2016 — Dyn DDoS (Mirai Botnet): The Mirai botnetA botnet made up of IoT devices (security cameras, DVRs, routers) infected with malware. The default passwords on these devices were never changed, so the malware logged in via Telnet and recruited them. At its peak, Mirai controlled 600,000+ devices and generated 1.2 Tbps of attack traffic. — an army of 600,000 hacked security cameras and DVRs — hit Dyn, a major DNS provider, with 1.2 Tbps of traffic. Dyn's DNS servers went down, and with them went Twitter, Netflix, Reddit, CNN, Spotify, and hundreds of other sites. Not because those sites were attacked — their DNS provider was. One DNS company going down took out half the internet.

February 24, 2008 — Pakistan YouTube Hijack: The Pakistan government ordered ISPs to block YouTube. Pakistan Telecom decided to do this by announcing via BGP that they owned YouTube's IP range (208.65.153.0/24). This BGP announcement leaked to the global internet. Because YouTube's own announcement was the broader 208.65.152.0/22, the hijacked /24 was the more specific prefix, and longest-prefix matching means the more specific route always wins. Routers worldwide started sending YouTube traffic to Pakistan instead of Google. YouTube was unreachable globally for about 2 hours until the bad route was corrected — YouTube counter-announced even more specific /25 routes to claw its traffic back.

The pattern: Every major outage involves one of three things — a BGP misconfiguration (Facebook, Pakistan), a DNS failure (Dyn), or a physical cable cut (Egypt 2008). These are the internet's single points of failure. System design interviews love to ask: "How would you make this resilient?" The answer always involves redundancy at these three layers.
Section 10

The Anti-Lesson — Things You Might Believe That Are Wrong

Every topic has myths that sound reasonable but fall apart under scrutiny. Here are three about the internet that trip up even experienced engineers.

Myth #1: "You need to understand all the layers to be a good web developer"

This is wrong. Most web developers never think below HTTP. If you are building a React frontend, a REST API, or a CRUD application, your world starts at the application layer (HTTP) and ends at your framework. You do not need to understand TCP congestion windows or IP routing tables to build a great web app.

The layers exist so you do not have to think about them. That is the entire point of layered architecture — each layer abstracts away the complexity below it. Your OS handles TCP. Your cloud provider handles routing. Submarine cable operators handle the physical layer. You handle HTTP.

When layer knowledge DOES matter: debugging latency problems ("why is this API call taking 300ms?"), designing globally distributed systems ("should we put servers in Mumbai or Singapore?"), handling partial failures ("what happens when a cable gets cut?"), and acing system design interviews. Know the layers exist. Know what each one does. But do not try to optimize what you are not responsible for.

Myth #2: "The internet has no single point of failure"

This is partially wrong. In theory, the internet was designed with no single point of failure — packets can route around damage, and the network is a mesh. In practice, the internet is shockingly concentrated.

Three companies — Google, Meta, and Amazon — own or lease capacity on the majority of submarine cables. Cloudflare, AWS, and Google Cloud together handle DNS for a massive percentage of websites. Ashburn, Virginia, is a single town through which, by some widely cited estimates, as much as 70% of the world's internet traffic passes. Cut 15 specific submarine cables and you could take much of the internet offline.

The internet is resilient to random failures (a router crashing, a single cable getting cut). It is remarkably fragile to targeted failures. In 2008, two cable cuts near Egypt disrupted internet for dozens of countries. In 2021, a single BGP mistake took Facebook offline globally for 6 hours. In 2016, attacking one DNS provider (Dyn) took out Twitter, Netflix, and Reddit simultaneously.

The system design takeaway: Never assume the internet is reliable. Always design for failure. Use multiple DNS providers. Use multiple CDNs. Deploy in multiple regions. The internet is robust in aggregate, but any individual path through it is fragile.
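The "use multiple DNS providers" advice above is really a failover pattern, and it can be sketched in a few lines. The resolver functions below are stand-ins invented for illustration (a real implementation would query actual providers like 1.1.1.1 and 8.8.8.8); the point is the shape: try each redundant provider in order, and only fail when all of them do.

```python
def resolve_with_failover(name, resolvers):
    """Try each resolver in order; return the first answer, raise if all fail."""
    errors = []
    for resolver in resolvers:
        try:
            return resolver(name)
        except Exception as exc:  # real code would catch the resolver's specific errors
            errors.append(exc)
    raise RuntimeError(f"all {len(resolvers)} resolvers failed for {name}: {errors}")

# Stand-in resolvers for illustration only:
def flaky_resolver(name):
    raise TimeoutError("primary DNS provider is down")

def backup_resolver(name):
    return "142.250.195.68"  # pretend answer from the secondary provider

print(resolve_with_failover("google.com", [flaky_resolver, backup_resolver]))
# prints 142.250.195.68: the backup answered after the primary timed out
```

The same pattern applies one level up: multiple CDNs, multiple regions, multiple certificate authorities. Redundancy only helps if the fallback path is actually exercised, which is why the loop tries providers rather than just configuring a second one and hoping.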

Myth #3: "Upgrading my bandwidth will reduce my latency"

This is wrong, and it is one of the most common misconceptions. Bandwidth (how many bytes per second) and latency (how long for one byte to make the round trip) are completely different things. Upgrading from 100 Mbps to 1 Gbps does not make your ping to Google faster. Not even a little bit.

Here is a concrete example. Say you have a 100 Mbps connection with 20ms latency to Google. You upgrade to 1 Gbps. Your latency to Google is still 20ms. It has not changed at all. What changed is that you can now download a 1 GB file in ~8 seconds instead of ~80 seconds. Bandwidth is the width of the pipe. Latency is the length of the pipe. Making the pipe wider does not make it shorter.

The math that proves it

100 Mbps connection, 20ms latency:

Time to load a 143 KB page = 20ms (RTT) + 143,000 bytes / 12.5 MB/s = 20ms + 11.4ms = 31.4ms

1 Gbps connection, 20ms latency:

Time to load a 143 KB page = 20ms (RTT) + 143,000 bytes / 125 MB/s = 20ms + 1.1ms = 21.1ms

The bandwidth upgrade saved 10ms. But the 20ms latency is still there. For small requests (most web API calls), latency dominates bandwidth. The only way to reduce latency is to reduce physical distance (use a CDN) or reduce round trips (use HTTP/2, TLS 1.3, or QUIC).
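The arithmetic above is easy to reproduce yourself. A small helper (the function name is ours, not from any library) makes the point concrete: widening the pipe shrinks only the transfer term, never the RTT term.

```python
def load_time_ms(size_bytes, bandwidth_mbps, rtt_ms):
    """One round trip plus transfer time for a payload of size_bytes."""
    transfer_ms = size_bytes * 8 * 1000 / (bandwidth_mbps * 1_000_000)
    return rtt_ms + transfer_ms

print(round(load_time_ms(143_000, 100, 20), 1))     # 31.4  at 100 Mbps
print(round(load_time_ms(143_000, 1000, 20), 1))    # 21.1  at 1 Gbps
print(round(load_time_ms(143_000, 10_000, 20), 1))  # 20.1  at 10 Gbps: the latency floor
```

Notice the diminishing returns: each 10x bandwidth jump buys less, because the total converges on the 20ms RTT no matter what.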

This is why CDNs exist. You cannot make light faster. But you can put the server closer. A CDN edge server 10ms away will always beat a powerful origin server 130ms away, no matter how much bandwidth you have.

Section 11

Common Mistakes — Things Developers Get Wrong

These are not "gotcha" trivia questions. These are real misconceptions that lead to bad debugging, wrong capacity planning, and embarrassing interview answers. Each one below includes a terminal command so you can prove it to yourself.

Mistake #1: "HTTP and HTTPS are the same except for the lock icon"

This is dangerously wrong. The lock icon is the visible symptom of a massive difference underneath. HTTPS wraps every byte of your conversation in a TLSTransport Layer Security — the cryptographic protocol that encrypts data between your browser and the server. TLS 1.3 (current version) requires just 1 round trip to establish encryption, down from 2 in TLS 1.2. It negotiates a cipher suite, verifies the server's certificate, and generates session keys — all before a single byte of your data is sent. tunnel. That tunnel does not come free — it costs time and CPU.

Run both commands and compare the output:

Terminal — HTTP request
curl -v http://httpbin.org/get 2>&1 | head -12

* Trying 54.208.105.16:80...
* Connected to httpbin.org port 80          # ← Port 80, no encryption
> GET /get HTTP/1.1                         # ← Request sent IMMEDIATELY
> Host: httpbin.org
> User-Agent: curl/8.4.0
>
< HTTP/1.1 200 OK                           # ← Response comes back
# Total: TCP handshake (1 RTT) + data transfer
# No TLS. No certificates. No cipher negotiation.
# Anyone on the same Wi-Fi can read this in Wireshark.
Terminal — HTTPS request
curl -v https://httpbin.org/get 2>&1 | head -20

* Trying 54.208.105.16:443...
* Connected to httpbin.org port 443         # ← Port 443, encrypted
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* Server certificate:
*   subject: CN=httpbin.org
*   issuer: C=US, O=Amazon, CN=Amazon RSA 2048 M02
*   expire date: Jan 12 23:59:59 2026 GMT
> GET /get HTTP/2                            # ← NOW the request can go
# Extra cost: 1-2 RTTs for TLS + CPU for encryption
# But: nobody on your Wi-Fi can read the content.

The real difference: HTTPS adds 1-2 extra round trips (the TLS handshake), negotiates a cipher suite (TLS_AES_256_GCM_SHA384 in the example above), validates the server's identity via certificates, and encrypts every single byte of data — including the URL path, headers, and body. With HTTP, anyone on your cafe's Wi-Fi can see exactly what pages you visit and what data you send. With HTTPS, they see only the destination IP and encrypted gibberish.

Performance note: TLS 1.3 reduced the handshake from 2 RTTs (TLS 1.2) to just 1 RTT. And with 0-RTT resumptionIf you have visited the site before, TLS 1.3 can send encrypted data in the very first packet — zero round trips of handshake overhead. The browser caches a session ticket from the previous connection and uses it to skip negotiation. The tradeoff: 0-RTT data is vulnerable to replay attacks, so it is only used for safe (GET) requests., repeat visits can start sending encrypted data immediately. The performance gap between HTTP and HTTPS has shrunk to almost nothing.
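That handshake arithmetic can be written down as a toy model (our own simplification, ignoring CPU cost and the replay-safety caveats of 0-RTT): the setup cost before the first HTTP byte is just the RTT multiplied by the number of handshake round trips.

```python
def setup_cost_ms(rtt_ms, tls_version=None, resumed=False):
    """Toy model: TCP handshake (1 RTT) plus TLS handshake RTTs."""
    tls_rtts = {None: 0, "1.2": 2, "1.3": 1}[tls_version]
    if tls_version == "1.3" and resumed:
        tls_rtts = 0  # 0-RTT session resumption
    return rtt_ms * (1 + tls_rtts)

print(setup_cost_ms(20))                       # 20: plain HTTP, TCP only
print(setup_cost_ms(20, "1.2"))                # 60: TLS 1.2 adds 2 RTTs
print(setup_cost_ms(20, "1.3"))                # 40: TLS 1.3 adds 1 RTT
print(setup_cost_ms(20, "1.3", resumed=True))  # 20: resumption, same as plain HTTP
```

Under this model, a TLS 1.3 connection with resumption costs exactly what plain HTTP costs in round trips, which is why the performance argument against HTTPS has largely evaporated.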

Mistake #2: "Ping tells me how fast my internet is"

Ping measures latency, not speed. These are two completely different things. Latency is how long one tiny packet takes to make the round trip. Speed (bandwidth) is how many bytes per second you can push through the connection. One is the length of the pipe, the other is the width.

Terminal — ping vs speed test
# This measures LATENCY (round-trip time for a 64-byte packet):
ping -c 4 google.com
# PING google.com (142.250.195.68): 64 bytes
# round-trip min/avg/max = 12.3/15.1/18.7 ms
# ↑ This tells you: "15ms to reach Google and back"
# It tells you NOTHING about your bandwidth.

# This measures BANDWIDTH (download/upload throughput):
speedtest-cli --simple
# Ping: 15.2 ms
# Download: 94.37 Mbit/s
# Upload: 11.23 Mbit/s
# ↑ NOW you know your actual "speed"

# The proof: You can have 15ms ping AND 5 Mbps bandwidth (slow DSL)
# Or 15ms ping AND 1000 Mbps bandwidth (fiber)
# Same latency, vastly different throughput.

When someone complains "my internet is slow," they usually mean one of two things: either pages are taking forever to start loading (that is latency — too many round trips before the first byte arrives) or downloads are crawling (that is bandwidth — the pipe is not wide enough). Ping only diagnoses the first one. For the second, you need speedtest-cli or fast.com.
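To see why ping can say nothing about bandwidth, compute how long a 64-byte ping payload even spends on the wire. A quick calculation (our own, rounded) shows the serialization time is negligible at any plausible bandwidth, so the RTT you measure is almost entirely distance plus router processing:

```python
def serialization_delay_ms(size_bytes, bandwidth_mbps):
    """Time to clock size_bytes onto the link at a given bandwidth."""
    return size_bytes * 8 * 1000 / (bandwidth_mbps * 1_000_000)

print(round(serialization_delay_ms(64, 5), 3))     # 0.102 ms on slow 5 Mbps DSL
print(round(serialization_delay_ms(64, 1000), 4))  # 0.0005 ms on gigabit fiber
# Either way, it is invisible next to a 15 ms RTT: ping measures distance, not width.
```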

Mistake #3: "My IP address is permanent, right?"

Almost certainly not. Most home internet connections use DHCPDynamic Host Configuration Protocol — your router asks the ISP's DHCP server for an IP address, and gets one on a temporary lease (usually 24-72 hours). When the lease expires, you might get a different IP. This is why hosting a website from your home connection is unreliable — your address keeps changing., which hands out IP addresses on a temporary lease. Your public IP today might not be your public IP tomorrow.

Terminal — check your public IP
# Check your public IP right now:
curl ifconfig.me
# 103.42.156.78  (your ISP-assigned address)

# Run this again tomorrow after restarting your router:
curl ifconfig.me
# 103.42.156.112  (different! DHCP gave you a new lease)

# To check your DHCP lease on Windows:
ipconfig /all | findstr "Lease"
# Lease Obtained: Tuesday, March 18, 2026 9:15:23 AM
# Lease Expires:  Wednesday, March 19, 2026 9:15:23 AM
# ↑ 24-hour lease — your IP can change daily

Why this matters for system design: If you are building a system that identifies users by IP address (rate limiting, geolocation, fraud detection), remember that IPs change. Multiple people behind the same office NATNetwork Address Translation — your home router maps many private IPs (192.168.1.x) to a single public IP. Every device in your house shares one public address. This is why IPv4 still works despite there being more devices than available addresses — NAT lets thousands of devices hide behind one IP. share a single public IP. A single user on mobile can change IPs every few minutes as they move between cell towers. IP-based identification is unreliable.
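If you do need to reason about client IPs, Python's standard ipaddress module already knows the RFC 1918 ranges, so "is this address behind a NAT?" is a one-liner. A minimal sketch (the helper name is ours):

```python
import ipaddress

def looks_natted(ip):
    """True if the address is in a private (RFC 1918), loopback, or link-local range."""
    return ipaddress.ip_address(ip).is_private

print(looks_natted("192.168.1.5"))     # True: home LAN address
print(looks_natted("10.0.0.7"))        # True: office NAT range
print(looks_natted("142.250.195.68"))  # False: public address (Google)
```

If a "client IP" your server sees is in a private range, it almost certainly arrived through a proxy or misconfigured header, which is one more reason not to trust IPs as identifiers.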

Mistake #4: "The cloud is virtual, not a physical place"

The internet is as physical as a highway system. When someone says "the cloud," they mean someone else's computer, sitting in a specific building, in a specific city, connected by physical cables buried under the ocean floor.

The "Cloud" Is Actually This SUBMARINE CABLES 552 cables worldwide Garden-hose thick, ocean floor DATA CENTERS 10,000+ globally Ashburn VA alone: 300+ DCs EXCHANGE POINTS 900+ IXPs worldwide Where ISPs connect to each other POWER ~1-2% of world electricity ~200-250 TWh per year THE NUMBERS: 552 submarine cables 1.4M km of fiber on ocean floor 5+ billion internet users ~63% of world population Google: 33 data centers in 22 countries, 6 continents Explore the real map: submarinecablemap.com

Next time someone says "it's in the cloud," ask them: which building? If it is on AWS us-east-1, the answer is "a data center in Ashburn, Virginia, about 40 km west of Washington DC." That is not a metaphor. That is a physical building with security guards, diesel generators, and cooling towers burning through megawatts of power.

Mistake #5: "IPv6 will replace IPv4 soon"

We have been saying "soon" since 1998. IPv6 was standardized in RFC 2460 in December 1998 (later superseded by RFC 8200) — nearly three decades ago. As of 2026, Google's own statistics show IPv6 adoption at roughly 45% globally. That sounds high, but it means 55% of the internet still runs on IPv4.

The reason IPv4 refuses to die is NATNetwork Address Translation — the technology that lets your entire household share one public IPv4 address. Your router translates between private addresses (192.168.1.x) and your single public IP. NAT was meant to be a temporary hack in the 1990s. Three decades later, it is a permanent fixture because it works well enough that nobody feels the urgency to switch to IPv6.. In the early 1990s, engineers realized IPv4's 4.3 billion addresses would not be enough. NAT was invented as a temporary hack — let many devices share one public IP. But NAT worked so well that it killed the urgency to migrate. Your home right now probably has 10-30 devices (phones, laptops, TVs, smart speakers) all sharing one public IPv4 address via NAT. It just works.

Terminal — IPv4 vs IPv6 on your machine
# Check if you have IPv6:
curl -6 ifconfig.me
# 2409:40c0:1e:b4a1::3f2   (if your ISP supports IPv6)
# OR: "curl: (7) Couldn't connect" (if your ISP doesn't)

# Check both:
dig google.com A      # IPv4: 142.250.195.68
dig google.com AAAA   # IPv6: 2404:6800:4009:826::200e

# IPv4 addresses left: ZERO unallocated /8 blocks since 2011
# But NAT + address trading keeps IPv4 alive

The system design takeaway: Design your systems to support both IPv4 and IPv6 (called dual-stack). Do not assume IPv6 is available everywhere. Mobile networks are the furthest ahead (T-Mobile US is almost 100% IPv6). Enterprise networks and legacy ISPs are the furthest behind. IPv4 and IPv6 will coexist for at least another decade.
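For dual-stack code, the stdlib ipaddress module parses both families and handles IPv6's compressed notation for you. A small sketch:

```python
import ipaddress

v4 = ipaddress.ip_address("142.250.195.68")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version)     # 4
print(v6.version)     # 6
print(v6.compressed)  # 2001:db8:85a3::8a2e:370:7334 (the short form)
print(ipaddress.ip_address("::1").is_loopback)  # True: IPv6 localhost
```

Branching on .version is usually all a dual-stack application needs; the module normalizes every legal textual form of an address into one canonical object.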

Mistake #6: "More bandwidth means faster browsing"

After about 25 Mbps, adding more bandwidth barely helps for normal web browsing. This surprises everyone, but the math proves it. A typical webpage is about 2-3 MB. On a 25 Mbps connection, that is about 1 second of download time. On a 1 Gbps connection, it is 0.025 seconds. But the page still takes 1-3 seconds to load. Why? Because the bottleneck is not bandwidth — it is round trips.

Terminal — measuring what actually matters
# Measure the actual timing breakdown:
curl -w "\n  DNS:      %{time_namelookup}s\n  Connect:  %{time_connect}s\n  TLS:      %{time_appconnect}s\n  First byte: %{time_starttransfer}s\n  Total:    %{time_total}s\n" \
  -o /dev/null -s https://google.com

#   DNS:      0.018s     ← 1 RTT to DNS server
#   Connect:  0.035s     ← 1 RTT for TCP handshake
#   TLS:      0.068s     ← 1-2 RTTs for TLS handshake
#   First byte: 0.102s   ← 1 RTT for the HTTP request
#   Total:    0.145s     ← data transfer time

# 68ms of 145ms is ROUND TRIPS (latency), not data transfer.
# Going from 100 Mbps to 10 Gbps would barely change Total.

Loading a modern webpage requires: a DNS lookup (1 RTT), a TCP handshake (1 RTT), a TLS handshake (1-2 RTTs), the HTTP request/response (1 RTT), then fetching CSS, JavaScript, fonts, and images (many more RTTs). That is 5+ round trips minimum before the page is usable. If each round trip is 30ms, that is 150ms of unavoidable latency that no amount of bandwidth can fix. This is exactly why HTTP/2The second major version of HTTP, standardized in 2015. Its key innovation is multiplexing — sending multiple requests and responses over a single TCP connection simultaneously. In HTTP/1.1, the browser had to wait for each response before sending the next request (or open 6+ parallel connections). HTTP/2 eliminated this bottleneck. (multiplexing) and HTTP/3The third major version of HTTP, based on QUIC instead of TCP. QUIC combines the TCP handshake and TLS handshake into a single round trip, and eliminates head-of-line blocking. Google developed QUIC and it is now used by Chrome, YouTube, and many CDNs. (0-RTT QUIC) exist — they reduce round trips, not bandwidth.
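A toy model (our own simplification, ignoring bandwidth and server time entirely) shows why cutting round trips beats widening the pipe. Assume each wave of resource fetches costs one RTT, HTTP/1.1 allows 6 parallel connections, and HTTP/2 multiplexes everything over one:

```python
import math

def fetch_time_ms(n_resources, rtt_ms, protocol="h1"):
    """Toy round-trip cost of fetching n resources after connection setup."""
    if protocol == "h1":
        rounds = math.ceil(n_resources / 6)  # browsers cap HTTP/1.1 at ~6 connections
    else:  # "h2": all streams multiplexed on a single connection
        rounds = 1
    return rounds * rtt_ms

print(fetch_time_ms(24, 30, "h1"))  # 120: four sequential waves of 6 requests
print(fetch_time_ms(24, 30, "h2"))  # 30: one multiplexed round trip
```

Real pages are messier (dependencies between resources, congestion windows, prioritization), but the direction of the result holds: multiplexing attacks the RTT count, which is the term bandwidth upgrades cannot touch.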

Section 12

Interview Playbook — "What Happens When You Type google.com?"

This is the single most-asked networking question in technical interviews. It appears in phone screens, onsite rounds, and system design discussions. The depth of your answer determines your level. Here is how to structure it at each career stage.

The Classic Interview Question — Answer Structure DNS Name → IP TCP 3-way handshake TLS Encrypt channel HTTP GET / RESPONSE HTML, CSS, JS RENDER DOM → pixels DEPTH BY LEVEL: JUNIOR Name each step Explain what each does Mention: caching, status codes MID-LEVEL Add TLS cipher negotiation HTTP/2 multiplexing, CDNs Include RTT math SENIOR / STAFF BGP anycast, QUIC/HTTP3 TCP congestion (Cubic vs BBR) Maglev LB, DPDK, kernel bypass Total time (cold): ~300-500ms | Total time (warm, cached): ~50-100ms The difference between cold and warm is mostly DNS caching + TLS session resumption

Junior Level

What they want to hear: You understand the basic flow and can name each step in order.

Sample answer:

"When I type google.com and press Enter, the browser first checks its DNS cache for the IP address. If it is not cached, it sends a DNS query — usually to my ISP's resolver — which walks the DNS hierarchy: root server, then the .com TLD server, then Google's authoritative nameserver. That comes back with an IP like 142.250.195.68.

Next, my browser opens a TCP connection to that IP on port 443 using a three-way handshake — SYN, SYN-ACK, ACK. Then it does a TLS handshake to encrypt the connection. Once the secure tunnel is established, it sends an HTTP GET request for the page.

Google's server responds with a 200 status code and the HTML. The browser parses the HTML, discovers it needs CSS, JavaScript, and images, and fetches those too — often from a CDN edge server nearby rather than Google's origin server. Finally, the browser builds the DOM, applies styles, runs JavaScript, and paints pixels on screen."

Bonus points: Mention that the browser caches the DNS result (TTL-based), reuses the TCP connection for subsequent requests (keep-alive), and that resources like Google's logo are probably already cached from a previous visit.

Mid-Level

What they want to hear: You understand performance implications and can reason about trade-offs.

Build on the Junior answer, then add:

"Let me go deeper on the performance side. The DNS lookup might skip the hierarchy entirely if my OS, router, or ISP has it cached — Google's DNS TTL is typically 300 seconds. If it is a cold lookup, that is 1 RTT to the resolver plus potentially 3 more hops (root, TLD, authoritative), so ~50-100ms.

The TLS handshake in TLS 1.3 is just 1 RTT — the client sends supported ciphers and a key share in the Client Hello, the server responds with its choice and its key share in the Server Hello, and both sides derive the session key. If I have visited Google before, 0-RTT resumption means encrypted data goes in the first packet.

"Google uses HTTP/2, so all subsequent resources — CSS, JS, images, fonts — are fetched over the same TCP connection via multiplexed streams. No more 6-connection limit. (HTTP/2 also defined server push, letting the server send critical CSS before the browser asks, though major browsers have since dropped support for it.) And Google almost certainly serves me from a CDN edge in my city, not from Virginia — so my RTT is maybe 5-10ms instead of 150ms."

RTT math they love: Cold load to a server 150ms away: DNS (150ms) + TCP (150ms) + TLS (150ms) + HTTP (150ms) = 600ms before first byte. CDN edge 10ms away with cached DNS: DNS (0ms) + TCP (10ms) + TLS (10ms) + HTTP (10ms) = 30ms. That is a 20x improvement. This is why CDNs exist.

Senior / Staff Level

What they want to hear: You understand infrastructure-level decisions and can discuss systems at Google/Meta scale.

Build on the Mid answer, then add:

"Let me talk about what happens before the request even reaches Google's servers. When my resolver queries for google.com, it gets an IP that is actually an anycastA routing technique where the same IP address is announced from multiple locations worldwide. When you send a packet to an anycast IP, BGP routing automatically sends it to the nearest (in network terms) location. Google, Cloudflare, and all root DNS servers use anycast. Your DNS query to 8.8.8.8 goes to whichever Google DNS server is closest to you. address — the same IP is announced via BGP from dozens of Google edge locations worldwide. The BGP routing system automatically sends my request to the nearest Google PoP.

Inside that PoP, the packet hits a MaglevGoogle's custom network load balancer, described in their 2016 paper. Each Maglev machine can handle 10 Gbps of traffic. It uses consistent hashing to distribute connections across backend servers, ensuring that packets from the same TCP connection always reach the same server — even if a Maglev machine goes down. load balancer — Google's custom L4 load balancer that uses consistent hashing to distribute connections. Maglev handles 10 Gbps per machine and uses ECMP (Equal-Cost Multi-Path) so that multiple Maglev instances share the load without any single point of failure.

On the transport layer, Google is increasingly using QUIC (the protocol underneath HTTP/3). QUIC replaces TCP+TLS with a single protocol built on UDP. The handshake combines TCP and TLS into 1 RTT (or 0 RTT for repeat connections). QUIC also eliminates head-of-line blocking — if one stream's packet is lost, other streams on the same connection keep flowing. This is a huge win for multiplexed connections.

For congestion control, Google's servers use BBR (Bottleneck Bandwidth and Round-trip propagation time) instead of the traditional Cubic algorithm. Cubic backs off aggressively on any packet loss — which is fine for wired networks, but terrible for mobile networks where packet loss is often due to radio interference, not congestion. BBR measures actual bandwidth and RTT to find the optimal sending rate. Google reported 4-14% improvement in YouTube throughput after deploying BBR.

At the highest scale, companies like Cloudflare and Google use kernel bypass techniques — DPDK (Data Plane Development Kit) or XDP (eXpress Data Path) — to handle packets in userspace, skipping the OS kernel entirely. This gets you from ~1M packets/second (kernel) to ~10M+ packets/second on the same hardware. It is how a single Cloudflare server handles DDoS traffic at 100+ Gbps."

A realistic interview dialogue:

Interview — System Design Round
Interviewer: "Walk me through what happens when you type google.com."

You: [Give the Junior answer — DNS, TCP, TLS, HTTP, render]

Interviewer: "Good. Now, why does Google use a CDN instead of
              serving everything from one data center?"

You: "Latency. Light in fiber travels at about 200,000 km/s.
      Virginia to Mumbai is 13,000 km — that's 65ms one way,
      130ms round trip, just physics. A CDN edge in Mumbai cuts
      that to 5ms. For every page load, you save 4+ RTTs,
      which is 4 × 125ms = 500ms. That's the difference
      between 'fast' and 'did it load yet?'"

Interviewer: "What if I told you Google actually measures closer
              to 300ms for a full page load from Mumbai?"

You: "That makes sense. DNS (~20ms if cached) + TCP handshake
      (~5ms to CDN) + TLS 1.3 (~5ms) + server processing
      (~50-100ms for search, which can't be cached) + HTML
      transfer + sub-resource fetches. The CDN handles static
      assets, but the search results themselves come from
      Google's backend — probably the Mumbai GCP region.
      The 300ms is mostly server-side computation, not network."
Section 13

Hands-On Challenges — Learn by Doing

Reading about the internet is one thing. Watching packets fly across your screen in real time is another. These five challenges are ordered from "run one command" to "build something real." Each one teaches a concept better than any textbook paragraph ever could.

Beginner Trace Your Own Path to Google

Every time you visit a website, your data hops through 10-20 routers across multiple cities and countries. In this exercise, you will see the actual path — every router, every city, every hop.

Terminal — traceroute
# Trace the path to Google:
traceroute google.com     # macOS/Linux
tracert google.com        # Windows

# Trace the path to Netflix:
traceroute netflix.com

# Sample output (from Bangalore):
# 1  192.168.1.1      1ms    (your router)
# 2  10.0.0.1         5ms    (ISP local node)
# 3  103.42.156.1     8ms    (ISP backbone, Bangalore)
# 4  72.14.194.226    12ms   (Google edge, Chennai)
# 5  142.250.195.68   15ms   (Google server, Mumbai CDN)

Map each IP to a city using a free geolocation tool like ipinfo.io. Run curl ipinfo.io/72.14.194.226 for each hop. You will see your data cross city boundaries (Bangalore to Chennai to Mumbai). Notice how Google traffic stays short — Google has edge servers in most major cities. Now trace traceroute bbc.co.uk and watch your data cross undersea cables to London. The hop with the biggest latency jump is where the submarine cable is.

Intermediate Capture a TCP Handshake in Wireshark

You have read about the three-way handshake (SYN, SYN-ACK, ACK). Now you are going to see it happen in real time, with real timestamps, on your own machine.

1. Install Wireshark from wireshark.org (free, open-source).

2. Start a capture on your active network interface (usually Wi-Fi or Ethernet).

3. Open a new browser tab and visit https://example.com.

4. Stop the capture.

5. In the filter bar, type: tcp.flags.syn==1 && ip.addr==93.184.216.34

(93.184.216.34 was example.com's long-standing IP and may have changed — use the address dig example.com gives you)

6. You should see exactly two packets: the SYN (you to server) and the SYN-ACK (server to you).

7. Calculate RTT: subtract the SYN timestamp from the SYN-ACK timestamp. That is the round-trip time to example.com's server.

Expected result: If example.com's server is in the US and you are in India, the RTT should be ~150-200ms. If you are in the US, it should be ~10-30ms. This number is the speed of light through fiber plus router processing time — and it is the number that matters most for web performance.
Intermediate DNS Deep Dive with dig +trace

In this exercise, you will watch DNS resolution happen step by step — from the root of the internet all the way down to Google's authoritative nameserver.

Terminal — DNS trace
# Trace the full DNS resolution chain:
dig +trace google.com

# Questions to answer:
# 1. Which root server responded? (a.root-servers.net? f.root-servers.net?)
# 2. Which .com TLD server was used?
# 3. What is the TTL on Google's A record? (hint: it's 300 seconds)
# 4. How long did the entire resolution take? (check "Query time")

# Bonus — compare TTLs:
dig google.com    # TTL: 300 (5 minutes — Google wants fast updates)
dig wikipedia.org # TTL: 600 (10 minutes)
dig bbc.co.uk     # TTL: 300 (5 minutes)
dig gov.uk        # TTL: often 3600+ (1 hour — government sites change slowly)

The TTL (Time To Live) on a DNS record tells every cache in the chain how long to remember the answer. Google sets theirs to 300 seconds (5 minutes) because they frequently shift traffic between data centers for load balancing. A government site might set 3600 seconds (1 hour) because their IP rarely changes. If you are doing a DNS migration (moving your site to a new server), you should lower the TTL to 60 seconds days before the migration, so that when you change the IP, caches worldwide update within a minute instead of an hour.
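The migration advice is easy to simulate with a toy TTL-honoring cache. This is a sketch with an injected clock, not a real resolver; the class and the 203.0.113.10 address (a documentation range) are invented for illustration:

```python
class ToyDnsCache:
    """Minimal TTL-honoring DNS cache; `now` is injected so time can be faked."""
    def __init__(self):
        self.entries = {}  # name -> (ip, expires_at)

    def put(self, name, ip, ttl, now):
        self.entries[name] = (ip, now + ttl)

    def get(self, name, now):
        ip, expires_at = self.entries.get(name, (None, 0))
        return ip if now < expires_at else None  # expired entries count as misses

cache = ToyDnsCache()
cache.put("example.org", "203.0.113.10", ttl=3600, now=0)
print(cache.get("example.org", now=3599))  # 203.0.113.10: still cached
print(cache.get("example.org", now=3601))  # None: expired, the resolver must re-query
```

With ttl=3600, a stale IP can linger for up to an hour after you change the record; drop the TTL to 60 before a migration and the worst case shrinks to a minute. That is the entire trick.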

Advanced Build a Simple HTTP Proxy

The best way to understand HTTP is to sit in the middle and watch it flow past you. In this exercise, you will write a tiny proxy server — about 20 lines of Python — that forwards requests and logs every single one.

Use Python's built-in http.server module. Your proxy needs to: (1) listen on a local port like 8080, (2) read the incoming HTTP request, (3) forward it to the actual destination, (4) return the response to the client, and (5) log the method, URL, and status code to the console.

proxy.py
"""Minimal HTTP proxy — forwards requests and logs them."""
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.request import urlopen, Request
from urllib.error import URLError

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = self.path[1:]  # strip leading /
        if not url.startswith("http"):
            url = "http://" + url
        try:
            req = Request(url, headers={"User-Agent": "TinyProxy/1.0"})
            resp = urlopen(req, timeout=10)
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", len(body))
            self.end_headers()
            self.wfile.write(body)
            print(f"[OK]  GET {url} → {resp.status} ({len(body)} bytes)")
        except URLError as e:
            self.send_response(502)
            self.end_headers()
            print(f"[ERR] GET {url} → {e}")

    def log_message(self, *args):
        pass  # suppress default logs

print("Proxy listening on http://localhost:8080")
print("Usage: curl http://localhost:8080/http://example.com")
HTTPServer(("", 8080), ProxyHandler).serve_forever()

Run it with python proxy.py, then in another terminal: curl http://localhost:8080/http://example.com. You will see the proxy fetch example.com, log the request, and return the HTML to your curl. Every web proxy (Nginx, HAProxy, Envoy) does exactly this — just at a much larger scale.

Expert Map Global Latency with mtr

mtr (My Traceroute) combines ping and traceroute into a single tool that continuously monitors the path to a destination. In this exercise, you will measure latency to servers on 3 different continents and build a comparison chart.

Terminal — mtr multi-continent
# Install mtr (if not present):
# macOS: brew install mtr
# Ubuntu: sudo apt install mtr
# Windows: use WinMTR (GUI) from winmtr.net

# Run continuous monitoring to 5 endpoints:
mtr --report --report-cycles 100 google.com          # Nearest CDN
mtr --report --report-cycles 100 bbc.co.uk           # Europe (London)
mtr --report --report-cycles 100 yahoo.co.jp         # Asia (Tokyo)
mtr --report --report-cycles 100 news.com.au         # Oceania (Sydney)
mtr --report --report-cycles 100 amazon.com.br       # South America

# Record the average latency to each destination.
# Plot a bar chart. You'll see the speed of light in action:
# nearby CDN: ~5-15ms | same continent: ~30-80ms
# cross-ocean: ~150-300ms

The biggest latency jumps happen at undersea cable crossings. If you are in India, the jump from your ISP to London will be ~100ms — that is the cable crossing the Arabian Sea and Mediterranean. The jump from India to the US east coast will be ~180ms (across the Pacific or through Europe). These numbers are dictated by physics: light travels through fiber at about 200,000 km/s, and the cable paths are not straight lines — they follow ocean floor geography, avoiding deep trenches and earthquake zones. No software optimization can make these numbers smaller. The only solution is to put servers closer to users — which is exactly what CDNs do.
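The physics floor you just measured is one line of arithmetic. A sketch using the 200,000 km/s figure from above (the function name is ours):

```python
def min_rtt_ms(distance_km, speed_km_s=200_000):
    """Lower bound on round-trip time through fiber, ignoring all router delay."""
    return 2 * distance_km * 1000 / speed_km_s

print(min_rtt_ms(13_000))  # 130.0: Bangalore to Virginia, pure physics
print(min_rtt_ms(1_000))   # 10.0: a CDN edge one country away
print(min_rtt_ms(50))      # 0.5: an edge in your own city
```

Compare these floors to your mtr averages: the gap between the two is router queuing plus the detours the cable actually takes, and the floor itself is the part no amount of engineering can remove.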

Section 14

Quick Reference — Cheat Cards

Pin these to your wall or save them as screenshots. Every number here is real, every command is something you can run right now.

The 4 Layers
APPLICATION  HTTP, DNS, SMTP, FTP, SSH
             WebSocket, gRPC, MQTT

TRANSPORT    TCP (reliable, ordered)
             UDP (fast, no guarantees)
             QUIC (UDP + reliability)

NETWORK      IP (addressing + routing)
             ICMP (ping, traceroute)
             BGP (inter-AS routing)

LINK         Ethernet, Wi-Fi, 5G
             Fiber, PPP, ARP
Key Numbers to Know
Light in fiber    200,000 km/s
Light in vacuum   300,000 km/s

LAN RTT           <1 ms
Same city          1-5 ms
Same continent    30-80 ms
Cross-ocean      150-300 ms

Submarine cable   100+ Tbps each
5G bandwidth      1-10 Gbps (peak)
Wi-Fi 6           1-9.6 Gbps (peak)
Typical home      50-500 Mbps
Essential Commands
traceroute host   Path to destination
ping host         RTT latency check
dig domain        DNS lookup + details
nslookup domain   Simple DNS lookup
curl -v url       Full HTTP request
curl -w "..."     Timing breakdown
netstat -an       Active connections
mtr host          Continuous traceroute
ss -tuln          Listening ports (Linux)
ipconfig /all     Network config (Win)
IP Address Ranges
PRIVATE (RFC 1918 — not routable):
10.0.0.0/8        16M addresses
172.16.0.0/12     1M addresses
192.168.0.0/16    65K addresses

SPECIAL:
127.0.0.1         Localhost (loopback)
0.0.0.0           All interfaces
255.255.255.255   Broadcast
169.254.x.x       Link-local (no DHCP)

IPv6 FORMAT:
2001:0db8:85a3:0000:0000:8a2e:0370:7334
Short: 2001:db8:85a3::8a2e:370:7334
Loopback: ::1
Common Port Numbers
 20/21   FTP (data/control)
    22   SSH (secure shell)
    25   SMTP (email send)
    53   DNS (name resolution)
    80   HTTP (unencrypted web)
   110   POP3 (email receive)
   143   IMAP (email receive)
   443   HTTPS (encrypted web)
   587   SMTP (submission)
  3306   MySQL
  5432   PostgreSQL
  6379   Redis
  8080   HTTP alt / proxies
  8443   HTTPS alt
 27017   MongoDB
DNS Record Types
A       Domain → IPv4 address
AAAA    Domain → IPv6 address
CNAME   Domain → another domain
MX      Domain → mail server
NS      Domain → nameserver
TXT     Arbitrary text (SPF, DKIM)
SOA     Zone authority info
SRV     Service location + port
PTR     IP → domain (reverse DNS)
CAA     Which CAs can issue certs
Section 15

Connected Topics — Where to Go Next

This page covered the full journey of a packet — from your browser to a server and back. But each step in that journey is a deep topic on its own. Here is how everything connects and what to study next.

How This Page Connects to What is Next HOW THE INTERNET WORKS DNS Name resolution deep dive TCP & UDP Transport protocols HTTP EVOLUTION 1.0 → 1.1 → 2 → 3 LOAD BALANCERS Traffic distribution CDN Edge caching, anycast PERFORMANCE Latency, throughput, TTFB