Core Principles

Scalability

WhatsApp handled 2 billion users with ~50 engineers. Instagram served 30 million with 5. Your app crashes at 10,000. This page explains exactly why — and how to fix it with real numbers, real commands, and real AWS pricing.

14 Think Firsts · 35+ SVG Diagrams · 21 Sections · 5 Exercises · 50+ Tooltips
Section 1

TL;DR — The Restaurant That Couldn't Seat Everyone

  • The two fundamental strategies for handling more users (vertical vs horizontal scaling)
  • Why "just buy a bigger server" stops working — and the exact dollar amounts where it breaks
  • How Instagram, WhatsApp, and Twitter actually scaled (with real architecture details)
  • The one word — stateless — that makes or breaks your ability to scale

Scalability is the art of serving more people without starting over from scratch.

Imagine you run a small restaurant. Twenty tables, one kitchen, one chef. On a Tuesday afternoon, it's perfect — orders come in, food goes out, everyone's happy. Then a food blogger writes about you. Suddenly there's a 90-minute wait on Friday night. What do you do?

Your first instinct: make the restaurant bigger. Rent the space next door, knock down the wall, add 40 more tables, hire a second chef. That works. But it's expensive, and eventually you run out of wall to knock down. There's a physical limit to how big one building can get.

Your second option: open another location. Same menu, same recipes, same quality โ€” just in a different part of town. Each location handles its own customers. If one has a kitchen fire and closes for a day, the other keeps serving. Want more capacity? Open a third. A fourth. There's no limit.

That's it. That's scalability. In software, "making the restaurant bigger" is called vertical scaling (adding more power to a single server — more CPU cores, more RAM, faster disks; simple, but it has a ceiling; also called "scaling up") — you make one server more powerful. "Opening new locations" is called horizontal scaling (adding more servers that share the work; no ceiling, but harder to coordinate; also called "scaling out") — you add more servers. Every scaling decision you'll ever make is some combination of these two ideas.

[Diagram: The Restaurant Problem = The Scaling Problem]
Your restaurant (1 server): 20 tables, 1 chef — works great on Tuesday. Friday night: 90-minute wait, customers leaving.
  • Option A — VERTICAL: make it bigger. 60 tables, 3 chefs, same building expanded, $500K renovation. Same recipes work; no coordination needed. But you can't expand forever, you're closed during the renovation, and one kitchen fire = total shutdown.
  • Option B — HORIZONTAL: open more locations (Downtown, Uptown, Suburbs, + more). No ceiling — open as many as you want; if one closes, the others still serve. But you need a system to route customers, and the recipes must be identical everywhere.

This isn't a toy analogy. It maps exactly to real engineering decisions. The "renovation" is server downtime during upgrades. The "identical recipes" are stateless servers (servers that don't remember anything about previous requests — every request is self-contained, so ANY server can handle ANY request, just like any branch can serve any customer) — every server runs the same code and produces the same result. The "system to route customers" is a load balancer (a server that sits in front of your app servers and decides which one handles each request, like a host at a restaurant chain who tells you which location has the shortest wait). We'll build all of this piece by piece throughout this page.

What: Scalability means your system can handle more users by adding resources — not by rewriting everything from scratch. Double the servers, handle (roughly) double the load.

When: When your htop shows CPU at 80%+, users are seeing slow responses, or your database is nearing max connections. These are the signals that you need a scaling strategy.

Key Insight: There are only two ways to scale: make one server bigger (vertical) or add more servers (horizontal). Vertical is faster to do. Horizontal has no ceiling. Every real system uses both.

Scalability boils down to two strategies: make one server bigger (vertical) or add more servers (horizontal). Vertical is fast and simple but has a hard ceiling. Horizontal has no limit but requires your servers to be stateless. Every major website — Netflix, Google, Amazon — uses horizontal scaling at its core.
Section 2

The Scenario โ€” Your App Just Went Viral

Let's make this concrete. You build a photo-sharing app. Nothing fancy — users upload photos, leave comments, follow each other. You deploy it on a single AWS t3.micro instance — Amazon's cheapest general-purpose server: 2 vCPUs, 1 GB RAM, $8.50/month, good for development and tiny apps, handling roughly 100-200 requests per second for a simple web app. It works beautifully. You post it on Reddit. People love it.

Then the numbers start climbing. And with them, everything starts breaking.

[Diagram: What Actually Happens When Your App Grows]
  • DAY 1 — 100 users, ~5 req/sec. CPU: 3%, RAM: 180 MB, response: 20ms. Life is good.
  • WEEK 2 — 5,000 users, ~80 req/sec. CPU: 55%, RAM: 780 MB (swapping), response: 800ms. Getting slow...
  • MONTH 1 — 20,000 users, ~400 req/sec. CPU: 95%, DB: 312 connections, response: 3.2 seconds. Users complaining.
  • MONTH 2 — 50,000 users, ~1,200 req/sec. OOM killed, 502 Bad Gateway, "pg: too many clients". SITE IS DOWN.
The htop view at the end: CPU pinned at 100.0%, Mem 987M/1.00G, 214 tasks (189 running), load average 47.3 23.1 12.8 — and `node server.js` OOM-killed by the kernel.

This is not a hypothetical story. This is the exact trajectory of Instagram in 2010. Five engineers. One Django app (a Python web framework — Instagram's original backend was pure Django + PostgreSQL, a stack they famously kept even at massive scale, adding caching and replication rather than rewriting). One PostgreSQL database. They went from zero to 30 million users in 18 months. The code wasn't broken. The architecture wasn't broken. What changed was the number of people hitting it at the same time — and that one variable exposed every limit in the system.

The same story played out at Twitter. In 2007-2009, Twitter ran a Ruby on Rails monolith (a framework famous for fast development but notorious for performance problems at scale — Twitter's early struggles made "Ruby doesn't scale" a meme, though the truth is more nuanced: it was their architecture, not just Ruby). Every time you loaded your timeline, the server would query the database for every person you follow, fetch their recent tweets, sort them by time, and return the result. At 100 users, this takes 15ms. At 300,000 queries per second, it produces the Fail Whale (Twitter's error page showing a whale being lifted by birds, displayed when the site was over capacity) — a cartoon whale that appeared so often from 2007-2010 that it became the most recognizable error page on the internet.

[Diagram: Real Companies, Real Scaling Walls]
  • Instagram (2010) — 5 engineers, 30M users. Stack: Django + PostgreSQL + Redis + Memcached + Nginx. Scaled READS with caching, not by adding app servers. (`redis-cli INFO` → ops/sec, hit rate.)
  • Twitter (2007-2010) — Rails monolith, 300K+ QPS reads. Timeline = query EVERY follow, sort by time, return top 20. Result: the Fail Whale. Fix: Scala rewrite + pre-computed timelines in Redis. Took 3 YEARS.
  • WhatsApp (2014-2020) — ~55 engineers, 900M → 2B users. Erlang on FreeBSD; each server held 2M+ connections (`beam.smp` handling millions of processes). Scaled VERTICALLY first, far longer than anyone expected.

Notice something interesting? Instagram scaled with caching. Twitter had to rewrite in a different language. WhatsApp scaled vertically on Erlang for years before needing more servers. There's no single "right" answer. But every one of them hit a wall where what they had wasn't enough — and they needed a strategy to get past it.

That's what this page is about. Not theory. Not definitions. The actual engineering toolkit — with real commands you can run, real AWS prices you can check, and real numbers you can do math on — that takes a system from "works for 100 people" to "works for 100 million."

Open a terminal on any Linux/Mac server and run htop. Watch the CPU bars. If they're consistently above 80%, your server is struggling. Run iostat -x 1 and check the await column — if it's above 10ms, your disk is the bottleneck. Query the pg_stat_activity view on your PostgreSQL database — if active connections are near max_connections (default: 100), you're about to get "FATAL: too many connections" errors. These aren't abstract concepts. They're things you can measure right now.
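Those rules of thumb can be wired into a quick script. A minimal sketch — the function and parameter names are my own for illustration; the thresholds (80% CPU, 10ms await, max_connections of 100) are the ones quoted above:

```python
# Sketch: the scaling signals above as a simple check. The metric names
# are illustrative, not from any real monitoring library.

def diagnose(cpu_pct, disk_await_ms, db_connections, max_connections=100):
    """Return bottleneck warnings based on the thresholds in this section."""
    warnings = []
    if cpu_pct > 80:                             # htop: CPU consistently high
        warnings.append("CPU: server is struggling, add compute")
    if disk_await_ms > 10:                       # iostat -x: await column
        warnings.append("Disk: I/O is the bottleneck")
    if db_connections >= 0.9 * max_connections:  # pg_stat_activity count
        warnings.append("DB: about to hit 'too many connections'")
    return warnings

print(diagnose(cpu_pct=95, disk_await_ms=4, db_connections=92))
```

Feed it the numbers you read off htop, iostat, and pg_stat_activity, and it tells you which wall you're closest to.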
Think First

Your single server handles 200 requests per second on an AWS t3.micro ($8.50/month). Your app just got posted on Hacker News and you're getting 1,200 requests per second. The server is maxed out โ€” htop shows 100% CPU, users are getting timeouts. What are your two options?

One option keeps your current server. The other gets you more servers. Think about what each costs, how fast you can do it, and what happens if one server dies.
Section 3

The First Attempt โ€” Just Get a Bigger Server

Your server is drowning. htop shows 100% CPU. Users are getting 502 errors. You're panicking. What's the fastest thing you can do right now, without changing a single line of code?

Get a bigger server.

On AWS, this takes about 3 minutes. You stop the instance, change the instance type from t3.micro to c5.4xlarge, and start it again. Your app goes from 2 vCPUs and 1 GB RAM to 16 vCPUs and 32 GB RAM. You didn't change any code. You didn't redesign anything. You just moved your app to a faster computer. This is vertical scaling (making a single machine more powerful by adding more CPU, RAM, or faster storage — on AWS, literally changing the instance type; also called "scaling up"; zero code changes required), and it's the first thing almost every team tries — because it works immediately.

And honestly? It works really well. WhatsApp used this approach longer than anyone thought possible. Their Erlang servers on FreeBSD were so efficient that a single machine could handle 2 million concurrent connections. The beam.smp process (Erlang's virtual machine) would use all available cores to manage millions of lightweight processes. They scaled vertically for years before needing to think about horizontal scaling.

So vertical scaling isn't stupid. It's step one. Let's look at what it actually costs.

[Diagram: Real AWS Pricing — Vertical Scaling Cost Curve. Actual on-demand pricing for compute-optimized instances (us-east-1); axes are monthly cost ($) vs approximate capacity (requests/sec for a web API).]
  • t3.micro — 2 vCPU, 1 GB — $8.50/mo — ~200 req/sec
  • t3.large — 2 vCPU, 8 GB — $61/mo — ~1,000 req/sec
  • c5.4xlarge — 16 vCPU, 32 GB — $490/mo — ~5,000 req/sec
  • c5.24xlarge — 96 vCPU, 192 GB — $2,940/mo — ~20,000 req/sec
  • THE CEILING: u-24tb1.metal — 448 vCPU, 24 TB RAM. Nothing bigger exists on AWS.
The brutal math: t3.micro → c5.24xlarge is 346x the cost but only 100x the capacity (200 → 20,000 req/sec). Every doubling costs MORE than the last one. You can verify these prices yourself: aws.amazon.com/ec2/pricing/on-demand/

Look at that curve. Going from a t3.micro ($8.50/mo) to a t3.large ($61/mo) gives you roughly 5x the capacity for 7x the price. That's reasonable. But going from a t3.large to a c5.24xlarge ($2,940/mo) gives you 20x the capacity for 48x the price. The economics get worse at every step.

Here's the math that should scare you: 10x the cost does NOT give you 10x the capacity. It gives you maybe 4x. Because CPUs aren't 10x faster just because there are 10x more cores. Your database can't use 96 cores efficiently if every query has to lock the same row. Your application probably isn't even multi-threaded enough to use more than 8 cores — a single-threaded app can only use ONE core, so buying 96 means 95 of them sit idle, and making an app multi-threaded takes significant engineering effort. You're paying for power you literally can't use.
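You can put a number on that waste with a few lines. This uses the prices and rough req/sec figures quoted in this section (estimates, not benchmarks):

```python
# Cost per req/sec of capacity at each tier named in this section.
tiers = [
    ("t3.micro",       8.50,    200),   # ($/month, approx req/sec)
    ("t3.large",      61.00,  1_000),
    ("c5.4xlarge",   490.00,  5_000),
    ("c5.24xlarge", 2940.00, 20_000),
]
for name, monthly_cost, rps in tiers:
    print(f"{name:12s} ${monthly_cost / rps:.3f}/mo per req/sec")
```

The per-unit cost climbs at every tier, from about $0.04 per req/sec on a t3.micro to about $0.15 on a c5.24xlarge — each doubling of capacity costs more than the last.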

[Diagram: How to Tell If You Need to Scale — commands you can run right now]
  • `htop` — watch the CPU and memory bars. CPU > 80% consistently? You need more compute. Memory > 90%? You need more RAM (or you have a leak).
  • `iostat -x 1` — watch the await column. await > 10ms consistently? Disk is the bottleneck. %util near 100%? Upgrade to NVMe SSD or add caching.
  • `SELECT count(*) FROM pg_stat_activity;` — active DB connections. Near max_connections (default 100)? You need connection pooling (PgBouncer) or read replicas.

But here's the thing — vertical scaling buys you time. And time is the most valuable resource a startup has. Instagram didn't start with a perfectly horizontally-scaled architecture. They started on one PostgreSQL database and kept upgrading it. They added Redis for caching (an in-memory data store that keeps data in RAM for extremely fast reads — Instagram cached everything in it: user sessions, follower lists, popular photos; run `redis-cli INFO stats` to see ops/sec and hit rates on any Redis instance), Memcached for query caching (another in-memory cache, similar to Redis but simpler — Instagram used Memcached for simple key-value caching of database queries, Redis for richer structures like sorted sets and lists), and Nginx as a reverse proxy (a server in front of your application that handles SSL termination, static file serving, and request buffering — Nginx can serve static files 10-100x faster than your application server because it doesn't run your application code). With those additions, one beefy server carried them past 30 million users. They scaled reads first, because photo-sharing is 95% reads and 5% writes. Smart vertical scaling plus caching got them further than most people get with a complex distributed system.

It's step one. WhatsApp ran on a handful of very powerful FreeBSD servers for years. The key is knowing that it will hit a wall — and having a plan for when it does. The question isn't "should I scale vertically?" (yes, do it immediately, it's fast). The question is "what do I do when vertical scaling isn't enough anymore?"
Think First

You've upgraded to the biggest server on AWS: u-24tb1.metal — 448 vCPUs, 24 TB RAM. It handles 20,000 requests per second. But your app keeps growing and now you need 60,000 req/sec. There is literally no bigger machine to buy. What do you do?

You can't make one server bigger. What if you had THREE servers, each handling 20,000 req/sec? But then how does a user's request know which server to go to?
Section 4

Where It Breaks โ€” The Three Walls of Vertical Scaling

So you upgraded to a c5.24xlarge. 96 vCPUs, 192 GB RAM, $2,940/month. Life is good again. Your app handles 20,000 requests per second. Users are happy. You sleep at night.

Then three things happen — and no amount of money can fix any of them.

[Diagram: The Three Walls You Will Hit]
  • WALL 1 — Single Point of Failure. One server = one place to die. Server crashes at 3 AM Saturday, ALL 50,000 users go down. Recovery: 30 min to 4 hours.
  • WALL 2 — Downtime to Upgrade. To handle MORE traffic, you must take the server OFFLINE ("closed for renovations"). An AWS instance type change is a 2-5 minute outage.
  • WALL 3 — The Physical Ceiling. AWS max: u-24tb1.metal, 448 vCPU, 24 TB RAM. Nothing bigger exists. If you need more: game over.

When everything runs on one server, that server is the single point of failure (SPOF) for your entire business — the one component whose failure takes down the ENTIRE system, with no backup and no second chance. If the hard drive dies, if a kernel update goes wrong, if the power supply overheats — every single user goes down at the same time. No fallback. No failover. Just a 502 error page for everyone.

[Diagram: One Server Architecture = One Failure Away From Total Outage]
50,000 users all sending requests to one server. 3:17 AM Saturday: the OOM killer strikes (`dmesg: Out of memory: Killed process 1234 (node)`). TOTAL OUTAGE: every user sees 502 Bad Gateway, all API requests fail, database connections pile up and time out. The on-call engineer gets paged at 3:17 AM, tries to SSH in... the server is unreachable. Recovery: reboot + pray = 15-45 min.

This is not theoretical. Netflix streams video to 250+ million subscribers worldwide. They know that servers fail — not "might fail," but "will fail, regularly." So they built a tool called Chaos Monkey (part of their Simian Army suite, open-sourced on GitHub) that deliberately kills their own production servers at random — during business hours, while engineers are watching — to prove their system survives it. The idea: if your system can survive random failures when engineers are awake, it'll survive them at 3 AM too. If your architecture can't survive one server dying, it's not ready for production. Period.

You can check if your own system has this problem right now. Ask: "Is there any single server whose failure would make the entire application unavailable?" If the answer is yes, you have a SPOF. One app server, one database, one anything — that's a SPOF. Common ones: a single database server, a single API server, even a single load balancer (yes, load balancers can be SPOFs too if you only have one).

Here's a painful paradox: to make your server handle more traffic, you have to take it offline first. On AWS, changing an instance type requires stopping the instance, changing the type, and starting it again. That's 2-5 minutes of downtime — minimum. If you're on physical hardware, it means physically swapping RAM sticks or CPUs, which takes hours.

Two minutes doesn't sound bad. But do the math. If your app handles 1,200 requests per second, 2 minutes of downtime means 144,000 failed requests. If those are API calls from a mobile app, that's 144,000 error messages shown to users. If those are payment processing requests at $50 average, that's $7.2 million in transactions that couldn't process.
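That arithmetic, spelled out with the numbers from the paragraph above:

```python
# 2 minutes of downtime at 1,200 requests per second.
req_per_sec = 1_200
downtime_seconds = 2 * 60

failed_requests = req_per_sec * downtime_seconds
print(failed_requests)                        # requests that fail outright

avg_transaction_usd = 50                      # $50 average payment
print(failed_requests * avg_transaction_usd)  # dollars that couldn't process
```

144,000 failed requests and $7.2 million in blocked transactions, from two minutes of "quick maintenance."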

You schedule a "2 AM maintenance window" for 15 minutes. The instance type change takes 4 minutes (normal). But the application takes 8 minutes to warm up โ€” loading caches, re-establishing database connections, JIT-compiling hot paths. Then you discover a config mismatch on the new instance type and need to debug it. Your "15-minute window" is now 45 minutes. Customers in other time zones are wide awake and angry. Your status page says "resolved" but the error rate is still 12% because the caches haven't warmed up yet.

This is the wall that cannot be moved. The largest instance on AWS is the u-24tb1.metal: 448 vCPUs, 24 TB of RAM. That's the ceiling. If your application needs more than what that machine can provide — and any successful application eventually will — there is literally no "Add to Cart" button for something bigger. The machine doesn't exist. Physics won't allow it.

And even before you hit the physical limit, you hit diminishing returns — each additional unit of investment gives you less benefit than the previous one. Going from 4 to 8 CPUs might double your throughput, but going from 64 to 128 might only give you 30% more. A database with 96 cores might only use 16 effectively because most queries are waiting on disk I/O, not CPU compute. Your application might not be multi-threaded enough to use more than 8 cores. And the operating system itself has overhead that grows with core count — context switching (the CPU pausing one task to work on another) across hundreds of threads means the OS spends more time deciding WHAT to run instead of actually running it, eating into your gains.

[Diagram: Diminishing Returns — More Cores != More Performance. Grey = theoretical linear scaling; colored = what you actually get.]
  • 4 cores: 1,000 req/sec (baseline)
  • 16 cores: 3,400 req/sec (linear scaling would predict 4,000)
  • 64 cores: 4,500 req/sec (predicted: 16,000)
  • 128 cores: 4,800 req/sec (predicted: 32,000)
128 cores gets you 4.8x, not 32x.

Look at that. 128 cores gave you 4.8x the throughput of 4 cores — not 32x. You paid 32x more money for 4.8x more performance. At that point, you're not scaling. You're wasting.
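The text doesn't name it, but the classic model behind this flattening curve is Amdahl's law (my addition, not the author's): if some fraction of the work is inherently serial — lock contention, disk waits — extra cores stop paying off. The 20% serial fraction below is an assumption chosen to mirror the shape of the numbers above:

```python
# Amdahl's law: with serial fraction s, speedup on n cores is capped at
# 1 / (s + (1 - s) / n). s = 0.20 is an illustrative assumption.

def speedup(n_cores, serial_fraction):
    return 1 / (serial_fraction + (1 - serial_fraction) / n_cores)

for n in (4, 16, 64, 128):
    print(n, round(speedup(n, serial_fraction=0.20), 1))
```

Even with only 20% of the work serialized, 128 cores top out below 5x — the same order as the measured 4.8x.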

One c5.24xlarge (96 vCPU, 192 GB) costs $2,940/month and handles ~20,000 req/sec. Six t3.large instances (2 vCPU, 8 GB each) cost $366/month total ($61 each) and handle ~6,000 req/sec combined. But here's the real difference: when one t3.large dies, the other five keep running. When your one c5.24xlarge dies, everything dies. The cheaper option is also the more reliable option.
Think First

You have a $2,940/month server handling 20,000 req/sec. An alternative is six $61/month servers behind a load balancer handling 6,000 req/sec total. The six-server option is 8x cheaper and survives individual server failures. But it only handles 6,000 req/sec, not 20,000. How do you get to 20,000 req/sec with cheap servers?

If 6 servers give you 6,000 req/sec, how many would give you 20,000? What would that cost compared to $2,940?
Section 5

The Breakthrough โ€” Many Normal Servers Beat One Giant Server

Let's do the math from the Think First above. If one t3.large handles ~1,000 req/sec and costs $61/month, then to get 20,000 req/sec you need... 20 servers. That's 20 x $61 = $1,220/month. Compare that to the single c5.24xlarge at $2,940/month for the same throughput. The horizontal option is 2.4x cheaper. And if one of the 20 servers dies? The other 19 keep running. You lose 5% of capacity instead of 100%.
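The same math as code you can tweak for your own prices (all figures are the ones from this section):

```python
# 20 cheap servers vs one giant server, using this section's numbers.
per_server_rps = 1_000    # one t3.large, approx
per_server_cost = 61      # $/month
target_rps = 20_000

servers_needed = target_rps // per_server_rps
horizontal_cost = servers_needed * per_server_cost
vertical_cost = 2_940     # one c5.24xlarge at the same throughput

print(servers_needed, horizontal_cost)            # 20 servers, $1,220/mo
print(round(vertical_cost / horizontal_cost, 1))  # 2.4x cheaper
print(1 / servers_needed)  # fraction of capacity lost if one server dies
```

And that last line is the quiet win: losing one of twenty servers costs you 5% of capacity, not 100%.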

This is horizontal scaling — adding more servers to share the work instead of making one server bigger ("scaling out"). Each server handles a fraction of the total traffic, and there's no ceiling: you can always add more, provided your servers are stateless. It's the insight that powers every major system on the internet. Netflix, Google, Amazon, Facebook — none of them run on one big server. They all run on thousands of normal servers working together.

[Diagram: The Math That Changes Everything]
  • OPTION A — One giant server: c5.24xlarge, 96 vCPU, 192 GB RAM, ~20,000 req/sec, $2,940/mo. If it dies → 100% outage. No redundancy. No failover. Upgrade = downtime. Cost: $0.147/mo per req/sec.
  • OPTION B — 20 normal servers: $1,220/mo. If one dies → 5% capacity loss; the other 19 keep running and users don't notice. Need more? Add another $61 server. Cost: $0.061/mo per req/sec — 2.4x cheaper.

But there's a catch. If you have 20 servers, how does a user's request know which one to go to? Users don't type server-7.your-app.com. They type your-app.com. Something needs to sit in front of all those servers and direct traffic. That something is a load balancer — a server (or managed service, like an AWS Application Load Balancer) that receives ALL incoming requests and distributes them across your backend servers, using rules like round-robin (take turns) or least-connections (send to the least busy server). The user never knows which server handled them.

[Diagram: Horizontal Scaling with a Load Balancer — this is how Netflix, Google, Amazon, and every major website works]
50,000 users (20,000 req/sec) hit your-app.com, which points at the LOAD BALANCER (AWS ALB: $22/mo; round-robin / least-conn; health checks every 10s). Behind it: servers 1-20 in groups of four, each group handling 4,000 req/sec, 20,000 req/sec total — all talking to one database (still one, for now). Dead server? The LB stops sending traffic to it. Zero user impact. 20 t3.large @ $61 each = $1,220/month for 20,000 req/sec, vs $2,940/month for ONE server: 2.4x cheaper AND fault-tolerant.

A load balancer receives all incoming requests and distributes them across your servers. On AWS, this is an ALB (Application Load Balancer) — a managed Layer 7 load balancer that understands HTTP and can route on URL paths, headers, and so on. It costs about $22/month base plus small per-request charges, handles millions of requests, and runs across multiple availability zones by default, so the load balancer itself isn't a single point of failure. It distributes traffic using strategies like round-robin (take turns: request 1 to Server A, request 2 to Server B, request 3 to Server C, then back to A — works well when all servers are identical and requests take similar time) or least connections (send the next request to whichever server currently has the fewest active connections — better when some requests take longer than others, because it naturally avoids overloading slow servers). It also runs health checks every 10 seconds — say, a GET /health to each server; if a server misses several checks in a row, the LB marks it unhealthy and stops sending it traffic, resuming when it recovers. No user impact.
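Here's what those two strategies look like in miniature — a toy sketch, not AWS's actual implementation; the server names and connection counts are invented:

```python
# Round-robin vs least-connections, the two strategies described above.
from itertools import cycle

servers = ["server-1", "server-2", "server-3"]

# Round-robin: take turns, wrapping around the list forever.
round_robin = cycle(servers)
print([next(round_robin) for _ in range(5)])

# Least-connections: pick whichever server is least busy right now.
active_connections = {"server-1": 12, "server-2": 3, "server-3": 7}
least_busy = min(active_connections, key=active_connections.get)
print(least_busy)
```

Round-robin needs no bookkeeping at all; least-connections needs the balancer to track live connection counts, which is why it copes better with uneven request durations.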

But there's a critical catch: for this to work, your servers must be stateless — a stateless server doesn't remember anything about previous requests; it doesn't store your shopping cart in memory or keep your login session on its local disk, and every request carries all the information the server needs to process it (usually via a token in the HTTP header). This is the most important word in this entire page. Let me explain why.

[Diagram: Why Servers MUST Be Stateless]
STATEFUL — cart stored in server memory:
  • Request 1 → Server A: "Add iPhone to cart". Server A memory: {cart: ["iPhone"]}
  • Request 2 → Server B: "Show my cart". Server B memory: {cart: []} — EMPTY!
  • User: "WHERE IS MY IPHONE?!" The load balancer sent request 2 to a different server.
STATELESS — cart stored in Redis:
  • Request 1 → Server A: "Add iPhone to cart". Server A → Redis: SET cart:user123 ["iPhone"]
  • Request 2 → Server B: "Show my cart". Server B → Redis: GET cart:user123 → ["iPhone"]
  • User: "There it is!" Works perfectly. ANY server can handle ANY request.

A stateless server doesn't store any user-specific data in its own memory or disk. No shopping carts. No login sessions. No uploaded files stored locally. All shared state lives in an external store — a database, Redis (an in-memory store with microsecond reads, perfect for sessions, carts, and anything multiple servers must share; check its performance with `redis-cli INFO stats` — look at instantaneous_ops_per_sec and keyspace_hits vs keyspace_misses), or S3 (Amazon's object storage, for uploaded files, images, and backups — anything that shouldn't live on a specific server; about $0.023 per GB per month, accessible from any server in any region). This way, when the load balancer sends Request 1 to Server A and Request 2 to Server B, both servers can find the same user data because they're both reading from the same external store.
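To see the mechanism, here's a toy sketch of two "servers" sharing one external store. A plain dict stands in for Redis (production code would make `SET`/`GET` calls against one shared Redis instance); all names are invented for illustration:

```python
# Two stateless "servers": neither keeps the cart in its own memory,
# so it doesn't matter which one the load balancer picks per request.
shared_store = {}  # stand-in for a shared Redis instance

def add_to_cart(server_name, user_id, item):
    # Whichever server handles this, it writes to the SHARED store.
    shared_store.setdefault(f"cart:{user_id}", []).append(item)

def show_cart(server_name, user_id):
    # Any server reads the same shared store, so any server can answer.
    return shared_store.get(f"cart:{user_id}", [])

add_to_cart("server-A", "user123", "iPhone")  # request 1 lands on A
print(show_cart("server-B", "user123"))       # request 2 lands on B
```

Request 2 hits a completely different server, yet the iPhone is still in the cart — which is the whole point.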

This is exactly what Instagram did. Their Django app servers were stateless — user sessions lived in Redis, uploaded photos went to S3, cached data lived in Memcached. Any server could handle any request. That's how 5 engineers handled 30 million users: they could spin up as many app servers as they needed because every server was identical and interchangeable.

You can test how many requests your own server handles right now with Apache Bench: `ab -n 10000 -c 100 https://your-api.com/health`. This sends 10,000 requests with 100 running at the same time. Look at "Requests per second" in the output. That's your server's actual throughput. If it's 200 req/sec and you need 2,000, you need 10 servers — or one bigger server. Now you know the exact math.
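That math generalizes into a two-line capacity plan. The 70% utilization target below is my assumption — a common operational rule of thumb, not from the text:

```python
# Turn a measured throughput (from an `ab` run) into a server count.
import math

measured_rps = 200   # "Requests per second" from your ab output
target_rps = 2_000   # the load you need to survive

print(math.ceil(target_rps / measured_rps))          # servers at 100% load
print(math.ceil(target_rps / (measured_rps * 0.7)))  # with ~30% headroom
```

Ten servers running flat out, or fifteen if you want each one loafing at ~70% so a traffic spike or a dead neighbor doesn't tip the rest over.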
Think First

You've horizontally scaled your app servers to 20 instances. They're stateless โ€” sessions in Redis, files in S3. Users are happy. But now the database is the bottleneck. All 20 app servers are hammering one PostgreSQL database. pg_stat_activity shows 350 active connections on a server with max_connections = 400. What do you do?

You can't just "horizontally scale" a database the same way you scale app servers. Databases are stateful โ€” they hold actual data. But there's a trick: most apps READ data far more often than they WRITE it. What if you had copies of the database for reads?
Horizontal scaling (20 cheap servers) is 2.4x cheaper than vertical scaling (1 giant server) for the same throughput — and it's fault-tolerant. The key prerequisite is stateless servers: move sessions to Redis, files to S3, caches to Memcached. Once your servers are interchangeable, a load balancer distributes traffic across them and removes dead servers automatically.
Section 6

How It Works โ€” The Scaling Toolkit

You now know why systems break. Let's talk about the six tools that fix them — ordered from simplest to most complex. The golden rule: always start with tool #1 and only reach for the next tool when the current one runs out of headroom. Most apps never need tools 5 or 6.

SIMPLE ───────────────────────────────────────────────── COMPLEX
1 Vertical · 2 Horizontal · 3 Caching · 4 Replicas · 5 Queues · 6 Sharding

Green tools are easy — you can do them in an afternoon. Orange tools need some planning. Red tools are serious engineering projects that take weeks. Let's walk through each one.

This is the "throw money at the problem" approach, and honestly? It works great up to a point. Your server is slow, so you upgrade it. More CPU, more RAM, faster disks. Done. No code changes, no architecture changes, no new deployment pipelines.

[Diagram: BEFORE — 4 vCPU · 16 GB, 500 req/sec, $120/mo → AFTER — 96 vCPU · 192 GB · NVMe SSD, 20,000 req/sec (40× more!), $2,940/mo (24× more). Still a single point of failure.]

When it's great: Early stage, simple stack, small team. You're making $5K/month and have 2 engineers. Don't build a distributed system โ€” just upgrade the server.

When it fails: When you hit the ceiling (Section 4's three walls). The biggest AWS instance is ~448 vCPUs. Beyond that, money can't help. Also, if your server dies, everything dies. There's no backup.

Instead of one beefy server, use many cheap ones. Section 5 covered this in depth — the key prerequisite is stateless servers (no user-specific data stored locally; sessions, files, and caches live in external services). If your server stores anything locally (user sessions, uploaded files, cached data), you can't just add more servers, because each one would have different data.

Netflix runs roughly 100,000 EC2 instances. Not because one instance can't handle anything — but because 100,000 cheap instances give them fault tolerance, geographic distribution, and unlimited headroom. If 50 instances die at 3 AM, the load balancer routes around them. Nobody wakes up.

Before you horizontally scale, make sure these are TRUE:
✅ Sessions stored in Redis/Memcached (not server memory)
✅ Uploaded files stored in S3/GCS (not local disk)
✅ Caches in a shared layer (not in-process)
✅ Config from environment variables (not local files)
If any are FALSE, fix that first. Horizontal scaling with stateful servers is a nightmare.

Here's a number that changes everything: reading from RAM is about 100,000 times faster than reading from a spinning hard drive, and about 1,000 times faster than an SSD. If your database query takes 50ms, the same data from a cacheAn in-memory data store used to speed up reads. Redis and Memcached are the two most popular ones. takes 0.5ms. That's the difference between a snappy app and a sluggish one.

The idea is simple: the first time someone asks for data, you fetch it from the database (slow). Before returning it, you also save a copy in RedisRedis is an in-memory key-value store. Think of it as a giant dictionary/hashmap that lives in RAM — blazingly fast to read and write. (fast RAM). The next time anyone asks for the same data, you grab it from Redis and skip the database entirely.

[Diagram: cache lookup flow — the app server ① checks Redis (0.5ms reads); ② HIT → return immediately (0.5ms, done!); ③ MISS → query the database (50ms); ④ store the result in the cache for next time.]

A well-tuned cache typically achieves a 95% hit rate — meaning 95 out of 100 requests never touch the database. Think about what that means: if your DB was handling 10,000 queries/sec and struggling, caching drops it to 500 queries/sec. That might be enough to delay scaling your database for years.

Real example: CricBuzz (live cricket scores). During an India vs. Pakistan match, millions of people refresh the score every few seconds. The score changes maybe once per 30 seconds. Without caching, that's millions of identical DB queries. With caching, the score is fetched from the DB once, stored in Redis with a 5-second TTLTime-To-Live: how long a cached value stays valid before being automatically deleted. After TTL expires, the next request triggers a fresh database query., and every subsequent request gets the cached version.

There's a famous joke: "There are only two hard things in computer science — cache invalidation and naming things." When the underlying data changes, your cache might still serve the old version. Strategies include TTL (expire after X seconds), write-through (update cache on every write), and event-based invalidation. There's no perfect answer — each has trade-offs.

Here's a fact about most web apps: they read data far more often than they write it. Instagram's ratio is roughly 1,000 reads for every 1 write. Think about it — when you open Instagram, you see a feed (read), view stories (read), check comments (read), view profiles (read). Occasionally you post a photo (write) or leave a comment (write). The reads absolutely dominate.

A read replicaAn exact copy of the primary database that stays synchronized. It accepts read queries but rejects writes — all writes go to the primary, which then copies the changes to replicas. exploits this imbalance. You keep one primary database that handles all writes. Then you create copies (replicas) that handle reads. Since reads are 99% of your traffic, you've just spread 99% of your load across multiple servers.

[Diagram: read replica architecture — app servers send writes (1% of traffic) to the primary and reads (99%) to replicas 1–3; the primary copies changes to the replicas via async replication.]

Replication is not instant. There's a delay (usually milliseconds, but sometimes seconds) between when data is written to the primary and when it appears on replicas. This means: a user posts a comment, immediately refreshes the page, and the comment isn't there yet — because the read hit a replica that hasn't received the write yet. The fix? Read-after-write consistency: for the user who just wrote data, route their next read to the primary. Everyone else reads from replicas.
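Read-after-write consistency boils down to a small routing rule. A sketch, assuming a fixed grace window longer than your worst observed replication lag (the 5-second window and the function names here are illustrative):

```python
import time

REPLICATION_GRACE_SECONDS = 5.0   # assumed worst-case replica lag
_last_write_at = {}               # user_id -> time of their latest write

def record_write(user_id):
    """Call this whenever a user writes data through the primary."""
    _last_write_at[user_id] = time.monotonic()

def choose_pool(user_id):
    """Route reads: users who just wrote go to the primary so they
    see their own data; everyone else reads from a replica."""
    wrote_at = _last_write_at.get(user_id)
    if wrote_at is not None and time.monotonic() - wrote_at < REPLICATION_GRACE_SECONDS:
        return "primary"
    return "replica"
```

In a real app, record_write would hang off your ORM's write path, and the user's last-write timestamp would live in their session rather than in process memory.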

Not everything needs to happen right now. When a user signs up, do they need their welcome email to arrive before the signup page finishes loading? No. When someone uploads a profile photo, does the thumbnail need to be generated before the upload response? No. When a manager requests a monthly report, does it need to render in the HTTP response? Absolutely not.

An async queueA message queue lets your app server drop a task onto a list and move on. A separate background worker picks up the task later and processes it. The user doesn't wait for the heavy work. lets you separate "acknowledge the request" from "do the work." The user gets an instant response ("Your photo is being processed!") while the actual work happens in the background.

โŒ Synchronous (User Waits) User Server Heavy Work Request 8 sec! User waited 8 sec ๐Ÿ˜ค โœ… Asynchronous (User Free) User Server Queue + Worker Request Enqueue 50ms! User is free Worker handles it

Common queue use cases: sending emails, generating PDFs, resizing images, processing payments, syncing data to third-party services, building search indexes, running analytics. Basically, anything that takes more than a second and doesn't need to be in the HTTP response.
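The shape of the pattern is the same regardless of broker. A self-contained sketch using Python's stdlib queue and a background thread in place of RabbitMQ/SQS plus a worker process (the email-sending is faked):

```python
import queue
import threading

task_queue = queue.Queue()
sent = []  # stands in for the real side effect (emails actually sent)

def worker():
    """Background worker: pulls tasks off the queue and does the slow work."""
    while True:
        email = task_queue.get()
        if email is None:            # sentinel for shutdown
            break
        sent.append(f"welcome mail -> {email}")  # pretend this takes seconds
        task_queue.task_done()

def handle_signup(email):
    """The request handler: enqueue the slow work and return immediately."""
    task_queue.put(email)
    return "202 Accepted"            # user waits milliseconds, not seconds

threading.Thread(target=worker, daemon=True).start()
```

The handler's response time no longer depends on how long the work takes; the queue absorbs spikes and the worker drains them at its own pace.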

Popular tools: RabbitMQRabbitMQ is a message broker. You publish messages to it, and consumers pick them up. Great for task queues, event-driven architecture, and decoupling services., AWS SQSAmazon Simple Queue Service. A fully managed message queue โ€” you don't run any servers. Pay per message. Great if you're already on AWS., and Apache KafkaOriginally built as a distributed log for LinkedIn. Kafka handles millions of messages per second and retains them for days. Overkill for simple task queues, but perfect for event streaming at massive scale. (for really high throughput).

You've added caching. You've added read replicas. Your database is still struggling because you have so much data that writes are the bottleneck, or the dataset simply doesn't fit on one machine anymore. This is when you reach for shardingSplitting a single database into multiple smaller databases, each holding a portion of the data. Each piece is called a shard. — the most powerful and most dangerous tool in the toolkit.

The concept is simple: instead of one database with 1 billion rows, you have 4 databases with 250 million rows each. You decide which database holds which data based on a shard keyThe rule that decides which shard a piece of data goes to. Common choices: hash of user_id, geographic region, or first letter of username..

[Diagram: sharding by user ID (first letter) — a routing layer sends Alice, Bob, Charlie, Maria… to Shard 1 (A–M) and Nancy, Omar, Patricia, Zara… to Shard 2 (N–Z).]

⚠ Watch out: cross-shard JOINs are painful; rebalancing shards risks downtime; uneven data creates the "hot shard" problem; auto-increment IDs break.

Do this LAST. Seriously. Once you shard your database, almost every query becomes harder. Want to find all orders across all users? You now need to query every shard and merge the results. Want to JOIN users with orders? Better hope they're on the same shard. Want to add a new shard? You need to migrate data while the system is live. Companies like Pinterest, Uber, and Discord have entire teams dedicated to their sharding infrastructure. Don't do this until you absolutely must.
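A hash-based shard key (usually a better choice than first-letter ranges, which skew badly toward common letters) takes only a few lines. A sketch; the shard count and the MD5 choice are illustrative, and note that changing NUM_SHARDS remaps almost every key, which is why real systems reach for consistent hashing:

```python
import hashlib

NUM_SHARDS = 4

def shard_for(user_id):
    """Map a user_id to a shard. A stable hash (not Python's salted
    built-in hash()) is required so every app server agrees on the
    routing, across processes and restarts."""
    digest = hashlib.md5(str(user_id).encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS
```

Every query for a given user then goes to connection pool number shard_for(user_id); anything that spans users has to fan out to all shards.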
Think First

Your e-commerce app has 50,000 daily active users. The homepage takes 4 seconds to load. Your monitoring shows the database is at 90% CPU, with 80% of queries being identical product catalog reads. Which tool from the toolkit would you reach for first, and why?

The answer is NOT horizontal scaling or sharding. 80% of queries are identical reads. There's a tool that's designed exactly for this situation — and it requires zero changes to your database.
Six tools, ordered by complexity: (1) Vertical scaling — just upgrade the machine, great for early stage. (2) Horizontal scaling — add more stateless servers behind a load balancer. (3) Caching — store hot data in RAM (Redis/Memcached) for 1,000x faster reads and ~95% hit rates. (4) Read replicas — copy your database for reads, keep one primary for writes, exploiting the 1,000:1 read-to-write ratio. (5) Async queues — move heavy work (emails, image processing, reports) to background workers so users don't wait. (6) Sharding — split your data across multiple databases when nothing else works. Always start at #1 and work your way down.
Section 7

Going Deeper โ€” The Math Behind Scaling

You don't need a math degree to scale systems, but knowing three formulas will make you dangerous in design interviews and capacity planning. These aren't academic exercises — they're the reason Netflix knows exactly how many servers to provision on a Friday night, and why your database connection pool is set to 20 instead of 2,000.

Amdahl's Law — The Speed Limit of Parallelism

Imagine you're cooking dinner. Chopping vegetables takes 20 minutes, and boiling them takes 10 minutes. You invite 3 friends to help. Can they chop 4x faster? Sure — each person chops a quarter of the vegetables, so chopping drops from 20 minutes to 5 minutes. But the boiling still takes 10 minutes no matter how many friends you have. You can't parallelize boiling.

That's Amdahl's LawA formula that tells you the maximum theoretical speedup you can achieve by adding more processors/servers. The sequential (non-parallelizable) portion of your work sets an upper bound. in a nutshell: the sequential (non-parallelizable) part of your work limits how much speedup you can get from adding more resources.

The formula: Max Speedup = 1 / (S + (1 - S) / N) where S is the fraction of work that's sequential, and N is the number of parallel workers (servers).
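The formula is trivial to play with in code; a quick sketch:

```python
def max_speedup(sequential_fraction, workers):
    """Amdahl's Law: speedup = 1 / (S + (1 - S) / N), where S is the
    sequential fraction of the work and N is the number of workers."""
    s = sequential_fraction
    return 1.0 / (s + (1.0 - s) / workers)

# With no sequential work, N workers give exactly an N-fold speedup;
# with 5% sequential work, even 10,000 workers stay just under 20x.
```

Plug in your own profiling numbers: even a small sequential fraction dominates as N grows.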

[Chart: Amdahl's Law, speedup vs. number of servers (1 to 1024 and beyond) — 0% sequential scales perfectly; 5% sequential caps at 20×; 25% at 4×; 50% at 2×. Even 5% sequential code caps you at 20×.]

The punchline: if just 5% of your code must run sequentially (a global lock, a single-threaded database write, a shared counter), then no matter how many servers you add — 100, 1,000, 10,000 — you'll never get more than a 20x speedup. This is why database locks, global mutexes, and single-threaded bottlenecks are such a big deal. They cap your entire system's scalability.

What counts as sequential? Database transactions with exclusive locks, writing to a single log file, incrementing a global counter, leader election in distributed systems, any section of code protected by a mutex. If only one thread or server can do it at a time, it's sequential.

Little's Law — The Capacity Formula

Walk into any coffee shop during the morning rush. There are 12 people inside (some ordering, some waiting, some getting their drinks). New customers arrive at 4 per minute. Each customer spends an average of 3 minutes in the shop. Notice anything? 12 = 4 × 3. That's Little's LawL = λ × W. A universal law that says: the average number of items in a system equals the arrival rate multiplied by the average time each item spends in the system..

L = λ × W

[Diagram: Little's Law worked example — λ = 2,000 req/sec, W = 50ms average response time, so L = 2,000 × 0.05 = 100 concurrent connections.]

Why this matters for you: If your server handles 2,000 requests per second with a 50ms average response time, you need 100 concurrent connections at any given moment (2,000 × 0.05 = 100). If your database connection pool only has 20 connections, it becomes the bottleneck — requests will queue up waiting for a free connection, and your response time shoots up. Set the pool to 100+, and the bottleneck moves elsewhere.
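The same calculation in code; a one-liner worth keeping around for capacity planning:

```python
def concurrency(arrival_rate, avg_time):
    """Little's Law: L = lambda * W. Use any consistent time units
    (requests/sec with seconds, or customers/min with minutes)."""
    return arrival_rate * avg_time

# 2,000 req/sec at 50ms each -> 100 requests in flight at any moment,
# so the connection pool needs at least 100 connections.
```

It also runs the coffee shop: 4 customers/minute staying 3 minutes each means 12 people inside.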

Connection Pooling — Why 100 Connections Serve 10,000 Users

This confuses a lot of people: "We have 10,000 active users but only 100 database connections. Won't users be stuck waiting?" No — because most users aren't hitting the database at the same time. A user loads a page (1 query, 10ms), reads it for 30 seconds, clicks something (1 more query, 10ms), reads for another minute. Each user touches the database for milliseconds out of every minute.

Using Little's Law: if 10,000 users make an average of 1 request every 30 seconds, that's ~333 requests/sec. Each request uses a DB connection for 10ms. So you need 333 × 0.01 = 3.3 concurrent connections. A pool of 100 is overkill! The math shows why: connections are so fast to use and release that a small pool serves a huge number of users.

PostgreSQL spawns one process per connection (~10 MB RAM each). At 100 connections, that's 1 GB just for connection overhead. Tools like PgBouncerA lightweight connection pooler for PostgreSQL. It sits between your app and the database, multiplexing thousands of app connections into a small number of actual database connections. let your app open 5,000 connections while PgBouncer maps them to just 50 real database connections. Result: your app thinks it has plenty of connections, and your database isn't overwhelmed.
Think First

Your API has 50ms average response time and handles 2,000 requests per second. How many concurrent connections do you need? If you add caching that brings 80% of requests down to 2ms response time (cache hits), how does the concurrency requirement change?

Apply Little's Law twice: once for the current state (L = 2,000 × 0.05), and once for the cached state. For the cached state, think of it as two streams: 80% of requests at 2ms and 20% at 50ms. Add their concurrency needs together.
Three formulas every engineer should know: (1) Amdahl's Law — if a fraction S of your work is sequential, the max speedup is 1/S regardless of server count (5% sequential = 20x ceiling). (2) Little's Law — L = λ × W, concurrent connections = arrival rate × response time (2,000 req/s × 50ms = 100 concurrent connections needed). (3) Connection pooling math — because connections are held for milliseconds, a pool of 100 can serve 10,000+ users. Use PgBouncer to multiplex app connections into fewer DB connections.
Section 8

The Variations — Types of Scaling

Not all bottlenecks are the same. "We need to scale" is meaningless without answering "scale WHAT?" Your system has different layers — reading data, writing data, computation, and storage — and each one scales differently. Identifying the actual bottleneck is the first step; reaching for the wrong tool wastes months of engineering.

[Flowchart: what's the bottleneck? Step 1: run metrics (htop, pg_stat, CloudWatch). Step 2: identify the bottleneck. Step 3: pick the right tool — reads slow → cache + read replicas (Redis, Memcached, CDN, PostgreSQL read replicas); writes slow → sharding + async queues (Kafka, SQS, write-behind, hash-based sharding); CPU maxed → horizontal app servers (auto-scaling groups, load balancer + stateless); storage full → tiered storage (S3 cheap, EBS fast, archive to Glacier).]

Problem: Too Many Reads

This is the most common bottleneck. Your database CPU is at 90%, and when you check the query logs, it's the same SELECT queries over and over. Users loading feeds, viewing profiles, browsing products — all reads.

Solution stack (in order):

  1. Application-level caching — cache API responses in Redis. If your homepage calls 5 endpoints and each takes 40ms, caching brings total load time from 200ms to 10ms.
  2. CDN for static content — images, CSS, JS served from edge servers near the user. CDNsContent Delivery Network: a geographically distributed network of servers that cache and serve static files from locations near the user. CloudFront, Cloudflare, and Fastly are popular CDNs. like CloudFront handle billions of requests without touching your servers.
  3. Read replicas — when caching isn't enough because data changes too frequently. Route read queries to replica databases.

Real example: Instagram uses Memcached extensively. Their user profile data is cached, so when 100,000 people visit @natgeo's profile in a minute, only the first request hits the database. The other 99,999 get the cached version.

Problem: Too Many Writes

This is harder than read scaling because writes can't be cached (you need the data to actually persist). If your database is bottlenecked on INSERT and UPDATE operations, you need different tools.

Solution stack (in order):

  1. Async queues — if writes don't need to be instant, buffer them. Write to a queue first, then batch-insert into the database. This smooths out traffic spikes.
  2. Write-behind caching — write to Redis first (fast), then periodically flush to the database in batches. Great for analytics counters, view counts, like counts.
  3. Sharding — split the database by a shard key so writes are distributed across multiple DB servers. This is the nuclear option.

Real example: Twitter writes ~500 million tweets per day. They use a combination of async processing (tweets go into a queue first) and sharding (tweets are distributed across many database clusters by user ID).

Problem: CPU Is the Bottleneck

Your database is fine. Your cache hit rate is great. But your app servers are maxing out CPU because the business logic is computationally heavy — image processing, recommendation algorithms, search ranking, data transformations.

Solution stack:

  1. Horizontal app servers — add more instances behind a load balancer. Since app servers are stateless, this is the easiest scaling move.
  2. Auto-scaling — configure your cloud provider to automatically add/remove servers based on CPU utilization. AWS Auto Scaling Groups spin up new EC2 instances when CPU exceeds 70% and terminate them when it drops below 30%.
  3. Offload to specialized services — image resizing? Use a dedicated image processing service (or Lambda functions). Search? Use Elasticsearch instead of doing it in your app code.
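The policy logic behind auto-scaling (option 2 above) is just a threshold rule evaluated on a timer. A sketch; the 70%/30% thresholds mirror the example in the text, while the min/max bounds are illustrative, not AWS defaults:

```python
def desired_instances(current, avg_cpu_percent,
                      scale_up_above=70, scale_down_below=30,
                      min_instances=2, max_instances=20):
    """One evaluation tick of a simple threshold policy: add a server
    when the fleet runs hot, drop one when it idles, stay in bounds."""
    if avg_cpu_percent > scale_up_above:
        return min(current + 1, max_instances)
    if avg_cpu_percent < scale_down_below:
        return max(current - 1, min_instances)
    return current
```

The gap between the two thresholds matters: scaling up above 70% but only down below 30% prevents the fleet from flapping when load hovers near a single threshold.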

Problem: Running Out of Disk Space

Your database has grown to 5 TB. Queries are slow because indexes don't fit in RAM anymore. Backups take 6 hours. This isn't about read or write speed — it's about the sheer volume of data.

Solution stack:

  1. Tiered storage — not all data is equally important. Recent data (last 90 days) is "hot" and lives on fast SSDs. Older data is "warm" and moves to cheaper storage. Ancient data is "cold" and goes to S3 GlacierAmazon S3 Glacier is an archival storage service. Extremely cheap (~$3.60/TB/month) but retrieval takes minutes to hours. Perfect for data you rarely access but must keep for compliance or legal reasons. at ~$3.60/TB/month.
  2. Data partitioning — split tables by time (orders_2024, orders_2025). Queries only scan relevant partitions.
  3. Compression — columnar formats like ParquetApache Parquet is a columnar storage format. Instead of storing rows together, it stores columns together. This makes analytical queries much faster because you only read the columns you need, and similar data compresses better. compress data 5-10x compared to raw row storage.
Instagram's architecture (serving 2 billion users) uses ALL four scaling types simultaneously: read scaling — Memcached for profiles/feeds, CDN for images. Write scaling — Cassandra for direct messages (sharded by conversation ID), async queues for feed fanout. Compute scaling — hundreds of Django app servers behind a load balancer. Storage scaling — S3 for photos/videos (petabytes), tiered PostgreSQL for metadata.

Four types of scaling, each with different tools: (1) Read scaling — caching (Redis, Memcached, CDN) and read replicas handle the 99% of traffic that's reads. (2) Write scaling — async queues buffer writes, write-behind caching batches them, sharding distributes them. (3) Compute scaling — horizontal app servers with auto-scaling for CPU-heavy workloads. (4) Storage scaling — tiered storage (hot/warm/cold), time-based partitioning, columnar compression. Always identify the bottleneck first with metrics before choosing a tool.
Section 9

At Scale โ€” Real-World Numbers

Theory is great, but what does scaling actually look like at real companies? This section maps user counts to architecture tiers — so when an interviewer says "design for 10 million users," you know exactly which tools to reach for and roughly what it costs.

[Diagram: the scale ladder, users → architecture — 1K users: monolith (Django/Rails on a single VPS), 1 server + 1 DB, ~$50/mo. 10K users: vertical + cache (Redis + bigger RDS instance), ~$300/mo. 100K users: horizontal (ALB + 3 app servers + read replica + CDN), ~$2K/mo. 1M users: distributed (CDN + sharded DB + Kafka + microservices), ~$15K/mo. 100M+ users: global scale (multi-region + custom infra + dedicated teams), $500K+/mo. Most startups that fail never get past tier 2 — don't build tier 5 architecture for tier 1 traffic, or you'll go bankrupt before you get users.]

Real Company Architectures

Netflix serves 250 million subscribers across 190 countries. During peak hours (evening in any timezone), they account for roughly 15% of global internet traffic. Here's how they handle it:

  • Compute: ~100,000 EC2 instances across multiple AWS regions. Each microserviceA small, independent service that does one thing well. Instead of one big application, Netflix has hundreds of small services: one for recommendations, one for user profiles, one for playback, etc. auto-scales independently.
  • Caching: EVCache (their custom Memcached layer) handles millions of requests/sec. Most API calls never touch a database.
  • Storage: Cassandra for user data (distributed, no single point of failure). S3 for video files.
  • CDN: Open Connect — Netflix's own CDN. They put physical servers inside ISP data centers so video streams travel shorter distances.
  • Resilience: Chaos MonkeyA tool Netflix built that randomly kills production servers to make sure the system can handle failures. If your service can't survive random server deaths, Chaos Monkey will expose that. randomly kills production servers to ensure the system survives failures.

Estimated AWS bill: Netflix reportedly spends $400-500 million per year on AWS.

Uber processes millions of trips daily across 10,000+ cities. Their challenge is unique: ride-matching is extremely latency-sensitive (you need a driver in seconds, not minutes) and geographically localized (a ride in Mumbai doesn't care about servers in Virginia).

  • Geo-sharding: Data is sharded by city/region. Tokyo rides are processed by servers in Tokyo. This keeps latency under 100ms for driver matching.
  • Polyglot services: Different languages for different jobs — Go for high-throughput services, Java for business logic, Python for data science, Node.js for real-time features.
  • Real-time layer: A custom dispatch system matches riders to drivers using geospatial indexes that update every second as drivers move.
  • Database mix: MySQL (sharded by city), Cassandra (trip logs), Redis (real-time state), Elasticsearch (search).

Discord handles 150 million monthly active users, with some servers having millions of members. The challenge: real-time messaging at massive scale with presence indicators (who's online) updating constantly.

  • Real-time: ElixirA functional programming language built on the Erlang VM (BEAM). Designed for massive concurrency — a single Elixir server can handle millions of simultaneous WebSocket connections because each connection is a lightweight process. on the Erlang VM handles WebSocket connections. A single Elixir node can manage millions of concurrent connections thanks to Erlang's lightweight processes.
  • Message storage: Originally Cassandra, migrated to ScyllaDB (2023) for billions of messages. Messages are partitioned by channel_id and time bucket, so loading a channel's recent messages only reads from one partition.
  • Presence system: Tracking which of your friends is online across millions of users. They built a custom system that propagates presence updates through a gossip protocol.
  • Challenge faced: When a server with 1 million+ members loads, the "members" list query was crushing Cassandra. They solved it with aggressive caching and lazy loading.

AWS Cost Estimates by Tier

These are rough estimates for a typical web application (API + database + cache + storage). Actual costs vary wildly based on your specific workload, region, and reserved instance usage.

Tier    | Users    | Architecture                                                  | Monthly Cost
Hobby   | 1-1K     | 1 t3.small + RDS db.t3.micro                                  | $30-80
Startup | 1K-10K   | 1 t3.large + RDS db.r5.large + ElastiCache                    | $200-500
Growth  | 10K-100K | ALB + 3 c5.xlarge + RDS Multi-AZ + 2 replicas + Redis cluster | $1,500-3,000
Scale   | 100K-1M  | Multi-AZ + ASG (5-20 nodes) + sharded RDS + Kafka + CDN       | $10K-30K
Massive | 1M-100M  | Multi-region + 100+ nodes + dedicated infra teams             | $100K-1M+
When you say "we need 10 servers" in an interview, follow up with a cost estimate: "10 c5.xlarge instances at ~$150/month each = $1,500/month for compute." This shows you think about real-world constraints, not just whiteboard architecture. Interviewers love candidates who can reason about cost.

Architecture scales in tiers: 1K users needs 1 server ($50/mo), 100K needs horizontal scaling with replicas ($2K/mo), 1M+ needs sharding and queues ($15K/mo), 100M+ needs global multi-region ($500K+/mo). Netflix runs 100K+ EC2 instances with Chaos Monkey and a custom CDN. Uber geo-shards by city for sub-100ms ride matching. Discord uses Elixir for millions of concurrent WebSocket connections and ScyllaDB for message storage. Most startups never need more than tier 2 — don't over-engineer.
Section 10

Anti-Lesson — When NOT to Scale

This might be the most important section on the entire page. Every engineer has a natural instinct to build for scale. It feels responsible. It feels professional. It feels like you're planning ahead. But here's the uncomfortable truth: premature scaling has killed more startups than traffic ever did.

There's a famous quote from Donald Knuth: "Premature optimization is the root of all evil." The scaling version is: premature scaling is the root of all bankruptcy. You don't need Kubernetes when you have 50 users. You don't need sharding when your database is 2 GB. You don't need Kafka when you send 100 emails a day.

[Flowchart: do you actually NEED to scale? More than 1,000 DAU? No → focus on product. Yes → response time over 2 seconds? No → optimize code first. Yes → CPU/DB over 80% utilization? No → fix slow queries first. Yes → NOW you scale, using Section 6's toolkit.]

The Startup Graveyard

There's a pattern that repeats endlessly in tech: a founding team spends 6 months building an elaborate microservices architecture with Kubernetes, Kafka, and a sharded database. They launch. They get 200 users. They run out of money. The irony? A single $20/month VPS could have handled those 200 users easily — and the team could have spent those 6 months talking to customers and building features people actually want.

Companies that burned through millions building infrastructure they didn't need yet all follow the same pattern: hire expensive DevOps engineers before you have product-market fit, build for "millions of users" when you have hundreds, solve scaling problems that don't exist while ignoring product problems that do. The infrastructure still works perfectly — but the company is dead.

The right approach: start with a monolith. Deploy to a single server. Measure everything (response times, CPU, memory, error rates). When metrics PROVE you have a bottleneck, fix it with the simplest tool from Section 6's toolkit. Repeat. This is how Instagram scaled to 30 million users with 5 engineers — they scaled reactively based on data, not proactively based on fear.
Think First

You're the CTO of a startup. You have 500 daily active users and $200K in funding. Your co-founder wants to move to microservices "so we're ready to scale." Your single Django server on a $40/month VPS handles current traffic with 5% CPU utilization. What do you do?

The server is at 5% CPU. Even if traffic grows 10x (5,000 users), you'd be at 50% CPU — still comfortable on the same $40 server. The $200K should go toward finding more users, not building infrastructure for users you don't have. Politely say no to microservices and show the CPU graphs.
Premature scaling kills more startups than traffic does. Before scaling, check: (1) Do you have >1,000 daily users? (2) Is response time >2 seconds? (3) Is CPU/DB utilization >80%? (4) Have you already optimized the obvious things (indexes, caching, slow queries)? If any answer is "no," don't scale — fix the simpler problem first. Start with a monolith, measure everything, and scale reactively based on metrics, not proactively based on fear. Instagram proved this works: 30 million users, 5 engineers, reactive scaling.
Section 11

CDN & Geographic Distribution — Putting Your Servers Next to Your Users

Everything we've covered so far — caching, replicas, load balancers, queues — assumes your servers live in one place. That's fine if all your users live nearby. But the moment you have users on different continents, physics becomes your biggest bottleneck. Not CPU, not RAM, not disk — the speed of light.

Think of it like a library system. Your town has one big library downtown with every book ever published. It works great if you live nearby — pop in, grab a book, done. But what if you live 30 miles away? Every visit takes an hour round-trip. Now multiply that by thousands of people driving from all over the county. The solution is obvious: open branch libraries in each neighborhood. Stock them with copies of the most popular books. 95% of the time, people find what they need at their local branch and never drive downtown.

That's exactly what a CDNContent Delivery Network — a global network of servers (called "edge locations" or "PoPs") that store cached copies of your static content (images, CSS, JS, videos) close to users. Major CDNs: CloudFront (AWS), Cloudflare, Akamai, Fastly. does. It takes your static files — images, CSS, JavaScript, fonts, videos — and copies them to servers all over the world. When a user in Tokyo requests your homepage, instead of their browser traveling 10,000 km to your server in Virginia, it grabs the files from an edge server right there in Tokyo.

The math that makes CDNs essential

Light travels through fiber optic cable at roughly 200 km per millisecond (about two-thirds the speed of light in vacuum, because the signal bounces around inside the glass). That sounds incredibly fast, but the Earth is incredibly big:

New York → London: 5,500 km ÷ 200 km/ms = 28 ms one way, 56 ms round-trip
New York → Tokyo: 10,800 km ÷ 200 km/ms = 54 ms one way, 108 ms round-trip
New York → Sydney: 16,000 km ÷ 200 km/ms = 80 ms one way, 160 ms round-trip

And that's the theoretical minimum. Real-world latency is 2-3x worse because of routing hops, congestion, and protocol overhead. A realistic NY → Tokyo request takes 200-300 ms. A typical web page loads 50-100 resources. If each resource needs a round-trip... that's 10-30 seconds of waiting.
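The distance arithmetic above is just division; a quick sketch, using the ~200 km/ms fiber figure from the text:

```python
FIBER_KM_PER_MS = 200  # rough signal speed in fiber (~2/3 the speed of light)

def round_trip_ms(distance_km):
    """Theoretical minimum round-trip time over fiber. Real-world
    latency is 2-3x worse due to routing hops and congestion."""
    return 2 * distance_km / FIBER_KM_PER_MS
```

Plug in the great-circle distance to your farthest users before deciding whether a single region is acceptable.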

With a CDN edge server in Tokyo, those same resources load in <5 ms. The user doesn't notice. Without a CDN, users on the other side of the world see your site as painfully slow — and they leave. Google found that a 100 ms increase in latency costs them 1% of ad revenue. Amazon found that every 100 ms of latency costs them 1% in sales. At their scale, that's hundreds of millions of dollars.

[Diagram: CDN architecture, origin + edge locations — the origin server in US-East (Virginia) holds your app, database, and all files; edge caches sit in US-West (Oregon), Europe (Frankfurt), Asia (Tokyo), Australia (Sydney), and S. America (São Paulo). Without a CDN, all users hit the origin in Virginia: Tokyo 200 ms, Sydney 280 ms, Berlin 120 ms. With a CDN, users hit the nearest edge: Tokyo 4 ms, Sydney 6 ms, Berlin 8 ms — 30-70x faster.]

Pull-based vs Push-based CDNs

There are two ways to get your content onto edge servers, and each is suited to different situations:

Pull-based (lazy loading) — This is the most common approach and what services like Cloudflare and CloudFront use by default. You don't upload anything to the CDN. Instead, the first time a user in Tokyo requests /images/logo.png, the Tokyo edge server says "I don't have that yet," fetches it from your origin server, serves it to the user, and keeps a cached copy. The next user in Tokyo gets the cached version instantly. The downside? That very first user experiences the full origin latency — they're the "cache-warming" request.

Push-based (eager loading) — You proactively upload files to the CDN before anyone requests them. This is how video platforms like Netflix work. Netflix pre-positions popular content on edge servers in every region before users press play. No one waits for a cache-warm. The downside? You pay for storage on every edge server even if nobody in that region ever requests the file. Push makes sense for large, predictable content (videos, software downloads); pull makes sense for everything else.
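The pull-based flow fits in a dozen lines. This is an illustrative toy (the EdgeCache class and the fake origin function are invented here, not a real CDN API), but it shows the core idea: the first request pays the origin round-trip, every later request is a local hit.

```python
class EdgeCache:
    """Toy pull-through cache: fetch from origin on miss, then serve locally."""
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # function: path -> bytes
        self._store = {}                  # path -> cached body

    def get(self, path):
        if path not in self._store:                 # miss: first user pays
            self._store[path] = self._fetch(path)   # full origin latency
        return self._store[path]                    # hit: served from the edge

origin_hits = []
def origin(path):                         # stand-in for the Virginia origin
    origin_hits.append(path)
    return b"contents of " + path.encode()

tokyo = EdgeCache(origin)
tokyo.get("/images/logo.png")             # cache-warming request, hits origin
tokyo.get("/images/logo.png")             # served from the Tokyo edge
print(len(origin_hits))                   # 1: origin was contacted only once
```
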

What belongs on a CDN (and what doesn't)

Belongs on a CDN:

  • Images (product photos, avatars, banners) — the single biggest bandwidth consumer on most sites
  • CSS & JavaScript bundles — your compiled app.js and styles.css. Version them with hash filenames (app.a3f2c1.js) for aggressive caching.
  • Fonts — web fonts like Inter or Roboto are static and identical for every user
  • Videos & large downloads — Netflix serves 15% of global internet traffic, almost entirely through CDN edges
  • Static HTML pages — blog posts, marketing pages, documentation (this page you're reading is CDN-cached)

Does not belong on a CDN:

  • API responses with user-specific data — your /api/me/orders returns different data for every user, so caching it would show one user's orders to another
  • Real-time data — stock prices, live sports scores, chat messages need to be fresh every second
  • Authenticated endpoints — anything behind a login should go to your origin, not an edge cache
  • Dynamically generated content that changes per request — search results, personalized recommendations, A/B test variants

Real-world CDN pricing

CDNs are surprisingly cheap for what they do. Here's what the major providers charge:

Cloudflare — Free tier includes unlimited bandwidth for static sites. Yes, free. Millions of requests. Their business model is upselling WAF, DDoS protection, and Workers to paying customers.
AWS CloudFront — ~$0.085/GB for the first 10 TB/month to North America. Free tier: 1 TB transfer + 10 million requests per month for the first year.
Akamai — Enterprise pricing (negotiated per contract). Handles roughly 30% of all global web traffic. If you're asking the price, it's probably not for you yet.
Fastly — ~$0.12/GB. Popular with developers for its real-time purge (invalidate cached content in ~150 ms worldwide).

For reference: if your site serves 1 million page views/month at 2 MB average page weight, that's ~2 TB/month. On CloudFront, that's about $170/month. On Cloudflare's free tier, it's $0.
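That estimate is plain rate-times-volume arithmetic; a sketch using the section's numbers (1 GB taken as 1,000 MB for round figures):

```python
def monthly_cdn_cost(page_views, avg_page_mb, usd_per_gb):
    """Estimated egress cost: page views x page weight x per-GB rate."""
    gigabytes = page_views * avg_page_mb / 1_000  # MB -> GB (decimal)
    return gigabytes * usd_per_gb

# 1M page views/month at 2 MB/page on CloudFront's ~$0.085/GB NA rate:
print(f"${monthly_cdn_cost(1_000_000, 2, 0.085):.0f}/month")  # $170/month
print(f"${monthly_cdn_cost(1_000_000, 2, 0.0):.0f}/month")    # $0 on a free tier
```
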

Multi-region deployment — when even your API needs to be close

CDNs solve the static content problem. But what about your API servers and databases? If a user in Frankfurt hits your API in Virginia to load their dashboard, every single API call takes 120+ ms round-trip. For a dashboard that makes 15 API calls to load, that's almost 2 seconds of pure network latency — before your code even runs.

The solution: deploy your application in multiple regions. Run a full copy of your app + database in US-East and in EU-West. Route European users to EU-West, American users to US-East. This is called multi-region deploymentRunning your full application stack (app servers + database) in two or more geographic regions. Dramatically reduces latency for global users but introduces the hard problem of keeping data in sync across regions., and it comes in two flavors:

Active-passive — One region (the "primary") handles all writes. The other region (the "secondary") is a read-only copy that stays in sync. If the primary goes down, you "fail over" to the secondary and promote it. Simple, but users in the passive region still send writes to the far-away primary. Most companies start here.

Active-active — Both regions handle reads and writes. Users in any region get full functionality from their nearest servers. This is the dream, but it introduces the hardest problem in distributed systems: conflict resolutionWhen two users in different regions modify the same data simultaneously, which write "wins"? Strategies include last-write-wins (simple but can lose data), merge functions (complex but preserves both writes), and CRDTs (mathematically proven to converge, but limited data types).. What happens when a user in New York and a user in London both edit the same record at the same millisecond? You need a strategy, and every strategy has trade-offs.

[Diagram: multi-region active-active architecture. US-East (Virginia) and EU-West (Frankfurt) each run a load balancer, two app servers, PostgreSQL, and a Redis cache; both regions take reads and writes, with cross-region replication between the databases and Route 53 / GeoDNS routing each user to the nearest region.]

Most companies don't need multi-region until they have a truly global user base or strict uptime SLAs. The complexity of keeping databases in sync across continents is enormous. Start with a CDN for static content — that gives you 80% of the latency benefit with 5% of the complexity. Only go multi-region when you've exhausted every other option and your users are genuinely suffering from API latency across continents.
Think First

Your SaaS app has 80% of users in the US and 20% in Europe. Your API is in US-East. European users complain about slow page loads (~1.8 seconds vs ~400ms for US users). Your site serves 3 MB of static assets per page. What's your cheapest, simplest first move — and what would you do if the problem persists after that?

The 3 MB of static assets is the low-hanging fruit. Put them on a CDN and European users load static content from Frankfurt instead of Virginia — that alone could drop page load from 1.8s to 800ms. If the remaining latency is still unacceptable (because the API calls themselves take 120ms each), then consider adding a read replica in EU and routing European API reads to it. Full multi-region is the last resort, not the first.
A CDN copies your static files (images, CSS, JS, videos) to edge servers around the world, cutting latency from 200+ ms to under 10 ms for global users. Pull-based CDNs (Cloudflare, CloudFront) fetch and cache on first request; push-based CDNs pre-position content. Cloudflare's free tier handles millions of requests; CloudFront costs ~$0.085/GB. For API latency, multi-region deployment runs your full stack in multiple locations — active-passive (simple, one region handles writes) or active-active (both write, but conflict resolution is hard). Start with a CDN for static content — it gives 80% of the benefit with 5% of the complexity.
Section 12

Auto-Scaling — Let the Machines Decide

So far we've talked about scaling as something you do. You notice the server is struggling, you add more servers, you configure the load balancer. But what happens at 3 AM on Black Friday when traffic spikes 10x in 20 minutes and you're asleep? What about a viral tweet that sends 500,000 people to your site in the next hour?

You can't predict this. And you definitely can't be awake 24/7 watching dashboards. The answer is auto-scalingA cloud feature that automatically adds or removes servers based on real-time metrics like CPU usage, request count, or custom metrics. Available in AWS (Auto Scaling Groups), GCP (Managed Instance Groups), Azure (VM Scale Sets), and Kubernetes (Horizontal Pod Autoscaler). — you define the rules ("if average CPU exceeds 70%, add 2 servers"), and the cloud platform does the rest. When traffic drops, it removes the extra servers so you stop paying for them.

[Diagram: auto-scaling over 24 hours. Requests/sec climb from a nighttime low to a peak of 12,000 req/s in the afternoon and fall again by midnight; the instance count steps from 2 at night up to 12 at peak and back down.]

The instance count follows the traffic curve like a staircase. As traffic ramps up in the morning, auto-scaling adds servers in steps. At peak, you might have 12 instances running. By midnight, traffic drops and the system scales back to just 2. You only pay for what you use — no more wasting money on idle servers "just in case."

How auto-scaling works (the three ingredients)

Every auto-scaling system โ€” whether it's AWS, Google Cloud, Azure, or Kubernetes โ€” needs the same three things:

1. A launch template — When the auto-scaler decides to add a server, it needs to know what kind of server to create. This is called a launch templateIn AWS, a launch template defines everything about the server to create: the machine image (AMI), instance type (e.g., t3.large), security groups, SSH key, and any startup scripts. In Kubernetes, the Pod spec + container image serve the same purpose. (AWS) or a Pod spec (Kubernetes). It specifies the machine image, instance size, startup scripts — everything needed so the new server comes up ready to serve traffic without human intervention.

2. Scaling policies — This is the brain of auto-scaling. A scaling policy says "when metric X crosses threshold Y, add Z servers." You can base this on CPU usage, memory, request count, queue depth, or any custom metric you define. The art is choosing the right metric and threshold — too aggressive and you waste money, too conservative and users see errors before new servers arrive.

3. Min/max boundaries — You always set a minimum and maximum. The minimum (e.g., 2) ensures you always have at least some servers running even at 4 AM. The maximum (e.g., 50) prevents a traffic spike or bug from spinning up thousands of servers and bankrupting you. There's also a "desired" count — the number of servers the auto-scaler is currently targeting.

Scaling policies deep dive

Not all scaling policies are created equal. Here are the four main types, from simplest to most sophisticated:

Target tracking — The simplest and most common. You say: "Keep average CPU at 60%." The auto-scaler does all the math. If CPU is at 85%, it adds servers until it drops to 60%. If CPU is at 30%, it removes servers until it rises to 60%. It's like a thermostat for your infrastructure.

TargetTrackingConfiguration:
  TargetValue: 60.0                                # Keep CPU at 60%
  PredefinedMetricSpecification:
    PredefinedMetricType: ASGAverageCPUUtilization
  DisableScaleIn: false                            # Allow removing servers too

Why 60% and not 80%? Because you need headroom. If you target 80% and traffic spikes suddenly, your servers are already near max while new ones are still booting. Targeting 60% gives you a 20% buffer to absorb spikes while new capacity comes online.

Step scaling — More control, more rules. You define multiple thresholds: "If CPU is 60-75%, add 1 server. If CPU is 75-90%, add 3 servers. If CPU is above 90%, add 5 servers." This reacts proportionally to the severity of the load — a gentle increase gets a gentle response, a sudden spike gets an aggressive response.

StepAdjustments:
  - MetricIntervalLowerBound: 0      # CPU 60-75%
    MetricIntervalUpperBound: 15
    ScalingAdjustment: 1             # Add 1 instance
  - MetricIntervalLowerBound: 15     # CPU 75-90%
    MetricIntervalUpperBound: 30
    ScalingAdjustment: 3             # Add 3 instances
  - MetricIntervalLowerBound: 30     # CPU > 90%
    ScalingAdjustment: 5             # Add 5 instances

Scheduled scaling — For predictable patterns. If you know traffic spikes every Friday at 5 PM (end-of-week reports), every Monday at 9 AM (everyone logging in), or every Black Friday, you can pre-schedule capacity. The auto-scaler adds servers before the spike arrives.

ScheduledActions:
  - ScheduledActionName: "friday-evening-spike"
    Recurrence: "0 17 * * 5"    # Every Friday at 5pm
    MinSize: 8                  # Ensure 8 minimum
    DesiredCapacity: 12         # Start with 12
  - ScheduledActionName: "weekend-low"
    Recurrence: "0 22 * * 5"    # Friday at 10pm
    MinSize: 2                  # Scale back down
    DesiredCapacity: 2

Predictive scaling — Machine learning does the work. AWS analyzes your last 14 days of traffic patterns and uses ML to predict tomorrow's traffic. It pre-provisions servers before you need them, so there's zero cold-start delay. This is genuinely impressive — it learns your weekly and daily patterns, handles gradual trends, and even detects that your traffic is 3x higher on the first of the month (payday). Generally available in AWS EC2 Auto Scaling since 2021.

When to use it: if your traffic has a repeating pattern (most apps do — low at night, high during business hours). It doesn't help with truly random spikes like viral tweets.

[Diagram: target tracking vs step scaling. Under target tracking, the instance count follows the traffic curve smoothly; under step scaling, it jumps in defined staircase steps.]

The cold start problem

Here's the gotcha everyone discovers the hard way: new servers don't appear instantly. When the auto-scaler decides to add a server, it has to:

  1. Provision the virtual machine — 30-90 seconds on AWS (faster with warm pools)
  2. Boot the OS and start the application — 15-60 seconds depending on your app
  3. Pass the health check — the load balancer won't send traffic until the server reports healthy (10-30 seconds)
  4. Warm caches — the first requests to a cold server are slower because in-memory caches are empty

Total time from "auto-scaler decides" to "server is handling real traffic": 1-3 minutes. If traffic spikes from 1,000 req/s to 10,000 req/s in 30 seconds, your existing servers are drowning for at least a full minute before help arrives.

Four ways to soften the cold start:

  • Warm pools — Keep pre-initialized instances in a "stopped" state. Starting a stopped instance takes ~15 seconds vs. 90 seconds for a new one. You pay a tiny fee for the stopped EBS volume but save critical time during spikes.
  • Minimum spare capacity — Set your desired count above what you currently need. If you need 4 servers, run 6. Those 2 extras handle surprise spikes while new servers boot.
  • Predictive scaling — Let ML analyze your traffic and pre-provision servers before the spike arrives. Works beautifully for daily/weekly patterns.
  • Over-provision slightly — Target 50% CPU instead of 70%. Costs 40% more in servers but gives you a massive buffer. Often cheaper than the revenue lost during a 2-minute outage.

Cooldown periods — preventing the flap

Without cooldown, here's what happens: CPU hits 80%, auto-scaler adds 3 servers. The new servers absorb traffic and CPU drops to 40%. The auto-scaler immediately thinks "40% is too low, let me remove servers." It removes 2, CPU spikes back to 80%, it adds again... and you get a constant cycle of adding and removing servers — wasting money on instance-hours and destabilizing your application. This is called flappingThe auto-scaler rapidly oscillating between scaling up and scaling down, constantly adding and removing servers. Wastes money (you pay for partial hours), causes instability (connections drop when servers are removed), and masks the real capacity you need. Fixed with cooldown periods..

The fix is a cooldown period — after any scaling action, the auto-scaler waits a set time (typically 5 minutes) before taking another action. This gives the new servers time to fully absorb traffic and stabilize metrics. AWS default cooldown is 300 seconds. For most workloads, that's about right.
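The cooldown logic itself is small enough to sketch. The thresholds below (add 2 above 70% CPU, remove 1 below 30%) are invented for illustration; in practice they come from your scaling policy configuration:

```python
import time

class AutoScaler:
    """Toy scaling loop with a cooldown to prevent flapping."""
    def __init__(self, cooldown_s=300, min_n=2, max_n=50):
        self.cooldown_s, self.min_n, self.max_n = cooldown_s, min_n, max_n
        self.desired = min_n
        self.last_action = float("-inf")   # no scaling action yet

    def observe(self, cpu_pct, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_action < self.cooldown_s:
            return self.desired            # still cooling down: do nothing
        if cpu_pct > 70 and self.desired < self.max_n:
            self.desired = min(self.desired + 2, self.max_n)
            self.last_action = now
        elif cpu_pct < 30 and self.desired > self.min_n:
            self.desired = max(self.desired - 1, self.min_n)
            self.last_action = now
        return self.desired

asg = AutoScaler(cooldown_s=300)
print(asg.observe(85, now=0))     # 4: scaled up by 2
print(asg.observe(25, now=60))    # 4: inside cooldown, low CPU ignored
print(asg.observe(25, now=400))   # 3: cooldown elapsed, gentle scale-down
```

Without the cooldown check, the second observation would immediately undo the first, which is exactly the flapping cycle described above.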

If you're running containers on Kubernetes, the Horizontal Pod AutoscalerThe Kubernetes equivalent of AWS Auto Scaling Groups. Watches metrics (CPU, memory, or custom metrics), and adjusts the number of Pod replicas in a Deployment or StatefulSet. Default check interval: 15 seconds. Default stabilization window: 5 minutes for scale-down. (HPA) does the same thing. It monitors CPU/memory/custom metrics and adjusts the replica count of your Deployment. The difference is just packaging — instead of spinning up VMs, it adds Pods (containers). The concepts of target tracking, cooldown (called a "stabilization window" in K8s), and min/max are identical.
Think First

Your e-commerce site normally runs 4 servers. On Black Friday, traffic will be 20x normal. You know Black Friday is in 3 weeks. Which scaling policy (or combination) would you use, and what would you set the minimum/maximum to?

Use scheduled scaling to pre-provision servers the morning of Black Friday — set minimum to 20 instances by 6 AM. Layer target tracking on top (target CPU 50%, not 70% — Black Friday is not the day to be stingy) so if traffic exceeds even your generous pre-provision, the system adds more. Set maximum to 80 to cap costs. After the weekend, the scheduled policy drops minimum back to 4 and target tracking handles the gradual wind-down.
Auto-scaling automatically adds servers when load is high and removes them when it drops — matching capacity to demand without human intervention. It needs three things: a launch template (what server to create), scaling policies (when to scale), and min/max boundaries (cost guardrails). Target tracking ("keep CPU at 60%") is the simplest policy. Step scaling gives proportional response. Scheduled scaling handles predictable spikes. Predictive scaling uses ML to pre-provision. Watch out for cold starts (1-3 minutes to boot new servers) and flapping (rapidly adding/removing — fixed with 5-minute cooldown periods).
Section 13

Database Scaling Deep Dive — Where Every System Actually Bottlenecks

Here's a truth that experienced engineers learn the hard way: every scaling story eventually becomes a database scaling story. Your app servers are stateless — they don't remember anything between requests. That's why they're so easy to scale: just add more of them behind a load balancer. Done.

Your database is the opposite. It holds all the state โ€” every user account, every order, every product, every relationship. You can't just "add another database" because the data has to be consistent. If a user updates their email address, that change needs to be visible on every query, from every server, immediately. This coordination is what makes databases the bottleneck at scale.

The good news: there's a clear ladder of techniques, ordered from simplest to most complex. Always start at the bottom and climb only when you've exhausted the current rung. Most apps never need to go past rung 4.

[Diagram: the database scaling ladder. Start at the bottom; only climb when you've exhausted the current rung.]

  1. Optimize queries & add indexes — any size; do this FIRST (easy)
  2. Connection pooling — ~50K+ users
  3. Vertically scale the DB (more RAM) — ~100K+ users
  4. Read replicas — ~500K+ users
  5. Table partitioning — ~2M+ users
  6. Archive old data — ~5M+ users
  7. Sharding — ~10M+ users (hard)

Rung 1: Optimize queries and add indexes

Before you spend a single dollar on bigger hardware or fancier architecture, look at your queries. In most applications, 5 slow queries cause 80% of database load. Finding and fixing them is free and takes hours, not weeks.

The tool is EXPLAIN ANALYZE. Run it before any slow query and the database shows you its execution plan — exactly how it's finding your data. The most common finding? A sequential scanThe database reads every single row in the table to find matching rows. Like searching for a name by reading every page of a phone book from start to finish. Fine for tiny tables, catastrophically slow for large ones. The fix is almost always adding an index. on a large table. That means the database is reading every single row — millions of them — to find the 3 rows you need. The fix is an indexA data structure (usually a B+ tree) that lets the database jump directly to matching rows instead of scanning the whole table. Like the index at the back of a textbook — instead of reading every page, you look up the topic in the index and jump to the right page. Adding an index can turn a 2-second query into a 2-millisecond query..

The other classic finding is the N+1 query problem. Your code loads 50 orders, then for each order makes a separate query to get the customer name. That's 1 query for orders + 50 queries for customers = 51 queries. With a JOIN, it's 1 query. This pattern hides in every ORM. Run pg_stat_statements to find queries that execute thousands of times per minute — they're almost always N+1s.
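Here is the pattern made concrete, with a fake query counter standing in for a real database driver (all names invented for illustration):

```python
queries = []
def run(sql):
    """Fake DB driver: records every query, returns 50 dummy rows."""
    queries.append(sql)
    return [{"id": i, "customer_id": i} for i in range(50)]

# N+1: one query for the orders, then one more per order for its customer
orders = run("SELECT * FROM orders LIMIT 50")
for o in orders:
    run(f"SELECT name FROM customers WHERE id = {o['customer_id']}")
print(len(queries))   # 51 round-trips to the database

# The JOIN version: everything in a single round-trip
queries.clear()
run("SELECT o.*, c.name FROM orders o "
    "JOIN customers c ON c.id = o.customer_id LIMIT 50")
print(len(queries))   # 1 round-trip
```

Each of those 51 round-trips pays network latency and parsing overhead, which is why the N+1 version can be 50x slower even when every individual query is fast.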

Rung 2: Connection pooling

Every time your app server opens a database connection, there's overhead: TCP handshake, TLS negotiation, authentication, memory allocation. A single PostgreSQL connection uses about 5-10 MB of RAM on the database server. With 100 connections, that's 500 MB to 1 GB of RAM just for connections — before any query runs.

If you have 20 app servers each opening 10 connections, that's 200 connections. But here's the thing — most connections are idle most of the time. A request hits the app server, grabs a connection, runs a query (2 ms), and the connection sits idle until the next request. Thanks to Little's LawA fundamental queuing theory formula: L = λW, where L is the average number of items in the system, λ is arrival rate, and W is average service time. For databases: if you process 1,000 queries/sec and each takes 2 ms, you only need 1,000 × 0.002 = 2 active connections. The other 198 are idle. (if you process 1,000 queries/sec and each takes 2ms, you only need 2 active connections), a connection pool of 20-30 connections can serve thousands of requests per second.
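Little's Law makes the sizing arithmetic one line. A sketch using the section's numbers (the 5,000 q/s example is added for illustration):

```python
def active_connections(queries_per_sec, avg_query_ms):
    """Little's Law: L = arrival rate x time in system."""
    return queries_per_sec * (avg_query_ms / 1_000)

print(active_connections(1_000, 2))   # 2.0: matches the example above
print(active_connections(5_000, 10))  # 50.0: size the pool ~60 for headroom
```
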

Tools like PgBouncerA lightweight connection pooler for PostgreSQL. Sits between your app servers and PostgreSQL, maintaining a small pool of real database connections and multiplexing thousands of application connections through them. Uses ~2 KB of memory per connection vs. PostgreSQL's 5-10 MB. Can handle 10,000+ application connections with just 20-30 real database connections. (for PostgreSQL) and ProxySQL (for MySQL) sit between your app servers and the database. Your app servers open connections to the pooler (cheap, lightweight). The pooler maintains a small set of real database connections and multiplexes traffic through them. 10,000 app connections → 30 real database connections. Problem solved.

Rung 3: Vertical scale the database

More RAM on a database server doesn't just "make things faster" in some vague way — it has a very specific, measurable effect. Databases keep frequently accessed data in memory using a buffer pool (shared_buffers in PostgreSQL, innodb_buffer_pool_size in MySQL). When a query needs data that's already in the buffer pool, it reads from RAM (~100 nanoseconds). When the data isn't in the buffer pool, it reads from disk (~5 milliseconds for HDD, ~0.1 ms for SSD). That's a 50,000x difference for HDD or a 1,000x difference for SSD.

The key concept is the working setThe portion of your total data that's accessed frequently — typically recent orders, active users, popular products, and hot metadata tables like sessions and settings. If your database has 500 GB total but only 20 GB is accessed in a typical hour, your working set is 20 GB. If your buffer pool is bigger than 20 GB, almost every query hits RAM. If it's smaller, you get constant disk reads. — the subset of your total data that's accessed frequently. Your database might hold 500 GB total, but only 20 GB is "hot" (recent orders, active user profiles, product catalog). If your buffer pool fits the working set, almost every query is a RAM read. Performance is fantastic. If the working set grows beyond your buffer pool, you fall off a cliff.

[Diagram: the working-set cliff. When a 20 GB working set fits in a 32 GB buffer pool: ~95% cache hit rate, ~2 ms average query, rare disk reads — fast. When the working set grows to 60 GB and overflows the same 32 GB pool: ~53% hit rate, ~45 ms average query (22x slower), constant disk thrashing — slow.]

This is why upgrading a database from 32 GB to 64 GB of RAM can feel like magic. You didn't change any code, but suddenly everything is 10-20x faster because the working set that was spilling to disk now fits entirely in memory. Check your buffer pool hit rate: SELECT blks_hit::float / (blks_hit + blks_read) AS ratio FROM pg_stat_database WHERE datname = 'your_db';. If it's below 95%, you probably need more RAM.

Rung 4: Read replicas

Most applications read data far more than they write it. Think about it: when you browse Amazon, every page load reads dozens of products, reviews, and recommendations. But you only write when you place an order or leave a review. The read-to-write ratio for most apps is between 80:20 and 99:1.

Read replicasCopies of your primary database that stay in sync (via streaming replication) and handle read-only queries. The primary handles all writes; replicas only handle reads. You can have 5, 10, or even 50 replicas, each handling a slice of read traffic. This multiplies your read throughput without touching the primary. exploit this asymmetry. You keep one primary database for all writes, and create copies (replicas) that handle read traffic. Each replica stays in sync with the primary via streaming replicationThe primary database continuously streams its write-ahead log (WAL) to replicas. Each write on the primary is replayed on every replica within milliseconds (async) or before the write is confirmed (sync). Async replication has ~10-100 ms lag but doesn't slow down writes. Sync replication has zero lag but every write waits for all replicas to confirm — slower. — every write on the primary is immediately shipped to replicas and replayed.

The one tricky part is replication lagThe delay between when a write happens on the primary and when it's visible on replicas. With async replication, lag is typically 10-100 ms but can spike to seconds under heavy write load. This means: a user updates their profile, then immediately reads it — and sees the old data because the replica hasn't caught up yet. The fix is "read-after-write consistency" — for the specific user who just wrote, route their reads to the primary temporarily.. With asynchronous replication (the default and most common), there's a tiny delay — typically 10-100 ms — between a write on the primary and its appearance on replicas. This means a user might update their email, immediately refresh the page, and still see the old email because their read hit a replica that hasn't caught up yet. The solution is called read-after-write consistency: for the specific user who just wrote something, route their reads to the primary for the next few seconds. Everyone else continues reading from replicas.
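Read-after-write routing is straightforward to sketch. The 5-second window and the function names here are illustrative, not a library API:

```python
import time

RECENT_WRITE_WINDOW_S = 5.0   # assumed to comfortably exceed replication lag
last_write_at = {}            # user_id -> timestamp of their latest write

def record_write(user_id, now=None):
    last_write_at[user_id] = time.monotonic() if now is None else now

def pick_read_target(user_id, now=None):
    """Primary for users who just wrote; replicas for everyone else."""
    now = time.monotonic() if now is None else now
    if now - last_write_at.get(user_id, float("-inf")) < RECENT_WRITE_WINDOW_S:
        return "primary"      # guarantee they see their own write
    return "replica"          # replication lag is acceptable here

record_write("alice", now=100.0)
print(pick_read_target("alice", now=101.0))   # primary: she just wrote
print(pick_read_target("alice", now=110.0))   # replica: lag has caught up
print(pick_read_target("bob", now=101.0))     # replica: he never wrote
```
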

Rung 5: Table partitioning

Your orders table has 200 million rows spanning 5 years. When a user views their recent orders, the database could theoretically scan all 200 million rows. Even with an index, the index itself is huge and slow to traverse. But you know that 99% of queries are for orders from the last 3 months — only about 10 million rows.

Table partitioningSplitting one logical table into multiple physical tables (called "partitions"), usually by date range, region, or ID range. Queries automatically get routed to only the relevant partitions — so scanning "orders from March 2025" only touches the March 2025 partition, not the entire 200-million-row table. Same database, same queries, dramatically faster. splits that one big table into smaller physical chunks. Instead of one orders table with 200 million rows, you have orders_2024_01, orders_2024_02, ... orders_2025_03. When someone queries orders from March 2025, PostgreSQL only scans the March 2025 partition — 4 million rows instead of 200 million. Same query, same application code, 50x less data to scan.

[Diagram: table partitioning — same query, less data to scan. Before: one orders table with 200 million rows (80 GB) and a full table scan. After partitioning by month: the query scans only the March 2025 partition (4M rows, 1.6 GB). The SQL is unchanged — SELECT * FROM orders WHERE created_at > '2025-03-01' — and PostgreSQL automatically routes to the right partition; your app doesn't know or care.]

Rung 6: Archive old data

This is the forgotten technique that gives you the most bang for zero architectural complexity. Ask yourself: does a user really need to see their order from 3 years ago on the same page as today's orders? Probably not. Move old rows to an orders_archive table (or export to S3/cold storage). Your "hot" orders table stays small, indexes stay compact, queries stay fast. If a user wants their 2022 orders, you fetch from the archive — it's slower, but it's a rare request.

A mid-size SaaS company had a 120 GB events table with 4 years of data. They archived everything older than 6 months to S3 (using pg_dump with date filters). The table dropped to 18 GB. Backup time went from 45 minutes to 6 minutes. Query p99 latency dropped from 800 ms to 50 ms. Index rebuild (during VACUUM FULL) went from 2 hours to 12 minutes. Zero code changes.

Rung 7: Sharding — the last resort

After you've optimized queries, added connection pooling, vertically scaled, added read replicas, partitioned tables, and archived old data... and you still can't keep up — welcome to sharding. This is the technique that Instagram, Uber, Pinterest, and Shopify all eventually needed.

ShardingSplitting your data across multiple independent database servers. Each server (shard) holds a subset of the data. User IDs 1-1M go to Shard 1, 1M-2M go to Shard 2, etc. Each shard is a complete database that handles reads and writes for its subset. Queries that need data from multiple shards are expensive (cross-shard queries) — this is why choosing the right shard key is critical. means splitting your data across multiple independent database servers. Unlike read replicas (which are copies of the same data), each shard holds a different subset of data. User IDs 1-1M on Shard 1, 1M-2M on Shard 2, and so on. Each shard handles reads and writes for its subset.

The critical decision is the shard key — the field you split on. Shard by user_id? Great for "show me my orders" (all on one shard), terrible for "show me all orders in the last hour" (have to query every shard). Shard by created_at? Recent data is hot and overloads one shard while old shards sit idle. There is no perfect shard key — every choice is a trade-off. The general advice: shard on the field that appears in the WHERE clause of 80%+ of your queries.
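A shard router by user_id fits in a few lines. This sketch assumes 4 shards and simple hash-mod placement (real systems often use ranges or consistent hashing so they can reshard without moving every row):

```python
import hashlib

N_SHARDS = 4

def shard_for(user_id):
    """Deterministically map a user_id to one of N_SHARDS databases."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % N_SHARDS

# "Show me my orders": one user's rows always live on a single shard.
print(shard_for(42) == shard_for(42))   # True: routing is stable

# "All orders in the last hour": the shard key is not in the WHERE clause,
# so the query must fan out to every shard -- the expensive case.
print(len({shard_for(uid) for uid in range(1_000)}))  # 4: all shards hit
```

The fan-out case is the concrete cost of a bad shard-key fit: one logical query becomes N physical queries plus a merge step in your application.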

Sharding is incredibly hard to undo. Once you've split data across 16 database servers, every query needs a routing layer to find the right shard. JOINs across shards are expensive or impossible. Transactions across shards require two-phase commit (slow and fragile). Resharding (adding or removing shards) means migrating billions of rows with zero downtime. This is why the entire ladder exists — exhaust every simpler option first.
Think First

Your PostgreSQL database has a 90 GB events table with 800 million rows. Your database server has 64 GB of RAM with shared_buffers set to 16 GB. Queries that used to take 20 ms are now taking 2 seconds. The buffer pool hit rate is 72%. Where are you on the ladder, and what's your next move?

The 72% buffer pool hit rate tells you the working set has exceeded your buffer pool. Before doing anything complex, try two things: (1) Increase shared_buffers to 24 GB or 32 GB — you have 64 GB of RAM, so there's headroom (Rung 3). (2) If the table has 5 years of data, archive rows older than 12 months (Rung 6). This shrinks the working set itself, which is even better than adding RAM. You'd likely go from 72% hit rate to 97%+ and queries would drop back to 20 ms. No replicas, no partitioning, no sharding needed yet.
The database is always the bottleneck because it's stateful — you can't just add more without coordination. Scale it using the 7-rung ladder (always start at the bottom): (1) Optimize queries and add indexes — free and often gives 10-100x improvement. (2) Connection pooling (PgBouncer/ProxySQL) — 30 real connections serve 10,000 app connections via Little's Law. (3) Vertical scale — more RAM means the working set fits in memory; queries go from disk speed to RAM speed (1,000-50,000x faster). (4) Read replicas — copies for read traffic, primary for writes; watch for replication lag. (5) Table partitioning — split big tables by date/ID; same queries, 50x less data scanned. (6) Archive old data — move cold rows to archive tables or S3; shrinks the hot table dramatically. (7) Sharding — split data across independent databases; powerful but very hard to undo. Most apps never need to go past rung 4.
Section 14

Monitoring & Knowing When to Scale

Here's a question that separates beginners from experienced engineers: how do you know it's time to scale? If your answer is "when the site feels slow" or "when users complain," you're already too late. By the time users notice, you've been losing revenue for hours. The real answer: you know because your metrics told you three days ago that this was coming.

There's a golden rule we've repeated throughout this page: never scale based on a hunch. Every scaling decision should be backed by a number — a CPU percentage, a latency percentile, a connection count, a queue depth. This section teaches you exactly what to measure, how to measure it, and how to set up alerts so you scale before things break.

Imagine two engineers. Engineer A sees "the site is slow" and immediately spins up 5 more servers. Engineer B checks the dashboard, sees the database is at 92% CPU with 3ms average query time ballooning to 300ms, identifies a missing index on the orders table, adds it in 2 minutes, and the problem disappears — zero new servers, zero extra cost. Monitoring is the difference between throwing money at problems and actually solving them.

The Four Golden Signals

Google's SRE (Site Reliability Engineering) team — the people who literally wrote the book on running production systems, available free online — distilled decades of operational experience into four metrics that tell you everything you need to know about a system's health. If you only monitor four things, make it these four.

[Diagram: The Four Golden Signals — your system's vital signs. 1. Latency: how long requests take; track percentiles, not averages (p50: 45ms, p95: 120ms, p99: 850ms — a p99 of 850ms means 1 in 100 users waits almost a second). 2. Traffic: requests per second, total and per-endpoint; a sudden spike means a viral event or a DDoS attack. 3. Errors: failure rate — 5xx is your problem, 4xx is the client's; 0.3% is healthy, 5.2% is an alert. Alert when error rate > 1%, investigate immediately at > 5%; at 1M requests/day, 1% errors = 10,000 angry users. 4. Saturation: resource fullness, the leading indicator — scale before 100% (CPU 68%, memory 82%, disk I/O 30%, DB connections 93%: memory at 82% is a warning, DB connections at 93% means act NOW).]

Let's break each one down with the why that makes them matter:

If your average response time is 100ms, that sounds great. But averages hide the truth. Imagine 99 requests take 50ms and 1 request takes 5,000ms. The average is ~100ms — but that one user waited 5 full seconds. At 1 million requests per day, that's 10,000 users getting a terrible experience.

That's why you track percentiles — the latency that X% of requests are faster than:

  • p50 (median) — half your requests are faster than this. Your "typical" experience.
  • p95 — 95% of requests are faster. This catches most problem cases.
  • p99 — 99% of requests are faster. This is where tail latency lives — the small percentage of requests that take much longer than average, often caused by garbage collection pauses, cache misses, or database lock contention. If p99 is 2 seconds, 1 in every 100 users is having a bad time.

Why p99 matters so much: At 1 million requests per day, p99 = 2 seconds means 10,000 requests per day are painfully slow. Those aren't random nobodies — they're often your most engaged users making complex queries (heavy shopping carts, long search histories, big dashboards). The users you most want to keep are the ones most likely to hit tail latency.
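To see how an average hides the tail, here is a small self-contained sketch. The latency numbers are made up for illustration (1.5% of requests are slow), and the percentile function uses the simple nearest-rank method:

```python
# Illustrative workload: 985 fast requests (50 ms) and 15 slow ones (5,000 ms).
latencies_ms = [50] * 985 + [5000] * 15

def percentile(values, pct):
    """Nearest-rank percentile: the value that pct% of requests are at or below."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

avg = sum(latencies_ms) / len(latencies_ms)
print(f"average: {avg:.2f} ms")                    # 124.25 ms -- looks tolerable
print(f"p50: {percentile(latencies_ms, 50)} ms")   # 50 ms
print(f"p99: {percentile(latencies_ms, 99)} ms")   # 5000 ms -- the hidden pain
```

The average suggests a healthy system; the p99 reveals that 1 in 100 users waits five seconds.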

Traffic is simply how many requests your system handles per second. Track it two ways: total (across all endpoints) and per-endpoint (to spot which features are hot). A sudden traffic spike means one of two things: you went viral (great!) or you're under a DDoS attack — a Distributed Denial of Service, where thousands of machines flood your server with requests that look like a traffic spike but are malicious, not from real users (not great). Monitoring tells you which.

Track traffic trends over weeks, not just real-time. If your traffic grows 15% month-over-month, you can predict when you'll hit capacity limits and scale proactively instead of scrambling at 3 AM.
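That prediction is just compound growth. A sketch with assumed numbers (200 req/sec at peak today, a fleet that tops out at 1,000 req/sec — both hypothetical):

```python
import math

current_rps = 200      # assumed peak traffic today
capacity_rps = 1000    # assumed ceiling of the current fleet
growth = 0.15          # 15% month-over-month growth, as in the text

# Solve current * (1 + growth)^m = capacity for m (months of runway)
months_left = math.log(capacity_rps / current_rps) / math.log(1 + growth)
print(f"~{months_left:.1f} months until peak traffic hits capacity")  # ~11.5
```

With that number on a graph, the 3 AM scramble becomes a calendar entry.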

Error rate is your "something is on fire" indicator. The key distinction:

  • 5xx errors (500, 502, 503, 504) — these are YOUR fault. The server crashed, timed out, or ran out of resources. Alert immediately.
  • 4xx errors (400, 401, 403, 404) — these are usually the client's fault. Bad request, wrong URL, missing auth token. Still worth monitoring for spikes (could indicate a broken client deployment).

A healthy system runs under 0.1% error rate. At 0.5%, investigate. At 1%, you have a problem. At 5%, you have an outage — even if the site is technically "up," 1 in 20 users is hitting errors.

Saturation is the only leading indicator of the four. Latency, traffic, and errors tell you what's happening right now. Saturation tells you what's about to happen. If your database connections are at 93% of the maximum, you don't have a problem yet — but you're one traffic spike away from FATAL: too many connections.

What to watch:

  • CPU utilization — sustained above 80% means you need more compute
  • Memory usage — above 85% means you're risking OOM kills
  • Disk I/O wait — above 10ms means your storage is the bottleneck
  • Database connections — above 80% of max means add PgBouncer or more replicas
  • Queue depth — growing faster than it shrinks means your workers can't keep up

What to Measure at Each Layer

Different parts of your system need different metrics. Here's a practical checklist — these are the metrics you'd configure in a real monitoring setup:

Application servers:

  • CPU utilization — sustained > 80%? Add servers or optimize hot paths
  • Memory usage — growing steadily? You might have a memory leak
  • Request latency (p50, p95, p99) — per endpoint, not just overall
  • Error rate — 5xx errors grouped by endpoint and error type
  • Active connections — how many concurrent requests are in-flight
  • Garbage collection pauses — in languages like Java or Go, long GC pauses cause latency spikes

Database:

  • Query latency — slow query log (anything over 100ms deserves attention)
  • Connections used vs max — if active_connections / max_connections > 80%, add pooling
  • Cache hit ratio — PostgreSQL's buffer cache should hit > 99%. Below that, you need more RAM.
  • Replication lag — how far behind are read replicas? > 1 second is concerning for most apps
  • Disk I/O wait — if the database spends time waiting for disk, it's bottlenecked on storage
  • Sequential scans vs index scans — lots of seq scans = missing indexes

Cache (Redis):

  • Hit rate — below 90% means your cache isn't helping much. Aim for 95%+.
  • Memory usage — approaching maxmemory? Keys will start getting evicted
  • Eviction rate — if Redis is constantly evicting keys, increase memory or reduce TTLs on low-value keys
  • Operations per second — Redis handles ~100K ops/sec easily. If you're near that, consider clustering
  • Connected clients — each connection uses memory. Pool them.

Load balancer:

  • Healthy vs unhealthy targets — if targets keep flapping, something is wrong with health checks or the servers themselves
  • Request count — total throughput through the LB, useful for capacity planning
  • 5xx from backend — the LB sees all errors; use it as a single pane of glass
  • Connection draining — during deployments, are requests being properly drained from old servers?
  • Target response time — if the LB reports high response time, the backend servers are struggling

The Monitoring Stack

How do all these metrics actually get collected and visualized? Here's the standard architecture used by most companies, from startups to Netflix:

[Diagram: How monitoring works, from metrics to alerts. Your services (app, DB, cache, LB, workers) expose a /metrics endpoint → a metrics collector (Prometheus, Datadog Agent, CloudWatch) scrapes it every 15s → a time-series database stores weeks or months of data (PromQL queries, downsampling of old data) → dashboards (Grafana, Datadog, CloudWatch Console) draw the graphs, while alert rules continuously query the TSDB and fire threshold checks to Slack, PagerDuty, email, or SMS. Open-source stack: Prometheus + Grafana + Alertmanager (free, used by most startups). Managed stack: Datadog / New Relic / AWS CloudWatch (paid, easier setup).]

Setting Up Alerts That Actually Help

The biggest mistake teams make with monitoring: alerting on symptoms instead of causes. "Response time > 2 seconds" is a symptom — it could be caused by a dozen things (slow DB, full CPU, network issue, GC pauses). You get paged, spend 30 minutes figuring out which component is broken, and only then start fixing it.

Better approach: alert on the cause directly. "CPU > 80% for 5 minutes" tells you exactly what's wrong. "DB connections > 80% of max" tells you exactly where to look. You can still have symptom-based alerts as a catch-all, but cause-based alerts are faster to act on.

Warning (yellow) — something is trending toward a problem. "CPU > 70% for 10 minutes." No immediate action needed, but someone should look at it during business hours. Notify via Slack channel.

Critical (orange) — a problem is imminent or happening. "CPU > 85% for 5 minutes" or "error rate > 2%." Needs attention within 30 minutes. Notify via Slack DM + email.

Page (red) — users are affected right now. "CPU > 95% for 2 minutes" or "error rate > 5%" or "health check failing." Wake someone up. Notify via PagerDuty / phone call.
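The three tiers boil down to a threshold check. A minimal sketch using the CPU thresholds from the examples above (tune the numbers per system; real alerting lives in your monitoring tool, not your app code):

```python
def classify_cpu_alert(cpu_pct, minutes_sustained):
    """Map sustained CPU utilization to the alert tiers described above."""
    if cpu_pct > 95 and minutes_sustained >= 2:
        return "page"       # wake someone up (PagerDuty / phone)
    if cpu_pct > 85 and minutes_sustained >= 5:
        return "critical"   # Slack DM + email, act within 30 minutes
    if cpu_pct > 70 and minutes_sustained >= 10:
        return "warning"    # Slack channel, look during business hours
    return "ok"

print(classify_cpu_alert(72, 15))  # warning
print(classify_cpu_alert(88, 6))   # critical
print(classify_cpu_alert(97, 3))   # page
```

Note the duration requirement on each tier: it filters out momentary spikes so you only get paged for sustained problems.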

Capacity Planning โ€” Predicting the Future

The smartest scaling decisions happen weeks before they're needed. Capacity planning means using your current metrics to project when you'll run out of room — and acting before you do.

Simple example: your database uses 800 GB today and grows 10 GB per month. Your disk has 1 TB total, so naively you have 200 GB of headroom — roughly 20 months. But disk performance degrades once it fills past 80%, and 1 TB × 80% = 800 GB: your real runway is zero, because you've already hit your effective limit. Plot these trends on a graph and you'll see the deadline coming (or, as here, already behind you).

Resources don't degrade linearly. A server at 70% CPU handles spikes gracefully. A server at 90% CPU has no room for spikes and starts dropping requests. A disk at 90% full triggers aggressive filesystem maintenance. A connection pool at 90% means every new request waits in line. Treat 80% as your effective maximum — plan to scale before you hit it, not after.
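Applying the 80% rule to the disk example above, the projection is one subtraction and one division:

```python
# Numbers from the example: 800 GB used, 1 TB disk, growing 10 GB/month.
used_gb = 800
disk_gb = 1000
growth_gb_per_month = 10
effective_max_gb = disk_gb * 0.80   # treat 80% full as full

headroom_gb = effective_max_gb - used_gb
months_left = headroom_gb / growth_gb_per_month
print(f"headroom to the 80% line: {headroom_gb:.0f} GB -> {months_left:.0f} months")
# 0 GB of headroom: this disk is already at its effective maximum
```

Run the same arithmetic against connections, memory, and IOPS, and you have a capacity-planning dashboard in miniature.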

The Queries You Should Know

You don't always need a fancy dashboard. Sometimes the fastest way to check your system's health is a quick command. Here are the most useful ones:

# Operations per second and hit rate
redis-cli INFO stats | grep -E "instantaneous_ops|keyspace_hits|keyspace_misses"
# Output:
#   instantaneous_ops_per_sec:4523
#   keyspace_hits:8234109
#   keyspace_misses:412005
# Hit rate = hits / (hits + misses) = 95.2% — healthy!

# Memory usage vs limit
redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human"
# Output:
#   used_memory_human:2.84G
#   maxmemory_human:4.00G
# 71% used — comfortable headroom

# Replication lag (if using replicas)
redis-cli INFO replication | grep -E "role|master_link_status|master_last_io"
# Output:
#   role:slave
#   master_link_status:up
#   master_last_io_seconds_ago:0   ← 0 means caught up!

-- Active connections vs limit
SELECT count(*) AS active,
       setting::int AS max_allowed,
       round(count(*)::numeric / setting::int * 100, 1) AS pct_used
FROM pg_stat_activity, pg_settings
WHERE name = 'max_connections'
GROUP BY setting;
-- Result: active=78, max_allowed=400, pct_used=19.5% — plenty of room

-- Find tables doing sequential scans (= missing indexes!)
SELECT relname AS table_name,
       seq_scan,
       idx_scan,
       CASE WHEN seq_scan + idx_scan > 0
            THEN round(seq_scan::numeric / (seq_scan + idx_scan) * 100, 1)
            ELSE 0
       END AS pct_sequential
FROM pg_stat_user_tables
WHERE seq_scan > 1000
ORDER BY pct_sequential DESC
LIMIT 10;
-- If pct_sequential > 50% on a big table, you need an index!
-- Buffer cache hit ratio (should be > 99%)
SELECT round(
    sum(blks_hit)::numeric / (sum(blks_hit) + sum(blks_read)) * 100, 2
) AS cache_hit_pct
FROM pg_stat_database;
-- Result: 99.7% — the DB is serving almost everything from RAM

# CPU usage (1-second snapshot)
top -bn1 | head -5
# Look at the "%Cpu(s):" line — "id" = idle percentage
# If idle < 20%, CPU is struggling

# Memory usage
free -h
# Look at the "available" column — that's your REAL free memory
# (Linux uses "free" memory for disk cache, which is reclaimable)

# Disk I/O latency
iostat -x 1 3
# Look at the "await" column (average wait time in ms)
# < 5ms = great, 5-20ms = okay, > 20ms = bottleneck

# Network connections
ss -s
# Shows total TCP connections, established, time-wait, etc.
# Thousands of TIME_WAIT connections = connection leak
Think First

Your dashboard shows: CPU at 45%, memory at 60%, but request latency p99 has jumped from 200ms to 3 seconds. Error rate is still at 0.1%. Database connections are at 91% of max. What's the bottleneck, and what's your first move?

The CPU and memory are fine, so it's not a compute problem. The error rate is low, so requests aren't failing โ€” they're just slow. One metric is screaming. That's your answer.
Monitor the Four Golden Signals: (1) Latency — track p50/p95/p99, not averages. (2) Traffic — requests per second, watch for spikes. (3) Errors — 5xx is your fault, alert at > 1%. (4) Saturation — CPU, memory, disk, connections; scale before hitting 80%. Use a monitoring stack (Prometheus + Grafana or Datadog) to collect, store, visualize, and alert. Set tiered alerts: warning at 70%, critical at 85%, page at 95%. Use capacity planning to predict when you'll need to scale — don't wait for emergencies.
Section 15

The Complete Architecture Walkthrough: 0 to 10 Million Users

This is the section that ties everything together. We're going to take a simple web application — say, a social recipe-sharing app — and walk it through five stages of growth. At each stage, you'll see exactly what breaks, why it breaks, what to do about it, and how much it costs. Every tool from this entire page shows up in context, in the right order, at the right time.

This is also the single most common system design interview question format: "How would you scale this to X users?" After this section, you'll be able to answer it with specifics, not hand-waving.

Each stage's fix creates the next stage's problem. Caching solves slow reads — but creates cache invalidation headaches. Read replicas solve DB read load — but introduce replication lag. Sharding solves write bottlenecks — but makes cross-shard queries painful. There is no final "done" state. Scaling is a continuous loop of measure → identify bottleneck → apply simplest fix → repeat.

[Diagram: Stage 1 — single server, everything in one place (~100 daily users). Browsers and mobile clients hit ONE server ($20/month) running app code + database + file storage. CPU: 5%, memory: 20%, 2 GB disk used. A t3.micro or a $5 DigitalOcean droplet. Focus: build the product.]

What the architecture looks like: One server running everything — your web framework, your database (PostgreSQL or MySQL), and your uploaded files (stored on the local disk). Maybe Nginx as a reverse proxy in front.

What breaks: Nothing. Your server is at 5% CPU utilization. You could handle 50x your current traffic without blinking. This is the stage where the biggest mistake is spending time on infrastructure instead of building features.

Focus: Ship features. Talk to users. Iterate on the product. The only infrastructure task worth doing: set up automated backups (a cron job that dumps the DB to S3 daily). If the server dies, you can rebuild from backup in an hour.

Most startups die from not having enough users, not from too many users. If you spend 3 months building a "scalable microservices architecture" and then get 12 users, you wasted 3 months. Build fast, measure, and only scale when the metrics demand it.

[Diagram: Stage 2 — separate the concerns (~5K users, ~20 req/sec). A CDN serves static files and images; an app server handles business logic (t3.small, $15/mo); a Redis cache holds hot queries (t3.micro, $10/mo); PostgreSQL runs on a db.t3.small ($30/mo). Total ~$80/month: app $15, DB $30, Redis $10, CDN $5, S3 $5, backups $15. Focus: separate the DB, add caching.]

What the architecture looks like: The app server and database are now on separate machines. You've added Redis as a cache for frequently-read data (popular recipes, user profiles, feed items). Static files (images, CSS, JS) are served through a CDN so your server doesn't waste time on them. Uploaded photos go to S3 instead of the local disk.

What breaks: During peak hours (evenings, weekends), the database starts getting slow. Response times climb from 50ms to 500ms. Not because the DB server is underpowered, but because popular queries are running thousands of times without caching.

The fix: Add indexes on frequently-queried columns (free, instant improvement). Set up Redis caching for hot data — your "trending recipes" query goes from hitting the DB 10,000 times per hour to once per minute, with Redis serving the rest. Target a 90%+ cache hit rate. Also, add basic monitoring — even a free Grafana Cloud tier is better than nothing.
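The pattern behind that fix is cache-aside: check the cache, and only on a miss run the query and store the result with a TTL. A minimal sketch using an in-process dict standing in for Redis; `fetch_trending_from_db` is a hypothetical stand-in for the real SQL aggregation:

```python
import time

_cache = {}      # key -> (value, expires_at); Redis would play this role
db_queries = 0   # count how often the database actually gets hit

def fetch_trending_from_db():
    global db_queries
    db_queries += 1                 # in reality: a heavy SQL aggregation
    return ["recipe-42", "recipe-7"]

def get_trending(ttl_seconds=60):
    entry = _cache.get("trending")
    if entry and entry[1] > time.time():
        return entry[0]                              # cache hit: no DB work
    value = fetch_trending_from_db()                 # cache miss: query once...
    _cache["trending"] = (value, time.time() + ttl_seconds)
    return value                                     # ...then serve from cache

for _ in range(10_000):                              # 10,000 requests in one TTL window
    get_trending()
print(f"DB queries issued: {db_queries}")            # 1 -- the cache absorbed the rest
```

With Redis you'd replace the dict with `GET`/`SETEX` calls, but the control flow is identical: the database sees one query per TTL window instead of one per request.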

[Diagram: Stage 3 — horizontal scaling begins (~50K users, ~200 req/sec). A load balancer fronts app servers 1-3; Redis caches; a DB primary takes writes with a read replica for reads; a message queue (emails, thumbnails) feeds background workers. ~$300-500/mo for 3 app servers, DB + replica, Redis, LB, queue, and workers. Focus: horizontal scaling + async work.]

What the architecture looks like: A load balancer distributes traffic across 2-3 stateless app servers. The database has a read replica handling read queries. A message queue (RabbitMQ or SQS) offloads heavy work — sending emails, generating thumbnails, building recommendation feeds — to background workers so users don't wait.

What breaks: During peak hours, the single primary database can't keep up with write load. The connection pool maxes out — you see FATAL: too many connections errors. Image processing during upload spikes makes the app feel sluggish for everyone.

The fix: Deploy PgBouncer — a lightweight connection pooler that sits between your app and PostgreSQL, multiplexing hundreds of app connections into a small pool of real DB connections, which cuts PostgreSQL memory usage and absorbs connection storms. Your 3 app servers with 50 connections each now share a pool of 30 real DB connections. Move all heavy processing (image resize, email, notifications) to the message queue. Consider vertically scaling the DB to a bigger instance — at this stage, it's cheaper and simpler than sharding.
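The pool sizing above follows Little's Law: connections busy at any instant ≈ query throughput × average query time. A back-of-envelope sketch with illustrative numbers (2,000 queries/sec at 10 ms each):

```python
# Little's Law: concurrency = throughput x time-in-system.
queries_per_sec = 2000    # assumed aggregate query rate from all app servers
avg_query_sec = 0.010     # assumed 10 ms average query time

connections_in_use = queries_per_sec * avg_query_sec   # ~20 busy at any instant
pool_size = int(connections_in_use * 1.5)              # 50% headroom for bursts
print(f"busy connections: {connections_in_use:.0f}, pool size: {pool_size}")  # 20, 30
```

This is why 30 real connections can serve thousands of app-side connections: most app connections are idle at any given moment, and the pooler hands the few busy ones to the database.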

[Diagram: Stage 4 — auto-scaling + monitoring + multi-region CDN (~500K users, ~2K req/sec). A global CDN (3 regions) and an ALB with health checks front an auto-scaling group of 3-10 app servers, scaling on CPU or request count; a Redis cluster runs at a 95%+ hit rate; the DB primary has replicas R1-R3; a worker fleet (W1-W3) handles async jobs; Prometheus + Grafana provide monitoring. ~$2-5K/month for the auto-scaling fleet, 3-5 read replicas, workers, monitoring, and multi-region CDN. Focus: auto-scale + observability.]

What the architecture looks like: The app servers now auto-scale between 3 and 10 instances based on CPU utilization or request count. The database has 3-5 read replicas. A dedicated worker fleet handles all async jobs. The CDN serves static content from multiple geographic regions. And critically: you now have real monitoring with Prometheus and Grafana.

What breaks: The single primary database becomes the bottleneck for writes. During peak hours, write queries queue up and lock contention increases. Cache invalidation gets more complex — when a user updates their profile, you need to invalidate it across the cache consistently. Deployment complexity grows: you're now deploying to 10+ servers.

The fix: Table partitioning to spread data across multiple physical files. More aggressive caching with short TTLs (5-30 seconds) so stale data is quickly refreshed. Start writing runbooks for common incidents. Consider CI/CD pipelines with blue-green or rolling deployments so you can deploy without downtime.

[Diagram: Stage 5 — distributed architecture at scale. In the US-East region, an API gateway fronts Users, Feed, Recipe, and Search services on a service mesh (Envoy / Istio), backed by DB shards (Shard 1: users A-M, Shard 2: users N-Z), a Redis cluster, Kafka / SQS, distributed tracing (Jaeger), and an auto-scaling worker fleet (10-50 instances) for email, search indexing, and recommendations. An EU-West region replicates the same services with DB and Redis replicas, kept in sync; a global load balancer (Route53 / CloudFront) routes users to the nearest region. ~$20-50K/month: microservices, DB sharding, multi-region, service mesh, distributed tracing, Kafka — and a dedicated SRE team. Focus: resilience + global reach.]

What the architecture looks like: The monolith has evolved into services — not necessarily full microservices, but at least service-oriented: a Users service, a Feed service, a Recipe service, a Search service. The database is sharded by user_id so write load is distributed across multiple database instances. A global load balancer routes users to the nearest region. Distributed tracing (Jaeger or Zipkin) helps debug requests that cross service boundaries.

What breaks: Everything, at some point. Cross-service communication adds latency. Data consistency across shards requires careful handling (no more simple JOINs between users on different shards). Deployments become complex — 20+ services, each with its own release cycle. A subtle bug in one service can cascade to others.

The fix: This is where organizational solutions matter as much as technical ones. A service mesh — a dedicated infrastructure layer like Istio or Envoy that handles service-to-service communication (load balancing, retries, circuit breaking, observability) outside your application code — takes over the networking concerns. Chaos engineering — deliberately injecting failures (killing servers, adding network latency, breaking dependencies) to find weaknesses before they cause real outages, a practice Netflix popularized with Chaos Monkey — finds problems before your users do. A dedicated SRE team manages on-call rotations and incident response. Detailed runbooks ensure anyone can handle common incidents at 3 AM.

The Complete Evolution at a Glance

[Diagram: The scaling journey — each fix creates the next problem. Stage 1 (0-1K users): 1 server, $20/mo; focus on product, not infra. Stage 2 (1-10K): separate DB + cache, $80/mo; Redis fixes 90% of DB load. Stage 3 (10-100K): LB + replicas + queue, $500/mo; horizontal scaling begins. Stage 4 (100K-1M): auto-scale + monitoring, $5K/mo; observability becomes critical. Stage 5 (1-10M): sharding + multi-region, $50K/mo; organizational complexity peaks.]

Most apps will never reach Stage 5. The vast majority of software in the world runs happily at Stage 1 or Stage 2. Don't build for 10 million users when you have 100. Each stage adds complexity, cost, and operational burden. Only advance when your metrics demand it — not when your ego does.

Think First

Your recipe app is at Stage 3 (50,000 users, 3 app servers, 1 read replica). Your monitoring shows the primary DB is at 85% CPU during peak hours, mostly from write queries (new recipe submissions and user interactions). Read replica handles reads fine. What stage are you approaching, and what's your first move?

You're heading toward Stage 4. But remember the toolkit order — what's simpler than sharding for write load? Think: queue the writes, batch the inserts, check for slow queries, optimize indexes. Sharding is the last resort.
Scale in stages, not all at once: (1) 0-1K users — one $20 server, focus on product. (2) 1-10K — separate DB + Redis cache, $80/mo. (3) 10-100K — load balancer + multiple app servers + read replica + message queue, $500/mo. (4) 100K-1M — auto-scaling + monitoring + multi-region CDN, $5K/mo. (5) 1-10M — microservices + sharding + multi-region deployment, $50K/mo. Each stage's fix creates the next stage's problem. Most apps never need past Stage 2.
Section 16

Cost Optimization — Scaling Without Going Broke

Here's a truth that system design interviews rarely mention: scaling costs money. The question isn't just "can the system handle the load?" — it's "can we afford to handle the load?" Many startups have been killed not by technical failures but by cloud bills they didn't see coming. An engineer who can scale a system and keep costs under control is worth their weight in gold.

This section is about spending smart — getting the most performance per dollar without sacrificing reliability.

The Three Types of Cloud Instances

Every major cloud provider (AWS, GCP, Azure) offers the same three pricing models. Understanding them is the single biggest cost lever you have.

On-Demand instances are like hailing a taxi. You pay full price, no commitment, and you can stop anytime. Good for unpredictable workloads, but expensive for steady-state usage.

Reserved instances are like a monthly transit pass. You commit to 1 or 3 years and get 30-60% off. The server is yours whether you use it or not. Perfect for your baseline capacity โ€” the servers you always need running.

Spot instances are like standby airline tickets. You get 60-90% off, but the cloud provider can take them back with 2 minutes' notice when they need the capacity for full-price customers. Perfect for batch processing, background workers, and anything stateless that can handle interruptions.

[Diagram: Same server, three prices — monthly cost for an m5.xlarge (4 vCPU, 16 GB). On-Demand: $140/month, no commitment, full price, cancel anytime. Reserved (1 year): $58/month, a 58% saving, always yours. Spot: $20/month, an 85% saving, but reclaimable with 2 minutes' notice. For 10 servers: On-Demand = $1,400/mo, Reserved = $580/mo, Spot = $200/mo.]

The Smart Strategy: Mix and Match

No real company uses just one pricing model. The winning strategy is a blend:

[Diagram: The cost-optimized instance strategy over a day's traffic curve (night → morning → afternoon → evening peak → night). Reserved instances ($58/mo each) cover the always-running baseline of 5 servers; On-Demand instances ($140/mo each) scale up 3-8 more servers for the peaks; Spot instances ($20/mo each) run batch jobs overnight.]

Right-Sizing: Stop Paying for Idle Resources

This is the lowest-hanging fruit in cost optimization, and almost every company is guilty of ignoring it. Right-sizing means matching your instance type to your actual usage.

If your t3.large (2 vCPU, 8 GB RAM) averages 15% CPU and 3 GB memory, you're paying for resources you don't use. Downsize to a t3.medium (2 vCPU, 4 GB RAM) and save 30%. Multiply that across 20 servers and you've saved thousands per year doing nothing but clicking a button.
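As a back-of-envelope check (the per-server price here is illustrative, not a quoted AWS rate):

```python
current_monthly = 60     # assumed monthly cost of one over-provisioned server
savings_rate = 0.30      # the ~30% saving from downsizing, as cited above
servers = 20

monthly = current_monthly * savings_rate * servers
print(f"${monthly:.0f}/month, ${monthly * 12:,.0f}/year across {servers} servers")
# $360/month, $4,320/year -- for a few clicks
```

Multiply that across every oversized instance family in the account and the number gets large quickly.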

AWS: Use AWS Compute Optimizer — it analyzes your CloudWatch metrics and recommends right-sized instances. It's free. GCP: Use the VM rightsizing recommendations in the console. Azure: Use Azure Advisor. Or just check your monitoring dashboard: if any server's CPU or memory never exceeds 40%, it's a candidate for downsizing.

The Database Cost Ladder

Database costs are often the single biggest line item on a cloud bill. Here's the optimization order, from cheapest to most expensive:

  1. Add indexes — Free. Reduces CPU usage because the DB does less work per query. Check pg_stat_user_tables for tables with high sequential scan counts.
  2. Optimize queries — Free. Use EXPLAIN ANALYZE to find slow queries. Often a small query rewrite turns a 500ms query into a 5ms query.
  3. Add caching (Redis) — ~$15-50/month. Eliminates 90% of read queries from hitting the DB at all.
  4. Add read replicas — ~$30-200/month each. Spreads read load across multiple servers.
  5. Vertically scale the primary — $100-2,000/month. More RAM means more data in the buffer cache, fewer disk reads.
  6. Shard the database — $1,000+/month (multiple DB instances + operational overhead). Only when write load exceeds what one server can handle.

Notice the pattern: the first three options cost under $50/month combined and solve 95% of database performance issues. Sharding costs 20x more and adds enormous complexity. Always exhaust the cheap options first.

Storage Tiering: Not All Data Deserves SSD

Different data has different access patterns. Storing everything on the fastest (most expensive) storage is wasteful. Cloud providers offer storage tiers designed for this:

Hot storage:

  What: SSD-backed, high-performance. Instant access.
  Cost: ~$0.08/GB/month (AWS EBS gp3)
  Use for: Active database files, frequently-accessed application data, current user uploads.
  Example: The last 30 days of user posts, active session data, search indexes.

Warm storage:

  What: HDD-backed or infrequent-access tier. Slightly slower, much cheaper.
  Cost: ~$0.0125/GB/month (AWS S3 Standard-IA)
  Use for: Data accessed occasionally — old user uploads, logs from the past 90 days, monthly reports.
  Example: Posts older than 30 days, completed order history, non-current profile photos.

Cold storage:

  What: Archive tier. Retrieval takes minutes to hours. Extremely cheap.
  Cost: ~$0.004/GB/month (AWS S3 Glacier)
  Use for: Backups, compliance data, anything you must keep but rarely access.
  Example: Database backups older than 30 days, audit logs from 2 years ago, deleted user data held for legal compliance.

The savings are dramatic. If you have 10 TB of data and 80% of it is cold or warm: all 10 TB on SSD costs ~$800/month, while 2 TB hot + 3 TB warm + 5 TB cold costs ~$217.50/month — a 73% cut for data you were barely touching anyway.
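Checking that arithmetic with the per-GB prices listed above:

```python
# Tiered vs all-SSD cost for the 10 TB example, $/GB/month prices from the text.
prices = {"hot": 0.08, "warm": 0.0125, "cold": 0.004}
tiers_gb = {"hot": 2000, "warm": 3000, "cold": 5000}   # 10 TB, 80% warm/cold

all_ssd = 10_000 * prices["hot"]
tiered = sum(tiers_gb[t] * prices[t] for t in tiers_gb)
print(f"all SSD: ${all_ssd:.2f}/mo, tiered: ${tiered:.2f}/mo")
# all SSD: $800.00/mo, tiered: $217.50/mo
```

The hot tier still dominates the tiered bill ($160 of the $217.50), which is the point: you pay SSD prices only for the 20% of data that earns it.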

Use S3 Lifecycle Rules to automatically move objects to cheaper tiers based on age. Example: move to Standard-IA after 30 days, to Glacier after 90 days. Set it once, save forever.

Real Cost Example: Unoptimized vs Optimized

Let's look at a real app serving 1 million active users. Same features, same reliability — just different infrastructure choices:

The unoptimized setup:

  • 10 × m5.xlarge on-demand app servers — $1,400/mo
  • 2 × r5.2xlarge on-demand DB (primary + replica) — $2,200/mo
  • No connection pooler — DB running 800 connections at once — extra RAM cost $500/mo
  • 10 TB all on gp3 SSD — $1,000/mo
  • No caching layer — DB handles ALL reads — oversized DB instance $3,000/mo
  • On-demand workers for background jobs (always running) — $1,200/mo
  • Oversized NAT gateway (all traffic through one) — $800/mo
  • Monitoring + logging + data transfer — $4,900/mo
  • Total: ~$15,000/month
The optimized setup:

  • 5 × m5.large reserved + auto-scale 0-5 on-demand — $450/mo (reserved saves 58%, right-sized from xlarge to large)
  • 1 × r5.xlarge reserved primary + 2 × r5.large reserved replicas — $650/mo
  • PgBouncer connection pooling — DB needs 100 connections, not 800 — smaller instance saves $300/mo
  • Storage tiered: 2 TB hot + 3 TB warm + 5 TB cold — $217.50/mo
  • Redis cache (r6g.large reserved) — 95% hit rate, DB load drops 90% — $120/mo
  • Spot instances for background workers — $180/mo
  • VPC endpoints instead of NAT gateway where possible — $200/mo
  • Monitoring + logging + data transfer (optimized) — $2,245/mo
  • Total: ~$4,500/month

Same 1 million users. Same features. Same reliability. $10,500/month cheaper. That's $126,000 per year — enough to hire another engineer. And the optimized setup actually performs better because caching reduces latency and connection pooling prevents DB overload.

Think First

Your AWS bill shows: 8 on-demand m5.xlarge servers running 24/7, averaging 25% CPU utilization. You also have 3 on-demand workers running batch jobs only at night (12 hours/day). Name three specific cost optimizations and estimate the monthly savings.

Think about: (1) the instance size given 25% CPU, (2) the pricing model for always-on servers, and (3) the pricing model for night-only batch work. Each one is a different lever.
Scale smart, not just big. Three pricing models: On-Demand (full price, no commitment), Reserved (30-60% off for 1-3 year commitment), Spot (60-90% off but can be reclaimed). Use reserved for baseline, on-demand for peaks, spot for batch jobs. Right-size instances by checking actual CPU/memory utilization — most servers are over-provisioned. Tier storage: hot data on SSD ($0.08/GB), warm on Standard-IA ($0.0125/GB), cold on Glacier ($0.004/GB). For databases, exhaust free optimizations (indexes, query tuning) before spending on bigger instances. Real example: same 1M-user app costs $15K/mo unoptimized vs $4.5K/mo optimized — a $126K/year difference.
Section 17

Common Mistakes — The Scaling Traps Everyone Falls Into

Every team makes at least one of these. The scary part isn't the mistake itself — it's that each one feels like the right thing to do at the time. You think you're being proactive. You're actually building a trap that springs under load.

What goes wrong: Your app feels slow, so you throw more servers at it. You go from 2 servers to 8. The bill quadruples. The app is still slow. Why? Because the bottleneck was a single unindexed SQL query that took 4 seconds per request — and adding servers just meant more servers running the same slow query.

Why it's wrong: Scaling is a multiplier, not a fix. If each request takes 4 seconds because of a missing database index, 8 servers give you 8× the capacity to run 4-second requests. Amdahl's LawThe maximum theoretical speedup of a system is limited by the sequential (non-parallelizable) portion of the work. from Section 7 explains exactly why — the serial bottleneck (that slow query) dominates no matter how much parallel capacity you add.
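Amdahl's Law is easy to check with quick arithmetic. A minimal sketch (the 80% serial fraction is an assumed illustration, not a measured number):

```python
# Amdahl's Law: speedup = 1 / (S + (1 - S) / N), where S is the serial fraction.
# The 80% figure below is an assumed illustration, not a measured number.
def amdahl_speedup(serial_fraction, n_servers):
    return 1 / (serial_fraction + (1 - serial_fraction) / n_servers)

# If the slow query is 80% of each request's time, adding servers barely helps:
for n in (1, 2, 8, 1000):
    print(n, round(amdahl_speedup(0.8, n), 2))
# No number of servers can beat 1 / S = 1.25x until the query itself is fixed.
```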

The fix: Always profile before scaling. Run EXPLAIN ANALYZE on your slowest queries. Check your APM tool (New Relic, Datadog) for the slowest endpoints. Find where time is actually spent. In most cases, a single index or query rewrite eliminates the need for more servers entirely.

What goes wrong: Your login system stores user sessions in the app server's memory. It works great with one server. Then you add a second server behind a load balancer. Half the time, users get routed to the "wrong" server and see a login screen again. Angry support tickets pile up.

Why it's wrong: Horizontal scalingHorizontal scaling means adding more servers. If any server holds unique data, you can't freely route requests to any server. only works when every server is interchangeable. The moment one server holds something another doesn't (like a session), you've created a "sticky" dependency. You'll need "sticky sessions" on your load balancer, which defeats the purpose — if that server dies, every user session on it is gone.

The fix: Move sessions to an external store that all servers share — Redis, Memcached, or a database. Now any server can handle any request. The server becomes truly statelessA stateless server doesn't remember anything between requests. All the data it needs comes with the request or from an external store., and you can add or remove servers freely.
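The pattern can be shown without any real infrastructure. In this sketch a plain dict stands in for Redis, and AppServer is a hypothetical stand-in for your app process:

```python
# Sketch of the stateless pattern: two app "servers" share one external
# session store. A plain dict stands in for Redis; AppServer is a
# hypothetical stand-in for your app process.
class AppServer:
    def __init__(self, session_store):
        self.sessions = session_store  # shared store, not this server's memory

    def login(self, session_id, user_id):
        self.sessions[session_id] = {"user_id": user_id}

    def whoami(self, session_id):
        # Works no matter which server originally handled the login
        return self.sessions.get(session_id)

store = {}  # in production: a Redis instance all servers can reach
server_a, server_b = AppServer(store), AppServer(store)
server_a.login("sess-123", user_id=42)  # load balancer sent the login to A
print(server_b.whoami("sess-123"))      # B sees the session: {'user_id': 42}
```

Because neither server holds anything the other lacks, the load balancer can route any request anywhere.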

What goes wrong: Traffic grows, response times climb, so you add more app servers. At first it helps — CPU on each server drops. Then it gets worse again, even worse than before. What happened? All those new app servers are hammering the same single database. You scaled the wrong layer.

Why it's wrong: In most web apps, the database is the bottleneck, not the application server. App servers do relatively cheap computation (render a template, run some business logic). The database does the expensive work (disk I/O, joins, sorting). Adding app servers without addressing the database is like adding more cashiers when the kitchen is the bottleneck — the line at the counter moves faster, but everyone still waits 30 minutes for food.

The fix: Before adding app servers, check your database metrics — CPU, connections, query latency, I/O wait. If the database is struggling: add read replicas for read-heavy workloads, add caching (Redis) for hot data, optimize slow queries with indexes, or consider connection pooling. Only add app servers after the database has room to breathe.

What goes wrong: You have 20 app servers, each running 50 worker threads. Each thread opens its own database connection. That's 20 × 50 = 1,000 simultaneous connections to your PostgreSQL, which has a default limit of ~100 connections. The database starts rejecting connections. Users see "500 Internal Server Error."

Why it's wrong: Every database connection costs memory (PostgreSQL uses ~10 MB per connection). 1,000 connections = 10 GB of RAM just for connection overhead. Worse, the database can't actually run 1,000 queries in parallel — it has maybe 8-16 CPU cores. Most of those connections sit idle, wasting resources.

The fix: Put a connection poolerA connection pool maintains a set of reusable database connections. Instead of each thread opening its own connection, threads borrow from the pool and return when done. between your app servers and the database — tools like PgBouncer for PostgreSQL or ProxySQL for MySQL. 1,000 app-side connections get multiplexed into 50-100 actual database connections. The database stays happy, and you can scale app servers without drowning the DB.
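The borrow-and-return mechanic can be sketched in a few lines. This toy version (the lambda "connections" are fakes; names and sizes are illustrative, not PgBouncer's actual internals) runs 500 queries over just 10 connections:

```python
import queue
import threading

# Toy version of what a pooler like PgBouncer does: many app threads borrow
# from a small set of reusable connections instead of opening one each.
# The lambda "connections" are fakes; names and sizes are illustrative.
class ConnectionPool:
    def __init__(self, size, connect):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())  # open a fixed number of real connections

    def query(self, sql):
        conn = self._idle.get()        # borrow (blocks if all are busy)
        try:
            return conn(sql)
        finally:
            self._idle.put(conn)       # return the connection for reuse

opened = []
def fake_connect():
    opened.append(1)                   # count how many "real" connections exist
    return lambda sql: f"result of {sql}"

pool = ConnectionPool(size=10, connect=fake_connect)
threads = [threading.Thread(target=pool.query, args=("SELECT 1",)) for _ in range(500)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(opened))  # 500 queries ran, but only 10 connections were ever opened
```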

What goes wrong: You add Redis caching and everything flies. Product prices, user profiles, inventory counts โ€” all cached. Months later, a customer buys an item listed at $29.99 that was repriced to $49.99 weeks ago. The cache still serves the old price because nobody set an expiration.

Why it's wrong: A cache without a TTL (Time-To-Live)Time-To-Live: the maximum duration a cached entry is considered valid. After TTL expires, the entry is evicted and the next request fetches fresh data from the source. is just a second database that never updates. Over time, the gap between cached data and real data grows. You end up with a system that's fast but wrong — which is worse than being slow but right.

The fix: Every cache entry must have a TTL. Use shorter TTLs for data that changes often (inventory: 30-60 seconds), longer TTLs for data that rarely changes (product descriptions: 1 hour). Also implement cache invalidation for critical data — when a price changes, explicitly delete the cached entry so the next read fetches the new price.
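A bare-bones TTL cache sketches both halves of the fix: every entry expires, and critical updates can invalidate explicitly. This is a stand-in for what Redis gives you with SETEX and DEL; the controllable clock and key names are illustrative:

```python
import time

# Bare-bones TTL cache: every entry expires, and critical updates can
# invalidate explicitly. A stand-in for Redis SETEX/DEL; the fake clock
# and key names are illustrative.
class TTLCache:
    def __init__(self, clock=time.monotonic):
        self._data = {}
        self._clock = clock

    def set(self, key, value, ttl_seconds):
        self._data[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._data[key]        # expired: next reader must fetch fresh data
            return None
        return value

    def invalidate(self, key):
        self._data.pop(key, None)      # call this when the price actually changes

now = [0.0]
cache = TTLCache(clock=lambda: now[0])  # controllable clock for the demo
cache.set("price:sku-1", 29.99, ttl_seconds=60)
print(cache.get("price:sku-1"))  # 29.99 while fresh
now[0] = 61.0
print(cache.get("price:sku-1"))  # None once the TTL has passed
```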

What goes wrong: You read that sharding is how the big companies scale, so you shard your database at 10,000 users. Now every query needs a routing layer. Joins across shards don't work. Migrations take 10× longer. Your 3-person team spends more time managing shard infrastructure than building features. Meanwhile, a single PostgreSQL instance could comfortably handle 10 million rows.

Why it's wrong: Sharding is the last resort of database scaling, not the first step. It introduces massive operational complexity: cross-shard transactions, rebalancing when shards get hot, backup coordination, schema migrations across N databases. For most apps, you won't need sharding until you have hundreds of millions of rows or thousands of writes per second.

The fix: Exhaust simpler options first. Add indexes. Optimize queries. Add read replicas. Use caching. Upgrade to a bigger database server (vertical scaling). Archive old data. Partition tables within a single database. Only when ALL of these are maxed out — and you have a team large enough to manage the complexity — should you consider sharding.

The six scaling traps: (1) Scaling before profiling — always measure first. (2) Server-side sessions — move to external session store for stateless servers. (3) Ignoring the database — the DB is usually the real bottleneck, not app servers. (4) No connection pooling — use PgBouncer/ProxySQL to multiplex connections. (5) Cache without TTL — every cached entry needs an expiration. (6) Sharding too early — exhaust indexes, replicas, caching, and vertical scaling before sharding.
Section 18

Interview Playbook — Nail Scalability Questions

Scalability comes up in almost every system design interview. The good news: interviewers aren't looking for you to recite textbook definitions. They want to see a structured thought process โ€” proof that you'd make sound decisions under pressure. Here's the framework that gets you there.

The 4-Step Scalability Answer Framework:

  • Step 1: Identify the Bottleneck ("The DB is at 90% CPU")
  • Step 2: Choose the Right Tool ("Add read replicas")
  • Step 3: Explain the Tradeoffs ("Replication lag vs speed")
  • Step 4: Give Real Numbers ("3 replicas, ~200ms lag")

If an interviewer says "Tell me about scalability," here's a clean, structured answer you can deliver in under a minute:

"Scalability is the system's ability to handle growing load without degrading performance. There are two main strategies: vertical scaling — giving one machine more CPU and RAM (simple but has a ceiling), and horizontal scaling — adding more machines behind a load balancer (harder to build but practically unlimited).

In practice, you combine both with a toolkit: caching to reduce repeated work, read replicas to spread database reads, async queues to defer non-urgent tasks, and sharding as a last resort for massive write volume.

The key principle is: always measure first. Profile the bottleneck, pick the simplest tool that addresses it, and understand the tradeoffs — like how caching trades freshness for speed, or how replicas introduce replication lag. No scaling decision should be made without looking at the numbers."

Q: "How would you scale this system to 10 million users?"

Don't jump to microservices. Walk through it in stages: (1) Start with one server — at 10K users, add a CDN for static assets and a Redis cache for hot data. (2) At 100K users, separate the database from the app server, add read replicas, put a load balancer in front of 2-3 app servers. (3) At 1M users, add async message queues for background tasks (emails, notifications), shard the database by user ID if write volume demands it. (4) At 10M, consider splitting into services if the codebase is too large for one team, add global CDN edges, and implement auto-scaling. Each step should be justified by a metric hitting a threshold, not by guessing.

Q: "Vertical vs horizontal — when would you choose each?"

Vertical when the bottleneck is a single-threaded process that can't be parallelized (like a single PostgreSQL primary doing heavy writes) — throw a bigger machine at it. Horizontal when the workload is easily distributable (stateless web servers, read-heavy traffic) — add more machines. In reality, most systems use vertical scaling on the database layer and horizontal scaling on the application layer. Mention that vertical has a hard ceiling (the biggest cloud instance exists) while horizontal has a complexity ceiling (distributed systems are harder to reason about).

Q: "What happens when your database can't keep up?"

Ladder of fixes, simplest first: (1) Check for missing indexes and slow queries — this alone fixes 80% of DB performance issues. (2) Add caching (Redis) for frequently read data — reduces DB load by 60-80%. (3) Add read replicas for read-heavy workloads. (4) Add connection pooling (PgBouncer) if you have many app servers. (5) Vertical scale the DB server (more RAM = bigger buffer pool = fewer disk reads). (6) Shard only as a last resort for write-heavy workloads that exceed what one machine can handle.

  • "Let me first identify the bottleneck..." — shows you measure before acting
  • "The tradeoff here is..." — shows you understand nothing is free
  • "At our current scale of X requests/sec..." — shows you think in numbers
  • "We'd start simple and add complexity only when metrics justify it" — shows engineering maturity
  • "That depends on the read/write ratio" — shows you know one-size-fits-all doesn't exist
Use the 4-step framework for every scalability answer: (1) Identify the bottleneck, (2) Choose the right tool, (3) Explain the tradeoffs, (4) Give real numbers. Have a 60-second pitch ready. For "scale to 10M users" questions, walk through stages (10K → 100K → 1M → 10M) with specific tools at each stage. Always emphasize measuring before scaling and understanding tradeoffs.
Section 19

Practice Exercises — Build Your Scaling Intuition

Reading about scalability is one thing. Doing the math and making decisions is another. These exercises go from napkin math to architectural reasoning — try them before peeking at hints.

Your personal blog gets 500 unique visitors per day. Each visitor loads an average of 3 pages. Your single $10/month VPS has a Nginx + WordPress setup that can handle about 50 requests per second.

Questions: (a) How many requests per second does your blog actually serve on average? (b) What's the peak-to-average ratio if all traffic comes in 8 hours? (c) Do you need to scale?

(a) 500 visitors × 3 pages = 1,500 requests/day. Spread over 24 hours: 1,500 ÷ 86,400 ≈ 0.017 req/sec. That's essentially nothing.

(b) If traffic concentrates in 8 hours: 1,500 ÷ (8 × 3,600) ≈ 0.05 req/sec. Even with a 10× spike (viral article), you'd hit 0.5 req/sec — your server handles 50. You're at 1% capacity.

(c) Absolutely not. You could handle 100× your current traffic on this server. Spend your time writing content, not scaling infrastructure. The only thing you might add is a CDN (free tier of Cloudflare) to cache static assets and protect against DDoS — but that's about security, not scale.
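The same napkin math can be packaged as a one-line helper, assuming traffic is uniform within the active window:

```python
# The napkin math above as a reusable helper. Assumes traffic is uniform
# within the active window; numbers match the exercise.
def avg_rps(visitors_per_day, pages_per_visit, active_hours=24):
    return visitors_per_day * pages_per_visit / (active_hours * 3600)

print(round(avg_rps(500, 3), 3))     # ~0.017 req/sec spread over 24 hours
print(round(avg_rps(500, 3, 8), 3))  # ~0.052 req/sec packed into 8 hours
```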

You have a Flask web app that stores user sessions like this:

# Current code — sessions stored in server memory
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "super-secret"

@app.route("/login")
def login():
    session["user_id"] = 42  # Stored in THIS server's memory
    return "Logged in"

You need to add a second server behind a load balancer. What breaks, and how do you fix it?

User logs in → session stored on Server A. Next request goes to Server B → no session found → user sees login page again. This happens randomly (~50% of requests).

Fix: Use server-side sessions backed by Redis. In Flask, swap to flask-session with a Redis backend. Now both servers read/write sessions from the same Redis instance. The server becomes stateless — any server can handle any request.

You're building a photo-sharing app (think early Instagram). You're at 1,000 users today and growing fast — you expect 1 million users in 12 months. Each user uploads ~2 photos/week and views ~50 photos/day. Design a scaling roadmap with specific milestones.

At 1K users: ~290 uploads/day (2 per user per week), ~50K views/day (~0.6 req/sec). A single server with local storage works fine. Focus on product, not infrastructure.

At 10K users: ~2.9K uploads/day, ~500K views/day (~6 req/sec). Move images to object storage (S3) + CDN. Add Redis cache for the feed. Still one app server.

At 100K users: ~29K uploads/day, ~5M views/day (~60 req/sec). Add a load balancer + 2-3 app servers. Add a read replica for the database. Use a message queue for image processing (thumbnails, compression) so uploads don't block.

At 1M users: ~290K uploads/day, ~50M views/day (~600 req/sec). Multiple app servers with auto-scaling. Database read replicas (3-5). Dedicated image processing workers. CDN in multiple regions. Consider sharding the database by user_id if write volume demands it. Total infrastructure cost: ~$2-5K/month on AWS.

Your PostgreSQL database has max_connections = 400. You're seeing FATAL: too many connections errors in your logs. You have 20 app servers, each with 25 worker threads, and each thread opens its own DB connection. What's going on and how do you fix it — both short-term and long-term?

The problem: 20 servers × 25 threads = 500 potential connections. But max_connections is 400. Under peak load, you exceed the limit.

Short-term fix: Reduce per-server worker threads to 15 (20 × 15 = 300, safely under 400). Or increase max_connections to 600 — but this costs ~6 GB of RAM (PostgreSQL uses ~10 MB per connection), and most connections are idle anyway.

Long-term fix: Deploy PgBouncer in front of PostgreSQL. Configure each app server to connect to PgBouncer with 25 connections each. PgBouncer multiplexes all 500 app-side connections into ~50-80 real database connections using transaction-level pooling. The database sees 80 connections instead of 500, and you can safely add more app servers without touching the DB.

Design a system that handles 100,000 concurrent WebSocketWebSocket is a protocol that keeps a persistent two-way connection between client and server, allowing real-time data push without repeated HTTP requests. connections for a real-time chat application. Each user sends ~1 message per minute and receives messages from up to 50 group chats. Consider: connection management, message fan-out, presence tracking, and what happens when a server dies.

Connection capacity: A single modern server can hold ~50K-65K WebSocket connections (limited by file descriptors and RAM — each connection uses ~10-50 KB). So you need 2-3 WebSocket servers minimum, behind a load balancer with sticky connections (WebSocket is stateful).

Message volume: 100K users × 1 msg/min = ~1,667 messages/sec. Each message fans out to group members. If average group has 20 members, that's ~33K message deliveries/sec. This is manageable for a pub/sub system like Redis Pub/Sub or Kafka.
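Those throughput figures are quick to double-check (the 20-member average group size is the assumption from the exercise):

```python
# Double-checking the chat throughput figures; the 20-member average group
# size is the assumption from the exercise.
users = 100_000
msgs_per_user_per_min = 1
avg_group_size = 20

send_rate = users * msgs_per_user_per_min / 60  # messages entering the system per second
fanout_rate = send_rate * avg_group_size        # deliveries pushed to clients per second
print(round(send_rate), round(fanout_rate))     # ~1667 in, ~33333 out
```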

Architecture: Users connect to WebSocket servers. When User A sends a message, the WS server publishes to a message broker (Redis Pub/Sub). All WS servers subscribe to relevant channels and push to their connected users. Presence (who's online) tracked in Redis with TTL-based heartbeats.

Server failure: When a WS server dies, ~50K users disconnect. They auto-reconnect to another server (clients should have reconnection logic). The message broker ensures no messages are lost during reconnection — undelivered messages are queued.

Five exercises from napkin math to architecture: (1) Blog traffic math — 0.017 req/sec needs zero scaling. (2) Stateless migration — move sessions to Redis. (3) Photo app roadmap — staged scaling at 1K/10K/100K/1M users. (4) Connection pool emergency — PgBouncer multiplexes 500 app connections into 80. (5) Real-time chat — WebSocket servers + Redis Pub/Sub for 100K concurrent users.
Section 20

Cheat Sheet — Scalability at a Glance

  • Vertical Scaling: Bigger machine — more CPU, RAM, SSD. Simple but has a ceiling (largest cloud instance exists).
  • Horizontal Scaling: More machines behind a load balancer. Harder to build but practically unlimited.
  • Load Balancer: Distributes requests across servers. Use when you have 2+ app servers. Round-robin is the simplest algorithm.
  • Stateless Servers: Never store user sessions on the app server. Move state to Redis or a database so any server can handle any request.
  • Caching: Store hot data in memory (Redis/Memcached). Target 90%+ cache hit rate. Every cache entry needs a TTL.
  • Read Replicas: Copies of the database that handle read queries. Great for read-heavy workloads. Caveat: replication lag (100-500ms).
  • Async Queues: Defer non-urgent work (emails, thumbnails, reports) to background workers via a message queue (RabbitMQ, Kafka, SQS).
  • Sharding: Split data across multiple databases by a key (user_id, region). Last resort — use only after exhausting simpler options.
  • Connection Pooling: Multiplex app connections into fewer DB connections. Tools: PgBouncer (Postgres), ProxySQL (MySQL).
  • Amdahl's Law: Speedup = 1 / (S + (1−S)/N). The serial fraction S limits your gains no matter how many servers N you add.
  • Little's Law: L = λ × W. Concurrent users (L) = arrival rate (λ) × avg time in system (W). Use it to size your server pool.
  • Golden Rule: Measure first, scale second. Profile the bottleneck, pick the simplest fix, understand the tradeoffs, then act.
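Little's Law is the one card worth a worked example, since it directly sizes a server pool. All inputs here (600 req/sec, 200 ms, 25 concurrent requests per server) are illustrative assumptions:

```python
import math

# Little's Law (L = lambda * W) applied to pool sizing. All inputs
# (600 req/sec, 200 ms, 25 concurrent requests per server) are
# illustrative assumptions.
def concurrent_requests(arrival_rate_per_sec, avg_seconds_in_system):
    return arrival_rate_per_sec * avg_seconds_in_system  # L = lambda * W

in_flight = concurrent_requests(600, 0.2)  # ~120 requests in flight at any instant
servers = math.ceil(in_flight / 25)        # each server handles ~25 concurrently
print(in_flight, servers)
```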
Twelve quick-reference cards covering the entire scalability toolkit: vertical vs horizontal scaling, load balancers, stateless servers, caching, read replicas, async queues, sharding, connection pooling, Amdahl's Law, Little's Law, and the golden rule of measuring before scaling.
Section 21

Connected Topics — Where to Go Next

Scalability doesn't exist in a vacuum. Every tool in the scaling toolkit connects to a deeper topic. Pick the ones that matter most for your next system design interview or project.

Scalability connects to reliability, availability, performance, load balancing, caching, sharding, CAP theorem, back-of-envelope estimation, message queues, CDNs, microservices, and auto-scaling. Each topic deepens one piece of the scaling toolkit.