Every app has reviews. Amazon, Uber, Airbnb, Google Maps — billions of reviews shaping billions of decisions. "Leave a review" seems trivial: text, stars, submit. But what happens when you need different rating algorithms — simple average vs. weighted vs. Bayesian (a statistical approach that "pulls" ratings toward a global average, preventing a product with one five-star review from outranking one with 500 reviews averaging 4.8; named after Thomas Bayes, it's how Amazon, IMDb, and Reddit actually rank things)? When reviews need content moderation — spam detection, profanity filtering, fake review detection? When sellers need real-time notifications when reviews arrive? When one angry customer submits 50 one-star reviews in a minute? That simple review form becomes a real design problem.
We're going to build this system 7 times — each time adding ONE constraint that breaks your previous code. A constraint is a real-world requirement that forces your code to evolve; it's something the BUSINESS needs, not a technical exercise. "Reviews can be for products, sellers, or deliveries" is a constraint; "use the Strategy pattern" is not — that's a solution you DISCOVER. You'll feel the pain, discover the fix, and understand WHY the design exists. By Level 7, you'll have a complete, production-grade review system — and a set of reusable thinking tools that work for any system.
The Constraint Game — 7 Levels
L0: Submit a Review
L1: Review Types
L2: Rating Algorithms
L3: Content Moderation
L4: Notifications
L5: Edge Cases
L6: Testability
L7: Scale It
The System Grows — Level by Level
Each level adds one constraint. Here's a thumbnail of how the class diagram expands from 1 class to a full review platform:
What You'll Build
System: A production-grade review platform with polymorphic review types (different kinds of reviews — product, seller, delivery — share a common interface but carry different data: a product review has photos, a seller review has a "would buy again" flag, and polymorphism lets the system handle all of them through one IReview interface), pluggable rating algorithms (the system can switch between Simple Average, Weighted Average, and Bayesian Average at runtime without changing any code; each algorithm implements IRatingStrategy, and the service picks the right one based on context), content moderation, and real-time notifications.
Patterns: Strategy (a family of interchangeable algorithms: rating calculation uses IRatingStrategy, and SimpleAverage, WeightedAverage, and BayesianAverage are swappable at runtime without modifying calling code), Observer (when a review is submitted, interested parties such as the email service, push notifications, and analytics get notified automatically; the review system doesn't know WHO listens — it just broadcasts, and adding a new listener requires zero changes to existing code), Chain of Responsibility (moderation rules form a pipeline of spam check, profanity check, and fake review detection; each rule inspects the review and either passes it along or rejects it, and adding a new rule means adding one class, not touching existing rules), and Result&lt;T&gt; (a functional error-handling pattern that returns either a success value or an error message, instead of throwing exceptions for business logic — it makes error paths explicit and compiler-visible).
Skills: Real-world walkthrough, the "What varies?" instinct (when you see multiple ways to do the same thing — 3 rating algorithms, multiple review types — ask "What varies?"; the answer tells you where to put an interface, and this single question reveals Strategy, Observer, and dozens of other patterns), pipeline thinking, rating math, and CREATES (the 7-step interview framework: Clarify → Requirements → Entities → API → Trade-offs → Edge cases → Scale; works for every LLD problem).
Before we write a single line of code, let's open Amazon and look at a product page. Not as a shopper — as a designer. Pay attention to every thing (noun) and every action (verb) you can find in the review section. These become your classes, methods, and data types.
Think First #1 — Open Amazon. Find any product with reviews. List every THING you see (star bars, badges, photos) and every ACTION you can take (filter, sort, vote helpful, report). Don't peek below for 60 seconds.
Things you probably found: Star distribution bar, overall rating, review count, reviewer name, verified purchase badge, helpful count, star rating per review, review title, review text, review photos, review date, seller response.
Actions: Filter by stars, sort by most recent/most helpful, mark as helpful, report abuse, write a review, upload photos, edit your review.
Every noun is a candidate entity (in LLD, an entity is a real-world concept that becomes a class, record, or enum in your code; reviews, products, ratings, and users are entities because they hold data and have identity). Every verb is a candidate method. This is noun extraction — a systematic technique for finding entities: read the problem description, highlight every noun, and evaluate which ones need to exist in your code. Not every noun becomes a class; some become fields, some become enums, some are irrelevant. And it works for ANY system.
Stage 1: Browse Reviews
What you SEE: A star distribution bar showing how many people gave 5, 4, 3, 2, or 1 stars. An overall rating like "4.2 out of 5." A review count ("1,847 ratings"). Individual reviews with star ratings, titles, text, dates, and a "Verified Purchase" badge. Photos attached to some reviews. A "Was this helpful?" button with a count.
What you can DO: Filter by star count (show only 1-star reviews). Sort by most recent or most helpful. Search within reviews.
Design insight: We already found several nouns — Review (the core entity with text, stars, date), Product (what's being reviewed), User (who wrote the review), Rating (the star value, 1-5), and HelpfulVote (a separate entity tracking who found what helpful). The star distribution bar tells us we need an aggregate: a computed summary derived from individual records. The "4.2 out of 5" rating isn't stored directly; it's calculated from all individual review ratings, and it must be recalculated whenever the underlying data changes.
Stage 2: Write a Review
What you SEE: A star selector (click 1-5 stars). A title field. A text area for the review body. An option to upload photos. A "Verified Purchase" badge that appears automatically if you actually bought the product.
What happens behind the scenes: The system checks if you've already reviewed this product (one review per user per product). It verifies your purchase history. It validates star range (1-5, no 0 or 6). The review gets a timestamp and a unique ID.
Design insight: New concept: validation rules. Stars must be 1-5. Text can't be empty. One review per user per product. These aren't just if-statements scattered through the code — they're business rules that deserve their own home. Also notice that a review is immutable after creation: once submitted, the core data (user, product, stars, text, date) never changes. An edit is logically a NEW version, not a mutation of the old one — which makes Review a natural candidate for a record type in C#.
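To make "business rules with their own home" concrete, here's a minimal sketch of what that home could look like. Everything in it is a hedged assumption for illustration — the ReviewValidator name, the specific checks, and the list-of-errors shape are not the code we build in later levels:

```csharp
using System.Collections.Generic;

// Hypothetical sketch: the validation rules gathered in one place
// instead of scattered as if-statements through the codebase.
public static class ReviewValidator
{
    public static IReadOnlyList<string> Validate(
        int stars, string text, bool alreadyReviewed)
    {
        var errors = new List<string>();
        if (stars is < 1 or > 5)
            errors.Add("Stars must be between 1 and 5.");
        if (string.IsNullOrWhiteSpace(text))
            errors.Add("Review text cannot be empty.");
        if (alreadyReviewed)
            errors.Add("You have already reviewed this product.");
        return errors; // empty list = valid
    }
}
```

A sketch only: Level 0 below deliberately skips validation, and later levels give these rules a proper home.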
Stage 3: Moderation
What you SEE: After submitting, there's sometimes a delay before the review appears. Some reviews are rejected entirely. You might get a message: "Your review could not be posted."
What happens behind the scenes: The review enters a moderation pipeline: a series of checks it must pass before being published, where a failed check means the review doesn't go live. Spam detection checks for repetitive text, suspicious patterns, and bulk submissions. Profanity filtering scans for inappropriate language. Fake review detection looks for patterns like a sudden burst of 5-star reviews from new accounts. Each check is independent — a review might pass spam but fail profanity — and each can approve, reject, or flag the review for manual review.
Design insight: The moderation system is a pipeline — a series of independent checks that run in sequence. Adding a new check (say, checking for competitor mentions) should require writing ONE new class, not modifying existing checks. When you see "multiple independent processors that all need to inspect the same thing," there's a pattern for that. You'll discover it in Level 3.
Stage 4: Interact with Reviews
What you SEE: A "Helpful" button with a count ("42 people found this helpful"). A "Report abuse" link. On some platforms, the seller or business owner can respond to reviews publicly.
What happens behind the scenes: Helpful votes are tracked per user (you can only vote once). Report abuse triggers a moderation workflow. Seller responses are linked to the original review. Notifications go out — the reviewer gets notified when someone finds their review helpful, and the seller gets notified when a new review arrives.
Design insight: When a review is submitted, multiple parts of the system need to know: the seller gets an email, the analytics service logs it, the rating aggregate needs recalculation, the notification system fires. The review service shouldn't know about ALL these listeners — it should just say "hey, a review happened" and let interested parties react. That's the Observer pattern: one object (the subject) broadcasts events, and any number of listeners subscribe to hear about them; the subject doesn't know or care who's listening, and adding a new listener requires zero changes to the subject. You'll discover it in Level 4.
Stage 5: Aggregate & Rank
What you SEE: The overall rating updates when new reviews arrive. The star distribution bar shifts. "Most helpful" reviews float to the top. Products with more reviews seem to rank differently than products with fewer reviews (even if the average is the same).
What happens behind the scenes: The rating algorithm — the formula used to calculate a product's overall score — recalculates. But it's NOT just sum/count. A product with one 5-star review shouldn't outrank a product with 500 reviews averaging 4.8. Amazon, IMDb, and Reddit all use sophisticated algorithms that account for review count, recency, and global averages. This is where the math gets interesting — and where a naive implementation fails spectacularly.
Design insight: Different contexts need different rating algorithms. A new product with few reviews needs Bayesian averaging. A trending sort needs time-weighted scores. A "most helpful" sort needs a completely different formula. Multiple algorithms, same operation, different behaviors — that's the Strategy pattern: when you have multiple ways to do the same thing and need to switch between them, Strategy encapsulates each algorithm in its own class behind a common interface, and the caller doesn't know which algorithm it's using — it just calls Calculate(). You'll discover it in Level 2.
What We Discovered
Hidden Complexity You Didn't See
| Discovery | Real World | Code | Type |
| --- | --- | --- | --- |
| Review | Written feedback with stars | Review | record (immutable fact) |
| Review Types | Product / Seller / Delivery | IReview | interface (polymorphism) |
| Star Rating | 1-5 star selection | int Stars | validated field (1-5) |
| Rating Algorithm | How overall score is computed | IRatingStrategy | interface (Strategy pattern) |
| Moderation | Spam / profanity checks | IModerationRule | interface (pipeline) |
| Notifications | Alert seller on new review | IReviewListener | interface (Observer) |
| Helpful Vote | "Was this helpful?" button | HelpfulVote | record (one per user per review) |
| Verified Purchase | Badge on reviewer | bool IsVerified | computed flag |
The real world is your first diagram. Senior engineers start here — not with code, not with patterns, but with observation. Everything we need to build is already visible on an Amazon product page.
Skill Unlocked: Real-World Walkthrough
Walk through the physical system before coding. List every noun (= entity) and verb (= method). This gives you a requirements checklist AND a starter class diagram — for free. Works for review systems, parking lots, elevators, chat apps, anything.
Section 3 🟢 EASY
Level 0 — Submit a Review
Constraint: "A user submits a review with text and a star rating (1-5). The system stores it and shows an average rating."
This is where it all begins. The simplest possible version — no review types, no moderation, no notifications. Just: accept a review, store it, and calculate an average. We'll feel the pain of missing features soon enough.
Every complex system starts with a laughably simple version. For a review system, that means: a record (in C#, a concise immutable data type, perfect for facts that never change after creation — a review submitted at 3pm with 4 stars will always be that, and records auto-generate Equals(), GetHashCode(), and ToString()) to hold the review data, a store to keep them in, and a method to calculate the average. That's it. No validation, no types, no nothing. The goal of Level 0 is to get something working — then let the next constraint break it.
Think First #2
What's the simplest data structure for a review? What fields does it need at minimum? How would you calculate the average rating for a product? Take 60 seconds.
60 seconds — try it before peeking.
Reveal Answer
A review needs at minimum: who wrote it (UserId), what it's about (ProductId), how many stars (1-5), what they said (Text), and when (CreatedAt). That's 5 fields. A readonly record struct is perfect — lightweight, immutable, value semantics. For the store: a List<Review> and LINQ's .Average() to compute the mean rating. ~15 lines total.
Your Internal Monologue
"OK, simplest thing possible... A review is just data — user, product, stars, text, timestamp. Nothing changes after it's created. That screams record."
"For storage... just a List<Review> for now. No database, no persistence — just an in-memory list. I know this won't scale, but Level 0 isn't about scale. It's about getting the shape right."
"Average rating? Filter reviews by product ID, then call .Average(r => r.Stars). LINQ does the math. One line. ...Wait, what if there are zero reviews for a product? .Average() on an empty sequence throws an InvalidOperationException, because the mathematical average of nothing is undefined — you can guard with .Any() first, use .DefaultIfEmpty(0).Average(), or return 0.0 as a fallback. I should handle that. But no — Level 0. Keep it minimal. I'll feel that pain later."
What Would You Do?
NaiveApproach.cs
// Just store raw data — no structure
var reviews = new List<(string userId, string productId, int stars, string text)>();
reviews.Add(("user1", "prod1", 5, "Great product!"));
reviews.Add(("user2", "prod1", 3, "It's okay"));
double avg = reviews
.Where(r => r.productId == "prod1")
.Average(r => r.stars);
// avg = 4.0
The catch: It works for a demo, but tuples have no validation, no methods, and no identity — you can pass (userId: "", productId: "", stars: 99, text: "") and nobody stops you. You can't add a CreatedAt timestamp without changing every call site. As soon as you need to enforce rules (stars must be 1-5), add behavior (compute age), or pass reviews between methods, you need a real type. And when Level 1 adds review types, you'd need a completely different tuple shape for each type. It falls apart fast.
When IS this approach better? Quick scripts, throwaway prototypes, or one-off data analysis where the data shape will never evolve. If you're 100% sure the structure won't change, raw tuples save keystrokes.
StructuredApproach.cs
public readonly record struct Review(
string UserId,
string ProductId,
int Stars,
string Text,
DateTimeOffset CreatedAt);
public class ReviewStore
{
private readonly List<Review> _reviews = new();
public void Add(Review review) => _reviews.Add(review);
public double GetAverageRating(string productId)
=> _reviews
.Where(r => r.ProductId == productId)
.Average(r => r.Stars);
}
Why this wins: The Review record gives us a named type we can pass around, extend, and validate. The ReviewStore gives us a single place where storage logic lives. When Level 1 adds review types, we can evolve Review into an interface without rewriting everything. The structure mirrors how we think about reviews: a thing (Review) and a place to keep them (Store).
Decision Compass: Will the data shape evolve? → Named record. Throwaway script? → Tuple is fine.
Here's the complete Level 0 code. Read every line — there are only about 20 of them.
ReviewSystem.cs — Level 0
public readonly record struct Review(
string UserId,
string ProductId,
int Stars, // 1-5
string Text,
DateTimeOffset CreatedAt);
public class ReviewStore
{
private readonly List<Review> _reviews = new();
public void Add(Review review)
=> _reviews.Add(review);
public IReadOnlyList<Review> GetReviews(string productId)
=> _reviews.Where(r => r.ProductId == productId).ToList();
public double GetAverageRating(string productId)
=> _reviews
.Where(r => r.ProductId == productId)
.Average(r => r.Stars);
}
Let's walk through what each piece does:
Review record — a readonly record struct (three C# features combined: readonly means the struct's state can't be reassigned after construction, record gives auto-generated value equality and ToString(), and struct makes it a value type with no per-instance heap allocation) with five fields. Once created, it never changes. The DateTimeOffset captures both the time and its UTC offset, which matters when reviews come from around the world.
ReviewStore — a simple class that wraps a List<Review>. Right now it's in-memory, but by isolating storage behind a class (instead of passing raw lists around), we can swap in a database later without touching calling code.
Add() — appends a review to the list. No validation at this level — stars could be 0 or 99. We'll fix that.
GetReviews() — returns all reviews for a product as a read-only list. IReadOnlyList&lt;T&gt; prevents callers from accidentally modifying the internal collection: they can read and iterate, but can't Add() or Remove(). This is defensive programming — exposing data without giving away control.
GetAverageRating() — filters reviews by product ID, then uses LINQ's .Average() to compute the mean star rating. One line of actual logic.
20 lines. It works for a toy system. But can you spot what's missing? There's no validation (stars could be 99), no distinction between product/seller/delivery reviews, no moderation, no notifications, and .Average() on zero reviews will throw an exception. We'll feel each of these pains in the coming levels.
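When the empty-average pain does arrive, the guard is tiny. Here's a sketch of one possible fix for the ReviewStore above (returning 0.0 for unreviewed products is an assumption; a real system might prefer a nullable return or a "no rating yet" marker):

```csharp
// Sketch: an empty-safe variant of GetAverageRating.
// DefaultIfEmpty(0) substitutes a single 0 when no reviews match,
// so .Average() never throws InvalidOperationException.
public double GetAverageRatingSafe(string productId)
    => _reviews
        .Where(r => r.ProductId == productId)
        .Select(r => r.Stars)
        .DefaultIfEmpty(0)
        .Average();
```

Note the trade-off: 0.0 now means "no reviews yet", which is distinguishable from real averages only because valid stars are 1-5.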
Review Submission Flow
Growing Diagram — After Level 0
Class Diagram — Level 0
What Will Break This?
Before This Level
You see "review system" and think "text box, 5 stars, store in a database, done."
After This Level
You know to start with the stupidest possible version — a record and a list — then let each new constraint reveal what's missing.
Transfer: This "start with the dumbest thing that works" approach is universal. In a Chat App, Level 0 would be: one message, one sender, one receiver, stored in a list. In a Parking Lot: one lot, park a car, return a fee. Build the skeleton first, then let constraints shape the design.
Section 4 🟢 EASY
Level 1 — Review Types
New Constraint: "Reviews can be for products, sellers, or delivery experience. Each type has different fields: product reviews have photos, seller reviews have a 'would buy again' flag, delivery reviews have an 'on-time' rating."
What Breaks?
Our Level 0 Review record has Stars and Text — that's it. Product reviews need a PhotoUrls list. Seller reviews need a WouldBuyAgain boolean. Delivery reviews need an OnTimeRating.
The instinct? Add all three as nullable fields to the Review record: List&lt;string&gt;? PhotoUrls, bool? WouldBuyAgain, int? OnTimeRating. (A nullable field is one that CAN be null. The problem: if Review has 15 fields and 12 are null for any given review, you're encoding type information in which fields are null — fragile, confusing, and impossible to validate properly.) Now a single review has 8 fields, and for any given review, 3 of them are always null. Add two more review types? That's 12+ fields, 10 of which are null. The record becomes a junk drawer.
This is a classic problem: you have different kinds of a thing, and each kind carries different data. A product review is NOT the same shape as a delivery review. Cramming them into one type means the type doesn't accurately describe ANY of them. Let's explore three approaches and see which one survives.
Think First #3
You have three kinds of reviews with shared fields (userId, productId, stars, text, date) and unique fields (photos, wouldBuyAgain, onTimeRating). Design a type hierarchy that avoids nullable junk drawers. What's the common shape? What varies?
60 seconds — think about what's SHARED vs what's UNIQUE.
Reveal Answer
Extract the shared shape into an interface — IReview with UserId, TargetId, Stars, Text, CreatedAt. Then create three records that implement it: ProductReview adds PhotoUrls, SellerReview adds WouldBuyAgain, DeliveryReview adds OnTimeRating. The ReviewStore works with IReview — it doesn't know or care which specific type it's holding. Adding a fourth review type means adding one new record, zero changes to existing code.
Your Internal Monologue
"Three review types with shared and unique fields... I could add all the unique fields as nullable properties to the existing Review record. That's quick, but then I have WouldBuyAgain on a product review — what does that even mean? It's null, sure, but the type doesn't tell you it shouldn't exist."
"What about inheritance? A base Review class, then ProductReview : Review, SellerReview : Review... That works. But mixing records and plain classes is awkward — a record can inherit from another record (supported since records arrived in C# 9), but not from a non-record class, and deep inheritance hierarchies get brittle fast."
"Actually, what I really want is: define the SHAPE that all reviews share, then let each type add its own fields. That's an interface — a contract listing the properties and methods any implementing type must provide. IReview says "every review must have UserId, Stars, Text, etc." but doesn't say HOW to store them; each review type is its own record that implements IReview in its own way. The store works with IReview — it doesn't care about the specific type. Adding a ServiceReview later? Just implement IReview. Zero changes to existing code."
"Wait — that's the Open/Closed Principle: software entities should be open for extension (add new types) but closed for modification (don't change existing code). Here, adding a new review type extends the system without touching ReviewStore or existing review records. I didn't try to apply it. I just asked 'What varies?' and the answer led me here naturally."
Three Review Types, Different Fields
What Would You Do?
GiantRecord.cs
// Approach A: cram everything into one record
public record Review(
string UserId,
string TargetId, // product, seller, or delivery
string ReviewType, // "product", "seller", "delivery"
int Stars,
string Text,
DateTimeOffset CreatedAt,
// Product-specific
string? Title,
List<string>? PhotoUrls,
bool? IsVerifiedPurchase,
// Seller-specific
bool? WouldBuyAgain,
int? CommunicationRating,
// Delivery-specific
int? OnTimeRating,
int? PackageCondition);
// Creating a product review:
var r = new Review("u1", "p1", "product", 5, "Great!",
DateTimeOffset.Now,
"Love it", new() { "photo.jpg" }, true,
null, null, // seller fields: null
null, null); // delivery fields: null
// 🤢 four null arguments just to say "not applicable"
The catch: Every review carries fields it doesn't use. A delivery review has PhotoUrls = null, WouldBuyAgain = null, Title = null. The constructor is a minefield of null, null, null. And the ReviewType string? One typo ("prodcut") and nothing catches it at compile time. Add a fifth review type? Five more nullable fields, more nulls in every constructor call.
When IS this approach better? In a quick prototype, or in a database-backed system with a single table where you'd use discriminated columns (a database pattern where one table stores multiple entity types, using a "type" column to distinguish them and leaving unused columns NULL — simple for queries, messy for application logic, and often a compromise for storage simplicity). But even then, your application code should map to typed records.
DeepInheritance.cs
// Approach B: class inheritance hierarchy
public abstract class Review
{
public string UserId { get; init; }
public string TargetId { get; init; }
public int Stars { get; init; }
public string Text { get; init; }
public DateTimeOffset CreatedAt { get; init; }
}
public class ProductReview : Review
{
public string Title { get; init; }
public List<string> PhotoUrls { get; init; } = new();
public bool IsVerifiedPurchase { get; init; }
}
public class SellerReview : Review
{
public bool WouldBuyAgain { get; init; }
public int CommunicationRating { get; init; }
}
public class DeliveryReview : Review
{
public int OnTimeRating { get; init; }
public int PackageCondition { get; init; }
}
The catch: This WORKS, but classes are mutable by default and don't get free equality semantics. You lose the benefits of records (immutability, value equality, nice ToString()). And if you later need a review that's both a product review AND a seller review (a marketplace product sold by a third party)? C# doesn't support multiple inheritance — inheriting from more than one base class creates the "diamond problem" of ambiguity when two parents define the same method, so C# deliberately forbids it; interfaces are its answer, since a class can implement many. Deep hierarchies also make serialization painful.
When IS this approach better? When review types share significant BEHAVIOR (not just data) and you need template methods in the base — a base class defining the skeleton of an algorithm with some steps implemented and others left abstract for subclasses. If each review type had a complex multi-step Validate() process with shared steps, inheritance would make sense. For pure data carriers, records with an interface are cleaner.
InterfaceRecords.cs
// Approach C: shared interface, specific records
public interface IReview
{
string UserId { get; }
string TargetId { get; }
int Stars { get; } // 1-5
string Text { get; }
DateTimeOffset CreatedAt { get; }
}
public sealed record ProductReview(
string UserId, string TargetId, int Stars,
string Text, DateTimeOffset CreatedAt,
string Title,
List<string> PhotoUrls,
bool IsVerifiedPurchase) : IReview;
public sealed record SellerReview(
string UserId, string TargetId, int Stars,
string Text, DateTimeOffset CreatedAt,
bool WouldBuyAgain,
int CommunicationRating) : IReview;
public sealed record DeliveryReview(
string UserId, string TargetId, int Stars,
string Text, DateTimeOffset CreatedAt,
int OnTimeRating,
int PackageCondition) : IReview;
Why this wins: Each record has EXACTLY the fields it needs — no nullables, no junk. Records give us immutability, value equality, and clean ToString() for free. The IReview interface lets the ReviewStore handle all types through one contract. Adding a ServiceReview next month? Implement IReview, done. Zero changes to existing code. And with C# pattern matching — checking an object's type and extracting data in one step, e.g. review switch { ProductReview pr => pr.PhotoUrls, SellerReview sr => sr.WouldBuyAgain, ... } — you can handle type-specific logic cleanly, exhaustively, and compiler-checked when needed.
Decision Compass: Different kinds sharing a shape but with unique fields? → Interface + records. Shared complex behavior? → Consider abstract base class. Prototyping? → Giant record is fine temporarily.
Now the ReviewStore needs to work with IReview instead of Review. The change is minimal:
IReview.cs
/// The common shape all reviews share.
/// The store works with this — it doesn't know or care
/// whether it's a product, seller, or delivery review.
public interface IReview
{
string UserId { get; }
string TargetId { get; } // productId, sellerId, or deliveryId
int Stars { get; } // 1-5
string Text { get; }
DateTimeOffset CreatedAt { get; }
}
ReviewTypes.cs
public sealed record ProductReview(
string UserId, string TargetId, int Stars,
string Text, DateTimeOffset CreatedAt,
string Title, // "Best headphones ever!"
List<string> PhotoUrls, // customer photos
bool IsVerifiedPurchase // bought through the platform?
) : IReview;
public sealed record SellerReview(
string UserId, string TargetId, int Stars,
string Text, DateTimeOffset CreatedAt,
bool WouldBuyAgain, // "Would you buy from this seller again?"
int CommunicationRating // 1-5 for responsiveness
) : IReview;
public sealed record DeliveryReview(
string UserId, string TargetId, int Stars,
string Text, DateTimeOffset CreatedAt,
int OnTimeRating, // 1-5 for punctuality
int PackageCondition // 1-5 for package state
) : IReview;
ReviewStore.cs — Level 1
public class ReviewStore
{
private readonly List<IReview> _reviews = new(); // changed: IReview
public void Add(IReview review) // changed: IReview
=> _reviews.Add(review);
public IReadOnlyList<IReview> GetReviews(string targetId)
=> _reviews.Where(r => r.TargetId == targetId).ToList();
public double GetAverageRating(string targetId)
=> _reviews
.Where(r => r.TargetId == targetId)
.Average(r => r.Stars);
}
// What changed from Level 0?
// 1. Review → IReview (the store is type-agnostic now)
// 2. ProductId → TargetId (reviews can target products, sellers, or deliveries)
// That's it. Two renames. The logic is identical.
Program.cs — Usage
var store = new ReviewStore();
// Product review (has photos, title, verified badge)
store.Add(new ProductReview(
"user1", "prod-123", 5, "Excellent headphones!",
DateTimeOffset.Now,
Title: "Best purchase of 2024",
PhotoUrls: new() { "img1.jpg", "img2.jpg" },
IsVerifiedPurchase: true));
// Seller review (has wouldBuyAgain flag)
store.Add(new SellerReview(
"user2", "seller-456", 4, "Fast shipping",
DateTimeOffset.Now,
WouldBuyAgain: true,
CommunicationRating: 5));
// Delivery review (has on-time and condition ratings)
store.Add(new DeliveryReview(
"user3", "delivery-789", 3, "Package was dented",
DateTimeOffset.Now,
OnTimeRating: 4,
PackageCondition: 2));
// The store handles all types identically through IReview
double avg = store.GetAverageRating("prod-123"); // 5.0
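And when you DO need type-specific logic, pattern matching recovers the concrete record without casts. A small sketch (the Summarize method and its output strings are made up for illustration, not part of the system we're building):

```csharp
// Sketch: one method that reacts to the concrete review type.
// The store still only knows IReview; pattern matching extracts
// the specific record when (and only when) its extra fields matter.
static string Summarize(IReview review) => review switch
{
    ProductReview pr  => $"Product review: {pr.PhotoUrls.Count} photo(s), verified={pr.IsVerifiedPurchase}",
    SellerReview sr   => $"Seller review: would buy again={sr.WouldBuyAgain}",
    DeliveryReview dr => $"Delivery review: on-time={dr.OnTimeRating}/5",
    _                 => $"Review: {review.Stars}/5"
};
```

The default arm keeps the switch total even if a fourth review type appears before this method learns about it.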
Nullable Junk Drawer vs Typed Records
How the Store Handles All Types
Growing Diagram — After Level 1
Before This Level
You see "three types of reviews" and think "add nullable fields for each type."
After This Level
You smell "variants with unique fields" and instinctively reach for an interface + specific records. Zero nullables, zero junk.
Smell → Pattern: "Different kinds of a thing, each with unique data" → Interface + typed records. This isn't a GoF pattern — it's a fundamental modeling instinct. You'll use it in nearly every system: different payment methods, different vehicle types, different notification channels.
Transfer: In a Payment System, you'd have IPayment with CreditCardPayment, PayPalPayment, CryptoPayment — each with unique fields (card number vs email vs wallet address). Same technique, different domain.
Section 5 🟡 MEDIUM
Level 2 — Rating Algorithms
New Constraint: "The system needs multiple rating algorithms: Simple Average (sum/count), Weighted Average (recent reviews count more), and Bayesian Average (pulls toward global mean to prevent 5-star ratings from 1 review). Adding 'Wilson Score' should require ZERO changes to existing code."
What Breaks?
Our Level 1 GetAverageRating() is hardcoded to .Average(r => r.Stars) — that's a simple arithmetic mean. It treats every review equally, regardless of when it was written or how many reviews exist.
Here's the absurd result: a product with one 5-star review shows 5.0. A product with 500 reviews averaging 4.8 shows 4.8. If you sort by rating, the single-review product ranks HIGHER. That's a brand-new product with one suspicious review outranking a beloved product with hundreds of genuine reviews. Every real e-commerce platform has this problem, and simple average is the wrong answer.
This level introduces one of the most practical design problems you'll encounter: multiple algorithms for the same operation. The operation is "calculate a product's rating." But HOW you calculate it depends on context — do you want simplicity, recency weighting, or statistical robustness? The business might switch algorithms seasonally, A/B test them, or use different ones for different product categories. Hardcoding one formula is a dead end.
The "One Review = Five Stars" Problem
Imagine you're shopping for headphones on Amazon. Two products catch your eye:
| Product | Reviews | Average | Your Trust? |
|---|---|---|---|
| HeadphoneX | 1 | 5.0 ★ | Suspicious — could be fake |
| HeadphoneY | 500 | 4.8 ★ | Trustworthy — 500 people agree |
With simple average, HeadphoneX ranks higher. Your gut says that's wrong — and your gut is right. The problem is that simple average doesn't account for confidence. One review could be the seller's mom. Five hundred reviews are a statistically significant sample. This is why Amazon, IMDb, Reddit, and every major platform use something more sophisticated.
Think First #4
Multiple algorithms, same operation (calculate rating). Adding a new rating formula should require zero changes to existing code. What pattern lets you swap algorithms at runtime without modifying the caller?
60 seconds — you've seen this shape before.
Reveal Answer
That's the Strategy patternDefines a family of algorithms, puts each one in its own class, and makes them interchangeable. The caller uses an interface (IRatingStrategy) and doesn't know which concrete algorithm is running. Swapping from SimpleAverage to BayesianAverage is a one-line DI change.. Define an IRatingStrategy interface with a Calculate() method. Each algorithm (Simple, Weighted, Bayesian) implements it. The ReviewStore accepts an IRatingStrategy and delegates rating calculation to it. Adding Wilson Score? Create a new class, implement the interface, done. Zero changes to existing code.
Your Internal Monologue
"Multiple ways to calculate a rating... I could use a switch: if (algorithm == "simple") ... else if (algorithm == "weighted") ... else if (algorithm == "bayesian") ... That's three branches. Wilson Score? Four branches. Every new algorithm means modifying this method."
"Wait — that's the SAME smell from the parking lot pricing: multiple algorithms, same operation, should be independently swappable. The question is: what varies? The FORMULA varies. The input (list of reviews) and output (a number) stay the same."
"So I need: one interface, one method, different implementations. IRatingStrategy with Calculate(reviews) → double. Each algorithm is a class. The store gets injected with whichever strategy the business wants. Adding a new algorithm = adding a new class. Modifying existing code = never."
"That's StrategyThe Strategy pattern encapsulates each algorithm in its own class behind a shared interface. The context (ReviewStore) delegates to the strategy without knowing which concrete algorithm is running. This is one of the most practical patterns — you'll use it in nearly every system.. I didn't decide to use it — I just asked 'what varies?' and arrived at it naturally. That's the thinking skill, not the pattern name."
The Strategy Fan-Out
What Would You Do?
SwitchApproach.cs
public double GetRating(string targetId, string algorithm)
{
var reviews = _reviews.Where(r => r.TargetId == targetId).ToList();
return algorithm switch
{
"simple" => reviews.Average(r => r.Stars),
"weighted" => CalculateWeighted(reviews),
"bayesian" => CalculateBayesian(reviews),
// Wilson Score? Add another case here.
// Time-decay? Another case.
// Every new algorithm modifies THIS method.
_ => throw new ArgumentException($"Unknown: {algorithm}")
};
}
The catch: Every new algorithm requires modifying GetRating(). The method grows forever. The string algorithm parameter is type-unsafe — a typo like "baysian" compiles fine but throws at runtime. And all the algorithm code ends up in one class, mixing storage concerns with math. This violates OCPThe Open/Closed Principle: software should be open for extension (add new algorithms) but closed for modification (don't touch existing code). A switch statement that grows with every new algorithm is the textbook OCP violation..
IfElseApproach.cs
public enum RatingAlgorithm { Simple, Weighted, Bayesian }
public double GetRating(string targetId, RatingAlgorithm algo)
{
var reviews = _reviews.Where(r => r.TargetId == targetId).ToList();
if (algo == RatingAlgorithm.Simple)
return reviews.Average(r => r.Stars);
else if (algo == RatingAlgorithm.Weighted)
return CalculateWeighted(reviews);
else if (algo == RatingAlgorithm.Bayesian)
return CalculateBayesian(reviews);
else
throw new ArgumentOutOfRangeException(nameof(algo));
}
// Better than strings — enum is type-safe.
// But STILL modifies this method for every new algorithm.
The catch: Better than strings (enum is type-safe), but the core problem remains: adding Wilson Score means adding an enum value AND an else-if branch AND a new method, all in the same class. The ReviewStore class keeps growing. The math and the storage live together. You can't test algorithms independently.
StrategyApproach.cs
public interface IRatingStrategy
{
double Calculate(IReadOnlyList<IReview> reviews);
string Name { get; }
}
// Each algorithm is its OWN class — single responsibility
public sealed class SimpleAverage : IRatingStrategy
{
public string Name => "Simple Average";
public double Calculate(IReadOnlyList<IReview> reviews)
=> reviews.Count == 0 ? 0 : reviews.Average(r => r.Stars);
}
// ReviewStore delegates to whichever strategy it's given:
public double GetRating(string targetId, IRatingStrategy strategy)
{
var reviews = GetReviews(targetId);
return strategy.Calculate(reviews);
}
// Adding Wilson Score?
// 1. Create WilsonScore : IRatingStrategy
// 2. Done. Zero changes to ReviewStore.
Why this wins: Each algorithm lives in its own class. The ReviewStore doesn't know which algorithm it's running — it just calls strategy.Calculate(). Adding a new algorithm means creating ONE new file. Zero changes to existing code. You can test each algorithm in isolation with mock review data. The math and the storage are completely separated.
Decision Compass: Multiple algorithms, same input/output, swappable at runtime? → Strategy. One fixed algorithm that won't change? → Inline method is fine.
Here are all three algorithms. The interesting one is Bayesian — it solves the "one review = 5 stars" problem with real math.
IRatingStrategy.cs
/// One operation, many algorithms.
/// Every rating formula implements this interface.
public interface IRatingStrategy
{
/// Calculate the aggregate rating for a set of reviews.
/// Returns 0.0 when there are no reviews.
double Calculate(IReadOnlyList<IReview> reviews);
/// Human-readable name for display/logging.
string Name { get; }
}
SimpleAverage.cs
/// Sum of all stars / number of reviews.
/// Fast and simple, but treats every review equally —
/// a 3-year-old review counts as much as yesterday's.
public sealed class SimpleAverage : IRatingStrategy
{
public string Name => "Simple Average";
public double Calculate(IReadOnlyList<IReview> reviews)
{
if (reviews.Count == 0) return 0.0;
return reviews.Average(r => (double)r.Stars);
}
}
// When to use: prototypes, internal tools, or when
// all reviews are equally important regardless of age.
WeightedAverage.cs
/// Recent reviews count more than older ones.
/// A review from yesterday gets more weight than one from 2 years ago.
/// Uses exponential time decay: weight = e^(-lambda * ageDays).
public sealed class WeightedAverage : IRatingStrategy
{
private readonly double _lambda; // decay rate (higher = faster decay)
public WeightedAverage(double lambda = 0.005)
=> _lambda = lambda;
public string Name => "Weighted Average";
public double Calculate(IReadOnlyList<IReview> reviews)
{
if (reviews.Count == 0) return 0.0;
var now = DateTimeOffset.UtcNow;
double weightedSum = 0, totalWeight = 0;
foreach (var r in reviews)
{
double ageDays = (now - r.CreatedAt).TotalDays;
double weight = Math.Exp(-_lambda * ageDays);
// Recent review (0 days): weight ≈ 1.0
// 1-year-old review: weight ≈ 0.16
// 2-year-old review: weight ≈ 0.03
weightedSum += r.Stars * weight;
totalWeight += weight;
}
return weightedSum / totalWeight;
}
}
// When to use: products where recent quality matters
// (restaurants, hotels, apps that update frequently).
BayesianAverage.cs
/// The "confidence-aware" algorithm.
/// Pulls products with few reviews toward the global average,
/// so a product with 1 five-star review doesn't outrank
/// a product with 500 reviews averaging 4.8.
///
/// Formula: (C × M + sum_of_stars) / (C + review_count)
/// C = confidence threshold (how many reviews before we trust the data)
/// M = global mean rating across ALL products
///
/// IMDb calls this "True Bayesian Estimate" — it's how they rank the Top 250.
public sealed class BayesianAverage : IRatingStrategy
{
private readonly double _confidenceThreshold; // C: typically 10-25
private readonly double _globalMean; // M: typically 3.0-3.5
public BayesianAverage(double confidenceThreshold = 10, double globalMean = 3.5)
{
_confidenceThreshold = confidenceThreshold;
_globalMean = globalMean;
}
public string Name => "Bayesian Average";
public double Calculate(IReadOnlyList<IReview> reviews)
{
if (reviews.Count == 0) return _globalMean; // no data → assume average
double sum = reviews.Sum(r => (double)r.Stars);
int count = reviews.Count;
// The magic formula:
// Blends the product's actual average with the global average,
// weighted by how much data we have.
return (_confidenceThreshold * _globalMean + sum)
/ (_confidenceThreshold + count);
// Example with C=10, M=3.5:
// 1 review at 5 stars: (10×3.5 + 5) / (10+1) = 40/11 ≈ 3.64
// 500 reviews at 4.8: (10×3.5 + 2400) / (10+500) = 2435/510 ≈ 4.77
// HeadphoneY (4.77) now outranks HeadphoneX (3.64). Justice!
}
}
// When to use: any ranking/sorting where you need fairness
// between products with different numbers of reviews.
// Used by: IMDb (Top 250), Amazon, Reddit (Wilson Score variant).
ReviewStore.cs — Level 2
public class ReviewStore
{
private readonly List<IReview> _reviews = new();
public void Add(IReview review)
=> _reviews.Add(review);
public IReadOnlyList<IReview> GetReviews(string targetId)
=> _reviews.Where(r => r.TargetId == targetId).ToList();
// CHANGED: accepts a strategy instead of hardcoding an algorithm
public double GetRating(string targetId, IRatingStrategy strategy)
{
var reviews = GetReviews(targetId);
return strategy.Calculate(reviews);
}
}
// Usage:
var store = new ReviewStore();
// ... add reviews ...
var simple = new SimpleAverage();
var weighted = new WeightedAverage(lambda: 0.005);
var bayesian = new BayesianAverage(confidenceThreshold: 10, globalMean: 3.5);
double r1 = store.GetRating("prod-123", simple); // 4.0
double r2 = store.GetRating("prod-123", weighted); // 4.3 (recent reviews matter more)
double r3 = store.GetRating("prod-123", bayesian); // 3.9 (pulled toward global mean)
Why Bayesian Average Exists — The Math Made Simple
Most developers have never seen this formula. But it's running behind every product ranking you've ever used. Here's the intuition:
Imagine a global average for ALL products on the platform is 3.5 stars. Before a product has ANY reviews, we assume it's average — 3.5. As real reviews come in, the product's rating gradually moves away from 3.5 toward its actual average. The more reviews it gets, the less the global average pulls on it.
The confidence threshold (C) controls how many reviews it takes before we mostly trust the actual data. With C=10, you need about 10 reviews before the product's score starts reflecting reality instead of the global average. It's like a skeptical friend who says: "One review? That proves nothing. Show me 10 and I'll start believing you."
| Product | Reviews | Actual Avg | Simple | Bayesian (C=10, M=3.5) |
|---|---|---|---|---|
| HeadphoneX | 1 | 5.0 | 5.0 ★ | 3.64 ★ |
| HeadphoneY | 500 | 4.8 | 4.8 ★ | 4.77 ★ |
| HeadphoneZ | 10 | 4.5 | 4.5 ★ | 4.0 ★ |
| HeadphoneW | 0 | — | error / 0 | 3.5 ★ (assumes average) |
Notice how Bayesian handles the edge cases gracefully: zero reviews returns the global mean (no crash), one review barely moves the needle, and hundreds of reviews let the true average shine through. This is real-world knowledge most developers don't have — and it shows up in interviews more than you'd think.
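You can watch the pull weaken as data accumulates. Here's a self-contained sketch — the one-property IReview and the FiveStar stand-in below are deliberately simplified from the chapter's fuller interface — that feeds the Bayesian formula ever more five-star reviews:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal stand-ins so this demo runs on its own.
// The chapter's real IReview also carries TargetId, UserId, Text, CreatedAt.
public interface IReview { int Stars { get; } }
public sealed record FiveStar : IReview { public int Stars => 5; }

// Condensed copy of the chapter's BayesianAverage for the demo.
public sealed class BayesianAverage
{
    private readonly double _c, _m;
    public BayesianAverage(double confidenceThreshold = 10, double globalMean = 3.5)
        => (_c, _m) = (confidenceThreshold, globalMean);

    public double Calculate(IReadOnlyList<IReview> reviews)
        => reviews.Count == 0
            ? _m // no data → assume global average
            : (_c * _m + reviews.Sum(r => (double)r.Stars)) / (_c + reviews.Count);
}

public static class Demo
{
    public static void Main()
    {
        var bayesian = new BayesianAverage();
        foreach (int n in new[] { 0, 1, 10, 100, 1000 })
        {
            var reviews = Enumerable.Repeat<IReview>(new FiveStar(), n).ToList();
            Console.WriteLine($"{n,4} five-star reviews -> {bayesian.Calculate(reviews):F2}");
        }
        // 0 -> 3.50, 1 -> 3.64, 10 -> 4.25, 100 -> 4.86, 1000 -> 4.99
    }
}
```

Even 100 genuine five-star reviews only reach 4.86 — the prior never disappears, it just becomes negligible as real data piles up.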
Growing Diagram — After Level 2
Before This Level
You see "multiple rating algorithms" and think "switch statement with three branches." You'd use simple average for everything and wonder why rankings look wrong.
After This Level
You smell "multiple algorithms, same interface" and instinctively reach for Strategy. You know WHY Bayesian average exists and can explain the formula in an interview.
Smell → Pattern: "Multiple algorithms, same input/output, swappable at runtime" → Strategy pattern. You saw this same smell in the Parking Lot (pricing strategies) and you'll see it again in Vending Machine (payment methods), Elevator (scheduling algorithms), and dozens of other systems. It's one of the most used patterns in real codebases.
Transfer: In a Search Engine, ranking algorithms vary (relevance, recency, popularity, personalized). In a Ride-Sharing app, pricing strategies vary (flat rate, surge, distance-based, subscription). Same technique: IRankingStrategy, IPricingStrategy — one interface, many implementations, zero changes when adding a new one.
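As a quick illustration of that ride-sharing transfer — the names below (IPricingStrategy, FlatRate, SurgePricing) are hypothetical, not part of this chapter's system:

```csharp
// Hypothetical ride-sharing transfer: same Strategy shape, different domain.
public interface IPricingStrategy
{
    decimal Price(double distanceKm);
}

public sealed class FlatRate : IPricingStrategy
{
    private readonly decimal _perKm;
    public FlatRate(decimal perKm = 1.50m) => _perKm = perKm;
    public decimal Price(double distanceKm) => (decimal)distanceKm * _perKm;
}

// Surge wraps ANY base strategy and multiplies it — strategies compose.
public sealed class SurgePricing : IPricingStrategy
{
    private readonly IPricingStrategy _inner;
    private readonly decimal _multiplier;
    public SurgePricing(IPricingStrategy inner, decimal multiplier)
    {
        _inner = inner;
        _multiplier = multiplier;
    }
    public decimal Price(double distanceKm) => _inner.Price(distanceKm) * _multiplier;
}
```

new SurgePricing(new FlatRate(), 2.0m).Price(10) yields 30.00 — and adding a subscription tier is one new class, zero edits to the caller.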
Section 6
Level 3 — Content Moderation 🟡 MEDIUM
New Constraint: "Every review passes moderation before it goes live: spam detection, profanity filter, purchase verification. Next month we're adding sentiment analysis — and that should require zero changes to existing code."
What breaks: Our Level 2 code has no moderation at all. Reviews go straight into the store — spam, profanity, fake reviews, everything gets published instantly. If we jam three if checks into a single method, adding sentiment analysis later means cracking open that method again. Every new check = a new if branch, a longer method, and a higher chance of breaking something that already works.
Think First #5 — pause and design before you see the answer
You need to run multiple checks on every review, one after another. If any check fails, the review is rejected. The tricky part: the number of checks isn't fixed. Today it's 3, next month it's 4, next quarter it's 6. How do you design this so adding a new check means creating a new class — and nothing else changes?
Hint: think about how a factory assembly line works. Each station does ONE thing, and adding a new station doesn't require rebuilding the existing ones.
Your inner voice:
"Three checks: spam, profanity, purchase verification. I could shove them into one method with three if blocks... Actually, no — that's the same trap as Level 2's hardcoded rating formula. Every new check means touching that method."
"What if each check is its own class? They all answer the same question: 'Is this review acceptable?' That's the same interface — an IModerationStrategyAn interface representing a single moderation check. Each implementation (spam detection, profanity filter, etc.) answers one question: "Does this review pass MY specific check?" The pipeline runs all of them in sequence.. Then I just have a list of these strategies and loop through them. Adding sentiment analysis = add one class to the list. Done."
"Wait — that's the Strategy patternThe Strategy pattern lets you swap algorithms at runtime. Here we're using a twist: instead of picking ONE strategy, we run ALL of them in sequence like an assembly line. Each strategy is a checkpoint that can approve or reject the review. again, but as a pipeline. Instead of choosing ONE algorithm (like we did for ratings), we run ALL of them in sequence. Same interface, different usage. Neat."
What Would You Do?
Three ways to moderate reviews. One is a mess, one hides everything, and one scales cleanly.
The idea: One method, all checks inline. Simple, right?
GiantIfElse.cs — everything crammed into one method
public bool Moderate(IReview review)
{
// Spam check
if (review.Text.Contains("buy now") || review.Text.Contains("click here"))
return false;
// Profanity check
var badWords = new[] { "badword1", "badword2" };
if (badWords.Any(w => review.Text.Contains(w)))
return false;
// Purchase verification
if (!_purchaseService.HasPurchased(review.UserId, review.TargetId))
return false;
// Next month: add sentiment analysis HERE
// The month after: add image scanning HERE
// ... this method grows forever
return true;
}
Verdict: Works for 3 checks. But this method is a magnet for change. Every new moderation rule means editing it. Six months from now it's 200 lines of tangled if/else blocks, and changing the profanity filter accidentally breaks purchase verification because they share the same scope. This is an OCP violationThe Open/Closed Principle says classes should be open for extension but closed for modification. This method requires modification every time a new check is added — the textbook definition of an OCP violation..
The idea: A ModerationService class that owns all the logic. Better than inline, but still one big class.
SingleService.cs — one class, many responsibilities
public class ModerationService
{
public bool CheckSpam(IReview r) { /* ... */ }
public bool CheckProfanity(IReview r) { /* ... */ }
public bool VerifyPurchase(IReview r) { /* ... */ }
public bool Moderate(IReview review)
{
return CheckSpam(review)
&& CheckProfanity(review)
&& VerifyPurchase(review);
}
}
Verdict: The methods are separated, which is better than inline if blocks. But the class still grows with every new check. Need to test spam detection in isolation? You can call CheckSpam() directly, but you can't swap it out or configure which checks run. The pipeline is hard-coded in Moderate(). When is this OK? Small teams, few checks, and no plan to add more.
The idea: Each check is its own class implementing IModerationStrategy. A pipeline runs them all in sequence. Adding a new check = creating a new class + registering it.
StrategyPipeline.cs — each check is independent and pluggable
public interface IModerationStrategy
{
ModerationResult Check(IReview review);
}
public record ModerationResult(bool Passed, string? Reason = null);
// Each check: one class, one responsibility
public class SpamDetector : IModerationStrategy { /* ... */ }
public class ProfanityFilter : IModerationStrategy { /* ... */ }
public class PurchaseVerifier : IModerationStrategy { /* ... */ }
// Pipeline: run all checks in sequence
public class ModerationPipeline
{
private readonly List<IModerationStrategy> _checks;
public ModerationPipeline(IEnumerable<IModerationStrategy> checks)
=> _checks = checks.ToList();
public ModerationResult Run(IReview review)
{
foreach (var check in _checks)
{
var result = check.Check(review);
if (!result.Passed) return result; // First failure stops
}
return new ModerationResult(true);
}
}
Verdict: This is the winner. Each check lives in its own class, testable in isolation. The pipeline doesn't know which checks it runs — it just loops through whatever it's given. Adding sentiment analysis next month? Create SentimentAnalyzer : IModerationStrategy and register it. Zero changes to existing code. The OCPOpen/Closed Principle: open for extension (add new checks), closed for modification (never touch existing check classes or the pipeline itself). is perfectly satisfied.
The Solution
Each moderation check is a standalone class. The pipeline orchestrates them without knowing any details. This is the Strategy pattern used as a pipelineA pipeline is a sequence of processing steps where each step transforms or validates data before passing it along. Think of an airport security line: baggage scan, metal detector, ID check — each station is independent, and adding a new station doesn't change the existing ones. — same interface, multiple implementations, all running in sequence.
IModerationStrategy.cs — the contract every check must follow
public interface IModerationStrategy
{
/// Returns Passed=true if the review is OK,
/// or Passed=false with a Reason explaining why not.
ModerationResult Check(IReview review);
}
public record ModerationResult(
bool Passed,
string? Reason = null
)
{
public static ModerationResult Pass() => new(true);
public static ModerationResult Reject(string reason) => new(false, reason);
}
Every moderation check returns the same thing: did it pass, and if not, why? The ModerationResult recordA C# positional record is immutable by default. Once created, you can't change its fields. Perfect for results that should be read-only — a moderation verdict shouldn't be tampered with after the check runs. is immutable — once the verdict is in, nobody can tamper with it.
SpamDetector.cs — catches spammy patterns
public class SpamDetector : IModerationStrategy
{
private static readonly string[] SpamPhrases =
{ "buy now", "click here", "limited offer", "act fast" };
public ModerationResult Check(IReview review)
{
var text = review.Text.ToLowerInvariant();
var match = SpamPhrases.FirstOrDefault(p => text.Contains(p));
return match is null
? ModerationResult.Pass()
: ModerationResult.Reject($"Spam detected: '{match}'");
}
}
ProfanityFilter.cs — rejects reviews with banned words
public class ProfanityFilter : IModerationStrategy
{
private readonly HashSet<string> _banned;
public ProfanityFilter(IEnumerable<string> bannedWords)
=> _banned = new HashSet<string>(
bannedWords, StringComparer.OrdinalIgnoreCase);
public ModerationResult Check(IReview review)
{
var words = review.Text.Split(' ', StringSplitOptions.RemoveEmptyEntries);
var found = words.FirstOrDefault(w => _banned.Contains(w));
return found is null
? ModerationResult.Pass()
: ModerationResult.Reject("Review contains prohibited language.");
}
}
PurchaseVerifier.cs — only buyers can review
public class PurchaseVerifier : IModerationStrategy
{
private readonly IPurchaseService _purchases;
public PurchaseVerifier(IPurchaseService purchases)
=> _purchases = purchases;
public ModerationResult Check(IReview review)
{
var hasBought = _purchases.HasPurchased(
review.UserId, review.TargetId);
return hasBought
? ModerationResult.Pass()
: ModerationResult.Reject("Only verified buyers can leave reviews.");
}
}
ModerationPipeline.cs — runs every check, stops on first failure
public class ModerationPipeline
{
private readonly List<IModerationStrategy> _checks;
public ModerationPipeline(IEnumerable<IModerationStrategy> checks)
=> _checks = checks.ToList();
public ModerationResult Run(IReview review)
{
foreach (var check in _checks)
{
var result = check.Check(review);
if (!result.Passed)
return result; // First failure = immediate rejection
}
return ModerationResult.Pass();
}
// Adding a new check at runtime? Easy:
public void Add(IModerationStrategy check)
=> _checks.Add(check);
}
The pipeline doesn't know about spam, profanity, or purchases. It knows one thing: loop through checks, stop on failure. Adding sentiment analysis next month means creating SentimentAnalyzer : IModerationStrategy and registering it in the DI containerDependency Injection container — a tool that automatically creates objects and wires up their dependencies. Instead of manually writing "new SpamDetector()" everywhere, you register it once and the container hands it to whoever needs it.. The pipeline, the spam detector, the profanity filter — none of them change.
Diagrams
The Moderation Pipeline
Think of airport security. Each station checks one thing. If you fail any station, you don't board the plane. Adding a new station doesn't require rebuilding the existing ones.
if/else vs. Pipeline — at a glance
Adding Sentiment Analysis — zero changes
When the product team asks for sentiment analysis next month, here's everything that changes:
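Sketched against the chapter's IModerationStrategy, it's a single new file. (The keyword list is a toy stand-in — a real implementation would call a sentiment model or an external API.)

```csharp
// The ONLY new file. Pipeline, existing checks, ReviewService: all untouched.
public class SentimentAnalyzer : IModerationStrategy
{
    // Toy heuristic for this sketch — swap in a real model in production.
    private static readonly string[] HostilePhrases =
        { "scam", "fraud", "total garbage" };

    public ModerationResult Check(IReview review)
    {
        var text = review.Text.ToLowerInvariant();
        var hit = HostilePhrases.FirstOrDefault(p => text.Contains(p));
        return hit is null
            ? ModerationResult.Pass()
            : ModerationResult.Reject($"Hostile sentiment detected: '{hit}'");
    }
}
// The one other line that changes — registration:
// pipeline.Add(new SentimentAnalyzer());
```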
Growing Diagram — Level 3
Before this level
You see "multiple validation checks" and think "a big if/else chain — I'll add each check inline."
After this level
You smell "growing list of checks" and instinctively build a pipeline of Strategy objects. Adding a check = adding a class. Nothing else changes.
Smell → Pattern: "Multiple validation steps that grow over time" → Strategy Pipeline. Each step is its own class implementing a shared interface. A runner loops through them all.
Transfer: This exact pattern powers e-commerce order validation (stock check → fraud detection → address verification → payment authorization). Same pipeline, different checks. Also used in CI/CD: each pipeline stage (lint → build → test → deploy) is a pluggable strategy.
Section 7
Level 4 — Notifications 🟡 MEDIUM
New Constraint: "When a new review is published: notify the seller, update the aggregate rating, email the reviewer a confirmation, and log it for analytics. The review system should know nothing about email, analytics, or seller dashboards."
What breaks: Right now, after a review passes moderation and gets stored, the method returns. That's it. No one else knows a review was published. If we hardcode _emailService.Send() and _analytics.Log() inside the SubmitReview() method, our review system suddenly depends on email servers, analytics SDKs, and seller notification APIs. Change your email provider? Edit the review code. Add a new listener? Edit the review code. The review system becomes a God classA class that knows about and controls too many things. Like a manager who insists on doing every employee's job personally — instead of delegating, they handle email, analytics, notifications, and reviews all in one class. It's fragile, untestable, and painful to change. that does everything.
Think First #6 — pause and design before you see the answer
The review system publishes a review. Four different systems need to react. But the review system shouldn't know those systems exist. How do you let multiple systems react to an event without the event source knowing about them?
Hint: think about a radio station. The station broadcasts a signal. It doesn't know how many radios are tuned in, or what brand they are. Listeners subscribe, and the station just... broadcasts.
Your inner voice:
"I need the review system to say 'hey, a review was published!' and have multiple things react. But I don't want the review system to know what reacts."
"I could call each service directly... but then the review code imports EmailService, AnalyticsService, SellerNotifier. That's tight coupling. Adding a fifth listener means editing the review class."
"Wait — this is the radio station problem. Broadcast an event, let anyone subscribe. That's the Observer patternA pattern where an object (the subject/publisher) maintains a list of dependents (observers/subscribers) and notifies them automatically when its state changes. The subject doesn't know what the observers do with the information — it just broadcasts. Think of a YouTube channel: upload a video, all subscribers get notified, but the channel doesn't know what each subscriber does with the notification.. The ReviewService is the publisher. Seller dashboard, email, analytics, rating aggregator are all observers. They register themselves, and when a review is published, the ReviewService just says 'notify all' without knowing who's listening."
What Would You Do?
Two approaches to "telling everyone what happened." One creates a web of dependencies, the other broadcasts into the void.
The idea: After storing the review, call each service directly.
DirectCalls.cs — the review system does everything
public void SubmitReview(IReview review)
{
var result = _pipeline.Run(review);
if (!result.Passed) throw new ModerationException(result.Reason!);
_store.Add(review);
// Now notify everyone... manually
_emailService.SendConfirmation(review.UserId);
_sellerNotifier.NotifySeller(review.TargetId);
_ratingAggregator.Recalculate(review.TargetId);
_analytics.LogReviewEvent(review);
// Next month: _pushNotifications.Send(...)
// The month after: _mlPipeline.TrainOn(review)
// This list NEVER stops growing
}
Verdict: It works, but the SubmitReview() method now depends on four different services. That means four constructor parameters, four things that can fail, four things to mock in tests. And it violates SRPSingle Responsibility Principle: a class should have only one reason to change. This ReviewService changes when email logic changes, when analytics changes, when seller notification changes... it has four reasons to change instead of one. — the review service now has multiple reasons to change.
The idea: The review system publishes an event. Anyone who cares subscribes. The publisher doesn't know who's listening.
ObserverApproach.cs — publish and forget
public void SubmitReview(IReview review)
{
var result = _pipeline.Run(review);
if (!result.Passed) throw new ModerationException(result.Reason!);
_store.Add(review);
// Just broadcast — don't care who's listening
_observers.ForEach(o => o.OnReviewPublished(review));
}
Verdict: This is the winner. The review service has ONE dependency: a list of IReviewObserver. It doesn't know about email, analytics, or sellers. Adding a push notification listener next month? Create the class, register it. The review service doesn't change. Each observer is independently testable. The coupling goes from "everyone knows everyone" to "everyone knows one interface."
The Solution
The Observer patternA behavioral design pattern where a "subject" maintains a list of "observers" and notifies them of state changes. This decouples the event source from the event handlers — the source broadcasts, observers react independently. decouples the "something happened" part from the "react to it" part. The review system just says "a review was published" and moves on. Each observer decides independently what to do with that information.
IReviewObserver.cs — the contract for anyone who cares about reviews
public interface IReviewObserver
{
void OnReviewPublished(IReview review);
}
One method. That's it. Any class that wants to react when a review is published just implements this interface. The review system doesn't care what happens inside OnReviewPublished() — that's the observer's business.
Observers — four independent listeners
public class SellerNotificationObserver : IReviewObserver
{
private readonly ISellerService _sellers;
public SellerNotificationObserver(ISellerService sellers)
=> _sellers = sellers;
public void OnReviewPublished(IReview review)
=> _sellers.Notify(review.TargetId,
$"New {review.Stars}-star review on your product.");
}
public class RatingAggregatorObserver : IReviewObserver
{
private readonly ReviewStore _store;
private readonly IRatingStrategy _strategy;
public RatingAggregatorObserver(ReviewStore store, IRatingStrategy strategy)
{
_store = store;
_strategy = strategy;
}
public void OnReviewPublished(IReview review)
{
// Recalculate the aggregate rating for this target
var reviews = _store.GetReviews(review.TargetId);
var newAvg = _strategy.Calculate(reviews);
// ... persist newAvg wherever the aggregate is cached
}
}
public class EmailConfirmationObserver : IReviewObserver
{
private readonly IEmailService _email;
public EmailConfirmationObserver(IEmailService email)
=> _email = email;
public void OnReviewPublished(IReview review)
=> _email.Send(review.UserId,
"Thanks for your review!", "Your review has been published.");
}
public class AnalyticsObserver : IReviewObserver
{
private readonly IAnalyticsService _analytics;
public AnalyticsObserver(IAnalyticsService analytics)
=> _analytics = analytics;
public void OnReviewPublished(IReview review)
=> _analytics.Track("review_published", new {
review.ProductId, review.Rating, review.CreatedAt });
}
Four classes, four responsibilities. Each one does exactly ONE thing when a review is published. They don't know about each other. They don't know the review service's internals. They just implement OnReviewPublished() and do their job.
ReviewService.cs — publish and forget
public class ReviewService
{
private readonly ReviewStore _store;
private readonly ModerationPipeline _moderation;
private readonly List<IReviewObserver> _observers;
public ReviewService(
ReviewStore store,
ModerationPipeline moderation,
IEnumerable<IReviewObserver> observers)
{
_store = store;
_moderation = moderation;
_observers = observers.ToList();
}
public void Submit(IReview review)
{
// Step 1: Moderate
var check = _moderation.Run(review);
if (!check.Passed)
throw new ModerationException(check.Reason!);
// Step 2: Store
_store.Add(review);
// Step 3: Broadcast — we don't know who's listening
foreach (var observer in _observers)
observer.OnReviewPublished(review);
}
// Subscribe/unsubscribe at runtime
public void Subscribe(IReviewObserver observer)
=> _observers.Add(observer);
public void Unsubscribe(IReviewObserver observer)
=> _observers.Remove(observer);
}
The Submit() method is clean: moderate, store, notify. It doesn't import EmailService or AnalyticsService. It just loops through whatever observers were registered and calls OnReviewPublished(). The DI containerDependency Injection container. In .NET, you register all IReviewObserver implementations in Startup/Program.cs, and the container automatically injects the full list into the ReviewService constructor. No manual wiring needed. wires up all the observers at startup — the ReviewService never changes.
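If you want to see the broadcast in miniature without a DI container, here is a hedged, self-contained sketch. TinyReviewService and RecordingObserver are toy stand-ins invented for this demo, not the production types above; the point is only that one Submit() reaches every registered observer, and the publisher never names a single one of them:

```csharp
using System;
using System.Collections.Generic;

var service = new TinyReviewService();
var seller = new RecordingObserver();
var email = new RecordingObserver();
service.Subscribe(seller);
service.Subscribe(email);

service.Submit("r-001"); // one submit: every observer hears it exactly once
Console.WriteLine($"{seller.Seen.Count} {email.Seen.Count}"); // 1 1

// Toy versions of the real types, just enough to show the broadcast.
interface IReviewObserver { void OnReviewPublished(string reviewId); }

class RecordingObserver : IReviewObserver
{
    public List<string> Seen { get; } = new();
    public void OnReviewPublished(string reviewId) => Seen.Add(reviewId);
}

class TinyReviewService
{
    private readonly List<IReviewObserver> _observers = new();
    public void Subscribe(IReviewObserver o) => _observers.Add(o);
    public void Submit(string reviewId)
    {
        // (moderation + storage would run here first)
        foreach (var o in _observers)
            o.OnReviewPublished(reviewId); // broadcast: publisher doesn't know who listens
    }
}
```

Swapping RecordingObserver for a push-notification observer requires zero changes to TinyReviewService, which is exactly the property the real system gets from IReviewObserver.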
Diagrams
The Observer Pattern in Action
The ReviewService is a radio station. It broadcasts "review published!" to all subscribers. Each subscriber reacts independently.
Event Flow — What Happens When a Review is Submitted
Decoupling — Before vs. After Observer
Growing Diagram — Level 4
Before this level
You see "notify multiple systems" and think "call each service directly from the method."
After this level
You smell "react to an event" and reach for Observer. The publisher broadcasts, observers subscribe independently. Adding a listener = adding a class. The publisher never changes.
Smell → Pattern: "When you see multiple systems that need to react when something happens → Observer. The event source broadcasts, listeners subscribe. They don't know about each other."
Transfer: This exact pattern powers e-commerce order events: order placed → notify warehouse, charge card, send confirmation, update inventory. Also: social media post published → update feed, notify followers, log analytics, trigger recommendation engine.
Section 8
Level 5 — Edge Cases 🔴 HARD
New Constraint: "Handle every edge case: duplicate reviews, self-reviews, rate bombing (50 reviews/minute from one user), edit history tracking, and reviews on deleted products. No silent failures, no data corruption."
What breaks: Our Level 4 code is a happy-path hero. A user submits 50 reviews in one minute? All 50 go through. The same user reviews the same product twice? Both get stored. A seller reviews their own product? Nobody stops them. A product gets deleted, but its 200 reviews still show up in search results, linked to a ghost product. And when a user edits their review, the original text vanishes — no audit trail, no history. Every one of these is a real production bug waiting to happen.
Think First #6 — pause and design before you see the answer
Pick any two edge cases from the list above. For each one, answer: (1) What's the worst thing that happens if we don't handle it? (2) Where in the current code should the check go? (3) What data structure or technique prevents it?
The key question for all edge cases: should the check go in the moderation pipeline, the ReviewService, or the ReviewStore? Think about who owns the responsibility.
Your inner voice:
"Five edge cases. Where does each one belong? Duplicates — that's a uniqueness check. Belongs in the store or as a moderation strategy. Self-reviews — a seller reviewing their own product? That's a business rule, goes in moderation. Rate bombing — that's rate limitingRate limiting restricts how often a user can perform an action within a time window. For reviews, it might be "max 5 reviews per hour per user." It prevents abuse, spam, and coordinated attacks without blocking legitimate use., needs a time-windowed counter."
"Edit history... I need to keep old versions. A List<ReviewSnapshot> inside the review? Or a separate EditHistory store? Separate store feels cleaner — the review object stays simple, and the history can be queried independently."
"Deleted product reviews... soft deleteInstead of actually removing a record from the database, you mark it as deleted with a flag (like IsDeleted = true) and a timestamp. The data stays in the database but is excluded from normal queries. This preserves history, enables undo, and prevents orphaned references.. Don't delete reviews when a product is deleted — mark them as 'orphaned' and hide from display. The data stays for analytics and dispute resolution."
The Five Edge Cases
Each edge case follows a pattern: What goes wrong → Where the check lives → The fix. Let's walk through all five.
The Solution
Each edge case gets a focused fix. The beauty of the pipeline architecture from Level 3 is that two of these (self-reviews and rate bombing) slot in as new IModerationStrategy implementations — zero changes to existing code.
public class ReviewStore
{
// Key = (UserId, ProductId) — only one review per user per product
private readonly Dictionary<(string UserId, string ProductId), IReview> _reviews = new();
public Result<bool> Add(IReview review)
{
var key = (review.UserId, review.ProductId);
if (_reviews.ContainsKey(key))
return Result<bool>.Fail(
"You've already reviewed this product. Use Edit instead.");
_reviews[key] = review;
return Result<bool>.Ok(true);
}
}
The composite keyA key made up of multiple fields combined. Here, (UserId, ProductId) together form a unique identifier. One user can review many products, and one product can have many reviews, but one user can only review one product once. (UserId, ProductId) makes duplicates physically impossible. In a real database, this would be a unique index — the database itself enforces the rule.
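A quick self-contained sketch makes the guarantee concrete. TinyReviewStore is a stripped-down stand-in (it returns a plain bool instead of the Result&lt;bool&gt; above): the second Add for the same (user, product) pair fails, but the same user can still review a different product:

```csharp
using System;
using System.Collections.Generic;

var store = new TinyReviewStore();
Console.WriteLine(store.Add("user1", "productA")); // True: first review
Console.WriteLine(store.Add("user1", "productA")); // False: duplicate rejected
Console.WriteLine(store.Add("user1", "productB")); // True: different product is fine

// Stripped-down store: the composite key is the whole trick.
class TinyReviewStore
{
    private readonly Dictionary<(string UserId, string ProductId), string> _reviews = new();

    public bool Add(string userId, string productId)
    {
        var key = (userId, productId);
        if (_reviews.ContainsKey(key))
            return false; // "You've already reviewed this product."
        _reviews[key] = $"review by {userId} on {productId}";
        return true;
    }
}
```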
SelfReviewGuard.cs — plugs into existing moderation pipeline
public class SelfReviewGuard : IModerationStrategy
{
private readonly IProductService _products;
public SelfReviewGuard(IProductService products)
=> _products = products;
public ModerationResult Check(IReview review)
{
var product = _products.GetById(review.ProductId);
if (product is null)
return ModerationResult.Reject("Product not found.");
return review.UserId == product.SellerId
? ModerationResult.Reject("Sellers cannot review their own products.")
: ModerationResult.Pass();
}
}
This is a new IModerationStrategy — it plugs directly into the pipeline from Level 3. The pipeline, the spam detector, the profanity filter — none of them change. That's the power of the pipeline architecture paying off already.
RateLimiter.cs — sliding window prevents abuse
public class RateLimiter : IModerationStrategy
{
private readonly int _maxReviews;
private readonly TimeSpan _window;
private readonly IClock _clock;
// userId -> list of submission timestamps
private readonly Dictionary<string, List<DateTimeOffset>> _history = new();
public RateLimiter(int maxReviews, TimeSpan window, IClock clock)
{
_maxReviews = maxReviews;
_window = window;
_clock = clock;
}
public ModerationResult Check(IReview review)
{
var now = _clock.UtcNow;
var cutoff = now - _window;
if (!_history.ContainsKey(review.UserId))
_history[review.UserId] = new();
var timestamps = _history[review.UserId];
// Remove entries outside the sliding window
timestamps.RemoveAll(t => t < cutoff);
if (timestamps.Count >= _maxReviews)
return ModerationResult.Reject(
$"Rate limit: max {_maxReviews} reviews per {_window.TotalMinutes} min.");
timestamps.Add(now);
return ModerationResult.Pass();
}
}
The sliding windowA rate limiting technique where you track timestamps of recent actions. You look back N minutes from "now" and count how many actions occurred. Old timestamps fall off the window naturally. This is more fair than fixed windows because it doesn't have a "reset at midnight" boundary where someone could sneak in double the limit. tracks each user's recent submissions. Notice IClock instead of DateTimeOffset.UtcNow — we'll see in Level 6 why injectable clocks make testing 100x easier.
ReviewSnapshot.cs + ReviewStore.Update()
// A frozen snapshot of the review at a point in time
public record ReviewSnapshot(
string Text,
int Rating,
DateTimeOffset EditedAt
);
// Inside ReviewStore. Assumes two extra fields next to _reviews:
//   private readonly Dictionary<(string UserId, string ProductId), List<ReviewSnapshot>> _editHistory = new();
//   private readonly IClock _clock;
// (and that the stored review type exposes settable Text and Rating)
public Result<bool> Update(string userId, string productId, string newText, int newRating)
{
var key = (userId, productId);
if (!_reviews.TryGetValue(key, out var existing))
return Result<bool>.Fail("Review not found.");
// Save the old version before overwriting
if (!_editHistory.ContainsKey(key))
_editHistory[key] = new List<ReviewSnapshot>();
_editHistory[key].Add(new ReviewSnapshot(
existing.Text, existing.Rating, _clock.UtcNow));
// Now update the live review
existing.Text = newText;
existing.Rating = newRating;
return Result<bool>.Ok(true);
}
public IReadOnlyList<ReviewSnapshot> GetEditHistory(string userId, string productId)
=> _editHistory.TryGetValue((userId, productId), out var history)
? history.AsReadOnly()
: Array.Empty<ReviewSnapshot>();
Before overwriting, we save a snapshotAn immutable copy of the review's state at a specific moment. Think of it like a photograph: it captures what the review looked like before the edit. The edit history is a photo album of every version. of the current state. The snapshots are immutable records — once saved, they never change. Users can see "edited 3 times" and admins can view the full history for dispute resolution.
SoftDelete.cs — products disappear from view, not from data
public class Product
{
public string Id { get; init; } = "";
public string SellerId { get; init; } = "";
public bool IsActive { get; private set; } = true;
public DateTimeOffset? DeletedAt { get; private set; }
public void SoftDelete(IClock clock)
{
IsActive = false;
DeletedAt = clock.UtcNow;
}
}
// In ReviewStore. Assumes an injected IProductService field (_products).
// Filter out orphaned reviews:
public IEnumerable<IReview> GetActiveReviews(string productId)
=> _reviews.Values
.Where(r => r.ProductId == productId)
.Where(r => _products.GetById(r.ProductId)?.IsActive == true);
// Reviews still exist for analytics, disputes, and audit trails
public IEnumerable<IReview> GetAllReviews(string productId)
=> _reviews.Values.Where(r => r.ProductId == productId);
Soft deleteInstead of permanently removing data, you flag it as deleted. The data stays in storage but is excluded from normal queries. This preserves referential integrity (no orphaned foreign keys), enables undo, and keeps audit trails intact. Most production systems use soft delete. means "invisible but not gone." Reviews for deleted products are hidden from shoppers but available for analytics, legal disputes, and admin auditing. In a real database, this is a filtered index: WHERE IsActive = true.
Rate Bombing — Sliding Window Defense
Edit History — Every Version Preserved
Growing Diagram — Level 5
Before this level
You see a clean, working system and feel done. Edge cases? "We'll handle those later."
After this level
You ask "What if?" for every feature: What if duplicate? What if abuse? What if deleted? You treat edge cases as first-class requirements, not afterthoughts.
Smell → Pattern: "When you see code that only handles the happy path → apply the 'What If?' framework. For every method, ask: what if invalid input? what if duplicate? what if the referenced entity is gone? what if too many requests?"
Section 9
Level 6 — Testability 🔴 HARD
New Constraint: "Every component must be testable in isolation. Mock the moderation pipeline, control time for rate limiter tests, fake purchase verification. No test should depend on a real database, real clock, or real email server."
What breaks: Look at the RateLimiter from Level 5 — it uses IClock. Good, that's already injectable. But what about the PurchaseVerifier? If it calls a real API in tests, your test suite is slow, flaky, and depends on network connectivity. What about ReviewStore? If it's a static singleton, you can't reset it between tests. Every hard dependencyA hard dependency is when a class creates its own dependency (using 'new') or calls a static method directly, instead of receiving the dependency through its constructor. Hard dependencies can't be swapped for fakes in tests, making the class untestable in isolation. is a wall between you and reliable tests.
Think First #7 — pause and design before you see the answer
Look at the review system so far. Identify 3 components that are hard to test because they depend on external services or system state. For each one, describe what interface you'd extract so you could swap in a fake for testing.
Key principle: if a class creates its own dependencies (using new), tests can't control those dependencies. If a class receives its dependencies through the constructor, tests can inject fakes.
Your inner voice:
"OK, what's hard to test? (1) PurchaseVerifier talks to an API — I need IPurchaseService so I can fake it. (2) The RateLimiter needs to manipulate time — I already have IClock, good. (3) The whole ReviewService depends on ModerationPipeline, ReviewStore, and observers. In a unit test, I want to test just the service logic with fake everything-else."
"The pattern is always the same: extract an interface, inject through the constructor, swap fakes in tests. That's Dependency InjectionA technique where a class receives its dependencies from the outside (usually through its constructor) instead of creating them internally. This lets you swap real implementations for fakes/mocks in tests. Think of it like a power outlet — you can plug in any device that fits the socket.. Not a framework feature — just a design principle."
The Solution
Every external dependency gets an interface. Tests inject fakes. The real implementations are wired up in the DI containerA DI (Dependency Injection) container is a tool that manages object creation and wiring. You register "when someone asks for IPurchaseService, give them RealPurchaseService." The container handles creating everything and connecting the dependencies. In .NET, this is built into the framework via IServiceCollection. at startup — the classes themselves never know (or care) whether they're running in production or in a test.
Interfaces.cs — the seams that make testing possible
// Time — injectable so tests can control the clock
public interface IClock
{
DateTimeOffset UtcNow { get; }
}
public class SystemClock : IClock
{
public DateTimeOffset UtcNow => DateTimeOffset.UtcNow;
}
// Purchase verification — injectable so tests skip the real API
public interface IPurchaseService
{
bool HasPurchased(string userId, string productId);
}
// Email — injectable so tests don't send real emails
public interface IEmailService
{
void Send(string userId, string subject, string body);
}
// Review storage — injectable so tests use in-memory store
public interface IReviewStore
{
Result<bool> Add(IReview review);
IEnumerable<IReview> GetByProduct(string productId);
}
Each interface is a seamIn testing terminology, a "seam" is a point in the code where you can swap one implementation for another without changing the code that uses it. Interfaces are the most common seams — they let you replace real services with fakes, mocks, or stubs during testing. — a point where you can swap implementations. In production, IClock is SystemClock. In tests, it's a fake clock you control.
TestFakes.cs — controllable replacements for real services
// Fake clock: you tell it what time it is
public class FakeClock : IClock
{
public DateTimeOffset UtcNow { get; set; } = DateTimeOffset.UtcNow;
public void Advance(TimeSpan duration) => UtcNow += duration;
}
// Fake purchase service: you control who "bought" what
public class FakePurchaseService : IPurchaseService
{
private readonly HashSet<(string, string)> _purchases = new();
public void AddPurchase(string userId, string productId)
=> _purchases.Add((userId, productId));
public bool HasPurchased(string userId, string productId)
=> _purchases.Contains((userId, productId));
}
// Fake email: captures sent emails for assertions
public class FakeEmailService : IEmailService
{
public List<(string UserId, string Subject, string Body)> SentEmails { get; } = new();
public void Send(string userId, string subject, string body)
=> SentEmails.Add((userId, subject, body));
}
Fakes are dead simple. The fake clock lets you jump forward in time. The fake purchase service lets you pre-load "this user bought this product." The fake email service captures messages so you can assert "was a confirmation email sent?" All without touching real infrastructure.
[Fact]
public void RateLimiter_Rejects_After_Threshold()
{
var clock = new FakeClock();
var limiter = new RateLimiter(maxReviews: 3, TimeSpan.FromHours(1), clock);
var review = CreateTestReview("user1", "product1");
// Submit 3 reviews — all pass
Assert.True(limiter.Check(review).Passed);
Assert.True(limiter.Check(review).Passed);
Assert.True(limiter.Check(review).Passed);
// 4th review — rejected!
Assert.False(limiter.Check(review).Passed);
// Advance clock past the window — resets
clock.Advance(TimeSpan.FromHours(1).Add(TimeSpan.FromSeconds(1)));
Assert.True(limiter.Check(review).Passed); // Accepted again
}
[Fact]
public void SelfReview_Is_Rejected()
{
var products = new FakeProductService();
products.Add(new Product { Id = "p1", SellerId = "seller42" });
var guard = new SelfReviewGuard(products);
var review = CreateTestReview(userId: "seller42", productId: "p1");
var result = guard.Check(review);
Assert.False(result.Passed);
Assert.Contains("cannot review their own", result.Reason);
}
[Fact]
public void Observer_Receives_Published_Review()
{
var fakeEmail = new FakeEmailService();
var emailObserver = new EmailConfirmationObserver(fakeEmail);
var service = BuildServiceWith(observers: new[] { emailObserver });
service.Submit(CreateTestReview("user1", "product1"));
Assert.Single(fakeEmail.SentEmails);
Assert.Equal("user1", fakeEmail.SentEmails[0].UserId);
}
Notice what's not here: no real clock, no real database, no real email server, no network calls. Each test runs in milliseconds, never flakes, and tests exactly one behavior. The FakeClock.Advance() call is the magic — we literally fast-forward time to test the sliding window.
Program.cs — production wiring
var builder = WebApplication.CreateBuilder(args);
// Real implementations for production
builder.Services.AddSingleton<IClock, SystemClock>();
builder.Services.AddSingleton<IPurchaseService, RealPurchaseService>();
builder.Services.AddSingleton<IEmailService, SmtpEmailService>();
builder.Services.AddSingleton<IReviewStore, SqlReviewStore>();
// Moderation pipeline — order matters!
builder.Services.AddSingleton<IModerationStrategy, SpamDetector>();
builder.Services.AddSingleton<IModerationStrategy, ProfanityFilter>();
builder.Services.AddSingleton<IModerationStrategy, PurchaseVerifier>();
builder.Services.AddSingleton<IModerationStrategy, SelfReviewGuard>();
builder.Services.AddSingleton<IModerationStrategy>(sp =>
new RateLimiter(5, TimeSpan.FromHours(1), sp.GetRequiredService<IClock>()));
builder.Services.AddSingleton<ModerationPipeline>();
// Observers
builder.Services.AddSingleton<IReviewObserver, SellerNotificationObserver>();
builder.Services.AddSingleton<IReviewObserver, RatingAggregatorObserver>();
builder.Services.AddSingleton<IReviewObserver, EmailConfirmationObserver>();
builder.Services.AddSingleton<IReviewObserver, AnalyticsObserver>();
// The service itself
builder.Services.AddSingleton<ReviewService>();
The DI container wires everything together at startup. In production, IClock is SystemClock. In tests, it's FakeClock. The ReviewService class is identical in both environments — it never calls new internally. This is the Dependency Inversion PrincipleThe D in SOLID. High-level modules (ReviewService) should depend on abstractions (IClock, IReviewStore), not on concrete implementations (SystemClock, SqlReviewStore). This lets you swap implementations without changing the high-level code. The "inversion" is that the low-level details depend on the high-level abstractions, not the other way around. in action.
Diagrams
Dependency Injection — Same Code, Different Wiring
The class depends on interfaces. The wiring decides what's behind the interface. Production gets real services, tests get fakes. The class never knows the difference.
What We Can Test Now
Growing Diagram — Level 6
Before this level
You write code that works in production but is impossible to test without spinning up a database, email server, and faking the system clock.
After this level
You design for testability from the start: every external dependency gets an interface, every class receives its deps through the constructor. Tests run in milliseconds with zero infrastructure.
Smell → Pattern: "When you see a class that's hard to test because it creates its own dependencies → extract interfaces, inject through the constructor. If you need to control time → inject IClock. If you need to control external services → inject their interface."
Section 10
Level 7 — Scale It 🔴 HARD
New Constraint: "The platform now has 50 million reviews. Users search by keyword, filter by rating, sort by date. Sellers see real-time aggregate ratings update the moment a review is published. The system handles 10,000 review submissions per minute."
What breaks: Our in-memory Dictionary won't survive a server restart, let alone hold 50 million reviews. Searching reviews by keyword means scanning every review's text — O(n) on 50 million records. Recalculating the aggregate rating on every new review means reading ALL reviews for that product, every time. And 10,000 submissions per minute on a single server? The moderation pipeline becomes a bottleneck. This is where LLD meets HLDLow-Level Design (LLD) focuses on class structure, patterns, and code. High-Level Design (HLD) focuses on infrastructure: databases, caches, message queues, and distributed systems. Level 7 is the bridge — you recognize when your code design needs infrastructure support to work at scale..
Think First #8 — pause and design before you see the answer
Three scaling challenges: (1) full-text search across millions of reviews, (2) real-time aggregate ratings that update instantly, (3) handling burst traffic without the moderation pipeline becoming a bottleneck. For each one, think about what infrastructure or pattern would solve it without rewriting our class design.
Hint: the patterns we've built (Strategy, Observer, Pipeline) map beautifully to distributed infrastructure. The Observer pattern in code becomes a message queue in infrastructure. The Pipeline becomes an async processing chain.
Your inner voice:
"Full-text search on 50 million reviews... ElasticsearchA distributed search engine built for full-text search. Instead of scanning every row in a database, it builds inverted indexes (like the index at the back of a book) that map words to documents. Searching "great battery life" across 50M reviews takes milliseconds, not minutes.. It's purpose-built for this. Our IReviewStore interface already abstracts storage — we add an Elasticsearch implementation that writes to both the database and the search index."
"Real-time aggregates... Every time someone submits a review, recalculating the average from ALL reviews is insane at scale. I need CQRSCommand Query Responsibility Segregation: use separate models for reading and writing. The write side handles new reviews. The read side maintains pre-computed aggregates (total reviews, average rating) that update incrementally. You never recalculate from scratch — you just add the new review's contribution to the running total.. Keep a running aggregate (count + sum) that updates incrementally when a new review arrives. Never recalculate from scratch."
"10K submissions per minute... The moderation pipeline is CPU work. Put it behind a message queueA queue (like RabbitMQ, Kafka, or Azure Service Bus) that sits between the API and the processing logic. The API pushes new reviews onto the queue and responds immediately. Worker processes consume from the queue and run moderation at their own pace. If traffic spikes, the queue buffers the overflow.. The API accepts the review, pushes it to a queue, and responds immediately. Workers consume from the queue and run moderation asynchronously. Our Observer pattern in code becomes publish/subscribe in infrastructure. Beautiful."
Three Scaling Strategies
Each strategy maps a code pattern we already built to an infrastructure pattern. The class design doesn't change — we swap implementations behind the same interfaces.
Problem: Recalculating average rating from 10,000 reviews every time one review is added.
Solution: Keep a running aggregate. When a new review arrives, update the count and sum incrementally. When someone reads the rating, return the pre-computed value. This is CQRSCommand Query Responsibility Segregation separates writes (commands) from reads (queries). The write path processes new reviews and updates aggregates. The read path serves pre-computed data instantly. Neither path interferes with the other.: separate the write path (process review) from the read path (display rating).
RatingAggregate.cs — incremental, never recalculates
public class RatingAggregate
{
public string ProductId { get; init; } = "";
public int TotalReviews { get; private set; }
public int SumOfRatings { get; private set; }
public double Average => TotalReviews == 0 ? 0 : (double)SumOfRatings / TotalReviews;
// O(1) — just add, never recalculate
public void AddReview(int rating)
{
TotalReviews++;
SumOfRatings += rating;
}
// Handle edits: remove old, add new
public void UpdateReview(int oldRating, int newRating)
{
SumOfRatings += (newRating - oldRating);
}
// Handle deletes
public void RemoveReview(int rating)
{
TotalReviews--;
SumOfRatings -= rating;
}
}
Every operation is O(1). Whether the product has 10 reviews or 10 million, adding a new one takes the same constant time. The read side (Average) is a simple division — no database query needed.
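A quick sanity check shows the incremental math agrees with a from-scratch recompute. This is a self-contained copy of the aggregate above (minus ProductId, so it compiles standalone):

```csharp
using System;
using System.Linq;

var ratings = new[] { 5, 4, 3, 5, 1 };
var agg = new RatingAggregate();
foreach (var r in ratings)
    agg.AddReview(r); // O(1) each time

// The incremental average matches a full recompute over all ratings...
Console.WriteLine(agg.Average == ratings.Average()); // True

// ...and stays correct through an edit (the 1-star becomes a 4-star):
// sum goes from 18 to 21, count stays 5, average becomes 4.2.
agg.UpdateReview(oldRating: 1, newRating: 4);
Console.WriteLine(agg.Average == 4.2); // True

// Same shape as the class in the text.
class RatingAggregate
{
    public int TotalReviews { get; private set; }
    public int SumOfRatings { get; private set; }
    public double Average => TotalReviews == 0 ? 0 : (double)SumOfRatings / TotalReviews;
    public void AddReview(int rating) { TotalReviews++; SumOfRatings += rating; }
    public void UpdateReview(int oldRating, int newRating) => SumOfRatings += newRating - oldRating;
}
```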
Problem: "Show me all reviews mentioning 'battery life'" across 50 million records.
Solution: A search index (Elasticsearch, Azure Cognitive Search, etc.) that builds inverted indexesLike the index at the back of a textbook. Instead of scanning every page to find "battery," you look up "battery" in the index and it tells you exactly which pages (documents) contain that word. This turns a full scan into a near-instant lookup. on review text. Our IReviewStore interface already abstracts storage — we create an implementation that writes to both the database and the search index.
SearchableReviewStore.cs — dual-write to DB + search index
public class SearchableReviewStore : IReviewStore
{
private readonly IReviewStore _database; // SQL/NoSQL
private readonly ISearchIndex _searchIndex; // Elasticsearch
public SearchableReviewStore(IReviewStore database, ISearchIndex searchIndex)
{
_database = database;
_searchIndex = searchIndex;
}
public Result<bool> Add(IReview review)
{
var result = _database.Add(review);
if (result.IsSuccess)
_searchIndex.Index(review); // Also index for search
return result;
}
// Search by keyword — milliseconds on 50M records
public IEnumerable<IReview> Search(string query, int? minRating = null)
=> _searchIndex.Search(query, minRating);
}
The Decorator patternSearchableReviewStore wraps an existing IReviewStore and adds search functionality. The inner store handles persistence, the wrapper adds indexing. This is the Decorator pattern: add behavior without modifying the original class. in action — SearchableReviewStore wraps the real store and adds search capability. The ReviewService doesn't know the difference.
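The search index itself is worth demystifying. Here is a toy in-memory inverted index (word → set of review IDs), a hedged sketch of the core idea that Elasticsearch industrializes; real engines add proper tokenization, stemming, relevance ranking, and distribution across nodes:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var index = new TinyInvertedIndex();
index.Add("r1", "great battery life, fast charging");
index.Add("r2", "battery died after a week");
index.Add("r3", "love the screen, decent sound");

// Lookup is one dictionary hit per word: no scan over review text.
Console.WriteLine(string.Join(",", index.Search("battery"))); // r1,r2

class TinyInvertedIndex
{
    // word -> sorted set of review IDs containing that word
    private readonly Dictionary<string, SortedSet<string>> _postings = new();

    public void Add(string reviewId, string text)
    {
        var words = text.ToLowerInvariant()
            .Split(new[] { ' ', ',', '.', '!' }, StringSplitOptions.RemoveEmptyEntries);
        foreach (var word in words)
        {
            if (!_postings.TryGetValue(word, out var ids))
                _postings[word] = ids = new SortedSet<string>();
            ids.Add(reviewId);
        }
    }

    public IEnumerable<string> Search(string word)
        => _postings.TryGetValue(word.ToLowerInvariant(), out var ids)
            ? ids
            : Enumerable.Empty<string>();
}
```

Indexing costs a little extra work at write time; in exchange, "find every review mentioning battery" stops being O(n) over all review text and becomes a dictionary lookup.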
Problem: 10,000 reviews per minute. Moderation is CPU-heavy. Users wait while each check runs.
Solution: Accept the review immediately, push it to a message queueA queue is a buffer between the producer (API) and the consumer (moderation workers). The API responds instantly ("Review received, processing..."), and background workers process reviews from the queue at their own pace. If there's a traffic spike, the queue absorbs it.. Workers pull from the queue and run moderation asynchronously. The user sees "Review submitted, pending moderation" instantly.
AsyncReviewService.cs — non-blocking submission
public class AsyncReviewService
{
private readonly IMessageQueue _queue;
public AsyncReviewService(IMessageQueue queue) => _queue = queue;
// API endpoint — returns immediately
public string Submit(IReview review)
{
_queue.Publish("review-submitted", review);
return "Review received. You'll be notified once it's approved.";
}
}
// Background worker — processes at its own pace
public class ModerationWorker : IMessageHandler<IReview>
{
private readonly ModerationPipeline _pipeline;
private readonly IReviewStore _store;
private readonly List<IReviewObserver> _observers;
public Task Handle(IReview review)
{
var result = _pipeline.Run(review);
if (!result.Passed)
{
// Notify the user their review was rejected
return Task.CompletedTask;
}
_store.Add(review);
_observers.ForEach(o => o.OnReviewPublished(review));
return Task.CompletedTask;
}
}
The Observer pattern from Level 4 maps perfectly to pub/subPublish/Subscribe: a messaging pattern where the publisher sends messages to a topic, and subscribers receive messages from that topic. In-code Observer and infrastructure pub/sub are the same idea at different scales. Code Observer = method calls in one process. Pub/Sub = messages across distributed services. infrastructure. In-code IReviewObserver becomes separate microservices listening on a message bus. Same pattern, bigger scale.
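Note that neither IMessageQueue nor IMessageHandler is defined above; they stand in for whatever broker client you use (RabbitMQ, Kafka, Azure Service Bus). A toy in-memory version, synchronous and single-process and therefore nothing like a real broker, shows the shape of the contract:

```csharp
using System;
using System.Collections.Generic;

var queue = new InMemoryQueue<string>();
var processed = new List<string>();

// "Worker": in production this runs in a separate process, at its own pace.
queue.Subscribe("review-submitted", reviewId => processed.Add(reviewId));

// "API": publishes and returns immediately; it never runs moderation itself.
queue.Publish("review-submitted", "r-001");
queue.Publish("review-submitted", "r-002");
Console.WriteLine(processed.Count); // 2

// Toy broker: topic -> handlers. Real brokers add persistence, retries,
// acknowledgements, competing consumers, and delivery across machines.
class InMemoryQueue<T>
{
    private readonly Dictionary<string, List<Action<T>>> _handlers = new();

    public void Subscribe(string topic, Action<T> handler)
    {
        if (!_handlers.TryGetValue(topic, out var list))
            _handlers[topic] = list = new List<Action<T>>();
        list.Add(handler);
    }

    public void Publish(string topic, T message)
    {
        if (_handlers.TryGetValue(topic, out var list))
            foreach (var h in list) h(message);
    }
}
```

Structurally this is the Level 4 Observer with a string topic in the middle, which is exactly why the in-code pattern migrates so cleanly to queue infrastructure.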
Diagrams
CQRS — Separate Read and Write Paths
Full-Text Search Architecture
Growing Diagram — Level 7 (Complete)
Before this level
You think LLD and HLD are separate worlds. "That's an infrastructure problem" means "not my concern."
After this level
You see how code patterns map to infrastructure patterns. In-code Observer becomes pub/sub. In-code Strategy pipeline becomes async workers. Good LLD enables good HLD — they're the same design at different scales.
Smell → Pattern: "When you see reading from the same data you're writing, and reads are 100x more frequent → CQRS. Separate read and write models. Pre-compute what readers need so they never wait."
Transfer: CQRS powers social media feeds (posts are written once, feeds are pre-computed for millions of readers). Async pipelines power payment processing (charge accepted instantly, settlement runs in background). Search indexes power e-commerce product catalogs (filter by price, brand, rating in milliseconds).
Section 11
The Full Code — Everything Assembled
You've built this review & rating system piece by piece across seven levels. Now it's time to see the whole thing in one place — every file, every pattern, every guard clause. Each code section is annotated with // Level N comments so you can trace exactly which constraint forced each line into existence.
Before diving into the files, here's a bird's-eye view of every type in the system, color-coded by the level that introduced it. Green types appeared early (Levels 0–1), yellow ones came in the middle (Levels 2–4), and red ones were added in the advanced levels (5–7).
Now let's see the actual code. Each file is organized by responsibility — models in one place, rating strategies in another, moderation pipeline in a third. Click through the tabs to read each file.
Models.cs — All data types the system carries around
namespace ReviewRating.Models;
// ─── IReview ──────────────────────────────────────────
// The contract every review must follow. // Level 1
// Product reviews, seller reviews, and delivery reviews
// all look the same to the system — it only cares about
// who wrote it, what rating they gave, and the text.
public interface IReview
{
string ReviewId { get; }
string UserId { get; }
int Rating { get; } // 1-5 stars
string Text { get; }
DateTimeOffset CreatedAt { get; }
ReviewStatus Status { get; set; }
}
public enum ReviewStatus // Level 1
{
Pending, // Submitted, awaiting moderation
Approved, // Passed all checks
Rejected, // Failed moderation
Flagged // Needs human review
}
// ─── ProductReview ────────────────────────────────────
// A review for a specific product. // Level 1
// Has a product ID + optional photo URLs.
public class ProductReview : IReview
{
public string ReviewId { get; init; } = Guid.NewGuid().ToString("N")[..8];
public string UserId { get; init; } = "";
public string ProductId { get; init; } = ""; // Level 1
public int Rating { get; init; }
public string Text { get; init; } = "";
public List<string> PhotoUrls { get; init; } = []; // Level 1
public DateTimeOffset CreatedAt { get; init; } = DateTimeOffset.UtcNow;
public ReviewStatus Status { get; set; } = ReviewStatus.Pending;
public bool IsVerifiedPurchase { get; init; } // Level 3
}
// ─── SellerReview ─────────────────────────────────────
// A review for a seller/merchant. // Level 1
// Has a seller ID + communication rating.
public class SellerReview : IReview
{
public string ReviewId { get; init; } = Guid.NewGuid().ToString("N")[..8];
public string UserId { get; init; } = "";
public string SellerId { get; init; } = "";
public int Rating { get; init; }
public string Text { get; init; } = "";
public int CommunicationRating { get; init; } // Level 1
public DateTimeOffset CreatedAt { get; init; } = DateTimeOffset.UtcNow;
public ReviewStatus Status { get; set; } = ReviewStatus.Pending;
}
// ─── DeliveryReview ───────────────────────────────────
// A review for the delivery experience. // Level 1
// Has an order ID + on-time flag.
public class DeliveryReview : IReview
{
public string ReviewId { get; init; } = Guid.NewGuid().ToString("N")[..8];
public string UserId { get; init; } = "";
public string OrderId { get; init; } = "";
public int Rating { get; init; }
public string Text { get; init; } = "";
public bool WasOnTime { get; init; } // Level 1
public DateTimeOffset CreatedAt { get; init; } = DateTimeOffset.UtcNow;
public ReviewStatus Status { get; set; } = ReviewStatus.Pending;
}
// ─── ReviewResult<T> ─────────────────────────────────
// A generic result wrapper — either a value or an error. // Level 5
// Used for operations that might fail (rate limited,
// spam detected, duplicate review).
public record ReviewResult<T>(T? Value, string? Error = null)
{
public bool IsSuccess => Error is null;
public static ReviewResult<T> Ok(T value) => new(value);
public static ReviewResult<T> Fail(string error) => new(default, error);
}
// ─── EditHistory ──────────────────────────────────────
// Tracks every edit made to a review. // Level 5
// Prevents abuse: users can edit but the trail is visible.
public record ReviewEdit(
string PreviousText,
int PreviousRating,
DateTimeOffset EditedAt);
public class EditHistory
{
private readonly List<ReviewEdit> _edits = [];
private readonly int _maxEdits; // Level 5
public EditHistory(int maxEdits = 5) => _maxEdits = maxEdits;
public IReadOnlyList<ReviewEdit> Edits => _edits;
public int EditCount => _edits.Count;
public ReviewResult<bool> RecordEdit(string previousText, int previousRating,
DateTimeOffset editedAt)
{
if (_edits.Count >= _maxEdits)
return ReviewResult<bool>.Fail(
$"Maximum {_maxEdits} edits reached. Cannot edit further.");
_edits.Add(new ReviewEdit(previousText, previousRating, editedAt));
return ReviewResult<bool>.Ok(true);
}
}
Everything here is either a data typeData types describe the SHAPE of information — what fields it has and what values are valid. They don't contain business logic. Think of them as forms: they define the blanks, not what you write in them. or a thin wrapper around data. The IReview interface was born in Level 1 when we realized product reviews, seller reviews, and delivery reviews have different properties but the service shouldn't care which type it's processing. ReviewResult<T> arrived in Level 5 when we needed a way to say "this submission failed, and here's why" instead of throwing exceptions everywhere.
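Here is what that looks like at a call site. The `ReviewResult<T>` record is repeated from Models.cs above so the snippet stands alone — callers branch on `IsSuccess` instead of wrapping every submission in try/catch:

```csharp
var ok = ReviewResult<string>.Ok("review-abc123");
var blocked = ReviewResult<string>.Fail("Rate limit exceeded.");

// Failure is a value to inspect, not an exception to catch.
Console.WriteLine(ok.IsSuccess ? $"Stored {ok.Value}" : $"Error: {ok.Error}");
Console.WriteLine(blocked.IsSuccess ? $"Stored {blocked.Value}" : $"Error: {blocked.Error}");

public record ReviewResult<T>(T? Value, string? Error = null)
{
    public bool IsSuccess => Error is null;
    public static ReviewResult<T> Ok(T value) => new(value);
    public static ReviewResult<T> Fail(string error) => new(default, error);
}
```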
using ReviewRating.Models;
namespace ReviewRating.Ratings;
// ─── The contract for calculating aggregate ratings ──
// Any algorithm — simple average, weighted, Bayesian — // Level 2
// implements this one method. The service doesn't care
// HOW the rating is calculated, only that it gets a
// number back between 1.0 and 5.0.
public interface IRatingStrategy
{
string Name { get; }
double Calculate(IReadOnlyList<IReview> reviews);
}
// ─── SimpleAverage ────────────────────────────────────
// Sum all ratings, divide by count. The most basic // Level 2
// approach. Fair when you have thousands of reviews,
// but easily manipulated with a handful of fake ones.
public sealed class SimpleAverage : IRatingStrategy
{
public string Name => "Simple Average";
public double Calculate(IReadOnlyList<IReview> reviews)
{
if (reviews.Count == 0) return 0;
return Math.Round(
reviews.Average(r => r.Rating), 2);
}
}
// ─── WeightedAverage ──────────────────────────────────
// Recent reviews matter more. A review from yesterday // Level 2
// counts more than one from 2 years ago. Uses exponential
// decay: weight = e^(-lambda * daysAgo).
public sealed class WeightedAverage : IRatingStrategy
{
private readonly double _decayLambda;
private readonly ITimeProvider _timeProvider; // Level 6
public string Name => "Weighted Average (time-decay)";
public WeightedAverage(ITimeProvider timeProvider,
double decayLambda = 0.01)
{
_timeProvider = timeProvider;
_decayLambda = decayLambda;
}
public double Calculate(IReadOnlyList<IReview> reviews)
{
if (reviews.Count == 0) return 0;
var now = _timeProvider.UtcNow; // Level 6
double weightedSum = 0, totalWeight = 0;
foreach (var review in reviews)
{
var daysAgo = (now - review.CreatedAt).TotalDays;
var weight = Math.Exp(-_decayLambda * daysAgo);
weightedSum += review.Rating * weight;
totalWeight += weight;
}
return totalWeight > 0
? Math.Round(weightedSum / totalWeight, 2)
: 0;
}
}
// ─── BayesianAverage ──────────────────────────────────
// Pulls toward the global average when review count is // Level 2
// low. A product with 2 reviews at 5.0 stars shouldn't
// outrank one with 500 reviews at 4.7. The "confidence
// parameter" C controls how many reviews are needed
// before the product's own average dominates.
public sealed class BayesianAverage : IRatingStrategy
{
private readonly double _globalAverage;
private readonly int _confidenceParameter; // Level 2
public string Name => "Bayesian Average";
public BayesianAverage(double globalAverage = 3.5,
int confidenceParameter = 10)
{
_globalAverage = globalAverage;
_confidenceParameter = confidenceParameter;
}
public double Calculate(IReadOnlyList<IReview> reviews)
{
if (reviews.Count == 0) return _globalAverage;
var sum = reviews.Sum(r => r.Rating);
// Bayesian formula: (C * globalAvg + sum) / (C + count)
var result = (_confidenceParameter * _globalAverage + sum)
/ (_confidenceParameter + reviews.Count);
return Math.Round(result, 2);
}
}
This is the Strategy patternThe Strategy pattern lets you define a family of algorithms, put each one in its own class, and make them interchangeable. The service calls Calculate() without caring which strategy is behind it — Simple, Weighted, or Bayesian all look the same from the outside.. Three very different algorithms, one interface. The service calls Calculate() and gets back a rating — it has no idea whether the rating was a plain average, a time-weighted calculation, or a Bayesian estimate that factors in confidence. Adding a new algorithm (like "median" or "trimmed mean") means adding one class. Zero changes to the service.
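To prove the "one class, zero service changes" claim, here is what a trimmed-mean strategy would look like. TrimmedMean is a hypothetical example, not part of the chapter's build; it relies on the `IRatingStrategy` and `IReview` contracts defined above:

```csharp
// Drops the single highest and lowest rating before averaging,
// dampening the effect of one extreme (possibly fake) review.
public sealed class TrimmedMean : IRatingStrategy
{
    public string Name => "Trimmed Mean";

    public double Calculate(IReadOnlyList<IReview> reviews)
    {
        if (reviews.Count == 0) return 0;
        if (reviews.Count <= 2) // nothing would be left after trimming
            return Math.Round(reviews.Average(r => r.Rating), 2);

        var trimmed = reviews
            .Select(r => r.Rating)
            .OrderBy(x => x)
            .Skip(1)       // drop the lowest
            .SkipLast(1)   // drop the highest
            .ToList();
        return Math.Round(trimmed.Average(), 2);
    }
}
```

Registering it is a one-line change in the DI wiring; `ReviewService` never learns it exists.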
ModerationPipeline.cs — Strategy as pipeline: chain of moderation checks
using ReviewRating.Models;
namespace ReviewRating.Moderation;
// ─── The contract for a single moderation check ──────
// Each check examines the review and returns a result: // Level 3
// pass, reject, or flag for human review. The pipeline
// runs them in sequence — any failure stops the chain.
public interface IModerationStrategy
{
string Name { get; }
ModerationResult Check(IReview review);
}
public record ModerationResult(
bool Passed,
string? Reason = null,
ReviewStatus SuggestedStatus = ReviewStatus.Approved);
// ─── SpamDetector ─────────────────────────────────────
// Catches low-effort spam: too short, ALL CAPS, // Level 3
// repeated characters ("aaaaaaa"), or suspicious URLs.
public sealed class SpamDetector : IModerationStrategy
{
public string Name => "Spam Detector";
public ModerationResult Check(IReview review)
{
if (string.IsNullOrWhiteSpace(review.Text))
return new(false, "Review text is empty.",
ReviewStatus.Rejected);
if (review.Text.Length < 10) // Level 5
return new(false, "Review too short (min 10 chars).",
ReviewStatus.Rejected);
// Check for ALL CAPS (more than 80% uppercase)
var upperRatio = (double)review.Text.Count(char.IsUpper)
/ review.Text.Length;
if (upperRatio > 0.8 && review.Text.Length > 20)
return new(false, "Excessive capitalization detected.",
ReviewStatus.Flagged);
// Check for repeated characters
if (HasRepeatedChars(review.Text, 5))
return new(false, "Repeated character spam detected.",
ReviewStatus.Rejected);
// Check for suspicious URLs — link-bearing reviews go to a human
if (review.Text.Contains("http://", StringComparison.OrdinalIgnoreCase) ||
review.Text.Contains("https://", StringComparison.OrdinalIgnoreCase))
return new(false, "Suspicious URL detected.",
ReviewStatus.Flagged);
return new(true);
}
private static bool HasRepeatedChars(string text, int threshold)
{
int count = 1;
for (int i = 1; i < text.Length; i++)
{
count = text[i] == text[i - 1] ? count + 1 : 1;
if (count >= threshold) return true;
}
return false;
}
}
// ─── ProfanityFilter ──────────────────────────────────
// Checks review text against a blocklist of banned // Level 3
// words. In production, this would use a proper NLP
// service — here we keep it simple with a word list.
public sealed class ProfanityFilter : IModerationStrategy
{
private readonly HashSet<string> _blockedWords;
public string Name => "Profanity Filter";
public ProfanityFilter(IEnumerable<string>? blockedWords = null)
{
_blockedWords = new HashSet<string>(
blockedWords ?? ["spam", "scam", "fake"],
StringComparer.OrdinalIgnoreCase);
}
public ModerationResult Check(IReview review)
{
var words = review.Text.Split(' ',
StringSplitOptions.RemoveEmptyEntries);
var found = words.FirstOrDefault(w =>
_blockedWords.Contains(w.Trim('.', ',', '!', '?')));
return found is not null
? new(false, $"Blocked word detected: '{found}'.",
ReviewStatus.Rejected)
: new(true);
}
}
// ─── PurchaseVerifier ─────────────────────────────────
// For product reviews, checks if the reviewer actually // Level 3
// bought the product. Non-verified reviews are flagged
// rather than rejected — they're still allowed but
// carry less trust.
public sealed class PurchaseVerifier : IModerationStrategy
{
public string Name => "Purchase Verifier";
public ModerationResult Check(IReview review)
{
// Only applies to product reviews
if (review is not ProductReview pr)
return new(true);
return pr.IsVerifiedPurchase
? new(true)
: new(true, "Unverified purchase — review will be marked.",
ReviewStatus.Approved);
}
}
// ─── ModerationPipeline ───────────────────────────────
// Runs all moderation strategies in sequence. // Level 3
// If ANY check fails, the review is rejected/flagged
// immediately — no need to run remaining checks.
// This is Strategy-as-Pipeline: same interface, chained.
public sealed class ModerationPipeline
{
private readonly List<IModerationStrategy> _strategies = [];
public ModerationPipeline(
params IModerationStrategy[] strategies)
{
_strategies.AddRange(strategies);
}
public ModerationResult RunAll(IReview review)
{
foreach (var strategy in _strategies)
{
var result = strategy.Check(review);
if (!result.Passed)
return result; // Short-circuit on first failure
}
return new ModerationResult(true);
}
public void AddStrategy(IModerationStrategy strategy) // Level 5
{
ArgumentNullException.ThrowIfNull(strategy);
_strategies.Add(strategy);
}
}
This is Strategy as a pipelineEach moderation check is its own Strategy implementation. But instead of picking ONE strategy (like rating calculation), the pipeline runs ALL of them in sequence. Think of it as an airport security line: each check (metal detector, bag scan, ID check) is independent, but they all run on every passenger. Fail any one of them and you don't fly.. Each moderation check is its own class implementing IModerationStrategy. The pipeline runs them all in order — spam detection first, profanity filter second, purchase verification third. If any check fails, the review is immediately rejected or flagged. Adding a new check (like "AI content detector") means adding one class and registering it in the pipeline. Zero changes to existing checks.
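Here is what "adding one class" looks like in practice. MaxLengthCheck is a hypothetical example (not in the chapter's pipeline) that flags wall-of-text reviews; it relies only on the `IModerationStrategy`, `ModerationResult`, and `ReviewStatus` types defined above:

```csharp
public sealed class MaxLengthCheck : IModerationStrategy
{
    private readonly int _maxChars;
    public MaxLengthCheck(int maxChars = 5000) => _maxChars = maxChars;
    public string Name => "Max Length Check";

    public ModerationResult Check(IReview review) =>
        review.Text.Length > _maxChars
            ? new ModerationResult(false,
                $"Review exceeds {_maxChars} characters.",
                ReviewStatus.Flagged)
            : new ModerationResult(true);
}

// Registration is the only other change — existing checks stay untouched:
// pipeline.AddStrategy(new MaxLengthCheck(maxChars: 5000));
```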
ReviewService.cs — The orchestrator that ties everything together
using System.Collections.Concurrent;
using ReviewRating.Models;
using ReviewRating.Moderation;
using ReviewRating.Ratings;
namespace ReviewRating;
// ─── IReviewObserver ─────────────────────────────────
// Observers get notified when reviews are submitted, // Level 4
// approved, or rejected. Each observer reacts in its
// own way without the service knowing what they do.
public interface IReviewObserver
{
void OnReviewSubmitted(IReview review);
void OnReviewApproved(IReview review);
void OnReviewRejected(IReview review, string reason);
}
// ─── Observer Implementations ─────────────────────────
public sealed class SellerNotifier : IReviewObserver // Level 4
{
public void OnReviewSubmitted(IReview review) { }
public void OnReviewApproved(IReview review) =>
Console.WriteLine($"[Seller] New approved review: {review.Rating} stars");
public void OnReviewRejected(IReview review, string reason) { }
}
public sealed class EmailNotifier : IReviewObserver // Level 4
{
public void OnReviewSubmitted(IReview review) =>
Console.WriteLine($"[Email] Review received — pending moderation.");
public void OnReviewApproved(IReview review) =>
Console.WriteLine($"[Email] Your review was published!");
public void OnReviewRejected(IReview review, string reason) =>
Console.WriteLine($"[Email] Your review was rejected: {reason}");
}
public sealed class AnalyticsLogger : IReviewObserver // Level 4
{
public void OnReviewSubmitted(IReview review) =>
Console.WriteLine($"[Analytics] review_submitted rating={review.Rating}");
public void OnReviewApproved(IReview review) =>
Console.WriteLine($"[Analytics] review_approved id={review.ReviewId}");
public void OnReviewRejected(IReview review, string reason) =>
Console.WriteLine($"[Analytics] review_rejected reason={reason}");
}
public sealed class AggregateUpdater : IReviewObserver // Level 4
{
private readonly IRatingStrategy _ratingStrategy;
private readonly ConcurrentDictionary<string, double> _cache = new();
public AggregateUpdater(IRatingStrategy ratingStrategy)
{
_ratingStrategy = ratingStrategy;
}
public void OnReviewApproved(IReview review)
{
// Recalculate aggregate would happen here // Level 4
Console.WriteLine($"[Aggregate] Rating cache updated for review {review.ReviewId}");
}
public void OnReviewSubmitted(IReview review) { }
public void OnReviewRejected(IReview review, string reason) { }
}
// ─── RateLimiter ──────────────────────────────────────
// Prevents review bombing: max N reviews per user // Level 5
// per time window. Uses a sliding window approach.
public sealed class RateLimiter
{
private readonly ConcurrentDictionary<string, List<DateTimeOffset>>
_submissions = new();
private readonly int _maxPerWindow;
private readonly TimeSpan _window;
private readonly ITimeProvider _timeProvider; // Level 6
public RateLimiter(ITimeProvider timeProvider,
int maxPerWindow = 5, int windowMinutes = 60)
{
_timeProvider = timeProvider;
_maxPerWindow = maxPerWindow;
_window = TimeSpan.FromMinutes(windowMinutes);
}
public ReviewResult<bool> CheckLimit(string userId)
{
var now = _timeProvider.UtcNow;
var timestamps = _submissions.GetOrAdd(userId, _ => []);
lock (timestamps) // Level 5
{
// Remove expired entries
timestamps.RemoveAll(t => now - t > _window);
if (timestamps.Count >= _maxPerWindow)
return ReviewResult<bool>.Fail(
$"Rate limit exceeded. Max {_maxPerWindow} reviews per {_window.TotalMinutes} minutes.");
timestamps.Add(now);
return ReviewResult<bool>.Ok(true);
}
}
}
// ─── ReviewService ────────────────────────────────────
// The orchestrator. Validates, moderates, stores, // Level 0 (evolved)
// calculates ratings, and notifies observers.
public interface IModerationService // Level 6
{
ModerationResult Moderate(IReview review);
}
public interface IRatingCalculator // Level 6
{
double CalculateRating(IReadOnlyList<IReview> reviews);
}
public interface ITimeProvider // Level 6
{
DateTimeOffset UtcNow { get; }
}
public sealed class SystemTimeProvider : ITimeProvider // Level 6
{
public DateTimeOffset UtcNow => DateTimeOffset.UtcNow;
}
public sealed class ReviewService
{
private readonly ConcurrentDictionary<string, List<IReview>>
_reviewsByTarget = new(); // Level 0
private readonly List<IReviewObserver> _observers = []; // Level 4
private readonly IModerationService _moderationService; // Level 6
private readonly IRatingCalculator _ratingCalculator; // Level 6
private readonly RateLimiter _rateLimiter; // Level 5
private readonly object _lock = new(); // Level 5
public ReviewService(
IModerationService moderationService,
IRatingCalculator ratingCalculator,
RateLimiter rateLimiter)
{
_moderationService = moderationService;
_ratingCalculator = ratingCalculator;
_rateLimiter = rateLimiter;
}
// ─ Observer management ─ // Level 4
public void Subscribe(IReviewObserver observer) =>
_observers.Add(observer);
public void Unsubscribe(IReviewObserver observer) =>
_observers.Remove(observer);
// ─ Submit a review ─
public ReviewResult<IReview> SubmitReview(IReview review)
{
// Step 1: Validate input // Level 5
if (review.Rating < 1 || review.Rating > 5)
return ReviewResult<IReview>.Fail(
"Rating must be between 1 and 5.");
if (string.IsNullOrWhiteSpace(review.Text))
return ReviewResult<IReview>.Fail(
"Review text is required.");
// Step 2: Rate limiting // Level 5
var rateCheck = _rateLimiter.CheckLimit(review.UserId);
if (!rateCheck.IsSuccess)
return ReviewResult<IReview>.Fail(rateCheck.Error!);
// Step 3: Moderation pipeline // Level 3
var modResult = _moderationService.Moderate(review);
if (!modResult.Passed)
{
review.Status = modResult.SuggestedStatus;
NotifyRejected(review, modResult.Reason ?? "Moderation failed");
return ReviewResult<IReview>.Fail(
$"Review rejected: {modResult.Reason}");
}
// Step 4: Store the review // Level 0
lock (_lock)
{
var targetId = GetTargetId(review);
var reviews = _reviewsByTarget.GetOrAdd(targetId, _ => []);
// Check for duplicate // Level 5
if (reviews.Any(r =>
r.UserId == review.UserId &&
r.Status == ReviewStatus.Approved))
return ReviewResult<IReview>.Fail(
"You have already reviewed this item.");
review.Status = ReviewStatus.Approved;
reviews.Add(review);
}
// Step 5: Notify observers // Level 4
NotifySubmitted(review);
NotifyApproved(review);
return ReviewResult<IReview>.Ok(review);
}
// ─ Get aggregate rating ─ // Level 2
public double GetRating(string targetId)
{
if (!_reviewsByTarget.TryGetValue(targetId, out var reviews))
return 0;
var approved = reviews
.Where(r => r.Status == ReviewStatus.Approved)
.ToList();
return _ratingCalculator.CalculateRating(approved);
}
// ─ Get reviews with pagination ─ // Level 7
public IReadOnlyList<IReview> GetReviews(string targetId,
int page = 1, int pageSize = 10)
{
if (!_reviewsByTarget.TryGetValue(targetId, out var reviews))
return [];
return reviews
.Where(r => r.Status == ReviewStatus.Approved)
.OrderByDescending(r => r.CreatedAt)
.Skip((page - 1) * pageSize)
.Take(pageSize)
.ToList();
}
private static string GetTargetId(IReview review) => review switch
{
ProductReview pr => $"product:{pr.ProductId}",
SellerReview sr => $"seller:{sr.SellerId}",
DeliveryReview dr => $"delivery:{dr.OrderId}",
_ => $"unknown:{review.ReviewId}"
};
private void NotifySubmitted(IReview review) =>
_observers.ForEach(o => o.OnReviewSubmitted(review));
private void NotifyApproved(IReview review) =>
_observers.ForEach(o => o.OnReviewApproved(review));
private void NotifyRejected(IReview review, string reason) =>
_observers.ForEach(o => o.OnReviewRejected(review, reason));
}
This is the orchestratorAn orchestrator coordinates multiple subsystems without doing their work. ReviewService doesn't know HOW to calculate ratings (Strategy does that), HOW to filter spam (ModerationPipeline does that), or WHAT to do when a review is approved (Observers do that). It just calls each piece in the right order.. Notice how ReviewService doesn't contain any rating math, moderation logic, or notification code. It delegates to IModerationService, IRatingCalculator, and the registered IReviewObserver implementations. Each piece can be swapped, tested, or extended independently. The service just coordinates the workflow: validate, rate-limit, moderate, store, notify.
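This is also why `ITimeProvider` exists (Level 6): a fake clock makes time-dependent code testable without sleeping. A sketch of such a test — FakeTimeProvider is a test double I'm introducing here, not part of the production wiring; it implements the `ITimeProvider` interface from the listing above:

```csharp
var clock = new FakeTimeProvider();
var limiter = new RateLimiter(clock, maxPerWindow: 5, windowMinutes: 60);

for (int i = 0; i < 5; i++)
    if (!limiter.CheckLimit("user-1").IsSuccess)
        throw new Exception("first five submissions should pass");

if (limiter.CheckLimit("user-1").IsSuccess)
    throw new Exception("sixth submission should be rate-limited");

clock.Advance(TimeSpan.FromMinutes(61)); // jump past the window — instantly
if (!limiter.CheckLimit("user-1").IsSuccess)
    throw new Exception("window expired; submissions should pass again");

public sealed class FakeTimeProvider : ITimeProvider
{
    public DateTimeOffset UtcNow { get; private set; } = DateTimeOffset.UtcNow;
    public void Advance(TimeSpan by) => UtcNow += by;
}
```

With the real `SystemTimeProvider`, this test would need to actually wait an hour. With the fake, it runs in microseconds.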
Program.cs — DI wiring and demo
using ReviewRating;
using ReviewRating.Models;
using ReviewRating.Ratings;
using ReviewRating.Moderation;
// ─── DI Wiring ────────────────────────────────────────
// Level 6: Adapter — wraps ModerationPipeline behind
// the IModerationService interface for DI
var timeProvider = new SystemTimeProvider();
var pipeline = new ModerationPipeline(
new SpamDetector(),
new ProfanityFilter(),
new PurchaseVerifier());
// Level 6: Wrap pipeline in a service adapter
IModerationService moderationService = new ModerationServiceAdapter(pipeline);
// Level 2: Pick a rating strategy (swap via config)
IRatingStrategy ratingStrategy = new BayesianAverage(
globalAverage: 3.5, confidenceParameter: 10);
IRatingCalculator ratingCalculator = new RatingCalculatorAdapter(ratingStrategy);
// Level 5: Rate limiter (5 reviews per hour per user)
var rateLimiter = new RateLimiter(timeProvider,
maxPerWindow: 5, windowMinutes: 60);
// Level 0: Create the service
var service = new ReviewService(
moderationService, ratingCalculator, rateLimiter);
// Level 4: Register observers
service.Subscribe(new SellerNotifier());
service.Subscribe(new EmailNotifier());
service.Subscribe(new AnalyticsLogger());
service.Subscribe(new AggregateUpdater(ratingStrategy));
// ─── Demo ─────────────────────────────────────────────
Console.WriteLine("=== Review & Rating System Demo ===\n");
// Submit a valid product review
var result1 = service.SubmitReview(new ProductReview
{
UserId = "user-42",
ProductId = "laptop-x1",
Rating = 5,
Text = "Absolutely love this laptop. Battery lasts all day!",
IsVerifiedPurchase = true
});
Console.WriteLine($"Review 1: {(result1.IsSuccess ? "Accepted" : result1.Error)}\n");
// Submit a spam review (too short)
var result2 = service.SubmitReview(new ProductReview
{
UserId = "spammer-99",
ProductId = "laptop-x1",
Rating = 1,
Text = "bad", // Too short — spam detector catches this
IsVerifiedPurchase = false
});
Console.WriteLine($"Review 2: {(result2.IsSuccess ? "Accepted" : result2.Error)}\n");
// Submit a seller review
var result3 = service.SubmitReview(new SellerReview
{
UserId = "user-55",
SellerId = "seller-abc",
Rating = 4,
Text = "Good communication, fast shipping. Minor packaging issue.",
CommunicationRating = 5
});
Console.WriteLine($"Review 3: {(result3.IsSuccess ? "Accepted" : result3.Error)}\n");
// Check aggregate rating
var rating = service.GetRating("product:laptop-x1");
Console.WriteLine($"Laptop X1 rating (Bayesian): {rating}/5.0");
// ─── Adapter classes (thin wrappers for DI) ───────────
public sealed class ModerationServiceAdapter : IModerationService
{
private readonly ModerationPipeline _pipeline;
public ModerationServiceAdapter(ModerationPipeline p) => _pipeline = p;
public ModerationResult Moderate(IReview review) => _pipeline.RunAll(review);
}
public sealed class RatingCalculatorAdapter : IRatingCalculator
{
private readonly IRatingStrategy _strategy;
public RatingCalculatorAdapter(IRatingStrategy s) => _strategy = s;
public double CalculateRating(IReadOnlyList<IReview> reviews) =>
_strategy.Calculate(reviews);
}
The Program.cs file is where all the pieces connect. Notice the Adapter patternAn Adapter wraps one interface to make it compatible with another. Here, ModerationServiceAdapter wraps the ModerationPipeline (which has its own RunAll method) behind the IModerationService interface (which ReviewService expects). It's a thin translation layer — no logic, just shape conversion. in action: ModerationServiceAdapter wraps the pipeline behind the IModerationService interface, and RatingCalculatorAdapter wraps the strategy behind IRatingCalculator. These thin adapters exist purely so that ReviewService depends on interfaces, not concrete classes — making every dependency swappable in tests.
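The payoff shows up in tests: because ReviewService depends on `IModerationService` rather than the concrete pipeline, the whole moderation stage can be replaced with a stub. This test double is hypothetical — it isn't part of the production wiring:

```csharp
// Approves everything, so tests can exercise storage, duplicate
// detection, and notifications without moderation rules interfering.
public sealed class ApproveEverythingModeration : IModerationService
{
    public ModerationResult Moderate(IReview review) =>
        new ModerationResult(true);
}

// var service = new ReviewService(
//     new ApproveEverythingModeration(), ratingCalculator, rateLimiter);
```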
Section 12
Pattern Spotting — X-Ray Vision
You've been using design patterns for the last seven levels. But here's the interesting part: you might not have noticed all of them. Some patterns are obvious — we named them as we built them. Others are hiding in the code, doing their job quietly without anyone putting a label on them.
This section is about developing pattern recognitionThe ability to look at code and see the underlying design patterns at work. Senior engineers do this unconsciously — they glance at code and immediately see "that's a Strategy" or "that's a Pipeline." This skill comes from practice: once you've built patterns yourself, you spot them everywhere. — the skill of looking at code and seeing the structural bones underneath. Let's start with a challenge.
Think First #10
We explicitly named two patterns during the build: Strategy (for ratings AND moderation) and Observer (for notifications). But there are at least two more patterns hiding in our code that we never mentioned by name. Hint: look at how the moderation pipeline chains checks, and think about the lifecycle of a review from submission to approval.
Take your time.
Reveal Answer
Chain of ResponsibilityChain of Responsibility passes a request through a chain of handlers. Each handler either processes the request or passes it to the next one. In our moderation pipeline, each check (spam, profanity, purchase) gets a chance to reject the review. If it passes, the next handler in the chain takes over.: The ModerationPipeline is a textbook Chain of Responsibility. Each moderation strategy gets the review, decides pass/fail, and if it passes, the next strategy in the chain takes over. The key tell: a list of handlers processed sequentially where any one can short-circuit the chain.
Template MethodTemplate Method defines the skeleton of an algorithm in a base method, letting subclasses override specific steps. In our system, every review follows the same lifecycle: validate → rate-limit → moderate → store → notify. That fixed sequence IS the template. The "steps" (which moderation checks to run, which rating algorithm to use) vary, but the ORDER never changes.: The SubmitReview() method follows a fixed lifecycle: validate → rate-limit → moderate → store → notify. That sequence never changes — only the individual steps vary (which moderation checks, which observers). That's the Template Method pattern: a fixed algorithm skeleton with pluggable steps.
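Our SubmitReview() keeps the template implicit. The classic, explicit form of Template Method uses an abstract base class — a generic sketch for comparison, not code from the chapter:

```csharp
public abstract class ReviewWorkflow
{
    // The fixed skeleton: subclasses cannot reorder these steps.
    public bool Run(string text)
    {
        if (!Validate(text)) return false;
        if (!Moderate(text)) return false;
        Store(text);
        Notify(text);
        return true;
    }

    // The pluggable steps: each subclass fills these in.
    protected abstract bool Validate(string text);
    protected abstract bool Moderate(string text);
    protected abstract void Store(string text);
    protected abstract void Notify(string text);
}
```

We got the same guarantee — a fixed order with swappable steps — through composition (injected strategies and observers) rather than inheritance, which is the more common modern choice.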
The Explicit Patterns
These are the patterns we named during the build. For each one, let's look at where it lives in the code, what it enables, and what would happen without it.
Strategy — Swappable Rating Algorithms
Where it lives:IRatingStrategy + SimpleAverage, WeightedAverage, BayesianAverage
What it enables: Amazon uses Bayesian averaging to prevent gaming. A small shop might prefer simple averages. A news site might weight recent reviews more heavily. All three algorithms exist as plug-and-play classes. Switching from Bayesian to Weighted? Change one line in the DI configuration.
Without it: A giant switch statement inside GetRating() that grows every time someone invents a new averaging method. After 5 algorithms, the switch is 200 lines and nobody wants to touch it.
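The anti-gaming effect is easy to check with arithmetic. Using the same defaults as Program.cs (C = 10, global average 3.5), two fake 5-star reviews barely move the score, while 500 genuine reviews dominate the prior:

```csharp
// Bayesian formula from the listing: (C * globalAvg + sum) / (C + count)
double Bayesian(double sum, int count, double globalAvg = 3.5, int c = 10) =>
    Math.Round((c * globalAvg + sum) / (c + count), 2);

Console.WriteLine(Bayesian(sum: 10, count: 2));     // 2 reviews × 5.0 stars → 3.75, not 5.0
Console.WriteLine(Bayesian(sum: 2350, count: 500)); // 500 reviews × 4.7 stars → 4.68
```

The product with 500 reviews averaging 4.7 correctly outranks the one with two perfect (and suspicious) scores.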
Strategy (as Pipeline) — Moderation Checks
Where it lives:IModerationStrategy + SpamDetector, ProfanityFilter, PurchaseVerifier + ModerationPipeline
What it enables: Each check is independent. The spam detector doesn't know the profanity filter exists. Adding "AI content detection" means writing one class and registering it in the pipeline. Reordering checks (maybe profanity first is cheaper than spam?) is a one-line change. Disabling a check for testing? Just don't register it.
Without it: One enormous ModerateReview() method with nested if-statements. Adding a new check means modifying that method — risking breaking all existing checks. Testing one check in isolation? Impossible.
Observer — React to Review Events
Where it lives:IReviewObserver + SellerNotifier, EmailNotifier, AnalyticsLogger, AggregateUpdater
What it enables: When a review is approved, the seller gets notified, the user gets an email, analytics are logged, and the aggregate rating is recalculated. None of these features exist inside the review service — they're plugged in from outside. Adding "push notification to mobile app" means writing one new observer. Zero changes to the service.
Without it: The review service would contain SendEmail(), NotifySeller(), LogAnalytics(), and UpdateAggregate() methods inline. Every new "reaction" bloats the service. Testing review submission means dealing with email servers and analytics APIs.
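Making the "one new observer" claim concrete — a hypothetical PushNotifier (not in the chapter's build). It plugs into the same Subscribe() call as the other observers, and ReviewService never changes:

```csharp
public sealed class PushNotifier : IReviewObserver
{
    public void OnReviewSubmitted(IReview review) { }
    public void OnReviewApproved(IReview review) =>
        Console.WriteLine($"[Push] Review {review.ReviewId} is live.");
    public void OnReviewRejected(IReview review, string reason) =>
        Console.WriteLine($"[Push] Review rejected: {reason}");
}

// service.Subscribe(new PushNotifier());
```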
How Patterns Interact at Runtime
When a user submits a review, multiple patterns fire in sequence. Here's the interaction flow — notice how each pattern handles exactly ONE concern:
Hidden Patterns: Two patterns were hiding in the code without ever being named during the build. Chain of Responsibility emerged naturally when we needed to run multiple moderation checks in sequence, and Template Method hides in the fixed lifecycle of SubmitReview(). A third, Adapter, only earned its name at the wiring stage — the thin wrappers (ModerationServiceAdapter, RatingCalculatorAdapter) that bridge concrete classes to DI interfaces. Patterns aren't things you "decide to use" — they're structures that emerge naturally when you solve problems cleanly.
Section 13
The Growing Diagram — Complete Evolution
This is the visual summary of the entire Constraint Game. Watch the system grow from a single record type to a 25-type architecture, one level at a time. Each stage adds exactly the types that were forced into existence by that level's constraint.
Think First #11
Look at the 8 stages below. Which single level added the most types? Why do you think that level needed so many new types compared to the others? And which level added no new design abstractions — no new interfaces or patterns — yet changed the most existing code?
60 seconds. Think about it before scrolling.
Reveal Answer
Level 3 (Moderation) and Level 4 (Observer) tied — each added 4–5 new types. Moderation introduced an interface plus three concrete strategies and a pipeline orchestrator. Observer introduced an interface plus four concrete listeners. Both levels deal with extensibility — the need for multiple independent implementations of the same contract — which naturally demands more types.
Level 5 added zero new abstractions but changed the most existing code. Edge case handling (validation, rate limiting, duplicate detection, edit history) threads through everything. It doesn't create new design concepts — it hardens the existing ones. That's why edge case work feels exhausting: it's invisible, scattered, and touches every file.
Entity Summary Table
Here's every type in the system, what kind of type it is, which level introduced it, and why it's that kind of type. This table is your study guide for the entire Review & Rating architecture.
Type
Kind
Level
Why This Kind?
Review
record
L0
Immutable data carrier — no behavior, just fields
ReviewStore
class
L0
Mutable state — holds the list of reviews
IReview
interface
L1
Abstraction: service doesn't care if review is for product, seller, or delivery
ProductReview
class
L1
Has product-specific fields (ProductId, PhotoUrls, IsVerifiedPurchase)
SellerReview
class
L1
Has seller-specific fields (SellerId, CommunicationRating)
DeliveryReview
class
L1
Has delivery-specific fields (OrderId, WasOnTime)
IRatingStrategy
interface
L2
Strategy patternDefines a family of algorithms and makes them interchangeable. The service calls Calculate() on whatever strategy is plugged in.: one interface, multiple implementations
SimpleAverage
class
L2
Algorithm: sum / count — simplest approach
WeightedAverage
class
L2
Algorithm: recent reviews weighted more via exponential decay
BayesianAverage
class
L2
Algorithm: pulls toward global mean when review count is low
IModerationStrategy
interface
L3
Strategy for moderation: each check is independent and testable
SpamDetector
class
L3
Checks for empty text, ALL CAPS, repeated characters
ProfanityFilter
class
L3
Checks text against a blocklist of banned words
PurchaseVerifier
class
L3
Flags unverified product reviews (soft check, not rejection)
ModerationPipeline
class
L3
Runs all strategies in sequence, short-circuits on failure
IReviewObserver
interface
L4
Observer patternWhen a review is submitted/approved/rejected, the service notifies all registered observers. Each observer can react however it wants — notify seller, send email, log analytics — without the service knowing about any of them.: decoupled event listeners
SellerNotifier
class
L4
Observer: alerts seller when a review is approved
EmailNotifier
class
L4
Observer: sends email confirmations to the reviewer
AnalyticsLogger
class
L4
Observer: logs review events for data analysis
AggregateUpdater
class
L4
Observer: recalculates aggregate rating after approval
ReviewResult<T>
record
L5
Result type: carries a success value or an error without exceptions
The big takeaway: 25+ types sounds like a lot. But none of them were added "just because." Every type was forced into existence by a specific constraint. Remove any level, and the types it introduced become unnecessary. That's the difference between accidental complexityComplexity that comes from poor design choices — unnecessary abstractions, premature patterns, over-engineering. It can be removed without losing functionality. (types you added because you thought you should) and essential complexityComplexity that's inherent to the problem. A review system with multiple review types, rating algorithms, moderation checks, and observer notifications IS genuinely complex. The types reflect that reality, not a design choice. (types the problem demanded).
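The Bayesian row deserves a concrete sketch, since it's the least obvious algorithm in the table. A minimal version under illustrative prior values (globalMean = 4.0, priorWeight = 25 — assumptions for this sketch, not the numbers from the build):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IRatingStrategy
{
    double Calculate(IReadOnlyList<int> ratings);
}

public sealed class BayesianAverage : IRatingStrategy
{
    private readonly double _globalMean;   // m: site-wide mean rating
    private readonly double _priorWeight;  // C: "virtual" reviews backing the prior

    public BayesianAverage(double globalMean = 4.0, double priorWeight = 25)
    {
        _globalMean = globalMean;
        _priorWeight = priorWeight;
    }

    // (C*m + sum) / (C + n): with few reviews the result stays near m;
    // as n grows, it converges to the raw average.
    public double Calculate(IReadOnlyList<int> ratings) =>
        (_priorWeight * _globalMean + ratings.Sum()) / (_priorWeight + ratings.Count);
}
```

With these priors, one 5-star review yields (25×4 + 5) / 26 ≈ 4.04, not 5.0 — exactly the "pull toward the global mean when review count is low" the table describes.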
Section 14
Five Bad Solutions — Learn What NOT to Do
You've seen the good solution — built incrementally over 7 levels. Now let's study five bad approaches that people commonly reach for. Each one is tempting for a different reason, and each one breaks in a different way.
Bad Solution #1 — The God Class
What it is: Everything crammed into one massive class. Rating calculation, spam detection, profanity filtering, email notifications, analytics logging, and search — all inline, all tangled together.
GodClass.cs — Everything in one place
public class ReviewSystem
{
    private List<(string user, int rating, string text)> _reviews = new();

    public void Submit(string user, int rating, string text, string productId)
    {
        // Spam check inline
        if (text.Length < 10) { Console.WriteLine("Too short"); return; }
        if (text.ToUpper() == text) { Console.WriteLine("ALL CAPS"); return; }

        // Profanity check inline
        var banned = new[] { "spam", "scam", "fake" };
        if (banned.Any(b => text.Contains(b)))
        { Console.WriteLine("Profanity"); return; }

        _reviews.Add((user, rating, text));

        // Calculate average inline
        var avg = _reviews.Average(r => r.rating);

        // Send email inline
        SendEmail(user, $"Your review was published! Avg: {avg}");

        // Notify seller inline
        Console.WriteLine($"Seller: new review {rating} stars");

        // Log analytics inline
        HttpClient client = new();
        client.PostAsync("https://analytics.example.com/event",
            new StringContent($"review_submitted:{productId}"));
    }

    private void SendEmail(string to, string msg) { /* SMTP code */ }
    // ... 800 more lines of tangled concerns
}
The Moment It Dies: The trust & safety team wants to add an "AI content detector" moderation check. The developer opens this 1400-line file, scrolls past email code, scrolls past analytics code, finds the inline spam check, adds another if-statement, accidentally breaks the profanity filter with a missing return, and fake reviews start appearing on the site.
The fix: Apply the Single Responsibility PrincipleEvery class should have one reason to change. The service orchestrates the workflow. Rating strategies calculate. Moderation strategies check. Observers react. Each concern lives in its own class.. Split into ReviewService (orchestrates), IRatingStrategy implementations (calculate ratings), IModerationStrategy implementations (check content), and IReviewObserver implementations (react to events). This is exactly what we built across Levels 0–6.
Maps to: SRP + Strategy + Observer + DI
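The split can be sketched as a thin orchestrator. The signatures below are simplified assumptions for this sketch (raw strings and ints instead of IReview) so the contrast with the God class stays visible:

```csharp
using System.Collections.Generic;
using System.Linq;

// Each concern lives behind its own interface; the service only wires them together.
public interface IRatingStrategy { double Calculate(IReadOnlyList<int> ratings); }
public interface IModerationStrategy { bool Passes(string text); }
public interface IReviewObserver { void OnReviewApproved(string user, int rating); }

public sealed class SimpleAverage : IRatingStrategy
{
    public double Calculate(IReadOnlyList<int> ratings) =>
        ratings.Count == 0 ? 0 : ratings.Average();
}

public sealed class ReviewService
{
    private readonly IRatingStrategy _rating;
    private readonly List<IModerationStrategy> _moderators;
    private readonly List<IReviewObserver> _observers;
    private readonly List<int> _ratings = new();

    public ReviewService(IRatingStrategy rating,
                         List<IModerationStrategy> moderators,
                         List<IReviewObserver> observers)
    {
        _rating = rating;
        _moderators = moderators;
        _observers = observers;
    }

    // The service ORCHESTRATES: no spam rules, no SMTP, no analytics in here.
    public bool Submit(string user, int rating, string text)
    {
        foreach (var mod in _moderators)
            if (!mod.Passes(text)) return false;   // each check is its own class

        _ratings.Add(rating);
        foreach (var obs in _observers)
            obs.OnReviewApproved(user, rating);    // reactions are observers
        return true;
    }

    public double AverageRating() => _rating.Calculate(_ratings);
}
```

Adding the trust & safety team's "AI content detector" now means writing one new `IModerationStrategy` class — no scrolling past email code, no risk of breaking the profanity filter.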
Bad Solution #2 — The Over-Engineer
What it is: Every design pattern in the book, applied upfront before any real constraint demands it. The code technically follows every SOLID principle, but nobody can read it.
OverEngineered.cs — Patterns for the sake of patterns
// To submit a review, you need ALL of these:
public interface IReviewFactory { IReview Create(ReviewDto dto); }
public interface IReviewBuilder { IReviewBuilder WithRating(int r); IReview Build(); }
public interface IReviewCommandHandler { Task Handle(IReviewCommand cmd); }
public interface IReviewCommand { }
public record SubmitReviewCommand(ReviewDto Dto) : IReviewCommand;
public interface IReviewEventMediator { void Publish(IReviewEvent e); }
public interface IReviewEvent { }
public record ReviewSubmittedEvent(IReview Review) : IReviewEvent;
public interface IReviewRepository { void Save(IReview r); }
public interface IReviewUnitOfWork { void Commit(); }
public interface IModerationChainBuilder { IModerationChain Build(); }
public interface IRatingStrategyResolver { IRatingStrategy Resolve(string type); }
// The actual "submit" logic? Buried 14 layers deep.
// Total: 16 types just to store a string and a number.
The Moment It Dies: A junior developer joins the team and needs to "change the spam threshold from 10 to 20 characters." They spend 4 hours tracing through factories, builders, mediators, and command handlers before finding the one line. They update the wrong class. The bug ships.
The fix: YAGNI"You Aren't Gonna Need It." Don't add abstractions until a real constraint demands them. A Factory is great when you have complex object construction. But if creating a review is just setting 3 properties, a Factory adds complexity with zero benefit. — You Aren't Gonna Need It. Start with the simplest code that works (Level 0). Add patterns ONLY when a constraint forces them. Our Constraint Game approach guarantees this: every pattern earned its place by solving a real problem.
Maps to: YAGNI + incremental design
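For contrast, this is roughly what Level 0 looked like — the whole system in two types, before any constraint arrived (a sketch, not the exact Level 0 listing):

```csharp
using System.Collections.Generic;
using System.Linq;

// Level 0: the simplest code that works. No interfaces, no patterns.
// Everything else in this article was EARNED by a later constraint.
public record Review(string User, int Rating, string Text);

public class ReviewStore
{
    private readonly List<Review> _reviews = new();

    public void Add(Review review) => _reviews.Add(review);

    public double Average() =>
        _reviews.Count == 0 ? 0 : _reviews.Average(r => r.Rating);
}
```

Sixteen interfaces to do what these two types do is the over-engineer's tax: complexity paid up front for constraints that may never arrive.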
Bad Solution #3 — The Happy-Path Hero
What it is: Actually well-structured! Clean patterns, good naming. Looks production-ready. But: no rate limiting, no duplicate detection, no moderation edge cases. Works perfectly in dev. First week in production: review bombing.
HappyPath.cs — Clean but fragile
public class ReviewService
{
    private readonly IRatingStrategy _ratingStrategy;
    private readonly List<IReviewObserver> _observers = new();
    private readonly Dictionary<string, List<IReview>> _reviews = new();

    // No rate limiting — bots can submit 1000 reviews/minute
    // No moderation — profanity goes straight to the site
    // No duplicate check — same user reviews same product 10 times
    // No lock — concurrent requests corrupt state
    public void Submit(IReview review)
    {
        var targetId = GetTargetId(review);
        if (!_reviews.ContainsKey(targetId))
            _reviews[targetId] = new();
        _reviews[targetId].Add(review); // No checks at all!
        _observers.ForEach(o => o.OnReviewSubmitted(review));
    }
}
Why It's the Most Dangerous: Solutions #1 and #2 are obviously bad. A code reviewer catches them in 5 minutes. But this? It looks professional. Clean patterns, good naming, proper interfaces. It passes code review. It passes unit tests (because the tests are also happy-path). The abuse only appears when real users — and real bad actors — start using the system.
The fix: Apply Level 3's moderation pipeline (spam + profanity + purchase verification), Level 5's What If? frameworkBefore calling any feature "done," ask: What if a bot sends 1000 reviews? What if the same user reviews twice? What if the text contains profanity? What if two requests arrive at the same time? Every "What If?" becomes a guard clause, a rate limiter, or a moderation check. (rate limiting + duplicate detection + validation), and Level 5's concurrency protection (lock around shared state).
Maps to: Level 3 + Level 5 (moderation, hardening, concurrency)
Bad Solution #4 — No Moderation
What it is: Reviews are published immediately upon submission. No spam check, no profanity filter, no purchase verification. The system trusts every user completely. This seems efficient — why add overhead? — until the first wave of abuse hits.
NoModeration.cs — Trust everyone
public ReviewResult<IReview> SubmitReview(IReview review)
{
    // No spam check — "aaaa" is valid review text
    // No profanity filter — profanity goes live
    // No purchase verification — anyone can review anything
    // No rate limiting — submit 1000 reviews per second
    review.Status = ReviewStatus.Approved; // Auto-approve!
    _reviews.GetOrAdd(GetTargetId(review), _ => []).Add(review);
    NotifyApproved(review);
    return ReviewResult<IReview>.Ok(review);
}
// "It's so fast! No moderation overhead!"
// One week later: trust & safety team pulls the feature.
The Moment It Dies: A competitor uses bots to submit 500 fake 1-star reviews on your top product in 30 minutes. The product's rating drops from 4.7 to 2.1. Sales plummet 60%. Restoring trust takes months. The cost of "no moderation overhead" is catastrophic.
The fix: Introduce the ModerationPipelineA pipeline of independent moderation checks that every review passes through before publication. Each check (SpamDetector, ProfanityFilter, PurchaseVerifier) is its own class implementing IModerationStrategy. The pipeline runs them in sequence — any failure rejects or flags the review. (Level 3). Every review passes through spam detection, profanity filtering, and purchase verification before publication. Adding new checks (AI content detector, duplicate image detector) means adding one class. The pipeline is extensible, testable, and each check can be enabled/disabled independently.
Maps to: Level 3 (Moderation Pipeline = Strategy as Chain of Responsibility)
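A minimal sketch of that pipeline, assuming simplified Check signatures (raw text instead of IReview):

```csharp
using System.Collections.Generic;

// Every review passes through the pipeline before publication.
// Each check is one class; the pipeline short-circuits on the first failure.
public interface IModerationStrategy
{
    (bool Passed, string Reason) Check(string text);
}

public sealed class SpamDetector : IModerationStrategy
{
    public (bool Passed, string Reason) Check(string text) =>
        text.Trim().Length < 10 ? (false, "Too short") : (true, "");
}

public sealed class ModerationPipeline
{
    private readonly List<IModerationStrategy> _checks = new();

    public ModerationPipeline Add(IModerationStrategy check)
    {
        _checks.Add(check);  // a new check = one line of wiring, zero edits elsewhere
        return this;
    }

    // Runs checks in order, short-circuiting on the first failure.
    public (bool Passed, string Reason) Run(string text)
    {
        foreach (var check in _checks)
        {
            var result = check.Check(text);
            if (!result.Passed) return result;
        }
        return (true, "");
    }
}
```

The AI content detector from the trust & safety backlog slots in as one more `Add(...)` call — the pipeline itself never changes.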
Bad Solution #5 — Polling for Notifications
What it is: Instead of the review service telling sellers about new reviews, the seller dashboard asks the database every 5 seconds: "Any new reviews? How about now?" This is pollingPolling means repeatedly checking for changes at fixed intervals, whether or not anything actually changed. It wastes resources when nothing changed and responds slowly when something did. The alternative is event-driven: the source TELLS you when something happens., and it's the opposite of Observer.
PollingDashboard.cs — Checking every 5 seconds
public class SellerDashboardPoller
{
    private readonly ReviewService _service;
    private int _lastKnownCount = 0;

    // Runs on a background timer — every 5 seconds
    public async Task PollForNewReviews(string sellerId)
    {
        while (true)
        {
            await Task.Delay(5000);
            var reviews = _service.GetReviews($"seller:{sellerId}");
            if (reviews.Count > _lastKnownCount)
            {
                Console.WriteLine("New review!");
                _lastKnownCount = reviews.Count;
            }
            // 10,000 sellers = 120,000 DB queries/minute
            // A product gets maybe 5 reviews per DAY.
            // 99.9% of polls return nothing new.
        }
    }
}
The Moment It Dies: Black Friday. 50,000 sellers watching their dashboards. The polling service fires 600,000 database queries per minute. The database CPU hits 100%. Review submissions start timing out. No new reviews can be posted during the busiest shopping day of the year.
The fix: Replace polling with the Observer patternInstead of asking "any new reviews?" every 5 seconds, the service TELLS observers when a review is approved. OnReviewApproved fires instantly — no delay, no wasted queries, no polling overhead.. The service fires OnReviewApproved() at the exact moment a review is approved. The seller gets notified instantly. Zero queries when nothing happens. The database load drops from 600,000 queries/minute to near-zero.
Maps to: Level 4 (Observer Pattern)
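A minimal sketch of the event-driven version, with member shapes simplified (a seller id and rating instead of a full IReview):

```csharp
using System;
using System.Collections.Generic;

// Push, don't poll: the service TELLS observers at the moment of approval.
public interface IReviewObserver
{
    void OnReviewApproved(string sellerId, int rating);
}

public sealed class SellerNotifier : IReviewObserver
{
    public int NotificationsReceived { get; private set; }

    public void OnReviewApproved(string sellerId, int rating)
    {
        NotificationsReceived++;   // fires instantly — no 5-second delay
        Console.WriteLine($"Seller {sellerId}: new {rating}-star review");
    }
}

public sealed class ReviewService
{
    private readonly List<IReviewObserver> _observers = new();

    public void Subscribe(IReviewObserver o) => _observers.Add(o);
    public void Unsubscribe(IReviewObserver o) => _observers.Remove(o);

    public void ApproveReview(string sellerId, int rating)
    {
        // Zero database polls: each observer hears about it exactly once.
        foreach (var obs in _observers) obs.OnReviewApproved(sellerId, rating);
    }
}
```

When nothing happens, nothing runs — the 600,000 wasted Black Friday queries simply never exist.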
Think First #12
Which of the five bad solutions is the most dangerous? Why?
Reveal Answer
Bad Solution #3 (Happy-Path Hero) is always the most dangerous. Solutions #1 (God Class) and #2 (Over-Engineer) are obviously bad — any code reviewer catches them. Solution #4 (No Moderation) is bad but at least the team knows they skipped moderation. But #3? It looks professional. Clean Strategy pattern, proper Observer, good naming. It passes code review. It passes tests. The vulnerabilities (no rate limiting, no duplicate detection) only surface when real bad actors find the system. A wolf in sheep's clothing is always more dangerous than an obvious wolf.
Section 15
Code Review Challenge — Find 5 Bugs
A candidate submitted this Review & Rating implementation as a pull request. It compiles. It runs. It handles a basic submit-and-rate flow. But there are exactly 5 bugs hiding in it — issues that would cause real problems in production. Can you find them all before scrolling down?
Read the code below carefully. Try to find all 5 issues before revealing the answers.
CandidateReviewSolution.cs — Find 5 Bugs
public class ReviewService // Line 1
{
    private Dictionary<string, List<IReview>> _reviews = new(); // Line 3
    private List<IModerationStrategy> _moderators = new();
    private List<IReviewObserver> _observers = new();
    private IRatingStrategy _ratingStrategy;

    public ReviewService(IRatingStrategy ratingStrategy) // Line 8
    {
        _ratingStrategy = ratingStrategy;
    }

    public void Subscribe(IReviewObserver observer)
    {
        _observers.Add(observer);
    }
    // Note: no Unsubscribe method // Line 16

    public bool SubmitReview(IReview review) // Line 18
    {
        // Run moderation
        foreach (var mod in _moderators)
        {
            var result = mod.Check(review);
            if (!result.Passed)
            {
                Console.WriteLine($"Rejected: {result.Reason}");
                return false; // Line 26
            }
        }

        // Store
        var targetId = review switch // Line 31
        {
            ProductReview pr => pr.ProductId,
            SellerReview sr => sr.SellerId,
            _ => review.ReviewId
        };

        if (!_reviews.ContainsKey(targetId)) // Line 38
            _reviews[targetId] = new();
        _reviews[targetId].Add(review);

        // Notify
        foreach (var obs in _observers) // Line 43
            obs.OnReviewApproved(review);
        return true;
    }

    public double GetAverageRating(string targetId) // Line 48
    {
        if (!_reviews.ContainsKey(targetId))
            return 0;
        return _ratingStrategy.Calculate(_reviews[targetId]);
    }

    public List<IReview> SearchReviews(string keyword) // Line 55
    {
        return _reviews.Values
            .SelectMany(r => r)
            .Where(r => r.Text.Contains(keyword))
            .ToList();
    }
}
Bug #1 — Race Condition: Two Threads Can Corrupt the Review List (Line 3)
Problem: The candidate uses a plain Dictionary<string, List<IReview>> with no synchronization. In a web application, two users can submit reviews for the same product simultaneously. Two threads calling _reviews[targetId].Add(review) at the same time can corrupt the list's internal array, lose reviews, or throw InvalidOperationException.
Bug: No lock around shared state
// Two threads submit reviews for "laptop-x1" at the same time
if (!_reviews.ContainsKey(targetId)) // Thread A: false
_reviews[targetId] = new(); // Thread A: creates list
// Thread B: also sees false (race condition!)
// Thread B: creates a NEW list, overwriting Thread A's
_reviews[targetId].Add(review); // Thread A's review is lost!
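Fix, condensed into a self-contained store (the `SafeReviewStore` name and `_lock` field are illustrative): one lock object guards the whole check-then-add sequence so the interleaving above cannot happen.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public sealed class SafeReviewStore
{
    private readonly object _lock = new();
    private readonly Dictionary<string, List<string>> _reviews = new();

    public void Add(string targetId, string review)
    {
        lock (_lock)   // Thread B waits until Thread A finishes the WHOLE sequence
        {
            if (!_reviews.ContainsKey(targetId))
                _reviews[targetId] = new();
            _reviews[targetId].Add(review);
        }
    }

    public int Count(string targetId)
    {
        lock (_lock)
            return _reviews.TryGetValue(targetId, out var list) ? list.Count : 0;
    }
}
```

Hammering `Add` from parallel requests no longer loses reviews — every submission lands in exactly one list.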
Taught in: Level 5 — Concurrent access and thread safety
Bug #2 — No Rate Limiting: Bot Can Submit Unlimited Reviews (Line 18)
Problem: There's no rate limitingRate limiting restricts how many times a user can perform an action within a time window. Without it, a bot can submit 1000 fake reviews per minute. With a limit of "5 reviews per hour per user," each submission checks a sliding window counter before proceeding.. A malicious user (or bot) can submit thousands of fake reviews per minute. This is called review bombing — a targeted attack to destroy a product's or seller's rating. Without rate limiting, there's nothing stopping it.
Bug: No rate limit check
public bool SubmitReview(IReview review)
{
// No rate limit check anywhere!
// A bot can call this 1000 times per second.
// Result: product rating destroyed in minutes.
foreach (var mod in _moderators) { /* ... */ }
_reviews[targetId].Add(review);
return true;
}
Fix: Add RateLimiter with sliding window
var rateCheck = _rateLimiter.CheckLimit(review.UserId);
if (!rateCheck.IsSuccess)
return ReviewResult<IReview>.Fail(rateCheck.Error!);
// Now: max 5 reviews per hour per user.
// Bot submits 6th review → instantly rejected.
Taught in: Level 5 — Rate limiting and abuse prevention
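The CheckLimit call in the fix implies a sliding-window limiter. One possible sketch — the explicit `now` parameter is a testability choice in the spirit of Level 6, and all names and defaults here are illustrative:

```csharp
using System;
using System.Collections.Generic;

public sealed class RateLimiter
{
    private readonly int _maxPerWindow;
    private readonly TimeSpan _window;
    private readonly Dictionary<string, List<DateTime>> _submissions = new();

    public RateLimiter(int maxPerWindow = 5, TimeSpan? window = null)
    {
        _maxPerWindow = maxPerWindow;
        _window = window ?? TimeSpan.FromHours(1);
    }

    // Returns true if the user may submit; records the submission if allowed.
    public bool CheckLimit(string userId, DateTime now)
    {
        var times = _submissions.TryGetValue(userId, out var t)
            ? t
            : _submissions[userId] = new();
        times.RemoveAll(ts => now - ts > _window);   // slide the window forward
        if (times.Count >= _maxPerWindow) return false;
        times.Add(now);
        return true;
    }
}
```

The bot's 6th review inside the window is rejected instantly; once the window slides past the old submissions, the user can review again.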
Bug #3 — Duplicate Reviews: Same User Can Review Same Product Twice (Line 38-39)
Problem: There's no check for duplicate reviews. The same user can submit 10 reviews for the same product. On Amazon, you get ONE review per product. Here, nothing prevents duplicates. A user who hates a product submits 50 one-star reviews, tanking the average. Even without malicious intent, accidental double-submits (user clicks the button twice) create duplicates.
Bug: No duplicate check
// User "alice" reviews laptop-x1: 5 stars ✓
// User "alice" reviews laptop-x1: 5 stars again!
// User "alice" reviews laptop-x1: and again!
// No check: reviews.Any(r => r.UserId == review.UserId)
// All 3 reviews are accepted and counted in the average.
Fix: Check for existing review by same user
lock (_lock)
{
    var reviews = _reviews.GetOrAdd(targetId, _ => []);
    if (reviews.Any(r =>
            r.UserId == review.UserId &&
            r.Status == ReviewStatus.Approved))
        return ReviewResult<IReview>.Fail(
            "You have already reviewed this item.");
    reviews.Add(review);
}
Taught in: Level 5 — Duplicate detection and data integrity
Bug #4 — Observer Leak: No Unsubscribe Method (Line 16)
Problem: The code has Subscribe() but no Unsubscribe(). This is a memory leakA memory leak happens when objects stay in memory after they're no longer needed. Here, observers can never be removed from the list. If a seller closes their dashboard, the SellerNotifier stays in the observer list forever. Over weeks, the list grows indefinitely. Each review notification is sent to thousands of dead observers.. If a seller's dashboard is closed, the SellerNotifier observer stays in the list forever. Over time, the observer list grows without bound. Every review notification iterates through thousands of dead observers, wasting CPU and potentially throwing exceptions if the observer references disposed objects.
Bug: Subscribe without Unsubscribe
public void Subscribe(IReviewObserver observer)
{
_observers.Add(observer);
}
// No Unsubscribe() method!
// Observers accumulate forever.
// After 6 months: 10,000 dead observers in the list.
// Each review notification loops through all 10,000.
Fix: Add Unsubscribe method
public void Subscribe(IReviewObserver observer) =>
    _observers.Add(observer);

public void Unsubscribe(IReviewObserver observer) =>
    _observers.Remove(observer);

// Now observers can be cleaned up when no longer needed.
// Dashboard closed? Unsubscribe the notifier.
Taught in: Level 4 — Observer pattern and subscription lifecycle
Bug #5 — Case-Sensitive Search: "battery" Doesn't Match "Battery" (Line 55)
Problem: The SearchReviews() method uses r.Text.Contains(keyword), which is case-sensitiveCase-sensitive means "Battery" and "battery" are treated as different strings. A user searching for "battery life" won't find a review that says "Battery Life is amazing" because "B" doesn't equal "b". This is almost never what users expect. by default in C#. A user searching for "battery" won't find reviews that say "Battery" or "BATTERY." This makes the search feel broken — users think there are no reviews about battery life when there are hundreds.
Bug: Case-sensitive search
public List<IReview> SearchReviews(string keyword)
{
return _reviews.Values
.SelectMany(r => r)
.Where(r => r.Text.Contains(keyword)) // Case-sensitive!
.ToList();
// Search "battery" → misses "Battery life is great"
// Search "GREAT" → misses "This is great!"
// Users think search is broken.
}
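Fix, as a self-contained sketch (the static `ReviewSearch` helper is illustrative — in the real service this logic lives inside SearchReviews, and reviews are simplified here to their text):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ReviewSearch
{
    // OrdinalIgnoreCase makes "battery" match "Battery" and "BATTERY".
    public static List<string> Search(IEnumerable<string> reviewTexts, string keyword) =>
        reviewTexts
            .Where(t => t.Contains(keyword, StringComparison.OrdinalIgnoreCase))
            .ToList();
}
```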
Rule of thumb: In C#, always use StringComparison.OrdinalIgnoreCase for user-facing search. The default Contains() is case-sensitive, which is almost never what users expect. For full-text search at scale, use a dedicated search engine (Elasticsearch, Lucene) as we discussed in Level 7.
Taught in: Level 7 — Full-text search and scaling
How did you do?
All 5: Senior-level thinking. You're catching bugs that most developers miss in real code reviews.
3–4: Solid mid-level. You're building good instincts. Review the levels that correspond to the bugs you missed.
1–2: Keep learning! Go back through Levels 4–5 and pay extra attention to concurrency, rate limiting, and duplicate detection.
Section 16
The Interview — Both Sides of the Table
A Review & Rating system sounds like basic CRUD — "just save a review and show a number." That’s exactly what trips candidates up. Interviewers pick this problem because it hides Strategy, Observer, moderation pipelines, and rate-limiting behind a deceptively simple surface. Below are two full interview runs: the polished version and the realistic one (with stumbles and recovery). Both earn a hire. The difference is the journey.
Interview Run 1 — The Polished Version
Time
Candidate Says
Interviewer Thinks
0:00
“Before I code, let me scope this. Are we building reviews for products, sellers, or both? Do we need photo/video attachments? Is moderation in scope? What about ‘helpful’ voting on reviews?”
Excellent — not treating this as simple CRUD. Scoping questions reveal they know the hidden complexity.
2:00
“Functional: submit review with rating + text, edit review with full history, display aggregate rating, moderate content, vote reviews as helpful. Non-functional: extensible rating algorithms, rate-limit abuse prevention, decoupled notifications.”
F/NF split on a review system? That’s beyond what most candidates do. Non-functional requirements show production thinking.
4:00
“Entities: IReview interface with three types — TextReview, PhotoReview, VideoReview. EditHistory as a record. ReviewResult<T> for error handling. IRatingStrategy for swappable algorithms.”
Clean entity extraction. Interface for review types shows extensibility thinking. Result type instead of exceptions — modern approach.
7:00
“Rating calculation varies — simple average, weighted average, Bayesian average. That’s a Strategy pattern. Moderation also varies — profanity filter, spam detection, sentiment analysis. That’s another Strategy, chained into a pipeline.”
Two Strategy applications, each motivated by a specific ‘what varies?’ question. Pattern usage feels natural, not forced.
10:00
“When a review is submitted, multiple things need to happen: send notification to the product owner, update aggregate ratings, trigger moderation. These are independent reactions — that’s the Observer pattern. The ReviewService publishes events, observers subscribe.”
Observer motivated by decoupling, not name-dropping. Solid.
12:00
Starts coding: IReview hierarchy, IRatingStrategy implementations, ReviewService with Observer hooks...
Watching for: sealed classes, records for immutable data, clean DI wiring
22:00
“Edge cases: rate bombing — one user posting 50 reviews in a minute. I’ll add a RateLimiter with a sliding window. Edit abuse — every edit is stored in EditHistory so we can audit. Orphaned reviews when a product is deleted — soft delete with cascade logic.”
Proactive edge cases, including rate limiting. Most candidates never mention abuse prevention. Strong Hire signal.
26:00
“For scale: reads vastly outnumber writes. I’d use CQRS — write model handles submissions and edits, read model is a denormalized aggregate optimized for display. Eventually consistent via events. Full-text search via Elasticsearch or similar.”
CQRS for a review system is the right call. LLD-to-HLD bridge shows architectural breadth. Strong Hire.
Interview Run 2 — The Realistic Version (with Stumbles and Recovery)
Time
Candidate Says
Interviewer Thinks
0:00
“Review system... OK, so users leave a star rating and some text. Let me start with a Review class...”
Jumped to implementation. No scoping. Let’s see if they recover.
1:30
“Actually, wait — let me ask some questions first. Are there different review types? Do we need moderation? Is the rating just an average or something more sophisticated?”
Good recovery. Self-corrected within 90 seconds. The scoping habit is there, just needed a moment.
4:00
“I’ll calculate the average rating by summing all stars and dividing by count. Simple.”
Simple average has known problems. Will they realize?
5:00
“Hmm, but what if a product has only one 5-star review? That looks like a perfect product. Amazon doesn’t do that... they use some kind of weighted calculation. I should make the rating algorithm swappable — that’s Strategy.”
Self-discovered the problem with simple averages! This is BETTER than getting it right instantly — shows real-time reasoning.
8:00
“For moderation, I’ll add an IsApproved boolean on the review...”
A single boolean won’t handle multiple moderation checks. Let’s push...
8:30
Interviewer: “What if you need profanity check AND spam check AND sentiment analysis? All independent.”
Testing if candidate can evolve a boolean into a pipeline.
9:00
“Oh — a boolean can’t capture multiple checks. I need a pipeline of moderation strategies. Each one implements IModerationStrategy and returns a pass/fail with a reason. The pipeline runs them all and aggregates results.”
Needed a nudge but evolved cleanly. Boolean → pipeline shows growth under pressure.
15:00
Coding... pauses... “Let me think about notifications. When a review is posted, the product owner needs to know, the aggregate needs to update, and moderation needs to run. If I hardcode all that in SubmitReview, it’ll be a mess...”
Thinking out loud. Good sign — shows awareness of coupling.
16:00
“That’s Observer. The ReviewService fires an event, and each concern subscribes independently. Adding a new reaction means adding a new observer — zero changes to the service.”
Observer motivated by a real pain point. Took a moment to get there, but the reasoning is solid. Hire.
24:00
“Edge cases: What if a user edits their review 100 times? I’d rate-limit edits and store edit history. What about review bombing? Sliding window rate limiter per user. For scaling, reads dominate — so CQRS with a read-optimized aggregate store.”
Proactive edge cases and scaling. Honest about the path taken. Strong Hire.
Why Run 2 still earns a hire:
Slow start — recovered with scoping questions in 90 seconds
Self-discovered Bayesian problem with simple averages
Needed nudge on moderation pipeline — evolved cleanly
Observer discovered through real coupling pain
Honest, structured recovery throughout
Common Follow-up Questions Interviewers Ask
“How would you handle seller responses to reviews?” — Tests if Review is extensible (nested replies vs flat)
“What if a product gets 10,000 reviews per minute during a sale?” — Tests write throughput and eventual consistency thinking
“How would you detect fake reviews?” — Tests if moderation pipeline is designed for ML integration
“How do you show ‘Most Helpful’ reviews?” — Tests sorting strategy and Wilson Score awareness
“What happens when a product is deleted?” — Tests cascade/orphan thinking
Key Takeaway: Two very different paths. Same outcome.
Interviewers don’t grade on polish — they grade on THINKING.
A stumble you recover from is often more impressive than a flawless run — because it shows how you handle real-world ambiguity.
Section 17
Articulation Guide — What to SAY
Knowing the design isn’t enough — you have to narrate it under pressure. Design skill and communication skill are separate muscles. You can have a brilliant review system locked in your head and still tank the interview because you went quiet during the hard parts. The 8 cards below cover the exact moments where phrasing matters most.
1. Opening the Problem
Situation: The interviewer says “Design a Review & Rating system.”
Say: “Before I start, let me scope this. How many review types — text only, or photos and videos too? Do we need content moderation? Is ‘helpful’ voting in scope? What about seller responses?”
Don’t say: “OK, I’ll make a Review class with a rating integer...” (jumping to data model without scoping)
Why it works: A review system seems simple. Scoping it proves you know it isn’t. The interviewer’s mental note: “This person asks before building — they won’t build the wrong thing in production.”
2. Entity Decisions
Situation: You’re deciding how to model reviews, ratings, and edit history.
Say: “IReview is an interface because we have text, photo, and video reviews — each with different validation rules. EditHistory is a record because once a snapshot is saved, it should never change. ReviewResult<T> wraps success or failure — no exceptions for expected validation failures.”
Don’t say: “I’ll make a Review class with all the fields.” (no reasoning about types)
Why it works: Shows deliberate type choices. Interface vs record vs class is a design decision, not boilerplate.
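The ReviewResult<T> this card mentions can be sketched in a few lines. The member names below are assumptions, kept consistent with the Ok/Fail calls shown in the fixes earlier:

```csharp
// Success value OR error string — no exceptions for expected validation failures.
public sealed record ReviewResult<T>(bool IsSuccess, T? Value, string? Error)
{
    public static ReviewResult<T> Ok(T value) => new(true, value, null);
    public static ReviewResult<T> Fail(string error) => new(false, default, error);
}
```

Callers branch on `IsSuccess` instead of wrapping every submission in try/catch — a rejected review is an expected outcome, not an exceptional one.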
3. Rating Algorithm Choice
Situation: You’re about to explain why simple averages aren’t enough.
Say: “A product with one 5-star review shows 5.0, but a product with 500 reviews averaging 4.7 is clearly more trustworthy. Simple averages lie when sample sizes are small. Bayesian averaging solves this by pulling low-count ratings toward the global mean. I’ll make the algorithm swappable via IRatingStrategy — that’s the Strategy pattern.”
Don’t say: “I’ll use Strategy for rating calculation.” (pattern without the problem)
Why it works: You showed the PROBLEM (small samples distort averages), the SOLUTION (Bayesian), and the PATTERN (Strategy). Problem → solution → pattern, in that order.
4. Defending the Moderation Pipeline
Situation: The interviewer asks “Why not just a boolean IsApproved?”
Say: “A boolean captures the answer but not the reasoning. I need to know which check failed and why. Profanity check, spam detection, and sentiment analysis are independent concerns — each implements IModerationStrategy. A pipeline runs them all and collects results. Adding a new check means one new class, zero changes to existing ones.”
Don’t say: “I’ll add a ModerationResult enum.” (solves the label problem but not the pipeline problem)
Why it works: You defended the trade-off: “More classes, but each concern is isolated. The cost is X, the gain is Y, and for this problem Y wins.”
5. Observer for Notifications
Situation: You’re explaining how review submission triggers multiple side effects.
Say: “When a review is submitted, several things need to happen: notify the product owner, update the aggregate rating, and run moderation. If I put all that in SubmitReview(), it becomes a God method that changes every time I add a new reaction. Instead, the ReviewService publishes a ReviewSubmitted event, and each concern subscribes as an observer. Adding email notifications later means one new observer class — zero changes to the service.”
Don’t say: “I’ll use Observer because there are notifications.” (names the pattern without explaining the decoupling benefit)
Why it works: You named the coupling problem before the pattern. Interviewers hear: “This person thinks about consequences, not just functionality.”
6. Edge Cases
Situation: You’ve finished the happy path. Time to volunteer edge cases before the interviewer asks.
Say: “Let me think about what could go wrong. Rate bombing — a competitor floods a product with 1-star reviews. I’d add a sliding-window rate limiter per user. Edit abuse — rewriting a review 50 times. Every edit stores a snapshot in EditHistory, and I rate-limit edits too. What if a product is deleted while reviews exist? Soft delete the product, keep reviews for audit, hide from display.”
Don’t say: (nothing — most candidates wait to be asked)
Why it works: Proactive edge cases are the single strongest “Strong Hire” signal. You’re thinking about what breaks in production, not just what works on a whiteboard.
7. Scaling Bridge
Situation: The interviewer asks “What if this needs to handle millions of products?”
Say: “Reads vastly outnumber writes for reviews. I’d use CQRS — the write model handles submissions and edits, ensuring consistency. The read model is a denormalized aggregate per product, optimized for display. Updates propagate via events, so the read side is eventually consistent. For full-text search, I’d index reviews in Elasticsearch, updated asynchronously on review events.”
Don’t say: “I’d just add a cache.” (hand-wavy, no architecture)
Why it works: Shows you know what scales (read-heavy = denormalize reads) and what doesn’t (recalculating averages on every page load).
8. “I Don’t Know”
Situation: The interviewer asks about ML-based fake review detection and you’ve never trained a model.
Say: “I haven’t built an ML detection system, but the architecture supports it. The moderation pipeline already runs independent checks — an ML check would be another IModerationStrategy. It would receive the review text, call a prediction service, and return a confidence score. If the score exceeds a threshold, the review is flagged. The service boundary is clean because the interface is the same.”
Don’t say: “I don’t know ML.” (full stop, no reasoning)
Why it works: Honesty about a gap + clear reasoning about how the architecture accommodates it = respect. Bluffing or shutting down = red flag.
Pro Tip — Practice OUT LOUD, not just in your head
Reading these cards silently builds recognition. Saying them aloud builds production. Recognition fails under pressure because retrieval competes with anxiety. Muscle memory is automatic. Try this: set a 5-minute timer, explain this Review & Rating design to an imaginary interviewer out loud. You’ll hear where you hesitate — that’s where you need more practice.
Target three phrases for fluency: the Bayesian justification (“small samples distort averages”), the moderation pipeline insight (“independent concerns, not a boolean”), and the Observer motivation (“multiple reactions, zero coupling to the service”).
Section 18
Interview Questions & Answers
12 questions ranked by difficulty. Each has a “Think” prompt, a solid answer, and the great answer that gets “Strong Hire.” These aren’t hypothetical — they’re the exact questions interviewers ask when they see a Review & Rating design.
Q1: Why did you use the Strategy pattern for rating algorithms instead of a simple method?
Easy
Think: How many ways can you calculate a rating? Could the business change the algorithm without a code rewrite?
Imagine you run an e-commerce site. Marketing says “use simple averages.” Six months later, data science says “Bayesian averages are fairer.” A year after that, a PM wants Wilson Score for sorting. If the algorithm is a hard-coded method, every change means modifying and re-testing the same class. That’s fragile.
Answer: Rating algorithms vary independently from the rest of the system. Strategy lets us swap algorithms without touching ReviewService.
Great Answer: “Rating calculation has three legitimate algorithms today — simple average, weighted average, and Bayesian. The business will add more. Each one takes the same input (a list of ratings) and produces the same output (a score). That’s a textbook Strategy setup: same interface, different implementations. New algorithm = new class, zero changes to existing code. I can also A/B test algorithms by injecting different strategies per user segment.”
Q2: How does Bayesian average solve the “1 review = 5 stars” problem?
Easy ★
Think: If a product has 1 review at 5 stars and another has 10,000 reviews averaging 4.7, which should rank higher? Why does a simple average fail here?
Think of a new restaurant with one Yelp review: “Best food ever! 5 stars!” Next to it, a restaurant with 3,000 reviews averaging 4.6. Should the new place rank higher? Obviously not — one person’s opinion isn’t statistically meaningful. But a simple average says 5.0 > 4.6. Bayesian averaging fixes this by mixing in the global average to “dilute” small sample sizes.
Bayesian Average — Visual Math
Answer: Bayesian average mixes in the global average weighted by a confidence parameter. With few reviews, the global average dominates. With many reviews, the product’s own average dominates.
Great Answer: “The formula is (C × M + sum_of_ratings) / (C + N). C is a confidence threshold — think of it as ‘how many phantom reviews at the global average do we add.’ A product with 1 real review and C=10 has 10 phantom reviews dragging it toward the mean. A product with 500 reviews barely feels those 10 phantom reviews. The business tunes C based on how aggressive they want the smoothing. This is why I made the algorithm swappable — tuning C is really tuning business policy, and different product categories might need different thresholds.”
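The formula is easy to sanity-check with a few lines of code. Here's a minimal sketch (Python for brevity — the C# strategy class would wrap the same arithmetic; the numbers are illustrative):

```python
def bayesian_average(ratings, global_mean, confidence):
    """(C*M + sum_of_ratings) / (C + N): C phantom reviews at the global mean M."""
    return (confidence * global_mean + sum(ratings)) / (confidence + len(ratings))

# One 5-star review vs. 500 reviews averaging 4.6, global mean 4.0, C = 10
new_product = bayesian_average([5.0], global_mean=4.0, confidence=10)      # ~4.09
established = bayesian_average([4.6] * 500, global_mean=4.0, confidence=10)  # ~4.59
```

The lone 5-star review is dragged down to roughly 4.09 by the ten phantom reviews, while the 500-review product barely moves — exactly the smoothing behavior described above.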
Q3: How would you add Wilson Score as a new rating strategy?
Easy
Think: If the code follows OCP, how many files do you need to change to add a new algorithm?
Wilson Score is especially useful for ranking — it gives you the lower bound of a confidence interval, not just the point estimate. A product with 10 up and 1 down has ~91% positive, but the 95% Wilson lower bound says the true positive rate could plausibly be as low as ~62%. That lower bound is the ranking score.
Answer: Create a new WilsonScoreStrategy : IRatingStrategy class. Register it in DI. Zero changes to ReviewService.
Great Answer: “One new class — WilsonScoreStrategy — that implements IRatingStrategy. The formula uses the count of positive and negative ratings, not a 1-5 scale, so I might need to define a threshold (e.g., 4+ = positive). Register it in the DI container. Now it’s injectable. The beauty: zero changes to ReviewService, BayesianStrategy, or any other existing code. That’s OCP in action.”
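The math inside that hypothetical WilsonScoreStrategy is compact. Here's an illustrative Python sketch of the Wilson lower bound (z = 1.96 for a 95% interval); for 10 positives out of 11 votes it yields roughly 0.62:

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """95% confidence lower bound on the true positive rate."""
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    spread = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return (centre - spread) / denom

ten_up_one_down = wilson_lower_bound(10, 11)        # ~0.62
hundred_up_fifty_down = wilson_lower_bound(100, 150)  # ~0.59
ten_up_zero_down = wilson_lower_bound(10, 10)       # ~0.72
```

Note how 10 up / 0 down outranks 100 up / 50 down — the lower bound rewards consistent positivity over raw volume, which is exactly why it's a better sort key than a plain ratio.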
Q4: Walk through the moderation pipeline. How does a review get from submission to published?
Medium
Think: What checks need to happen? Are they sequential or parallel? What happens when one check fails?
Think of airport security. You go through multiple independent checks: metal detector, bag X-ray, ID verification, sometimes a pat-down. Each check is independent — the metal detector doesn’t care about your ID. If any check fails, you’re flagged. The moderation pipeline works the same way: multiple independent checks, all must pass, and each one has a clear reason if it fails.
Answer: The ModerationPipeline holds a list of IModerationStrategy implementations. On submission, it runs each strategy. If all pass, the review is published. If any fail, the review is flagged with the specific reasons.
Great Answer: “The pipeline runs three checks today: ProfanityFilter (regex + dictionary), SpamDetector (duplicate content and link density), and SentimentGuard (flags extremely toxic language). Each returns a ModerationResult with IsAccepted and Reason. The pipeline collects all results rather than short-circuiting, so the reviewer sees every issue at once. The review status moves from Pending to Published or Rejected. Adding an ML-based check later is just one new class implementing IModerationStrategy.”
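The shape of that pipeline — independent strategies, collect-all semantics — fits in a few lines. A minimal Python sketch (class and word-list contents are illustrative; the real C# version would implement IModerationStrategy):

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    accepted: bool
    reason: str = ""

class ProfanityFilter:
    BANNED = {"darn"}  # illustrative word list, not a real profanity dictionary

    def moderate(self, text):
        hit = next((w for w in text.lower().split() if w in self.BANNED), None)
        return ModerationResult(hit is None, f"profanity: {hit}" if hit else "")

class SpamDetector:
    def moderate(self, text):
        spammy = text.lower().count("http") > 2  # crude link-density check
        return ModerationResult(not spammy, "too many links" if spammy else "")

class ModerationPipeline:
    def __init__(self, strategies):
        self.strategies = strategies

    def run(self, text):
        results = [s.moderate(text) for s in self.strategies]  # no short-circuit
        failures = [r.reason for r in results if not r.accepted]
        return (len(failures) == 0, failures)

pipeline = ModerationPipeline([ProfanityFilter(), SpamDetector()])
ok, reasons = pipeline.run("This darn gadget broke")  # fails with one reason
```

Adding an ML check later means writing one more class with a `moderate` method and appending it to the list — the pipeline itself never changes.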
Q5: How does the Observer pattern decouple reviews from notifications?
Medium
Think: Without Observer, where would the notification code live? How many things need to happen when a review is posted?
Without Observer, SubmitReview() would contain: save review, update aggregate, send email to product owner, trigger moderation, log the event. That’s five concerns in one method. Adding “push notification” means modifying this method. Adding “analytics tracking” means modifying it again. The method grows until no one dares touch it.
Answer: The ReviewService publishes a ReviewSubmitted event. Each concern — notifications, aggregation, moderation — subscribes as an observer. Adding new reactions means adding observers, not modifying the service.
Great Answer: “The IReviewObserver interface has methods like OnReviewSubmitted, OnReviewEdited, and OnReviewDeleted. Concrete observers include NotificationObserver, AggregateUpdateObserver, and ModerationTriggerObserver. The ReviewService holds a List<IReviewObserver> and notifies all of them after each operation. The service doesn’t know or care what the observers do. Adding a new reaction — say, updating a search index — means adding one class and registering it. Zero changes to existing code. That’s the power: the service’s cyclomatic complexity stays constant no matter how many reactions we add.”
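The decoupling is easiest to see in code. An illustrative Python sketch of the same idea (observer names mirror the C# design above; the dict stands in for a review entity):

```python
class ReviewService:
    """Publishes events to observers; knows nothing about what they do."""
    def __init__(self, observers):
        self._observers = observers

    def submit_review(self, review):
        # ... persist the review here ...
        for obs in self._observers:
            obs.on_review_submitted(review)

class NotificationObserver:
    def __init__(self):
        self.sent = []

    def on_review_submitted(self, review):
        self.sent.append(f"notify owner of {review['product_id']}")

class AggregateUpdateObserver:
    def __init__(self):
        self.updated = []

    def on_review_submitted(self, review):
        self.updated.append(review['product_id'])

notifier = NotificationObserver()
aggregator = AggregateUpdateObserver()
service = ReviewService([notifier, aggregator])
service.submit_review({"product_id": "p1", "rating": 5})
```

Adding a search-index observer means one new class in the list passed to the constructor — `submit_review` never grows.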
Q6: How do you detect and prevent rate bombing?
Medium
Think: What does a rate-bombing attack look like? How is it different from a legitimate burst of reviews?
Rate bombing is when someone (or a bot army) floods a product with 1-star reviews to sabotage its rating. You need to distinguish between a genuinely bad product getting honest bad reviews and a coordinated attack. The key signal: abnormal velocity from few accounts in a short window.
Sliding Window Rate Limiter
Answer: A sliding-window rate limiter per user. Track submissions in a time window. If the count exceeds a threshold, block further submissions and flag the account for manual review.
Great Answer: “I’d use a RateLimiter with a configurable sliding window — say, max 5 reviews per user per hour. Implementation: store submission timestamps in a sorted list per user. On each submission, remove expired entries, check the count. Beyond the limit, return a ReviewResult.Fail("Rate limit exceeded"). For coordinated attacks from multiple accounts, I’d also track review velocity per product — if a product suddenly gets 50 reviews in 10 minutes when the baseline is 2 per day, flag the product for manual review. That’s a separate ProductVelocityMonitor observer.”
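The sliding-window mechanics are worth being able to write on the spot. A minimal single-threaded Python sketch (the limit and window values are examples; a production version would need the concurrency handling discussed later):

```python
from collections import defaultdict, deque

class RateLimiter:
    """Sliding window: at most `limit` submissions per `window` seconds per user."""
    def __init__(self, limit=5, window=3600):
        self.limit, self.window = limit, window
        self._events = defaultdict(deque)  # user_id -> timestamps, oldest first

    def allow(self, user_id, now):
        q = self._events[user_id]
        while q and q[0] <= now - self.window:  # drop expired timestamps
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit inside the current window
        q.append(now)
        return True

limiter = RateLimiter(limit=5, window=3600)
burst = [limiter.allow("u1", now=t) for t in range(6)]  # 6 rapid submissions
```

The sixth submission in the burst is rejected; once the window slides past the earlier timestamps, the same user is allowed again.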
Q7: How do you handle review edits with full history?
Medium
Think: Why store edit history at all? What problems does it solve for moderation and trust?
Imagine a user writes a glowing 5-star review that passes moderation. A week later, they edit it to include spam links. Without edit history, there’s no evidence the review was ever different. With history, every version is preserved — you can audit changes, detect bait-and-switch abuse, and re-run moderation on edited content.
Answer: Each edit creates an immutable EditHistory record (a snapshot). The review always points to the latest version. Previous versions are preserved for audit.
Great Answer: “The EditHistory record stores the full snapshot: text, rating, timestamp, and the user who made the edit. On each edit, I create a new EditHistory entry, update the review’s current content, and fire a ReviewEdited event. The ModerationTriggerObserver picks up that event and re-runs the moderation pipeline on the new content. If the edit fails moderation, the review reverts to the last approved version. Rate-limit edits too — max 3 per day per review — to prevent abuse.”
Q8: How would you implement “helpful” voting on reviews?
Medium
Think: Who can vote? Can they change their vote? How does it affect review sorting?
The “Was this review helpful?” button seems trivial, but it introduces new entities (votes), new constraints (one vote per user per review), and a new sorting dimension (most helpful first). It also needs to resist vote manipulation.
Answer: A HelpfulVote entity (userId, reviewId, isHelpful). One vote per user per review enforced by a unique constraint. Total helpful count displayed on the review. Sorting by helpfulness uses the vote ratio.
Great Answer: “Each HelpfulVote is a record with UserId, ReviewId, and IsHelpful (bool). Enforce uniqueness per user-review pair. For sorting, I wouldn’t use a simple upvote count — a review with 100 up / 50 down shouldn’t outrank one with 10 up / 0 down. I’d use Wilson Score on the helpfulness votes to rank reviews. This gives a lower-bound confidence score that accounts for sample size. The helpful count updates trigger the AggregateUpdateObserver, which recalculates the review’s helpfulness score asynchronously.”
Q9: How would you handle seller responses to reviews?
Medium
Think: Is a seller response a new review type, a comment, or something else? Who can post a response? Can responses be edited?
On Amazon, sellers can publicly respond to reviews. This isn’t a “review of a review” — it’s a different entity with different rules. Only verified sellers can respond. Only one response per review. Responses go through moderation too.
Answer: A separate SellerResponse entity linked to a review. One response per review. Goes through the same moderation pipeline. Displayed beneath the review in the UI.
Great Answer: “A SellerResponse record: ReviewId, SellerId, Text, Timestamp. It’s NOT a subclass of IReview — it doesn’t have a rating, doesn’t affect aggregates, and has different authorization rules. One response per review enforced by a unique constraint. The response goes through the ModerationPipeline (same checks, different context). When a response is posted, the NotificationObserver notifies the original reviewer. The nice thing: because moderation and notifications are already observer-based and strategy-based, adding seller responses requires almost no changes to existing code.”
Q10: How would you implement real-time aggregate updates at scale?
Hard
Think: If 10,000 reviews come in per minute, can you recalculate the Bayesian average on every page load? What’s the read/write ratio for a review system?
A product page on Amazon might get 100,000 views per hour but only 10 new reviews. Recalculating the aggregate on every page load is wasteful. CQRS separates the write model (handle submissions, edits, moderation) from the read model (display aggregates, sorted reviews). The read model is denormalized and pre-calculated — page loads are fast because the work was already done.
CQRS for Review Aggregates
Answer: Use CQRS. Write model handles submissions and edits. Read model stores pre-calculated aggregates. Events bridge the two, eventually consistent.
Great Answer: “The write model is the source of truth: normalized Reviews table, full consistency. On each write, it publishes domain events. The read model subscribes to those events and maintains a denormalized ProductAggregate document: average rating, count, distribution (how many 1s, 2s, 3s...), and top-N reviews pre-sorted by helpfulness. Page loads hit the read model — one fast read, no joins, no recalculation. The trade-off is eventual consistency: a new review might take a second to appear in the aggregate. For a review system, that’s perfectly acceptable — nobody notices a 1-second delay on a product rating.”
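The key trick in that read model is incremental maintenance: each event adjusts running totals, so page loads never recompute anything. An illustrative Python sketch of the aggregate document (field names are assumptions, not a fixed schema):

```python
class ProductAggregate:
    """Denormalized read model, updated by events instead of per-request math."""
    def __init__(self):
        self.count = 0
        self.total = 0
        self.distribution = {star: 0 for star in range(1, 6)}  # how many 1s..5s

    def apply_review_submitted(self, rating):
        # O(1) per event — no rescan of the reviews table
        self.count += 1
        self.total += rating
        self.distribution[rating] += 1

    @property
    def average(self):
        return self.total / self.count if self.count else 0.0

agg = ProductAggregate()
for rating in (5, 4, 4, 1):
    agg.apply_review_submitted(rating)  # average is now 3.5
```

Deletes and edits would apply compensating events (subtract the old rating, add the new one) — the same O(1) update, which is what makes the read side cheap at any volume.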
Q11: How would you add full-text search across millions of reviews?
Hard
Think: Can SQL LIKE handle this? What kind of index do you need for “search reviews mentioning ‘battery life’”?
WHERE text LIKE '%battery life%' scans every row — it’s O(n) and will time out on millions of reviews. You need an inverted index — the same technology search engines use. Instead of scanning every review, you look up “battery” and “life” in the index and get back matching review IDs in milliseconds.
Search Architecture
Answer: Add a SearchIndexObserver that indexes reviews in Elasticsearch on submit/edit/delete events. Search queries hit Elasticsearch directly, not the primary database.
Great Answer: “Another observer! SearchIndexObserver listens for ReviewSubmitted, ReviewEdited, and ReviewDeleted events. On each event, it updates the Elasticsearch index. Search queries bypass the primary database entirely — they go straight to Elasticsearch. The index stores review text, rating, product ID, and helpfulness score. Elasticsearch gives us full-text search with relevance ranking, faceted filtering (by rating, by date), and highlighting of matching terms. The key: because we used Observer from the start, adding search indexing required one new class and zero changes to ReviewService.”
Q12: How would you detect fake reviews using machine learning?
Hard
Think: What signals indicate a fake review? How would the moderation pipeline integrate with an ML service?
Fake reviews have patterns: generic language (“Great product! Highly recommend!”), accounts that review dozens of products per day, reviews posted in bursts from accounts created at the same time, and suspicious IP clustering. An ML model can learn these patterns from labeled data.
Answer: Add an MlFakeDetector : IModerationStrategy that calls a prediction service. The service returns a confidence score. Above a threshold, the review is flagged.
Great Answer: “The architecture already supports this. I’d add MlFakeDetectorStrategy : IModerationStrategy. It sends the review text plus metadata (account age, review frequency, IP) to a prediction microservice. The service returns a fake-probability score from 0 to 1. Above 0.8: auto-reject. Between 0.5 and 0.8: flag for human review. Below 0.5: pass. The beauty is that the ML model can be retrained and redeployed without touching the review system at all — the interface is just Moderate(review) → ModerationResult. The prediction service can use any model (logistic regression, neural network, ensemble) behind that interface. That’s Strategy at the architecture level, not just the class level.”
Pattern: Questions 1-3 test whether you understand why you chose the pattern. Questions 4-9 test whether you can extend the design. Questions 10-12 test whether you can scale it. Interviewers follow this same progression — they start easy and push harder until they find your ceiling.
Section 19
10 Deadliest Review & Rating Interview Mistakes
Every one of these has ended real interviews. Review & Rating seems like easy CRUD — so candidates let their guard down and skip the design decisions that interviewers actually grade on. Don’t be that person.
Critical Mistakes — Interview Enders
#1 Jumping straight to code without scoping — “I’ll make a Review class with a rating field”
Why this happens: “It’s a review system — how complicated can it be?” Famous last words. You skip asking about review types, moderation, voting, seller responses. Five minutes in, the interviewer drops: “What about photo reviews?” and your model can’t handle it because everything is hardcoded for text-only.
Bad — Immediately Coding
// Interviewer: "Design a Review & Rating system"
// Candidate immediately types...
public class Review
{
public int Stars { get; set; }
public string Text { get; set; }
}
// 5 min later: "What about photo reviews?"
// Candidate: *panic rewrite*
Good — Scope First
// "Before I code: text only or photo/video too?
// Moderation? Helpful voting? Seller responses?
// What rating algorithm — simple or Bayesian?"
// Interviewer: "All of the above."
// NOW the candidate designs IReview, IRatingStrategy,
// ModerationPipeline before writing a single class.
What the interviewer thinks: “Doesn’t scope. Will build the wrong system in production for weeks before asking what’s actually needed.”
#2 God class — everything inside one ReviewService
Why this happens: You put submission, moderation, rating calculation, notifications, and rate limiting all in ReviewService. It starts at 50 lines and seems clean. But every feature the interviewer asks about gets bolted on. By the end, it’s 400 lines of tangled logic where changing the rating algorithm might break moderation.
Bad — God Class
public class ReviewService // does EVERYTHING
{
public void Submit(Review r) { /* save + moderate + notify + rate */ }
public double CalcAverage(string productId) { /* inline math */ }
public bool CheckProfanity(string text) { /* inline regex */ }
public void SendEmail(string to, string msg) { /* SMTP here */ }
public void RateLimit(string userId) { /* timestamp checks */ }
// 350 more lines...
}
Good — Separated Responsibilities
ReviewService // orchestrates submit/edit/delete
IRatingStrategy // Bayesian, weighted, Wilson
ModerationPipeline // chains IModerationStrategy checks
IReviewObserver // notifications, aggregates, indexing
RateLimiter // sliding window per user
// Each class: one job. Change rating without touching moderation.
What the interviewer thinks: “No separation of concerns. This person creates services that grow until nobody can maintain them.”
Fix: Ask “what changes independently?” Rating algorithms, moderation checks, notification channels, and rate limits all change for different reasons. Each gets its own abstraction.
#3 Deep inheritance hierarchies — PhotoReview : TextReview : AbstractReview
Why this happens: You think “text, photo, and video reviews share common fields, so I’ll use inheritance.” One level is fine. Three levels deep and you’re fighting the hierarchy every time you add a new type or change shared behavior. A VerifiedPurchasePhotoReview doesn’t fit cleanly into any single chain.
Bad — Deep Hierarchy
abstract class EntityBase { ... }
abstract class AbstractReview : EntityBase { ... }
class TextReview : AbstractReview { ... }
class PhotoReview : TextReview { ... } // photo inherits text?!
class VideoReview : PhotoReview { ... } // video inherits photo?!
Good — Interface + Composition
public interface IReview { Guid Id { get; } int Rating { get; } string Text { get; } }
public sealed record TextReview(...) : IReview;
public sealed record PhotoReview(..., List<string> PhotoUrls) : IReview;
public sealed record VideoReview(..., string VideoUrl) : IReview;
// Flat. Each type stands alone. No fragile base class problem.
What the interviewer thinks: “Over-abstracted. Will create class hierarchies that fight the domain instead of modeling it.”
Fix: Use interface + composition. IReview defines the contract. Sealed records implement it. Each type is independent — no fragile base class problem.
Serious Mistakes — Red Flags
#4 Ignoring concurrency — two users editing the same review
Why this happens: You design for single-user scenarios. But review aggregates are shared state — 10 users submitting reviews for the same product simultaneously means 10 concurrent aggregate updates. Without proper handling, you get lost updates or incorrect counts.
Fix: Use optimistic concurrency on aggregate updates. Each aggregate has a version number. If two updates race, one gets a version conflict and retries. For rate limiting, use ConcurrentDictionary or atomic operations.
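The version-check-and-retry loop is simple enough to sketch. An illustrative Python model of optimistic concurrency (the store and exception names are made up for the example; in C# this would be a rowversion column or a compare-and-swap):

```python
class VersionConflict(Exception):
    pass

class AggregateStore:
    """Writes succeed only if the caller saw the current version (compare-and-swap)."""
    def __init__(self):
        self.version = 0
        self.total = 0

    def read(self):
        return self.version, self.total

    def write(self, expected_version, new_total):
        if self.version != expected_version:
            raise VersionConflict  # someone else wrote first
        self.version += 1
        self.total = new_total

def add_rating(store, rating, max_retries=3):
    for _ in range(max_retries):
        version, total = store.read()
        try:
            store.write(version, total + rating)
            return True
        except VersionConflict:
            continue  # re-read fresh state and retry
    return False

store = AggregateStore()
add_rating(store, 5)
add_rating(store, 4)
```

When two writers race, one write raises VersionConflict, the loser re-reads, and no update is silently lost — the failure mode the single-user design would hit.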
#5 Pattern overkill — forcing patterns the problem doesn’t need
Why this happens: You want to impress with pattern knowledge, so you add Factory for strategies, Mediator for coordination, and Chain of Responsibility for moderation when a simple list of strategies suffices. The interviewer asks “where does a review get submitted?” and the answer requires tracing through 8 abstractions.
Fix: Patterns solve problems. No problem = no pattern. Strategy is justified because rating algorithms genuinely vary. Observer is justified because reactions are genuinely independent. Don’t add patterns speculatively — add them when you feel the pain.
#6 Never mentioning tests — “I’ll test it later”
Why this happens: Under time pressure, testing feels like a luxury. But the interviewer is watching for testability as a design signal. If your code can’t be tested, it’s probably tightly coupled. Mentioning “I’d inject IRatingStrategy so I can test with a mock” costs 5 seconds and scores major points.
Fix: Say this once during the interview: “I’m using interfaces for strategies and observers so I can inject fakes in tests. I’d test Bayesian average with known inputs, moderation with test reviews containing profanity, and the rate limiter with a time provider I control.”
#7 Static singleton — ReviewService.Instance without acknowledging the trade-off
Why this happens: You want a single ReviewService but reach for static instead of DI. Static singletons can’t be mocked, can’t be replaced in tests, and hide their dependencies. The interviewer thinks: “This person hasn’t worked in a testable codebase.”
Fix: Register ReviewService as a singleton in the DI container. Same lifetime, but injectable and testable. Say: “I’d use DI singleton, not static — same single instance, but I can swap it for a fake in tests.”
Minor Mistakes — Missed Opportunities
#8 Magic numbers — if (count > 5) scattered everywhere
Why this happens: Rate-limit threshold? 5. Bayesian confidence? 10. Max edit count? 3. These numbers are scattered across the code without names. When the PM says “change the rate limit to 10,” you’re hunting through the codebase.
Fix: Use named constants or configuration: private const int MaxReviewsPerHour = 5; or inject from IOptions<RateLimitConfig>. Even in an interview, say: “I’d make this configurable — the threshold is a business decision, not a code decision.”
#9 Only happy path — no rate bombing, no edit abuse, no orphaned reviews
Why this happens: The design is clean, the patterns are right, the code compiles — but you never mention what goes wrong. In production, rate bombing happens on day one, edit abuse happens in week one, and orphaned reviews (left behind when products are deleted) show up in month one. Clean code ≠ robust code.
Fix: Use the What If? framework: Concurrency (two updates at once), Failure (moderation service down), Boundary (rate limits exceeded), Weird Input (empty review, 0-star rating). Volunteer at least 2-3 edge cases unprompted.
#10 No scaling bridge — missing the LLD-to-HLD connection
Why this happens: You finish the LLD and stop. The interviewer expected you to mention what changes when the system grows from 1,000 reviews to 100 million. The CQRS bridge, search indexing, and caching strategy are all HLD topics that show architectural breadth.
Fix: End with: “For scale, reads vastly outnumber writes. I’d separate read and write models with CQRS. Aggregates are pre-calculated and cached. Full-text search uses an inverted index like Elasticsearch, updated via events. This is the HLD bridge.” Takes 30 seconds, massive impact.
Interviewer Scoring Rubric
Strong Hire — Requirements: structured F+NF · Design: Strategy + Observer arise naturally · Code: clean modern C# · Edge cases: 3+ raised proactively · Communication: explains WHY
Hire — Requirements: key ones listed · Design: 1-2 patterns used · Code: mostly correct · Edge cases: covered when asked · Communication: clear
Lean No — Requirements: partial · Design: forced or wrong pattern · Code: messy · Edge cases: misses obvious ones · Communication: too quiet or too verbose
No Hire — Requirements: none · Design: no abstractions · Code: can’t code · Edge cases: none · Communication: can’t explain
Section 20
Memory Anchors — Never Forget This
You just built a Review & Rating system and discovered Strategy, Observer, and a moderation pipeline along the way. Now let’s lock those patterns into long-term memory. The trick isn’t rote repetition — it’s anchoring each concept to something vivid. A story, a place, a picture. When you can “see” it, you can recall it.
The CREATES Mnemonic — Your Universal LLD Approach
“Every system design CREATES a solution.” — This mnemonic works for EVERY LLD interview, not just Review & Rating. Repeat it until it’s automatic.
Memory Palace — Walk Through a Review Page
Imagine you’re looking at a product page on Amazon. Each part of the page maps to a key concept from this case study. Walk through it in your mind, and the design clicks into place.
Memory Palace — The Review Page
The Story Mnemonic — A Shopping Trip
Picture this: You buy a pair of headphones online. A week later, the app nudges you: “Rate your purchase!” You tap 4 stars and type a review — that’s the Entity (IReview, rating, text). Before your review goes live, an invisible security guard reads it — profanity? spam? offensive? That’s the Moderation Pipeline (Strategy). Once published, the product’s average rating updates, the seller gets notified, and the search index refreshes — three independent reactions triggered by one event. That’s Observer. Later, someone clicks “Helpful” on your review — and now you’re thinking about voting, rate limiting, and abuse prevention. That’s the Edge Cases.
Next time you open Amazon and see a star rating, think: “Entities, API, Trade-offs, Edge Cases.” The whole system is right there on the page.
Flashcards — Quiz Yourself
Click each card to reveal the answer. If you can answer without peeking, the pattern is sticking.
Why not simple averages?
Small sample distortion. One 5-star review shows 5.0 but means nothing statistically. Bayesian average fixes this by mixing in the global mean weighted by a confidence parameter.
Why Observer for notifications?
Decoupling. When a review is submitted, multiple independent things happen: notify seller, update aggregate, run moderation, index for search. Without Observer, all of these live in SubmitReview(). With Observer, each is a separate class that subscribes to events.
Why a moderation pipeline?
Independent checks. Profanity, spam, and sentiment are separate concerns. Each implements IModerationStrategy. The pipeline runs all of them and collects results. Adding a new check means one new class — zero changes to existing checks.
Smell → Pattern Quick Reference
Multiple rating algorithms — Signal: business wants to A/B test different calculations → Strategy pattern
Multiple independent moderation checks — Signal: each check has its own rules and data → Strategy + pipeline
Submit triggers many side effects — Signal: notifications, aggregation, indexing are independent → Observer pattern
Users flooding with reviews — Signal: abnormal submission velocity → Rate limiter
Edit history needed for audit — Signal: each edit must be preserved immutably → Record snapshots
5 Things to ALWAYS Mention in a Review & Rating Interview
Strategy for rating algorithms (not hardcoded math)
Observer for decoupled notifications and aggregation
Moderation pipeline (not a boolean flag)
Rate limiting for abuse prevention (rate bombing + edit abuse)
CQRS scaling bridge (reads >> writes)
Section 21
Transfer — These Techniques Work Everywhere
You didn’t just learn how to build a review system. You learned a set of thinking moves — swappable algorithms, decoupled reactions, moderation pipelines, rate limiting, and CQRS for read-heavy systems. These five ideas appear in virtually every system that handles user-generated content. Below is the proof: the same techniques applied to four different domains. Same skeleton, different skin.
Technique · Reviews · Comments · Feedback Forms · Survey Systems
Real-world walkthrough —
Reviews: Browse → Buy → Rate → Review → Read others
Comments: Read article → Scroll down → Comment → Reply → Like
Notice how every row maps to the same thinking move, just applied to a different domain. The What varies? question always leads to Strategy. The independent reactions question always leads to Observer. The moderation question always leads to a pipeline of checks. These aren’t review-specific patterns — they’re user-generated content patterns.
The Big Insight: “Systems share STRUCTURE even when domains differ. The skills transfer because they target the structure, not the domain. A review system and a comment system look different on the surface, but underneath they’re both: user-generated content + moderation + aggregation + notifications.”
Section 22
The Reusable Toolkit
Six thinking tools you picked up in this case study. Each one is a portable mental move — not a review system trick, but something you can use in any LLD interview or real-world design. Here’s what each tool is, when to reach for it, and where you used it here.
Your Toolkit — 6 Portable Thinking Tools
SCOPE
Before building anything, ask 5 questions: Size (how many reviews?), Complexity (what types?), Operations (submit, edit, vote?), Performance (read/write ratio?), Extensions (what might change?). This takes 2 minutes and prevents 20 minutes of wasted work.
Review & Rating use: Scoping revealed photo/video reviews, moderation, helpful voting, and seller responses before a single line of code.
What Varies?
Ask: “Is there more than one way to do this? Could it change at runtime?” If yes, extract the algorithm behind an interface. New algorithms = new classes, zero changes to existing code. This is the OCP in action.
Review & Rating use: Rating algorithms vary (Bayesian, weighted, Wilson). Moderation checks vary (profanity, spam, ML). Both became Strategy implementations.
Who Reacts?
Ask: “When this happens, do other parts of the system need to know?” If yes, use Observer. The source publishes an event, reactions subscribe. Adding new reactions means new classes, not modifying the source. Keeps SRP intact.
Review & Rating use: Review submission triggers notifications, aggregate updates, moderation, and search indexing — all as independent observers.
What If?
After your happy path works, run through four categories: Concurrency (two users at once), Failure (moderation down), Boundary (rate limit hit), Weird Input (empty review, 0 stars). Each category surfaces edge cases the happy path ignores.
Review & Rating use: Rate bombing, edit abuse, orphaned reviews on product deletion, duplicate submissions, and moderation service failure.
Can I Test It?
Ask: “Can I write a unit test for this class without spinning up the entire app?” If not, your dependencies are too tight. Inject interfaces, use DI containers, provide fakes for external services. Testability is a design quality signal.
Review & Rating use: ReviewService takes IRatingStrategy, ModerationPipeline, and List<IReviewObserver> via DI. Tests can inject fakes for all of them.
CREATES
The 7-step universal LLD approach: Clarify → Requirements → Entities → API → Trade-offs → Edge cases → Scale. Works for every system, every interview. The steps map to the interview timeline in Section 16.
These 6 tools are your permanent inventory. They work for reviews, comments, feedback forms, surveys — any system with user-generated content. Domains change. The structural questions don’t. If you remember nothing else from this page, remember THESE.
Section 23
Practice Exercises
Three exercises that test whether you truly learned the thinking, not just memorized the code. Each one adds a new constraint that forces you to extend the design — exactly like a real interview follow-up question.
Exercise Difficulty Progression
Exercise 1: Helpful Voting System (Medium)
New constraint: Users can vote reviews as “helpful” or “not helpful.” Each user can vote once per review. Reviews should be sortable by helpfulness using Wilson Score, not just raw vote count.
Think: What new entity do you need? How do you enforce one-vote-per-user? Why is Wilson Score better than raw count for ranking? How does this affect the existing Observer pipeline?
Hint
Create a HelpfulVote record with UserId, ReviewId, and IsHelpful. Enforce uniqueness per user-review pair (either in-memory dictionary or database unique constraint). For ranking, don’t sort by upvote count — a review with 100 up / 50 down shouldn’t outrank one with 10 up / 0 down. Wilson Score gives a lower-bound confidence score that accounts for sample size. When a vote is cast, fire a ReviewVoted event so the AggregateUpdateObserver can recalculate the helpfulness score.
Solution Skeleton
HelpfulVoting.cs
public sealed record HelpfulVote(
Guid UserId, Guid ReviewId, bool IsHelpful, DateTimeOffset VotedAt);
public interface IHelpfulnessStrategy
{
double CalculateScore(int upvotes, int downvotes);
}
public sealed class WilsonScoreStrategy : IHelpfulnessStrategy
{
public double CalculateScore(int up, int down)
{
int n = up + down;
if (n == 0) return 0;
double z = 1.96; // 95% confidence
double phat = (double)up / n;
double denominator = 1 + z * z / n;
double centre = phat + z * z / (2 * n);
double spread = z * Math.Sqrt((phat * (1 - phat) + z * z / (4 * n)) / n);
return (centre - spread) / denominator; // lower bound
}
}
// In ReviewService:
public ReviewResult<HelpfulVote> VoteHelpful(
Guid userId, Guid reviewId, bool isHelpful)
{
if (_votes.ContainsKey((userId, reviewId)))
return ReviewResult<HelpfulVote>.Fail("Already voted");
var vote = new HelpfulVote(userId, reviewId, isHelpful, DateTimeOffset.UtcNow);
_votes[(userId, reviewId)] = vote;
NotifyObservers(obs => obs.OnReviewVoted(reviewId));
return ReviewResult<HelpfulVote>.Ok(vote);
}
Exercise 2: Photo Review Moderation (Medium)
New constraint: Photo reviews need image moderation in addition to text moderation. The image check calls an external API (e.g., Azure Content Moderator) which might take 2-3 seconds and could fail. The review should not block waiting for image moderation.
Think: How does the moderation pipeline handle async checks? What happens if the image check fails? Should the review be visible while image moderation is pending?
Hint
Add ImageModerationStrategy : IModerationStrategy that calls the external API. Since it’s slow, run it asynchronously: the review enters a PendingImageReview status. Text moderation runs synchronously (fast). If text passes, publish the review but mark it as “images pending.” When the async image check completes, the observer either confirms the review or flags it. If the external API fails, retry with exponential backoff. After 3 failures, flag for human review rather than auto-rejecting.
Solution Skeleton
ImageModeration.cs
public sealed class ImageModerationStrategy(
IImageModerationService externalApi) : IModerationStrategy
{
public ModerationResult Moderate(IReview review)
{
if (review is not PhotoReview photo)
return ModerationResult.Pass(); // not applicable
// Queue async check — don't block
_ = Task.Run(async () =>
{
foreach (var url in photo.PhotoUrls)
{
var result = await externalApi.AnalyzeAsync(url);
if (!result.IsSafe)
{
// Fire event: image failed moderation
// Observer moves review to "Rejected"
return;
}
}
// All images safe — Observer moves to "Published"
});
// Return "pending" — text is OK, images still checking
return ModerationResult.Pending("Image review in progress");
}
}
public enum ReviewStatus
{
Pending, // just submitted, text moderation running
PendingImages, // text OK, images still checking
Published, // all checks passed
Rejected, // failed moderation
FlaggedForReview // external API failed, needs human
}
Exercise 3: Trending Reviews Algorithm (Hard)
New constraint: The product page needs a “Trending Reviews” section that shows reviews gaining helpful votes rapidly. A review that got 20 helpful votes in the last hour should rank higher than one that got 100 helpful votes over a month. Design the algorithm as a new IRatingStrategy implementation and explain how it integrates with the CQRS read model.
Think: What’s the difference between “most helpful all-time” and “trending right now”? How do you calculate velocity? How often should the trending score be recalculated? Does this belong in the write model or the read model?
Hint
Trending is about velocity, not total count. Use a time-decayed scoring approach: each helpful vote contributes a score that decays exponentially with time. Recent votes contribute more than old ones. The formula: score = Σ e^(-λ × age_in_hours) where λ controls the decay rate. This belongs entirely in the read model — the TrendingUpdateObserver recalculates scores on each vote event and stores them in the denormalized read store. A background job periodically decays all scores to keep the trending list fresh even when no new votes arrive.
Solution Skeleton
TrendingScore.cs
public sealed class TimeDecayTrendingStrategy(
double decayLambda = 0.1) // higher = faster decay
{
public double CalculateTrendingScore(
IEnumerable<HelpfulVote> recentVotes,
DateTimeOffset now)
{
return recentVotes
.Where(v => v.IsHelpful)
.Sum(v =>
{
double ageHours = (now - v.VotedAt).TotalHours;
return Math.Exp(-decayLambda * ageHours);
});
}
}
// TrendingUpdateObserver — subscribes to vote events
public sealed class TrendingUpdateObserver(
TimeDecayTrendingStrategy trending,
IReadModelStore readStore) : IReviewObserver
{
public void OnReviewVoted(Guid reviewId)
{
var votes = readStore.GetRecentVotes(reviewId, hours: 72);
var score = trending.CalculateTrendingScore(votes, DateTimeOffset.UtcNow);
readStore.UpdateTrendingScore(reviewId, score);
}
// Background job: recalculate all scores every hour
// to decay reviews that stopped receiving votes
}
Scoring guide: If you identified that trending needs velocity (not total count), chose time-decay scoring, placed it in the read model, and used Observer for updates — you’ve nailed it. Bonus points for mentioning the background decay job and configurable λ.
Spaced Repetition: Try Exercise 1 today. Try Exercise 2 in three days (without re-reading). Try Exercise 3 in a week. If you can sketch the design from memory after a week, it’s permanent.
Section 24
Related Topics
You’ve discovered patterns, principles, and thinking tools in this case study. Here’s where to go next — each page below deepens a specific skill you’ve already started building.
Shopping Cart
Same difficulty tier. Strategy for pricing/discounts, Observer for inventory updates, Composite for nested product bundles. Many shared techniques with Review & Rating.
Skills that transfer: Strategy + Observer combo, edge cases, What If? framework (Coming Soon)
Logging Framework
Decorator for composable log handlers, Singleton for the logger instance, Strategy for output formats. Takes the “pipeline of checks” idea from moderation and applies it to log routing.
Skills that transfer: Pipeline/Strategy, Decorator composition, DI singleton (Coming Soon)
Recommended path: If you’ve completed this case study, try Shopping Cart next — it’s at the same difficulty level but adds Composite pattern for nested bundles. Then move to Logging Framework for a step into infrastructure design, where the Decorator pattern takes center stage. The thinking tools you built here (What Varies? Who Reacts? What If?) carry forward to every single one.