Case Study

Logging Framework

Build your own logging framework from scratch. 8 levels, each adding one constraint that breaks your code. By the end, you'll understand why Serilog and NLog are built the way they are.

8 Levels • 12 Think First Challenges • SVG Diagrams • 10 Interview Q&As • C# / .NET
Section 1

Every App Talks — Who's Listening?

Every application you've ever used produces invisible messages. When a user logs in, there's a log entry. When a payment fails, there's a log entry. When the server runs out of memory at 2 AM and crashes, the ONLY thing that tells you what happened is the log. Logging is the black box recorder of software — like the flight recorder on an airplane. You don't think about it until something goes wrong, and then it's the most important thing in the world.

But most developers treat logging as an afterthought: Console.WriteLine("something happened") sprinkled randomly through the code. No structure, no levels (severity categories: DEBUG for developers during coding, INFO for normal operations, WARN for something unusual we recovered from, ERROR for something broken, FATAL for the app going down; levels let you filter noise from signal), no way to send logs to a file or a remote server, no context about what was happening when the message was written. That's not a logging system — that's shouting into the void and hoping someone hears.

Here's the thing: the difference between a junior developer's logging and a senior developer's logging isn't the framework they use — it's the thinking behind it. What to log, what NOT to log, how to structure it for searchability, how to avoid performance traps, how to make it testable. Those decisions come from understanding the architecture, not from importing a NuGet package.

We're going to build a real logging framework from scratch — the kind that powers Serilog (the most popular structured logging library for .NET, which treats log data as structured events rather than text strings; its "sinks," "enrichers," and "formatters" are exactly the architecture we'll discover level by level), NLog (a flexible .NET logging framework whose "targets," "layouts," and "rules" map to our sinks, formatters, and level filters: different vocabulary, identical concepts), and log4net. 8 levels, each adding one constraint that breaks your previous code. By the end, you'll understand exactly why professional logging frameworks are designed the way they are — because you'll have reinvented their architecture from first principles.

Unlike a game (Tic-Tac-Toe) or a physical system (Parking Lot), logging is a cross-cutting concern (something that touches every part of your application rather than one module; logging, authentication, error handling, and caching all weave through everything rather than belonging to a single feature) — it touches every service, every controller, every background job. That's what makes it so interesting to design.

What makes logging a particularly rich design exercise? It touches nearly every pattern in the book. You need the Observer pattern (one logger, many destinations: when a log message is produced, every registered sink gets notified, and the logger neither knows nor cares how many sinks exist) for routing to multiple destinations. You need Strategy (different formatting algorithms — plain text, JSON, XML — all doing the same job of turning a LogEntry into a string, but in different ways) for swappable formatters. You need Decorator (enrichers wrap the logging pipeline to add context such as timestamps, thread IDs, and user info without modifying the core logger) for layered enrichment. You need Singleton (the entire app shares one logger — but a DI-managed singleton registered once in the service container, not a static field, which keeps it testable and mockable) for a shared instance. It's a design pattern playground disguised as a utility class.

Why logging in an LLD interview? It tests your ability to design infrastructure code (code that supports the application but isn't a user-facing feature: logging, caching, authentication, rate limiting, configuration management) — code that must be reliable, performant, and invisible. It exposes whether you understand OCP (adding new sinks), DI (testable logger), and Decorator (enrichment). And it's a system every developer has used but few have designed. That gap is exactly what interviewers exploit.
Scope check: We're building the framework — the logger, sinks, enrichers, and configuration. We're NOT building the log viewer/dashboard (that's a separate UI concern) or the log storage backend (Elasticsearch, SQL). Think of it like building Serilog, not Seq. The framework produces logs; other tools consume them.

The Constraint Game — 8 Levels

L0: Log a Message
L1: Log Levels
L2: Multiple Sinks
L3: Enrich Logs
L4: One Logger
L5: Edge Cases
L6: Testability
L7: Scale It

Log Levels at a Glance

DEBUG
INFO
WARN
ERROR
FATAL

What You'll Build

[Diagram: your code calls logger.Log(...); the Logger filters by level, enriches with context, and routes a LogEntry to the sinks (ConsoleSink, FileSink, RemoteSink), each implementing ILogSink.Write(LogEntry). Zones: Core, Sinks, Data.]

The System Grows — Level by Level

Each level adds one constraint. Here's a thumbnail of how the class diagram expands from 1 class to a full framework:

[Diagram: class count grows level by level. L0: Logger (1 type); L1: + LogLevel, LogEntry (3); L2: + ILogSink with console and file sinks (6); L3: + IEnricher with thread and source enrichers (9); L4: + LogManager singleton (11); L5-6: + ILogger, guards, mocks (14); L7: + AsyncSink, BatchWriter, Channel<T> (18).]

System

Production-grade logging framework with log levels (severity categories — DEBUG, INFO, WARN, ERROR, FATAL — with a runtime-configurable minimum level: in dev you see everything, in production only WARN and above), multiple sinks (a sink is a destination where log messages go — console, file, database, cloud service, Slack channel — and the Observer pattern lets one log message flow to many sinks simultaneously), context enrichment, DI-friendly design, and error handling.

Patterns

Observer (when the logger produces a log entry, every registered sink is notified; the logger doesn't know or care how many sinks there are — one subject, many observers), Strategy (different formatting algorithms — plain text, JSON, XML — behind one interface; swap the formatter without touching the logger or sinks; the "what varies?" question led us here), Decorator (enrichers wrap the logging pipeline to add context — timestamp, thread ID, machine name — without modifying the core logger; each enricher adds one piece of data and passes the entry along), Singleton (the entire application shares one logger instance, registered once in the DI container rather than stored in a static field, so it remains testable and mockable)

Skills

Real-world walkthrough, "What Varies?" for sinks & formatters, Observer for fan-out, Decorator for enrichment, CREATES (the 7-step interview framework: Clarify, Requirements, Entities, API, Trade-offs, Edge cases, Scale — works for every LLD problem)

Stats

12 Think Firsts • SVG Diagrams • 10 Q&As • 80+ tooltips
~60 min (with challenges) • ~25 min (speed read)

Section 2

Before You Code — See the Real World

Before we write a single line of code, let's think about what logging actually looks like in real life. Not in a framework — in the physical world. Imagine you're a factory manager watching your production line. Things go wrong. Machines break, workers call in sick, deliveries arrive late. How do you keep track of what happened? You write it down in a log book. Each entry has a date, a description of what happened, and how serious it was. That same idea — that exact same structure — drives every logging framework ever built.

The genius of the "real-world walkthrough" technique is that it gives you the architecture for free. You don't need to know design patterns. You don't need 10 years of experience. You just need to observe the real-world process carefully and translate it to code. The nouns become classes. The verbs become methods. The relationships become interfaces. Let's do it.

You probably listed some of these: When it happened (timestamp), how bad it was (severity/level), what the message said (content), where in the code it came from (class/method name), which thread was running (thread ID), what user was affected (user context), and what went wrong technically (exception details).

Every item on your list becomes a field on our LogEntry class — the data object that holds everything about a single log message, like a row in your log book: timestamp, level, message, source, and any extra context. It's immutable: once a log entry is written, it never changes. You just designed the core data model without writing any code.

Notice something else: you probably listed things in two categories — things about the event itself (timestamp, message, severity) and context around the event (thread ID, user, machine name). That distinction matters. The event data is core to every log entry. The context data is enrichment — optional layers added on top. We'll separate these in Level 3 using the Decorator pattern.
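That core-vs-enrichment split can already be sketched in code. This is a hedged sketch only — the interface and class names here are assumptions, and Level 3 derives the real design:

```csharp
// Sketch: core event data lives on the entry; context is layered on
// by enrichers, each adding one piece of data to a property bag.
using System;
using System.Collections.Generic;

public interface ILogEnricher
{
    // Adds one piece of context and passes the bag along.
    void Enrich(IDictionary<string, object> properties);
}

public sealed class ThreadEnricher : ILogEnricher
{
    public void Enrich(IDictionary<string, object> properties)
        => properties["ThreadId"] = Environment.CurrentManagedThreadId;
}
```

Note how the enricher knows nothing about the logger or the sinks; it only touches the context bag. That isolation is what makes enrichers stackable.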

1. Event: something happens (user login, payment fail...)
2. Severity 🚨: how bad is it? (DEBUG, INFO, WARN, ERROR...)
3. Create Entry 📝: build the log record (timestamp + level + message)
4. Route 📨: send to destinations (console, file, remote server...)
5. Review 🔍: search & diagnose (filter, grep, dashboards)

Stage 1: Something Happens

What you SEE: A user clicks "Pay Now." A background job finishes processing 10,000 records. An API request times out after 30 seconds. A disk fills up to 98% capacity. Any event in your application's life.

What happens behind the scenes: The code at the point of the event decides this is worth recording. Not every line of code produces a log — just the moments that matter for debugging, auditing, or monitoring. Think of it like a journalist: they don't write down every breath they take. They write down the interesting things. Your code is the journalist; the logger is the notebook.

Design insight: The first noun: Event (or "log message"). Someone in your code decided "this is worth writing down." That decision point is where logger.Log() gets called. The caller decides what to log — the logger decides how to handle it.

Stage 2: How Bad Is It?

What you SEE: Not all events are equal. "User logged in" is informational. "Payment failed" is an error. "Database connection pool exhausted" is fatal (the application cannot continue; the process is about to crash or become unusable, and these typically trigger immediate alerts to on-call engineers — you might see one a month in a healthy system). The person writing the log decides how serious it is.

What happens behind the scenes: The severity becomes a filter. In development, you want to see everything (DEBUG and up). In production, you only want WARN and above — otherwise you'd drown in thousands of DEBUG messages per second. The minimum level is configurable without changing code. Think of it like a volume knob — turn it down in production (only loud/important things get through), turn it up in development (hear everything). The knob doesn't change the music; it changes what you hear.

Design insight: New noun: LogLevel. It's a category with a natural ordering — DEBUG < INFO < WARN < ERROR < FATAL. That ordering makes filtering trivial: "show me everything at WARN or above." A fixed set of ordered categories — that's an enum with integer values: by assigning DEBUG=0, INFO=1, WARN=2, and so on, you get comparison for free, and if (entry.Level >= minimumLevel) filters in one line.
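The enum idea above can be sketched directly. Names here are assumptions — Level 1 formalizes them:

```csharp
// Sketch: explicit integer values make severity comparison a single >= check.
public enum LogLevel
{
    Debug = 0,
    Info  = 1,
    Warn  = 2,
    Error = 3,
    Fatal = 4
}

public static class LevelFilterDemo
{
    // Integer ordering gives us the "WARN or above" filter for free.
    public static bool ShouldLog(LogLevel entryLevel, LogLevel minimumLevel)
        => entryLevel >= minimumLevel;
}
```

With minimumLevel set to LogLevel.Warn, ShouldLog(LogLevel.Debug, ...) is false and ShouldLog(LogLevel.Error, ...) is true — the production "volume knob" in one comparison.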

Stage 3: Create the Entry

What you SEE: A single line in a log file: [2025-01-15 14:32:07] [ERROR] PaymentService: Card declined for order #4521. That single line packs a lot: when, how bad, where, and what.

What happens behind the scenes: The logger assembles a log entry — a bundle of data: timestamp, level, message, source (which class/method), and optional extras like exception details or user ID. This entry is an immutable fact — once it happened, it happened. You never modify a log entry after creation. That's an important design signal: when data is created once and never changes, it should be modeled as an immutable type.

Design insight: Key noun: LogEntry. It has multiple fields (timestamp, level, message, source, exception). It never changes after creation — that makes it a record: in C#, a positional record like record LogEntry(DateTime Timestamp, LogLevel Level, string Message) gets init-only properties, value-based Equals, GetHashCode, and ToString automatically — a natural fit for data that's created once and never modified. The format of the entry (plain text vs JSON vs XML) can vary — that's a "what varies?" signal.
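As a hedged sketch of that record — field names are assumptions until Level 1 pins them down:

```csharp
// Sketch: a positional record modeling the immutable log-book row.
using System;

public enum LogLevel { Debug, Info, Warn, Error, Fatal }

public record LogEntry(
    DateTime   Timestamp,
    LogLevel   Level,
    string     Message,
    string?    Source    = null,   // e.g. "PaymentService"
    Exception? Exception = null);  // optional technical details
```

Because the properties are init-only, a LogEntry cannot be mutated after construction — the compiler enforces the "immutable fact" rule for us.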

Stage 4: Route to Destinations

What you SEE: The same log message appears on your terminal, gets written to a file, and shows up in your cloud monitoring dashboard — all at the same time. One event, many destinations.

What happens behind the scenes: The logger doesn't know or care how many sinks (destinations for log output: console, file, database, HTTP endpoint, email, Slack webhook) are attached. It just says "here's a log entry" and every registered sink does its thing. Add a new sink (email alerts for FATAL?) — the logger doesn't change. Remove the file sink — the logger doesn't change. This is the textbook definition of the Open/Closed Principle, the O in SOLID: open for extension (add new implementations of the ILogSink interface), closed for modification (the logger class never changes).

Design insight: Key noun: Sink (or "log destination"). One logger, many sinks. When something happens, all observers get notified — that's the Observer pattern: a subject (the logger) maintains a list of observers (sinks) and notifies them automatically, and adding or removing observers never requires changing the subject. And since all sinks do the same job (write a log entry) but in different ways, they share an interface.
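A minimal sketch of the one-logger-many-sinks idea — all names here are assumptions, and Level 2 derives the real interface:

```csharp
// Sketch: the logger broadcasts to every registered sink (Observer).
using System;
using System.Collections.Generic;

public interface ILogSink
{
    void Write(string formattedEntry);
}

public sealed class ConsoleSink : ILogSink
{
    public void Write(string formattedEntry) => Console.WriteLine(formattedEntry);
}

public sealed class BroadcastLogger
{
    private readonly List<ILogSink> _sinks = new();

    public void AddSink(ILogSink sink) => _sinks.Add(sink);

    public void Log(string message)
    {
        var line = $"[{DateTime.UtcNow:yyyy-MM-dd HH:mm:ss.fff}] {message}";
        foreach (var sink in _sinks)   // notify every observer
            sink.Write(line);
    }
}
```

Adding a FileSink or SlackSink means writing one new class that implements ILogSink; BroadcastLogger itself never changes — that is the Open/Closed payoff.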

Stage 5: Review & Diagnose

What you SEE: It's 2 AM. The pager goes off. You open the log file or monitoring dashboard and start searching: "Show me all ERROR entries from PaymentService in the last hour." You filter by level, by time, by source. The structure of the log entries is what makes this possible.

What happens behind the scenes: This is where structured logging pays off. If your logs are just random strings like "something went wrong", searching is painful — you're doing text matching on unstructured prose. But if every entry has typed fields (level, timestamp, source, exception), filtering becomes trivial: "give me all ERROR entries from PaymentService between 2 AM and 3 AM." The format of the output matters too — plain text for humans reading a console, JSON for machines: structured entries like {"level":"ERROR","message":"Card declined","orderId":4521} ingest cleanly into log aggregation tools like Elasticsearch, Splunk, or Seq.

Design insight: New concept: Formatter. The same log entry can be displayed as plain text, JSON, or XML. The data is the same — only the representation varies. "Multiple algorithms, same interface" — that's the Strategy pattern: a family of interchangeable algorithms behind one interface (ILogFormatter with PlainTextFormatter, JsonFormatter, etc.), swappable without touching the logger or sinks.
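Sketched as code — the names are assumptions, and the formatter is derived properly in a later level:

```csharp
// Sketch: same data, interchangeable representations (Strategy).
using System;

public interface ILogFormatter
{
    string Format(DateTime timestamp, string level, string message);
}

public sealed class PlainTextFormatter : ILogFormatter
{
    public string Format(DateTime timestamp, string level, string message)
        => $"[{timestamp:yyyy-MM-dd HH:mm:ss}] [{level}] {message}";
}

public sealed class JsonFormatter : ILogFormatter
{
    // Naive JSON for illustration only; real formatters must escape strings.
    public string Format(DateTime timestamp, string level, string message)
        => $"{{\"timestamp\":\"{timestamp:O}\",\"level\":\"{level}\",\"message\":\"{message}\"}}";
}
```

A console sink might hold a PlainTextFormatter while a remote sink holds a JsonFormatter — same LogEntry, two representations, zero changes to the logger.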

What We Discovered

[Diagram: real world → code]
How serious is it? → LogLevel (enum)
One line in the log → LogEntry (record)
The person recording → Logger (class)
Where it gets stored → ILogSink (interface)
How it looks on paper → ILogFormatter (Strategy)

Patterns Hiding in Plain Sight

We didn't go looking for design patterns. We just described how logging works in the real world — and three patterns appeared on their own. This is the "What Varies?" technique: when you see something that can change independently, there's almost always a pattern waiting to be discovered.

Observer: one log entry → many sinks. "Add a Slack sink? Logger doesn't change."
Strategy: same data → different formats. "Console = text, remote = JSON."
Decorator: wrap to enrich, adding context layers. "Add thread ID? Wrap the logger."
The real world already told us the architecture. Five stages led us to five responsibilities. One event going to many destinations revealed Observer. Different output formats revealed Strategy. Layers of context revealed Decorator. We didn't "pick" these patterns from a catalog — the real-world walkthrough showed us where they live. This is the power of "see the real world before you code." Senior engineers do this instinctively. After a few case studies, so will you.

This is also the answer to one of the most common interview questions: "How do you decide which design pattern to use?" The answer is: you don't pick patterns from a menu. You observe the problem, ask "what varies independently?", and the right pattern emerges naturally. The real world is the world's best design pattern instructor.

Discovery | Real World | Code | Type
Log Level | How serious is the event? | LogLevel | enum (ordered severity)
Log Entry | One line in the log book | LogEntry | record (immutable fact)
Logger | The component that records events | Logger | class (stateful coordinator)
Sink | Where logs get stored | ILogSink | interface (Observer)
Formatter | How log text looks | ILogFormatter | interface (Strategy)
Enricher | Extra context added to entries | ILogEnricher | interface (Decorator)
Minimum Level | Noise filter setting | LogLevel | configuration value

Think our walkthrough was just theoretical? Let's check how our discoveries map to the real frameworks that millions of .NET developers use every day:

Our Discovery | Serilog | NLog | log4net
LogLevel (enum) | LogEventLevel | LogLevel | Level
LogEntry (record) | LogEvent | LogEventInfo | LoggingEvent
ILogSink (Observer) | ILogEventSink | Target | Appender
ILogFormatter (Strategy) | ITextFormatter | Layout | Layout
Enricher (Decorator) | ILogEventEnricher | LayoutRenderer | Filter

Different names, identical concepts. The real-world walkthrough gave us the same architecture that professional framework authors arrived at independently. Serilog calls it a "Sink," NLog calls it a "Target," log4net calls it an "Appender" — but they're all the same thing: a destination that receives log entries. The names are cosmetic; the structure is universal. When you understand the structure, you can learn any framework in minutes instead of hours.

  1. Name 3 entities we discovered from the real-world walkthrough.
  2. Which entity is immutable (never changes after creation)?
  3. Which design pattern does "one logger, many sinks" suggest?
  4. Why does the formatting of log entries suggest the Strategy pattern?
  5. What's the difference between a "sink" and a "formatter"?

Answers: (1) LogLevel, LogEntry, Logger, Sink, Formatter — any 3. (2) LogEntry — it's a record of something that happened. (3) Observer — one subject broadcasts to many observers. (4) Because the same data (LogEntry) can be formatted multiple ways (text, JSON, XML) — "multiple algorithms, same interface." (5) A sink is where logs go (console, file, remote server). A formatter is how a log entry is converted to text. A sink may use a formatter, but they're separate concerns.

Skill Unlocked: Real-World Walkthrough

Walk through the physical process before coding. Every noun (event, severity, entry, destination, format) became a class, enum, or interface. Every verb (record, filter, route, format) became a method. The relationships between nouns (one logger → many sinks) revealed design patterns (Observer). The real world is your first class diagram — free of charge. This technique works for parking lots, elevators, vending machines, notification systems, anything. Master it once, use it everywhere.

Section 3 🟢 EASY

Level 0 — Log a Message

Constraint: "Log a message to the console with a timestamp."
This is where it all begins. The simplest possible logger — no levels, no sinks, no formatting options. Just: take a string, slap a timestamp on it, print it. We'll feel the pain of missing features soon enough.

Every great framework starts as a humble one-liner. For logging, that means: someone hands you a message, you write it to the console with a timestamp. That's it. No severity levels, no file output, no structured data — just the bare minimum that technically qualifies as "logging." The goal of Level 0 is to get something working, then let the next constraint break it.

Why start this simple? Because the constraints reveal the design. If we start with ILogSink and ILogFormatter on day one, we're guessing at what we'll need. That's premature abstraction (adding interfaces, patterns, and layers of indirection before you know you need them; it feels productive — "look how clean this is!" — but it makes the code harder to understand and change, because you're guessing at future requirements) — one of the most common traps in software design. Let the constraints tell you when to abstract.

But when Level 1 says "distinguish INFO from ERROR" and our simple logger can't do it — that pain tells us exactly what abstraction to add. When Level 2 says "write to a file too" and we can't because Console.WriteLine is hardcoded — that's when the ILogSink interface earns its existence. Each pain point is a signpost pointing toward the right design decision. No guessing needed.

Think First #2

What's the simplest class that can log a message to the console? What data does it need? What single method does the caller use? Take 60 seconds.

60 seconds — try it before peeking.

A class called SimpleLogger with one method: Log(string message). Inside, it calls Console.WriteLine() with DateTime.UtcNow prepended — always UTC for log timestamps, because if your servers are in different time zones, UTC gives you a single consistent timeline, whereas local time (DateTime.Now) causes confusion when correlating logs across regions. No constructor, no fields, no config. ~5 lines of actual code.

Your Internal Monologue

"OK, simplest thing possible... A class with a Log() method that writes to the console. I could just make it a static method — Logger.Log("hello"). That's even simpler. One line, no new keyword needed."

"But wait... if I make it static, where does configuration go? When I add sinks and levels later, I'd need static fields for state. That makes it a global mutable state nightmare. And impossible to mock in tests. Hmm."

"Actually, let me make it an instance class instead. It's two extra lines for the caller (var logger = new SimpleLogger();), but it sets us up for everything that comes later. Configuration becomes fields. Different configs become different instances. And I can extract an ILogger interface later for testability."

"For the timestamp... DateTime.Now? No, DateTime.UtcNow. If this ever runs on servers in different time zones, UTC is the only sane choice. And I'll include milliseconds — when two things happen within the same second, ordering matters. Small details, but the kind of details that save you at 2 AM."

"Format: [timestamp] message. Simple, readable, grepable. I know this is incomplete — no levels, no flexibility — but that's OK. Level 0 is about getting started, not getting it perfect. I can already see 4 things that are wrong with this. Good. That's fuel for the next levels."

What Would You Do?

StaticLogger.cs
public static class Logger
{
    public static void Log(string message)
    {
        Console.WriteLine($"[{DateTime.UtcNow:yyyy-MM-dd HH:mm:ss}] {message}");
    }
}

// Usage:
Logger.Log("Application started");
The catch: It works for this level. But static methods can't hold configuration (minimum level, sinks list). When we add features in later levels, we'd need static fields for state — making it impossible to mock or test. (In unit testing, a mock is a fake implementation you swap in to isolate the code under test; you can't easily mock a static class because it's hardwired into every caller, whereas instance-based code behind an interface is easily mockable.) Static is a dead end for anything configurable.
SimpleLogger.cs
public sealed class SimpleLogger
{
    public void Log(string message)
    {
        Console.WriteLine($"[{DateTime.UtcNow:yyyy-MM-dd HH:mm:ss}] {message}");
    }
}

// Usage:
var logger = new SimpleLogger();
logger.Log("Application started");
Why this wins: An instance can hold configuration. When we add sinks, levels, and formatters, they become fields on the instance. We can pass different logger instances with different configs. And we can extract an ILogger interface for testing. A few extra characters now saves a full rewrite later.
Criterion | Static Method | Instance Class
Configuration | Needs static fields (global state) | Fields on instance (clean)
Multiple configs | One global config only | Different instances, different configs
Testability | Can't mock easily | Extract ILogger interface
Thread safety | Shared static state = risk | Instance isolation possible
Simplicity | Slightly simpler call site | One extra line (new)
Decision Compass: "Will this need configuration later? → Instance class. Quick throwaway script? → Static is fine." For a logging framework that grows across 8 levels, instance wins easily.
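To make the testability row concrete, here is a hedged sketch of the interface extraction. ILogger and FakeLogger are assumed names for illustration; Level 6 makes this official:

```csharp
// Sketch: callers depend on ILogger, so tests can swap in a fake
// that records messages instead of printing them.
using System;
using System.Collections.Generic;

public interface ILogger
{
    void Log(string message);
}

public sealed class SimpleLogger : ILogger
{
    public void Log(string message)
        => Console.WriteLine($"[{DateTime.UtcNow:yyyy-MM-dd HH:mm:ss}] {message}");
}

public sealed class FakeLogger : ILogger
{
    public List<string> Messages { get; } = new();

    public void Log(string message) => Messages.Add(message);
}
```

A test constructs the class under test with a FakeLogger and then asserts on Messages — no console capture, no static state to reset between tests.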

Here's the complete Level 0 code. Read every line — there are only about 10 of them.

SimpleLogger.cs — Level 0
public sealed class SimpleLogger
{
    public void Log(string message)
    {
        var timestamp = DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff");
        Console.WriteLine($"[{timestamp}] {message}");
    }
}

Let's walk through each piece and understand why it's there:

Notice the deliberate choices here. We used DateTime.UtcNow instead of DateTime.Now (timezone safety). We included milliseconds in the format (ordering precision). We made the class sealed (signals it's not designed for inheritance). We used string interpolation for clean formatting — $"[{timestamp}] {message}" embeds variables directly in the string, cleaner than string.Format() or concatenation with +. Small decisions, but each one matters when you're building infrastructure that the entire application depends on.

Quick test to prove it works:

Program.cs
var logger = new SimpleLogger();
logger.Log("Application started");
logger.Log("Processing user request...");
logger.Log("Request completed successfully");

// Output:
// [2025-01-15 14:32:07.123] Application started
// [2025-01-15 14:32:07.125] Processing user request...
// [2025-01-15 14:32:07.128] Request completed successfully

10 lines. It works. But can you already see what's missing?

We'll feel each of these pains in the coming levels. That's the point of incremental design — you don't add complexity until you need it.

Even Level 0 Is Better Than Raw Console.WriteLine

RawConsole.cs
// Scattered throughout the codebase...
Console.WriteLine("starting payment");
Console.WriteLine("payment ok");
Console.WriteLine("error happened");
Console.WriteLine("retrying...");
Console.WriteLine("done");

// Output:
// starting payment
// payment ok
// error happened
// retrying...
// done
//
// When did "error happened" occur? Which service?
// How severe? No idea.
Problems:
  • No timestamps — when did it happen?
  • No severity — how bad is it?
  • No consistency — every developer formats messages differently
  • No way to redirect to a file — console only
  • No way to filter noise — it's all or nothing
  • In production with 50 services, this is completely useless for debugging
WithSimpleLogger.cs
var logger = new SimpleLogger();
logger.Log("Starting payment processing");
logger.Log("Payment completed successfully");
logger.Log("Error: card declined for order #4521");
logger.Log("Retrying payment with fallback provider");
logger.Log("Payment retry succeeded");

// Output:
// [2025-01-15 14:32:07.123] Starting payment processing
// [2025-01-15 14:32:07.456] Payment completed successfully
// [2025-01-15 14:32:07.789] Error: card declined for order #4521
// [2025-01-15 14:32:08.012] Retrying payment with fallback provider
// [2025-01-15 14:32:08.234] Payment retry succeeded
//
// Now we know WHEN each thing happened.
// Centralized format. Easy to grep.
Even this basic version gives us:
  • Consistent timestamps — easy to grep and sort chronologically
  • Centralized formatting — change the format once, every log line updates
  • Searchable entry point — logger.Log is easy to find across the codebase
  • Instance-based — we can extend with levels, sinks, enrichers without changing call sites
  • Already better than the majority of "logging" in hobby projects and prototypes

Our Level 0:

Level 0 Output
[2025-01-15 14:32:07.123] User signed in
[2025-01-15 14:32:07.456] Payment failed

Production Serilog (what we're building toward):

Serilog Production Output
[14:32:07 INF] [AuthService] [Thread:12] [User:john@acme.com] User signed in
[14:32:07 ERR] [PaymentService] [Thread:15] [User:jane@acme.com] Payment failed
  CardType=Visa LastFour=4521 Amount=99.99
  System.Net.Http.HttpRequestException: Connection refused

See the gap? Level, source class, thread ID, user context, structured properties, exception stack traces. Each of those missing pieces becomes a constraint in one of our upcoming levels. By Level 7, our output will look like the Serilog version.

What Happens When You Call logger.Log()

[Diagram: your code calls Log("msg") with a string; SimpleLogger adds a timestamp and formats the string; the formatted line reaches the Console as [2025-01-15 14:32:07] msg]

Growing Diagram — After Level 0

Class Diagram — Level 0
[Class diagram after Level 0: SimpleLogger (no fields) with + Log(message: string) : void. 1 class, 1 method — the simplest starting point]

What's Missing? (Pain Points for Next Levels)

No Severity: "App started" and "DB crashed" look identical → Level 1
One Destination: console only; close the terminal = logs gone forever → Level 2
No Context: which class? Which thread? Which user? → Level 3
Not Testable: Console.WriteLine is hardcoded; can't mock it → Level 6

Before / After Your Brain

Before This Level

You see "logging framework" and think "just use Console.WriteLine everywhere" or "just install Serilog and move on."

After This Level

You know to start with the stupidest possible version (one class, one method, 7 lines), immediately identify what's missing, and let those pain points guide you to the next abstraction. You now instinctively ask "what's wrong with this?" instead of "what pattern should I add?"

What Level 0 Has | What Level 0 Lacks
✅ Centralized logging method | ❌ No severity (DEBUG/INFO/ERROR)
✅ UTC timestamp with milliseconds | ❌ No file/remote output — console only
✅ Instance-based (extensible) | ❌ No context (source class, thread, user)
✅ Consistent output format | ❌ No filtering (can't silence DEBUG in prod)
✅ Searchable via grep | ❌ No structured data (just free-text strings)

Every item in the "Lacks" column becomes a constraint in one of the next 7 levels. That's how incremental design works — the gaps in your current solution are the roadmap for what comes next. This is also how you should present in an interview: "Here's my simplest version. Here are 4 things wrong with it. Let me evolve it." That progression shows design thinking, not design memorizing.

Common interview mistake: Many candidates jump straight to ILogger interfaces, Sink abstractions, and factory patterns in their first minute. That screams "memorized solution." Interviewers want to see you build up from simple to complex. Starting with this 7-line class and saying "this works, but here's what's wrong with it..." shows much stronger design thinking than arriving at the perfect architecture on slide one.
👃 "The Starting Line" Smell — When you're tempted to design the entire system upfront, resist. Ask: "What's the absolute minimum that technically works?" Build that. Then list what's wrong with it. Each pain point becomes the constraint for the next level. This isn't laziness — it's incremental designA development approach where you start simple and evolve the design as requirements emerge. Each constraint (new feature, new requirement) reveals the next abstraction. This is how real-world frameworks grow — Serilog didn't start with 200 sinks.. Every great framework started as a few lines. Serilog v0.1 probably looked a lot like our Level 0.
Transfer: This "start with the dumbest thing that works" approach applies everywhere:
  • Notification System — Level 0: send a plain email. No templates, no channels (SMS, push), no queuing. Just SendEmail(to, body).
  • Cache — Level 0: store a key-value pair in a Dictionary. No expiration, no eviction, no thread safety. Just Get(key) and Set(key, value).
  • Rate Limiter — Level 0: count requests per second with a simple counter. No sliding window, no distributed coordination. Just if (count > limit) reject.
The pattern is universal: build the skeleton, feel the pain, discover the fix.
Level 0 complete. Types: SimpleLogger (1 class). Lines of code: ~7. Patterns used: none yet — that's intentional. Pain points identified: 4 (no levels, one sink, no context, not testable). Each pain becomes the constraint for the next level.
Next up — Level 1: Log Levels. Right now, "user logged in" and "database crashed" look exactly the same in our output. An INFO message and a FATAL error are visually indistinguishable. That's a serious problem. In the next level, we'll add log levels (DEBUG, INFO, WARN, ERROR, FATAL) with integer ordering, a minimum level filter so you can silence noise in production while keeping full detail in development, and a LogEntry record to bundle timestamp + level + message into a single structured object. Our SimpleLogger is about to evolve.
Section 4 🟢 EASY

Level 1 — Log Levels

New Constraint: "Not every message matters equally. Debug noise shouldn't wake you up at 3 AM — but a Fatal error absolutely should."
What Breaks?

Our Level 0 SimpleLogger has one method: Log(string message). Every message is treated identically. A "user clicked button" debug trace and a "database is on fire" critical alert look exactly the same in the output. You can't filter, you can't prioritize, and in production you're drowning in thousands of debug lines while hunting for the one error that matters.

Real logging systems have a severity ladder. Think of it like a volume knob — you set a minimum level, and anything quieter gets ignored. In development you crank it all the way down to Debug so you see everything. In production you turn it up to Warning so only important stuff gets through. Same code, different verbosity, zero changes.

Think First #3

You need five severity levels: Debug, Info, Warning, Error, Fatal. A logger has a MinimumLevel setting. If someone logs a Debug message but the minimum is Warning, it should be silently ignored. How would you represent the levels so that "at or above minimum" is a simple comparison, not a chain of if-else?

60 seconds — think about how enums work under the hood.

Make LogLevel an enumIn C#, an enum is a value type that maps friendly names to integers under the hood. Debug = 0, Info = 1, Warning = 2 etc. This lets you compare levels with simple >= instead of string matching or switch statements. with ascending integer values: Debug = 0, Info = 1, Warning = 2, Error = 3, Fatal = 4. Then filtering is just if (entry.Level >= MinimumLevel) — one line, no branching. The numeric ordering does all the work.

Your Internal Monologue

"Right now every message is just a string. I need to tag each message with a severity. I could use strings like "DEBUG", "ERROR"... but then filtering means string comparison, and someone could typo "DEBG" and it'd silently pass through."

"An enum is better — the compiler catches typos. And if I give them ascending numbers, I can filter with a single >= check. Debug is 0, Info is 1, Warning is 2, Error is 3, Fatal is 4. If minimum is Warning (2), then Debug (0) and Info (1) are below — filtered out. Error (3) and Fatal (4) are above — they pass. That's... really elegant."

"I also need to bundle the level WITH the message. A plain string won't cut it anymore. I need a recordA C# record is an immutable data type — once created, its values can't change. Perfect for log entries because a log message should never be modified after it's created. Records also get free equality comparison and a nice ToString() output.LogEntry — that carries the level, the message, and a timestamp. Immutable, because you should never edit a log after the fact."

The Severity Ladder

Five levels, ordered from least critical to most. Each level includes everything above it — setting minimum to Warning means you also see Error and Fatal.

Debug = 0 → Info = 1 → Warning = 2 → Error = 3 → Fatal = 4 (least critical → most critical)

The Filter in Action

Set the minimum to Warning. Everything below that line gets silently dropped. Everything at or above passes through.

MinimumLevel = Warning
✘ BLOCKED: DEBUG "User clicked button" · INFO "Request processed OK"
✔ PASSED: WARN "Cache miss, falling back to DB" · ERROR "Payment gateway timeout" · FATAL "Database connection lost"

What Would You Do?

StringTags.cs
public void Log(string level, string message)
{
    // "DEBUG", "INFO", "WARNING"...
    // How do you compare? Alphabetical won't work.
    // "DEBUG" < "ERROR" alphabetically, but
    // "WARNING" > "INFO"? Good luck filtering.
    if (ShouldLog(level))
        Console.WriteLine($"[{level}] {message}");
}

// Caller can pass ANYTHING: "DEBG", "warn", "CRITICAL"
// No compile-time safety. Typos become silent bugs.
Consequence: Strings have no inherent ordering. You'd need a dictionary mapping each string to a number, which is just reinventing enums with extra steps. And typos compile just fine — "DEBG" won't raise an error until production.
IntConstants.cs
public const int DEBUG = 0;
public const int INFO = 1;
public const int WARNING = 2;
public const int ERROR = 3;
public const int FATAL = 4;

public void Log(int level, string message)
{
    if (level >= _minimumLevel)
        Console.WriteLine($"[{level}] {message}");
}

// Filtering works! But...
// Log(42, "oops") compiles fine. What's level 42?
// Log output shows "[2] Cache miss" — what's 2?
Consequence: Filtering works because integers have natural ordering. But any integer is valid — Log(42, "oops") compiles fine. And the output shows numbers instead of names. Close, but not safe enough.
EnumRecord.cs
public enum LogLevel { Debug = 0, Info = 1, Warning = 2, Error = 3, Fatal = 4 }

public record LogEntry(LogLevel Level, string Message, DateTime Timestamp);

public void Log(LogLevel level, string message)
{
    if (level >= MinimumLevel)  // One comparison. Done.
    {
        var entry = new LogEntry(level, message, DateTime.UtcNow);
        Console.WriteLine($"[{entry.Level}] {entry.Message}");
    }
}

// Log(LogLevel.Debug, "click") — clear, safe, filterable
// Log(42, "oops") — COMPILE ERROR. Can't happen.
This is the winner. The enum gives you type safety (no invalid values), natural ordering (comparison with >=), and readable output ([Warning] not [2]). The record bundles level + message + timestamp into an immutableImmutable means "cannot be changed after creation." A log entry should never be secretly modified — that would be like erasing evidence. Records enforce this by default in C#. package. Best of both worlds.

The Solution

Three small pieces: an enum to rank severity, a record to bundle each log entry, and a one-line filter that drops anything below your threshold.

LogLevel.cs — the severity ladder as code
public enum LogLevel
{
    Debug   = 0,   // Developer-only noise: variable values, method entry/exit
    Info    = 1,   // Normal operations: "request processed", "user logged in"
    Warning = 2,   // Something odd but recoverable: cache miss, retry needed
    Error   = 3,   // Something failed: timeout, invalid input, caught exception
    Fatal   = 4    // System is dying: out of memory, database unreachable
}

The ascending integers are the key. Debug (0) < Info (1) < Warning (2) < Error (3) < Fatal (4). Filtering becomes a single >= comparison — no switch, no dictionary, no string parsing.

LogEntry.cs — immutable log message
public record LogEntry(
    LogLevel  Level,      // How severe is this?
    string    Message,    // What happened?
    DateTime  Timestamp   // When did it happen?
);

// Usage:
// var entry = new LogEntry(LogLevel.Error, "Timeout", DateTime.UtcNow);
// entry.Message = "changed"; // COMPILE ERROR — records are immutable

A recordRecords in C# are reference types with value-based equality. Two LogEntry instances with the same Level, Message, and Timestamp are considered equal. They also get a free, nicely formatted ToString() method. is perfect here. Log entries are facts — they describe something that already happened. Changing a log after the fact would be like rewriting history. Immutability by default prevents that.
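The value-based equality and free ToString() mentioned above are easy to see in a few lines. This demo (the class name is ours) just exercises standard C# record behavior:

```csharp
using System;

public enum LogLevel { Debug = 0, Info = 1, Warning = 2, Error = 3, Fatal = 4 }
public record LogEntry(LogLevel Level, string Message, DateTime Timestamp);

public static class RecordDemo
{
    public static void Main()
    {
        var ts = new DateTime(2025, 1, 15, 14, 32, 7, DateTimeKind.Utc);
        var a = new LogEntry(LogLevel.Error, "Timeout", ts);
        var b = new LogEntry(LogLevel.Error, "Timeout", ts);

        // Records compare by value: same Level + Message + Timestamp = equal
        Console.WriteLine(a == b);  // True

        // Records also get a readable ToString() for free, roughly:
        // LogEntry { Level = Error, Message = Timeout, Timestamp = ... }
        Console.WriteLine(a);
    }
}
```

Two separately constructed entries with identical values are equal, which also makes log entries pleasant to assert on in unit tests.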

Logger.cs — Level 1 with filtering
public class Logger
{
    public LogLevel MinimumLevel { get; set; } = LogLevel.Debug; // Level 1: configurable threshold

    public void Log(LogLevel level, string message)
    {
        if (level < MinimumLevel) return;  // The entire filter — one line

        var entry = new LogEntry(level, message, DateTime.UtcNow);
        Console.WriteLine($"[{entry.Timestamp:HH:mm:ss}] [{entry.Level}] {entry.Message}");
    }

    // Convenience methods — so callers don't repeat LogLevel every time
    public void Debug(string msg)   => Log(LogLevel.Debug, msg);
    public void Info(string msg)    => Log(LogLevel.Info, msg);
    public void Warning(string msg) => Log(LogLevel.Warning, msg);
    public void Error(string msg)   => Log(LogLevel.Error, msg);
    public void Fatal(string msg)   => Log(LogLevel.Fatal, msg);
}

The filter is the star: if (level < MinimumLevel) return;. One line decides whether a message lives or dies. The convenience methods (Debug(), Info(), etc.) are just sugar — they call Log() with the right level so callers don't have to type LogLevel.Warning every time.

Growing Diagram — Level 1

Level 0 was just a Logger with Log(string). Now we've added LogLevel, LogEntry, and filtering. New pieces are highlighted.

Logger (+ MinimumLevel: LogLevel, + Log(level, msg), Debug() …) · «enum» LogLevel (Debug | Info | Warning | Error | Fatal) · «record» LogEntry (Level | Message | Timestamp). Bright border = new in Level 1; muted = from Level 0.

Before / After Your Brain

Before This Level

You log everything at the same volume. Debugging means scrolling through thousands of lines hoping to spot the one error.

After This Level

You instinctively tag every message with a severity and build a one-line filter using enum ordering. Noise disappears; signal stays.

Smell → Pattern: Categories Without Behavior — When your "types" differ only in identity (Debug vs. Error) but don't have unique methods or behavior, use an enum, not a class hierarchy. If they had different behaviors (like game states), you'd use the State pattern. But log levels are just ranked labels — enum is perfect.
Transfer: Same technique in a Notification System: Priority.Low, Medium, High, Critical. Users set their notification thresholdJust like MinimumLevel in logging, a notification threshold lets users say "only bother me for High priority and above." Same enum-based filtering, different domain. and only receive alerts at or above it. Same enum + >= filter, different domain.
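That transfer can be sketched in a few lines. All of the names here (Priority, NotificationService, Threshold) are invented for illustration; only the mechanics carry over from this level:

```csharp
using System;

// Hypothetical notification domain: same ranked-enum idea as LogLevel
public enum Priority { Low = 0, Medium = 1, High = 2, Critical = 3 }

public class NotificationService
{
    // Same role as MinimumLevel in the Logger
    public Priority Threshold { get; set; } = Priority.Low;

    public void Notify(Priority priority, string message)
    {
        if (priority < Threshold) return;   // the identical one-line filter
        Console.WriteLine($"[{priority}] {message}");
    }
}
```

With `Threshold = Priority.High`, a Medium notification is silently dropped and a Critical one goes through, exactly like Debug vs. Fatal against MinimumLevel.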
Section 5 🟡 MEDIUM

Level 2 — Multiple Destinations

New Constraint: "Logs must go to the console AND a file AND a database. Next sprint the team wants Slack alerts too — and adding Slack should require ZERO changes to existing code."
What Breaks?

Our Level 1 Logger hardcodes Console.WriteLine(). Want to also write to a file? You'd add a StreamWriter call right next to it. Database? Another block of ADO.NET code. Slack? More lines jammed into the same method. Every new destination means cracking open Logger.Log() and stuffing more code in. The method grows, the class grows, and testing becomes a nightmare because you can't log to the console without also hitting the database.

Think about it this way: the act of writing a log entry is the same idea regardless of where it goes. Console, file, database, Slack — they all do the same thing (receive a log entry and put it somewhere) but in different ways. When you have multiple ways to do the same thing, and you want to swap or combine them freely, that's the textbook signal for the Strategy patternThe Strategy pattern defines a family of interchangeable algorithms behind a common interface. The caller picks which strategy to use at runtime without knowing the implementation details. Here, each "sink" (destination) is a strategy for writing logs..

Think First #4

Design an abstraction so that the Logger doesn't know (or care) where logs end up. It should be possible to send one log entry to multiple destinations simultaneously. And adding a brand-new destination (Slack, email, whatever) should mean writing one new class — nothing else changes.

60 seconds — think interface + list.

Create an ILogSink interface with one method: Write(LogEntry entry). Each destination (console, file, database) implements it. The Logger holds a List<ILogSink> and loops through all of them for every log entry. Adding Slack = one new class implementing ILogSink, then add it to the list. Zero existing code changes.

Your Internal Monologue

"I could just add more Console.WriteLine() and File.AppendAllText() calls inside Log()... but that's the same mistake as hardcoding pricing strategies in the Parking Lot. Every new destination means modifying the Logger class. That's an OCP violationThe Open/Closed Principle says a class should be open for extension (add new behavior) but closed for modification (don't change existing code). Adding a new sink should extend the system, not modify the Logger.."

"What if I make each destination its own class? They all do the same thing — take a LogEntry and put it somewhere. That's a common interfaceAn interface in C# defines a contract: "any class that implements me promises to have these methods." ILogSink says: "I promise I have a Write(LogEntry) method." The Logger doesn't care HOW you write — just that you CAN.: ILogSink with a Write(LogEntry) method."

"And I don't just want ONE sink — I want multiple simultaneously. So the Logger holds a List<ILogSink>. When a log comes in, loop through the list and call Write() on each one. Fan-out! Adding Slack later? Create SlackSink : ILogSink, add it to the list. Done. Not a single existing file touched."

The Fan-Out: One Entry, Many Destinations

A single log entry enters the Logger, passes the level filter, then gets broadcast to every registered sink. Each sink writes it to its own destination independently.

LogEntry (Error | "Timeout") → Logger (level >= min? → foreach sink…) → ConsoleSink (Console.WriteLine()) · FileSink (File.AppendAllText()) · DatabaseSink (INSERT INTO logs) — all implement ILogSink.

What Would You Do?

SwitchOnType.cs
public void Log(LogLevel level, string message)
{
    if (level < MinimumLevel) return;
    var entry = new LogEntry(level, message, DateTime.UtcNow);

    switch (_destinationType)
    {
        case "console":
            Console.WriteLine($"[{entry.Level}] {entry.Message}");
            break;
        case "file":
            File.AppendAllText(_path, $"[{entry.Level}] {entry.Message}\n");
            break;
        case "database":
            // ADO.NET insert...
            break;
        // Adding Slack? Add another case here. And here. And here.
    }
}
Consequence: Only supports ONE destination at a time. Console OR file, not both. And every new destination means modifying this switch. At 6 destinations, this method is 50+ lines of unrelated I/O code jammed together.
IfElseChain.cs
public void Log(LogLevel level, string message)
{
    if (level < MinimumLevel) return;
    var entry = new LogEntry(level, message, DateTime.UtcNow);

    if (_useConsole)
        Console.WriteLine($"[{entry.Level}] {entry.Message}");
    if (_useFile)
        File.AppendAllText(_path, $"[{entry.Level}] {entry.Message}\n");
    if (_useDatabase)
        ExecuteDbInsert(entry);
    if (_useSlack)
        PostToSlack(entry);
    // Every new destination = new boolean + new if-block
}
Consequence: Supports multiple destinations (progress!), but each new one requires a boolean flag AND an if-block in the Logger. The Logger becomes a dumping ground for every I/O library in the project. Testing one sink means constructing the entire Logger with all its flags.
StrategyPattern.cs
public interface ILogSink
{
    void Write(LogEntry entry);
}

public class Logger
{
    private readonly List<ILogSink> _sinks;
    public LogLevel MinimumLevel { get; set; } = LogLevel.Debug;

    public Logger(params ILogSink[] sinks)
        => _sinks = new List<ILogSink>(sinks);

    public void Log(LogLevel level, string message)
    {
        if (level < MinimumLevel) return;
        var entry = new LogEntry(level, message, DateTime.UtcNow);
        foreach (var sink in _sinks)
            sink.Write(entry);  // Fan-out: every sink gets the entry
    }
}

// Adding Slack = one new class, zero changes to Logger:
// public class SlackSink : ILogSink { ... }
This is the winner. The Logger doesn't know what a console, file, or database is. It just knows ILogSink. Multiple sinks at once via a list. Adding Slack = one new class. Removing database = remove it from the list. Each sink is testable in isolation. The Logger never changes.
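To make "one new class" concrete, here is a sketch of what that SlackSink might look like. The webhook URL and the {"text": ...} payload follow Slack's incoming-webhook convention, but treat those details as assumptions; the point is that the class implements ILogSink and nothing else in the system changes. The earlier types are repeated so the sketch stands alone:

```csharp
using System;
using System.Net.Http;
using System.Text;

// Repeated from earlier levels so this sketch compiles on its own:
public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public record LogEntry(LogLevel Level, string Message, DateTime Timestamp);
public interface ILogSink { void Write(LogEntry entry); }

// Hypothetical Slack sink: posts each entry to an incoming webhook.
public class SlackSink : ILogSink
{
    private static readonly HttpClient Http = new HttpClient();
    private readonly string _webhookUrl;

    public SlackSink(string webhookUrl) => _webhookUrl = webhookUrl;

    public void Write(LogEntry entry)
    {
        var payload = $"{{\"text\":\"[{entry.Level}] {entry.Message}\"}}";
        var content = new StringContent(payload, Encoding.UTF8, "application/json");
        // Synchronous for the sketch; a production sink would queue,
        // retry, and never let a Slack outage take down the app.
        Http.PostAsync(_webhookUrl, content).GetAwaiter().GetResult();
    }
}
```

Registering it is the only other step: `new Logger(new ConsoleSink(), new SlackSink(url))`. The Logger's code never mentions Slack.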

Plug & Play: Configure Any Combination

Different environments, different sinks. The Logger's code is identical in all three — only the List<ILogSink> changes.

  • Development: ConsoleSink — MinimumLevel = Debug. See everything, fast feedback loop.
  • Staging: ConsoleSink + FileSink — MinimumLevel = Info. Console + persistent file.
  • Production: FileSink + DatabaseSink + SlackSink — MinimumLevel = Warning. File + DB + Slack alerts.
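Those environments differ only in construction. A sketch of the composition code follows, with the Logger and the two sinks defined in this level repeated so it stands alone; the production line is commented out because DatabaseSink and SlackSink constructors aren't defined in this level and are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Repeated from this level so the sketch compiles on its own:
public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public record LogEntry(LogLevel Level, string Message, DateTime Timestamp);
public interface ILogSink { void Write(LogEntry entry); }

public class ConsoleSink : ILogSink
{
    public void Write(LogEntry e) => Console.WriteLine($"[{e.Level}] {e.Message}");
}

public class FileSink : ILogSink
{
    private readonly string _path;
    public FileSink(string path) => _path = path;
    public void Write(LogEntry e) =>
        File.AppendAllText(_path, $"[{e.Level}] {e.Message}" + Environment.NewLine);
}

public class Logger
{
    private readonly List<ILogSink> _sinks;
    public LogLevel MinimumLevel { get; set; } = LogLevel.Debug;
    public Logger(params ILogSink[] sinks) => _sinks = new List<ILogSink>(sinks);

    public void Log(LogLevel level, string message)
    {
        if (level < MinimumLevel) return;
        var entry = new LogEntry(level, message, DateTime.UtcNow);
        foreach (var sink in _sinks) sink.Write(entry);
    }
}

// The only thing that changes per environment is this wiring:
public static class CompositionRoot
{
    public static Logger ForDevelopment() =>
        new Logger(new ConsoleSink()) { MinimumLevel = LogLevel.Debug };

    public static Logger ForStaging() =>
        new Logger(new ConsoleSink(), new FileSink("app.log"))
        { MinimumLevel = LogLevel.Info };

    // Production would add the database and Slack sinks (hypothetical ctors):
    // new Logger(new FileSink("app.log"), new DatabaseSink(conn), new SlackSink(url))
    //     { MinimumLevel = LogLevel.Warning };
}
```

Everything inside Logger is identical across all three; only the sink list and the threshold passed at construction time vary.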

The Solution

One interface, three implementations, and a Logger that loops through them. The Logger never mentions Console, File, or Database by name — it only knows ILogSink.

ILogSink.cs — the contract every destination must follow
public interface ILogSink
{
    void Write(LogEntry entry);
}

// That's it. One method. Every sink — Console, File, Database,
// Slack, Email, whatever comes next — just implements Write().
// The Logger never knows or cares what's on the other side.

The simplest possible interface. One method, one parameter. This is the seamA seam is a point in your code where you can change behavior without modifying the code around it. ILogSink is a seam — you can swap ConsoleSink for SlackSink without touching the Logger. Seams make code flexible and testable. that makes the entire system extensible.

ConsoleSink.cs — writes to the terminal
public class ConsoleSink : ILogSink
{
    public void Write(LogEntry entry)
    {
        var color = entry.Level switch
        {
            LogLevel.Fatal   => ConsoleColor.Red,
            LogLevel.Error   => ConsoleColor.Red,
            LogLevel.Warning => ConsoleColor.Yellow,
            LogLevel.Info    => ConsoleColor.Cyan,
            _                => ConsoleColor.Gray
        };

        Console.ForegroundColor = color;
        Console.WriteLine($"[{entry.Timestamp:HH:mm:ss}] [{entry.Level}] {entry.Message}");
        Console.ResetColor();
    }
}

Color-coded output for the terminal. Each sink owns its formatting — the Logger doesn't dictate how the message looks.

FileSink.cs — writes to a persistent file
public class FileSink : ILogSink
{
    private readonly string _path;

    public FileSink(string path) => _path = path;

    public void Write(LogEntry entry)
    {
        var line = $"[{entry.Timestamp:yyyy-MM-dd HH:mm:ss}] [{entry.Level}] {entry.Message}";
        File.AppendAllText(_path, line + Environment.NewLine);
    }
}

File output includes the full date because log files persist across days. Each sink formats for its own medium.

Logger.cs — Level 2 with multi-sink fan-out
public class Logger
{
    private readonly List<ILogSink> _sinks;                       // Level 2: multiple sinks
    public LogLevel MinimumLevel { get; set; } = LogLevel.Debug;   // Level 1: filtering

    public Logger(params ILogSink[] sinks)
        => _sinks = new List<ILogSink>(sinks);

    public void Log(LogLevel level, string message)
    {
        if (level < MinimumLevel) return;                          // Level 1: filter
        var entry = new LogEntry(level, message, DateTime.UtcNow);
        foreach (var sink in _sinks)                                // Level 2: fan-out
            sink.Write(entry);
    }

    // Convenience methods (unchanged from Level 1)
    public void Debug(string msg)   => Log(LogLevel.Debug, msg);
    public void Info(string msg)    => Log(LogLevel.Info, msg);
    public void Warning(string msg) => Log(LogLevel.Warning, msg);
    public void Error(string msg)   => Log(LogLevel.Error, msg);
    public void Fatal(string msg)   => Log(LogLevel.Fatal, msg);
}

The only change from Level 1: Console.WriteLine() became foreach sink.Write(entry). The Logger no longer knows WHERE logs go. This is the Strategy patternThe Strategy pattern lets you define a family of algorithms (sinks), put each in its own class, and make them interchangeable. The Logger picks strategies at construction time and delegates the "write" work to them. in action.

Before vs. After: Adding Slack

The real test: what happens when requirements change? Left: Slack means cracking open Logger. Right: Slack means creating one file.

❌ Hardcoded in Logger — Logger.Log() already contains Console.WriteLine(...); File.AppendAllText(...); ExecuteDbInsert(...); and adding Slack means modifying it again: PostToSlack(entry); plus a new bool _useSlack. Result: Logger.cs modified + 2 new fields.
✅ Strategy (ILogSink) — Logger.Log() is UNCHANGED: foreach (var sink in _sinks) sink.Write(entry);. Adding Slack = SlackSink : ILogSink (new!), just one new file. ✔ 0 existing files changed.

Growing Diagram — Level 2

The class diagram expands: ILogSink interface appears, concrete sinks implement it, and the Logger holds a list of them.

Logger (MinimumLevel | Log() | List<ILogSink> _sinks) · «enum» LogLevel (Debug..Fatal) · «record» LogEntry (Level | Message | Time) · «interface» ILogSink (Write(LogEntry)), implemented by ConsoleSink (Console.WriteLine), FileSink (File.AppendAllText), DatabaseSink (INSERT INTO logs). Bright border = new in Level 2; muted = from earlier levels.

Before / After Your Brain

Before This Level

You see "log to console AND file" and think "add more lines inside the Log method."

After This Level

You smell "multiple algorithms, same interface" and instinctively reach for Strategy — one interface, one class per destination, a list for fan-out.

Smell → Pattern: Multiple Algorithms, Same Interface — When you have 3+ ways to do the same thing and the choice can change at runtime → Strategy pattern. Extract the varying behavior behind an interface. Need multiple strategies simultaneously? Hold them in a List<T> for fan-out.
Transfer: Same technique in a Payment System: IPaymentProcessor with CreditCardProcessor, PayPalProcessor, CryptoProcessor. The checkout page doesn't know which one it's calling — it just calls Process(order). Adding Apple Pay = one new class, zero changes to checkout.
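A sketch of that payment-system shape follows. Every name in it (Order, IPaymentProcessor, the processors, Checkout) is invented for illustration; the structure mirrors ILogSink exactly:

```csharp
using System;

public record Order(string Id, decimal Amount);

// Same Strategy seam as ILogSink: one method, one parameter
public interface IPaymentProcessor
{
    void Process(Order order);
}

public class CreditCardProcessor : IPaymentProcessor
{
    public void Process(Order order) =>
        Console.WriteLine($"Charging card for order {order.Id}: {order.Amount}");
}

public class PayPalProcessor : IPaymentProcessor
{
    public void Process(Order order) =>
        Console.WriteLine($"Redirecting order {order.Id} to PayPal");
}

// Checkout never names a concrete processor. Adding Apple Pay is one
// new class implementing IPaymentProcessor; this class never changes.
public class Checkout
{
    private readonly IPaymentProcessor _processor;
    public Checkout(IPaymentProcessor processor) => _processor = processor;
    public void Complete(Order order) => _processor.Process(order);
}
```

Swapping `new Checkout(new CreditCardProcessor())` for `new Checkout(new PayPalProcessor())` changes behavior without touching Checkout, just as swapping sinks never touches the Logger.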
Section 6 🟡 MEDIUM

Level 3 — Enriching Logs Without Touching Sinks

New Constraint: "Every log line needs a timestamp, a thread ID, and the caller's method name. But you must NOT modify ConsoleSink, FileSink, or DatabaseSink to add these."
What Breaks?

Our Level 2 sinks work perfectly — but their output is bare. A line like [Error] Payment failed doesn't tell you when it happened, which thread was running, or which method called it. You need this context to debug production issues. The naive fix? Open every sink and add timestamp/thread/caller logic. But that means copy-pasting the same enrichment code into ConsoleSink, FileSink, DatabaseSink, and every future sink. That's code duplication across classes — the worst kind.

Imagine a Russian nesting dollMatryoshka dolls — each doll wraps around a smaller one. Open the outer doll, there's another inside. In code, a Decorator wraps around an existing object, adding behavior before/after delegating to the inner object. You can stack as many layers as you want.. The innermost doll is your FileSink — it knows how to write to a file. You wrap it in a "timestamp doll" that prepends a timestamp before passing the entry inward. Wrap that in a "thread ID doll" that adds the thread number. Each layer adds one piece of context, then hands off to the layer beneath. The sink at the center has no idea it's being wrapped. That's the Decorator patternThe Decorator pattern attaches new behavior to an object by wrapping it. The wrapper implements the same interface as the wrapped object, so the caller can't tell the difference. You can stack multiple decorators like layers of an onion — each one adds something and delegates the rest..

Think First #5

You need to add timestamp, thread ID, and caller info to log messages — but WITHOUT modifying any existing sink class. Design it so you can mix and match enrichments freely: timestamp only, timestamp + thread ID, all three, or none. And adding a new enrichment (like machine name) should mean writing ONE new class.

60 seconds — think wrapping, not modifying.

Create a SinkDecorator that implements ILogSink and holds a reference to another ILogSink. Each decorator enriches the LogEntry message, then passes it to the inner sink. Stack them: ThreadIdDecorator(TimestampDecorator(FileSink)). Each layer adds its context and delegates. The FileSink at the center never changes.

Your Internal Monologue

"I could add $"[{DateTime.UtcNow}] " + entry.Message inside ConsoleSink. And inside FileSink. And inside DatabaseSink. That's three places with the same timestamp logic. What happens when the team wants thread ID too? Three more copy-paste edits. This is heading toward a maintenance disaster."

"Wait — what if I wrap the sink instead of modifying it? Something that takes in an ILogSink, enriches the message, then calls the inner sink's Write(). That wrapper would ALSO implement ILogSink, so from the outside it looks identical. The caller doesn't know whether it's talking to a raw sink or a wrapped one."

"And if the wrapper is also an ILogSink, I can wrap a wrapper! ThreadIdDecorator wrapping TimestampDecorator wrapping FileSink. Each layer adds one thing. Like Russian dolls. ...That's the Decorator patternA structural design pattern where you wrap an object with another object that has the same interface. The wrapper adds behavior before/after delegating to the wrapped object. Since both share the same interface, decorators are stackable — you can nest as many as you want.!"

The Russian Doll: Decorators Wrapping a Sink

Each decorator wraps the next. The outermost one receives the log entry first, adds its context, and passes it inward. The innermost (FileSink) just writes.

ThreadIdDecorator prepends [Thread-7] → passes inward → TimestampDecorator prepends [14:32:05] → passes inward → FileSink writes to disk (never modified). Output: [14:32:05] [Thread-7] [Error] Payment failed — each layer prepends, so the innermost decorator's prefix ends up leftmost.

The Delegation Chain: Step by Step

Follow a single log entry as it passes through the decorator stack. Each layer enriches the message, then delegates to the inner sink.

LogEntry "Payment failed" → ThreadIdDecorator adds [Thread-7] → "[Thread-7] Payment failed" → TimestampDecorator adds [14:32:05] → "[14:32:05] [Thread-7] ..." → FileSink writes to disk. Final output in file: [14:32:05] [Thread-7] [Error] Payment failed. ⭐ STAR pattern: Stack → Transform → Attach → Relay — each decorator does ONE job, then relays to the next layer.

What Would You Do?

ModifyEverySink.cs
public class ConsoleSink : ILogSink
{
    public void Write(LogEntry entry)
    {
        // Enrichment copy-pasted into EVERY sink:
        var enriched = $"[{DateTime.UtcNow:HH:mm:ss}] " +
                       $"[Thread-{Thread.CurrentThread.ManagedThreadId}] " +
                       $"[{entry.Level}] {entry.Message}";
        Console.WriteLine(enriched);
    }
}

// Same 3 lines duplicated in FileSink, DatabaseSink, SlackSink...
// Change the timestamp format? Modify ALL of them.
Consequence: Same enrichment logic copy-pasted into every sink. Change timestamp format from HH:mm:ss to ISO 8601? Edit 4 files. Add caller info? Edit 4 files again. This is DRY violation"Don't Repeat Yourself" — every piece of knowledge should have a single, unambiguous representation in the system. When the same timestamp logic lives in 4 sinks, you have 4 places to update and 4 chances to make a mistake. across classes.
BaseClassApproach.cs
public abstract class EnrichedSink : ILogSink
{
    public void Write(LogEntry entry)
    {
        var enriched = $"[{DateTime.UtcNow:HH:mm:ss}] " +
                       $"[Thread-{Thread.CurrentThread.ManagedThreadId}] " +
                       $"[{entry.Level}] {entry.Message}";
        var newEntry = entry with { Message = enriched };
        WriteCore(newEntry);
    }

    protected abstract void WriteCore(LogEntry entry);
}

// ConsoleSink : EnrichedSink, FileSink : EnrichedSink...
// But what if you want timestamp WITHOUT thread ID?
// Or thread ID WITHOUT timestamp?
// Can't mix and match. All or nothing.
Consequence: Eliminates duplication (good!), but it's rigid. Every sink gets ALL enrichments or NONE. Want timestamp but no thread ID? Can't do it. Want to add caller info to only the file sink? Impossible without more base classes. Inheritance creates a rigid hierarchy that can't be mixed at runtime.
DecoratorPattern.cs
// Base: wraps any ILogSink
public abstract class SinkDecorator : ILogSink
{
    protected readonly ILogSink _inner;
    protected SinkDecorator(ILogSink inner) => _inner = inner;
    public abstract void Write(LogEntry entry);
}

// Each decorator adds ONE thing, then delegates
public class TimestampDecorator : SinkDecorator
{
    public TimestampDecorator(ILogSink inner) : base(inner) { }

    public override void Write(LogEntry entry)
    {
        var enriched = entry with
        {
            Message = $"[{DateTime.UtcNow:HH:mm:ss}] {entry.Message}"
        };
        _inner.Write(enriched);  // pass inward
    }
}

// Stack them: ThreadId(Timestamp(FileSink))
// Mix freely: Timestamp(ConsoleSink) — no thread ID
// Add CallerInfo? One new class. Zero existing changes.
This is the winner. Each decorator does ONE job. Stack them in any order, in any combination. TimestampDecorator(FileSink) for simple file logging. CallerInfo(ThreadId(Timestamp(FileSink))) for full debug context. Adding a new enrichment = one new class. No existing sink or decorator changes. Ever.

Mix & Match: Any Combination Works

Because every decorator is an ILogSink, you can compose them like LEGO bricks. Different sinks can have different enrichments.

  • Console (dev): TimestampDecorator → ConsoleSink. Output: [14:32:05] [Error] msg. 1 decorator, lightweight.
  • File (production): ThreadIdDecorator → TimestampDecorator → FileSink. Output: [14:32:05] [Thread-7] [Error] Payment failed.
  • Database (structured): DatabaseSink, no wrappers needed — DB columns handle timestamp/thread natively.
Same ILogSink interface everywhere — decorators are invisible to the Logger.
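Composing stacks like these is just nested constructor calls. Here is a sketch with the level's types repeated so it stands alone; ConsoleSink stands in for FileSink to keep the example free of file I/O, and DatabaseSink is omitted since its definition isn't shown:

```csharp
using System;
using System.Threading;

// Repeated from this level so the sketch compiles on its own:
public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public record LogEntry(LogLevel Level, string Message, DateTime Timestamp);
public interface ILogSink { void Write(LogEntry entry); }

public class ConsoleSink : ILogSink
{
    public void Write(LogEntry e) => Console.WriteLine($"[{e.Level}] {e.Message}");
}

public abstract class SinkDecorator : ILogSink
{
    protected readonly ILogSink _inner;
    protected SinkDecorator(ILogSink inner) => _inner = inner;
    public abstract void Write(LogEntry entry);
}

public class TimestampDecorator : SinkDecorator
{
    public TimestampDecorator(ILogSink inner) : base(inner) { }
    public override void Write(LogEntry e) =>
        _inner.Write(e with { Message = $"[{DateTime.UtcNow:HH:mm:ss}] {e.Message}" });
}

public class ThreadIdDecorator : SinkDecorator
{
    public ThreadIdDecorator(ILogSink inner) : base(inner) { }
    public override void Write(LogEntry e) =>
        _inner.Write(e with { Message = $"[Thread-{Thread.CurrentThread.ManagedThreadId}] {e.Message}" });
}

public static class Stacks
{
    // Dev: one lightweight wrapper
    public static ILogSink Dev() => new TimestampDecorator(new ConsoleSink());

    // Production stack: ThreadId runs first, Timestamp second. Each
    // layer PREPENDS, so the message reaches the sink as
    // "[14:32:05] [Thread-7] Payment failed" (inner prefix leftmost).
    public static ILogSink Prod() =>
        new ThreadIdDecorator(new TimestampDecorator(new ConsoleSink()));
}
```

Nesting order controls prefix order: to get the thread ID leftmost instead, you would nest the other way, `Timestamp(ThreadId(sink))`, without touching any class.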

The Solution

A base SinkDecorator class, plus one decorator per enrichment. Each wraps an inner ILogSink, enhances the entry, and delegates. Stack freely.

SinkDecorator.cs — the base wrapper
public abstract class SinkDecorator : ILogSink
{
    protected readonly ILogSink _inner;

    protected SinkDecorator(ILogSink inner)
        => _inner = inner ?? throw new ArgumentNullException(nameof(inner));

    public abstract void Write(LogEntry entry);
}

// Key insight: SinkDecorator IS an ILogSink (implements the interface)
// AND it HAS an ILogSink (holds a reference to the inner sink).
// This dual identity is what makes stacking possible.

The base class does almost nothing — it just holds a reference to the inner sink. Each concrete decorator overrides Write() to add its enrichment, then calls _inner.Write() to pass the entry inward. The IS-A / HAS-A dualityA SinkDecorator IS-A ILogSink (so the Logger can use it). It also HAS-A ILogSink (the thing it wraps). This dual relationship is the core of the Decorator pattern — the wrapper looks identical to the thing it wraps from the outside. is the heart of the Decorator pattern.

TimestampDecorator.cs — adds a timestamp
public class TimestampDecorator : SinkDecorator
{
    public TimestampDecorator(ILogSink inner) : base(inner) { }

    public override void Write(LogEntry entry)
    {
        // Enrich: prepend timestamp to the message
        var enriched = entry with
        {
            Message = $"[{DateTime.UtcNow:HH:mm:ss}] {entry.Message}"
        };

        // Delegate: pass enriched entry to whatever's inside
        _inner.Write(enriched);
    }
}

The with expression creates a copy of the record with only the Message changed — the original entry is untouched (immutability!). Then it calls _inner.Write(). The inner might be a real sink (FileSink) or another decorator (ThreadIdDecorator). This decorator doesn't know and doesn't care.
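That copy-don't-mutate behavior is easy to see in isolation. A minimal standalone sketch (the LogEntry shape here is simplified to two fields for the demo):

```csharp
using System;

var original = new LogEntry(LogLevel.Error, "Payment failed");

// `with` clones the record, changing only Message
var enriched = original with { Message = $"[14:32:05] {original.Message}" };

Console.WriteLine(original.Message);                    // Payment failed  (untouched)
Console.WriteLine(enriched.Message);                    // [14:32:05] Payment failed
Console.WriteLine(ReferenceEquals(original, enriched)); // False: two separate objects

public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public sealed record LogEntry(LogLevel Level, string Message);
```

Because each decorator works on its own copy, a ThreadIdDecorator further down the chain never sees a half-mutated entry, and two sinks can safely receive different enrichments of the same original.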

ThreadIdDecorator.cs — adds thread identification
public class ThreadIdDecorator : SinkDecorator
{
    public ThreadIdDecorator(ILogSink inner) : base(inner) { }

    public override void Write(LogEntry entry)
    {
        var threadId = Environment.CurrentManagedThreadId;
        var enriched = entry with
        {
            Message = $"[Thread-{threadId}] {entry.Message}"
        };
        _inner.Write(enriched);
    }
}

// Want CallerInfoDecorator? Same pattern:
// public class CallerInfoDecorator : SinkDecorator { ... }
// ONE new file. Zero existing changes. Ever.

Every decorator follows the same three-step recipe: (1) enrich the entry, (2) create a new copy with with, (3) call _inner.Write(). This predictable structure makes decorators easy to write, easy to read, and easy to test in isolation.
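To see the recipe generalize, here is a hypothetical MachineNameDecorator (the name is ours, not from the text; Serilog's WithMachineName enricher does something similar). The chapter's types are re-declared minimally so the sketch runs standalone:

```csharp
using System;
using System.Collections.Generic;

var captured = new List<LogEntry>();
ILogSink sink = new MachineNameDecorator(new ListSink(captured));
sink.Write(new LogEntry(LogLevel.Info, "App started"));

Console.WriteLine(captured[0].Message);   // e.g. "[my-host] App started"

public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public sealed record LogEntry(LogLevel Level, string Message);
public interface ILogSink { void Write(LogEntry entry); }

public abstract class SinkDecorator : ILogSink
{
    protected readonly ILogSink _inner;
    protected SinkDecorator(ILogSink inner) => _inner = inner;
    public abstract void Write(LogEntry entry);
}

// Hypothetical decorator: the same 3-step recipe as Timestamp/ThreadId
public sealed class MachineNameDecorator : SinkDecorator
{
    public MachineNameDecorator(ILogSink inner) : base(inner) { }

    public override void Write(LogEntry entry)
    {
        // (1) enrich, (2) copy with `with`, (3) delegate inward
        var enriched = entry with
        {
            Message = $"[{Environment.MachineName}] {entry.Message}"
        };
        _inner.Write(enriched);
    }
}

// Tiny in-memory sink used only to observe the result
public sealed class ListSink(List<LogEntry> target) : ILogSink
{
    public void Write(LogEntry entry) => target.Add(entry);
}
```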

Program.cs — composing the decorator stack
// Dev: console with timestamp only
ILogSink devSink = new TimestampDecorator(
    new ConsoleSink()
);

// Production: file with full enrichment
ILogSink prodFileSink = new ThreadIdDecorator(
    new TimestampDecorator(
        new FileSink("app.log")
    )
);

// DB: no decorators needed (columns handle metadata)
ILogSink dbSink = new DatabaseSink(connectionString);

// Wire them all into the Logger
var logger = new Logger(devSink, prodFileSink, dbSink);

// This one call fans out to 3 sinks, 2 of them decorated:
logger.Error("Payment failed");
// Console: [14:32:05] [Error] Payment failed
// File:    [Thread-7] [14:32:05] [Error] Payment failed
// DB:      INSERT INTO logs (Level, Message, ...)

Look at the composition: new ThreadIdDecorator(new TimestampDecorator(new FileSink("app.log"))). Read it inside-out: start with FileSink, wrap it in Timestamp, wrap that in ThreadId. The Logger sees three ILogSink objects — it has no idea some are decorated and some aren't. That's the beauty: decoration is invisible to the consumer.

Inheritance vs. Decorator: The Flexibility Gap

With a base class, every sink gets the same enrichments. With decorators, each sink gets exactly what it needs — no more, no less.

❌ Base Class (rigid): EnrichedSink (timestamp + threadId) → ConsoleSink, FileSink. ALL sinks get the SAME enrichments. Want timestamp-only for Console? You need ANOTHER base class. ✘ N enrichments × M sinks = explosion.
✅ Decorator (flexible): each sink gets exactly what it needs. Timestamp(ConsoleSink). ThreadId(Timestamp(FileSink)). DatabaseSink — no wrappers. ✔ Per-sink customization. ✔ Add CallerInfo? One class, zero changes. ✔ N + M classes total (not N × M).

Growing Diagram — Level 3

SinkDecorator joins the diagram. It implements ILogSink AND holds a reference to one — the IS-A / HAS-A duality. Concrete decorators extend it.

Logger holds a List<ILogSink>. «interface» ILogSink: Write(LogEntry). «abstract» SinkDecorator: _inner: ILogSink | Write() — wraps an ILogSink. Concrete sinks: ConsoleSink, FileSink. Concrete decorators: TimestampDecorator, ThreadIdDecorator (CallerInfoDecorator? shown dashed as a future extension). Bright border = new in Level 3 | dashed box = future extension. SinkDecorator: implements ILogSink (IS-A) + holds ILogSink (HAS-A).

Before / After Your Brain

Before This Level

You see "add behavior to existing objects" and think "modify the class" or "create a base class with the shared logic."

After This Level

You smell "wrap and extend" and instinctively reach for the Decorator pattern — same interface, wraps the original, adds behavior without touching it. Stackable like Russian dolls.

Smell → Pattern: Wrap and Extend — When you need to add behavior to an object WITHOUT modifying its class, and you want to mix-and-match those additions freely at runtime → Decorator pattern. The wrapper implements the same interface, so it's invisible to the caller. Stack them for composable enrichment.
Transfer: Same technique in an HTTP Pipeline: AuthDecorator(LoggingDecorator(CompressionDecorator(HttpHandler))). Each middlewareMiddleware in web frameworks (ASP.NET, Express.js) is literally the Decorator pattern applied to HTTP requests. Each middleware wraps the next handler, can inspect/modify the request before passing it through, and can inspect/modify the response on the way back. wraps the handler, adds its concern (auth, logging, compression), and delegates. The handler at the center has no idea it's being wrapped. Same Decorator, different domain.
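A toy version of that pipeline (all names here are illustrative, not a real framework API) shows the identical wrap-and-delegate shape:

```csharp
using System;

var pipeline = new LoggingHandler(new AuthHandler(new TerminalHandler()));
Console.WriteLine(pipeline.Handle("GET /orders"));
// prints: --> GET /orders, then <-- 200 OK for GET /orders, then 200 OK for GET /orders

public interface IHandler { string Handle(string request); }

// The "real" handler at the center, unaware of any wrapping
public sealed class TerminalHandler : IHandler
{
    public string Handle(string request) => $"200 OK for {request}";
}

// Each middleware wraps the next handler and delegates: Decorator again
public sealed class AuthHandler(IHandler next) : IHandler
{
    public string Handle(string request) =>
        request.Contains("GET") ? next.Handle(request) : "401 Unauthorized";
}

public sealed class LoggingHandler(IHandler next) : IHandler
{
    public string Handle(string request)
    {
        Console.WriteLine($"--> {request}");            // inspect on the way in
        var response = next.Handle(request);
        Console.WriteLine($"<-- {response}");           // inspect on the way out
        return response;
    }
}
```

Note that AuthHandler can short-circuit (return 401 without delegating), which is exactly what real middleware does: a decorator controls whether the inner object is called at all.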
Section 7

Level 4 — One Logger to Rule Them All 🟡 MEDIUM

New Constraint: "The entire application — every controller, every service, every background job — must share ONE logger instance. Two loggers means two files fighting over the same path. Thread safety is non-negotiable."
What breaks: Right now anyone can write new Logger(...) wherever they please. Create two loggers pointed at the same file? One of them gets an IOException because the OS locks the file for the first writer. Even if you dodge that, two loggers each maintain their own decorator chain — so filtering rules applied to one don't affect the other. Some parts of the app log at DEBUG while others silently swallow everything. There's no single source of truth for "the logger."

You need exactly one logger that the entire app shares. Three obvious approaches: a static class, a global variable, or Dependency InjectionInstead of a class creating its own dependencies internally, they're passed in from outside — usually through the constructor. This makes swapping implementations (real vs. test) trivial and keeps classes loosely coupled.. Each has trade-offs around testability, thread safety, and flexibility. Which would you pick, and why?

Your inner voice:

"I need exactly one instance. The simplest thing is a static class — you literally can't instantiate it. But then I can't swap it out in tests, and I can't put it behind an interface."

"A global static field like Logger.Instance is the classic Singleton patternA design pattern that ensures a class has only one instance and provides a global point of access to it. Think of it like a company having exactly one CEO — everyone in the company can reach the CEO, and there's never a second one.. It works, but every class that uses Logger.Instance is silently coupled to the concrete type. Hard to test, hard to replace."

"The modern approach: register the logger as a singleton lifetime in the DI container. The container creates one instance, hands the same one to everyone who asks, and I still program against an interface. Best of all worlds."

Three Ways to Get "One Logger"

Singleton Approaches Compared
Comparison of static class, classic singleton, and DI singleton:
Static Class (static class Logger): cannot instantiate. ✗ No interface possible. ✗ Cannot mock in tests. ✗ No decorator chain. ✓ Thread-safe (if coded). ✓ Simple syntax. Verdict: Rigid.
Classic Singleton (Logger.Instance): private ctor + static field. ✓ Can implement interface. ⚠ Hard to swap in tests. ✓ Supports decorators. ⚠ Must add lock() manually. ✗ Hidden dependency. Verdict: Okay-ish.
DI Singleton (AddSingleton<ILogger>) ✅: container manages lifetime. ✓ Programs to interface. ✓ Swap in tests easily. ✓ Supports decorators. ✓ Thread-safe by default. ✓ Explicit dependency. Verdict: Modern.
DI gives you singleton behavior + testability + interface-based design.

What Would You Do?

Three developers, three ways to guarantee "one logger." Only one survives the "now write a unit test" challenge.

The idea: Make the logger a static class. No one can create a second instance because you can't instantiate a static class at all.

StaticLogger.cs — static class approach
public static class Logger
{
    private static readonly List<ILogSink> _sinks = [];
    private static readonly object _lock = new();

    public static void AddSink(ILogSink sink) { lock (_lock) _sinks.Add(sink); }

    public static void Log(LogLevel level, string message)
    {
        lock (_lock)
        {
            foreach (var sink in _sinks)
                sink.Write(level, message);
        }
    }
}

// Usage — no instance, just static calls
Logger.Log(LogLevel.Info, "App started");

Verdict: Guarantees one logger — but at what cost? You can't put a static class behind an ILogger interface. You can't mock it in unit tests. You can't wrap it with decorators (Timestamp, ThreadId) because decorators need an instance to wrap. Every class that calls Logger.Log() is silently glued to this concrete implementation. It's the simplest option and the hardest to evolve.

The idea: Use the classic GoF Singleton — private constructor, static Instance property, lazy initialization.

ClassicSingleton.cs — GoF singleton
public sealed class Logger : ILogger
{
    private static readonly Lazy<Logger> _instance = new(() => new Logger());
    public static Logger Instance => _instance.Value;

    private readonly List<ILogSink> _sinks = [];
    private readonly object _lock = new();

    private Logger() { }   // private ctor — no one else can create

    public void AddSink(ILogSink sink) { lock (_lock) _sinks.Add(sink); }

    public void Log(LogLevel level, string message)
    {
        lock (_lock)
        {
            foreach (var sink in _sinks)
                sink.Write(level, message);
        }
    }
}

// Usage — global access
Logger.Instance.Log(LogLevel.Info, "App started");

Verdict: Better — it implements ILogger, so decorators work. But every class still reaches for the global Logger.Instance. That's a hidden dependencyA hidden dependency is when a class uses something without declaring it. If you read the constructor, you'd never know it needs a logger — the dependency is buried inside method bodies. This makes the class harder to test and harder to understand.: reading a class's constructor won't tell you it uses a logger. Tests can't easily swap in a fake logger because the class doesn't accept one through its constructor — it grabs the global singleton internally.

The idea: Register ILogger with a singleton lifetime in the DI container. The container creates exactly one instance and hands the same one to every class that asks for ILogger.

DiSingleton.cs — DI-managed singleton ✅
// Program.cs — register once, container manages lifetime
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSingleton<ILogger>(sp =>
{
    var logger = new Logger(LogLevel.Debug);
    logger.AddSink(new ConsoleSink());
    logger.AddSink(new FileSink("app.log"));
    return new TimestampDecorator(
           new ThreadIdDecorator(logger));
});

// Any class just asks for ILogger in its constructor
public class OrderService(ILogger logger)
{
    public void PlaceOrder(Order order)
    {
        logger.Log(LogLevel.Info, $"Order {order.Id} placed");
    }
}

Verdict: This is the winner. One instance, guaranteed by the container. Every class declares its dependency in the constructor — no hidden globals. In tests, pass a FakeLogger or NullLogger directly. The decorator chain (Timestamp, ThreadId) is configured at startup and shared by everyone. Adding a new sink or decorator is a one-line change in Program.cs, not scattered across the codebase.

The Solution — Thread-Safe Logger + DI Registration

The logger needs to be thread-safeThread-safe means multiple threads can call the same method at the same time without corrupting data. In a web app, dozens of requests hit your logger simultaneously. Without thread safety, log entries get garbled, interleaved, or lost. because in a web application, dozens of requests hit the logger simultaneously. We use lock to ensure only one thread writes at a time.

Logger.cs — thread-safe core
public sealed class Logger : ILogger
{
    private readonly List<ILogSink> _sinks = [];
    private readonly LogLevel _minLevel;
    private readonly object _lock = new();  // guards shared state

    public Logger(LogLevel minLevel) => _minLevel = minLevel;

    public void AddSink(ILogSink sink)
    {
        lock (_lock) _sinks.Add(sink);
    }

    public void Log(LogLevel level, string message)
    {
        if (level < _minLevel) return;      // filter first (cheap)

        lock (_lock)                         // then write (one at a time)
        {
            foreach (var sink in _sinks)
                sink.Write(level, message);
        }
    }
}

The lock keyword ensures that if Thread A is writing to the console and file, Thread B waits until A finishes. Without it, two threads could interleave characters into the same log line, producing garbage like [INF[ERR] App stO] artedConnection failed.

Program.cs — singleton DI registration
var builder = WebApplication.CreateBuilder(args);

// One logger instance, shared app-wide
builder.Services.AddSingleton<ILogger>(sp =>
{
    var core = new Logger(LogLevel.Debug);
    core.AddSink(new ConsoleSink());
    core.AddSink(new FileSink("logs/app.log"));

    // Wrap with decorators (from Level 3)
    ILogger decorated = core;
    decorated = new CallerInfoDecorator(decorated);
    decorated = new ThreadIdDecorator(decorated);
    decorated = new TimestampDecorator(decorated);

    return decorated;  // everyone gets THIS decorated instance
});

builder.Services.AddScoped<OrderService>();
builder.Services.AddScoped<PaymentService>();

var app = builder.Build();
app.Run();

AddSingleton tells the DI container: "Create this object once, then hand the same instance to everyone who asks for ILogger." The decorator chain is built once at startup. Every service gets the fully decorated logger without knowing about the decoration.

OrderService.cs — clean constructor injection
// The service has NO idea whether the logger is a singleton,
// a transient, or a scoped instance. It just uses ILogger.
public class OrderService(ILogger logger)
{
    public void PlaceOrder(Order order)
    {
        logger.Log(LogLevel.Info, $"Placing order {order.Id}");

        try
        {
            // ... process order ...
            logger.Log(LogLevel.Info, $"Order {order.Id} completed");
        }
        catch (Exception ex)
        {
            logger.Log(LogLevel.Error, $"Order {order.Id} failed: {ex.Message}");
            throw;
        }
    }
}

Notice: zero mention of singletons, statics, or Logger.Instance. The service asks for ILogger in its constructor and uses it. Whether the container gives a real logger, a NullLoggerA logger that implements ILogger but does nothing — it silently swallows every message. Useful in unit tests when you want to test business logic without being distracted by log output., or a test spy is decided at composition time, not here.
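For reference, a NullLogger is only a few lines. A sketch against this chapter's ILogger shape:

```csharp
using System;

ILogger logger = new NullLogger();
logger.Log(LogLevel.Error, "this goes nowhere");   // no output, no exception
Console.WriteLine("business logic ran cleanly");

public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public interface ILogger { void Log(LogLevel level, string message); }

// Implements the interface, does nothing: ideal for noise-free unit tests
public sealed class NullLogger : ILogger
{
    public void Log(LogLevel level, string message) { /* intentionally empty */ }
}
```

In a test, `new OrderService(new NullLogger())` exercises the business logic without producing a single line of output; swap in a capturing logger only for the tests that assert on log content.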

How lock() Protects the Logger

Thread Safety — lock() sequence
Sequence: Thread A calls Log("Order placed") and acquires the lock (LOCKED). Thread B calls Log("Payment ok") and must WAIT. Thread A's write completes and the lock is released (OK); only then does Thread B's write proceed and complete.

Growing Diagram — Level 4

Architecture — after Singleton + DI
DI Container (Singleton): OrderService, PaymentService, and AuthService each receive the same «ILogger» chain — Timestamp → ThreadId → Logger → ConsoleSink + FileSink. All services share ONE logger instance.

Before This Level

Anyone can new Logger() anywhere. Two instances = file lock fights, inconsistent filtering.

After This Level

DI container guarantees ONE decorated logger. Thread-safe via lock(). Every service gets the same instance through constructor injection.

Transfer: This "DI singleton vs static singleton" decision shows up everywhere. A database connection pool? DI singleton. An HTTP client factory? DI singleton. A configuration object? DI singleton. The pattern is always the same: register once, inject everywhere, test easily.
Section 8

Level 5 — Production-Grade Edge Cases 🔴 HARD

New Constraint: "The log file can't grow forever — rotate at 10 MB. Logging must never slow down the main thread. Support structured data (not just strings). Rate-limit to prevent log floods."
What breaks: Our FileSink appends to one file forever. On a busy server that's 50 MB/day — in a month the log file is 1.5 GB and grep takes 30 seconds. The lock-based writing blocks the calling thread: if the disk is slow, your API response waits. A retry loop that logs on every failure generates thousands of identical messages per second. And every message is a flat string — no way to search "show me all errors for OrderId=42."

Each of these four problems has a clean solution. Think about them independently:

  1. File rotation: How do you split one huge file into manageable chunks?
  2. Async logging: How do you avoid making the caller wait for the disk?
  3. Structured data: How do you attach key-value pairs (OrderId, UserId) to a log entry?
  4. Rate limiting: How do you suppress the 10,000th identical message?

Four Production Problems

Edge Cases — overview
File Rotation: app.log → app-001.log → app-002.log → ... Problem: 1.5 GB in a month. Fix: split at the 10 MB boundary.
Async Logging: Caller → Queue → Disk (background thread). Problem: lock() blocks the API. Fix: Channel<T> buffer.
Structured Data: { OrderId: 42, Level: "Error" }. Problem: flat strings. Fix: LogEntry record.
Rate Limiting: 10,000 identical msgs → "suppressed 9,999". Problem: log floods. Fix: Decorator + time window.
Each problem is solved independently — composable via Decorator & Strategy. All four are What If? scenarios: "What if the file grows forever?" "What if the disk is slow?"

Your inner voice:

"The beautiful thing is — we already built the extension points. File rotation? That's a smarter FileSink (Strategy). Async? That's a new Decorator that queues entries. Rate limiting? Another Decorator. Structured data? A richer LogEntry type instead of a bare string. Each feature slots into the existing architecture without rewriting anything."

What If? Framework

Each tab is a production scenario. Read the problem, then expand the solution.

The scenario: Your app runs 24/7. The log file grows to 2 GB. Searching takes forever, disk fills up, and backup scripts choke on the size.

File Rotation Flow
app.log fills (9.8 MB... 10 MB!) → it is closed and renamed to app-2024-03-15.log (archived, 10 MB) → a fresh app.log (0 bytes) starts. Older archives (app-2024-03-14.log, app-2024-03-13.log, ...) accumulate alongside. When the current file hits the size limit: close it, rename it with the date, start fresh.
RotatingFileSink.cs — rolls over at 10 MB
public sealed class RotatingFileSink : ILogSink, IDisposable
{
    private readonly string _basePath;
    private readonly long _maxBytes;
    private StreamWriter _writer;
    private long _currentSize;

    public RotatingFileSink(string basePath, long maxBytes = 10 * 1024 * 1024)
    {
        _basePath = basePath;
        _maxBytes = maxBytes;
        _writer = new StreamWriter(basePath, append: true);
        _currentSize = new FileInfo(basePath).Length;
    }

    public void Write(LogLevel level, string message)
    {
        if (_currentSize >= _maxBytes)
            Rotate();

        _writer.WriteLine(message);
        _writer.Flush();
        _currentSize += Encoding.UTF8.GetByteCount(message) + 2;  // +2 for the newline
    }

    private void Rotate()
    {
        _writer.Dispose();
        var archive = _basePath.Replace(".log",
            $"-{DateTime.UtcNow:yyyy-MM-dd-HHmmss}.log");
        File.Move(_basePath, archive);
        _writer = new StreamWriter(_basePath, append: false);
        _currentSize = 0;
    }

    public void Dispose() => _writer.Dispose();
}

Notice this is just a new ILogSink — swap it into Program.cs where FileSink was. Zero changes to the logger, decorators, or services. The Strategy patternBecause ILogSink is an interface, any class that implements it can be plugged in. RotatingFileSink replaces FileSink without changing a single line in Logger, exactly as Strategy intended. pays off again.
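To watch the rotation actually happen, here is the same sink re-declared standalone, simplified to take a plain string and shrunk to a 300-byte limit so one rollover occurs within a 10-entry demo:

```csharp
using System;
using System.IO;
using System.Text;

var dir = Directory.CreateTempSubdirectory().FullName;
var path = Path.Combine(dir, "app.log");

using (var sink = new RotatingFileSink(path, maxBytes: 300))
{
    for (int i = 0; i < 10; i++)
        sink.Write($"entry {i} - some padding to grow the file quickly");
}

// One fresh app.log plus one dated archive
var files = Directory.GetFiles(dir, "*.log");
Console.WriteLine(files.Length);          // 2
Directory.Delete(dir, recursive: true);   // clean up the demo files

public sealed class RotatingFileSink : IDisposable
{
    private readonly string _basePath;
    private readonly long _maxBytes;
    private StreamWriter _writer;
    private long _currentSize;

    public RotatingFileSink(string basePath, long maxBytes)
    {
        _basePath = basePath;
        _maxBytes = maxBytes;
        _writer = new StreamWriter(basePath, append: true);  // creates the file
        _currentSize = new FileInfo(basePath).Length;
    }

    public void Write(string message)
    {
        if (_currentSize >= _maxBytes) Rotate();
        _writer.WriteLine(message);
        _writer.Flush();
        _currentSize += Encoding.UTF8.GetByteCount(message) + Environment.NewLine.Length;
    }

    private void Rotate()
    {
        // Close the full file, rename it with a timestamp, start a fresh one
        _writer.Dispose();
        var archive = _basePath.Replace(".log",
            $"-{DateTime.UtcNow:yyyy-MM-dd-HHmmss}.log");
        File.Move(_basePath, archive);
        _writer = new StreamWriter(_basePath, append: false);
        _currentSize = 0;
    }

    public void Dispose() => _writer.Dispose();
}
```

One caveat the demo exposes: the archive name has one-second resolution, so a sink rotating more than once per second would need a finer-grained or counter-based suffix.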

The scenario: An API endpoint takes 200ms. But lock-based logging adds 5-50ms when the disk is slow. Under heavy load, threads queue up waiting for the lock. Your "fast" API becomes sluggish because of logging.

Async Logging Flow
Caller threads: Log() returns instantly (<1 µs). Channel<LogEntry>: a bounded queue (capacity 1024). BG thread: reads the queue and writes to the sinks (Console, File, Network). The caller never waits for disk I/O — it just drops a message into the channel.
AsyncLoggerDecorator.cs — non-blocking writes
public sealed class AsyncLoggerDecorator : ILogger, IAsyncDisposable
{
    private readonly ILogger _inner;
    private readonly Channel<(LogLevel, string)> _channel;
    private readonly Task _worker;

    public AsyncLoggerDecorator(ILogger inner, int capacity = 1024)
    {
        _inner = inner;
        _channel = Channel.CreateBounded<(LogLevel, string)>(capacity);
        _worker = Task.Run(ProcessQueue);   // background consumer
    }

    public void Log(LogLevel level, string message)
    {
        // Non-blocking: if the channel is full, drop the message
        _channel.Writer.TryWrite((level, message));
    }

    private async Task ProcessQueue()
    {
        await foreach (var (level, msg) in _channel.Reader.ReadAllAsync())
        {
            _inner.Log(level, msg);   // actual write happens here
        }
    }

    public async ValueTask DisposeAsync()
    {
        _channel.Writer.Complete();
        await _worker;                // drain remaining messages
    }
}

This is a DecoratorThe Decorator pattern wraps an existing object to add new behavior. AsyncLoggerDecorator wraps any ILogger and adds "non-blocking" behavior. The inner logger doesn't change at all — it still writes synchronously — but the callers never wait for it. that wraps any ILogger. Callers write to a Channel<T> (an in-memory queue) which returns instantly. A background thread drains the queue and passes each entry to the real logger. If the queue fills up, TryWrite silently drops the message — better than blocking the API.
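The shutdown path deserves a sketch of its own: `await using` guarantees DisposeAsync runs, which completes the channel and drains every queued message before the process exits (types re-declared here so the demo runs standalone):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Channels;
using System.Threading.Tasks;

var store = new ConcurrentQueue<string>();
var inner = new CollectingLogger(store);

// `await using` ensures DisposeAsync drains the queue before we continue
await using (var logger = new AsyncLoggerDecorator(inner))
{
    for (int i = 0; i < 100; i++)
        logger.Log(LogLevel.Info, $"msg {i}");   // each call returns immediately
}   // <- by here, all 100 messages have been flushed

Console.WriteLine(store.Count);  // 100

public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public interface ILogger { void Log(LogLevel level, string message); }

public sealed class CollectingLogger(ConcurrentQueue<string> store) : ILogger
{
    public void Log(LogLevel level, string message) => store.Enqueue(message);
}

// Same shape as the AsyncLoggerDecorator above
public sealed class AsyncLoggerDecorator : ILogger, IAsyncDisposable
{
    private readonly ILogger _inner;
    private readonly Channel<(LogLevel, string)> _channel;
    private readonly Task _worker;

    public AsyncLoggerDecorator(ILogger inner, int capacity = 1024)
    {
        _inner = inner;
        _channel = Channel.CreateBounded<(LogLevel, string)>(capacity);
        _worker = Task.Run(ProcessQueue);
    }

    public void Log(LogLevel level, string message)
        => _channel.Writer.TryWrite((level, message));

    private async Task ProcessQueue()
    {
        await foreach (var (level, msg) in _channel.Reader.ReadAllAsync())
            _inner.Log(level, msg);
    }

    public async ValueTask DisposeAsync()
    {
        _channel.Writer.Complete();   // no more writes accepted
        await _worker;                // drain what's already queued
    }
}
```

Without this dispose step, an app that crashes or exits fast can lose the last few log lines, which are usually the ones you need most.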

The scenario: A bug happened for OrderId=42. You have 500,000 log lines. With flat strings, you grep "42" and get every line containing "42" — port numbers, timestamps, unrelated IDs. Needle, meet haystack.

LogEntry.cs — structured log data
// Instead of just a string, carry key-value pairs
public sealed record LogEntry
{
    public required LogLevel Level { get; init; }
    public required string Message { get; init; }
    public DateTime Timestamp { get; init; } = DateTime.UtcNow;
    public Dictionary<string, object> Properties { get; init; } = [];
}

// Usage: attach context to every log message
logger.Log(new LogEntry
{
    Level = LogLevel.Error,
    Message = "Payment failed",
    Properties =
    {
        ["OrderId"] = 42,
        ["UserId"] = "user-789",
        ["Amount"] = 99.95m,
        ["Gateway"] = "Stripe"
    }
});

// Now you can search: WHERE Properties["OrderId"] = 42
// Or export as JSON for Elasticsearch / ELK

Structured logging means each log entry is a data object, not a flat string. A JsonSink could write each entry as a JSON line, making it trivially searchable by tools like Elasticsearch, Seq, or Kibana. The flat ConsoleSink still works — it just formats the Properties into a readable string. Both sinks receive the same LogEntry; they differ only in how they render it.
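A JsonSink along those lines might look like this. It is a sketch using System.Text.Json and writing to a TextWriter for demonstration; the text doesn't prescribe an implementation:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

var buffer = new StringWriter();
var sink = new JsonSink(buffer);

sink.Write(new LogEntry
{
    Level = LogLevel.Error,
    Message = "Payment failed",
    Properties = { ["OrderId"] = 42, ["Gateway"] = "Stripe" }
});

Console.WriteLine(buffer.ToString());
// one JSON object per line, e.g. {"Level":"Error","Message":"Payment failed",...,"OrderId":42,"Gateway":"Stripe"}

public enum LogLevel { Debug, Info, Warning, Error, Fatal }

public sealed record LogEntry
{
    public required LogLevel Level { get; init; }
    public required string Message { get; init; }
    public DateTime Timestamp { get; init; } = DateTime.UtcNow;
    public Dictionary<string, object> Properties { get; init; } = [];
}

// "JSON Lines" output: one object per line, what Seq/Elasticsearch ingest
public sealed class JsonSink(TextWriter output)
{
    public void Write(LogEntry entry)
    {
        var doc = new Dictionary<string, object?>
        {
            ["Level"] = entry.Level.ToString(),
            ["Message"] = entry.Message,
            ["Timestamp"] = entry.Timestamp,
        };
        foreach (var (key, value) in entry.Properties)
            doc[key] = value;                        // flatten properties to top level

        output.WriteLine(JsonSerializer.Serialize(doc));
    }
}
```

Flattening Properties to the top level is a design choice: it makes `OrderId` a first-class field in Elasticsearch rather than a nested object, at the cost of possible name collisions with Level/Message/Timestamp.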

The scenario: A database goes down. Every request logs "DB connection failed." That's 10,000 identical messages per minute. Your log file fills up with noise, rotation kicks in every few seconds, and the useful entries are buried.

RateLimitDecorator.cs — suppress repeated messages
public sealed class RateLimitDecorator : ILogger
{
    private readonly ILogger _inner;
    private readonly TimeSpan _window;
    private readonly Dictionary<string, (DateTime LastSeen, int Count)> _recent = [];

    public RateLimitDecorator(ILogger inner, TimeSpan? window = null)
    {
        _inner = inner;
        _window = window ?? TimeSpan.FromSeconds(30);
    }

    public void Log(LogLevel level, string message)
    {
        var now = DateTime.UtcNow;

        if (_recent.TryGetValue(message, out var entry)
            && now - entry.LastSeen < _window)
        {
            _recent[message] = (entry.LastSeen, entry.Count + 1);
            return;  // suppressed — same message within window
        }

        // Emit suppression notice if we held back duplicates
        if (entry.Count > 0)
            _inner.Log(level, $"(suppressed {entry.Count} duplicates)");

        _recent[message] = (now, 0);
        _inner.Log(level, message);
    }
}

Yet another Decorator. It tracks recently seen messages in a dictionary. If the same message appears again within the time window (default 30 seconds), it increments a counter instead of writing. When that message next arrives after the window has expired, the decorator first emits the suppression count, then writes the message and resets. Result: instead of 10,000 lines of "DB connection failed," you see one line plus "(suppressed 9,999 duplicates)." Two production caveats: the dictionary needs its own lock (or a ConcurrentDictionary) under concurrent callers, and stale entries should eventually be evicted so it doesn't grow without bound.
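End to end, the suppression behavior looks like this (the decorator re-declared standalone, with a 100 ms window so the demo doesn't wait 30 seconds):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

var lines = new List<string>();
ILogger logger = new RateLimitDecorator(new ListLogger(lines),
                                        window: TimeSpan.FromMilliseconds(100));

// 5 identical messages in quick succession: only the first is written
for (int i = 0; i < 5; i++)
    logger.Log(LogLevel.Error, "DB connection failed");

Thread.Sleep(250);  // let the window expire

// Same message again: flush the suppression notice, then write the message
logger.Log(LogLevel.Error, "DB connection failed");

foreach (var line in lines) Console.WriteLine(line);
// DB connection failed
// (suppressed 4 duplicates)
// DB connection failed

public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public interface ILogger { void Log(LogLevel level, string message); }

public sealed class ListLogger(List<string> target) : ILogger
{
    public void Log(LogLevel level, string message) => target.Add(message);
}

// Same logic as the RateLimitDecorator above
public sealed class RateLimitDecorator : ILogger
{
    private readonly ILogger _inner;
    private readonly TimeSpan _window;
    private readonly Dictionary<string, (DateTime LastSeen, int Count)> _recent = [];

    public RateLimitDecorator(ILogger inner, TimeSpan? window = null)
    {
        _inner = inner;
        _window = window ?? TimeSpan.FromSeconds(30);
    }

    public void Log(LogLevel level, string message)
    {
        var now = DateTime.UtcNow;
        if (_recent.TryGetValue(message, out var entry)
            && now - entry.LastSeen < _window)
        {
            _recent[message] = (entry.LastSeen, entry.Count + 1);
            return;  // suppressed
        }
        if (entry.Count > 0)
            _inner.Log(level, $"(suppressed {entry.Count} duplicates)");
        _recent[message] = (now, 0);
        _inner.Log(level, message);
    }
}
```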

Growing Diagram — Level 5

Architecture — after Edge Cases
Services (OrderService, PaymentService, ...) → RateLimit → Async → Timestamp → ThreadId → Logger → ConsoleSink + RotatingFileSink + JsonSink. 5 decorators chained + 3 sinks — all composed in Program.cs, zero coupling between them. New in Level 5: the RateLimit and Async decorators, plus RotatingFileSink and JsonSink.

Before This Level

One ever-growing file. Synchronous writes blocking every request. Flat strings impossible to search.

After This Level

Files rotate at 10 MB. Writes are async via Channel<T>. Structured LogEntry records. Rate limiting squashes floods. All plugged in as Decorators and Strategies.

Transfer: These same four edge cases appear in almost every system. A message queue needs rotation and rate limiting. An email service needs async sending and structured data. The specific solutions differ but the problems are universal — always ask "What if the file grows forever? What if the network is slow? What if the same event fires 10,000 times?"
Section 9 🔴 HARD

Level 6 — Make It Testable

New Constraint: "Every service that uses the logger must be fully unit-testable. Tests must verify what was logged, at what level, without touching files or the console. Time-dependent decorators must be testable with frozen time."
What breaks: Our TimestampDecorator calls DateTime.UtcNow internally — a static globalA static global is a value shared across the entire app through a static member. DateTime.UtcNow is one — you can't freeze it, fast-forward it, or control what time it returns. In a test, you need to verify "the log entry has the correct timestamp," but the timestamp changes every millisecond. you can't freeze. How do you assert the timestamp is correct when it changes every millisecond? The FileSink actually writes to disk — tests shouldn't create real files. And how do you verify that OrderService logs an error when payment fails, without scanning real console output?

Think First #8

You need three test doubles: one that captures log messages so you can assert on them, one that provides a fake clock, and a way to verify your DI wiring works correctly. What interfaces would you create?

60 seconds — list the interfaces before looking.

Need               | Interface                  | Production             | Test Double
Capture log output | ILogSink (already exists!) | ConsoleSink, FileSink  | TestSink — stores entries in a List
Control time       | ITimeProvider              | SystemTimeProvider     | FakeClock — returns a fixed time
Skip real logging  | ILogger (already exists!)  | Full decorated chain   | NullLogger or SpyLogger

Your inner voice:

"The key insight is: we already did most of the work. ILogger and ILogSink are interfaces from Level 0-2. Creating a TestSink is trivial — it implements ILogSink and stores messages in a List<string>. For timestamps, I need to extract DateTime.UtcNow behind an ITimeProvider — .NET 8 even ships one built-in (TimeProvider). For testing services that use the logger, I just pass a TestSink-backed logger through the constructor. DI made this trivially easy."

Production vs. Test Wiring

DI Wiring — production vs test
Production: OrderService → ILogger → Timestamp(ITimeProvider) → ThreadId → Logger → ConsoleSink + RotatingFileSink, backed by SystemTimeProvider. Real clock, real files.
Unit Test: OrderService → ILogger → Timestamp(FakeClock) → Logger → TestSink (List<string>), backed by FakeClock. Frozen time, in-memory.

The Solution — Test Doubles

We need three small pieces: a TestSink to capture output, a FakeClock to freeze time, and an ITimeProvider interface to make time injectable.

TestDoubles.cs — TestSink + FakeClock
// Captures everything that was logged — like a tape recorder
public sealed class TestSink : ILogSink
{
    public List<(LogLevel Level, string Message)> Entries { get; } = [];

    public void Write(LogLevel level, string message)
        => Entries.Add((level, message));
}

// Abstraction: anything that provides "now"
public interface ITimeProvider
{
    DateTime UtcNow { get; }
}

// Production: delegates to the real clock
public sealed class SystemTimeProvider : ITimeProvider
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test: returns whatever time you set
public sealed class FakeClock : ITimeProvider
{
    public DateTime UtcNow { get; set; } = new(2025, 1, 15, 10, 30, 0);
}

TestSink is the key — it stores every log entry in a list. After your test runs, you can assert: "Was an Error logged? Does the message contain 'payment failed'?" No files, no console, no flakiness. FakeClock returns a fixed timestamp so your assertions are deterministic.

TimestampDecorator.cs — now uses ITimeProvider
// BEFORE (untestable — hardcoded DateTime.UtcNow):
// public void Log(LogLevel level, string msg)
//     => _inner.Log(level, $"[{DateTime.UtcNow:HH:mm:ss}] {msg}");

// AFTER (testable — injectable time):
public sealed class TimestampDecorator(
    ILogger inner,
    ITimeProvider clock) : ILogger
{
    public void Log(LogLevel level, string message)
    {
        var stamp = clock.UtcNow.ToString("yyyy-MM-dd HH:mm:ss");
        inner.Log(level, $"[{stamp}] {message}");
    }
}

One tiny change: DateTime.UtcNow becomes clock.UtcNow. In production, the DI container injects SystemTimeProvider. In tests, you pass FakeClock with a hardcoded date. The decorator's behavior is identical — the only difference is where "now" comes from.
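If you'd rather not hand-roll ITimeProvider, .NET 8's built-in TimeProvider abstraction slots in the same way (the FakeTimeProvider shown in the trailing comment comes from the separate Microsoft.Extensions.Time.Testing package):

```csharp
using System;
using System.Collections.Generic;

var lines = new List<string>();
ILogger logger = new TimestampDecorator(new ListLogger(lines), TimeProvider.System);
logger.Log(LogLevel.Info, "Hello");
Console.WriteLine(lines[0]);   // e.g. [2025-03-15 14:30:00] Hello

public enum LogLevel { Debug, Info, Warning, Error, Fatal }
public interface ILogger { void Log(LogLevel level, string message); }

public sealed class ListLogger(List<string> target) : ILogger
{
    public void Log(LogLevel level, string message) => target.Add(message);
}

// Same decorator, but taking the framework's TimeProvider instead of our own
public sealed class TimestampDecorator(ILogger inner, TimeProvider clock) : ILogger
{
    public void Log(LogLevel level, string message)
        => inner.Log(level, $"[{clock.GetUtcNow():yyyy-MM-dd HH:mm:ss}] {message}");
}

// In tests (Microsoft.Extensions.Time.Testing package):
//   var fake = new FakeTimeProvider(new DateTimeOffset(2025, 3, 15, 14, 30, 0, TimeSpan.Zero));
//   new TimestampDecorator(logger, fake)   -> deterministic "[2025-03-15 14:30:00] ..."
```

The only signature difference: GetUtcNow() returns a DateTimeOffset rather than a DateTime.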

LoggerTests.cs — deterministic unit tests
[Fact]
public void Timestamp_decorator_prepends_formatted_time()
{
    // Arrange
    var sink = new TestSink();
    var clock = new FakeClock { UtcNow = new(2025, 3, 15, 14, 30, 0) };
    var logger = new Logger(LogLevel.Debug);
    logger.AddSink(sink);
    ILogger decorated = new TimestampDecorator(logger, clock);

    // Act
    decorated.Log(LogLevel.Info, "Hello");

    // Assert — exact, deterministic, no flakiness
    Assert.Single(sink.Entries);
    Assert.Equal("[2025-03-15 14:30:00] Hello", sink.Entries[0].Message);
}

[Fact]
public void Order_service_logs_error_on_payment_failure()
{
    // Arrange
    var sink = new TestSink();
    var logger = new Logger(LogLevel.Debug);
    logger.AddSink(sink);
    var service = new OrderService(logger);

    // Act
    var result = service.PlaceOrder(new Order { Id = 42 });

    // Assert — verify the service logged what we expected
    Assert.Contains(sink.Entries,
        e => e.Level == LogLevel.Error
          && e.Message.Contains("Order 42"));
}

[Fact]
public void Level_filter_suppresses_debug_messages()
{
    var sink = new TestSink();
    var logger = new Logger(LogLevel.Warning);  // only Warning+
    logger.AddSink(sink);

    logger.Log(LogLevel.Debug, "should be dropped");
    logger.Log(LogLevel.Warning, "should appear");

    Assert.Single(sink.Entries);
    Assert.Equal("should appear", sink.Entries[0].Message);
}

Every test is fast (no I/O), deterministic (no random time), and isolated (no shared state). The TestSink is your assertion point — after the test runs, you inspect sink.Entries to verify exactly what was logged. This is the payoff of building on interfaces from Day 1.

How a Test Flows

Test execution flow
Test flow: create test doubles, run code, assert on captured entries. 1. Arrange: TestSink + FakeClock. 2. Act: service.PlaceOrder(). 3. Assert: sink.Entries contains the expected entry. Result: ✅ PASS, fast and deterministic. No files, no console, no network, no flakiness: pure in-memory verification.

Growing Diagram — Level 6

Architecture — after DI + Testability
Growing diagram after Level 6: everything depends on abstractions (interfaces). Abstractions: ILogger, ILogSink, ITimeProvider. Production wiring: Logger, FileSink, SystemTimeProvider. Test wiring: NullLogger, TestSink, FakeClock. Same code, different wiring: DI makes production and test environments interchangeable.

Before This Level

Tests require real files and real time. DateTime.UtcNow changes every millisecond. Assertions are flaky.

After This Level

TestSink captures output in memory. FakeClock freezes time. Tests are fast, deterministic, and run in parallel.

Transfer: This "extract the clock behind an interface" trick works for anything time-dependent: cache expiration, token lifetimes, scheduled jobs, rate limiters. And TestSink is the pattern for any "capture and assert" scenario: a TestEmailSender, a TestNotificationService, a TestPaymentGateway. If it has side effects, wrap it in an interface and provide a test double.
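To make that transfer concrete, here is a minimal sketch of the same capture-and-assert recipe applied to email. The IEmailSender, TestEmailSender, and SignupService names are hypothetical illustrations, not part of our framework:

```csharp
// Hypothetical side-effect abstraction, analogous to ILogSink
public interface IEmailSender
{
    void Send(string to, string subject, string body);
}

// Test double: records every send instead of talking to an SMTP server
public sealed class TestEmailSender : IEmailSender
{
    public List<(string To, string Subject, string Body)> Sent { get; } = [];

    public void Send(string to, string subject, string body)
        => Sent.Add((to, subject, body));
}

// In a test (SignupService is a hypothetical consumer):
// var email = new TestEmailSender();
// var service = new SignupService(email);
// service.Register("ada@example.com");
// Assert.Contains(email.Sent, m => m.To == "ada@example.com");
```

Same three moves as TestSink: extract an interface, provide an in-memory recorder, assert on the captured list.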
Section 10 🔴 HARD

Level 7 — Scale to Distributed Systems

New Constraint: "The app runs on 20 servers behind a load balancer. A single user request hops across 5 microservices. You need to trace that request end-to-end, search logs from all servers in one place, and machines — not humans — need to parse the output."
What breaks: Our logger writes to local files. With 20 servers, that's 20 separate log files. Finding what happened to Order #42 means SSH-ing into each server and grepping each file. A request that starts in the API, calls PaymentService, then NotificationService produces three disconnected log entries on three machines with no way to link them. And flat text logs can't be efficiently indexed by search engines like Elasticsearch.

Think First #9

Three problems: (1) linking log entries across services, (2) centralizing logs from many servers, (3) making logs machine-searchable. What single concept solves #1? What infrastructure solves #2? What format solves #3?

60 seconds — three problems, three answers.

Problem → Solution → How
- Linking across services → Correlation ID → Generate a GUID at the gateway, propagate it via an HTTP header, and include it in every log entry. A correlation ID is a unique identifier (usually a GUID) generated at the entry point of a request and propagated through every service call; because every log entry includes this ID, you can search for it and see the entire journey of one request across all services.
- Centralizing logs → ELK stack or similar → Ship logs to a central search engine (Elasticsearch, Seq, Datadog). ELK = Elasticsearch (search engine) + Logstash (log pipeline) + Kibana (dashboard): logs from all servers flow into Logstash, which transforms and sends them to Elasticsearch, while Kibana provides a web UI to search, filter, and visualize.
- Machine-readable format → Structured JSON logging → Each log entry is a JSON object with typed fields, not a flat string.

Your inner voice:

"A Correlation ID is just a GUID that travels with the request. The API gateway generates it, attaches it to an HTTP header (like X-Correlation-Id), and every downstream service reads and propagates it. Every log entry includes this ID. When something goes wrong, I search for that one GUID and see the entire journey — across 5 services, 20 servers."

"For centralization, the ELK stack is the industry standard: Elasticsearch stores and indexes logs, Logstash ingests them, Kibana visualizes them. Our logger doesn't need to know about ELK — it just writes structured JSON. A lightweight agent (Filebeat) watches the log files and ships them to Elasticsearch."

Correlation ID — One ID, Many Services

Correlation ID Flow
Diagram: the correlation ID flows from the gateway through Order, Payment, and Notification services. User → API Gateway (generates id = abc-123) → OrderService (abc-123) → PaymentService (abc-123) → NotificationService (abc-123). Searching for abc-123 returns the full request journey: OrderService: Order placed; PaymentService: Charged $99; Notification: Email sent.

The Solution

Two new pieces: a CorrelationIdDecorator that stamps every log entry with the request's unique ID, and a JsonSink that outputs machine-readable JSON for Elasticsearch.

CorrelationIdDecorator.cs
// Stores the current request's correlation ID (per-thread / per-async-flow)
public static class CorrelationContext
{
    private static readonly AsyncLocal<string?> _id = new();
    public static string? CurrentId
    {
        get => _id.Value;
        set => _id.Value = value;
    }
}

// Decorator: prepends the correlation ID to every message
public sealed class CorrelationIdDecorator(ILogger inner) : ILogger
{
    public void Log(LogLevel level, string message)
    {
        var id = CorrelationContext.CurrentId ?? "no-corr-id";
        inner.Log(level, $"[{id}] {message}");
    }
}

// Middleware: reads or generates the ID for each request
public class CorrelationMiddleware(RequestDelegate next)
{
    public async Task InvokeAsync(HttpContext ctx)
    {
        var id = ctx.Request.Headers["X-Correlation-Id"]
                     .FirstOrDefault()
                 ?? Guid.NewGuid().ToString("N")[..12];

        CorrelationContext.CurrentId = id;
        ctx.Response.Headers["X-Correlation-Id"] = id;
        await next(ctx);
    }
}

AsyncLocal<T> is the key — it stores a value that flows with the async call chain, so every await in the same request sees the same ID. The middleware reads the ID from the incoming header (if the upstream service sent one) or generates a new one. Every log entry in this request now carries the same correlation ID.
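If AsyncLocal&lt;T&gt; feels magical, this self-contained console sketch (separate from the framework, names invented for the demo) shows the behavior: two concurrent "requests" each see only their own value, even across awaits:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class AsyncLocalDemo
{
    private static readonly AsyncLocal<string?> _id = new();

    static async Task HandleRequest(string id)
    {
        _id.Value = id;            // set once at the "entry point"
        await Task.Delay(10);      // the value survives the await...
        Report();                  // ...and flows into downstream calls
    }

    static void Report()
        => Console.WriteLine($"correlation id here: {_id.Value}");

    static async Task Main()
    {
        // Both run concurrently, yet neither sees the other's ID.
        await Task.WhenAll(HandleRequest("req-A"), HandleRequest("req-B"));
    }
}
```

A plain static string field would fail this test: whichever request wrote last would win for both.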

JsonSink.cs — structured output for ELK
using System.Text.Json;

public sealed class JsonSink : ILogSink
{
    private readonly StreamWriter _writer;

    public JsonSink(string path)
        => _writer = new StreamWriter(path, append: true);

    public void Write(LogLevel level, string message)
    {
        // Build a JSON object that Elasticsearch can index
        var json = JsonSerializer.Serialize(new
        {
            timestamp = DateTime.UtcNow.ToString("o"),
            level = level.ToString(),
            message,
            correlationId = CorrelationContext.CurrentId,
            machineName = Environment.MachineName,
            threadId = Environment.CurrentManagedThreadId
        });

        _writer.WriteLine(json);
        _writer.Flush();
    }
}

// Output example (one JSON object per line):
// {"timestamp":"2025-03-15T14:30:00.0000000Z","level":"Error",
//  "message":"Payment failed","correlationId":"abc123def456",
//  "machineName":"web-server-03","threadId":14}

Each line is a standalone JSON object. This format is called NDJSON (newline-delimited JSON): one JSON object per line, the standard input format for Elasticsearch, Filebeat, and most log aggregation tools. Unlike a JSON array, each line can be parsed independently, so the file can be streamed. Filebeat watches this file and ships each line to Elasticsearch, and Kibana can then search: "Show me all entries where correlationId = abc123 AND level = Error."
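A quick way to see why one-object-per-line matters: each line parses on its own, so a consumer never needs the whole file in memory. A minimal sketch (the sample lines are invented for illustration):

```csharp
using System;
using System.Text.Json;

class NdjsonDemo
{
    static void Main()
    {
        // Two NDJSON lines, in the shape JsonSink writes them
        var lines = new[]
        {
            """{"timestamp":"2025-03-15T14:30:00Z","level":"Error","message":"Payment failed"}""",
            """{"timestamp":"2025-03-15T14:30:01Z","level":"Info","message":"Retry scheduled"}"""
        };

        foreach (var line in lines)
        {
            // Each line is a complete JSON document; no surrounding array needed
            using var doc = JsonDocument.Parse(line);
            Console.WriteLine(doc.RootElement.GetProperty("level").GetString());
        }
    }
}
```

This is exactly why a shipper can tail the file and forward one line at a time: there is no closing `]` to wait for.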

Program.cs — final production wiring
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSingleton<ITimeProvider, SystemTimeProvider>();
builder.Services.AddSingleton<ILogger>(sp =>
{
    var core = new Logger(LogLevel.Debug);
    core.AddSink(new ConsoleSink());
    core.AddSink(new RotatingFileSink("logs/app.log"));
    core.AddSink(new JsonSink("logs/app.json"));

    var clock = sp.GetRequiredService<ITimeProvider>();

    // Decorator chain: outermost runs first
    ILogger logger = core;
    logger = new CallerInfoDecorator(logger);
    logger = new ThreadIdDecorator(logger);
    logger = new TimestampDecorator(logger, clock);
    logger = new CorrelationIdDecorator(logger);
    logger = new RateLimitDecorator(logger);
    logger = new AsyncLoggerDecorator(logger);

    return logger;
});

var app = builder.Build();
app.UseMiddleware<CorrelationMiddleware>();
app.Run();

Look at the decorator chain: 6 decorators, each adding one responsibility. The outermost (AsyncLoggerDecorator) receives the call first and drops it into a queue. The message flows inward through RateLimit, CorrelationId, Timestamp, ThreadId, CallerInfo, and finally hits the core Logger which dispatches to 3 sinks. Every piece is independently testable, independently removable, independently replaceable.

The ELK Stack — Centralized Log Search

ELK Architecture
ELK architecture: app servers (web-01, web-02, web-03, ... 20 servers) write JSON log files. Filebeat ships them to Logstash (parse + transform), which feeds Elasticsearch (index + search across billions of log entries). Kibana sits on top (visualize + alert via web dashboard). Example Kibana query: correlationId:"abc123" AND level:"Error".

The key insight: our logger doesn't need to know about Elasticsearch. It writes structured JSON to local files. Filebeat (a lightweight agent) watches those files and ships new lines to Logstash, which forwards them to Elasticsearch. Kibana provides the search UI. Our code stays simple; the infrastructure handles distribution.
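For a feel of the shipping side, here is a minimal Filebeat configuration sketch. Treat the exact keys and paths as illustrative (check the Filebeat docs for your version); the point is that none of this touches our C# code:

```yaml
# filebeat.yml (sketch; keys and paths are illustrative)
filebeat.inputs:
  - type: log
    paths:
      - /var/app/logs/*.json        # the NDJSON files JsonSink writes
    json.keys_under_root: true      # lift each line's JSON fields to top level

output.logstash:
  hosts: ["logstash.internal:5044"] # forward to Logstash for parse + transform
```

Swapping the destination (Elasticsearch directly, or a hosted service) is a config change here, not a code change in the logger.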

Growing Diagram — Level 7 (Complete Architecture)

Final Architecture — all 8 levels
Complete logging framework architecture after all 8 levels. CONSUMERS (OrderService, PaymentService, AuthService, ...) call «ILogger». DECORATORS: Async → RateLimit → CorrelationId → Timestamp → ThreadId → CallerInfo. CORE: Logger. SINKS («ILogSink»): ConsoleSink, RotatingFileSink, JsonSink. INFRA: Filebeat → Logstash → Elasticsearch → Kibana. Patterns used: ✓ Strategy (ILogSink) ✓ Decorator (6 layers) ✓ Singleton (DI lifetime) ✓ DI (constructor injection) ✓ Observer (Filebeat → ELK).

Before This Level

Logs live on individual servers. No way to trace a request across services. Searching means SSH + grep on 20 machines.

After This Level

Correlation IDs link entries across services. Structured JSON logs flow to Elasticsearch. One Kibana query finds everything about any request, any user, any time range.

Transfer: Correlation IDs and centralized logging are table stakes for any distributed system — microservices, event-driven architectures, serverless functions. The same ELK stack (or alternatives like Seq, Datadog, Grafana Loki) powers logging at companies from startups to Netflix-scale. Our logging framework's ILogSink strategy made it trivial to add — just one new sink class.
Section 11

The Full Code — Everything Assembled

You built this logging framework one constraint at a time across eight levels. Now let's see every piece in one place. Each file is annotated with // Level N comments so you can trace which constraint forced each line into existence. Green types appeared early (Levels 0–2), yellow ones in the middle (Levels 3–5), and red ones in the advanced stages (6–7).

COMPLETE TYPE MAP — COLOR = LEVEL INTRODUCED. Legend: L0–L2 (Foundation), L3–L5 (Patterns), L6–L7 (Advanced).
MODELS: LogLevel enum (L0) · LogEntry record (L1) · StructuredLogEntry record (L7).
INTERFACES: ILogSink (L1).
SINKS (STRATEGY): ConsoleSink (L1) · FileSink (L3) · DbSink (L3) · FileRotationSink (L6) · TestSink (L6).
DECORATORS: TimestampDecorator (L4) · ThreadIdDecorator (L4) · CallerInfoDecorator (L4) · CorrelationDecorator (L7) · AsyncDecorator (L6).
ENGINE: Logger, DI singleton (L0–L7).
15 types total: 3 models · 1 interface · 5 sinks · 5 decorators · 1 engine, plus the Program.cs entry point.

Now let's see the actual code. Each file is organized by responsibility — click through the tabs to read each one.

Models.cs — Data types the logger carries around
namespace Logging.Models;

// ─── LogLevel ────────────────────────────────────────  // Level 0
// Severity categories. The logger filters entries whose
// level is below the configured minimum.
public enum LogLevel { Trace, Debug, Info, Warning, Error, Fatal }

// ─── LogEntry ────────────────────────────────────────  // Level 1
// An immutable snapshot of one log event. Records give us
// value equality and a nice ToString() for free.
public record LogEntry(
    LogLevel Level,                                       // Level 0
    string Message,                                       // Level 0
    DateTimeOffset Timestamp,                             // Level 4
    string? Category = null,                              // Level 1
    string? ThreadId = null,                              // Level 4
    string? CallerInfo = null,                            // Level 4
    string? CorrelationId = null                          // Level 7
);

// ─── StructuredLogEntry ─────────────────────────────  // Level 7
// Extends LogEntry with key-value properties for machine-
// readable structured logging (JSON sinks love this).
public record StructuredLogEntry(
    LogLevel Level,
    string Message,
    DateTimeOffset Timestamp,
    Dictionary<string, object> Properties,                // Level 7
    string? Category = null,
    string? ThreadId = null,
    string? CallerInfo = null,
    string? CorrelationId = null
) : LogEntry(Level, Message, Timestamp, Category, ThreadId, CallerInfo, CorrelationId);
Sinks.cs — Where log entries end up
using Logging.Models;

namespace Logging.Sinks;

// ─── ILogSink ────────────────────────────────────────  // Level 1
// The Strategy interface. Every destination implements
// this single method. The logger doesn't know or care
// whether it's writing to the console, a file, or a DB.
public interface ILogSink : IDisposable                   // Level 6
{
    void Write(LogEntry entry);                           // Level 1
}

// ─── ConsoleSink ─────────────────────────────────────  // Level 1
public sealed class ConsoleSink : ILogSink
{
    public void Write(LogEntry entry)
    {
        var color = entry.Level switch                    // Level 2
        {
            LogLevel.Error or LogLevel.Fatal => ConsoleColor.Red,
            LogLevel.Warning => ConsoleColor.Yellow,
            LogLevel.Info => ConsoleColor.Green,
            _ => ConsoleColor.Gray
        };
        var prev = Console.ForegroundColor;
        Console.ForegroundColor = color;
        Console.WriteLine($"[{entry.Level}] {entry.Message}");
        Console.ForegroundColor = prev;
    }
    public void Dispose() { /* nothing to release */ }
}

// ─── FileSink ────────────────────────────────────────  // Level 3
public sealed class FileSink : ILogSink
{
    private readonly StreamWriter _writer;
    private readonly object _lock = new();                // Level 5

    public FileSink(string path)
    {
        _writer = new StreamWriter(path, append: true)
            { AutoFlush = true };
    }
    public void Write(LogEntry entry)
    {
        lock (_lock)                                      // Level 5
        {
            _writer.WriteLine(
                $"{entry.Timestamp:O} [{entry.Level}] {entry.Message}");
        }
    }
    public void Dispose() => _writer.Dispose();          // Level 6
}

// ─── DbSink ─────────────────────────────────────────  // Level 3
public sealed class DbSink : ILogSink
{
    private readonly string _connectionString;

    public DbSink(string connectionString)
        => _connectionString = connectionString;

    public void Write(LogEntry entry)
    {
        // In production: INSERT INTO Logs (Level, Message, Timestamp...)
        // Simplified here for clarity.
        Console.WriteLine($"[DB] Persisted: {entry.Level} - {entry.Message}");
    }
    public void Dispose() { /* close DB connection */ }
}

// ─── FileRotationSink ────────────────────────────────  // Level 6
// Wraps file writing with automatic rotation when the
// file exceeds a size threshold.
public sealed class FileRotationSink : ILogSink
{
    private readonly string _basePath;
    private readonly long _maxBytes;
    private StreamWriter _writer;
    private long _bytesWritten;
    private int _fileIndex;
    private readonly object _lock = new();

    public FileRotationSink(string basePath, long maxBytes = 10_000_000)
    {
        _basePath = basePath;
        _maxBytes = maxBytes;
        _writer = OpenNewFile();
    }
    public void Write(LogEntry entry)
    {
        var line = $"{entry.Timestamp:O} [{entry.Level}] {entry.Message}";
        lock (_lock)
        {
            if (_bytesWritten + line.Length > _maxBytes)
                Rotate();
            _writer.WriteLine(line);
            _bytesWritten += line.Length + Environment.NewLine.Length;
        }
    }
    private void Rotate()
    {
        _writer.Dispose();
        _fileIndex++;
        _writer = OpenNewFile();
        _bytesWritten = 0;
    }
    private StreamWriter OpenNewFile()
        => new(Path.ChangeExtension(_basePath,
               $".{_fileIndex}.log"), append: false)
            { AutoFlush = true };

    public void Dispose() => _writer.Dispose();
}

// ─── TestSink ────────────────────────────────────────  // Level 6
// Captures entries in memory so unit tests can assert on them.
public sealed class TestSink : ILogSink
{
    public List<LogEntry> Entries { get; } = new();
    public void Write(LogEntry entry) => Entries.Add(entry);
    public void Dispose() => Entries.Clear();
}
Decorators.cs — Wrap sinks to add behavior
using System.Threading.Channels;
using Logging.Models;
using Logging.Sinks;

namespace Logging.Decorators;

// ─── TimestampDecorator ─────────────────────────────  // Level 4
// Wraps any ILogSink and stamps the current UTC time
// onto the entry before passing it along.
public sealed class TimestampDecorator : ILogSink
{
    private readonly ILogSink _inner;
    public TimestampDecorator(ILogSink inner) => _inner = inner;

    public void Write(LogEntry entry)
    {
        var stamped = entry with
            { Timestamp = DateTimeOffset.UtcNow };        // Level 4
        _inner.Write(stamped);
    }
    public void Dispose() => _inner.Dispose();
}

// ─── ThreadIdDecorator ──────────────────────────────  // Level 4
public sealed class ThreadIdDecorator : ILogSink
{
    private readonly ILogSink _inner;
    public ThreadIdDecorator(ILogSink inner) => _inner = inner;

    public void Write(LogEntry entry)
    {
        var tagged = entry with
            { ThreadId = Environment.CurrentManagedThreadId.ToString() };
        _inner.Write(tagged);
    }
    public void Dispose() => _inner.Dispose();
}

// ─── CallerInfoDecorator ────────────────────────────  // Level 4
public sealed class CallerInfoDecorator : ILogSink
{
    private readonly ILogSink _inner;
    public CallerInfoDecorator(ILogSink inner) => _inner = inner;

    public void Write(LogEntry entry)
    {
        var enriched = entry with
            { CallerInfo = GetCaller() };
        _inner.Write(enriched);
    }
    private static string GetCaller()
    {
        var frame = new System.Diagnostics.StackTrace(skipFrames: 3)
            .GetFrame(0);
        return frame?.GetMethod()?.DeclaringType?.Name ?? "Unknown";
    }
    public void Dispose() => _inner.Dispose();
}

// ─── CorrelationDecorator ───────────────────────────  // Level 7
// Tags every entry with a correlation ID so you can trace
// a single request across multiple log lines.
public sealed class CorrelationDecorator : ILogSink
{
    private readonly ILogSink _inner;
    private static readonly AsyncLocal<string?> _correlationId = new();

    public static string? CurrentCorrelationId
    {
        get => _correlationId.Value;
        set => _correlationId.Value = value;
    }
    public CorrelationDecorator(ILogSink inner) => _inner = inner;

    public void Write(LogEntry entry)
    {
        var tagged = entry with
            { CorrelationId = _correlationId.Value ?? Guid.NewGuid().ToString("N")[..8] };
        _inner.Write(tagged);
    }
    public void Dispose() => _inner.Dispose();
}

// ─── AsyncDecorator ─────────────────────────────────  // Level 6
// Offloads writing to a background thread so the caller
// never blocks on slow sinks (file I/O, DB, network).
public sealed class AsyncDecorator : ILogSink
{
    private readonly ILogSink _inner;
    private readonly Channel<LogEntry> _channel;
    private readonly Task _consumer;

    public AsyncDecorator(ILogSink inner, int capacity = 1024)
    {
        _inner = inner;
        _channel = Channel.CreateBounded<LogEntry>(capacity);
        _consumer = Task.Run(ConsumeAsync);
    }
    public void Write(LogEntry entry)
    {
        if (!_channel.Writer.TryWrite(entry))
            _inner.Write(entry); // back-pressure: write synchronously
    }
    private async Task ConsumeAsync()
    {
        await foreach (var entry in _channel.Reader.ReadAllAsync())
            _inner.Write(entry);
    }
    public void Dispose()
    {
        _channel.Writer.Complete();
        _consumer.GetAwaiter().GetResult();
        _inner.Dispose();
    }
}
Logger.cs — The orchestrator
using Logging.Models;
using Logging.Sinks;

namespace Logging;

// ─── Logger ─────────────────────────────────────────  // Level 0
// The single entry point for all logging. Registered as
// a DI singleton (Level 5) so every service shares the
// same configured pipeline.
public sealed class Logger : IDisposable                  // Level 6
{
    private readonly IReadOnlyList<ILogSink> _sinks;     // Level 1
    private readonly LogLevel _minimumLevel;              // Level 2
    private readonly object _lock = new();                // Level 5

    public Logger(
        IEnumerable<ILogSink> sinks,                     // Level 1
        LogLevel minimumLevel = LogLevel.Info)            // Level 2
    {
        _sinks = sinks.ToList().AsReadOnly();
        _minimumLevel = minimumLevel;
    }

    // ─── Convenience methods ─────────────────────────
    public void Trace(string msg)   => Log(LogLevel.Trace, msg);
    public void Debug(string msg)   => Log(LogLevel.Debug, msg);
    public void Info(string msg)    => Log(LogLevel.Info, msg);
    public void Warning(string msg) => Log(LogLevel.Warning, msg);
    public void Error(string msg)   => Log(LogLevel.Error, msg);
    public void Fatal(string msg)   => Log(LogLevel.Fatal, msg);

    public void Log(LogLevel level, string message)       // Level 0
    {
        if (level < _minimumLevel) return;                // Level 2

        var entry = new LogEntry(level, message,
            DateTimeOffset.UtcNow);                       // Level 4

        lock (_lock)                                      // Level 5
        {
            foreach (var sink in _sinks)                  // Level 1
            {
                try
                {
                    sink.Write(entry);                    // Level 1
                }
                catch (Exception ex)                      // Level 5
                {
                    Console.Error.WriteLine(
                        $"Sink {sink.GetType().Name} failed: {ex.Message}");
                }
            }
        }
    }

    public void Dispose()                                 // Level 6
    {
        foreach (var sink in _sinks)
            sink.Dispose();
    }
}
Program.cs — DI wiring and demo
using Logging;
using Logging.Decorators;
using Logging.Models;
using Logging.Sinks;

// ─── DI Registration ────────────────────────────────  // Level 5
var builder = WebApplication.CreateBuilder(args);

// Build decorated sink pipelines:
// Console: correlation + timestamp + threadId + console output
ILogSink consolePipeline = new ConsoleSink();             // Level 1
consolePipeline = new TimestampDecorator(consolePipeline); // Level 4
consolePipeline = new ThreadIdDecorator(consolePipeline); // Level 4
consolePipeline = new CorrelationDecorator(consolePipeline); // Level 7

// File: timestamp + caller + async + rotating file
ILogSink filePipeline = new FileRotationSink("logs/app.log"); // Level 6
filePipeline = new CallerInfoDecorator(filePipeline);     // Level 4
filePipeline = new TimestampDecorator(filePipeline);      // Level 4
filePipeline = new AsyncDecorator(filePipeline);          // Level 6

// Register Logger as singleton with both pipelines
builder.Services.AddSingleton(                            // Level 5
    new Logger(
        new[] { consolePipeline, filePipeline },
        LogLevel.Debug));

var app = builder.Build();

// ─── Usage ──────────────────────────────────────────
var logger = app.Services.GetRequiredService<Logger>();

logger.Info("Application started");                       // Level 0
logger.Debug("Loading configuration...");                 // Level 2
logger.Warning("Cache miss on user profile");             // Level 2
logger.Error("Payment gateway timeout after 30s");        // Level 5

// Correlation ID for request tracing                     // Level 7
CorrelationDecorator.CurrentCorrelationId = Guid.NewGuid().ToString("N")[..8];
logger.Info("Processing order #12345");
logger.Info("Charging payment method");
logger.Info("Order confirmed");

logger.Dispose();                                         // Level 6
Notice the growth: the Logger class itself is barely 40 lines, and it's intentionally thin. It doesn't know how to write to a file, format timestamps, or rotate logs; it just iterates its sinks and calls Write(). All the interesting behavior lives in sinks and decorators, which you can mix, match, and swap in any combination via DI without ever touching the core engine. That's the power of Strategy + Decorator working together.
Section 12

Pattern Spotting — X-Ray Vision

You've been using design patterns for the last eight levels. Some were obvious — we named them as we built them. Others are hiding in the code, doing their job without anyone labeling them. This section is about developing pattern recognition: the ability to look at code and see the structural bones underneath it. Senior engineers do this unconsciously — they glance at a Logger class and immediately see "that's a Strategy for sinks, Decorator for enrichment, Singleton for sharing." That skill comes from building the patterns yourself.

Think First #10

We explicitly named three patterns during the build: Strategy (sinks), Decorator (enrichment wrappers), and Singleton (DI registration). But there's at least one MORE pattern hiding in our code that we never mentioned by name. Hint: think about what happens when the logger iterates through multiple sinks — what pattern governs that flow?

Take 30 seconds before revealing.

Chain of Responsibility — When the Logger iterates through its list of sinks and each one independently decides whether to process the entry, that's a chain. Each sink is a handler in the pipeline. If a FileSink fails, the next sink still gets its turn. The entry flows through the chain until every handler has had a chance to act.

The Three Explicit Patterns

These are the patterns we named during the build. For each one: where it lives, what it enables, and what breaks without it.

Strategy Pattern — "Swap where logs go without touching the logger"

Where: ILogSink + ConsoleSink, FileSink, DbSink, FileRotationSink, TestSink
Enables: Add a new log destination (Elasticsearch, Slack webhook, cloud storage) by implementing one interface. Zero changes to Logger.
Without it: The Logger would have if (writeToFile) ... else if (writeToDb) ... inside its Log() method. Adding a new destination means editing the core class every time.

Decorator Pattern — "Stack enrichments without modifying sinks"

Where: TimestampDecorator, ThreadIdDecorator, CallerInfoDecorator, CorrelationDecorator, AsyncDecorator
Enables: Mix and match enrichments per sink. Console gets timestamp + threadId. File gets timestamp + caller + async. Each decorator adds one behavior and delegates to the next.
Without it: You'd need TimestampConsoleSink, TimestampFileSink, ThreadIdTimestampConsoleSink… a combinatorial explosion of classes. With 5 sinks and 5 enrichments, covering every combination takes 5 × 2⁵ = 160 classes; with Decorator you need only 5 sinks + 5 decorators = 10 classes, and you can compose them freely.

Singleton (via DI) — "One logger instance, shared by everyone"

Where: builder.Services.AddSingleton(new Logger(...)) in Program.cs
Enables: Every service, controller, and middleware gets the same Logger with the same configured sinks. No accidental duplicate file handles or missed log entries.
Without it: Each class creates its own new Logger(...). Multiple file handles to the same log file cause corruption, and configuration is scattered across the codebase instead of centralized in DI.

Pattern X-Ray — See Through the Code

Here's the class diagram with colored overlays showing which pattern each type belongs to. Yellow is Strategy, purple is Decorator, cyan is Singleton. Notice how Decorator wraps Strategy — every decorator is also an ILogSink, which is what makes the stacking work.

PATTERN X-RAY OVERLAY. Legend: Strategy (sinks), Decorator (wrappers), Singleton (Logger), Chain of Responsibility (the sink loop). The Logger (DI singleton) iterates its sinks (foreach sink) through the ILogSink interface. STRATEGY SINKS: ConsoleSink, FileSink, DbSink, FileRotationSink, TestSink. DECORATOR WRAPPERS: Timestamp, ThreadId, CallerInfo, Correlation, Async; each wraps any ILogSink.

How the Patterns Interact

Patterns don't live in isolation. Here's what happens when your code calls logger.Error("Payment failed"). The Singleton ensures there's one Logger. The Logger iterates its sinks (Chain). Each sink might be wrapped in Decorators that enrich the entry. Finally, the concrete Strategy sink writes to its destination.

ONE LOG CALL — PATTERNS COLLABORATING. Your code calls logger.Error(). SINGLETON: the one Logger filters by level, then iterates each sink. DECORATOR CHAIN: Timestamp → ThreadId → Caller, each enriching the entry and then calling _inner.Write(). STRATEGY: FileSink writes to disk. Done: log persisted. The same call also dispatches to ConsoleSink, which has its own decorator chain.

Chain of Responsibility

Where: The foreach (var sink in _sinks) loop inside Logger.Log(). Each sink is a link in the chain. If one sink throws an exception, the try/catch ensures the next sink still gets its turn.

Why it matters: This is a broadcast chain — every handler processes the entry, unlike the classic Chain of Responsibility, which stops after one handler processes the request. The broadcast variant is extremely common in logging frameworks, middleware pipelines, and event systems, where multiple handlers need to react to the same event.
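To pin down the difference between the two variants, a tiny self-contained sketch (toy handler list, not taken from our framework) contrasting stop-at-first dispatch with the broadcast loop our Logger uses:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ChainDemo
{
    record Handler(string Name, Func<string, bool> CanHandle);

    static void Main()
    {
        var handlers = new List<Handler>
        {
            new("FileSink",    msg => true),
            new("ConsoleSink", msg => true),
            new("DbSink",      msg => true),
        };

        // Classic Chain of Responsibility: stop at the FIRST willing handler
        var first = handlers.First(h => h.CanHandle("entry"));
        Console.WriteLine($"classic: handled by {first.Name} only");

        // Broadcast variant (what Logger.Log does): EVERY handler gets a turn
        foreach (var h in handlers.Where(h => h.CanHandle("entry")))
            Console.WriteLine($"broadcast: {h.Name} handled the entry");
    }
}
```

Swap the First for the foreach and you've moved from one variant to the other; everything else about the chain stays the same.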

The Takeaway: Four patterns in a logging framework. Strategy decides WHERE logs go. Decorator decides WHAT information they carry. Singleton ensures ONE configured instance. Chain of Responsibility ensures EVERY sink gets its turn. Each pattern handles one concern and passes the baton to the next.
Section 13

The Growing Diagram — Complete Evolution

You built this logging framework across eight levels. At Level 0 it was a single class that printed to the console. By Level 7, it's a composable pipeline with decorators, async offloading, file rotation, and correlation tracking. But it didn't happen all at once — it grew one constraint at a time.

Each stage below shows what was added at that level. The growth curve — the count of types at each level — stays gentle because we never added more than one concept at a time. A healthy design grows gradually, one to three new types per level; an unhealthy one dumps ten types in Level 0 because someone tried to anticipate every future need.

Think First #11

Look at the 8 stages below. At which level does the design jump the most in complexity? Why is that the natural inflection point for a logging framework?

Design Evolution — L0 through L7
[Diagram: Design evolution and cumulative type count]
  • L0 Basic Log: Logger, LogLevel (+2, total 2)
  • L1 Sinks: ILogSink, ConsoleSink, LogEntry (+3, total 5)
  • L2 Filtering: MinLevel filter (+0, behavior only; total 5)
  • L3 Multi-Sink: FileSink, DbSink (+2, total 7)
  • L4 Decorators: Timestamp, ThreadId, CallerInfo (+3, total 10)
  • L5 Thread Safety: lock + DI (+0, infrastructure only; total 10)
  • L6 Advanced: Rotation, Async, TestSink (+3, total 13)
  • L7 Structured: Correlation, Structured (+2, total 15)
Total: 15 types across 8 levels.

Here's the complete picture — every entity, its type, and the level that introduced it.

Entity | Kind | Level | Why This Kind?
LogLevel | enum | L0 | Category without behavior — severity is just a label
Logger | sealed class | L0 | Mutable orchestrator — holds sinks, manages lifecycle
LogEntry | record | L1 | Immutable — log data is fixed after creation (records enforce this naturally: the 'with' expression creates a new copy instead of mutating, so multiple decorators enriching the same entry never race)
ILogSink | interface | L1 | Strategy contract — where logs go is swappable
ConsoleSink | sealed class | L1 | Simplest sink — writes to stdout
FileSink | sealed class | L3 | Needs StreamWriter lifecycle + thread-safe writes
DbSink | sealed class | L3 | Persistent storage for production log queries
TimestampDecorator | sealed class | L4 | Decorator — wraps any ILogSink, adds UTC timestamp
ThreadIdDecorator | sealed class | L4 | Decorator — tags entry with current thread ID
CallerInfoDecorator | sealed class | L4 | Decorator — captures calling class name via stack trace
FileRotationSink | sealed class | L6 | Production sink — rotates files when size exceeds threshold
AsyncDecorator | sealed class | L6 | Decorator — offloads write to background channel
TestSink | sealed class | L6 | In-memory capture for unit test assertions
CorrelationDecorator | sealed class | L7 | Decorator — tags entries with request correlation ID
StructuredLogEntry | record | L7 | Extends LogEntry with key-value properties for JSON sinks
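To make the "immutable" row concrete, here is a minimal sketch of how a record plus the `with` expression lets decorators enrich without mutation (the ThreadId property is an illustrative assumption; the real framework may carry context differently):

```csharp
// Immutable log data: 'with' copies instead of mutating.
public record LogEntry(LogLevel Level, string Message, DateTimeOffset Timestamp)
{
    public int? ThreadId { get; init; }
}

public interface ILogSink { void Write(LogEntry entry); }

// A decorator enriches by producing a fresh copy, then delegates.
public sealed class ThreadIdDecorator(ILogSink inner) : ILogSink
{
    public void Write(LogEntry entry) =>
        inner.Write(entry with { ThreadId = Environment.CurrentManagedThreadId });
}
```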
"What if we designed everything up front?" You'd sketch 15 types, wire decorators, add async support, build file rotation — all before writing a single log line. The result?
  • You'd add AsyncDecorator before discovering that console writes are fast enough without it
  • You'd build FileRotationSink before your first log file even reaches 1 MB
  • You'd add CorrelationDecorator before having multiple concurrent requests to trace
Patterns designed without pain always have the wrong shape. Build incrementally, feel the constraint, then apply the pattern.
The lesson: Level 4 is the pivotal jump (three decorators at once, tying Levels 1 and 6 for the most new types) because enrichment is the natural inflection point for any logging framework. Once you have multiple sinks, you immediately need to enrich entries differently per sink. The Decorator pattern arrives precisely when the constraint demands it.
Section 14

Five Bad Solutions — Learn What NOT to Do

You've seen the good solution — built incrementally over 8 levels. Now let's study five bad approaches that people commonly reach for when building logging frameworks. Each one is tempting for a different reason, and each one breaks in a predictable way.

Think First #12

Of these five bad solutions, which one is the most dangerous and why? Hint: the most dangerous bug is the one that looks correct.

Bad Solution 1: "The God Logger"

Imagine one person in an office who handles the mail, answers the phone, fixes the printer, makes coffee, AND does payroll. On a quiet day it works. On a busy day, everything collapses because one person can't juggle six jobs at once.

That's what happens when you put everything in a single Logger class: console formatting, file writing, database inserts, timestamp enrichment, thread safety, and rotation logic — all tangled together. At first it feels productive. But the moment you want to add Slack notifications, you're wading through 1,500 lines where changing the file rotation risks breaking the console output.

[Diagram: GodLogger (1,500 lines mixing console formatting, file writing and rotation, DB inserts, and timestamp/thread-ID enrichment: six reasons to change) versus the clean architecture (15 types: an orchestrating Logger, the ILogSink interface, 5 decorators, and 5 sinks, with one reason to change each).]
GodLogger.cs
public class GodLogger
{
    private StreamWriter? _fileWriter;
    private string? _dbConnection;
    private LogLevel _minLevel = LogLevel.Info;

    public void Log(LogLevel level, string msg)
    {
        if (level < _minLevel) return;
        var ts = DateTime.Now.ToString("O");        // bug: local time
        var tid = Thread.CurrentThread.ManagedThreadId;
        var line = $"{ts} [{tid}] [{level}] {msg}";

        // Console — inline formatting
        Console.ForegroundColor = level >= LogLevel.Error
            ? ConsoleColor.Red : ConsoleColor.Gray;
        Console.WriteLine(line);
        Console.ResetColor();

        // File — inline with rotation
        if (_fileWriter != null)
        {
            _fileWriter.WriteLine(line);
            if (_fileWriter.BaseStream.Length > 10_000_000)
            {
                _fileWriter.Close();
                // ... 30 lines of rotation logic ...
            }
        }

        // DB — inline insert
        if (_dbConnection != null)
        {
            // ... 20 lines of SQL insert ...
        }
    }
}

What's wrong: Console formatting, file I/O, rotation, and DB writes are all in one method. Adding Slack notifications means editing this method. Changing rotation logic risks breaking console output. Testing file rotation requires constructing the entire logger.

CleanLogger.cs
// Logger: ONLY orchestration
public sealed class Logger(IEnumerable<ILogSink> sinks, LogLevel min)
{
    public void Log(LogLevel level, string msg)
    {
        if (level < min) return;
        var entry = new LogEntry(level, msg, DateTimeOffset.UtcNow);
        foreach (var sink in sinks) sink.Write(entry);
    }
}
// Each sink: ONE destination, ONE responsibility
public interface ILogSink { void Write(LogEntry entry); }

Why it works: The Logger is 10 lines. Each sink handles one destination. New destination? New class. No existing code touched.

How to Spot: If your Log() method has more than one if block checking which destination to write to, it's a God Class. Split by destination.

Bad Solution 2: "The Over-Engineer"

The opposite extreme. Instead of one God Class, everything gets an abstraction. You end up with AbstractLogSinkFactoryProviderStrategyMediator and a 12-step sequence diagram to trace a single log line. A new developer asks "where does logging happen?" and the answer requires visiting 8 files.

[Diagram: The abstraction maze. ILogProvider → ILogSinkFactory → AbstractSinkProvider → ILogMediator → LoggingPipeline → Console.Write: six layers to reach one Console.WriteLine. New dev: "Where does logging happen?" Answer: "Let me draw you a sequence diagram..."]
OverEngineered.cs
public interface ILogProvider { ILogSinkFactory GetFactory(); }
public interface ILogSinkFactory { ILogSink Create(string name); }
public abstract class AbstractSinkProvider : ILogProvider { ... }
public interface ILogMediator { void Route(LogEntry entry); }
public class LoggingPipeline : ILogMediator
{
    private readonly ILogProvider _provider;
    private readonly ILogSinkFactory _factory;
    // 4 layers of indirection just to call Write()
}

What's wrong: Factories creating factories. Providers wrapping mediators. The actual Console.WriteLine() is buried under 6 layers. Every simple change requires tracing through half a dozen files.

JustRight.cs
// One interface. Implement it. Done.
public interface ILogSink { void Write(LogEntry entry); }

// Need enrichment? Decorator. Need async? Decorator.
// Need multiple destinations? List<ILogSink>.
// No factories, no mediators, no abstract providers.

Why it works: YAGNI — You Aren't Gonna Need It. Don't add abstraction layers until a real constraint forces them. This framework uses exactly one interface (ILogSink) plus composition via Decorator: no factories, no mediators, no abstract providers. If you need a factory later, add it then, not now. Patterns solve problems; no problem means no pattern.
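In practice, "no factories, no mediators" means extension is plain constructor composition. A sketch, assuming the decorator and sink names from the entity table:

```csharp
// Need async, timestamps, and a file? Compose, don't abstract:
ILogSink sink =
    new AsyncDecorator(
        new TimestampDecorator(
            new FileSink("app.log")));

// Multiple destinations? Just a list, no factory required.
var logger = new Logger(new[] { sink }, LogLevel.Info);
```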

Bad Solution 3: "The Happy-Path Logger"

This one looks professional. Clean Strategy pattern, nice Decorator chain, proper DI. But zero error handling, no thread safety, no disposal. It works perfectly in development. First week in production with 500 concurrent requests: file corruption, lost log entries, and memory leaks from unclosed StreamWriters.

[Diagram: Dev environment (clean code, all tests pass, single-threaded, small data) versus production at 500 req/s (file corruption from missing locks, memory leaks from missing Dispose, lost entries when one sink throws and the loop stops).]
HappyPath.cs
public sealed class Logger
{
    private readonly List<ILogSink> _sinks;

    public void Log(LogLevel level, string msg)
    {
        var entry = new LogEntry(level, msg, DateTimeOffset.UtcNow);
        foreach (var sink in _sinks)
            sink.Write(entry);  // no try/catch — one failure kills all
    }
    // no Dispose — StreamWriters leak forever
}

public sealed class FileSink : ILogSink
{
    private readonly StreamWriter _writer;
    public void Write(LogEntry entry)
    {
        // no lock — concurrent threads corrupt the file
        _writer.WriteLine($"[{entry.Level}] {entry.Message}");
    }
}
ProductionReady.cs
public sealed class Logger : IDisposable   // IDisposable!
{
    private readonly object _lock = new();
    private readonly List<ILogSink> _sinks;

    public Logger(IEnumerable<ILogSink> sinks)
        => _sinks = new List<ILogSink>(sinks);

    public void Log(LogLevel level, string msg)
    {
        var entry = new LogEntry(level, msg, DateTimeOffset.UtcNow);
        lock (_lock)
        {
            foreach (var sink in _sinks)
            {
                try { sink.Write(entry); }
                catch (Exception ex)      // isolate failures
                {
                    Console.Error.WriteLine(
                        $"Sink failed: {ex.Message}");
                }
            }
        }
    }

    public void Dispose()
    {
        // clean up! only sinks that own resources are disposable
        foreach (var s in _sinks) (s as IDisposable)?.Dispose();
    }
}
This is the most dangerous one. It passes code review because it looks clean. The God Class is obviously bad. The Over-Engineer is obviously bad. But Bad Solution 3 sails through review and explodes at 2 AM in production.

Bad Solution 4: "The Static Singleton"

Instead of DI, someone makes the Logger a static class or uses Logger.Instance. It seems convenient — call Logger.Log() from anywhere without injecting anything. But you just made the logger untestable: a static singleton can't be replaced with a mock, so if your OrderService calls Logger.Instance.Log(), its unit tests write to the real log file. You can't assert on what was logged, you can't swap in a TestSink, and your tests become slow and fragile because they depend on the file system.

[Diagram: Static singleton (Logger.Instance.Log("..."): can't mock, can't configure per test, global state leaks between tests) versus DI singleton (_logger.Log("..."): inject TestSink in tests and FileSink in prod; each test gets its own clean Logger).]
StaticSingleton.cs
public static class Logger
{
    private static readonly List<ILogSink> _sinks = new();
    public static void Configure(params ILogSink[] sinks)
        => _sinks.AddRange(sinks);
    public static void Log(LogLevel level, string msg)
    {
        foreach (var sink in _sinks) sink.Write(...);
    }
}
// In unit tests:
Logger.Configure(new TestSink()); // PROBLEM:
// this TestSink persists across ALL tests!
// Test A's logs leak into Test B's assertions.
DiSingleton.cs
// Register as DI singleton — same instance, but swappable
builder.Services.AddSingleton(
    new Logger(new[] { consoleSink, fileSink }, LogLevel.Debug));

// In unit tests: create a fresh Logger per test
var testSink = new TestSink();
var logger = new Logger(new[] { testSink }, LogLevel.Trace);
// No leakage. No global state. Full control.

Bad Solution 5: "Pre-Formatted Strings"

Instead of an immutable LogEntry record, some developers pass raw strings everywhere. Each sink receives a pre-formatted string like "2024-03-15 [INFO] Order placed". The problem? Every sink gets the same format. The console can't colorize by level (it's buried in the string). The DB can't store level as an enum column. JSON sinks can't emit structured properties.

[Diagram: String loses data ("2024-03-15 [INFO] Order placed": can't filter by level, can't query by time, every sink must parse the string to extract fields) versus record preserves it (LogEntry(Info, "Order placed", ts): each sink formats its own way; console colorizes, DB stores typed columns, JSON emits structure).]
StringConcat.cs
public class Logger
{
    public void Log(LogLevel level, string msg)
    {
        // Format ONCE, pass string to all sinks
        var line = $"{DateTime.Now:O} [{level}] {msg}";
        foreach (var sink in _sinks)
            sink.Write(line);   // sink gets a flat string
    }
}
// DbSink now has to PARSE the string to extract level:
public class DbSink
{
    public void Write(string line)
    {
        // Regex to extract level from "[Info]"?!
        var match = Regex.Match(line, @"\[(.*?)\]");
        // Fragile, slow, and breaks if format changes
    }
}
TypedEntry.cs
// Pass a typed record — each sink formats as needed
public record LogEntry(LogLevel Level, string Message,
    DateTimeOffset Timestamp);

public class DbSink : ILogSink
{
    public void Write(LogEntry entry)
    {
        // Direct access to typed fields — no parsing!
        cmd.Parameters.Add("@level", entry.Level.ToString());
        cmd.Parameters.Add("@msg", entry.Message);
        cmd.Parameters.Add("@ts", entry.Timestamp);
    }
}
Answer to Think First #12: Bad Solution 3 (Happy-Path) is the most dangerous. Solutions 1, 2, 4, and 5 are obviously wrong — any reviewer catches them. But the Happy-Path Logger looks production-ready. It has Strategy, Decorator, clean code. It passes every test in a single-threaded test suite. The bugs only surface under production concurrency, by which time your logs are corrupted and your StreamWriters are leaking memory.
Section 15

Code Review Challenge — Find 5 Bugs

A candidate submitted this logging framework as a pull request. It compiles. It runs. It handles basic console and file logging. But there are exactly 5 bugs hiding in it — issues that would cause real problems in production. Some are obvious if you know what to look for. Others are the kind of subtle mistakes that slip past junior reviewers and only surface at 3 AM when the on-call engineer is debugging a production outage.

This is how real code reviews work: the code looks fine at first glance. The bugs aren't syntax errors — they're design mistakes that only matter under load, across timezones, or during shutdown. Can you find them all before scrolling down?

[Diagram: PR lifecycle. Submitted ("logging framework") → under review → 5 issues found, changes requested → fixes pushed → approved and merged.]

Read the code below carefully. Try to find all 5 issues before revealing the answers.

CandidateLogger.cs — Find 5 Bugs
public class Logger                                              // Line 1
{
    private readonly List<ILogSink> _sinks;
    private readonly LogLevel _minLevel;

    public Logger(List<ILogSink> sinks, LogLevel minLevel)       // Line 6
    {
        _sinks = sinks;
        _minLevel = minLevel;
    }

    public void Log(LogLevel level, string message)              // Line 12
    {
        if (level < _minLevel) return;
        var entry = new LogEntry(level, message,
            DateTime.Now);                                       // Line 16

        foreach (var sink in _sinks)                             // Line 18
            sink.Write(entry);
    }
}

public class FileSink : ILogSink                                 // Line 23
{
    private readonly StreamWriter _writer;

    public FileSink(string path)
    {
        _writer = new StreamWriter(path, append: true);
    }

    public void Write(LogEntry entry)                            // Line 31
    {
        _writer.WriteLine(
            "[" + entry.Level + "] " + entry.Timestamp + " " +  // Line 34
            entry.Message);
    }
}

public static class LogManager                                   // Line 39
{
    public static Logger Instance { get; private set; } = null!;

    public static void Initialize(List<ILogSink> sinks)
    {
        Instance = new Logger(sinks, LogLevel.Info);
    }
}

Bug Severity Overview

Not all bugs are created equal. The chart below ranks the 5 issues by severity and category. Thread safety and resource leaks are the hardest to catch because they don't show up in simple testing — they only surface under load or during shutdown. That's why they're the highest-severity bugs in the list.

[Diagram: Bug severity ranking. #1 no lock (thread safety), #2 DateTime.Now (time), #3 string concat (performance), #4 no IDisposable (resources), #5 static logger (testability). Thread safety and resource leaks are harder to catch than logic bugs.]

Found them? Here are the answers, one bug at a time:

Bug #1: No Lock (Thread Safety)

What's wrong: Multiple threads call Log() simultaneously. The foreach loop iterates _sinks without any synchronization. If FileSink.Write() calls StreamWriter.WriteLine() from two threads at the same time, the output gets interleaved: two threads writing "Hello" and "World" to the same file can produce "HeWollorld" instead of two clean lines. StreamWriter is not thread-safe by default, so without a lock, concurrent writes corrupt the output and half of one log line gets mixed with half of another.

Why it's dangerous: This bug is invisible in development (single thread, low volume). It only manifests in production under concurrent load. The corrupted output looks like gibberish, and you can't even use the logs to debug the problem — because the logs themselves are corrupted.

The fix: Add lock (_lock) around the foreach loop, OR use a Channel<LogEntry> to serialize writes on a background thread (the async buffered approach from Level 7). The Channel approach is better for high throughput because callers never block.

Principle violated: Single Responsibility Principle — each class should have one reason to change. Thread safety is the Logger's concern: it should guarantee that sinks are invoked one at a time, so individual sinks never need their own locking or to worry about which thread is calling them.

Taught in Level 5 — The thread-safety level where we added lock and then upgraded to Channel<T> for async writes. If you missed this bug, revisit Level 5.
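A minimal sketch of the Channel-based fix mentioned above, assuming the LogEntry and ILogSink shapes used throughout this case study:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed class AsyncLogger : IDisposable
{
    private readonly Channel<LogEntry> _channel =
        Channel.CreateUnbounded<LogEntry>();
    private readonly Task _pump;

    public AsyncLogger(ILogSink sink)
    {
        // One background reader drains the channel, so the sink
        // only ever sees a single thread: no lock required.
        _pump = Task.Run(async () =>
        {
            await foreach (var entry in _channel.Reader.ReadAllAsync())
                sink.Write(entry);
        });
    }

    // Callers never block; they just enqueue.
    public void Log(LogLevel level, string msg) =>
        _channel.Writer.TryWrite(
            new LogEntry(level, msg, DateTimeOffset.UtcNow));

    public void Dispose()
    {
        _channel.Writer.Complete(); // let the pump drain, then exit
        _pump.Wait();
    }
}
```

Completing the writer on Dispose lets the background reader finish the remaining entries before the app exits, which is exactly the flush-on-shutdown behavior bug #4 is about.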

Bug #2: DateTime.Now (Timezone)

What's wrong: DateTime.Now returns local time, which depends on the server's timezone setting. If your application runs in Azure across multiple regions (US East, EU West, Asia), each server stamps logs with a different timezone. An event at 3:00 PM US might appear after an event at 9:00 PM EU even though it happened first in absolute time.

Why it's dangerous: When debugging a production incident, you rely on log timestamps to reconstruct what happened in what order. If servers in different timezones produce unsortable timestamps, your incident investigation is blind. You'll waste hours on wrong causality chains.

The fix: Always use DateTimeOffset.UtcNow. It captures both the time and the UTC offset, so logs from any server anywhere in the world sort correctly. DateTime.Now returns local time, so two servers in different timezones produce different values at the same instant. The rule of thumb: DateTime.Now is for displaying to humans; DateTimeOffset.UtcNow is for storing and comparing.

Taught in Level 4 — The enrichment level where the TimestampEnricher was introduced. We used DateTimeOffset.UtcNow from the start. This bug appears when someone skips that enricher and hardcodes the timestamp in Logger.
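A small illustration of why the two APIs behave differently across regions (the example values are illustrative):

```csharp
// Same instant, two servers in different timezones:
DateTime local = DateTime.Now;              // 15:00 in Berlin, 09:00 in New York
DateTimeOffset utc = DateTimeOffset.UtcNow; // identical value on both servers

// The round-trip ("O") format preserves the offset, so entries
// from any region sort correctly by absolute time:
string stamp = utc.ToString("O");
```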

Bug #3: String Concatenation (GC Pressure)

What's wrong: "[" + entry.Level + "] " + entry.Timestamp + " " + entry.Message creates 4 intermediate string objects on every single log call. Strings in C# are immutable: once created, a string can never be changed, so each + allocates a new object, copies the old characters plus the new ones, and throws away the old string. In a hot path like logging, that churn creates massive garbage collection pressure.

Why it's dangerous: In a system logging 10,000 entries per second, that's 40,000 garbage strings per second hitting the garbage collector. The GC frees unused memory automatically, but it has to pause your application to do so: more garbage means more frequent pauses, and each pause briefly freezes all threads. At scale, this adds up to noticeable latency spikes — exactly what you don't want from a logging system that's supposed to be invisible.

The fix: Use string interpolation ($"[{entry.Level}] {entry.Timestamp} {entry.Message}"), which the compiler lowers to string.Format or string.Concat depending on context and which modern .NET further optimizes with DefaultInterpolatedStringHandler to minimize allocations, or use StringBuilder / string.Create() for truly hot paths. For structured logging, you'd typically serialize the LogEntry properties directly instead of formatting a string at all.

Taught in Level 3 — The formatting level where we introduced structured output. The structured approach avoids this problem entirely because it serializes key-value pairs rather than concatenating strings.
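As a sketch, the interpolated rewrite of the concatenating Write method might look like this (assuming the _writer field from the candidate's FileSink):

```csharp
public void Write(LogEntry entry)
{
    // The compiler lowers $"..." via DefaultInterpolatedStringHandler,
    // avoiding the chain of intermediate strings that '+' produces.
    _writer.WriteLine($"[{entry.Level}] {entry.Timestamp:O} {entry.Message}");
}
```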

Bug #4: No IDisposable (Resource Leak)

What's wrong: FileSink creates a StreamWriter in its constructor but never disposes it. When the application shuts down, the StreamWriter's internal buffer might still have unwritten data. Those log entries — often the ones describing why the app is shutting down — are lost forever.

Why it's dangerous: This is a double hit. First, you lose the most important log entries (the ones from the final seconds before a crash). Second, in a long-running app, you leak file handles: the OS resources that represent open files, which StreamWriter holds until disposed and which every process has a hard limit on. Eventually the OS refuses to open more files, the logger itself can't create new log files, and you lose logging entirely — silently.

The fix: Make FileSink implement IDisposable and call _writer.Dispose() (which also flushes). Make Logger implement IDisposable too, and have it dispose all its sinks. Register the Logger in DI so the host calls Dispose() automatically on graceful shutdown: when the app exits cleanly (Ctrl+C, SIGTERM, AppDomain unload), the DI container disposes all registered singletons, flushing every buffer and closing every file handle for you.

Taught in Level 6 — The resource management level where we added IDisposable to both Logger and all sinks with unmanaged resources. If you missed this bug, revisit Level 6 — especially the "graceful shutdown" section.
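A minimal sketch of the disposable FileSink described above, assuming the LogEntry and ILogSink shapes from this case study:

```csharp
using System.IO;

public sealed class FileSink : ILogSink, IDisposable
{
    private readonly StreamWriter _writer;

    public FileSink(string path) =>
        _writer = new StreamWriter(path, append: true);

    public void Write(LogEntry entry) =>
        _writer.WriteLine($"[{entry.Level}] {entry.Timestamp:O} {entry.Message}");

    // Dispose flushes the buffer and releases the file handle,
    // so the final entries before shutdown are not lost.
    public void Dispose() => _writer.Dispose();
}
```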

Bug #5: Static Logger (Testability)

What's wrong: LogManager.Instance is global, mutable state. Any code that calls LogManager.Instance.Log(...) is tightly coupled to a concrete Logger that can't be swapped in tests. Three specific problems arise from this design:

  • No mocking: You can't inject a MockLogSink into code that hard-references the static instance.
  • Test leakage: Since the instance is static, parallel test runs share it. Modern runners like xUnit execute tests in parallel by default, so log entries from Test A appear in Test B's assertions, causing random test failures.
  • Unprotected initialization: Initialize() can be called multiple times, replacing the instance mid-flight. Any thread holding a reference to the old instance now writes to a disconnected logger.

The fix: Remove the static LogManager entirely. Register Logger as a DI singleton using builder.Services.AddSingleton<ILogger, Logger>(). In tests, create a fresh Logger per test with a TestSink. No global state, no leakage, full isolation. The DI container manages object lifetimes (AddSingleton means one instance for the app, AddScoped one per request, AddTransient a new one every time) and handles the "one instance" guarantee without any static fields, giving you the benefits of Singleton without the testability cost.

Taught in Level 1 — The very first level established the Singleton via DI, not via a static class. This is the foundational design decision that all other levels build on. If you used static instead, every subsequent level's testability is compromised.
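With DI in place, a per-test Logger becomes trivial. A sketch using xUnit, assuming the TestSink exposes a captured Entries list (an assumption; the real TestSink API may differ):

```csharp
[Fact]
public void Error_entry_reaches_the_sink()
{
    // Fresh logger per test: no static state, no leakage.
    var sink = new TestSink();
    var logger = new Logger(new ILogSink[] { sink }, LogLevel.Trace);

    logger.Log(LogLevel.Error, "payment failed");

    // Assert directly on captured entries, no file system involved.
    Assert.Single(sink.Entries);
    Assert.Equal(LogLevel.Error, sink.Entries[0].Level);
}
```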

Score Yourself

How did you do?
  • All 5: Senior-level code review instincts. You'd catch these in a real PR and explain why each one matters. You understand that the most dangerous bugs are the ones that only appear under load or during shutdown.
  • 3–4: Solid mid-level. You see the structural issues (thread safety, static state) but might miss the subtle ones (DateTime timezone, string concat GC pressure). These are the "it works on my machine" bugs that only production reveals.
  • 1–2: Review Levels 4–6 in the Constraint Game. Thread safety and disposal are the most commonly missed issues in logging code — and the most commonly asked about in interviews.
  • 0: Don't worry — that's exactly why this section exists. Go through each bug explanation above, then re-read the candidate code. You'll be surprised how obvious the bugs become once you know what to look for.
Pro Tip for real code reviews: Build a mental checklist. For any class that touches I/O, ask three questions: (1) Is it thread-safe? (2) Does it implement IDisposable? (3) Can it be tested without hitting the real file system? These three questions alone would have caught bugs #1, #4, and #5. Add "is it using UTC?" and "is it allocating in a hot path?" to catch the other two.
Section 16

The Interview — Both Sides of the Table

A logging framework sounds boring until the interviewer asks "how do you add a new sink without touching existing code?" or "how does correlation work across async calls?" Suddenly it's a design patterns showcase: logging naturally combines Strategy (sinks), Decorator (enrichment), and Singleton (shared logger instance), three patterns in one system, which is why interviewers love this question. Below you'll see two complete runs of the same 30-minute interview: the polished one where everything flows smoothly, and the realistic one with stumbles, pauses, and recovery. Both get hired. The difference is recovery speed, not perfection.

Pay attention to the "Interviewer Thinks" column — it reveals the hidden scoring that candidates never see. Every green comment is a mental checkbox. Every yellow comment is a "let's see if they recover" moment. Understanding what interviewers are silently tracking is the single biggest advantage you can have.

What Interviewers Actually Score

Most candidates think interviewers score on "did you use the right pattern?" Wrong. They score on five dimensions, and only one of them is about patterns. The others are about extensibility, thread safety, modern practices, and real-world awareness. Miss any two and you're below "Hire" even if your code compiles perfectly.

[Diagram: What interviewers score on a logging framework design]
1. Pattern motivation: why Strategy for sinks? Why Decorator? Problem-driven, not name-dropped.
2. Extensibility: add a sink or enricher with zero edits; OCP in action.
3. Thread safety: concurrent writes, shared state; production-ready thinking.
4. Structured logging: key-value pairs, not string concat; modern versus legacy thinking.
5. Real-world awareness: correlation IDs, async context, ELK; shows production experience.
Strong Hire means all 5 are visible. No Hire means jumping straight to Console.WriteLine and skipping 1, 3, and 5. Logging seems trivial; that's exactly why it exposes shallow thinkers fast.

The Polished Run

This is what the interview looks like when you've practiced the articulation cards from Section 17. Every phase flows into the next. The candidate never says a pattern name without first explaining the problem it solves. Notice how the "Interviewer Thinks" column goes green on every row — that's what "Strong Hire" looks like from the other side.

Time | Candidate Says | Interviewer Thinks
0:00 | "Before I code, let me clarify. Are we building a library like Serilog, or an internal logging service? Do we need multiple output targets — console, file, database? Should it support structured logging with key-value properties? What about async and thread safety?" | Scoping a "boring" problem. Shows this person treats every design seriously. Four scoping questions in 30 seconds.
2:00 | "Functional: log at multiple severity levels (Debug through Fatal, used for filtering), write to pluggable sinks, enrich entries with context. Non-functional: thread-safe, zero changes to add a new sink, minimal allocation on hot paths (code that runs on every request, where small inefficiencies multiply)." | F/NF split on logging? Senior-level framing. The perf mention is a bonus — shows they've dealt with high-throughput systems.
4:00 | "Core entities: LogEntry record holds level, message, timestamp, properties dictionary. ILogSink interface — each output target implements it. ILogEnricher adds context. Logger orchestrates: enrichers decorate, then sinks consume." | Clean entity extraction. Record for immutable data, interfaces for extension points. Type choices are deliberate.
6:00 | "Sinks vary independently — console, file, HTTP — same interface, different implementations. That's Strategy: swap where logs go at runtime without modifying the caller. Enrichers wrap each other to layer on context — timestamp, then correlation ID, then machine name. That's Decorator: each wrapper adds behavior, then passes the entry along." | Two patterns, each motivated by a concrete problem. Not name-dropping — the problem came first, then the pattern name.
10:00 | Starts coding: LogLevel enum, LogEntry record, ILogSink, ConsoleSink, FileSink, ILogEnricher, TimestampEnricher, Logger class with List<ILogSink>... | Watching for: sealed classes, readonly fields, ConcurrentQueue or lock usage, IDisposable on FileSink.
20:00 | "For thread safety, the Logger is a Singleton registered in DI, so all classes share one configured instance. Writes use a lock around the sink loop, or better, a Channel<LogEntry> (an async producer-consumer queue: writers enqueue without blocking, a background reader drains) as an async buffer so callers never block." | Async buffering for logging. That's production-grade thinking, not textbook. DI singleton, not static — shows testability awareness.
24:00 | "Edge cases: sink throws — catch per sink, don't let one broken sink kill all logging. Log entry too large — truncation policy. Circular dependency — the logger itself should never log its own failures to the same pipeline; internal errors go to a separate channel like stderr." | Proactive failure thinking. Most candidates forget that sinks can fail. The circular dependency catch is senior-level awareness.
27:00 | "At scale: centralized log aggregation with ELK or Seq (a structured log server commonly paired with Serilog in .NET). Correlation IDs via AsyncLocal<T>, which survives async/await boundaries unlike ThreadLocal, so a single request's logs can be traced across services. Sampling (logging only a percentage of high-volume entries) to avoid drowning the sink." | LLD to HLD bridge. Correlation IDs show distributed systems awareness. Strong Hire — all 5 rubric dimensions covered.

This is what the interview looks like when you're nervous, didn't sleep well, or the question caught you off guard. The candidate stumbles twice, pauses once, and needs a nudge from the interviewer. They still get "Strong Hire" — because every stumble is followed by a recovery that shows real understanding. Pay attention to the yellow moments and how they turn green.

Time | Candidate Says | Interviewer Thinks
0:00 "Logging... OK, so we need a Logger class that writes to the console. Let me start with a static method..." Jumped straight to static Console.WriteLine wrapper. No scoping. Let's see if they recover.
1:30 "Actually, wait — should this support multiple outputs? Like file and database too? And is this a library or an internal tool?" Self-corrected to scope. Good recovery — better late than never. The fact that they stopped themselves is a positive signal.
4:00 "I'll make a Logger with a switch on output type... hmm, but that means every new output type changes the Logger class. That's an OCPOpen-Closed Principle: open for extension (new sinks), closed for modification (Logger doesn't change). One of the five SOLID principles — and the one most directly tested by logging framework interviews. violation. Let me use an interface instead — ILogSink." Self-corrected from switch to interface. Showing real-time design evolution — this is BETTER than getting it right instantly because it proves the reasoning process.
8:00 "For adding context — timestamp, machine name — I could put it all in the Logger... but that makes Logger responsible for everything. Each piece of context should be its own thing that wraps..." Discovering Decorator through SRP reasoning. This is organic, not memorized. Even better than the clean run — the interviewer can see the thought process.
12:00 Coding... pauses... "How should the enrichers chain together? Let me think for a moment..." Pausing is fine. Thinking silence beats rushing into wrong code. Most interviewers will wait 30-60 seconds comfortably.
13:00 "Each enricher takes a LogEntry, adds its property, and returns the enriched entry. The Logger loops through enrichers before passing to sinks. Or — enrichers could wrap each other like middleware. I'll go with the loop for simplicity, but mention the Decorator alternative." Considered two approaches, picked one with reasoning. Trade-off articulation is exactly what I'm scoring on. The "but here's the alternative" shows depth.
18:00 Finishes coding the basic Logger + sinks. "I think that covers the main flow." Good code but didn't mention thread safety or edge cases. Let me probe.
20:00 Interviewer: "What happens if the file sink throws an IOException mid-write?" Testing if candidate handles sink failures gracefully. This is the "production thinking" dimension.
20:30 "Oh — I should catch per-sink. If the file sink dies, console should still work. Let me wrap each sink call in try-catch and maybe add a fallback sink or just swallow and continue. I'd also log sink failures to stderrStandard error stream — a separate output channel from stdout. Writing to stderr bypasses the normal logging pipeline, making it a safe last-resort for logging system failures. as a last-resort channel." Needed a nudge but responded with a solid, layered approach. Per-sink isolation + stderr fallback = production thinking.
23:00 Interviewer: "What about thread safety? Multiple requests hitting this at once?" Second probe. If they handle this well, it's still a Strong Hire despite the slow start.
23:30 "Right, the Logger is shared across the app. I need a lock around the sink loop at minimum. For better performance, I'd use a Channel — callers enqueue and a background consumer writes to sinks. That way the calling thread never waits for disk I/O." Strong recovery. Went from "no mention" to "Channel-based async buffering" in 30 seconds. The performance reasoning (callers never wait for disk) seals it.
27:00 "For scaling — at the distributed level, I'd add a sink that ships to ELK or a message bus. Correlation IDs via AsyncLocal to trace requests across services. The LLD stays the same — just a new sink." LLD to HLD bridge achieved. Strong Hire — the stumbles early on actually made the recovery more impressive.

The CREATES Timeline — 30 Minutes, 7 Phases

Every logging framework interview follows the same arc, whether you're doing the clean run or the realistic one. The CREATES frameworkCREATES stands for Clarify, Requirements, Entities, API/Architecture, Trade-offs, Edge cases, Scale. It's a systematic way to structure your answer so you never forget a phase. gives you a roadmap so you never blank on "what should I talk about next?" Spend roughly 60% of your time on entities and code (the middle phases), and 40% on everything else.

[Diagram] The CREATES Timeline: Clarify 0:00 · Requirements 2:00 · Entities 4:00 · API 6:00 · Trade-offs 10:00 (coding begins) · Edge cases 24:00 · Scale 27:00. Spend ~60% of your time on entities + code. The other 40% on scope, edge cases, and scaling.

Scoring Rubric — How Interviewers Grade Each Phase

This is the rubric interviewers use (often on a shared doc). Each phase has a clear "Strong Hire" signal and a "No Hire" signal. The gap between them is usually one thing: reasoning. The candidate who explains why they chose an interface over a switch will always outscore the one who just writes an interface without explanation.

Phase | Strong Hire Signal | No Hire Signal | Weight
Clarify | Asks about scope, output targets, structured vs unstructured, async needs | Jumps straight to Console.WriteLine without asking anything | 10%
Requirements | Splits functional/non-functionalFunctional: what the system does (log at levels, write to sinks). Non-functional: how it does it (thread-safe, extensible, low-allocation). Splitting these shows senior-level thinking about system qualities., mentions perf | Only lists features, no quality attributes | 10%
Entities | Record for LogEntry, interfaces for extension points, deliberate type choices | One God class with string properties | 15%
Architecture | Strategy (sinks) + Decorator (enrichers), both motivated by concrete problems | Names patterns without explaining why they fit the logging domain | 25%
Code | Clean separation, sealed classes, readonly fields, IDisposable where needed | Monolithic Logger with switch statements and no interfaces | 20%
Edge Cases | Sink failure isolation, circular logging, message truncation, graceful shutdownFlushing the async buffer before the app exits so the last few log entries aren't lost. Without graceful shutdown, logs describing why the app crashed are themselves lost. | Assumes nothing ever fails | 10%
Scale | ELK/centralized aggregation, correlation IDs, sampling, "add a new sink" bridge | "I'd use Serilog" (names library, shows no understanding) | 10%

Scoring Summary — Both Runs

Both candidates scored "Strong Hire" despite very different paths. The Clean Run hit every phase on time. The Realistic Run stumbled twice, got nudged once, but recovered every time. The key insight: interviewers don't grade on polish — they grade on thinking. A stumble you recover from is often more impressive than a flawless run, because it proves you can handle ambiguity on a real team.

The Clean Run

Strong Hire

  • Scoped library vs service before line 1
  • F/NF split with perf awareness
  • Strategy for sinks, Decorator for enrichers — both motivated
  • Async buffer + per-sink error isolation
  • Correlation IDs + ELK scaling bridge
The Realistic Run

Strong Hire

  • Slow start — recovered with scope questions at 1:30
  • Self-corrected switch → ILogSink interface (live reasoning)
  • Discovered Decorator through SRP reasoning (not memorized)
  • Needed nudge on sink failure — responded with layered approach
  • Needed nudge on thread safety — responded with Channel + async
  • Honest, structured recovery throughout

Common Follow-Up Questions

After the main 30-minute walkthrough, interviewers often have 5-10 minutes left for follow-ups. These questions test depth — they're not asking you to redesign, they're asking you to extend your existing design under new constraints. If your architecture is truly extensible, every answer should start with "add a new class" rather than "change existing code."

"How would you add rate limiting?"

Expected answer: "A RateLimitDecorator that wraps the sink. It counts entries per second and drops or buffers excess. The Decorator pattern means zero changes to Logger or existing sinks."
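A minimal sketch of that idea, assuming a string-based ILogSink for brevity; RateLimitingSink and ListSink are illustrative names, and a real version would add thread safety and a dropped-entry counter:

```csharp
using System;
using System.Collections.Generic;

public interface ILogSink { void Write(string message); }

// Decorator: wraps any existing sink and drops entries beyond a per-second budget.
// The Logger and the inner sink never change.
public sealed class RateLimitingSink : ILogSink
{
    private readonly ILogSink _inner;
    private readonly int _maxPerSecond;
    private long _windowStartTicks;
    private int _countInWindow;

    public RateLimitingSink(ILogSink inner, int maxPerSecond)
    {
        _inner = inner;
        _maxPerSecond = maxPerSecond;
        _windowStartTicks = DateTime.UtcNow.Ticks;
    }

    public void Write(string message)
    {
        var now = DateTime.UtcNow.Ticks;
        if (now - _windowStartTicks >= TimeSpan.TicksPerSecond)
        {
            _windowStartTicks = now;   // start a fresh one-second window
            _countInWindow = 0;
        }
        if (_countInWindow++ < _maxPerSecond)
            _inner.Write(message);     // within budget: pass through
        // else: drop (a real implementation might buffer or count drops instead)
    }
}

// Simple in-memory sink used to demonstrate the decorator.
public sealed class ListSink : ILogSink
{
    public List<string> Entries { get; } = new();
    public void Write(string message) => Entries.Add(message);
}
```

With a budget of 2 per second, five rapid writes pass only the first two through to the inner sink.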

"How do you test this without files?"

Expected answer: "Inject a MockLogSink that captures entries in a List<LogEntry>. Assert count, levels, and messages. No file system, no console, no flakiness. Runs in millisecondsFast tests mean you run them on every build. Slow tests mean you skip them. A mock sink completes in microseconds compared to milliseconds for file I/O.."
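That test double is a few lines, assuming the string-based ILogSink shape used throughout this guide; MockLogSink is the answer's own name for it:

```csharp
using System.Collections.Generic;

public interface ILogSink { void Write(string message); }

// Test double: captures entries in memory so assertions run with no file,
// console, or network I/O. Inject it wherever a real sink would go.
public sealed class MockLogSink : ILogSink
{
    public List<string> Entries { get; } = new();
    public void Write(string message) => Entries.Add(message);
}
```

A test asserts on Entries directly: count, order, content. No flakiness, no cleanup.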

"What if logs need to be encrypted?"

Expected answer: "An EncryptionDecorator that wraps the sink and encrypts the message before writing. Or an EncryptionEnricher that encrypts the message property in the LogEntry before sinks see it."

"How do you handle log rotation?"

Expected answer: "The FileSink checks file size before each write. When it exceeds the limit, it closes the current file, renames it with a timestamp, and opens a new one. This is internal to FileSink — the Logger doesn't know or care."
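A hedged sketch of that rotation logic, assuming a synchronous string-based sink; the size limit and archive-name format are illustrative, and a production version would buffer writes and handle rename collisions:

```csharp
using System;
using System.IO;

// Size-based rotation, entirely internal to the sink: the Logger never knows.
public sealed class RotatingFileSink : IDisposable
{
    private readonly string _path;
    private readonly long _maxBytes;
    private StreamWriter _writer;

    public RotatingFileSink(string path, long maxBytes = 10 * 1024 * 1024)
    {
        _path = path;
        _maxBytes = maxBytes;
        _writer = new StreamWriter(path, append: true);
    }

    public void Write(string message)
    {
        if (_writer.BaseStream.Length >= _maxBytes)
            Rotate();
        _writer.WriteLine(message);
        _writer.Flush();   // a real sink would batch flushes for throughput
    }

    private void Rotate()
    {
        // Close the current file, rename it with a timestamp, open a fresh one.
        _writer.Dispose();
        var archived = $"{_path}.{DateTime.UtcNow:yyyyMMddHHmmssfff}";
        File.Move(_path, archived);
        _writer = new StreamWriter(_path, append: true);
    }

    public void Dispose() => _writer.Dispose();
}
```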

Key Takeaway: Two very different styles. Same outcome. Interviewers don't grade on polish — they grade on THINKING. A stumble you recover from is often more impressive than a flawless run — because it proves you can handle ambiguity on a real team. The clean run shows preparation. The realistic run shows resilience. Both are "Strong Hire" qualities.
Section 17

Articulation Guide — What to SAY

Design skill and communication skill are separate muscles. You can have a brilliant logging framework in your head and still tank the interview because you can't narrate it under pressure. The 8 cards below cover the exact moments where phrasing decides whether the interviewer hears "this person gets it" or "this person memorized a pattern catalog." Each card gives you four things: the Situation (when you'd use this phrase), the Say (what to actually say), the Don't Say (the common mistake), and the Why (the psychology behind why one works and the other doesn't).

The golden rule is simple: problem first, pattern name second. Interviewers can tell in one sentence whether you understand a pattern or merely memorized its name. When you say "sinks vary independently — that's Strategy," the interviewer hears reasoning. When you say "I'll use Strategy because it's standard," they hear a textbook.

The Articulation Order — Get This Wrong and You Sound Like a Textbook:

  1. THE PROBLEM: "Sinks vary independently..."
  2. PATTERN NAME: "That's Strategy."
  3. TRADE-OFF: "More files, but zero edits."
  4. CODE: "Here's ILogSink..."

BAD order: "I'll use Strategy pattern" (step 2 first) — sounds memorized, not motivated.

Say vs Don't Say — Side by Side

Here's the cheat sheet. The left column sounds like someone who has built logging systems. The right column sounds like someone who has read about logging systems. The difference is always the same: the good version names a concrete problem before naming a pattern.

Say vs Don't Say — The Phrases That Matter

SAY THIS (problem-driven):

  • "Sinks vary independently — that's Strategy."
  • "Enrichers layer context — that's Decorator."
  • "One logger shared app-wide via DI — Singleton."
  • "If a sink throws, isolate it — don't kill logging."
  • "Structured logging — key-value, not string.Format."
  • "AsyncLocal for correlation across await boundaries."
  • "Channel<T> decouples producers from slow I/O."

DON'T SAY THIS (memorized):

  • "I'll use Strategy because it's standard."
  • "Decorator is a structural pattern."
  • "Logger should be static."
  • "Exceptions handle errors."
  • "I'll use Console.WriteLine."
  • "Correlation... I haven't heard of that."
  • "I'll just make the writes synchronous."

8 Moments That Decide the Interview

Each card below targets a specific interview moment. The "Say" phrasing has been tested across dozens of mock interviews — it consistently triggers positive reactions from interviewers because it leads with the problem, names the pattern only after motivation, and acknowledges trade-offs.

1. Opening the Problem

Situation: The interviewer says "Design a logging framework." You have 5 seconds to decide how to start.

Say: "Before I start, let me scope this. Is it a library like SerilogSerilog is a popular .NET structured logging library. Mentioning it shows you know the industry landscape — but designing one from scratch shows deeper understanding. or an internal service? Multiple output targets? Structured logging with properties? What about async and thread safety requirements?"

Don't say: "OK, so I'll make a Logger class with a Write method..." (jumping to implementation without understanding the problem space)

Why it works: Scoping a "simple" problem is more impressive than scoping a complex one — it shows the habit is automatic. Interviewers mark "clarifying questions" as the first rubric item. Skip it and you start with negative points.

2. Entity Decisions

Situation: You're explaining how you modeled the core types. The interviewer is watching to see if you pick types deliberately or by habit.

Say: "LogEntry is a recordIn C#, a record is an immutable reference type with built-in value equality and easy cloning via 'with' expressions. Perfect for data bags like log entries that shouldn't change after creation. — immutable data bag with level, message, timestamp, and a properties dictionary. LogLevel is an enum because levels are categories with no behavior. ILogSink is an interface because each sink has genuinely different write logic."

Don't say: "I'll use a string for the log level." (no type modeling, no reasoning about why one C# type fits better than another)

Why it works: Record vs class vs enum shows you choose types deliberately based on behavior and mutabilityWhether an object can change after creation. Log entries should be immutable — once created, they pass through enrichers and sinks without being modified. This prevents race conditions in multi-threaded logging.. Interviewers love hearing "it's a record because it's an immutable data bag."
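Those type choices can be sketched in a few lines; the member shapes are illustrative, not a fixed API:

```csharp
using System;
using System.Collections.Generic;

// Enum: levels are categories with no behavior.
public enum LogLevel { Debug, Info, Warning, Error, Fatal }

// Record: an immutable data bag. Once created, an entry is never mutated.
public sealed record LogEntry(
    LogLevel Level,
    string Message,
    DateTimeOffset Timestamp,
    IReadOnlyDictionary<string, object> Properties);

// Interface: each sink has genuinely different write logic.
public interface ILogSink
{
    void Write(LogEntry entry);
}

public sealed class ConsoleSink : ILogSink
{
    public void Write(LogEntry entry) =>
        Console.WriteLine($"[{entry.Level}] {entry.Timestamp:O} {entry.Message}");
}
```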

3. Strategy for Sinks

Situation: You're introducing the Strategy pattern. Interviewer asks "why not just a switch?"

Say: "Each sink — console, file, HTTP, database — has genuinely different write logic. A switch means every new sink changes the Logger class. With ILogSink, I add a new class and register it. Zero edits to Logger. That's OCPOpen-Closed Principle: open for extension (add new sinks), closed for modification (Logger doesn't change). One of the five SOLID principles. in action."

Don't say: "Strategy is the standard pattern for varying algorithms." (textbook definition, no connection to logging — the interviewer can't tell if you understand it or just memorized it)

Why it works: You connected Strategy to a concrete need — genuinely different write logic across sinks. The phrase "zero edits to Logger" is the magic sentence. It proves you understand what OCP actually means in practice.

4. Decorator for Enrichment

Situation: Explaining how context gets added to log entries. The interviewer wants to see if you can justify Decorator vs just adding all context in one place.

Say: "Each enricher adds one piece of context — timestamp, correlation ID, machine name. They chain together: the timestamp enricher wraps the base, then correlation wraps timestamp. Each one adds its property and passes the entry along. That's DecoratorThe Decorator pattern wraps an object to add behavior without modifying the original. Each wrapper implements the same interface, so they can be stacked in any order and combination. — layering behavior without modifying the original."

Don't say: "I'll add all context in the Logger constructor." (couples context logic to Logger — every new enrichment type forces Logger to change)

Why it works: The "layering" metaphor makes Decorator concrete. Each enricher has one job — SRPSingle Responsibility Principle: each class should have one reason to change. A TimestampEnricher only changes if timestamp format changes. A CorrelationEnricher only changes if correlation logic changes. at the enrichment level. The phrase "adds its property and passes along" is how you make Decorator click for interviewers who think visually.
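A sketch of the loop-based chain described above, with a with-expression copy helper; WithProperty, EnricherChain, and the enricher names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum LogLevel { Debug, Info, Warning, Error, Fatal }

public sealed record LogEntry(LogLevel Level, string Message,
    IReadOnlyDictionary<string, object> Properties);

public static class LogEntryExtensions
{
    // Copy, never mutate: enrichers stay safe under concurrency.
    public static LogEntry WithProperty(this LogEntry entry, string key, object value)
    {
        var props = new Dictionary<string, object>(entry.Properties) { [key] = value };
        return entry with { Properties = props };
    }
}

public interface ILogEnricher
{
    // Contract: return a new entry with one extra property.
    LogEntry Enrich(LogEntry entry);
}

public sealed class TimestampEnricher : ILogEnricher
{
    public LogEntry Enrich(LogEntry entry) =>
        entry.WithProperty("Timestamp", DateTimeOffset.UtcNow);
}

public sealed class MachineNameEnricher : ILogEnricher
{
    public LogEntry Enrich(LogEntry entry) =>
        entry.WithProperty("Machine", Environment.MachineName);
}

public static class EnricherChain
{
    // The Logger's loop: fold the entry through every enricher, then hand it to sinks.
    public static LogEntry Apply(IEnumerable<ILogEnricher> enrichers, LogEntry entry) =>
        enrichers.Aggregate(entry, (current, enricher) => enricher.Enrich(current));
}
```

Each enricher adds one property and passes the entry along; the original is untouched.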

5. Concurrency

Situation: Multiple threads logging simultaneously. This is where most candidates either panic or say "I'll use static."

Say: "The Logger is shared across the app — DI SingletonOne instance for the app's lifetime. DI Singleton is testable (inject a mock). Static singleton is not. In logging, a single instance prevents multiple file handles competing for the same log file.. Multiple threads will call Log() concurrently. The simplest safe approach: a lock around the sink loop. For high throughput, a Channel<LogEntry>A .NET async producer-consumer queue. Writers add items without blocking; a background reader processes them. Perfect for decoupling hot paths from slow I/O like file or network writes. decouples producers from consumers — callers enqueue and return instantly."

Don't say: "It's thread-safe because I used static." (static means globally accessible, not thread-safe — a common and dangerous misconception)

Why it works: Shows you know the difference between shared instance and thread-safe instance. The Channel mention signals real async experience and tells the interviewer you've dealt with high-throughput systems.
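A minimal sketch of that Channel-based buffer; AsyncLogger is an illustrative name, and the sink loop is collapsed into a delegate for brevity:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed class AsyncLogger : IAsyncDisposable
{
    private readonly Channel<string> _channel;
    private readonly Task _consumer;

    public AsyncLogger(Action<string> writeToSinks, int capacity = 10_000)
    {
        _channel = Channel.CreateBounded<string>(new BoundedChannelOptions(capacity)
        {
            // If sinks fall behind, drop the oldest entries rather than block callers.
            FullMode = BoundedChannelFullMode.DropOldest
        });
        _consumer = Task.Run(async () =>
        {
            // Single background consumer drains the buffer and does the slow I/O.
            await foreach (var message in _channel.Reader.ReadAllAsync())
                writeToSinks(message);
        });
    }

    // Hot path: enqueue and return instantly; no I/O on the caller's thread.
    public void Log(string message) => _channel.Writer.TryWrite(message);

    public async ValueTask DisposeAsync()
    {
        _channel.Writer.Complete();   // graceful shutdown: flush remaining entries
        await _consumer;
    }
}
```

Note the shutdown path: completing the writer and awaiting the consumer is what keeps the last entries from being lost.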

6. Edge Cases

Situation: You've finished the happy path. Most candidates stop here. The interviewer is watching to see if you proactively think about failure.

Say: "What if a sink throws? Catch per-sink — one broken target shouldn't kill all logging. What if the log message is 10MB? Truncation policyA rule that limits log entry size. Without it, a single huge message can exhaust memory, fill disk space, or crash the sink. Typical limit: 32KB per entry with the excess replaced by '[truncated]'.. What about the logger itself failing? Stderr as a last-resort channel. And circular logging — the logger must never log its own internal errors to the same pipeline."

Don't say: (nothing — most candidates never mention sink failure, and the interviewer notices the silence)

Why it works: Proactive failure thinking is the strongest "Strong Hire" signal for infrastructure code. Logging is the tool you use to debug everything else — if the logger itself can fail silently, you're flying blind in production.
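The per-sink isolation described above, sketched with a string-based ILogSink; the demo sinks are illustrative:

```csharp
using System;
using System.Collections.Generic;

public interface ILogSink { void Write(string message); }

public sealed class Logger
{
    private readonly IReadOnlyList<ILogSink> _sinks;
    public Logger(IReadOnlyList<ILogSink> sinks) => _sinks = sinks;

    public void Log(string message)
    {
        foreach (var sink in _sinks)
        {
            try
            {
                sink.Write(message);
            }
            catch (Exception ex)
            {
                // Last-resort channel that bypasses the pipeline: no recursion,
                // and one broken sink never stops the others.
                Console.Error.WriteLine(
                    $"[logger] sink {sink.GetType().Name} failed: {ex.Message}");
            }
        }
    }
}

public sealed class ThrowingSink : ILogSink
{
    public void Write(string message) => throw new InvalidOperationException("disk full");
}

public sealed class ListSink : ILogSink
{
    public List<string> Entries { get; } = new();
    public void Write(string message) => Entries.Add(message);
}
```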

7. Scaling Bridge

Situation: Interviewer asks "What about 50 microservices all logging?" This is the LLD-to-HLD bridge question.

Say: "Locally, each service uses our Logger with sinks. At scale, add an ElasticsearchSink that ships logs to a centralized ELK stackElasticsearch + Logstash + Kibana. Elasticsearch stores and indexes logs, Logstash transforms and routes them, and Kibana provides dashboards for searching and visualizing. The industry standard for centralized log management.. Correlation IDs via AsyncLocal<T> let you trace a single request across services. Sampling reduces volume on hot paths. The LLD doesn't change — we just register a new sink."

Don't say: "I'd use Serilog." (names a library but shows no understanding of the architecture behind it — the interviewer wants to hear how you would build it)

Why it works: Bridges LLD to HLD. Shows the logging design is ready for distributed systems without modification. The key insight: "the LLD doesn't change — we just register a new sink." That sentence proves your architecture is extensible.
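A sketch of the AsyncLocal mechanics behind that answer; CorrelationContext and HandleRequestAsync are illustrative names, not framework types:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// AsyncLocal<T> flows with the async execution context, so the value set at the
// start of a request is visible after every await in that request, and requests
// running concurrently each see their own value.
public static class CorrelationContext
{
    private static readonly AsyncLocal<string> _id = new();
    public static string CurrentId
    {
        get => _id.Value;
        set => _id.Value = value;
    }
}

public static class Demo
{
    public static async Task<string> HandleRequestAsync()
    {
        CorrelationContext.CurrentId = Guid.NewGuid().ToString("N");
        await Task.Delay(10);                 // crosses an await boundary
        return CorrelationContext.CurrentId;  // same ID on the other side
    }
}
```

Across services, middleware would read an X-Correlation-Id header into CurrentId and propagate it on outgoing calls; a CorrelationEnricher then stamps it onto every entry.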

8. "I Don't Know"

Situation: Interviewer asks about log aggregation at Netflix scale and you've never operated ELK at that volume. This moment defines your maturity as an engineer.

Say: "I haven't operated ELK at that scale, but the approach is clear: our ILogSink interface means we'd write a KafkaSinkA sink that publishes log entries to Apache Kafka, a distributed message bus. Decouples log production from consumption and handles traffic spikes via partitioned topics. that publishes to a message bus. A separate consumer indexes into Elasticsearch. The logging framework doesn't change — the sink handles the transport. I'd research partitioning and retention policies."

Don't say: "I don't know distributed logging." (full stop, no reasoning — the interviewer has no signal to work with)

Why it works: Honesty + clear reasoning about the approach = respect. The sink abstraction protects you — you can always say "new sink, same interface." Interviewers don't expect you to know everything. They expect you to reason about what you don't know using the abstractions you do know.

The 5 Most Common Phrasing Mistakes

These aren't hypothetical — they're the exact phrases that cost real candidates points in real interviews. Each one sounds fine in your head but triggers a negative reaction from the interviewer. The fix is always the same: say the problem before the pattern.

What Candidates Say | What to Say Instead | Why It Matters
"I'll use the Strategy pattern here." | "Sinks have different write logic, so I'll put each behind an ILogSink interface." | Problem-first shows understanding; pattern-first shows memorization.
"The logger is a Singleton." | "All classes share one logger via DI — avoids duplicate file handles." | Explains the reason for one instance, not just the pattern name.
"Decorator is a structural patternThe Gang of Four categorized patterns into Creational, Structural, and Behavioral. Knowing the category is fine, but leading with it sounds like you memorized a textbook classification rather than understanding what Decorator actually does.." | "Each enricher wraps the previous one and adds one property — that's Decorator." | Category labels add zero value. Concrete behavior adds everything.
"I'll make it thread-safe." | "Multiple threads call Log() — I'll lock the sink loop, or use a Channel for async." | Names the specific threat and the specific mitigation. "Make it thread-safe" is vague.
"I'll handle errors." | "If a sink throws, catch per-sink so one broken target doesn't kill all logging." | Specifies what can fail and how you isolate it. Generic "handle errors" says nothing.

Quick Reference — The 3 Sentences to Memorize

If you're short on prep time, memorize these three sentences. They cover the three most-asked design points in a logging framework interview. Each one follows the articulation order: problem → pattern → trade-off.

Sinks

"Each sink has different write logic — that's StrategyEncapsulate each algorithm (sink) behind a common interface. New sink = new class, zero edits to Logger.. New sink = one class, zero edits."

Enrichers

"Enrichers wrap each other and add one property — that's Decorator. Stack them in any order."

Thread Safety

"Shared logger + concurrent threads = lock the sink loop or use a Channel<T> for async writes."

Confidence Curve — Why Practice Matters

Most candidates know these concepts in their heads but stumble when they have to say them out loud under time pressure. The chart below shows the typical confidence curve: reading builds recognition, but only speaking builds production fluency. The gap between "I know this" and "I can say this clearly in 10 seconds" is bigger than you think.

[Chart] The Fluency Gap — Reading vs Speaking: confidence plotted against practice sessions (1x Read, 3x Read, 1x Speak, 3x Speak, Mock Interview). The speaking curve climbs past the reading curve, and the gap opens the first time you say a phrase aloud.
Pro Tip — Practice OUT LOUD, Not Just in Your Head

Reading these cards silently builds recognition. Saying them aloud builds production. There's a reason actors rehearse lines and musicians rehearse songs — your brain stores "things I can say fluently" differently from "things I've read." Target three phrases for fluency:

  • Strategy justification: "sinks vary independently — different write logic"
  • Decorator insight: "enrichers layer context without modifying the entry"
  • Singleton trade-off: "DI singleton, not static — testable"

These are the exact spots where candidates go vague under pressure. If you can say each one clearly in under 10 seconds, you're in the top 20% of candidates. Time yourself.

Section 18

Interview Questions & Answers

12 questions ranked by difficulty. Each has a "Think" prompt, a solid answer, and the great answer that earns "Strong Hire." These aren't hypothetical — they're the exact questions interviewers ask when they see a logging framework design.

Think: How many sinks exist today? How many might exist next month? What happens to Logger each time?

Think about a universal remote. Hard-wired for "TV, DVD, speakers" it works — until you buy a soundbar. With a switch, every new device means opening the remote's code. With an interface, each device implements a common contract.

Answer: Each sink has genuinely different write logic. An interface lets each own its logic. New sink = one new class, zero edits to Logger.

Great Answer: "A switch creates shotgun surgeryA code smell where a single change forces edits in multiple places — the enum, the switch, and configuration.: add a sink, edit the enum, edit the switch, edit config. With ILogSink, I write one class, register in DI, done. Logger never changes. That's OCP."

What to SAY: "Different write logic per target = interface, not switch. One new class, zero edits. Strategy payoff."
Think: Three enrichers (Timestamp, Correlation, MachineName) — what order? What does each add?

Like assembling a gift: the item (raw entry), then a box (timestamp), wrapping paper (correlation ID), ribbon (machine name). Each layer adds something without opening the previous ones.

Enricher Decorator Chain — Each Layer Adds Context: a raw LogEntry (Level: Error, Msg: "Disk full") passes through TimestampEnricher (+ Timestamp: 14:32:05, adds UTC time), then CorrelationEnricher (+ CorrelationId: abc-123, reads AsyncLocal), then MachineEnricher (+ Machine: web-prod-03, adds hostname), producing an enriched LogEntry ready for sinks: Error | "Disk full" | 14:32:05 | abc-123 | web-prod-03. Each enricher added one property. None knew about the others. Order is configurable.

STAR: S: API errors lacked debug context. T: Add timestamp, correlation ID, machine name without modifying existing code. A: Built enricher chain — each implements ILogEnricher.Enrich(LogEntry) adding one property. Logger iterates before sinks. R: New enricher = one class, one DI registration, zero changes elsewhere.

What to SAY: "Each enricher adds one property and passes along. Like wrapping a gift — layers add without opening previous ones."
Think: Should a log entry change after creation? What if two threads mutate the same entry?

Once born — "Error at 14:32 on web-03" — it should never change. A recordA C# type with value-based equality and immutability by default. Use 'with' to create modified copies. enforces this. Enrichers use with to produce copies, leaving originals untouched.

Answer: Log entries are immutable data. Record = value semantics, immutability, concise syntax.

Great Answer: "Fixed after creation = record. Enrichers use with { Properties = ... } to produce new entries. The original is never mutated, so multiple enrichers can safely process concurrently without locks."

What to SAY: "Fixed after creation = record. Enrichers produce copies, not mutations. Thread-safe by design."
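A small sketch of that copy-not-mutate behavior, using an illustrative LogEntry shape:

```csharp
using System.Collections.Generic;

// Illustrative LogEntry; the point is the 'with' copy semantics.
public sealed record LogEntry(string Level, string Message,
    IReadOnlyDictionary<string, object> Properties);

public static class EnrichDemo
{
    public static (LogEntry Original, LogEntry Enriched) Run()
    {
        var original = new LogEntry("Error", "Disk full",
            new Dictionary<string, object>());

        // 'with' produces a modified copy; the original is untouched, so
        // multiple enrichers can process entries concurrently without locks.
        var props = new Dictionary<string, object>(original.Properties)
        {
            ["Machine"] = "web-prod-03"
        };
        var enriched = original with { Properties = props };
        return (original, enriched);
    }
}
```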
Think: How many files do you create? How many existing files do you edit?

Answer: Write ElasticsearchSink : ILogSink, implement Write(LogEntry), register in DI. Zero edits to Logger or other sinks.

Great Answer: "One new file. Implements ILogSink.Write(LogEntry) — serialize to JSON, POST to bulk API. Register: builder.Services.AddSingleton<ILogSink, ElasticsearchSink>(). Logger discovers it via IEnumerable<ILogSink> constructor injection. Zero changes to Logger.cs, ConsoleSink.cs, or FileSink.cs."

What to SAY: "One new class, one DI registration, zero existing edits. OCP via Strategy."
Think: Can you unit-test a class that depends on a static logger?
Static Logger vs DI Singleton Logger:

  • static class Logger: one instance, no interface. Can't mock — real sinks fire in tests. Can't swap sinks per test. Global state — tests pollute each other. Fine for scripts. Untestable in real apps.
  • ILogger via DI (Singleton lifetime): one instance, behind an interface. Tests inject a mock — no real I/O. Swap sinks freely per test. Each test gets its own config. Same lifetime. Fully testable.

Answer: Both are single-instance. Static can't be mocked. DI Singleton is behind ILogger — tests inject a fake.

Great Answer: "Static can't implement an interface, so tests can't replace it. DI Singleton is behind ILogger — tests inject a TestSink that collects in memory. Same lifetime, completely different testability."

What to SAY: "Same lifetime as static, but behind an interface. Tests inject a fake. DI Singleton wins."
Think: Should one broken sink kill all other sinks?

Like a fire alarm with siren, phone, and lights — if the phone system dies, the siren still works. Each channel is independent.

Answer: Catch per-sink. FileSink throwing IOException doesn't affect console or HTTP sinks.

Great Answer: "Try-catch per sink. Failed sink's exception goes to stderr (never the same logger — infinite recursion risk). For persistent failures, add a circuit breakerAfter N failures, skip that sink for a cooldown. Retry after cooldown. Prevents hammering a broken target.. Other sinks unaffected."

What to SAY: "Per-sink isolation. One down, others continue. Stderr as last resort. Circuit breaker for persistence."
Think: Where should filtering happen — Logger, Sink, or both?

Answer: Global minimum on Logger (gates before enrichment) + per-sink minimum for fine-grained control.

Great Answer: "Two layers. Logger global minimum: if Warning, Debug/Info discarded before enrichment (perf win). Each sink can also filter — console Info+, file Warning+. Ops tunes per-sink without code changes."

What to SAY: "Two-layer filtering: Logger gates globally, sinks filter locally."
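The two layers can be sketched like this (the LogLevel ordering and the MemorySink/Logger shapes are assumptions for illustration):

```csharp
using System;
using System.Collections.Generic;

// Usage: global minimum Info; console sink takes Info+, file sink Warning+.
var console = new MemorySink(LogLevel.Info);
var file = new MemorySink(LogLevel.Warning);
var logger = new Logger(LogLevel.Info, new[] { console, file });

logger.Log(LogLevel.Debug, "cache miss");       // dropped at the global gate
logger.Log(LogLevel.Info, "user logged in");    // console only
logger.Log(LogLevel.Error, "payment failed");   // both sinks
Console.WriteLine($"{console.Lines.Count} / {file.Lines.Count}");

public enum LogLevel { Debug, Info, Warning, Error, Fatal }

public sealed class MemorySink
{
    private readonly LogLevel _minimum;
    public List<string> Lines { get; } = new();
    public MemorySink(LogLevel minimum) => _minimum = minimum;

    public void Write(LogLevel level, string message)
    {
        if (level >= _minimum) Lines.Add(message);   // layer 2: per-sink filter
    }
}

public sealed class Logger
{
    private readonly LogLevel _globalMinimum;
    private readonly MemorySink[] _sinks;
    public Logger(LogLevel globalMinimum, MemorySink[] sinks) =>
        (_globalMinimum, _sinks) = (globalMinimum, sinks);

    public void Log(LogLevel level, string message)
    {
        if (level < _globalMinimum) return;   // layer 1: gate before any enrichment
        foreach (var sink in _sinks) sink.Write(level, message);
    }
}
```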
Think: Can you search for "all orders over $500" in $"Order {id} total {amount}" strings?

Spreadsheet vs Word doc. Both hold data, but only the spreadsheet lets you filter column B > 500. String logs = Word doc. Structured logs = spreadsheet.

Answer: Interpolation bakes values into one string — unsearchable. Structured logging preserves key-value pairs for queries.

Great Answer: "$"Order {orderId} failed" is opaque. Structured logging keeps orderId as a searchable property. Elasticsearch indexes it, dashboards aggregate it, alerts trigger on it. Same human message, but machine-queryable."

What to SAY: "Strings for humans. Structured properties for machines. You need both."
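A minimal sketch of the difference, assuming a simplified LogEntry shape (a message template plus a property bag):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Two events with the same template but different property values.
var entries = new List<LogEntry>
{
    new("Order {OrderId} total {Amount}",
        new Dictionary<string, object> { ["OrderId"] = 41, ["Amount"] = 120m }),
    new("Order {OrderId} total {Amount}",
        new Dictionary<string, object> { ["OrderId"] = 42, ["Amount"] = 900m }),
};

// "All orders over $500": a regex hunt with interpolated strings,
// a one-line query with structured properties.
var big = entries.Where(e => (decimal)e.Properties["Amount"] > 500m).ToList();
Console.WriteLine((int)big.Single().Properties["OrderId"]);

// Template stays human-readable; properties stay machine-queryable.
public sealed record LogEntry(string MessageTemplate, Dictionary<string, object> Properties);
```

An interpolated `$"Order {id} total {amount}"` would have collapsed both values into opaque text; the property bag keeps them typed and filterable.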
Think: If FileSink takes 5ms, should the API response wait?

Answer: Channel<LogEntry> as producer-consumer queue. Caller enqueues instantly. Background task drains to sinks.

Great Answer: "Log() writes to a bounded Channel and returns — nanoseconds, not milliseconds. Background Task drains and dispatches. Bounded = backpressure (when the consumer can't keep up, the buffer fills; a policy decides whether to drop oldest, drop newest, or block the producer): if sinks lag, oldest entries drop (configurable). Hot path never touches I/O."

What to SAY: "Channel as async buffer. Callers enqueue, background drains. Hot path never touches I/O."
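A sketch of the hot path using System.Threading.Channels (the string entries and the in-memory "sink" are simplifications):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Channels;
using System.Threading.Tasks;

var written = new List<string>();

// Bounded buffer: the backpressure policy decides what happens when full.
var channel = Channel.CreateBounded<string>(new BoundedChannelOptions(1000)
{
    FullMode = BoundedChannelFullMode.DropOldest   // if sinks lag, drop oldest entries
});

// Background consumer: the only code that ever touches (simulated) I/O.
var consumer = Task.Run(async () =>
{
    await foreach (var entry in channel.Reader.ReadAllAsync())
        written.Add(entry);                         // stands in for sink.Write(entry)
});

// Hot path: enqueue and return immediately; no I/O on the caller's thread.
channel.Writer.TryWrite("Info: request started");
channel.Writer.TryWrite("Error: payment failed");

channel.Writer.Complete();                          // drain and stop on shutdown
await consumer;
Console.WriteLine(written.Count);
```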
Think: A request touches 3 services. How do you find ALL logs for it?
Correlation ID Flow Across Services: User → GET /order/42 → API Gateway (generates ID abc-123) → Order Svc (AsyncLocal=abc-123) → Inventory Svc (AsyncLocal=abc-123) → Payment Svc (AsyncLocal=abc-123). Kibana: filter CorrelationId="abc-123" → all services' logs. One search finds every log from every service for that request.

Answer: Generate ID at entry point. AsyncLocal<string> flows through async/await. CorrelationEnricher stamps every entry. Across services, pass as HTTP header.

Great Answer: "Within process: AsyncLocal<string> survives async/await. Across services: middleware reads the X-Correlation-Id header, sets AsyncLocal. Outgoing calls propagate via a delegating handler (an HttpClient handler that adds headers to outgoing requests automatically). One Kibana filter shows every log from every service."

What to SAY: "AsyncLocal within process, HTTP header across services. One ID, one filter, full trace."
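A runnable sketch of the in-process half (the middleware and enricher are simulated here; AsyncLocal<T> is the real mechanism):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

var correlationId = new AsyncLocal<string?>();
var stamped = new ConcurrentBag<string>();

async Task HandleRequest(string id)
{
    correlationId.Value = id;            // middleware would copy this from X-Correlation-Id
    await Task.Delay(10);                // the value survives the await
    stamped.Add(correlationId.Value!);   // what a CorrelationEnricher would stamp on entries
}

// Two concurrent "requests" keep independent IDs: no cross-talk,
// because each async flow carries its own execution context.
await Task.WhenAll(HandleRequest("abc-123"), HandleRequest("def-456"));
Console.WriteLine(stamped.Count);
```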
Think: Can 50 services write directly to Elasticsearch?
Centralized Logging — ELK Pipeline: Svc A / Svc B / Svc N → Kafka (buffer) → Logstash (transform) → Elasticsearch (index) → Kibana (dashboards). Entire pipeline = ONE new KafkaSink class. The LLD doesn't change.

Answer: Add KafkaSink. Kafka buffers. Logstash transforms. Elasticsearch indexes. Kibana visualizes.

Great Answer: "LLD unchanged — add a KafkaSink. Kafka handles spikes. Logstash transforms into Elasticsearch. Sampling: 10% Debug, 100% Errors. Retention: 7 days Debug, 90 days Error. Whole pipeline is one new sink class."

What to SAY: "One KafkaSink. Kafka buffers, Logstash transforms, Elasticsearch indexes, Kibana visualizes."
Think: What if your test sink just collected entries in a list?

Answer: TestSink : ILogSink stores entries in a List<LogEntry>. Tests inspect the list.

Great Answer: "Three layers: (1) Logger: inject TestSink, assert count/levels/properties. (2) Enrichers: pass raw entry, verify property added. (3) Pipeline: Logger + enrichers + TestSink, assert fully enriched entry. Zero real I/O — everything behind interfaces."

What to SAY: "TestSink collects in-memory. Three layers: dispatching, enrichment, full pipeline. Zero I/O."
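A sketch of the TestSink idea (type shapes are assumed):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Usage: exercise the logger, then assert on what the TestSink collected.
var sink = new TestSink();
var logger = new Logger(new ILogSink[] { sink });
logger.Log("Warning", "low disk space");
logger.Log("Error", "write failed");
Console.WriteLine(sink.Entries.Count(e => e.Level == "Error"));

public interface ILogSink { void Write(LogEntry entry); }

public sealed record LogEntry(string Level, string Message);

// The whole trick: a sink that stores entries instead of performing I/O.
public sealed class TestSink : ILogSink
{
    public List<LogEntry> Entries { get; } = new();
    public void Write(LogEntry entry) => Entries.Add(entry);
}

public sealed class Logger
{
    private readonly ILogSink[] _sinks;
    public Logger(ILogSink[] sinks) => _sinks = sinks;

    public void Log(string level, string message)
    {
        var entry = new LogEntry(level, message);
        foreach (var s in _sinks) s.Write(entry);   // dispatch under test
    }
}
```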
Section 19

10 Deadliest Logging Framework Interview Mistakes

Every one of these has ended real interviews. Logging sounds simple, so candidates lower their guard — and that's exactly when these mistakes strike. They're organized by severity: fatal mistakes that end the interview immediately, serious ones that drop you from "Hire" to "Lean No," and minor ones that won't fail you but won't let you shine either.

Mistake Severity Map

FATAL — Interview Enders: #1 Console.WriteLine as the design, #2 God class Logger does everything, #3 No scoping — jump to code, #4 Ignoring thread safety. Any one = likely No Hire. Signals: no system thinking, no production awareness.

SERIOUS — Red Flags: #5 Pattern name-dropping, #6 Static singleton without trade-off, #7 String concat instead of structured, #8 No level filtering discussion. Drops you from Hire to Lean No. Signals: junior-level habits, not production-ready.

MINOR — Missed Chances: #9 Happy path only — no sink failure, #10 No scaling/correlation mention. Won't fail you, but won't shine. Signals: solid but not senior-level depth.

Fatal Mistakes — Interview Enders

Mistake #1: Console.WriteLine as the design

Why this happens: "It's just logging — we're writing text to an output." So you wrap Console.WriteLine in a static method and call it done. But the interviewer is testing whether you can design a pluggable, extensible system. A Console.WriteLine wrapper is a utility function, not a framework.

What the interviewer thinks: "Can't distinguish a utility from a framework. Will build rigid systems that need rewrites when requirements change."

Fix: Start with the interface: ILogSink. Console is ONE sink. File is another. HTTP is another. The Logger orchestrates — it doesn't hardcode where logs go.

Mistake #2: God class Logger does everything

Why this happens: "Logging is one concern, so one class." So you put level filtering, timestamp formatting, file writing, console coloring, and HTTP posting all inside Logger.cs. It starts at 80 lines. By the time the interviewer asks about correlation IDs and async buffering, it's 400 lines and every change risks breaking something else.

What the interviewer thinks: "No SRP. This person creates classes that grow until nobody can maintain them."

Fix: Ask "what changes independently?" Sinks change independently (Strategy). Enrichers change independently (Decorator). Filtering changes independently (configurable levels). Logger just orchestrates.

Mistake #3: No scoping — jump to code

Why this happens: "Design a logging framework" sounds clear enough. So you start writing a Logger class immediately. Five minutes in: "Does it support multiple outputs?" "Is it structured?" "Thread-safe?" You've already committed to a design that can't accommodate these.

What the interviewer thinks: "Doesn't scope. Will build the wrong thing in production for two weeks before asking what the customer needed."

Fix: 2-3 minutes of clarification: Library or service? Multiple sinks? Structured? Async? Thread-safe? Then F/NF requirements. THEN code.

Mistake #4: Ignoring thread safety

Why this happens: "It's just writing strings." So the Logger has a List<string> buffer that 50 threads write to simultaneously. In production: lost entries, corrupted output, interleaved lines, random ArgumentOutOfRangeException from the List resizing.

What the interviewer thinks: "No concurrency awareness. This code will corrupt data in production on day one."

Fix: A shared Logger needs either a lock around the sink loop, a ConcurrentQueue, or a Channel<LogEntry> for async buffering. Mention the trade-off: lock is simple, Channel is non-blocking.
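The lock option can be sketched in a few lines (the List<string> buffer is a stand-in for the sink loop; the Channel alternative trades this simplicity for a non-blocking hot path):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// 50 concurrent writers; List<T> alone would corrupt or throw under this load.
var buffer = new List<string>();
var gate = new object();

Parallel.For(0, 50, thread =>
{
    for (int i = 0; i < 100; i++)
    {
        lock (gate)   // simple and correct, but blocks the calling thread
        {
            buffer.Add($"thread {thread} entry {i}");
        }
    }
});

Console.WriteLine(buffer.Count);   // all 5000 entries arrive intact
```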

Serious Mistakes — Significant Red Flags

Mistake #5: Pattern name-dropping

Why this happens: You memorized the pattern catalog. "Sinks use Strategy. Enrichers use Decorator. Logger is Singleton." All correct — but the interviewer asks "why Strategy?" and you can't articulate the design problem it solves. Naming patterns without motivation sounds like reciting a textbook.

What the interviewer thinks: "Memorized, not understood. Can name patterns but can't apply them to new problems."

Fix: Always lead with the problem: "Each sink has genuinely different write logic. A switch means every new sink changes Logger. So I use an interface — that's Strategy." Problem first, pattern name second.

Mistake #6: Static singleton without trade-off

Why this happens: "Logger should be a singleton" — and you make it a static class because that's the simplest way. It works, but you never mention the testability cost. The interviewer is waiting for you to say "DI Singleton gives the same lifetime but lets tests inject a mock." Without it, you look like you've never written a unit test.

What the interviewer thinks: "Doesn't think about testability. Has probably never mocked a dependency."

Fix: "The Logger is a Singleton — one instance for the app's lifetime. But I register it through DI, not as a static class, because DI Singleton is behind an interface and tests can inject a mock. Same lifetime, testable."

Mistake #7: String concat instead of structured

Why this happens: "Logging is text, right?" So you use $"Order {orderId} failed at {DateTime.Now}". It's readable — for humans. But in production with 10M logs, you can't search "show me all orders where orderId = 42" without regex. Structured logging preserves each value as a named, queryable field.

What the interviewer thinks: "Old-school approach. Has never debugged in production with Elasticsearch or a log aggregator."

Fix: Mention structured logging early: "Log entries carry a properties dictionary — key-value pairs, not baked-into strings. Sinks can format however they want: console reads the message, Elasticsearch indexes the properties."

Mistake #8: No level filtering discussion

Why this happens: You implement LogLevel as an enum but never discuss how levels are used for filtering. The interviewer wonders: "Do Debug logs go to production? Is filtering at the Logger level, sink level, or both? Can ops change it without a redeploy?"

What the interviewer thinks: "Implemented levels but doesn't understand their purpose in production ops."

Fix: "Two-layer filtering. Global minimum on the Logger — if set to Warning, Debug and Info are discarded before enrichment even runs. Each sink can also have its own minimum. Both are configurable at runtime via appsettings.json."

Minor Mistakes — Missed Opportunities

Mistake #9: Happy path only — no sink failure

Why this happens: Your design is clean — Strategy for sinks, Decorator for enrichers, everything looks production-ready. But you never mention what happens when a sink throws. In production: the file system fills up, the HTTP endpoint goes down, the database connection drops. If one sink failure kills all logging, your "production-ready" design fails on day one.

What the interviewer thinks: "Clean code but no robustness thinking. Works in dev, breaks in prod."

Fix: Proactively say: "What if a sink throws? Catch per-sink — isolate failures. Stderr as last resort. For persistent failures, circuit breaker pattern."

Mistake #10: No scaling/correlation mention

Why this happens: Your LLD is solid for a single service. But the interviewer is thinking: "50 microservices all logging separately — how do you find one user's request across all of them?" If you never mention correlation IDs or centralized aggregation, the design stays at junior-level scope.

What the interviewer thinks: "Good LLD but no distributed systems awareness. Can't bridge to HLD."

Fix: At the end, say: "At scale, I'd add a KafkaSink for centralized aggregation. Correlation IDs via AsyncLocal<T> let me trace a single request across services. The LLD doesn't change — just register a new sink."

Before You Answer Any Question — Run These 3 Checks

1. WHAT VARIES? Sinks? Enrichers? Both?
2. WHAT COULD FAIL? Sinks down? Thread race?
3. HOW DOES IT SCALE? 50 services? Correlation?

If your answer touches all three, the interviewer hears "this person thinks like a senior."

Interviewer Scoring Rubric

Level | Requirements | Design | Code | Edge Cases | Communication
Strong Hire | Structured F+NF | Patterns natural, motivated | Clean modern C# | 3+ proactive | Explains WHY
Hire | Key ones listed | 1-2 patterns | Mostly correct | When asked | Clear
Lean No | Partial | Forced or wrong | Messy | Misses obvious | Quiet/verbose
No Hire | None | No abstractions | Can't code | None | Can't explain
The good news: Every mistake on this list has a simple fix you can practice in 5 minutes. The fixes aren't advanced concepts — they're habits. Scope before code. Name the problem before the pattern. Mention thread safety without being asked. Proactively discuss failures. Bridge to scaling. Do these consistently and you'll avoid all 10 mistakes without thinking about it.
Section 20

Memory Anchors — Never Forget This

You just built a production-ready logging framework that uses three design patterns working together. Now let's lock those patterns into long-term memory so they come back instantly in your next interview or design session. The trick isn't rote repetition — it's anchoring each concept to something vivid.

The CREATES Mnemonic — Your Universal LLD Approach

Clarify → Requirements → Entities → API → Trade-offs → Edge cases → Scale

“Every system design CREATES a solution.” — This mnemonic works for EVERY LLD interview, not just logging. Repeat it until it’s automatic.

CREATES Flow — How a Log Message Travels
C Client calls → R Routes (Singleton) → E Evaluates level → A Applies (Decorator) → T Targets (Strategy) → E Emits entry → S Stored. "Client Routes, Evaluates, Applies, Targets, Emits, Stores" — that's the whole framework.

Memory Palace — Walk Through a Server Room

Imagine walking into a server room with three stations. Each station represents one of the three core patterns. As you walk through, the pattern clicks into place.

Memory Palace — Server Room → Design Patterns
THE SERVER ROOM, three stations:

Master Console: "One screen, one logger." "Everyone types here." → SINGLETON
Wrapping Station: "Add timestamp layer." "Add JSON layer." → DECORATOR
Routing Switch: "Pick a destination." "Console, File, or Cloud." → STRATEGY

"One console, wrap the message, pick the route" — that's the whole framework.

Pattern Decision Tree — Which Pattern Do I Need?

When you're staring at a logging requirement, ask these questions in order. Each answer points you to the right pattern.

Decision Tree — Logging Requirement → Pattern
New logging requirement?

"Need one shared instance?" → SINGLETON (one logger per app, thread-safe access, global config)
"Need to layer behavior?" → DECORATOR (+Timestamp, +JSON formatting, +Encryption)
"Need to swap output target?" → STRATEGY (Console vs File vs Cloud, swap at runtime, different per environment)

Smell → Pattern Quick Reference

Code Smell → Pattern Mapping
SMELL → PATTERN

Multiple logger instances fighting over a file → Singleton
Adding timestamp/JSON requires changing the logger → Decorator
Hardcoded Console.WriteLine everywhere → Strategy
Massive god-class logger doing everything → Decorator + Strategy

Flashcards — Quiz Yourself

Click each card to reveal the answer. If you can answer without peeking, the pattern is sticking.

Q: Why does a logging framework need exactly one instance?
A: One instance, one file handle. If every class created its own logger, they'd all fight over the same log file — corrupted writes, lost entries, resource leaks. A Singleton guarantees one shared instance with thread-safe access.

Q: How do you add JSON output without changing existing code?
A: Decorator pattern. Create a JsonFormatterDecorator that wraps the base ILogger. It transforms the message into JSON, then passes it to the inner logger. No existing code changes — just add a new wrapper class.

Q: How do you swap between console and file output?
A: Strategy pattern. Both ConsoleLogSink and FileLogSink implement ILogSink. Inject the one you want at startup, or swap at runtime. The logger doesn't know or care which sink is active — it just calls Write().

Q: In a stacked decorator chain, which decorator runs first?
A: Outermost first, innermost last. If you wrap as new JsonDecorator(new TimestampDecorator(baseLogger)), the JSON decorator runs first (converts to JSON), then the timestamp decorator adds the time, then the base logger writes. Order matters!
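That ordering can be verified in a runnable sketch (decorator names follow the flashcard; the CaptureLogger is a test stand-in):

```csharp
using System;

string result = "";
ILogger baseLogger = new CaptureLogger(s => result = s);

// Outermost runs first: JSON wraps the message, then Timestamp prepends
// the time, then the base logger "writes" (captures) the final string.
ILogger logger = new JsonDecorator(new TimestampDecorator(baseLogger));
logger.Log("hello");
Console.WriteLine(result);   // e.g. an ISO timestamp followed by {"message":"hello"}

public interface ILogger { void Log(string message); }

// Innermost "sink": records what ultimately gets written.
public sealed class CaptureLogger : ILogger
{
    private readonly Action<string> _capture;
    public CaptureLogger(Action<string> capture) => _capture = capture;
    public void Log(string message) => _capture(message);
}

public sealed class TimestampDecorator : ILogger
{
    private readonly ILogger _inner;
    public TimestampDecorator(ILogger inner) => _inner = inner;
    public void Log(string message) => _inner.Log($"{DateTime.UtcNow:O} {message}");
}

public sealed class JsonDecorator : ILogger
{
    private readonly ILogger _inner;
    public JsonDecorator(ILogger inner) => _inner = inner;
    public void Log(string message) => _inner.Log($"{{\"message\":\"{message}\"}}");
}
```

Swap the wrap order and the timestamp would end up inside the JSON payload instead of in front of it, which is exactly why order matters.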

Q: What is the CREATES flow for a single log message?
A: Client → Routes (Singleton) → Evaluates level → Applies decorators → Targets sink (Strategy) → Emits entry → Stored. Seven steps tracing how a single log message flows through the entire framework.

Section 21

Transfer — These Patterns Work Everywhere

You didn't just learn how to build a logger. You learned seven structural thinking moves that appear in virtually every system that processes data. The three core patterns — "ensure one shared resource" (Singleton), "layer optional behavior" (Decorator), and "swap algorithms at runtime" (Strategy) — are joined by four supporting techniques: thread safety, structured data modeling, pipeline architecture, and error isolation. Below is the proof: the exact same techniques, applied to four completely different domains.

The point of this section is to convince you that learning one system well is worth more than skimming ten systems. If you can explain why Singleton works for a logger, you can explain why it works for a metrics collector, an audit service, or a cache manager. The structural problem is the same — only the domain vocabulary changes.

Transfer Matrix — 7 Techniques Across 4 Systems

Each row is a technique you learned in the logging framework. Each column is a different system. Read across a row to see how the same structural move applies to monitoring, auditing, and error tracking. Read down a column to see how one system uses multiple techniques together.

Technique | Logging Framework | Monitoring System | Audit Trail | Error Tracking

Singleton | One LogManager per app (the single entry point for all logging; every class gets the same logger instance — no duplicated file handles, no race conditions) | One MetricsCollector aggregating all health checks | One AuditService ensuring every action is recorded | One ErrorReporter sending all crashes to one dashboard

The structural problem: Multiple parts of a system need to write to the same shared resource. One instance, shared globally via DI. That's Singleton.

Decorator | +Timestamp +JSON +Encryption layers on log messages | +AlertThreshold +RateLimit +Dashboard layers on metrics | +UserContext +IPAddress +Compliance layers on audit entries | +StackTrace +Environment +Breadcrumbs layers on error reports

The structural problem: You need to add optional features without changing the core. Wrap it in a Decorator. Each layer adds one capability. Stack as many as you need.

Strategy | Console vs File vs Cloud output sink | Prometheus vs Datadog vs CloudWatch exporter (Prometheus is an open-source monitoring system that pulls metrics from your app, stores time-series data, and supports powerful queries; the Strategy pattern lets you swap between Prometheus, Datadog, or CloudWatch exporters) | Database vs EventStore vs Blockchain storage | Sentry vs Bugsnag vs Custom webhook reporter

The structural problem: More than one way to do the same job, and the choice varies by environment. Extract it behind an interface. New implementation = new class, zero changes.

Thread Safety | lock around sink writes, Channel<T> for async | ConcurrentDictionary for metric counters | lock around event sequence numbers | SemaphoreSlim for rate-limited error reporting (a lightweight synchronization primitive that limits how many threads access a resource concurrently; unlike lock, which is binary, SemaphoreSlim can allow N concurrent accessors)

The structural problem: Multiple threads hitting shared mutable state simultaneously. Identify the state, pick the minimal synchronization tool.

Structured Data | LogEntry record with key-value properties | MetricSample with name, value, tags, timestamp | AuditEvent with actor, action, resource, outcome | ErrorReport with exception, context, breadcrumbs

The structural problem: Data that flows through a pipeline must be structured, not stringly-typed. Immutable records with typed properties beat concatenated strings every time.

Pipeline | Entry → Enrichers → Sinks | Sample → Aggregators → Exporters | Event → Validators → Stores (the final destination for audit events: a database, event store, or even blockchain; the pipeline pattern means the store is swappable, same as logging sinks) | Error → Enrichers → Reporters

The structural problem: Data flows through a sequence of processing stages. Each stage transforms or routes. The pipeline pattern makes each stage independently testable and replaceable.

Error Isolation | Per-sink try-catch, stderr fallback | Per-exporter circuit breaker | Per-store retry with dead-letter queue | Per-reporter timeout with fallback to local file

The structural problem: One failing component must not take down the entire system. Isolate failures, provide fallbacks, and never let infrastructure code crash the app it serves.

Transfer Web — One Framework, Many Domains

Think of the Logging Framework as the center of a web. Every pattern you learned radiates outward into entirely different systems. The domain changes, but the structural move stays the same. The lines are dashed because the connection isn't copy-paste — it's structural analogy. You don't take Logger code and paste it into a monitoring system. You take the thinking (Singleton for shared resource, Decorator for layering, Strategy for swapping) and apply it fresh.

Transfer Web — Patterns Radiate to Other Domains
Logging (Singleton + Decorator + Strategy) sits at the center, radiating to: Monitoring (all three patterns), Audit Trail (Singleton + Decorator), Error Tracking (Decorator + Strategy), Config Mgmt (Singleton + Strategy), Caching (Singleton + Decorator), Notification (Decorator + Strategy).

Technique Heat Map — Which Technique Appears Where?

The darker the cell, the more critical that technique is to that domain. Notice how Decorator shows up as CRITICAL in almost every row — that's because layering optional behavior is the single most reusable pattern in infrastructure code. Also notice that thread safety is HIGH or CRITICAL everywhere — because infrastructure systems are almost always multi-tenant (serving multiple users, threads, or requests concurrently; any shared resource like a logger, cache, or config must handle concurrent access safely).

Technique Heat Map — Reusability Across Domains
Domain | Singleton | Decorator | Strategy
Logging | CRITICAL | CRITICAL | CRITICAL
Monitoring | CRITICAL | HIGH | CRITICAL
Audit Trail | HIGH | CRITICAL | MEDIUM
Error Tracking | MEDIUM | CRITICAL | HIGH
Config Mgmt | CRITICAL | LOW | HIGH
Caching | CRITICAL | HIGH | MEDIUM
Notification | LOW | CRITICAL | CRITICAL

How to Apply This in Your Next Design

When you sit down with a new system design problem — whether it's a caching layer, a notification service, or a payment gateway — run through the transfer matrix mentally. Ask yourself: "Does this system have a shared resource?" (Singleton). "Does it layer optional behavior?" (Decorator). "Does it have swappable implementations?" (Strategy). "Is it multi-threaded?" (Thread Safety). "Does data flow through stages?" (Pipeline). You'll find that most infrastructure systems match 4-5 of these seven rows. The domain vocabulary changes, but the structural problems are the same ones you just solved in logging.

The insight: Patterns aren't domain-specific. They target structural problems that recur everywhere: "one shared resource" (Singleton), "optional behavior layers" (Decorator), "swappable algorithms" (Strategy), "concurrent access" (Thread Safety), "typed data flow" (Structured Data), "sequential processing" (Pipeline), "fault isolation" (Error Isolation). Learn the structure once, apply it to logging, monitoring, auditing, caching, notifications — anything. The person who learns one system deeply beats the person who skims ten systems superficially, because they can transfer the structural insight to any new domain in minutes.
Section 22

The Reusable Toolkit

Six thinking tools you picked up in this case study. Each one is a portable mental move — not a logging trick, but something you can use in any LLD interview or real-world design. Think of them as questions you ask yourself whenever you sit down to design a system. If you remember nothing else from this page, remember these six questions — they work for payment systems, caching layers, notification services, and anything else that processes data through a pipeline.

The trick isn't knowing the pattern name. It's knowing when to reach for it. Each card below gives you the trigger question (the moment you'd think of this tool), a plain English explanation of how to use it in any context, and the specific logging example where you saw it in action on this page.

Your 6-Tool Mental Toolkit

Your Toolkit — 6 Portable Thinking Tools
TOOLKIT

One Instance? → Singleton. How: DI registration. Logging: LogManager. (Level 1)
Layer It? → Decorator. How: Wrap + delegate. Logging: Enrichers. (Level 4)
Swap It? → Strategy. How: Interface + inject. Logging: ILogSink. (Level 2)
Thread Safe? → Lock / Lazy<T>. How: Identify shared state. Logging: Sink writes. (Level 5)
Extend It? → OCP. How: New class, not edit. Logging: New sinks. (Level 3)
Test It? → DI + Interfaces. How: Mock interfaces. Logging: Mock ILogSink. (Level 6)

Deep Dive — Each Tool Explained

One Instance?

Ask yourself: "Should every class create its own copy, or should everyone share one?" If sharing matters — because of file locks, resource limits, or consistency — you need a Singleton (ensures a class has exactly one instance and provides a global point of access; in logging, this prevents multiple file handles competing for the same log file).

How to use: Register the type as a DI Singleton (the DI container creates exactly one instance and hands the same instance to every class that requests it; unlike a static singleton, it's testable — you can swap it for a mock). Avoid the static singleton anti-pattern — it kills testability.

Logging use: LogManager.Instance — one logger, one file handle, thread-safe writes. Every class in the app shares the same logger without creating duplicate resources.

Layer It?

Ask yourself: "Can I add this feature by wrapping the existing object instead of changing it?" If yes, use a Decorator (wraps an object to add behavior without modifying the original class; each decorator implements the same interface, so you can stack them in any combination). Each layer adds one thing and delegates the rest to the inner object.

How to use: Create a wrapper class that implements the same interface as the thing it wraps. The wrapper adds its behavior, then calls the inner object. Stack as many wrappers as you need — order is configurable.

Logging use: TimestampDecorator, JsonDecorator, EncryptionDecorator — each adds one enrichment and passes the entry along.

Swap It?

Ask yourself: "Is there more than one way to do this, and the choice might change?" If yes, extract the algorithm behind an interface — that's the Strategy pattern (defines a family of algorithms, encapsulates each one, and makes them interchangeable; the client code doesn't know which algorithm is running). New algorithm = new class, zero changes elsewhere.

How to use: Define an interface for the varying behavior. Each implementation is a separate class. Inject the correct one via DI (Dependency Injection: instead of creating dependencies yourself, you declare what you need and let a container provide it; swapping implementations is just a registration change) based on configuration or environment.

Logging use: ILogSink with ConsoleLogSink, FileLogSink, and CloudLogSink implementations — swap per environment without touching Logger.

Thread Safe?

Ask yourself: "Can multiple threads hit this at the same time?" If yes, you need synchronization. The first step is always the same: identify the shared mutable state (data that multiple threads can read and write simultaneously; in logging, the StreamWriter's internal buffer is shared mutable state — without protection, concurrent writes corrupt the output). Then pick the right tool: lock for simple cases, Channel<T> for high-throughput producer-consumer scenarios.

How to use: Identify what's shared (file handle, list, counter). Add the minimal synchronization needed. Lazy<T> for creation, lock for access, ConcurrentQueue for collections.

Logging use: Lazy<LogManager> for instance creation, lock around file writes to prevent interleaved output from concurrent threads.
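A sketch combining both mechanisms (LogManager's exact shape is assumed; the List stands in for the real StreamWriter):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Usage: 100 concurrent writers, one instance, no lost or interleaved entries.
Parallel.For(0, 100, i => LogManager.Instance.Write($"entry {i}"));
Console.WriteLine(LogManager.Instance.Count);

public sealed class LogManager
{
    // Lazy<T> guarantees the factory runs exactly once, even if many
    // threads hit Instance simultaneously during startup.
    private static readonly Lazy<LogManager> _instance = new(() => new LogManager());
    public static LogManager Instance => _instance.Value;

    private readonly object _gate = new();
    private readonly List<string> _lines = new();   // stands in for a StreamWriter

    private LogManager() { }                        // no construction from outside

    public void Write(string line)
    {
        lock (_gate) _lines.Add(line);              // serializes concurrent writes
    }

    public int Count
    {
        get { lock (_gate) return _lines.Count; }
    }
}
```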

Extend It?

Ask yourself: "Can I add a new feature without modifying existing code?" That's the Open/Closed Principle: software should be open for extension but closed for modification. You can add new behavior (a new decorator, a new strategy) without changing any existing class; that's what makes architectures resilient to change. Decorator and Strategy both achieve this — new wrappers and new sinks are pure additions to the codebase.

How to use: If you find yourself editing existing classes to add a feature, stop. Ask whether a new class that implements an existing interface would work instead. If it would, you've found an extension point.

Logging use: Adding SlackLogSink = one new class. No changes to LogManager, decorators, or existing sinks. The architecture absorbs new features without surgery.

Test It?

Ask yourself: "Can I write a unit test for this without spinning up the real file system or cloud service?" (A unit test checks a single piece of behavior in isolation and should run in milliseconds, with no file system, no network, and no database; mocking makes this possible by replacing real dependencies with fakes.) If not, inject interfaces instead of concrete classes. Mock the interface in tests, assert the right behavior was triggered.

How to use: Every dependency that touches I/O (files, network, database) should be behind an interface. In tests, swap the real implementation for a mock that records what was called.

Logging use: Inject a MockLogSink in tests. Verify that the TimestampDecorator prepends the correct format without touching the real file system. Fast, isolated, repeatable.

Self-Check — Can You Use Each Tool?

Before you leave this page, run through this checklist. Each item maps to one tool above. If you can answer "yes" to all six, you've internalized the thinking moves — not just the pattern names.

  • Singleton: Can you explain why a logging framework needs exactly one instance — without saying "because it's a Singleton"? (Hint: file locks, consistent config, thread-safe writes)
  • Decorator: Can you draw the enricher chain on a whiteboard — showing how TimestampEnricher wraps the base, and CorrelationEnricher wraps that? Can you explain why order matters?
  • Strategy: Can you add a new SlackLogSink to the logging framework without editing any existing file? If you'd need to touch Logger.cs, your design has a coupling problem (when adding a new feature forces changes to existing code, the code is tightly coupled; good architecture, per OCP, means new features are pure additions — new files, not edits to old files).
  • Thread Safety: Can you explain what goes wrong when two threads call FileSink.Write() at the same time? Can you name two ways to fix it? (lock vs Channel)
  • OCP: Can you point to three places in the logging framework where you can extend behavior without modifying existing code? (new sink, new enricher, new formatter)
  • Testability: Can you write a test that verifies "the logger called all registered sinks" without actually writing to a file or console? What do you inject instead?

Decision Flowchart — Which Tool Do I Reach For?

When you're in the middle of a design and a new requirement appears, use this flowchart to pick the right tool. Start at the top with the requirement, follow the questions, and land on the tool. Most requirements map to exactly one tool. Some (like "add a new enricher") combine two (Decorator + OCP).

New Requirement? Ask These Questions

Need exactly one instance? Yes → Singleton. No ↓
Add optional behavior layer? Yes → Decorator. No ↓
Swap between implementations? Yes → Strategy. No ↓
Multiple threads hitting it? Yes → Thread Safety.
Plus: OCP (new class, not edit) and Test It (mock the interface).

When NOT to Use Each Tool

Knowing when to use a tool is important. Knowing when not to use it is just as important. Over-applying patterns is the second most common mistake in LLD interviews (after under-applying them). Each tool has a specific trigger — use it only when that trigger fires.

  • Singleton. Use when: a shared resource with contention (file handles, connection pools). Don't use when: it's just any class — most objects don't need global access. Default to transient (a new instance every time it's requested from the DI container, the safest default because there's no shared state and no concurrency issues).
  • Decorator. Use when: optional, composable behavior (enrichment, formatting, caching). Don't use when: the "layers" never change or there's only one possible combination; a simple method call is cheaper than a Decorator chain.
  • Strategy. Use when: multiple implementations can be swapped by config or environment. Don't use when: there's genuinely only one way to do something; don't create an interface for a class that will never have a second implementation.
  • Thread Safety. Use when: shared mutable state is accessed from multiple threads. Don't use when: objects are immutable (like LogEntry records) — immutable data is inherently thread-safe: if an object can't change after creation, multiple threads can read it simultaneously without any risk of corruption, which is why LogEntry is a record and needs no locking.
  • OCP. Use when: you have extension points where new behavior is likely (sinks, enrichers, formatters). Don't use when: internal implementation details are unlikely to change; over-abstracting stable code adds complexity without benefit.
  • Test via DI. Use when: a dependency touches I/O (files, network, database, clock). Don't use when: it's pure logic with no side effects; just test the output directly (mocking Math.Max() would be absurd).
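The "immutable data is inherently thread-safe" point can be demonstrated directly. A minimal sketch, assuming a positional LogEntry record like the one described in earlier levels (the demo class itself is illustrative):

```csharp
using System;
using System.Threading.Tasks;

// A record with init-only positional properties: no setter exists,
// so no thread can ever observe a half-updated entry.
public record LogEntry(DateTime Timestamp, string Level, string Message);

public static class ImmutabilityDemo
{
    public static void Run()
    {
        var entry = new LogEntry(DateTime.UtcNow, "Info", "Order placed");

        // Hundreds of concurrent readers, zero locks, zero corruption risk.
        Parallel.For(0, 1000, _ =>
        {
            var text = $"{entry.Timestamp:o} [{entry.Level}] {entry.Message}";
            if (!text.Contains("Order placed"))
                throw new InvalidOperationException("cannot happen: entry is immutable");
        });

        // "Changing" an entry produces a new object; any thread still
        // holding the old one is unaffected.
        var redacted = entry with { Message = "[REDACTED]" };
        Console.WriteLine(entry.Message);    // Order placed
        Console.WriteLine(redacted.Message); // [REDACTED]
    }
}
```

This is why the Thread Safety row only fires for shared *mutable* state: the `with` expression gives you modification without mutation.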
These 6 tools are your permanent inventory. They work for every framework, every service, every system that processes data through a pipeline. Domains change — logging today, caching tomorrow, notifications next week. The structural questions don't change. "One instance? Layer it? Swap it? Thread safe? Extend it? Test it?" If you can answer those six questions for any system, you can design it well.
Section 23

Practice Exercises

Three exercises that test whether you truly learned the thinking, not just memorized the code. Each one adds a new constraint that forces you to extend the logging framework — exactly like a real interview follow-up question.

Exercise 1: JSON Formatter Decorator Easy

Constraint: Create a JsonFormatterDecorator that transforms every log entry into a structured JSON object with fields: timestamp, level, message, and source. The decorator must implement ILogger and wrap any existing logger — including other decorators.

Think: How does your decorator interact with the existing TimestampDecorator? If you stack both, do you get a timestamp inside the JSON and a prepended timestamp? What order should you wrap them in?

Your decorator takes an ILogger in the constructor. In its Log() method, build a JSON string using System.Text.Json.JsonSerializer, then pass the JSON string to the inner logger's Log() method. Stacking order matters because the outer decorator runs first: if TimestampDecorator is the outer layer, it prepends the timestamp to the raw message, which then gets serialized — so the timestamp ends up embedded inside the JSON message field (redundant). If TimestampDecorator is the inner layer, it runs after serialization and prepends a raw timestamp to the JSON string — so the output is no longer valid JSON. The cleanest solution: put the timestamp inside the JSON as a field, and skip the separate TimestampDecorator entirely when using JSON mode.

JsonFormatterDecorator.cs
using System;
using System.Text.Json;

public class JsonFormatterDecorator : ILogger
{
    private readonly ILogger _inner;
    private readonly string _source;

    public JsonFormatterDecorator(ILogger inner, string source)
    {
        _inner = inner;
        _source = source;
    }

    public void Log(LogLevel level, string message)
    {
        var entry = new
        {
            timestamp = DateTime.UtcNow.ToString("o"),
            level = level.ToString(),
            message,
            source = _source
        };

        string json = JsonSerializer.Serialize(entry);
        _inner.Log(level, json);
    }
}

// Usage:
ILogger logger = new ConsoleLogger();
logger = new JsonFormatterDecorator(logger, "OrderService");
logger.Log(LogLevel.Info, "Order placed");
// Output: {"timestamp":"2026-03-15T...","level":"Info",
//          "message":"Order placed","source":"OrderService"}
Exercise 2: Log Rotation & Compression Medium

Constraint: The FileLogSink should automatically rotate log files when they exceed 10 MB. When rotation happens, the old file is compressed to .gz and a fresh file starts. Only the last 5 rotated files are kept — older ones are deleted.

Think: Is rotation a Decorator concern (wrapping the sink) or a Strategy concern (different sink behavior)? Where does compression fit — is it a separate decorator, or built into the rotation logic? How do you handle the case where multiple threads trigger rotation simultaneously?

Rotation belongs at the sink layer. You can frame it as a Decorator that wraps the FileLogSink, but because rotation has to close and reopen the underlying file, the simplest correct version is a sink that owns the file itself, as in the sketch below. Before each write, it checks the current file size. If it exceeds the threshold, it: (1) closes the current file, (2) compresses it using System.IO.Compression.GZipStream, (3) deletes old archives beyond the retention count, and (4) opens a new file. Wrap the size check and the rotation in a single lock so two threads can't rotate simultaneously.

RotatingFileDecorator.cs
using System;
using System.IO;
using System.IO.Compression;
using System.Linq;

public class RotatingFileDecorator : ILogSink, IDisposable
{
    private readonly string _basePath;
    private readonly long _maxBytes;
    private readonly int _maxArchives;
    private readonly object _lock = new();
    private StreamWriter _writer;

    public RotatingFileDecorator(
        string basePath,
        long maxBytes = 10 * 1024 * 1024,
        int maxArchives = 5)
    {
        _basePath = basePath;
        _maxBytes = maxBytes;
        _maxArchives = maxArchives;
        _writer = new StreamWriter(basePath, append: true);
    }

    public void Write(string entry)
    {
        lock (_lock)
        {
            if (new FileInfo(_basePath).Length >= _maxBytes)
                Rotate();

            _writer.WriteLine(entry);
            _writer.Flush();
        }
    }

    private void Rotate()
    {
        _writer.Close();

        // Compress old file
        string archive = $"{_basePath}.{DateTime.Now:yyyyMMddHHmmss}.gz";
        using (var input = File.OpenRead(_basePath))
        using (var output = File.Create(archive))
        using (var gz = new GZipStream(output, CompressionLevel.Optimal))
            input.CopyTo(gz);

        File.Delete(_basePath);

        // Prune old archives
        // Use the full path so GetDirectoryName never returns ""
        // when _basePath is a bare filename like "app.log".
        var archives = Directory.GetFiles(
                Path.GetDirectoryName(Path.GetFullPath(_basePath))!, "*.gz")
            .OrderByDescending(f => f)
            .Skip(_maxArchives);
        foreach (var old in archives)
            File.Delete(old);

        _writer = new StreamWriter(_basePath, append: false);
    }

    public void Dispose() => _writer?.Dispose();
}
Exercise 3: Distributed Tracing Hard

Constraint: In a microservices system (an architecture where the application is built as a collection of small, independent services, each running in its own process and communicating over the network), a single user request passes through 5 services. Every log entry across all services must share a correlationId so you can trace the entire request journey. The correlation ID is generated by the first service and propagated through HTTP headers.

Think: Is this a Decorator that adds the correlation ID to every log entry? How does the correlation ID get from one service to the next? Where is it stored within a single service — static field? Thread-local? AsyncLocal<T>?

Use AsyncLocal<string> to store the correlation ID per async context (it survives await boundaries). Create a CorrelationDecorator that reads the current correlation ID and prepends it to every log entry. In your HTTP middleware, check for an X-Correlation-Id header: if it exists, use it (the request came from another service); if not, generate a new GUID (this is the first service). When calling downstream services, include the correlation ID in the outgoing X-Correlation-Id header.

DistributedTracing.cs
using System;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// 1. Correlation context (shared across async calls)
public static class CorrelationContext
{
    private static readonly AsyncLocal<string?> _id = new();
    public static string? CurrentId
    {
        get => _id.Value;
        set => _id.Value = value;
    }
}

// 2. Decorator that enriches every log entry
public class CorrelationDecorator : ILogger
{
    private readonly ILogger _inner;
    public CorrelationDecorator(ILogger inner) => _inner = inner;

    public void Log(LogLevel level, string message)
    {
        var id = CorrelationContext.CurrentId ?? "no-correlation";
        _inner.Log(level, $"[{id}] {message}");
    }
}

// 3. ASP.NET middleware to propagate the ID
public class CorrelationMiddleware(RequestDelegate next)
{
    public async Task InvokeAsync(HttpContext ctx)
    {
        CorrelationContext.CurrentId =
            ctx.Request.Headers["X-Correlation-Id"]
                .FirstOrDefault()
            ?? Guid.NewGuid().ToString("N");

        ctx.Response.Headers["X-Correlation-Id"] =
            CorrelationContext.CurrentId;

        await next(ctx);
    }
}

// 4. HttpClient handler to forward the ID downstream
public class CorrelationHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage req, CancellationToken ct)
    {
        if (CorrelationContext.CurrentId is { } id)
            req.Headers.Add("X-Correlation-Id", id);
        return base.SendAsync(req, ct);
    }
}
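The load-bearing claim in this design is that AsyncLocal&lt;T&gt; survives await boundaries while staying isolated per async flow. A minimal console sketch of just that behavior — CorrelationContext is repeated from the solution above so the sketch compiles on its own; the demo class and "request" names are illustrative:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Same shape as the CorrelationContext in the solution above,
// redeclared here so this sketch is self-contained.
public static class CorrelationContext
{
    private static readonly AsyncLocal<string?> _id = new();
    public static string? CurrentId
    {
        get => _id.Value;
        set => _id.Value = value;
    }
}

public static class AsyncLocalDemo
{
    public static async Task Run()
    {
        // Two concurrent "requests": each async flow gets its own copy
        // of the value, and the value flows across awaits in that flow.
        await Task.WhenAll(Handle("req-A"), Handle("req-B"));
    }

    private static async Task Handle(string correlationId)
    {
        CorrelationContext.CurrentId = correlationId;
        await Task.Delay(10);  // simulated I/O; the ID survives the await
        if (CorrelationContext.CurrentId != correlationId)
            throw new InvalidOperationException("ID leaked between requests");
        Console.WriteLine($"{correlationId} still sees {CorrelationContext.CurrentId}");
    }
}
```

Note the direction of flow: a value set inside an async method flows into its own awaits but does not flow back to the caller, which is exactly why two concurrent requests can't see each other's IDs. A static string field or [ThreadStatic] would fail here — the former is shared by everyone, the latter is lost when a continuation resumes on a different thread pool thread.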
Scoring guide: If you identified AsyncLocal<T> for per-request storage, a Decorator for log enrichment, and middleware for HTTP propagation, you've nailed the hard part. The exact syntax matters less than the three-layer architecture: storage, enrichment, propagation.
Spaced Repetition: Try Exercise 1 today. Try Exercise 2 in three days (without re-reading). Try Exercise 3 in a week. If you can sketch the design from memory after a week, it's permanent.