GoF Creational Pattern

Singleton Pattern

One instance, shared by everyone. No duplicates allowed — and everyone knows where to find it.

Section 1

TL;DR

Heads up: In modern .NET, you usually don't build Singletons by hand. Instead, you tell the framework "make one of these and share it everywhere" using services.AddSingleton<T>(). The classic hand-built Singleton is still worth learning — you'll see it in interviews, legacy code, and it teaches important concepts. This page covers both approaches.

What: Some things should only exist once in your entire application. A configuration manager, a database connection pool, a logger — you don't want five different copies floating around, each with different settings. Singleton guarantees: one instance, shared by everyone who needs it.

When: Use it whenever creating multiple instances of something would cause problems — duplicate connections, inconsistent settings, wasted memory, or conflicting state.

In C# / .NET: The modern way is simple: builder.Services.AddSingleton<IMyService, MyService>() — the framework creates one instance and hands it to everyone who asks for it. No manual locking, no static properties.

Quick Code:

```csharp
public sealed class AppConfig
{
    private static readonly Lazy<AppConfig> _instance =
        new(() => new AppConfig());

    public static AppConfig Instance => _instance.Value;

    private AppConfig() { /* load config */ }
}
```
Section 2

Prerequisites

Singleton is one of the simplest patterns to understand, but these foundations will help the C# examples make sense right away.
Section 3

Real-World Analogies

A country has exactly one president at any time. You can't just "create" a new president — there's a formal process. Everyone in the country refers to the same person when they say "the president."

| Real World | What it means | In code |
| --- | --- | --- |
| The country | The whole system | Your application |
| The president | The one and only instance | The Singleton object |
| "You can't just create one" | Only the system decides when it's created | Private constructor |
| Official channel to reach the president | Everyone uses the same access point | Instance property |
| Only one at a time | Even if many people ask simultaneously | Thread-safe creation |
Section 4

Core Pattern & UML

GoF (Gang of Four — the four authors of "Design Patterns" (1994), the definitive catalog of 23 design patterns) definition: "Ensure a class only has one instance, and provide a global point of access to it."

UML Class Diagram

[UML class diagram: a Singleton class with a private static instance field, a private Singleton() constructor, and public static getInstance(), operation(), and getData() members. A Client with doWork() depends on Singleton via getInstance(). The private constructor prevents direct instantiation; getInstance() creates the instance on first call and returns the same instance afterward.]
| Participant | Role | Responsibility |
| --- | --- | --- |
| Singleton | The class with a single instance | Holds a private static reference to itself; exposes a static access point; hides the constructor |
| instance | The single shared object | Created lazily on first access; same reference returned to every caller |
| getInstance() | Global access point | Creates the instance if it doesn't exist, returns it if it does |
| Client | Consumer | Accesses the Singleton exclusively through getInstance() |
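The participants above map directly onto a few lines of C#. Below is a minimal structural sketch (deliberately NOT thread-safe — it only illustrates the roles; the hardened variants appear in the Code Implementations section):

```csharp
using System;

// Client — always goes through GetInstance()
var a = Singleton.GetInstance();
var b = Singleton.GetInstance();
Console.WriteLine(ReferenceEquals(a, b)); // True — same object

// Minimal sketch of the UML participants, for structure only
public sealed class Singleton
{
    private static Singleton? _instance;   // the "instance" participant

    private Singleton() { }                // hidden constructor — no "new Singleton()"

    public static Singleton GetInstance()  // the global access point
        => _instance ??= new Singleton();

    public void Operation() { /* ... */ }
}
```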
Section 5

Code Implementations

```csharp
public sealed class AppConfig
{
    // Lazy<T> guarantees thread-safe, lazy initialization
    // The lambda runs ONCE, on first access
    private static readonly Lazy<AppConfig> _instance =
        new(() => new AppConfig());

    // Public access point — everyone calls this
    public static AppConfig Instance => _instance.Value;

    // Private constructor — no one can "new AppConfig()"
    private AppConfig()
    {
        // Load configuration from appsettings.json, env vars, etc.
    }

    public string GetSetting(string key) => /* ... */ "";
}
```
Recommended

This is the modern, preferred approach in C#. Lazy<T> handles thread safety and lazy init in one line.

```csharp
public sealed class AppConfig
{
    private static volatile AppConfig? _instance;
    private static readonly object _lock = new();

    public static AppConfig Instance
    {
        get
        {
            // First check — no lock needed (fast path)
            if (_instance is null)
            {
                lock (_lock)
                {
                    // Second check — inside lock (thread-safe)
                    _instance ??= new AppConfig();
                }
            }
            return _instance;
        }
    }

    private AppConfig() { }
}
```
Legacy

Works but more verbose than Lazy<T>. You might see this in older codebases.

```csharp
// In ASP.NET Core — the DI container manages the singleton lifetime
// No static Instance, no private constructor tricks needed
builder.Services.AddSingleton<IAppConfig, AppConfig>();

// Now inject via constructor — the DI container ensures ONE instance
public class OrderService
{
    private readonly IAppConfig _config;

    public OrderService(IAppConfig config) // same instance everywhere
    {
        _config = config;
    }
}
```
Best Practice

In modern .NET apps, prefer DI-managed singletons. The container handles lifecycle, thread safety, and testability.

Section 6

Jr vs Sr Implementation

Problem Statement

Build a Logger that writes to a file. The entire application must share the same logger instance. It must handle concurrent writes from multiple threads.

How a Junior Thinks

"I need one logger. I'll make it static. A static class can hold the file stream, and I'll just call Logger.Log() everywhere."

```csharp
public static class Logger
{
    private static StreamWriter _writer =
        new StreamWriter("app.log", append: true);

    public static void Log(string message)
    {
        _writer.WriteLine($"[{DateTime.Now}] {message}");
        _writer.Flush();
    }
}
```

Problems

No Thread Safety

Multiple threads calling Log() simultaneously will corrupt the file — interleaved writes, partial lines, or IOException.

Not Testable

Static classes can't implement interfaces. Unit tests can't swap in a fake logger — they'll write to real files.

Resource Leak

The StreamWriter is never disposed. If the app crashes, buffered log entries may be lost.

How a Senior Thinks

"I need a single instance, but I also need it to be testable, thread-safe, and properly disposable. I'll code against an interface, use a lock for thread safety, and register it as a DI singleton."

```csharp
public interface IAppLogger : IDisposable
{
    void Log(LogLevel level, string message);
    void LogInfo(string message);
    void LogError(string message, Exception? ex = null);
}

public sealed class FileLogger : IAppLogger
{
    private readonly StreamWriter _writer;
    private readonly object _lock = new();

    public FileLogger(string filePath)
    {
        _writer = new StreamWriter(filePath, append: true) { AutoFlush = true };
    }

    public void Log(LogLevel level, string message)
    {
        lock (_lock) // thread-safe writes
        {
            _writer.WriteLine(
                $"[{DateTime.UtcNow:O}] [{level}] {message}");
        }
    }

    public void LogInfo(string msg) => Log(LogLevel.Info, msg);

    public void LogError(string msg, Exception? ex = null) =>
        Log(LogLevel.Error, ex is null ? msg : $"{msg} | {ex}");

    public void Dispose() => _writer.Dispose();
}

// DI container ensures single instance
builder.Services.AddSingleton<IAppLogger>(
    sp => new FileLogger("logs/app.log"));

// Inject in any service — always same instance
public class OrderService
{
    private readonly IAppLogger _logger;
    public OrderService(IAppLogger logger) => _logger = logger;
}
```

Design Decisions

Interface Segregation

Coding against IAppLogger allows swapping FileLogger with ConsoleLogger, DatabaseLogger, or a test double — without changing any service code.
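To make the test-double point concrete, here is a minimal in-memory fake (a sketch — FakeLogger and the inlined LogLevel enum are illustrative, not part of the article's code; the interface mirrors the IAppLogger shown above so the sketch compiles on its own):

```csharp
using System;
using System.Collections.Generic;

var logger = new FakeLogger();
logger.LogInfo("order placed");
logger.LogError("payment failed");
Console.WriteLine(logger.Entries.Count); // 2
Console.WriteLine(logger.Entries[0]);    // [Info] order placed

// Inlined so the sketch is self-contained
public enum LogLevel { Info, Error }

public interface IAppLogger : IDisposable
{
    void Log(LogLevel level, string message);
    void LogInfo(string message);
    void LogError(string message, Exception? ex = null);
}

// In-memory test double — records entries instead of touching the file system
public sealed class FakeLogger : IAppLogger
{
    public List<string> Entries { get; } = new();

    public void Log(LogLevel level, string message)
        => Entries.Add($"[{level}] {message}");

    public void LogInfo(string message) => Log(LogLevel.Info, message);

    public void LogError(string message, Exception? ex = null)
        => Log(LogLevel.Error, ex is null ? message : $"{message} | {ex}");

    public void Dispose() { } // nothing to clean up
}
```

A unit test can construct OrderService with the fake and assert on Entries — no file I/O, no shared state between tests.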

Thread Safety via Lock

The lock (_lock) ensures only one thread writes at a time. AutoFlush = true flushes after every write, minimizing data loss if the process crashes.

DI-Managed Lifetime

Using AddSingleton lets the DI container handle creation and disposal. No static fields, no hidden global state.

Section 7

Evolution of Singleton in .NET

Singleton implementations have evolved dramatically across .NET versions. Understanding this history helps you recognize legacy code and apply modern best practices.

No generics, no Lazy<T>. Developers hand-rolled everything.

```csharp
// .NET 1.x — manual lock, no volatile, no null-coalescing
public class Singleton
{
    private static Singleton _instance = null;
    private static object _lock = new object();

    public static Singleton Instance
    {
        get
        {
            lock (_lock) // lock on EVERY call — slow
            {
                if (_instance == null)
                    _instance = new Singleton();
                return _instance;
            }
        }
    }

    private Singleton() { }
}
```

Lock acquired on every single access — massive contention (multiple threads competing for the same lock, forcing some to wait — a major performance bottleneck) under load. The absence of volatile meant potential memory visibility issues on multi-core CPUs, where cores can see stale values without proper synchronization.

Generics arrived. The double-checked locking pattern (check a condition before and after acquiring a lock, avoiding lock overhead on every subsequent call) became the gold standard, and the role of volatile became better understood.

```csharp
// .NET 2.0+ — double-checked locking with volatile
public sealed class Singleton
{
    private static volatile Singleton _instance;
    private static readonly object _lock = new object();

    public static Singleton Instance
    {
        get
        {
            if (_instance == null) // fast path — no lock
            {
                lock (_lock)
                {
                    if (_instance == null) // double check
                        _instance = new Singleton();
                }
            }
            return _instance;
        }
    }

    private Singleton() { }
}
```

Lock only acquired on first creation. After that, the fast path returns immediately. volatile prevents instruction reordering (a CPU optimization that executes instructions out of program order for performance), which can cause bugs in lock-free code that lacks memory barriers.

Lazy<T> was introduced — the single most important improvement for Singleton in C#. One line replaced 15 lines of error-prone locking code.

```csharp
// .NET 4.0+ — Lazy<T> handles everything
public sealed class Singleton
{
    private static readonly Lazy<Singleton> _instance =
        new Lazy<Singleton>(() => new Singleton());

    public static Singleton Instance => _instance.Value;

    private Singleton() { }
}
```

ASP.NET Core's built-in DI container made AddSingleton the standard. The class itself no longer needs to know it's a singleton — the container handles it.

```csharp
// Modern .NET — the class is just a class
public sealed class AppConfig : IAppConfig
{
    public AppConfig(ILogger<AppConfig> logger)
    {
        // regular constructor — DI injects dependencies
    }
}

// Registration (one line in Program.cs):
builder.Services.AddSingleton<IAppConfig, AppConfig>();
```

No private constructor. No static field. No Instance property. The DI container owns the lifetime. The class is testable, swappable, and clean.

.NET 8 introduced keyed services (a DI feature allowing multiple implementations of the same interface, resolved by a string or enum key) — you can register multiple singleton instances of the same interface. This solves the "one singleton per type" limitation.

```csharp
// .NET 8 — keyed singletons for multi-tenant or strategy patterns
builder.Services.AddKeyedSingleton<ICache>("redis", (sp, key) =>
    new RedisCache(sp.GetRequiredService<IConfiguration>()));

builder.Services.AddKeyedSingleton<ICache>("memory", (sp, key) =>
    new MemoryCache());

// Inject by key
public class ProductService(
    [FromKeyedServices("redis")] ICache cache)
{
}
```

.NET 9 introduced HybridCache — a Singleton-registered caching API that combines L1 (in-memory) and L2 (distributed) caching with built-in stampede protection (preventing multiple threads from simultaneously computing the same expensive value when a cache entry expires).

Also, source-generated DI registration (source generators are a C# compiler feature that emits additional source code at compile time, enabling zero-reflection patterns) reduces reflection overhead at startup.

```csharp
// .NET 9 — HybridCache replaces custom IMemoryCache + IDistributedCache singletons
builder.Services.AddHybridCache(options =>
{
    options.DefaultEntryOptions = new()
    {
        Expiration = TimeSpan.FromMinutes(5)
    };
});

// Usage — no more manual cache-aside pattern
public class ProductService(HybridCache cache, AppDbContext db)
{
    public async Task<Product> GetAsync(int id, CancellationToken ct) =>
        await cache.GetOrCreateAsync(
            $"product:{id}",
            async token => await db.Products.FindAsync(new object[] { id }, token),
            cancellationToken: ct);
}
```
Section 8

Singleton in the .NET Framework

Singletons are everywhere in .NET — you use them daily without thinking about it. The overview below shows which framework services are Singletons in a typical ASP.NET Core request pipeline (the sequence of middleware components that process an HTTP request and response):

| Lifetime | Meaning | Typical framework services |
| --- | --- | --- |
| Singleton | One instance for the entire app lifetime | IConfiguration, ILoggerFactory, IMemoryCache, IHttpClientFactory, IOptions<T>, IHostEnvironment |
| Scoped | One instance per HTTP request | DbContext, HttpContext, IOptionsSnapshot |
| Transient | New instance every injection | Validators, IEmailSender |

Here are the most important ones in detail:

IHttpClientFactory manages singleton-like HttpMessageHandler pools. The factory itself is a singleton registered via AddHttpClient(). It reuses handlers to prevent socket exhaustion (running out of available TCP sockets because HttpClient instances weren't reused — each abandoned instance leaves sockets in TIME_WAIT for roughly 240 seconds), while rotating handlers periodically so connections pick up DNS changes (important for cloud deployments).

```csharp
// Registration (once in Program.cs)
builder.Services.AddHttpClient<GitHubService>(client =>
    client.BaseAddress = new Uri("https://api.github.com/"));

// Usage — HttpClient injected, handler is pooled singleton
public class GitHubService(HttpClient client)
{
    public Task<string> GetReposAsync(string user) =>
        client.GetStringAsync($"users/{user}/repos");
}
```

ASP.NET Core's in-memory cache is registered as a singleton via AddMemoryCache(). The single instance is shared across all requests, making it an ideal thread-safe store for frequently accessed data.

```csharp
builder.Services.AddMemoryCache(); // singleton registration

public class ProductService(IMemoryCache cache, AppDbContext db)
{
    public async Task<Product?> GetByIdAsync(int id)
    {
        return await cache.GetOrCreateAsync($"product:{id}", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return await db.Products.FindAsync(id);
        });
    }
}
```

ILoggerFactory is registered as a singleton. It creates category-specific ILogger<T> instances (also singletons). The factory holds references to all log providers (Console, File, Application Insights) and routes messages to them.

```csharp
// ILogger<T> is a singleton — same instance for all injections
public class OrderService(ILogger<OrderService> logger)
{
    public void PlaceOrder(Order order)
    {
        logger.LogInformation("Placing order {OrderId}", order.Id);
        // ...
    }
}
```

IConfiguration is a singleton that holds the merged configuration from all providers (JSON, environment variables, secrets). IOptions<T> wraps strongly-typed config sections as singletons. IOptionsMonitor<T> is also a singleton but supports live reload (automatically picking up configuration changes at runtime without restarting the application).

```csharp
// Bind config section to a POCO — registered as singleton
builder.Services.Configure<SmtpSettings>(
    builder.Configuration.GetSection("Smtp"));

// IOptions<T>         = singleton (reads once)
// IOptionsSnapshot<T> = scoped    (reads per request)
// IOptionsMonitor<T>  = singleton (reads on change)
public class EmailService(IOptionsMonitor<SmtpSettings> opts)
{
    public void Send(string to, string body)
    {
        var smtp = opts.CurrentValue; // always latest
        // ... send email using smtp.Host, smtp.Port
    }
}
```

BackgroundService (a .NET base class for long-running hosted services) is registered as a singleton that starts with the app and runs background work — queue processing, health monitoring, scheduled tasks. The host manages its lifecycle, calling StartAsync on startup and StopAsync on shutdown.

```csharp
// Singleton that processes queued emails in the background
public class EmailSenderWorker : BackgroundService
{
    private readonly Channel<EmailMessage> _queue;
    private readonly IServiceScopeFactory _scopeFactory;

    public EmailSenderWorker(
        Channel<EmailMessage> queue,       // Singleton channel
        IServiceScopeFactory scopeFactory)
    {
        _queue = queue;
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        await foreach (var email in _queue.Reader.ReadAllAsync(ct))
        {
            using var scope = _scopeFactory.CreateScope();
            var smtp = scope.ServiceProvider.GetRequiredService<ISmtpClient>();
            await smtp.SendAsync(email);
        }
    }
}

// Registration:
builder.Services.AddSingleton(Channel.CreateBounded<EmailMessage>(1000));
builder.Services.AddHostedService<EmailSenderWorker>();
```
Section 9

When To Use / When Not To

Singleton Decision Tree — should you use the Singleton pattern?

1. Need exactly ONE shared instance?
   - No → don't use Singleton; use Transient or Scoped.
   - Yes → continue.
2. Would TWO instances break the system?
   - No → a static class or a Scoped service is enough.
   - Yes → continue.
3. Need testability and DI?
   - No → classic Singleton with Lazy<T>.
   - Yes → AddSingleton<T>() via the DI container.
Section 10

Comparisons

Singleton vs Static Class
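The core of this comparison can be sketched in code (IClock, SystemClock, and StaticClock are illustrative names, not from the article): a Singleton is a real instance, so it can implement interfaces, be passed as an argument, and be faked in tests; a static class cannot.

```csharp
using System;

// A Singleton instance can be passed where an interface is expected
void Schedule(IClock clock) => Console.WriteLine(clock.UtcNow.Year > 2000); // True

Schedule(SystemClock.Instance); // ✅ compiles — an instance implements IClock
// Schedule(StaticClock);       // ❌ would not compile — a static class is not a value

public interface IClock { DateTime UtcNow { get; } }

// Singleton: an instance — implements interfaces, injectable, substitutable
public sealed class SystemClock : IClock
{
    public static SystemClock Instance { get; } = new();
    private SystemClock() { }
    public DateTime UtcNow => DateTime.UtcNow;
}

// Static class: no instance exists — cannot implement IClock or be injected
public static class StaticClock
{
    public static DateTime UtcNow => DateTime.UtcNow;
}
```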

Singleton vs Monostate

Singleton vs DI Lifetime (AddSingleton)

Section 11

SOLID Mapping

| Principle | Relation | Explanation |
| --- | --- | --- |
| SRP (Single Responsibility Principle — a class should have only one reason to change) | Depends | Singleton itself doesn't violate SRP, but developers often overload it with unrelated responsibilities. |
| OCP (Open/Closed Principle — open for extension, closed for modification) | Supports | When using a DI-managed singleton with interfaces, you can swap implementations without modifying clients. |
| LSP (Liskov Substitution Principle — subtypes must be substitutable for their base types) | Supports | DI-managed singletons behind interfaces are easily substitutable. Classic sealed singletons can't be subclassed — LSP is moot but safe. |
| ISP (Interface Segregation Principle — no client should be forced to depend on methods it does not use) | Depends | A "god singleton" with 20 methods violates ISP. Fix: split into focused interfaces (ICache, IConfig) even if one class implements both. |
| DIP (Dependency Inversion Principle — depend on abstractions, not on concrete low-level modules) | Can Violate | Classic Singleton.Instance creates tight coupling. DI-managed singleton fixes this. |
Section 12

Bug Case Studies

Friday, 5:47 PM. A fintech app goes live after a marketing campaign drives 10x normal traffic. Within 3 minutes, the database connection pool is exhausted. All API calls return 500. The on-call engineer sees 47 DbConnectionManager instances in the memory dump — there should be exactly 1.

Here's how it unfolded. The development team built a Singleton to manage database connections. It worked perfectly in local testing because only one request comes in at a time on your laptop. But in production, the marketing campaign drove a surge of traffic right when the app booted up. Dozens of threads tried to get the database connection manager at the exact same millisecond.

Each thread asked the same question: "Does an instance exist yet?" And because no instance existed yet, every single thread answered "Nope!" and went ahead to create its own. Imagine 47 people all arriving at an empty parking spot at the same time — each one starts pulling in because they all see it's open. The result? Each of those 47 instances opened its own pool of 10 database connections. That's 470 connections hitting a database that can only handle 100.

The database started rejecting connections. API calls started returning 500 errors. The on-call engineer spent the first hour blaming the database — "Maybe it needs more connections?" Scaling the database didn't help because the problem wasn't the database. It was the application creating 47 copies of something that should only exist once.

Time to Diagnose

4 hours. The bug was intermittent — it only appeared under concurrent load, never in single-threaded local testing. The engineer initially blamed the database.

[Diagram — What Went Wrong: two threads race to create the Singleton. At T1, Thread A and Thread B both see _instance == null; at T2, both construct an instance (each opening 10 connections), with B's overwriting A's; at T3, 47 threads pass the null check in the same millisecond — 47 instances × 10 connections = 470 connections against a DB limit of 100. The null check has no lock, so every thread passes it simultaneously during the first millisecond.]

```csharp
// ❌ Not thread-safe — race condition on first access
public class DbConnectionManager
{
    private static DbConnectionManager? _instance;

    public static DbConnectionManager Instance
    {
        get
        {
            // ❌ Two threads can both see _instance == null simultaneously
            if (_instance == null)
            {
                // Thread A enters here, starts constructing...
                // Thread B also enters — _instance is STILL null
                _instance = new DbConnectionManager();
                // Now there are 2 instances, each with its own connection pool
            }
            return _instance;
        }
    }

    private readonly List<DbConnection> _pool = new();

    private DbConnectionManager()
    {
        // Each instance opens 10 connections — 47 instances = 470 connections
        for (int i = 0; i < 10; i++)
            _pool.Add(new DbConnection("Server=prod;..."));
    }

    // Under 10x traffic, 47 threads hit the race window in the first ms
    // Result: 47 instances × 10 connections = 470 connections (limit is 100)
}
```

Walking through the buggy code: look at the if (_instance == null) check in the getter. This is where everything goes wrong. The check and the assignment that follows are two separate operations — they're not atomic (not done as a single step). Between the moment Thread A checks "is it null?" and the moment it finishes creating the new instance, Thread B can also check "is it null?" and get the same answer: yes. There's no lock, no gate, nothing stopping both threads from walking right through that door at the same time. In the constructor, each instance opens 10 database connections. Multiply that by 47 threads, and you've blown past the database's connection limit before anyone knows what happened.

```csharp
// ✅ Lazy<T> guarantees exactly one instance, thread-safe
public class DbConnectionManager
{
    private static readonly Lazy<DbConnectionManager> _instance =
        new(() => new DbConnectionManager());

    public static DbConnectionManager Instance => _instance.Value;

    private readonly List<DbConnection> _pool = new();

    private DbConnectionManager()
    {
        for (int i = 0; i < 10; i++)
            _pool.Add(new DbConnection("Server=prod;..."));
    }

    // Lazy<T> uses a lock internally — only ONE thread runs the factory
    // All other threads wait and get the same instance
}
```

Why the fix works: Lazy<T> wraps the creation logic in an internal lock. When the first thread calls .Value, it acquires the lock and runs the factory function (the () => new DbConnectionManager() part). Every other thread that calls .Value at the same time just waits. Once the first thread finishes, the instance is cached. From that point on, accessing .Value is essentially free — it's just reading a pre-computed value, no lock involved. This is why Lazy<T> is the gold standard: it gives you thread safety during creation and zero overhead afterward.
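The single-run guarantee is easy to observe directly. A minimal sketch (the 47-thread count mirrors the incident; a Guid stands in for the real connection manager, and the Sleep deliberately widens the race window):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

int factoryRuns = 0;

var lazy = new Lazy<Guid>(() =>
{
    Interlocked.Increment(ref factoryRuns);
    Thread.Sleep(50); // widen the race window on purpose
    return Guid.NewGuid();
});

// 47 concurrent accessors — the incident's thread count
var results = new Guid[47];
Parallel.For(0, 47, i => results[i] = lazy.Value);

Console.WriteLine(factoryRuns);                // 1 — the factory ran exactly once
Console.WriteLine(results.Distinct().Count()); // 1 — every thread got the same value
```

With the manual null-check version, the same experiment produces multiple factory runs under load — which is exactly the 47-instance failure mode described above.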

Search your codebase for if (_instance == null) or if (instance == null) followed by new. If there's no lock statement or Lazy<T> wrapping it, you have this bug. It will work perfectly on your laptop and fail catastrophically under real traffic. Another red flag: any Singleton class that uses a private static field assigned via a manual null check instead of Lazy<T> or a static readonly initializer.

If you see a manual null-check Singleton in a code review, flag it immediately. Lazy<T> exists for exactly this reason. The 1-line fix would have prevented a 4-hour outage.

Monday morning, 9:02 AM. A new notification feature shipped on Friday passes all tests in staging. First user triggers a notification — works perfectly. Second user, 30 seconds later — ObjectDisposedException: Cannot access a disposed context instance. Every subsequent notification fails. The team rolls back, confused — "it worked in staging because staging only had one tester hitting it at a time."

Here's the story. The team built a NotificationService that needed to look up user email addresses in the database, then send notifications. A developer registered it as a Singleton (makes sense — it's a service, you only need one). The service needed a DbContext to query the database, so they injected it through the constructor. Standard stuff, right?

The problem is about lifetimes. Think of it like this: a Singleton lives for the entire life of your application — like a permanent employee who never leaves the building. A Scoped service (like DbContext) lives for the duration of one HTTP request — like a visitor who comes in, does their thing, and leaves. When the Singleton grabbed that DbContext through its constructor, it was like the permanent employee grabbing the visitor's badge and keeping it. When the visitor left (request 1 ended), their badge was deactivated (DbContext disposed). But the permanent employee is still holding that dead badge and trying to use it for every future visitor.

Request 1 worked because the DbContext was still alive during that request. But the moment request 1 ended, the DI container cleaned up all scoped services — including that DbContext. Request 2 came in, the Singleton tried to use the same DbContext, and boom: ObjectDisposedException. The Singleton was holding a reference to a dead object.

Time to Diagnose

45 minutes. The stack trace was clear, but understanding why a DbContext was disposed while code was still using it took investigation. The term "captive dependency" (a scoped or transient service accidentally captured by a singleton, causing it to live far longer than intended) wasn't known to the team.

[Diagram — What Went Wrong: the Singleton captures a Scoped dependency. NotificationService (Singleton — lives forever) grabs a DbContext from Request 1's scope at construction time. During Request 1 the DbContext is alive and everything works; when Request 1 ends, the DbContext is disposed with its scope. On Request 2, the Singleton uses the dead DbContext — ObjectDisposedException.]

```csharp
// ❌ Singleton captures a Scoped dependency — captive dependency
// Registered as: services.AddSingleton<NotificationService>();
public class NotificationService
{
    // ❌ AppDbContext is Scoped — disposed after each request
    // But the Singleton lives forever, holding a dead reference
    private readonly AppDbContext _db;

    public NotificationService(AppDbContext db) => _db = db;

    public async Task SendAsync(int userId)
    {
        // Request 1: _db is alive — works perfectly ✅
        // Request 1 ends: DI scope disposes _db
        // Request 2: _db is DEAD — ObjectDisposedException 💀
        var user = await _db.Users.FindAsync(userId);
        await _emailSender.SendAsync(user.Email, "Hello!");
    }
}

// Program.cs
builder.Services.AddDbContext<AppDbContext>();        // Scoped (default)
builder.Services.AddSingleton<NotificationService>(); // Singleton

// DI injects the Scoped DbContext into the Singleton at construction time
// The Singleton holds onto it forever — past the scope's lifetime
```

Walking through the buggy code: Look at the constructor: public NotificationService(AppDbContext db) => _db = db;. This stores the DbContext as a field. The problem is that AddDbContext registers DbContext as Scoped by default — it's designed to live for one request and then be thrown away. But the Singleton lives forever. So the Singleton receives a DbContext that was created during the first request's scope, stores it as _db, and keeps using it for every future request. After that first request ends and the scope is cleaned up, _db points to a disposed (dead) object.

```csharp
// ✅ Inject IServiceScopeFactory — create fresh scope per operation
public class NotificationService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public NotificationService(IServiceScopeFactory sf) => _scopeFactory = sf;

    public async Task SendAsync(int userId)
    {
        // Each call gets its own scope with a fresh DbContext
        using var scope = _scopeFactory.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();

        var user = await db.Users.FindAsync(userId);
        await _emailSender.SendAsync(user.Email, "Hello!");
        // scope.Dispose() cleans up the DbContext when done
    }
}

// Program.cs — enable scope validation to catch this at startup
builder.Host.UseDefaultServiceProvider(opts =>
{
    opts.ValidateScopes = true; // throws if Singleton captures Scoped
});
```

Why the fix works: Instead of capturing a DbContext directly, the Singleton now holds an IServiceScopeFactory — which is itself a Singleton, so there's no lifetime mismatch. Every time SendAsync is called, it creates a brand new scope and gets a fresh DbContext from that scope. When the method finishes, the using block disposes the scope and its DbContext cleanly. Each operation gets its own fresh database connection, and nothing stale hangs around. The ValidateScopes = true line is your safety net — it makes the DI container check at startup whether any Singleton is trying to capture a Scoped service, and throws an exception before the app even starts.

Look for any class registered as AddSingleton that takes a DbContext, IDbConnection, or any scoped service in its constructor. The rule of thumb: a Singleton should never accept a Scoped or Transient dependency directly. If it needs one, it should inject IServiceScopeFactory or IServiceProvider and create a scope per operation. The easiest preventive measure: set ValidateScopes = true in your development configuration and the DI container will catch it for you automatically.

In ASP.NET Core, set ValidateScopes = true in development. The DI container will throw at startup if a singleton depends on a scoped service.

2:15 AM, deployment pipeline. A zero-downtime deployment rotates containers. For 200ms, the new container starts before the config volume is mounted. The ConfigManager singleton's constructor fires, can't find appsettings.json, and throws FileNotFoundException. Lazy<T> caches the exception. The config file appears 200ms later — but it doesn't matter.

To understand what happened, imagine a vending machine that jams on the first coin. Even after you fix the jam, the machine refuses to accept coins because it "remembers" it's broken. That's exactly what Lazy<T> does by default when the creation logic fails. It enters a "faulted" state and never tries again — it just replays the same error over and over.

Here's the timeline: At 2:15:00.000 AM, the new container starts. The first request comes in and tries to access ConfigManager.Instance. The Lazy<T> runs the factory function, which tries to load appsettings.json. The file isn't there yet (the config volume takes 200ms to mount), so it throws FileNotFoundException. Lazy<T> catches this exception and remembers it permanently.

At 2:15:00.200 AM — just 200 milliseconds later — the config volume is mounted and the file is on disk. But it's too late. The Lazy<T> has already decided "I tried, it failed, I'm done." Every subsequent call to ConfigManager.Instance gets the cached FileNotFoundException. The irony: the engineers can see the file on disk, but the exception keeps saying it's missing. It took 90 minutes and a "have you tried restarting it?" suggestion before someone realized the error was being replayed from memory, not happening in real-time.

Time to Diagnose

90 minutes. The misleading part: the config file existed on disk. The exception said it didn't. No one suspected the exception was being replayed from cache.

What Went Wrong: Lazy&lt;T&gt; Caches Exceptions Forever

[Diagram: timeline of the faulted Lazy&lt;T&gt; — at 2:15:00.000 the factory runs, FileNotFound is thrown and CACHED; at 2:15:00.200 the config file now exists on disk, but it's too late; at 2:15:00.201 calling .Value again REPLAYS the cached error, and every future call gets the same cached error, forever. Internal state: FAULTED (ExecutionAndPublication mode). Only a process restart clears it.]

```csharp
// ❌ Lazy<T> default mode caches exceptions FOREVER
public class ConfigManager
{
    // ❌ Default LazyThreadSafetyMode = ExecutionAndPublication
    // If the factory throws, the exception is cached permanently
    private static readonly Lazy<ConfigManager> _instance =
        new(() => new ConfigManager());

    public static ConfigManager Instance => _instance.Value;

    private readonly IConfiguration _config;

    private ConfigManager()
    {
        // ❌ During deployment, the config volume isn't mounted yet (200ms delay)
        // This throws FileNotFoundException
        _config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: false) // THROWS!
            .Build();
    }

    // 2:15:00.000 AM: Container starts, Lazy factory throws FileNotFoundException
    // 2:15:00.200 AM: Config volume mounted — file now exists
    // 2:15:00.201 AM: Next call to Instance → Lazy REPLAYS the cached exception
    // 2:15:00.202 AM: Every subsequent call → same cached exception, forever
    // The Lazy<T> is "poisoned" — only a process restart clears it
}
```

Walking through the buggy code: The Lazy<T> is created with the default mode (ExecutionAndPublication). This mode has a specific behavior: if the factory function throws an exception, that exception is cached permanently. Look at the constructor — it calls AddJsonFile("appsettings.json", optional: false). That optional: false means "throw an exception if the file doesn't exist." During the 200ms deployment window, the file truly doesn't exist. The constructor throws. Lazy<T> stores that exception in its internal state and marks itself as "faulted." From that point on, every call to .Value just re-throws the stored exception without ever trying to run the factory again.
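You can reproduce the faulted-Lazy behavior in isolation, with no config files involved — a minimal sketch:

```csharp
int attempts = 0;
var lazy = new Lazy<string>(() =>
{
    attempts++;
    if (attempts == 1) throw new InvalidOperationException("resource not ready");
    return "loaded";
}); // default mode: ExecutionAndPublication — factory exceptions are cached

try { _ = lazy.Value; } catch (InvalidOperationException) { /* first call: factory throws */ }

// The "resource" would succeed now — but the factory never runs again:
try { _ = lazy.Value; } catch (InvalidOperationException) { /* cached exception replayed */ }

Console.WriteLine(attempts); // still 1 — the factory ran exactly once
```

Swap in LazyThreadSafetyMode.PublicationOnly as the second constructor argument and the second .Value call retries the factory and succeeds.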

```csharp
// ✅ Option A: PublicationOnly mode — does NOT cache exceptions
private static readonly Lazy<ConfigManager> _instance = new(
    () => new ConfigManager(),
    LazyThreadSafetyMode.PublicationOnly // retries on failure
);
// If the factory throws, the next call retries (no cached exception)
// Trade-off: multiple threads may run the factory concurrently
// (only one result wins, the others are discarded)

// ✅ Option B: Manual retry with Interlocked (full control)
private static volatile ConfigManager? _instance;

public static ConfigManager Instance
{
    get
    {
        if (_instance is not null) return _instance;
        var temp = new ConfigManager(); // may throw — that's OK, no caching
        Interlocked.CompareExchange(ref _instance, temp, null);
        return _instance;
        // If the constructor throws, _instance stays null
        // The next call retries — eventually the config file will be there
    }
}
```

Why the fix works: Option A switches the Lazy<T> to PublicationOnly mode. In this mode, if the factory throws, the Lazy<T> does NOT cache the exception. The next time someone calls .Value, it tries the factory again. The trade-off is that multiple threads might run the factory at the same time (only one result wins), but that's a small price to pay for resilience. Option B skips Lazy<T> entirely and uses Interlocked.CompareExchange for manual control. If the constructor throws, _instance stays null and the next call tries again. Both approaches give you the retry behavior you need for environments where resources might not be ready immediately on startup.

Search for new Lazy< without a second parameter. If you see new Lazy<T>(() => ...) without specifying LazyThreadSafetyMode, the default is ExecutionAndPublication, which caches exceptions. Ask yourself: "Can this constructor ever fail?" If it touches the network, file system, database, or any external resource, the answer is yes. Switch to PublicationOnly for anything that might have transient failures.

Black Friday, 11:30 AM. An e-commerce payment service starts intermittently failing. SocketException: Address already in use. The ops team scales from 3 to 10 pods — it makes things worse. Running netstat shows 64,000 sockets in TIME_WAIT (a TCP socket state lasting ~240 seconds after connection close, during which the port cannot be reused).

This bug is sneaky because the developer did everything they were taught to do: use a using block to dispose of resources when done. That's usually the right pattern! But HttpClient is special. Here's why.

When you create a new HttpClient, it opens a TCP connection to the server. When you dispose it, the HttpClient object gets cleaned up — but the underlying TCP socket doesn't close immediately. The operating system keeps the socket in a state called TIME_WAIT for about 240 seconds (4 minutes). This is a safety feature in TCP — it prevents old packets from a previous connection from being confused with a new one. You can't disable it; it's baked into the TCP protocol.

Now do the math: 300 payments per second, each creating a new socket that hangs around for 240 seconds. That's 300 × 240 = 72,000 sockets sitting in TIME_WAIT. Most operating systems cap the number of available ephemeral ports at around 64,000. So within about 3.5 minutes, the app runs out of sockets entirely. New connections fail with SocketException. And here's the cruel irony: scaling to more pods makes it worse, because each pod creates its own sockets, accelerating the exhaustion.

Time to Diagnose

2 hours. The misleading part: the code looked correct — it used using blocks, which is normally the right pattern. The team initially blamed the payment gateway for being slow.

What Went Wrong: Socket Exhaustion from new HttpClient()

[Diagram: the singleton PaymentService calling new HttpClient() 300 times per second piles up sockets in TIME_WAIT — after Dispose() each socket lingers for ~240 seconds (a TCP rule), so 300/s × 240s = 72,000 against an OS limit of 64,000. CRASH. With IHttpClientFactory, a pooled handler (reused, rotated every 2 minutes) serves thousands of requests through one socket — no leak. Scaling to more pods makes the bad approach worse, because each pod exhausts sockets independently.]

```csharp
// ❌ Creating a new HttpClient per call — socket exhaustion
// Registered as: services.AddSingleton<PaymentService>();
public class PaymentService
{
    public async Task<bool> ChargeAsync(decimal amount)
    {
        // ❌ New HttpClient per call — looks correct (using = dispose)
        // But Dispose() doesn't release the TCP socket immediately!
        using var client = new HttpClient();
        client.BaseAddress = new Uri("https://api.pay.com/");

        var resp = await client.PostAsync("charge", ...);
        return resp.IsSuccessStatusCode;

        // client.Dispose() called here — but the underlying TCP socket
        // enters TIME_WAIT state for ~240 seconds before the OS reclaims it
    }

    // At 300 payments/second:
    //   300 × 240 seconds = 72,000 sockets in TIME_WAIT
    //   OS limit is 64,000 → SocketException: Address already in use
    // Scaling to more pods makes it WORSE — more instances = more sockets
}
```

Walking through the buggy code: Look at using var client = new HttpClient(); inside the method. Every call to ChargeAsync creates a brand new HttpClient. The using keyword means it gets disposed at the end of the method. That sounds responsible, right? The problem is that HttpClient.Dispose() only closes the HTTP layer — the underlying TCP socket doesn't go away. The operating system keeps it in TIME_WAIT state for 240 seconds as a safety measure. At high volume, sockets accumulate faster than the OS can reclaim them, and you run out of available network ports.

```csharp
// ✅ Use IHttpClientFactory — pools handlers, rotates DNS
// Program.cs:
builder.Services.AddHttpClient<PaymentService>(c =>
    c.BaseAddress = new Uri("https://api.pay.com/"));

// Service — HttpClient is injected, managed by the factory
public class PaymentService
{
    private readonly HttpClient _client;

    public PaymentService(HttpClient client) => _client = client;

    public async Task<bool> ChargeAsync(decimal amount)
    {
        // HttpClient is reused — no socket leak
        var resp = await _client.PostAsync("charge", ...);
        return resp.IsSuccessStatusCode;
    }
}

// IHttpClientFactory pools HttpMessageHandler instances internally
// Handlers are rotated every 2 minutes (prevents stale DNS)
// One handler serves thousands of requests — no socket exhaustion
```

Why the fix works: IHttpClientFactory manages a pool of HttpMessageHandler instances behind the scenes. When you call AddHttpClient<PaymentService>, the factory creates a handler that can be reused across thousands of requests. One handler = one TCP connection = one socket, serving thousands of requests through that same socket. The factory also rotates handlers every 2 minutes to prevent stale DNS problems (if a server's IP address changes). This gives you the best of both worlds: efficient socket reuse and fresh DNS resolution.

Search for new HttpClient() anywhere in your codebase. If it appears inside a method (especially one called frequently), you have this bug. Also watch for using var client = new HttpClient() — the using block makes it look correct, but that's exactly the pattern that causes socket exhaustion. The fix is always the same: use IHttpClientFactory via AddHttpClient in your DI registration.

IHttpClientFactory pools the underlying HttpMessageHandler instances and rotates them every 2 minutes, preventing both socket exhaustion and stale DNS issues.

Wednesday, discovered by the finance team — 3 days after going live. "Why did 4,200 non-VIP customers get the 20% VIP discount?" A singleton PricingEngine stored _currentDiscount as an instance field. When a VIP customer's request called SetDiscount(0.20m), that value stuck in the singleton. The next 4,200 requests — regardless of customer type — all got the VIP price. Revenue impact: $38,000.

Think of it like a restaurant where one waiter serves every table. A VIP guest tells the waiter "apply my 20% discount." The waiter writes it on a sticky note and sticks it to themselves. Now every guest who comes after gets the same discount — the waiter doesn't know the discount was only meant for that one VIP guest. The sticky note (the mutable field) is shared across all customers because there's only one waiter (one Singleton instance).

The developer thought of the pricing engine as a calculator — set a discount, then calculate prices. That logic makes sense for a single-user desktop app. But in a web server, hundreds of users are hitting the same Singleton simultaneously. One user's SetDiscount() call changes the field for everyone. The bug was invisible in testing because unit tests run one at a time. In production, 4,200 regular customers got VIP pricing over the course of 3 days before the finance team noticed revenue was mysteriously down.

Time to Diagnose

3 days to detect (finance caught the anomaly in revenue reports), 20 minutes to fix once the developer saw the code. The hardest part wasn't the fix — it was calculating the refund impact across 4,200 orders.

What Went Wrong: One User's Data Leaks to Everyone

[Diagram: one PricingEngine singleton shared by all requests. Before the VIP: _currentDiscount = 0.00. The VIP calls SetDiscount: _currentDiscount = 0.20. All future requests: _currentDiscount is STILL 0.20, so 4,200 regular users get GetPrice(100) = $80 — $38,000 revenue loss. The singleton's field is shared by ALL requests; one user's change affects everyone.]

```csharp
// ❌ Mutable state in a Singleton — shared across ALL requests
// Registered as: services.AddSingleton<PricingEngine>();
public class PricingEngine
{
    // ❌ This field is shared by every thread, every request, every user
    private decimal _currentDiscount;

    public void SetDiscount(decimal d) => _currentDiscount = d;

    public decimal GetPrice(decimal basePrice)
        => basePrice * (1 - _currentDiscount);

    // 10:00:00 AM: Request from regular user → _currentDiscount = 0
    //              GetPrice(100) = $100 ✅
    // 10:00:01 AM: Request from VIP → SetDiscount(0.20m)
    //              _currentDiscount is now 0.20 for EVERYONE
    // 10:00:02 AM: 4,200 regular users call GetPrice(100) = $80
    //              All get VIP price — $38,000 revenue loss
    // The field persists because the Singleton lives for the app's lifetime
}
```

Walking through the buggy code: The critical problem is private decimal _currentDiscount; — a mutable instance field on a Singleton. When a VIP request calls SetDiscount(0.20m), it changes this field. Since there's only one instance of the Singleton, that change is visible to every single request from that point forward. The GetPrice method uses _currentDiscount to calculate prices — and every user now sees the VIP discount. It's not a race condition or a thread-safety issue — the code works exactly as written. The design is simply wrong: per-user data should never live on a Singleton.

```csharp
// ✅ Stateless Singleton — pass context as a parameter, don't store it
public class PricingEngine // Singleton — no mutable fields
{
    public decimal GetPrice(decimal basePrice, UserContext user)
    {
        var discount = user.IsVip ? 0.20m : 0m;
        return basePrice * (1 - discount);
        // Each call computes the discount from the user context
        // No shared state — no leaking between requests
    }
}

// ✅ Alternative: use a Scoped service for per-request state
public class PricingContext // Scoped — one per request
{
    public decimal Discount { get; set; }
}

// Program.cs
builder.Services.AddSingleton<PricingEngine>();  // stateless logic
builder.Services.AddScoped<PricingContext>();    // per-request state
```

Why the fix works: The fixed version makes the Singleton stateless — it has no mutable fields at all. Instead of storing the discount as a field and hoping callers call SetDiscount before GetPrice, the method takes a UserContext parameter. Each call computes the correct discount for that specific user. Nothing is stored between calls, nothing leaks. The alternative approach separates concerns: stateless logic stays on the Singleton, per-request data lives on a Scoped service. This is the golden rule: Singletons should be stateless or only hold immutable/thread-safe data. Per-request data belongs on Scoped services.

Look at any class registered as AddSingleton and check for non-readonly, non-static fields. If you see a Singleton with public void SetX() or public X Property { get; set; }, that's a red flag. Singletons should ideally have only readonly fields set in the constructor. If the Singleton needs to work with per-user or per-request data, that data should be passed as a method parameter, not stored as a field.

2024, e-commerce platform, .NET 8. The NotificationService singleton exposed an event Action<OrderEvent> OnOrderPlaced. Each HTTP request created a scoped OrderHandler that subscribed to the event in its constructor. The handler processed the order and was supposed to be GC'd after the request. Memory grew 2 MB/hour in production.

Here's an analogy: imagine a bulletin board (the Singleton event) where people pin their phone numbers. Every time a customer walks in (new request), a clerk (OrderHandler) pins their phone number on the board. When the customer leaves, the clerk is supposed to leave too — but their phone number stays pinned. The board grows and grows. Nobody removes old numbers. After a few days, the board has 847,000 phone numbers on it, each one keeping a reference to a clerk who should have left long ago.

In C#, when you do event += handler, the event's internal delegate chain holds a strong reference to the subscriber. As long as that reference exists, the garbage collector cannot collect the subscriber, even if nothing else in the program uses it. The Singleton lives forever, so its delegate chain lives forever, which means every single handler ever subscribed stays alive in memory forever — unless you explicitly unsubscribe with -=.

The memory dump told the story: 847,000 OrderHandler instances, all alive, all reachable from the Singleton's event delegate. At 2 MB/hour, memory only ever climbed — the handlers were strongly rooted, so no garbage collection pass could reclaim them — and production eventually ran out of memory over a low-traffic weekend, with no natural cleanup to slow the growth.

Time to Diagnose

4 hours. dotnet-dump showed 847,000 live OrderHandler instances — all reachable via the Singleton's event delegate chain (a multicast delegate holding references to multiple subscriber methods; if not cleaned up, it prevents garbage collection).

What Went Wrong: Event Delegate Chain Prevents Garbage Collection

[Diagram: NotificationService (a Singleton that lives forever) holds OnOrderPlaced, a delegate chain that never goes away, with strong references to OrderHandler #1 (Req 1), #2 (Req 2), #3 (Req 3) … #847,000. GC cannot collect ANY of them — a delegate chain is a strong reference. Memory growth: 2 MB/hour; each handler ~2.5 KB; 847K handlers ≈ 2 GB leaked. Fix: -= in Dispose(), or use MediatR / IObservable&lt;T&gt;. Every += without a matching -= is a memory leak when the publisher outlives the subscriber.]

```csharp
// ❌ Scoped handler subscribes to a Singleton event, never unsubscribes
// Registered as: services.AddScoped<OrderHandler>();
public class OrderHandler : IDisposable
{
    private readonly NotificationService _notifier; // Singleton!

    public OrderHandler(NotificationService notifier)
    {
        _notifier = notifier;
        // ❌ Every request adds a NEW handler to the Singleton's delegate chain
        _notifier.OnOrderPlaced += HandleOrder;
    }

    private void HandleOrder(OrderEvent e) { /* process order */ }

    public void Dispose()
    {
        // ❌ Forgot to unsubscribe!
        // The Singleton's event delegate still holds a strong reference
        // to this OrderHandler — GC can NEVER collect it
    }

    // Request 1: OrderHandler #1 subscribes → delegate chain has 1 handler
    // Request 2: OrderHandler #2 subscribes → delegate chain has 2 handlers
    // Request 100,000: chain has 100,000 handlers — 2 MB/hour growth
    // dotnet-dump shows 847,000 live OrderHandler instances
    // All reachable via the Singleton's event delegate — not GC-eligible
}
```

Walking through the buggy code: In the constructor, _notifier.OnOrderPlaced += HandleOrder; adds this handler to the Singleton's event delegate chain. Every HTTP request creates a new OrderHandler, and each one subscribes. Now look at Dispose() — it's empty. It never calls -= HandleOrder. When the request ends, the DI container calls Dispose(), but the Singleton's delegate chain still has a reference to this handler. The garbage collector looks at the handler, sees "the Singleton is still referencing this object," and decides it's still alive. After 100,000 requests, the chain has 100,000 handlers — none of them collectable.

```csharp
// ✅ Fix: unsubscribe in Dispose()
public class OrderHandler : IDisposable
{
    private readonly NotificationService _notifier;

    public OrderHandler(NotificationService notifier)
    {
        _notifier = notifier;
        _notifier.OnOrderPlaced += HandleOrder;
    }

    private void HandleOrder(OrderEvent e) { /* process order */ }

    public void Dispose()
    {
        // ✅ Remove the handler — breaks the strong reference
        _notifier.OnOrderPlaced -= HandleOrder;
        // Now GC can collect this OrderHandler after the request ends
    }
}

// ✅ Better: use MediatR notifications (no direct event coupling)
public class OrderPlacedHandler : INotificationHandler<OrderPlacedEvent>
{
    public Task Handle(OrderPlacedEvent e, CancellationToken ct)
    {
        // No subscription/unsubscription — MediatR manages the lifecycle
        return Task.CompletedTask;
    }
}

// Or use IObservable<T>, where subscribing returns an IDisposable
```

Why the fix works: Adding _notifier.OnOrderPlaced -= HandleOrder; in Dispose() removes this handler from the delegate chain. Once removed, the Singleton no longer holds a reference to the handler, so GC can collect it normally. The "even better" approach uses MediatR or IObservable<T> where subscription management is built into the framework — you never have to remember to unsubscribe manually. With IObservable<T>, subscribing returns an IDisposable that you dispose to unsubscribe, making it impossible to forget.
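The subscription-returns-IDisposable shape of IObservable&lt;T&gt; can be sketched like this. SimpleOrderStream is an illustrative stand-in (not a framework type), and OrderEvent is assumed to be a simple payload:

```csharp
public record OrderEvent(int OrderId); // hypothetical event payload

public class SimpleOrderStream : IObservable<OrderEvent>
{
    private readonly List<IObserver<OrderEvent>> _observers = new();

    public IDisposable Subscribe(IObserver<OrderEvent> observer)
    {
        _observers.Add(observer);
        // The subscription itself is disposable — disposing it unsubscribes
        return new Unsubscriber(() => _observers.Remove(observer));
    }

    public void Publish(OrderEvent e)
    {
        foreach (var o in _observers.ToArray()) o.OnNext(e);
    }

    private sealed class Unsubscriber(Action onDispose) : IDisposable
    {
        public void Dispose() => onDispose();
    }
}

// A scoped handler stores the subscription and disposes it with itself:
//   _subscription = stream.Subscribe(this);
//   public void Dispose() => _subscription.Dispose();
```

Because a forgotten IDisposable is exactly what using statements and analyzers are built to catch, this failure mode is far more visible than a missing -=.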

Search for += in any class that implements IDisposable. For every += you find, check if the corresponding -= exists in Dispose(). If not, you have a potential memory leak. This is especially dangerous when the event publisher is a Singleton and the subscriber is a Scoped or Transient service. Use dotnet-counters or dotnet-dump to monitor for growing object counts in production.

Singleton + C# events = memory leak trap. The Singleton lives forever, so its delegate chain holds strong references to every subscriber. Always unsubscribe in Dispose(), or prefer IObservable&lt;T&gt; / MediatR (a .NET library implementing the Mediator pattern — it decouples request senders from handlers via in-process messaging), where subscription cleanup is built in.

Section 13

Pitfalls & Anti-Patterns

Mistake: Stuffing unrelated state into a singleton: user session, app config, feature flags, cache — all in one class.

Why This Happens: You might think "well, I only need one of these, so let me put everything in one place." It feels convenient — one class to hold all your app-wide stuff. This is especially tempting early in a project when you're moving fast and don't want to create multiple classes. The thought process goes: "It's a Singleton, it lives forever, it's accessible everywhere — perfect place to dump all my global state!"

But this creates a "god object" — a class that knows too much and does too much, violating Single Responsibility by accumulating unrelated responsibilities. Every part of your application depends on it. When you change the caching logic, you risk breaking the config logic. When you test the feature flags, you're dragging along the session state. It's like putting your wallet, keys, groceries, and pet hamster all in one bag — technically it works, but good luck finding your keys when you need them.

```csharp
// ❌ One Singleton to rule them all — god object
public sealed class AppState
{
    private static readonly Lazy<AppState> _instance = new();
    public static AppState Instance => _instance.Value;

    // Unrelated responsibilities crammed into one class
    public IConfiguration Config { get; set; }
    public Dictionary<string, bool> FeatureFlags { get; set; }
    public IMemoryCache Cache { get; set; }
    public UserSession CurrentSession { get; set; }
    public string ConnectionString { get; set; }
}

// ✅ Separate Singleton per responsibility
builder.Services.AddSingleton<IConfiguration>(config);
builder.Services.AddSingleton<IFeatureFlagService, LaunchDarklyService>();
builder.Services.AddSingleton<IMemoryCache, MemoryCache>();
builder.Services.AddScoped<IUserSession, HttpUserSession>();
// Each service has ONE job. Testable. Swappable. Independent.
```

Fix: Create separate singletons per responsibility. Register each via DI. Each class should do one thing — that's the Single Responsibility Principle in action.

[Diagram: the god Singleton (AppState.Instance holding Config + FeatureFlags + Cache + Session + ConnectionString — change the cache, risk breaking config) vs. separate services (IConfiguration, IFeatureFlagService, IMemoryCache, IUserSession — each with one job, testable, swappable).]

Mistake: Storing per-request data (current user, tenant ID) in a singleton.

Why This Happens: You might think "I need to know who the current user is from anywhere in my code, so I'll just set it on the Singleton at the start of each request." This feels natural if you're coming from desktop or mobile development, where there's only one user at a time. In a web server, though, dozens or hundreds of requests are being processed simultaneously, and they all share the same Singleton instance.

The result is a security nightmare: User A's data leaks into User B's response. Imagine a multi-tenant SaaS app where one company sees another company's data because the tenant ID was stored on a Singleton and got overwritten between requests. This isn't just a bug — it's a data breach.

```csharp
// ❌ Per-request data stored on a Singleton — data leak!
public sealed class RequestContext
{
    private static readonly Lazy<RequestContext> _i = new();
    public static RequestContext Instance => _i.Value;

    public int CurrentUserId { get; set; }  // ❌ Shared across ALL threads
    public string TenantId { get; set; }    // ❌ User A's tenant leaks to User B
}

// ✅ Per-request data belongs on a Scoped service
public class RequestContext
{
    public int CurrentUserId { get; set; }
    public string TenantId { get; set; }
}

// Registration — Scoped = one instance per HTTP request, isolated
builder.Services.AddScoped<RequestContext>();
// Each request gets its own RequestContext. No leaking between users.
```

Fix: Use AddScoped for per-request state. Singletons should only hold immutable or thread-safe data — like configuration, caches with proper eviction, or stateless logic.

[Diagram: with a shared RequestContext.Instance, User A sets TenantId = "acme", User B overwrites it to "globex", and User A sees Globex data — a data breach. With Scoped registration, each request gets its own RequestContext, isolated per request, with no data leaking between users.]

Mistake: Sprinkling Logger.Instance.Log(...) throughout your codebase instead of injecting ILogger.

Why This Happens: It's incredibly convenient. Logger.Instance.Log("something") works from anywhere — no constructor parameters, no DI setup, no interfaces. You might think "why bother with all that ceremony when I can just call .Instance?" This shortcut feels productive at first, but it's creating invisible wires throughout your codebase.

The problem is that when you read a class's constructor, you should be able to see everything it depends on. With Logger.Instance, the dependency is hidden inside the method body. Unit tests can't swap it for a fake because there's no injection point — the class is hardwired to the concrete Singleton. When you later want to change the logger implementation, you have to find and update every single call site.

```csharp
// ❌ Hidden dependency — can't see it from the constructor
public class OrderService
{
    public void PlaceOrder(Order order)
    {
        // Where does this come from? No constructor, no injection.
        // How do you test this without writing to a real log file?
        Logger.Instance.Log($"Order {order.Id} placed");
        // Hardcoded to the Logger class — can't swap for a fake in tests
    }
}

// ✅ Explicit dependency — visible, testable, swappable
public class OrderService
{
    private readonly ILogger _logger;

    public OrderService(ILogger logger) => _logger = logger;

    public void PlaceOrder(Order order)
    {
        _logger.Log($"Order {order.Id} placed");
        // In production: FileLogger (Singleton via DI)
        // In tests: FakeLogger that captures messages
    }
}

// Registration — same single instance, no .Instance calls needed
builder.Services.AddSingleton<ILogger, FileLogger>();
```

Fix: Register via services.AddSingleton<ILogger, FileLogger>() and inject through constructors. Same single instance, zero coupling. Your constructor becomes a menu of what the class needs.

[Diagram: hidden dependency — OrderService() takes no constructor params and Logger.Instance.Log() is buried inside a method, so you can't swap in a test fake — vs. explicit injection, where OrderService(ILogger logger) makes the dependency visible and _logger.Log() is testable with a FakeLogger.]

Mistake: Forgetting the sealed keyword on the singleton class.

Why This Happens: Honestly, most developers just forget. The sealed keyword isn't required for the code to compile and work. But without it, you've left a door open. Another developer on your team might think "I need a special version of this Singleton" and try to inherit from it. To make inheritance work, they'd change the constructor from private to protected — and suddenly the whole "only one instance" guarantee is broken because the subclass can create its own instances.

```csharp
// ❌ Not sealed — someone can subclass and break the pattern
public class Logger // Missing: sealed
{
    private static readonly Lazy<Logger> _instance = new();
    public static Logger Instance => _instance.Value;
    private Logger() { }
}

// Later, a developer "extends" it:
public class SpecialLogger : Logger // constructor must become protected for this
{
    // Now there's Logger.Instance AND new SpecialLogger() = 2 instances!
}

// ✅ Sealed — cannot be subclassed, intent is clear
public sealed class Logger
{
    private static readonly Lazy<Logger> _instance = new();
    public static Logger Instance => _instance.Value;
    private Logger() { } // private stays private — no subclass can change this
}

// Bonus: sealed enables JIT devirtualization — method calls are slightly faster
```

Fix: Always mark singleton classes as sealed. This prevents inheritance and signals intent clearly. As a bonus, the JIT compiler can optimize method calls on sealed types (devirtualization).

[Diagram: not sealed — Logger.Instance plus new SpecialLogger() means two instances and a broken pattern; sealed — cannot subclass, one instance guaranteed, plus the JIT devirtualization bonus.]

Mistake: Doing heavy work (loading large files, network calls, database migrations) inside the singleton constructor, and using eager initialization.

Why This Happens: The constructor feels like the natural place to set things up. You think "the Singleton needs a warmed cache and a database connection, so let me load everything in the constructor." This works fine when the loading takes 50ms. But what about when it takes 30 seconds? The DI container creates all Singletons before the app starts accepting requests. If your constructor blocks, the entire startup stalls.

Kubernetes health checks have a timeout. If your app doesn't respond within that window, Kubernetes kills the pod and starts a new one — which also stalls on the same constructor, gets killed, and the cycle repeats forever. In serverless environments (AWS Lambda, Azure Functions), this directly adds to your cold-start latency and can push you past timeout limits.

```csharp
// ❌ Heavy work in the constructor blocks app startup
public sealed class CacheManager
{
    public CacheManager()
    {
        // ❌ This runs DURING startup — blocks everything
        var data = File.ReadAllBytes("catalog.dat"); // 500 MB file!
        Thread.Sleep(5000);                          // simulating a slow network call
        _cache = DeserializeAndBuild(data);          // takes 30 seconds
    }
}

// ✅ Lightweight constructor + async warmup via IHostedService
public sealed class CacheManager
{
    public CacheManager() { } // instant — no heavy work

    public async Task InitializeAsync(CancellationToken ct)
    {
        var data = await File.ReadAllBytesAsync("catalog.dat", ct);
        _cache = DeserializeAndBuild(data);
    }
}

// Warmup runs during the startup pipeline (not in the constructor)
public class CacheWarmup(CacheManager cache) : IHostedService
{
    public Task StartAsync(CancellationToken ct) => cache.InitializeAsync(ct);
    public Task StopAsync(CancellationToken ct) => Task.CompletedTask;
}
```

Fix: Keep constructors lightweight. Move heavy setup to an async InitializeAsync() method triggered by IHostedService during the startup pipeline. Or use Lazy<T> so initialization happens on first use rather than at app start.

[Diagram: heavy constructor — app start blocks for 30s, the K8s health check times out, the pod is killed and restarted into the same stall, looping forever; lightweight constructor + IHostedService — the constructor is instant, the app is responsive immediately, and InitializeAsync() does the heavy work in the background.]
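The Lazy&lt;T&gt; alternative mentioned in the fix defers the load to first use rather than startup. A sketch, reusing the hypothetical DeserializeAndBuild helper from the example above:

```csharp
public sealed class CacheManager
{
    // Nothing runs at construction — the factory fires on first .Value access
    private readonly Lazy<Dictionary<string, byte[]>> _cache = new(LoadCatalog);

    public byte[]? Get(string key)
        => _cache.Value.GetValueOrDefault(key); // first caller pays the load cost

    private static Dictionary<string, byte[]> LoadCatalog()
    {
        var data = File.ReadAllBytes("catalog.dat"); // heavy work, deferred
        return DeserializeAndBuild(data);            // hypothetical helper, as above
    }
}
```

Trade-off: the first request eats the 30-second load. And remember the Lazy&lt;T&gt; bug study earlier — if LoadCatalog can fail transiently, prefer LazyThreadSafetyMode.PublicationOnly so a startup-window failure isn't cached forever.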

Mistake: A singleton holds file handles, database connections, or unmanaged resources but never implements IDisposable.

Why This Happens: You might think "Singletons live forever, so cleanup doesn't matter — it'll all get cleaned up when the process exits." That's partially true for simple cases, but in long-running services (web servers, background workers), resources that aren't explicitly cleaned up can accumulate. File handles have OS-level limits. Database connections have pool limits. If your Singleton opens a file on startup and the app restarts 100 times during a deployment, that's 100 unclosed file handles. Eventually the OS says "no more."

```csharp
// ❌ Holds resources but no cleanup mechanism
public sealed class AuditLogger
{
    private readonly StreamWriter _file;
    public AuditLogger() { _file = new StreamWriter("audit.log", append: true); }
    // Where does _file get closed? Nowhere. Resource leak.
}

// ✅ Implements IDisposable — DI container calls Dispose on shutdown
public sealed class AuditLogger : IDisposable
{
    private readonly StreamWriter _file;
    public AuditLogger() { _file = new StreamWriter("audit.log", append: true); }

    public void Dispose()
    {
        _file.Flush();
        _file.Dispose(); // Clean shutdown — file handle released
    }
}

// When registered via AddSingleton, the DI container auto-disposes on shutdown
builder.Services.AddSingleton<AuditLogger>();
```

Fix: Implement IDisposable (or IAsyncDisposable for async cleanup). When registered via DI (AddSingleton), the container calls Dispose automatically on shutdown. For manual singletons, hook into IHostApplicationLifetime.ApplicationStopping.

[Diagram: without IDisposable — each app restart leaks an unclosed file handle until the OS limit is hit. With IDisposable — Dispose() flushes and closes the file, and the DI container calls it automatically on shutdown.]

Mistake: One singleton CacheManager for all tenants. Tenant A caches product data, Tenant B sees Tenant A's prices.

Why This Happens: Multi-tenancy is often added later in a product's life. The original code was built for a single tenant and worked fine. When multi-tenant support was added, the Singleton cache was left untouched. The developer thought "caching is caching, it doesn't matter who the data belongs to." But in a multi-tenant SaaS app, Tenant A's product catalog with premium prices should never be visible to Tenant B who has different pricing. A shared Singleton cache without tenant isolation is a data breach waiting to happen.

```csharp
// ❌ Shared cache without tenant isolation
public sealed class CacheManager
{
    private readonly ConcurrentDictionary<string, object> _cache = new();
    public void Set(string key, object val) => _cache[key] = val;
    public object? Get(string key) => _cache.GetValueOrDefault(key);
}
// Tenant A: cache.Set("products", tenantAProducts)
// Tenant B: cache.Get("products") → gets Tenant A's data!

// ✅ Tenant-isolated cache — keyed by tenant ID
public sealed class CacheManager
{
    private readonly ConcurrentDictionary<string, ConcurrentDictionary<string, object>> _tenantCaches = new();

    public void Set(string tenantId, string key, object val)
    {
        var cache = _tenantCaches.GetOrAdd(tenantId, _ => new());
        cache[key] = val;
    }

    public object? Get(string tenantId, string key)
    {
        return _tenantCaches.TryGetValue(tenantId, out var cache)
            ? cache.GetValueOrDefault(key)
            : null;
    }
}
// Tenant A's data is completely isolated from Tenant B's.
```

Fix: Use a ConcurrentDictionary<string, TenantCache> keyed by tenant ID. Or use .NET 8 keyed services: services.AddKeyedSingleton<ICache>(tenantId, ...). The key rule: every piece of data in a multi-tenant Singleton must include tenant context.
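The keyed-services route can look like this — a sketch assuming .NET 8's Microsoft.Extensions.DependencyInjection; ITenantCache, InMemoryTenantCache, and the tenant keys are illustrative names, not part of the example above:

```csharp
// Hypothetical ITenantCache / InMemoryTenantCache types; tenant keys are examples.
builder.Services.AddKeyedSingleton<ITenantCache, InMemoryTenantCache>("tenant-a");
builder.Services.AddKeyedSingleton<ITenantCache, InMemoryTenantCache>("tenant-b");

// Each key resolves to its own singleton instance, injected via [FromKeyedServices]:
public class ProductService([FromKeyedServices("tenant-a")] ITenantCache cache)
{
    // 'cache' is tenant-a's instance — "tenant-b" resolves a separate one
}
```

Keyed registration moves the isolation decision into the container instead of threading a tenantId parameter through every cache method.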

[Diagram: shared cache — one CacheManager for all tenants, same key serves everyone, so Tenant B sees Tenant A's prices (cross-tenant data breach). Tenant-isolated cache — keyed by tenantId, each tenant gets its own sub-cache with no cross-tenant leaks.]

Mistake: Singleton A depends on Singleton B, and Singleton B depends on Singleton A — either directly or through a chain.

Why This Happens: It usually starts innocently. You build OrderService and it needs InventoryService to check stock. Later, someone adds a feature to InventoryService that needs to look up order history — so they inject OrderService. Now you have a circle: OrderService needs InventoryService, and InventoryService needs OrderService. Neither can be created without the other already existing. The DI container detects this and throws at startup with a message like "A circular dependency was detected."

With manual singletons (using Lazy<T> or static fields), the symptom is worse: a stack overflow during initialization, or a deadlock where both singletons wait for the other to finish constructing.

```csharp
// ❌ OrderService needs InventoryService, which needs OrderService
public class OrderService
{
    public OrderService(InventoryService inv) { } // needs Inventory
}
public class InventoryService
{
    public InventoryService(OrderService orders) { } // needs Order — circular!
}
// DI container: "I can't create OrderService without InventoryService,
// but I can't create InventoryService without OrderService. CRASH."

// ✅ Break the cycle with Lazy<T> injection
public class InventoryService
{
    private readonly Lazy<OrderService> _orders;
    public InventoryService(Lazy<OrderService> orders) => _orders = orders;
    // OrderService is NOT resolved during construction — only on first use.
    // This breaks the circular initialization.
}

// ✅ Better: extract the shared logic into a third service
public class OrderHistoryQuery { /* reads order data */ }
public class InventoryService
{
    public InventoryService(OrderHistoryQuery query) { } // no circular dep
}
```

Fix: Break the cycle with Lazy<T> injection, an intermediary interface, or the Mediator pattern. If two singletons both need each other, they probably belong in one class or need an event-based decoupling.
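One wrinkle with the Lazy<T> option: the default Microsoft DI container doesn't resolve Lazy<T> automatically, so you register a factory for it yourself. A minimal sketch (service names follow the example above; assumes the Microsoft.Extensions.DependencyInjection package):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<OrderService>();
services.AddSingleton<InventoryService>();
// The container doesn't know how to build Lazy<T> — give it a factory:
services.AddSingleton(sp =>
    new Lazy<OrderService>(() => sp.GetRequiredService<OrderService>()));

var provider = services.BuildServiceProvider();
// InventoryService constructs fine; OrderService is only resolved on first .Value
var inventory = provider.GetRequiredService<InventoryService>();

public class OrderService
{
    public OrderService(InventoryService inv) { }
}
public class InventoryService
{
    private readonly Lazy<OrderService> _orders;
    public InventoryService(Lazy<OrderService> orders) => _orders = orders;
}
```

The factory captures the provider but resolves OrderService only when .Value is first read — after both constructors have already completed.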

[Diagram: circular dependency — OrderService needs InventoryService and vice versa, so the DI container crashes at startup. Cycle broken — both depend on a shared OrderHistoryQuery instead of each other.]

Mistake: Creating a "reusable" base class like:

```csharp
// ❌ DO NOT DO THIS
public abstract class Singleton<T> where T : class, new()
{
    private static readonly Lazy<T> _instance = new();
    public static T Instance => _instance.Value;
}

public class Logger : Singleton<Logger> { } // Looks clean, right?
```

Why Bad: The new() constraint forces a public parameterless constructor — completely defeating the pattern's core guarantee. Anyone can write new Logger(). It also couples every consumer to a concrete base class, making DI migration painful. And it pollutes the inheritance hierarchy (what if Logger needs to extend something else?).

Fix: Either use Lazy<T> directly in each class with a private constructor, or (better) use AddSingleton<T>() via DI. There's no value in a generic base class when the DI container already manages singleton lifetime.

[Diagram: generic base class — Singleton<T> with a new() constraint requires a public constructor, so anyone can call new Logger() and the pattern is broken. Just use DI — AddSingleton<ILogger, FileLogger>() keeps the private constructor private, no base class needed, and the container manages the lifetime.]

Mistake: Singleton exposes a C# event and short-lived objects subscribe without unsubscribing:

```csharp
// Singleton lives forever — holds strong references to all subscribers
public class EventBus
{
    public event Action<OrderPlaced>? OrderPlaced;
}

// Scoped handler subscribes but never unsubscribes
public class OrderHandler : IDisposable
{
    public OrderHandler(EventBus bus) => bus.OrderPlaced += Handle;

    private void Handle(OrderPlaced e) { /* ... */ }

    // ❌ Dispose never removes the handler
    public void Dispose() { }
}
```

Why Bad: The Singleton lives for the entire app lifetime. Every request creates a new OrderHandler that subscribes to the event. The Singleton's event delegate chain keeps a strong reference to every handler, preventing GC. After 10K requests → 10K leaked handlers in memory.

Fix: Always unsubscribe in Dispose(), or use WeakReference-based event patterns (WeakReference holds a reference to an object without preventing its garbage collection). Better yet, use IObservable<T> with System.Reactive (subscriptions return IDisposable) or MediatR notifications (no direct coupling).
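A minimal, self-contained sketch of the unsubscribe fix — the Publish method and the Handled counter are added here purely for illustration:

```csharp
using System;

public record OrderPlaced(int Id);

public class EventBus
{
    public event Action<OrderPlaced>? OrderPlaced;
    public void Publish(OrderPlaced e) => OrderPlaced?.Invoke(e);
}

public class OrderHandler : IDisposable
{
    private readonly EventBus _bus;
    public int Handled { get; private set; }

    public OrderHandler(EventBus bus)
    {
        _bus = bus;
        _bus.OrderPlaced += Handle; // subscribe on creation
    }

    private void Handle(OrderPlaced e) => Handled++;

    // ✅ Unsubscribing removes the singleton's strong reference to this handler,
    // so the GC can collect it after disposal.
    public void Dispose() => _bus.OrderPlaced -= Handle;
}
```

After Dispose(), events published on the long-lived bus no longer reach (or retain) the handler.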

[Diagram: event leak — handlers subscribe to the app-lifetime EventBus but never unsubscribe; after 10K requests the GC can't collect 10K leaked handlers and memory grows toward OutOfMemoryException. Proper cleanup — OrderHandler.Dispose() does -= Handle, so the handler becomes collectible.]
Section 14

Testing Strategies

Testing Singleton-dependent code is one of the pattern's biggest challenges. Here are battle-tested approaches from production codebases.

Extract an interface, register via DI, and inject mocks in tests. The class doesn't know or care that it's a Singleton.

```csharp
// Production: registered as Singleton
builder.Services.AddSingleton<ICacheService, RedisCacheService>();

// Test: inject a fake
public class OrderServiceTests
{
    [Fact]
    public void PlaceOrder_CachesResult()
    {
        // Arrange
        var fakeCache = new FakeCacheService();
        var sut = new OrderService(fakeCache);

        // Act
        sut.PlaceOrder(new Order { Id = 42 });

        // Assert
        Assert.True(fakeCache.ContainsKey("order:42"));
    }
}

public class FakeCacheService : ICacheService
{
    private readonly Dictionary<string, object> _store = new();
    public void Set(string key, object val) => _store[key] = val;
    public T? Get<T>(string key) => _store.TryGetValue(key, out var v) ? (T)v : default;
    public bool ContainsKey(string key) => _store.ContainsKey(key);
}
```

For fast tests without manual fakes, use a mocking framework (a library like Moq or NSubstitute that creates fake implementations of interfaces for unit testing).

```csharp
using Moq;

[Fact]
public void GetProduct_ReturnsFromCache_WhenCached()
{
    // Arrange
    var mockCache = new Mock<ICacheService>();
    mockCache.Setup(c => c.Get<Product>("product:1"))
             .Returns(new Product { Id = 1, Name = "Widget" });
    var sut = new ProductService(mockCache.Object);

    // Act
    var result = sut.GetProduct(1);

    // Assert
    Assert.Equal("Widget", result.Name);
    mockCache.Verify(c => c.Get<Product>("product:1"), Times.Once);
}
```

For integration tests that need the real DI pipeline, use WebApplicationFactory (a test helper that bootstraps the full ASP.NET Core pipeline in-memory for integration tests) to override Singleton registrations in the test host.

```csharp
public class ApiTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public ApiTests(WebApplicationFactory<Program> factory)
    {
        _client = factory.WithWebHostBuilder(builder =>
        {
            builder.ConfigureServices(services =>
            {
                // Remove the real Singleton
                var descriptor = services.SingleOrDefault(
                    d => d.ServiceType == typeof(ICacheService));
                if (descriptor != null) services.Remove(descriptor);

                // Replace with test double
                services.AddSingleton<ICacheService, InMemoryCacheService>();
            });
        }).CreateClient();
    }

    [Fact]
    public async Task GetProducts_Returns200()
    {
        var response = await _client.GetAsync("/api/products");
        response.EnsureSuccessStatusCode();
    }
}
```

When refactoring isn't possible, use a static reset method guarded by [InternalsVisibleTo].

```csharp
// In the Singleton class (production code):
public sealed class LegacyConfig
{
    // Not readonly — so the test hook below can swap in a fresh Lazy<T>
    private static Lazy<LegacyConfig> _instance = new(() => new LegacyConfig());
    public static LegacyConfig Instance => _instance.Value;

    // Allow tests to reset — only accessible from the test assembly
    internal static void ResetForTesting() =>
        _instance = new(() => new LegacyConfig());
}

// In AssemblyInfo.cs:
[assembly: InternalsVisibleTo("MyApp.Tests")]

// In test:
public class LegacyConfigTests : IDisposable
{
    public void Dispose() => LegacyConfig.ResetForTesting();

    [Fact]
    public void Config_LoadsCorrectly() { /* ... */ }
}
```

This is a hack for legacy code: the field gives up its readonly guarantee purely so tests can reset it, and any references cached before a reset go stale. Always prefer Strategy 1 (interface + DI) for new code.

Section 15

Performance Considerations

Once initialized, accessing a Singleton via Lazy<T>.Value is essentially a volatile read (volatile prevents the compiler and CPU from caching a variable in registers or reordering reads/writes) — sub-nanosecond. The lock is only acquired during first creation.

| Approach | First Access | Subsequent Access | Contention Risk |
| --- | --- | --- | --- |
| Lazy<T> (default) | Lock + construction | Volatile read (fast) | None after init |
| Double-checked lock | Lock + construction | Volatile read (fast) | None after init |
| Static constructor | CLR class init (once) | Direct field access (fastest) | None |
| DI container | Container lookup + construction | Direct reference (fastest) | None after init |

(CLR: the Common Language Runtime, the virtual machine that executes .NET code, handling memory management, GC, and JIT compilation.)

Singletons are never garbage collected. This is by design — but has implications:


GC Gen 2 promotion (Gen 2 is the long-lived generation in .NET's garbage collector; objects surviving Gen 0 and Gen 1 are promoted there and collected less frequently):
The Singleton object and everything it references moves to Gen 2 (long-lived heap), where GC runs less frequently.


Memory anchoring:
If the Singleton holds a reference to a large collection, that collection is also pinned forever.


Cache growth:
A Singleton ConcurrentDictionary without eviction policy will grow unbounded until OOM.

For caches: use IMemoryCache with size limits and expiration. For data stores: use weak references or periodic cleanup. Monitor with dotnet-counters for Gen 2 heap size.
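A sketch of a bounded singleton cache using IMemoryCache — assumes the Microsoft.Extensions.Caching.Memory package; ProductCache and the specific limits are illustrative:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

public sealed class ProductCache // registered via AddSingleton
{
    private readonly MemoryCache _cache = new(new MemoryCacheOptions
    {
        SizeLimit = 10_000 // with a limit set, every entry must declare a Size
    });

    public void Set(string key, object value) =>
        _cache.Set(key, value, new MemoryCacheEntryOptions
        {
            Size = 1, // counts against SizeLimit; entries are evicted when full
            SlidingExpiration = TimeSpan.FromMinutes(10),
            AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1)
        });

    public bool TryGet<T>(string key, out T? value) =>
        _cache.TryGetValue(key, out value);
}
```

Unlike a bare ConcurrentDictionary, this grows to a fixed bound and expires stale entries instead of accumulating until OOM.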

Initialization is thread-safe but method-level locking can become a bottleneck:

```csharp
// BAD — global lock blocks ALL threads
public sealed class MetricsCollector
{
    private readonly object _lock = new();
    private readonly Dictionary<string, int> _counters = new();

    public void Increment(string metric)
    {
        lock (_lock) // every thread waits here!
        {
            _counters[metric] = _counters.GetValueOrDefault(metric) + 1;
        }
    }
}

// GOOD — lock-free with ConcurrentDictionary
public sealed class MetricsCollector
{
    private readonly ConcurrentDictionary<string, int> _counters = new();

    public void Increment(string metric) =>
        _counters.AddOrUpdate(metric, 1, (_, count) => count + 1);
}
```

Rule of thumb: If your Singleton method is called more than 1000 times/second, avoid locks. Use Interlocked, ConcurrentDictionary, Channel<T>, or lock-free data structures.

Section 16

How to Explain in an Interview

Opening: "The Singleton pattern ensures a class has exactly one instance throughout the application."

Core: "It works by making the constructor private — so no one can call new — and exposing a static property that returns the single instance. In C#, the cleanest way is using Lazy<T>, which handles thread safety automatically."

Example: "A good real-world example is a configuration manager. You don't want multiple copies of your app's config floating around — one source of truth keeps things consistent."

When: "I'd use it when I genuinely need one shared instance — like connection pools or caches. But in modern .NET, I prefer registering it with the DI container using AddSingleton, because that gives me the same behavior with better testability."

Close: "The key trade-off is that classic Singleton introduces hidden global state and tight coupling, which is why the DI approach is preferred in production."

Section 17

Interview Q&As

Think First: What problem does having a single instance solve? Think about shared resources.

Singleton is a creational design pattern that:

  1. Ensures a class has exactly one instance throughout the application
  2. Provides a global access point to that instance
  3. Controls instantiation via a private constructor

It exists because some resources — configuration, connection pools, caches — must be shared and creating duplicates would waste memory, cause inconsistency, or break coordination.

Great Answer Bonus: "It's one of the GoF creational patterns. The key insight is that it's not just about having one instance — it's about controlled access to a shared resource. In modern .NET, the DI container provides this guarantee more cleanly than the classic pattern."
Think First: Consider interfaces, inheritance, DI, and lifecycle control.

Static class limitations:

  • Cannot implement interfaces
  • Cannot be inherited or used polymorphically
  • Cannot be injected via DI
  • No control over initialization timing — the type initializer runs on first access to any member, and you can't defer or reorder it
  • Cannot be serialized or passed as a parameter

Singleton advantages:

  • Can implement interfaces and participate in DI
  • Supports lazy loading (deferring initialization of a resource until it is actually needed) and controlled initialization
  • Can be swapped with a test double
  • Can be serialized and passed around as an object
[Diagram: static class Helper — cannot implement interfaces, be injected via DI, be mocked, control lazy loading, or be serialized/passed around; good for pure utility functions (Math, Path). sealed class Service : IService — implements interfaces, full DI support, easily mocked, lazy or eager initialization, can be passed as a parameter; good for services with state or dependencies (config, caches, connection pools, loggers).]
Great Answer Bonus: "In modern .NET, I'd register the class as AddSingleton<IService, Service>() — the container handles the single-instance guarantee, and I can inject a mock in tests. Static classes can't do that."
Think First: What happens if two threads call Instance at the exact same time when the instance doesn't exist yet?

Three approaches to thread-safe Singleton:

  1. Lazy<T> — thread-safe by default, simplest and recommended
  2. Double-checked locking with volatile — classic pattern but verbose and error-prone
  3. Static initializer — CLR guarantees thread safety for static constructors

Here's what each approach looks like in code:

```csharp
// 1. Lazy<T> — recommended (1 line!)
private static readonly Lazy<MyService> _instance = new(() => new MyService());
public static MyService Instance => _instance.Value;

// 2. Double-checked locking — classic but verbose
private static volatile MyService? _instance;
private static readonly object _lock = new();
public static MyService Instance
{
    get
    {
        if (_instance is null)
            lock (_lock)
                _instance ??= new MyService();
        return _instance;
    }
}

// 3. Static initializer — CLR handles thread safety
private static readonly MyService _instance = new();
public static MyService Instance => _instance;
```

The first approach (Lazy<T>) is the clear winner: it's one line, handles all the thread-safety mechanics internally, and the intent is immediately obvious to anyone reading the code. The second approach (double-checked locking) requires you to get volatile, the lock, and the null-check ordering exactly right — it's easy to introduce subtle bugs. The third approach (static initializer) is simple but eager — the instance is created when the class is first accessed, even if you never use Instance.

Great Answer Bonus: "I prefer Lazy<T> because it's a single line, handles thread safety, and the intent is clear. Double-check locking is a red flag in code reviews — it's easy to get wrong and Lazy<T> exists for exactly this reason."
Think First: Consider testability, multi-tenancy, and hidden dependencies.

Avoid Singleton when:

  1. The class has mutable state that varies per request or per user
  2. You need multiple instances in the future (multi-tenant, multi-database)
  3. It's used as a shortcut for global state instead of proper DI
  4. It makes unit testing hard by introducing hidden dependencies
  5. The object holds per-request context (user identity, tenant, culture)
Great Answer Bonus: "The pattern itself isn't bad — the misuse is. If you find yourself accessing Singleton.Instance directly throughout the codebase instead of injecting it, you've created tight coupling that's hard to refactor. DI-managed singletons solve this entirely."
Think First: CPUs and compilers can reorder instructions. What could go wrong if Thread A assigns _instance before the constructor finishes?

volatile prevents two critical optimizations:

  1. CPU register caching — forces reads/writes to go to main memory, so all threads see the latest value
  2. Instruction reordering — prevents the compiler or CPU from moving the assignment before the constructor completes

Without volatile, Thread B could read a non-null _instance that points to a partially constructed object — fields not yet initialized, leading to NullReferenceException or corrupt state.

Great Answer Bonus: "This is exactly why I avoid double-checked locking entirely. Lazy<T> handles the memory barrier (a CPU instruction ensuring memory operations before the barrier complete before those after it, preventing reordering) correctly under the hood — no need to reason about CPU cache coherence yourself."
Think First: How does .NET ensure the factory lambda runs exactly once even under contention?

Lazy<T> uses three internal mechanisms:

  1. A state flag tracking: not created, creating, created, or faulted
  2. A lock object (by default LazyThreadSafetyMode.ExecutionAndPublication) that blocks other threads while the factory runs
  3. A cached value — once created, subsequent calls to .Value skip the lock entirely (fast path)

If the factory throws, Lazy<T> enters a faulted state and re-throws the same exception on every subsequent .Value call — it does NOT retry.
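The no-retry behavior is easy to demonstrate — a self-contained sketch where the attempt counters exist only for observation:

```csharp
using System;
using System.Threading;

// Default mode (ExecutionAndPublication) caches the exception — no retry:
int defaultAttempts = 0;
var cached = new Lazy<int>(() =>
{
    defaultAttempts++;
    throw new InvalidOperationException("boom");
});
try { _ = cached.Value; } catch (InvalidOperationException) { }
try { _ = cached.Value; } catch (InvalidOperationException) { }
// defaultAttempts == 1 — the second .Value replayed the cached exception

// PublicationOnly does NOT cache exceptions — each access retries the factory:
int retryAttempts = 0;
var retrying = new Lazy<int>(() =>
{
    retryAttempts++;
    if (retryAttempts < 2) throw new InvalidOperationException("boom");
    return 42;
}, LazyThreadSafetyMode.PublicationOnly);
try { _ = retrying.Value; } catch (InvalidOperationException) { }
int value = retrying.Value; // second attempt succeeds: value == 42
```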

[Diagram: Lazy<T> state machine — Not Created → (first .Value, lock held while others wait) Creating → on success, Created (fast path, no lock; .Value is a volatile read, sub-nanosecond) or, on throw, Faulted (the exception is cached and re-thrown on every subsequent .Value).]
Great Answer Bonus: "If you need retry-on-failure semantics, you'd use LazyThreadSafetyMode.PublicationOnly — multiple threads can race to create, but only one value wins. Or better, wrap the factory in a resilience policy like Polly (a .NET resilience library providing retry, circuit-breaker, timeout, and fallback policies for transient fault handling)."
Think First: Think about object lifetime in terms of HTTP requests in ASP.NET Core.
  1. AddSingleton: one instance for the entire app lifetime. Created on first request, reused across all requests and threads
  2. AddScoped: one instance per HTTP request. Created when a request starts, disposed when it ends. Different requests get different instances
  3. AddTransient: a new instance every time it's injected. No sharing at all
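The three lifetimes can be verified with a bare ServiceCollection — a sketch assuming the Microsoft.Extensions.DependencyInjection package, where the two scopes stand in for two HTTP requests and the three service classes are placeholders:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<SingletonService>();
services.AddScoped<ScopedService>();
services.AddTransient<TransientService>();
var provider = services.BuildServiceProvider();

using var scope1 = provider.CreateScope();
using var scope2 = provider.CreateScope();

// Singleton: same instance across scopes ("requests")
bool sameSingleton = ReferenceEquals(
    scope1.ServiceProvider.GetRequiredService<SingletonService>(),
    scope2.ServiceProvider.GetRequiredService<SingletonService>());

// Scoped: different instances across scopes
bool scopedDiffers = !ReferenceEquals(
    scope1.ServiceProvider.GetRequiredService<ScopedService>(),
    scope2.ServiceProvider.GetRequiredService<ScopedService>());

// Transient: new instance on every resolve, even within one scope
bool transientDiffers = !ReferenceEquals(
    scope1.ServiceProvider.GetRequiredService<TransientService>(),
    scope1.ServiceProvider.GetRequiredService<TransientService>());

public class SingletonService { }
public class ScopedService { }
public class TransientService { }
```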
[Diagram: across three requests — Singleton: one instance total, shared by all requests; Scoped: one instance per request; Transient: one instance per injection. Danger: a Singleton capturing a Scoped service creates a "captive dependency" (ObjectDisposedException).]
Great Answer Bonus: "A common bug is injecting a Scoped service into a Singleton — the Scoped service gets captured and lives forever, essentially becoming a Singleton with stale data. ASP.NET Core's ValidateScopes option catches this in development."
Think First: Think about reflection, serialization, and cloning — and which of these actually apply in modern .NET.

Yes, a classic (manual) Singleton can be broken via:

  1. Reflection: Activator.CreateInstance(typeof(T), true) can invoke the private constructor, creating a second instance
  2. Serialization/deserialization: System.Text.Json or JsonConvert deserializing JSON into the type creates a new instance (if the parameterless constructor is accessible via [JsonConstructor] or source generators)
  3. Cloning — if the class implements ICloneable, Clone() creates a duplicate. Fix: don't implement ICloneable on singletons
  4. Multiple AssemblyLoadContexts — in .NET Core, each AssemblyLoadContext that loads the assembly gets its own set of static fields, so each context gets a separate Singleton instance. This is relevant in plugin architectures using AssemblyLoadContext.Default vs custom contexts

Here's how reflection breaks a Singleton, and how to defend against it:

```csharp
// Breaking it via reflection:
var sneakyInstance = (MySingleton)Activator.CreateInstance(
    typeof(MySingleton),
    nonPublic: true // bypasses the private constructor!
);
// sneakyInstance != MySingleton.Instance — pattern broken!

// Defending against it:
public sealed class MySingleton
{
    private static int _instanceCount;

    private MySingleton()
    {
        if (Interlocked.Increment(ref _instanceCount) > 1)
            throw new InvalidOperationException(
                "Only one instance allowed. Use MySingleton.Instance.");
    }
}
```
Great Answer Bonus: "To defend against reflection, throw an InvalidOperationException in the constructor if an instance already exists. For serialization, implement a custom JsonConverter<T> that returns the existing instance. But honestly, DI-managed singletons sidestep all of this — the container controls instantiation, so reflection and serialization aren't vectors."
Think First: If a class calls Singleton.Instance directly, how do you swap it with a fake in tests?

Two approaches:

  1. Extract an interface — make the Singleton implement IMyService, inject via constructor, mock in tests
  2. Use DI — register with AddSingleton<IMyService, MyService>() in production, inject a mock in test setup

If you're stuck with a legacy Singleton that uses .Instance directly, you can use a seam: add a static SetInstance() method for tests only (guard it with #if DEBUG or [InternalsVisibleTo]).

Great Answer Bonus: "The fact that testing Singletons is hard is itself a design signal. If I see Singleton.Instance calls scattered across a codebase, my first refactoring step is wrapping it behind an interface and injecting it via DI."
Think First: What happens if someone inherits from your Singleton?

sealed prevents inheritance, which matters because:

  1. A subclass could call the protected constructor, creating a second instance
  2. The subclass could override methods, changing the Singleton's behavior unpredictably
  3. It violates the pattern's intent — "exactly one instance" should mean exactly that
Great Answer Bonus: "In C#, sealed also enables JIT optimizations — the compiler can devirtualize method calls on sealed types, making them faster."
Think First: Can you have Singleton behavior without the Singleton pattern?

DI containers provide Singleton lifetime management without the pattern:

  1. The class itself is a regular class — no private constructor, no static field
  2. The container guarantees single-instance by holding a reference and reusing it
  3. Consumers get the same instance via constructor injection — no .Instance calls
  4. Testability is built in — swap the registration in test configuration
[Diagram: classic pattern — OrderService and UserService call Logger.Instance directly: hardcoded coupling, hidden dependency, can't mock, and the class knows it's a Singleton. DI-managed — both take ILogger in their constructors while the container manages a single FileLogger: explicit dependency, testable, and the class doesn't know it's a Singleton.]
Great Answer Bonus: "DI Singleton separates two concerns the classic pattern conflates: the 'single instance' policy and the class's own logic. The container handles the policy; the class focuses on its job. That's SRP in action."
Think First: Does Lazy<T> retry? Does double-checked locking? What about static constructors?

Behavior varies by approach:

  1. Lazy<T> (default mode) — enters faulted state. Every subsequent .Value call re-throws the same exception. No retry.
  2. Double-checked locking: _instance remains null, so the next call will retry the constructor
  3. Static constructor — CLR marks the type as permanently broken. Any access throws TypeInitializationException forever
  4. DI container — depends on the container. ASP.NET Core's default will throw on first resolve and retry on subsequent requests

This is one of the most important distinctions to know. Each approach has a completely different failure mode, and picking the wrong one can mean the difference between a transient glitch and a permanent outage. The takeaway: if your constructor touches anything external (file system, network, database), either use PublicationOnly mode or move the risky work out of the constructor entirely.

Great Answer Bonus: "This is why Singleton constructors should be lightweight. If initialization can fail (database, network), do it in a separate Initialize() method with retry logic, not in the constructor."
Think First: Many senior developers argue it is. Why? And when is it legitimate?

Arguments against (why it's considered an anti-pattern):

  • Introduces hidden global state — any code can access it, making dependencies invisible
  • Violates DIP — high-level modules depend on a concrete class, not an abstraction
  • Makes unit testing difficult — shared state leaks between tests
  • Creates tight coupling — every call to .Instance is a hardcoded dependency

Arguments for (when it's legitimate):

  • True shared resources (thread pool, logger, configuration) need controlled access
  • DI-managed Singleton avoids all the anti-pattern issues while preserving single-instance semantics
Great Answer Bonus: "The classic GoF Singleton with .Instance is the anti-pattern. The concept of a single shared instance is fine — it's how you access it that matters. DI-managed singletons are a clean implementation of the same concept."
Think First: ASP.NET Core processes requests on thread pool threads. What if 100 requests hit at once and all use the same Singleton?

Critical considerations:

  1. Thread safety of internal state — if the Singleton has mutable fields, concurrent requests will cause race conditions
  2. No per-request state — storing user ID, tenant, or HttpContext in a Singleton leaks data between requests
  3. Captured Scoped services — if a Singleton captures an injected DbContext (Scoped), it holds onto a disposed context after the first request ends
  4. Memory pressure — Singletons live for the entire app lifetime, so any data they accumulate never gets GC'd
[Diagram: 100 request threads hitting one PricingEngine singleton — a mutable _discount field races (Thread A writes, B reads stale), a plain dictionary corrupts under concurrent writes, and a captured DbContext is disposed after request 1. Safe singleton ingredients: immutable data (config, routing tables), ConcurrentDictionary, thread-safe IMemoryCache, and stateless logic that takes context as parameters.]
Great Answer Bonus: "I always ask: does this Singleton hold immutable data or mutable state? Immutable (config, routing tables) is safe. Mutable needs ConcurrentDictionary, lock, or better yet — don't make it a Singleton."
Think First: Can you achieve shared state with public constructors and multiple instances?

Monostate (a pattern where all instances share the same state via static fields but are constructed normally — an alternative to Singleton) uses static fields with public constructors:

  • You can create multiple instances, but they all share the same static backing fields
  • The "single state" is invisible to consumers — they think they have separate objects
  • Works with DI and interfaces (unlike classic Singleton)

Key differences:

  • Singleton: one instance, one state. Enforced by private constructor.
  • Monostate: many instances, shared state. Enforced by static fields.
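A minimal Monostate sketch — FeatureFlags is an illustrative name, not from the text above:

```csharp
using System;
using System.Collections.Generic;

// Two "separate" objects — one shared state:
var a = new FeatureFlags();
var b = new FeatureFlags();
a.Enable("dark-mode");
bool visibleThroughB = b.IsEnabled("dark-mode"); // true — state is shared

// Normal public constructor, but ALL state lives in static fields:
public class FeatureFlags
{
    private static readonly Dictionary<string, bool> _flags = new(); // shared by every instance

    public void Enable(string name) => _flags[name] = true;
    public bool IsEnabled(string name) => _flags.GetValueOrDefault(name);
}
```

Consumers see ordinary objects, which is exactly why the shared state can surprise them.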
Great Answer Bonus: "Monostate is a stealth Singleton — it hides the shared state, which can be surprising. I prefer explicit Singleton via DI because the single-instance policy is visible in the registration code."
Think First: If a Singleton holds a file handle or database connection, when does it get cleaned up?

ASP.NET Core DI handles disposal automatically:

  1. If the Singleton implements IDisposable or IAsyncDisposable, the container calls Dispose() on application shutdown (when the root ServiceProvider is disposed) — but only for instances the container created
  2. Externally-created instances are NOT disposed: If you register via AddSingleton(new MyService()), the container does NOT call Dispose() — you own the lifecycle. Use IHostApplicationLifetime.ApplicationStopping to clean up manually
  3. Factory-created instances ARE disposed: AddSingleton<IService>(sp => new MyService()) — the container created it via the factory, so it owns disposal
  4. For manual Singleton (Lazy<T>), you must handle disposal yourself — register a callback on IHostApplicationLifetime.ApplicationStopping
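For case 4, here is a sketch of a manual Lazy<T> singleton that registers its own cleanup. FileCache is a hypothetical class, and the IsDisposed flag stands in for real resource release; in a hosted app you would hook IHostApplicationLifetime.ApplicationStopping rather than AppDomain.ProcessExit:

```csharp
using System;

public sealed class FileCache : IDisposable
{
    private static readonly Lazy<FileCache> _instance = new(() => new FileCache());
    public static FileCache Instance => _instance.Value;

    public bool IsDisposed { get; private set; }

    private FileCache()
    {
        // Nobody disposes a manual singleton automatically — register the hook
        // at construction so cleanup runs at process exit.
        AppDomain.CurrentDomain.ProcessExit += (_, _) => Dispose();
    }

    public void Dispose() => IsDisposed = true; // stand-in for closing a file handle
}
```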
Great Answer Bonus "The key rule is: whoever creates it, disposes it. AddSingleton<T>() and factory overloads — container disposes. AddSingleton(instance) — you dispose. If your singleton is IDisposable, always prefer the factory overload so the container owns the full lifecycle. For manual disposal, hook into IHostApplicationLifetime.ApplicationStopping."
Think First What happens when a Singleton takes a Scoped service as a constructor dependency?

Captive Dependency occurs when a longer-lived service captures a shorter-lived one:

  1. Singleton is created once and gets a Scoped DbContext injected
  2. After the first request, the DbContext is disposed — but the Singleton still holds the reference
  3. All subsequent requests use the disposed DbContext → ObjectDisposedException
  4. Even if it doesn't throw immediately, the data is stale — you're reading from a disconnected context

The term "captive" is perfect — the Scoped service is held captive by the Singleton, forced to live far longer than it was designed for. Think of it like a temp employee (Scoped) whose badge gets cloned by a permanent employee (Singleton). When the temp's contract ends, their access is revoked — but the permanent employee is still walking around with the cloned badge, trying to use it. Here's how to fix it:

// ❌ BAD: Singleton captures Scoped service directly
public class ReportService // AddSingleton
{
    private readonly AppDbContext _db; // Scoped — dies after request 1
    public ReportService(AppDbContext db) => _db = db;
}

// ✅ GOOD: Singleton creates fresh scope per operation
public class ReportService // AddSingleton
{
    private readonly IServiceScopeFactory _scopeFactory;
    public ReportService(IServiceScopeFactory sf) => _scopeFactory = sf;

    public async Task<List<Report>> GenerateAsync()
    {
        using var scope = _scopeFactory.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
        return await db.Reports.ToListAsync();
        // scope.Dispose() cleans up the DbContext
    }
}
Great Answer Bonus "Fix this by injecting IServiceScopeFactory into the Singleton, then creating a scope per operation: using var scope = _scopeFactory.CreateScope(); var db = scope.ServiceProvider.GetRequiredService<DbContext>();"
Think First Think about framework classes you use daily — which ones should only exist once?
  1. HttpClient — should be a Singleton (or use IHttpClientFactory) to reuse TCP connections and avoid socket exhaustion
  2. IConfiguration — registered as Singleton by default in ASP.NET Core
  3. ILoggerFactory — single factory, creates logger instances per category
  4. IMemoryCache — shared in-memory cache, Singleton by default
  5. IHostEnvironment — app environment info, immutable Singleton
  6. JsonSerializerOptions — should be reused as Singleton for performance (reflection caching)
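For item 6, a sketch of the reuse pattern — the static `Json` holder class and `OrderDto` record are illustrative:

```csharp
using System.Text.Json;

// One shared, effectively-singleton options instance: System.Text.Json caches
// reflection metadata per options object, so reusing it avoids rebuilding that
// cache on every serialization call.
public static class Json
{
    public static readonly JsonSerializerOptions Options = new()
    {
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase
    };
}

public record OrderDto(int Id, string Status);
```

Every call site then passes `Json.Options` instead of newing up options per request.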
Great Answer Bonus "The HttpClient case is the most commonly asked. Creating a new one per request causes socket exhaustion — TIME_WAIT sockets pile up. Microsoft's official guidance is to use IHttpClientFactory, which manages a Singleton HttpMessageHandler pool."
Think First If you deploy to 5 servers behind a load balancer, how many Singleton instances exist?

Singleton is a per-process concept, not per-cluster. With horizontal scaling (adding more server instances, "scale out", rather than upgrading one server, "scale up"):

  1. 5 servers = 5 separate Singleton instances, each with its own state
  2. In-memory caches will be inconsistent across servers
  3. Rate limiters counting in a Singleton will be per-server, not global
  4. Session data stored in a Singleton is lost if the next request hits a different server
[Diagram: Singleton is per-process, not per-cluster — 5 pods behind a load balancer hold 5 different Singletons, each with its own cache, counters, and state; a user hitting Pod 1 then Pod 3 gets a cache miss. The fix is distributed state (Redis, DB): the Singleton becomes a local proxy to a shared store that all pods read and write.]

Solution: Use a distributed store for shared state — Redis (an in-memory data store used as a cache, message broker, or distributed lock provider) or a database. The Singleton becomes a local proxy to the distributed system.
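A sketch of that proxy shape, with a hypothetical ISharedCounterStore seam and an in-memory fake standing in for Redis:

```csharp
using System.Collections.Concurrent;

// The seam: in production this would be backed by Redis; the in-memory fake
// below only exists to show (and test) the shape.
public interface ISharedCounterStore
{
    long Increment(string key);
}

public sealed class InMemoryCounterStore : ISharedCounterStore
{
    private readonly ConcurrentDictionary<string, long> _counts = new();
    public long Increment(string key) => _counts.AddOrUpdate(key, 1, (_, v) => v + 1);
}

// The per-process "singleton" holds no global state of its own — it proxies
// every operation to the shared store, so all pods agree on the count.
public sealed class GlobalRequestCounter
{
    private readonly ISharedCounterStore _store;
    public GlobalRequestCounter(ISharedCounterStore store) => _store = store;
    public long Record(string clientId) => _store.Increment(clientId);
}
```

Two `GlobalRequestCounter` instances sharing one store behave like two pods sharing one Redis: they see the same running total.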

Great Answer Bonus "This is why I always clarify scope. A DI Singleton gives you one instance per process. For true global uniqueness in a distributed system, you need a coordination layer — Redis for cache, a database for state, or a distributed lock for mutual exclusion."
Think First Does it matter when the instance is created if it's going to be used anyway?

Eager — instance created at class load time:

  • Simpler, guaranteed thread-safe (CLR handles static init)
  • Downside: pays the cost at startup even if never used

Lazy — instance created on first access:

  • Defers cost until actually needed
  • Better for expensive init or optional features
  • Requires thread-safety mechanisms (Lazy<T>, locks)
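A minimal sketch of both styles, with a counter added purely to observe when construction happens (class names are illustrative):

```csharp
using System;

// Eager: the CLR runs the static initializer (thread-safely) when the type
// is first used — no locks needed in your code.
public sealed class EagerSingleton
{
    public static readonly EagerSingleton Instance = new();
    private EagerSingleton() { }
}

// Lazy: construction is deferred until Instance is first read.
public sealed class LazySingleton
{
    public static int Constructions; // observable for this demo only

    private static readonly Lazy<LazySingleton> _instance =
        new(() => new LazySingleton());
    public static LazySingleton Instance => _instance.Value;

    private LazySingleton() { Constructions++; }
}
```

Touching `LazySingleton.Constructions` initializes the Lazy wrapper but not the value; the constructor runs only on the first read of `Instance`.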
Great Answer Bonus "For most production Singletons, the choice doesn't matter — they're used immediately at startup. I default to Lazy<T> for clarity of intent, but if the Singleton is always needed, eager is simpler and has no disadvantage."
Think First A Singleton lives forever. Who calls Dispose()? When?

Yes, but the lifecycle question is critical:

  1. DI-managed — the container calls Dispose() on application shutdown. This is the cleanest approach
  2. Manual Singleton — nobody calls Dispose automatically. You need to hook into AppDomain.ProcessExit or IHostApplicationLifetime.ApplicationStopping
  3. Using-block anti-pattern — if someone wraps a Singleton in a using block, it gets disposed while other code still uses it
Great Answer Bonus "The tension between Singleton and IDisposable is a design smell. If an object needs deterministic cleanup, its lifetime should be managed — which is exactly what DI containers do."
Think First The whole point of Singleton is one instance. How do you refresh its data without creating a new one?

Three production approaches:

  1. IOptionsMonitor<T> — ASP.NET Core's built-in pattern. Registered as Singleton, automatically detects file changes and exposes updated values via .CurrentValue
  2. Volatile reference swap — the Singleton holds a volatile reference to an immutable config object. On reload, create a new config object and atomically swap the reference
  3. Event-driven reload — expose a Reload() method that re-reads the config file. Use a ReaderWriterLockSlim to allow concurrent reads during reload
Great Answer Bonus "I'd use IOptionsMonitor<T> — it's battle-tested, handles file system watchers, and integrates with the configuration pipeline. Rolling your own is error-prone, especially around thread safety during reload."
Think First Think about DateTime.Now, Thread.CurrentThread, HttpContext.Current. What do they have in common?

Ambient Context provides a static access point to context-specific data:

  • Similar to Singleton in that it uses a static property for access
  • Different because the value can change per thread, per scope, or per async context
  • Uses AsyncLocal<T> or ThreadLocal<T> under the hood
  • Examples: TimeProvider.System, CultureInfo.CurrentCulture, IHttpContextAccessor
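A sketch of an ambient context built on AsyncLocal<T> (AmbientTenant is a hypothetical name):

```csharp
using System.Threading;
using System.Threading.Tasks;

// A static access point like a Singleton, but the value is per-async-flow:
// each logical flow sees its own copy, and a child flow's writes never
// overwrite the parent's value.
public static class AmbientTenant
{
    private static readonly AsyncLocal<string?> _current = new();

    public static string? Current
    {
        get => _current.Value;
        set => _current.Value = value;
    }
}
```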
Great Answer Bonus "Ambient Context shares Singleton's biggest flaw — hidden dependencies. In .NET 8+, Microsoft introduced TimeProvider as the recommended abstraction for time, specifically because DateTime.Now is untestable ambient context. The trend is moving away from both patterns toward explicit injection."
Think First Each microservice is a separate process. What does "Singleton" even mean in that context?

Singleton in microservices has different semantics:

  1. Per-service Singleton — standard DI Singleton within one service. Safe and useful for local caches, config, logger
  2. Cross-service uniqueness — requires distributed coordination: leader election (ZooKeeper, etcd), distributed locks (Redis SETNX), or database constraints
  3. Shared state — never use in-memory Singletons for state shared across services. Use Redis, database, or message queues
  4. Multiple replicas — if your service runs 10 pods in Kubernetes, you have 10 Singleton instances, each independent
Great Answer Bonus "In microservices, I think of Singleton as process-scoped, not system-scoped. For system-scoped uniqueness, I use a dedicated service or distributed primitives — trying to make an in-process Singleton behave globally is a recipe for split-brain bugs (a distributed-systems failure where two partitions each believe they are the sole active instance, leading to conflicting state)."
Think First Think beyond just "one instance." What does a production system need?

A production-ready Singleton checklist:

  1. Interface-first — implement IMyService for testability and swappability
  2. DI-managed lifetime — AddSingleton<IMyService, MyService>(), no static Instance
  3. Thread-safe internal state — use ConcurrentDictionary, Interlocked, or immutable data
  4. No captured Scoped dependencies — inject IServiceScopeFactory if needed
  5. Health check — expose state for monitoring (IHealthCheck)
  6. Graceful disposal — implement IAsyncDisposable for clean shutdown
  7. Configuration reload — use IOptionsMonitor<T> for dynamic config
  8. Logging — log initialization, errors, and state transitions
Great Answer Bonus "The best Singleton is one where the class itself doesn't know it's a Singleton. It's a regular class with a regular constructor — the DI container handles the lifetime. That way, you can change it to Scoped or Transient later without touching the class."
Think First Where does each Blazor hosting model run? What does that mean for shared state?

Completely different semantics:

  1. Blazor Server (a .NET web framework where UI logic runs on the server and DOM updates are sent to the browser via SignalR) — the app runs on the server. A Singleton registered via AddSingleton is shared across all connected users/circuits. This is the same as ASP.NET Core — one instance per server process. Danger: mutable state in the Singleton leaks between users
  2. Blazor WebAssembly (WASM) — the app runs in the browser. A Singleton is scoped to that single browser tab. Each tab gets its own DI container, so each tab gets its own "singleton." There's no cross-tab sharing. Refreshing the page destroys and recreates the container
  3. Blazor WASM + prerendering — during server-side prerender, the Singleton lives on the server momentarily, then a new one is created client-side. State doesn't transfer automatically
Great Answer Bonus "In Blazor Server, use Scoped instead of Singleton for per-user state — Scoped is per-circuit (per-connection). Singletons should only hold truly global, thread-safe state like caches or configuration. In WASM, Singleton and Scoped are effectively the same since there's only one 'scope' per tab."
Think First What atomic CPU operations can replace a lock?

Yes, using Interlocked.CompareExchange:

public sealed class LockFreeSingleton
{
    private static LockFreeSingleton? _instance;

    public static LockFreeSingleton Instance
    {
        get
        {
            if (_instance is not null) return _instance;

            var newInstance = new LockFreeSingleton();

            // Atomically set _instance if it's still null
            Interlocked.CompareExchange(ref _instance, newInstance, null);

            // Return whatever is in _instance (might be ours or another thread's)
            return _instance;
        }
    }

    private LockFreeSingleton() { }
}

Trade-off: Multiple threads may each create an instance, but only one wins the CAS (Compare-And-Swap, an atomic CPU instruction that updates a memory location only if it still holds an expected value). Losers' instances get GC'd. This is fine when construction is cheap and side-effect-free. This is exactly what LazyThreadSafetyMode.PublicationOnly does internally.

Great Answer Bonus "This is the publication-only pattern. It avoids lock contention entirely — great for high-throughput scenarios. But if the constructor has side effects (opens a file, starts a timer), you'll get duplicate side effects. Use ExecutionAndPublication mode instead when construction has side effects."
Think First What are the trade-offs between safety, performance, and exception handling?
Mode | Locking | Duplicate Init? | Exception Cached? | Use When
ExecutionAndPublication | Full lock (default) | No — exactly one thread creates | Yes — all threads see the same exception forever | Constructor has side effects (file I/O, network, timers)
PublicationOnly | Lock-free (CAS) | Yes — multiple threads may create, one wins | No — retries on failure | Cheap, side-effect-free constructor; want retry on transient errors
None | No synchronization | Undefined behavior under concurrency | Yes | Single-threaded scenarios only (startup code, tests)

Key insight: The default Lazy<T>() uses ExecutionAndPublication. The gotcha is exception caching — if the factory throws, the Lazy<T> is permanently broken. Use PublicationOnly when connecting to external resources that might transiently fail.
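That difference is easy to demonstrate with a factory that fails once — a sketch simulating a transient error:

```csharp
using System;

public static class LazyModesDemo
{
    // PublicationOnly does NOT cache factory exceptions — the next access retries.
    public static int ValueAfterOneTransientFailure()
    {
        int attempts = 0;
        var lazy = new Lazy<int>(() =>
        {
            attempts++;
            if (attempts == 1) throw new InvalidOperationException("transient");
            return 42;
        }, LazyThreadSafetyMode.PublicationOnly);

        try { _ = lazy.Value; } catch (InvalidOperationException) { /* first try fails */ }
        return lazy.Value; // the retry succeeds
    }

    // ExecutionAndPublication (the default) caches the first exception forever.
    public static bool SecondAccessStillThrows()
    {
        var lazy = new Lazy<int>(
            () => throw new InvalidOperationException("permanent"),
            LazyThreadSafetyMode.ExecutionAndPublication);

        try { _ = lazy.Value; } catch (InvalidOperationException) { }
        try { _ = lazy.Value; return false; }
        catch (InvalidOperationException) { return true; } // cached exception rethrown
    }
}
```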

Great Answer Bonus "I'd use PublicationOnly for database connection singletons — if the first attempt fails due to a transient network error, subsequent calls retry instead of caching the exception. The trade-off (potentially creating two connections momentarily) is worth the resilience."
Think First Constructors can't be async. How do you initialize a Singleton that needs to await something?

Three approaches, from best to acceptable:

  1. IHostedService / IHostedLifecycleService warmup (recommended) — IHostedService is the .NET interface for services that are started and stopped with the application host:
public class CacheWarmupService(ICache cache) : IHostedService
{
    public async Task StartAsync(CancellationToken ct)
        => await cache.InitializeAsync(ct); // Runs before Kestrel accepts requests

    public Task StopAsync(CancellationToken ct) => Task.CompletedTask;
}

// Registration
builder.Services.AddSingleton<ICache, RedisCache>();
builder.Services.AddHostedService<CacheWarmupService>();
  2. AsyncLazy<T> pattern — wrap Lazy<Task<T>>:
public class AsyncLazy<T> : Lazy<Task<T>>
{
    // Task.Run offloads the factory to the thread pool — drop it to run
    // on the calling thread (e.g., in ASP.NET Core)
    public AsyncLazy(Func<Task<T>> factory) : base(() => Task.Run(factory)) { }

    public TaskAwaiter<T> GetAwaiter() => Value.GetAwaiter();
}

// Usage — consumers must await
public class MyService(AsyncLazy<ExpensiveResource> resource)
{
    public async Task DoWork()
    {
        var r = await resource; // Initializes on first await, cached thereafter
        r.Execute();
    }
}
  3. Semaphore-guarded InitializeAsync — use SemaphoreSlim(1,1) to gate first-call initialization. Works, but callers must check IsInitialized or call EnsureInitializedAsync() before every operation
Great Answer Bonus "I prefer IHostedService because initialization completes before the app starts accepting traffic — no cold-start latency on the first request. If initialization fails, the app fails to start, which is exactly what you want (fail fast). AsyncLazy is my fallback when the resource genuinely should be lazily initialized."
Section 18

Practice Exercises

Implement a thread-safe Singleton for a ConnectionPool class without using Lazy<T>. Use double-checked locking. Ensure correctness with the volatile keyword.

  • Use a private static volatile field for the instance
  • Create a private static readonly object for the lock
  • Check null before AND after acquiring the lock
public sealed class ConnectionPool
{
    private static volatile ConnectionPool? _instance;
    private static readonly object _lock = new();

    public static ConnectionPool Instance
    {
        get
        {
            if (_instance is null)            // first check (no lock)
            {
                lock (_lock)
                {
                    _instance ??= new ConnectionPool(); // second check (under lock)
                }
            }
            return _instance;
        }
    }

    private ConnectionPool() { /* init pool */ }
}

You have a FileLogger singleton used throughout the app via FileLogger.Instance. Refactor it so it can be registered with ASP.NET Core's DI container and unit tested with a fake implementation.

  • Extract an ILogger interface with void Log(string message)
  • Make FileLogger implement ILogger
  • Remove the static Instance property — let DI manage the lifetime
  • Register with services.AddSingleton<ILogger, FileLogger>()
// 1. Define the interface
public interface ILogger
{
    void Log(string message);
}

// 2. Implement it (no more static Instance!)
public sealed class FileLogger : ILogger
{
    private readonly StreamWriter _writer;

    public FileLogger()
    {
        _writer = new StreamWriter("app.log", append: true);
    }

    public void Log(string message)
    {
        _writer.WriteLine($"[{DateTime.UtcNow:O}] {message}");
        _writer.Flush();
    }
}

// 3. Register in DI
services.AddSingleton<ILogger, FileLogger>();

// 4. Inject via constructor
public class OrderService
{
    private readonly ILogger _logger;
    public OrderService(ILogger logger) => _logger = logger;

    public void PlaceOrder(Order order) => _logger.Log("Order placed");
}

// 5. Unit test with fake
public class FakeLogger : ILogger
{
    public List<string> Messages { get; } = new();
    public void Log(string message) => Messages.Add(message);
}

[Fact]
public void OrderService_Logs_On_PlaceOrder()
{
    var fake = new FakeLogger();
    var svc = new OrderService(fake);

    svc.PlaceOrder(new Order());

    Assert.Contains("Order placed", fake.Messages[0]);
}

Implement a ConfigManager singleton that loads settings from a JSON file on first access. Add a Reload() method that re-reads the file without breaking thread safety. Consumers should always see a consistent snapshot — never a half-updated config.

  • Use an immutable settings record/class for the config data
  • Store the current config in a volatile field
  • On reload, build a complete new config object, then swap the reference atomically
  • Interlocked.Exchange ensures atomic reference swap
// Immutable snapshot of configuration
public sealed record AppSettings(
    string DbConnectionString,
    int MaxRetries,
    TimeSpan Timeout);

public sealed class ConfigManager
{
    private static readonly Lazy<ConfigManager> _instance =
        new(() => new ConfigManager());
    public static ConfigManager Instance => _instance.Value;

    private volatile AppSettings _current;
    public AppSettings Settings => _current;

    private ConfigManager() { _current = LoadFromFile(); }

    public void Reload()
    {
        var newSettings = LoadFromFile();
        // Atomic swap — readers never see a half-updated object
        Interlocked.Exchange(ref _current, newSettings);
    }

    private static AppSettings LoadFromFile()
    {
        var json = File.ReadAllText("appsettings.json");
        var doc = JsonDocument.Parse(json);
        return new AppSettings(
            doc.RootElement.GetProperty("ConnectionString").GetString()!,
            doc.RootElement.GetProperty("MaxRetries").GetInt32(),
            TimeSpan.FromSeconds(
                doc.RootElement.GetProperty("TimeoutSeconds").GetInt32()));
    }
}

The following CacheManager singleton has 4 bugs. Find them all and write the corrected version.

public class CacheManager
{
    private static CacheManager _instance;
    private Dictionary<string, object> _cache = new();

    public CacheManager() { }

    public static CacheManager Instance
    {
        get
        {
            if (_instance == null)
                _instance = new CacheManager();
            return _instance;
        }
    }

    public void Set(string key, object val) => _cache[key] = val;
    public object Get(string key) => _cache[key];
}
  • Is the class sealed? Can someone subclass it?
  • Is the constructor private?
  • Is instance creation thread-safe?
  • Is the Dictionary thread-safe for concurrent reads/writes?

The 4 bugs:

  1. Not sealed — subclasses can bypass the singleton guarantee
  2. Public constructor — anyone can call new CacheManager()
  3. No thread-safe init — race condition on _instance check
  4. Dictionary is not thread-safe — concurrent writes corrupt data
public sealed class CacheManager // Bug 1: sealed
{
    private static readonly Lazy<CacheManager> _instance =
        new(() => new CacheManager()); // Bug 3: thread-safe init
    public static CacheManager Instance => _instance.Value;

    // Bug 4: ConcurrentDictionary for thread safety
    private readonly ConcurrentDictionary<string, object> _cache = new();

    private CacheManager() { } // Bug 2: private constructor

    public void Set(string key, object val) => _cache[key] = val;
    public object? Get(string key) => _cache.TryGetValue(key, out var val) ? val : null;
}
Section 19

Cheat Sheet

  • private static readonly Lazy<T> _i = new(() => new T()); public static T Instance => _i.Value;
  • services.AddSingleton<IService, Service>(); // inject via constructor: ctor(IService svc)
  • Checklist: sealed class ✓ implement interface ✓ thread-safe state ✓ no captured Scoped deps ✓. Classic only: private ctor + static Instance. DI-managed: normal ctor + AddSingleton<T>()
Section 20

Thread Safety Deep Dive

    This section goes beyond "use Lazy<T>" and explains why thread safety matters at the CPU level. Understanding this makes you dangerous in interviews and invaluable in production debugging.

Modern CPUs don't read/write main memory directly — each core has its own L1/L2 cache. Without explicit memory barriers (CPU instructions that enforce ordering constraints on memory operations, preventing the processor from reordering reads and writes), Core 1 might write a value that Core 2 never sees.

// WITHOUT volatile — broken on multi-core CPUs
private static Singleton _instance; // no volatile!

// Thread 1 (Core 1):
_instance = new Singleton();
// CPU might reorder: assign _instance BEFORE constructor finishes
// Result: _instance points to half-constructed object

// Thread 2 (Core 2):
if (_instance != null)  // sees non-null (from Core 1's cache)
    _instance.DoWork(); // BOOM — fields not initialized yet!

The JIT compiler or the CPU can reorder new Singleton() into: (1) allocate memory, (2) assign the reference to _instance, (3) run the constructor. Thread 2 sees the reference after step 2 but before step 3. The object exists, but its fields are still zero/null. This bug is intermittent, unreproducible in debug mode, and only appears under high load on multi-core machines.

    volatile inserts memory barriers (also called memory fences) around reads and writes:

Operation | Without volatile | With volatile
Read | May read a stale value from the CPU cache | Acquire fence: always observes the latest published value
Write | May linger in the CPU's store buffer, invisible to other cores | Release fence: the write becomes visible to all cores
Reordering | Compiler/CPU can move reads/writes around freely | No reads/writes can move across the barrier
// WITH volatile — safe on all architectures
private static volatile Singleton? _instance;

// The volatile write ensures:
// 1. Constructor FULLY completes before _instance is assigned
// 2. The assignment is visible to ALL cores
// 3. No instruction can be reordered past the volatile write

// The volatile read ensures:
// 1. Thread 2 reads the latest value, not a stale cached copy
// 2. If _instance is non-null, the object is FULLY constructed

    Lazy<T> handles all of this internally. Here's what it does under the hood (simplified):

// Simplified version of what Lazy<T> does internally:
public class SimplifiedLazy<T> where T : class
{
    private Func<T>? _factory;
    private volatile T? _value;         // volatile for visibility
    private volatile bool _initialized; // volatile for ordering
    private readonly object _lock = new();

    public T Value
    {
        get
        {
            // Fast path — no lock after initialization
            if (_initialized) return _value!; // volatile read: guaranteed fresh

            lock (_lock)
            {
                if (!_initialized)
                {
                    _value = _factory!();  // volatile write: visible to all cores
                    _initialized = true;   // volatile write: ordered after _value
                    _factory = null;       // allow GC of the factory delegate
                }
            }
            return _value!;
        }
    }
}

After the first call, Lazy<T>.Value is just a volatile read plus a branch — no lock, no contention, a few nanoseconds at most. The lock only exists for the initialization race. That's why it's the gold standard.

    Section 21

    Real-World Mini-Project: Building a Rate Limiter

    Production note: .NET 7+ includes System.Threading.RateLimiting with built-in FixedWindowRateLimiter, SlidingWindowRateLimiter, TokenBucketRateLimiter, and ConcurrencyLimiter. Use the built-in library in production. We build one from scratch here to understand the internals — which is exactly what interviewers want to see.

    Let's build a production-grade Rate Limiter Singleton — from a naive first attempt to a battle-tested implementation. This is the kind of progression interviewers love to see.

// ATTEMPT 1: A junior's first try
public class RateLimiter
{
    private static RateLimiter _instance = new();
    private Dictionary<string, List<DateTime>> _requests = new();

    public bool IsAllowed(string clientId, int maxRequests, TimeSpan window)
    {
        if (!_requests.ContainsKey(clientId))
            _requests[clientId] = new List<DateTime>();

        // Remove expired entries
        _requests[clientId].RemoveAll(t => t < DateTime.Now - window);

        if (_requests[clientId].Count >= maxRequests)
            return false;

        _requests[clientId].Add(DateTime.Now);
        return true;
    }
}

    1. Not thread-safe — Dictionary corrupts under concurrent access.
    2. Public constructor — anyone can create a second instance.
    3. DateTime.Now — uses local time, breaks on DST changes.
    4. Unbounded memory — expired entries only cleaned on access, idle clients never cleaned.
    5. No interface — untestable.
    6. Not sealed — can be subclassed.

// ATTEMPT 2: Thread-safe, but still has issues
public sealed class RateLimiter
{
    // Note: new Lazy<RateLimiter>() with no factory would fail at runtime —
    // Activator can't call the private constructor. Pass a factory delegate.
    private static readonly Lazy<RateLimiter> _instance =
        new(() => new RateLimiter());
    public static RateLimiter Instance => _instance.Value;

    private readonly ConcurrentDictionary<string, ConcurrentQueue<long>> _requests = new();

    private RateLimiter() { }

    public bool IsAllowed(string clientId, int maxRequests, TimeSpan window)
    {
        var queue = _requests.GetOrAdd(clientId, _ => new ConcurrentQueue<long>());
        var cutoff = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds()
                     - (long)window.TotalMilliseconds;

        // Dequeue expired entries
        while (queue.TryPeek(out var oldest) && oldest < cutoff)
            queue.TryDequeue(out _);

        if (queue.Count >= maxRequests) return false;

        queue.Enqueue(DateTimeOffset.UtcNow.ToUnixTimeMilliseconds());
        return true;
    }
}

    1. Race condition between Count check and Enqueue — two threads can both see count=99 and both add, exceeding the limit.
    2. No memory cleanup for idle clients.
    3. Static Instance — not testable, not DI-friendly.
    4. Policy is hardcoded per call — should be configurable.

public interface IRateLimiter
{
    /// <summary>
    /// Attempts to acquire a permit for the given client.
    /// Returns true if allowed, false if rate limit exceeded.
    /// </summary>
    bool TryAcquire(string clientId);

    /// <summary>Returns current usage stats for monitoring.</summary>
    RateLimitStats GetStats(string clientId);
}

public record RateLimitStats(
    int CurrentCount,
    int MaxAllowed,
    TimeSpan WindowSize,
    TimeSpan? RetryAfter);

public sealed class SlidingWindowRateLimiter : IRateLimiter, IDisposable
{
    private readonly ConcurrentDictionary<string, ClientWindow> _clients = new();
    private readonly IOptionsMonitor<RateLimitOptions> _options;
    private readonly TimeProvider _timeProvider;
    private readonly ILogger<SlidingWindowRateLimiter> _logger;
    private readonly Timer _cleanupTimer;

    public SlidingWindowRateLimiter(
        IOptionsMonitor<RateLimitOptions> options,
        TimeProvider timeProvider,
        ILogger<SlidingWindowRateLimiter> logger)
    {
        _options = options;
        _timeProvider = timeProvider;
        _logger = logger;

        // Periodic cleanup of idle clients — prevents unbounded memory growth
        _cleanupTimer = new Timer(
            _ => CleanupIdleClients(), null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    public bool TryAcquire(string clientId)
    {
        var opts = _options.CurrentValue;
        var now = _timeProvider.GetUtcNow().ToUnixTimeMilliseconds();
        var window = _clients.GetOrAdd(clientId, _ => new ClientWindow());

        lock (window.Lock) // per-client lock, not global
        {
            // Slide the window: remove expired timestamps
            var cutoff = now - (long)opts.Window.TotalMilliseconds;
            while (window.Timestamps.Count > 0 && window.Timestamps.Peek() < cutoff)
                window.Timestamps.Dequeue();

            if (window.Timestamps.Count >= opts.MaxRequests)
            {
                _logger.LogWarning(
                    "Rate limit exceeded for {ClientId}: {Count}/{Max}",
                    clientId, window.Timestamps.Count, opts.MaxRequests);
                return false;
            }

            window.Timestamps.Enqueue(now);
            window.LastAccess = now;
            return true;
        }
    }

    public RateLimitStats GetStats(string clientId)
    {
        var opts = _options.CurrentValue;
        if (!_clients.TryGetValue(clientId, out var window))
            return new(0, opts.MaxRequests, opts.Window, null);

        lock (window.Lock)
        {
            var now = _timeProvider.GetUtcNow().ToUnixTimeMilliseconds();
            var cutoff = now - (long)opts.Window.TotalMilliseconds;
            var active = window.Timestamps.Count(t => t >= cutoff);
            TimeSpan? retryAfter = active >= opts.MaxRequests
                ? TimeSpan.FromMilliseconds(window.Timestamps.Peek() - cutoff)
                : null;
            return new(active, opts.MaxRequests, opts.Window, retryAfter);
        }
    }

    private void CleanupIdleClients()
    {
        var cutoff = _timeProvider.GetUtcNow()
            .AddMinutes(-30).ToUnixTimeMilliseconds();
        var removed = 0;
        foreach (var kvp in _clients)
        {
            if (kvp.Value.LastAccess < cutoff)
            {
                _clients.TryRemove(kvp.Key, out _);
                removed++;
            }
        }
        if (removed > 0)
            _logger.LogInformation("Cleaned up {Count} idle rate limit entries", removed);
    }

    public void Dispose() => _cleanupTimer.Dispose();

    private sealed class ClientWindow
    {
        public readonly object Lock = new();
        public readonly Queue<long> Timestamps = new();
        public long LastAccess;
    }
}

public class RateLimitOptions
{
    public int MaxRequests { get; set; } = 100;
    public TimeSpan Window { get; set; } = TimeSpan.FromMinutes(1);
}

// Registration — DI handles the Singleton lifetime
builder.Services.Configure<RateLimitOptions>(
    builder.Configuration.GetSection("RateLimit"));
builder.Services.AddSingleton(TimeProvider.System);
builder.Services.AddSingleton<IRateLimiter, SlidingWindowRateLimiter>();

// Usage in middleware:
app.Use(async (context, next) =>
{
    var limiter = context.RequestServices.GetRequiredService<IRateLimiter>();
    var clientId = context.Connection.RemoteIpAddress?.ToString() ?? "unknown";

    if (!limiter.TryAcquire(clientId))
    {
        var stats = limiter.GetStats(clientId);
        context.Response.StatusCode = 429;
        context.Response.Headers.RetryAfter =
            stats.RetryAfter?.TotalSeconds.ToString("F0") ?? "60";
        await context.Response.WriteAsync("Rate limit exceeded");
        return;
    }

    await next();
});

public class RateLimiterTests
{
    private readonly FakeTimeProvider _time = new();
    private readonly IRateLimiter _sut;

    public RateLimiterTests()
    {
        var options = Options.Create(new RateLimitOptions
        {
            MaxRequests = 3,
            Window = TimeSpan.FromMinutes(1)
        });
        var monitor = Mock.Of<IOptionsMonitor<RateLimitOptions>>(
            m => m.CurrentValue == options.Value);
        _sut = new SlidingWindowRateLimiter(
            monitor, _time, NullLogger<SlidingWindowRateLimiter>.Instance);
    }

    [Fact]
    public void Allows_Requests_Under_Limit()
    {
        Assert.True(_sut.TryAcquire("client-1"));
        Assert.True(_sut.TryAcquire("client-1"));
        Assert.True(_sut.TryAcquire("client-1"));
    }

    [Fact]
    public void Blocks_When_Limit_Exceeded()
    {
        for (int i = 0; i < 3; i++) _sut.TryAcquire("client-1");
        Assert.False(_sut.TryAcquire("client-1")); // 4th request blocked
    }

    [Fact]
    public void Window_Slides_After_Expiry()
    {
        for (int i = 0; i < 3; i++) _sut.TryAcquire("client-1");
        Assert.False(_sut.TryAcquire("client-1")); // blocked
        _time.Advance(TimeSpan.FromMinutes(1));    // window expires
        Assert.True(_sut.TryAcquire("client-1"));  // allowed again
    }

    [Fact]
    public void Isolates_Clients()
    {
        for (int i = 0; i < 3; i++) _sut.TryAcquire("client-1");
        Assert.True(_sut.TryAcquire("client-2")); // different client
    }
}

    Why This Is Production-Ready

    Per-Client Locking

    Each client gets its own lock object. Client A's rate check never blocks Client B. Zero contention between unrelated clients.

    Memory Cleanup

    A background timer removes idle clients every 5 minutes. Without this, a DDoS with 1M unique IPs would OOM the server.

    TimeProvider Abstraction

    .NET 8's TimeProvider replaces DateTime.UtcNow. Tests use FakeTimeProvider to control time without waiting.

    Hot-Reload Config

    IOptionsMonitor<T> lets you change rate limits at runtime via config file — no restart needed.
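    As a quick sketch of how a consumer can observe those runtime changes, the OnChange hook on IOptionsMonitor&lt;T&gt; fires whenever the bound configuration section reloads (for example, when appsettings.json is edited on disk). The watcher class and log message below are illustrative, not part of the rate limiter above:

    ```csharp
    // Illustrative sketch: reacting to RateLimitOptions changes at runtime.
    public sealed class RateLimitOptionsWatcher : IDisposable
    {
        private readonly IDisposable? _subscription;

        public RateLimitOptionsWatcher(
            IOptionsMonitor<RateLimitOptions> monitor,
            ILogger<RateLimitOptionsWatcher> logger)
        {
            // OnChange fires each time the bound config section reloads.
            _subscription = monitor.OnChange(opts =>
                logger.LogInformation(
                    "Rate limit is now {Max} requests per {Window}",
                    opts.MaxRequests, opts.Window));
        }

        // Dispose the subscription so the callback stops firing.
        public void Dispose() => _subscription?.Dispose();
    }
    ```

    The rate limiter itself doesn't need this: it reads _options.CurrentValue on every call, so it always sees the latest values. A watcher like this is only useful when you want to react to a change, such as logging it or resetting cached state.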

    Section 22

    Migration Guide: Classic → DI Singleton

    You've inherited a codebase with DatabaseManager.Instance sprinkled across 47 files. Here's how to migrate without breaking anything.

    // BEFORE: monolithic singleton
    public sealed class DatabaseManager
    {
        private static readonly Lazy<DatabaseManager> _instance =
            new(() => new DatabaseManager()); // must pass a factory — the ctor is private
        public static DatabaseManager Instance => _instance.Value;
        private DatabaseManager() { }

        public User GetUser(int id) { /* ... */ }
        public void SaveOrder(Order o) { /* ... */ }
    }

    // STEP 1: Extract interface (right-click → Extract Interface in VS)
    public interface IDatabaseManager
    {
        User GetUser(int id);
        void SaveOrder(Order order);
    }

    // Make the class implement it — ZERO behavior change
    public sealed class DatabaseManager : IDatabaseManager
    {
        // ... everything else stays the same for now
    }

    Risk: Zero. Adding an interface is backward-compatible. Existing .Instance calls still work.

    // In Program.cs — register the existing instance
    builder.Services.AddSingleton<IDatabaseManager>(
        _ => DatabaseManager.Instance); // bridge: DI serves the same instance

    // Now BOTH access paths return the same object:
    // OLD: DatabaseManager.Instance.GetUser(1)  ← still works
    // NEW: _dbManager.GetUser(1)                ← injected via constructor

    Risk: Minimal. Old code continues using .Instance. New code uses DI. Same object either way.

    // BEFORE (each file, one at a time):
    public class OrderService
    {
        public void PlaceOrder(Order order)
        {
            DatabaseManager.Instance.SaveOrder(order); // direct coupling
        }
    }

    // AFTER:
    public class OrderService
    {
        private readonly IDatabaseManager _db;
        public OrderService(IDatabaseManager db) => _db = db;

        public void PlaceOrder(Order order)
        {
            _db.SaveOrder(order); // injected, testable
        }
    }

    Pace: One file per PR. Each PR is small, reviewable, and independently deployable. The .Instance bridge ensures old and new code coexist.

    // When ALL 47 files are migrated and no references to .Instance remain:
    // 1. Search the codebase: "DatabaseManager.Instance" — should return 0 results
    // 2. Remove the static infrastructure:
    public sealed class DatabaseManager : IDatabaseManager
    {
        // DELETED: private static readonly Lazy<DatabaseManager> _instance = ...
        // DELETED: public static DatabaseManager Instance => _instance.Value;

        // Constructor is now public — DI creates the instance
        public DatabaseManager(IConfiguration config, ILogger<DatabaseManager> logger)
        {
            // Can now accept dependencies via constructor!
        }
    }

    // 3. Update registration:
    builder.Services.AddSingleton<IDatabaseManager, DatabaseManager>();

    The class is now a regular class. The DI container manages its Singleton lifetime. It can accept constructor dependencies, be unit tested with mocks, and be swapped for a different implementation without changing any consumer.
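    A brief sketch of what that testability buys you, using Moq against the IDatabaseManager interface from the migration steps (OrderService and Order are the types from the earlier snippets; the test itself is illustrative):

    ```csharp
    // Illustrative test for the migrated OrderService. With the interface in
    // place, Moq stands in for the database: no Singleton, no static state
    // shared between tests, no real connection.
    public class OrderServiceTests
    {
        [Fact]
        public void PlaceOrder_Saves_The_Order()
        {
            var db = new Mock<IDatabaseManager>();
            var sut = new OrderService(db.Object);
            var order = new Order();

            sut.PlaceOrder(order);

            // Verify the service delegated to the database exactly once.
            db.Verify(d => d.SaveOrder(order), Times.Once);
        }
    }
    ```

    This kind of test was impossible against DatabaseManager.Instance: the static instance persists across tests, so one test's writes leak into the next.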

    Section 23

    Code Review Checklist

    Use this checklist when reviewing Singleton code in PRs. Print it, bookmark it, tattoo it on your forearm.

    #  | Check                                                | Why It Matters                                                    | Red Flag
    1  | Is the class sealed?                                 | Prevents a subclass from creating a second instance               | public class MySingleton without sealed
    2  | Is the constructor private (or DI-managed)?          | Prevents new MySingleton() outside the class                      | Public constructor on a class named *Singleton or *Manager
    3  | Is it thread-safe?                                   | Multiple threads will access it concurrently in web apps          | Dictionary instead of ConcurrentDictionary; no locks on mutable state
    4  | Does it capture Scoped services?                     | Scoped services get disposed; the Singleton holds dead references | Constructor taking DbContext, HttpContext, or any Scoped service
    5  | Does it hold mutable per-request state?              | Leaks data between users — a security vulnerability               | Instance fields like _currentUser, _tenantId, _requestData
    6  | Is it accessed via DI or .Instance?                  | .Instance creates hidden dependencies                             | MySingleton.Instance.DoWork() in business logic
    7  | Does it implement an interface?                      | Required for testability and swappability                         | Concrete class injected directly: ctor(MySingleton s)
    8  | Does it implement IDisposable if it holds resources? | File handles, connections, and timers need cleanup                | StreamWriter, HttpClient, or Timer fields without Dispose
    9  | Is the constructor lightweight?                      | Heavy constructors block startup or the first request             | Network calls, file I/O, or database queries in the constructor
    10 | Does it have unbounded growth?                       | Singletons live forever — collections only grow                   | Dictionary or List without size limits or eviction
    11 | Does it accept CancellationToken on async methods?   | Long-lived singletons must support graceful shutdown              | Async methods without a CancellationToken parameter — blocks shutdown
    12 | Does it expose events without unsubscribe guidance?  | Singleton event publishers leak subscribers (see Pitfall 10)      | Public event Action without a corresponding remove pattern or IObservable
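    Check #4, the captive dependency, is the subtlest item on the list, so here is a minimal sketch of the failure and the standard fix. AppDbContext and Report are hypothetical names used only for illustration; the pattern applies to any Scoped service:

    ```csharp
    // WRONG: ReportCache is registered as a Singleton, but AppDbContext is
    // Scoped. The Singleton outlives the scope that created the DbContext,
    // so it holds a disposed ("captive") dependency forever.
    public class ReportCache
    {
        private readonly AppDbContext _db;   // captive Scoped service
        public ReportCache(AppDbContext db) => _db = db;
    }

    // RIGHT: inject IServiceScopeFactory and create a fresh scope per
    // operation, so each DbContext lives exactly as long as one call.
    public class SafeReportCache
    {
        private readonly IServiceScopeFactory _scopes;
        public SafeReportCache(IServiceScopeFactory scopes) => _scopes = scopes;

        public Report? Load(int id)
        {
            using var scope = _scopes.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            return db.Reports.Find(id);      // hypothetical EF Core DbSet
        }
    }
    ```

    The rule of thumb: a service may only inject dependencies with a lifetime equal to or longer than its own.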
    Automate it: These analyzers and runtime checks catch Singleton issues before they reach production:
    • CA1063 — Implement IDisposable correctly
    • CA2000 — Dispose objects before losing scope
    • CA1812 — Avoid uninstantiated internal classes (catches orphaned singletons)
    • ASP0000 — Do not call BuildServiceProvider from application code (a common shortcut when eagerly resolving singletons at startup; it builds a second container with its own singleton instances)
    • Container scope validation — ValidateScopes + ValidateOnBuild catch captive dependencies at startup rather than at first resolution
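    A minimal sketch of turning that scope validation on everywhere. By default the host enables it only in the Development environment, so captive dependencies can slip through to production; this forces the checks in every environment:

    ```csharp
    // Sketch: enable container validation unconditionally in Program.cs.
    var builder = WebApplication.CreateBuilder(args);

    builder.Host.UseDefaultServiceProvider(options =>
    {
        options.ValidateScopes = true;  // throw if a Scoped service is resolved from the root scope
        options.ValidateOnBuild = true; // verify the whole service graph at startup, failing fast
    });

    var app = builder.Build(); // ValidateOnBuild runs here
    app.Run();
    ```

    ValidateOnBuild adds a little startup cost, but failing at boot is far cheaper than discovering a mis-registered Singleton on the first production request.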