
Introduction
Imagine you’re working at a library with only 5 shelves available on your desk. You keep your most-used books there for quick access. When someone brings you a 6th book and all shelves are full, you put away whichever book you haven’t touched in the longest time. That’s LRU Cache in a nutshell.
LRU (Least Recently Used) is a cache eviction policy: when the cache is full and a new item needs to be stored, the item that was accessed least recently gets removed. It’s one of the most widely used caching strategies in real systems — Redis, browsers, operating systems, and database engines all use it.
In this article, we’ll:
- Understand why LRU matters and where it shows up in the wild
- Build one from scratch in TypeScript
- See how to use it in an Express API
- Explore how Redis leverages it and how to configure it
Why LRU? The Problem with Naive Caching
Suppose you have a cache that can hold 3 items. You receive these requests in order:
GET /user/1 → cache miss, fetch from DB, store in cache → Cache: [1]
GET /user/2 → cache miss, fetch from DB, store in cache → Cache: [1, 2]
GET /user/3 → cache miss, fetch from DB, store in cache → Cache: [1, 2, 3]
GET /user/1 → cache hit! → Cache: [2, 3, 1]
GET /user/4 → cache miss, cache is full!
Which item should we evict to make room for /user/4? With LRU, the answer is /user/2 — it was accessed longest ago. The assumption is: recently used items are likely to be used again soon.
This is backed by the locality of reference principle, which holds true for most real-world workloads.
How LRU Cache Works Internally
The classic LRU implementation combines two data structures:
- A HashMap — for O(1) lookups by key
- A Doubly Linked List — to track access order (most recent at head, least recent at tail)
Every time you get an item, you move it to the front of the list (most recently used). Every time you set an item, you add it to the front. If the cache is full, you remove the item at the tail (least recently used).
Most Recent ←————————————————→ Least Recent
[HEAD] <-> [node3] <-> [node2] <-> [node1] <-> [TAIL]
This gives us:
- O(1) get — HashMap lookup
- O(1) set — HashMap insert + linked list prepend
- O(1) eviction — remove tail node
Building LRU Cache in TypeScript
Step 1: The Node and Cache Structure
class LRUNode<K, V> {
  key: K;
  value: V;
  prev: LRUNode<K, V> | null = null;
  next: LRUNode<K, V> | null = null;

  constructor(key: K, value: V) {
    this.key = key;
    this.value = value;
  }
}

class LRUCache<K, V> {
  private capacity: number;
  private map: Map<K, LRUNode<K, V>>;
  private head: LRUNode<K, V>; // dummy head (most recent side)
  private tail: LRUNode<K, V>; // dummy tail (least recent side)

  constructor(capacity: number) {
    if (capacity <= 0) throw new Error("Capacity must be positive");
    this.capacity = capacity;
    this.map = new Map();
    // Dummy sentinel nodes — simplify edge case handling.
    // (The double cast keeps strict TypeScript happy about the null keys.)
    this.head = new LRUNode<K, V>(null as unknown as K, null as unknown as V);
    this.tail = new LRUNode<K, V>(null as unknown as K, null as unknown as V);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }
Step 2: Internal Linked List Operations
  private addToFront(node: LRUNode<K, V>): void {
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next!.prev = node;
    this.head.next = node;
  }

  private removeNode(node: LRUNode<K, V>): void {
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
  }

  private removeLRU(): LRUNode<K, V> {
    const lru = this.tail.prev!; // The node just before the dummy tail
    this.removeNode(lru);
    return lru;
  }
Step 3: The Public API
  get(key: K): V | undefined {
    const node = this.map.get(key);
    if (!node) return undefined;
    // Move to front — it's now the most recently used
    this.removeNode(node);
    this.addToFront(node);
    return node.value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      // Update existing node and move it to front
      const node = this.map.get(key)!;
      node.value = value;
      this.removeNode(node);
      this.addToFront(node);
    } else {
      if (this.map.size >= this.capacity) {
        // Evict the least recently used
        const lru = this.removeLRU();
        this.map.delete(lru.key);
      }
      const node = new LRUNode(key, value);
      this.addToFront(node);
      this.map.set(key, node);
    }
  }

  delete(key: K): boolean {
    const node = this.map.get(key);
    if (!node) return false;
    this.removeNode(node);
    this.map.delete(key);
    return true;
  }

  get size(): number {
    return this.map.size;
  }

  clear(): void {
    this.map.clear();
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }
}
Let’s Test It
const cache = new LRUCache<string, string>(3);
cache.set("a", "Apple");
cache.set("b", "Banana");
cache.set("c", "Cherry");
console.log(cache.get("a")); // "Apple" → "a" is now most recent
cache.set("d", "Date"); // Cache full → evicts "b" (least recently used)
console.log(cache.get("b")); // undefined → evicted!
console.log(cache.get("c")); // "Cherry"
console.log(cache.get("d")); // "Date"
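As an aside, JavaScript's Map iterates keys in insertion order, which permits a much shorter (if less instructive) LRU: re-inserting a key on access moves it to the "most recent" end, and the first key in iteration order is the eviction victim. A minimal sketch, separate from the class above:

```typescript
// Minimal LRU built on Map's insertion-order guarantee.
// Less instructive than the linked-list version, but handy for quick scripts.
class MapLRU<K, V> {
  private map = new Map<K, V>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Delete and re-insert to move the key to the "most recent" end
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // First key in iteration order is the least recently used
      const lruKey = this.map.keys().next().value as K;
      this.map.delete(lruKey);
    }
    this.map.set(key, value);
  }
}
```

Both versions are O(1) per operation in practice; the linked-list version makes the mechanics explicit, which is why interviews and textbooks prefer it.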
Real-World Use Case 1: Express API with In-Memory LRU
Here’s a practical example: caching user profiles fetched from a database using our LRU cache inside an Express route.
import express, { Request, Response } from "express";

const app = express();

// Simulate a database call (slow)
async function fetchUserFromDB(userId: string): Promise<Record<string, unknown>> {
  await new Promise((resolve) => setTimeout(resolve, 100)); // 100ms latency
  return { id: userId, name: `User ${userId}`, email: `user${userId}@example.com` };
}

// Initialize LRU cache: max 500 users, keys are strings, values are user objects
const userCache = new LRUCache<string, Record<string, unknown>>(500);

app.get("/users/:id", async (req: Request, res: Response) => {
  const { id } = req.params;
  const cacheKey = `user:${id}`;

  // Check cache first
  const cached = userCache.get(cacheKey);
  if (cached) {
    res.set("X-Cache", "HIT");
    return res.json(cached);
  }

  // Cache miss: fetch from DB
  const user = await fetchUserFromDB(id);
  userCache.set(cacheKey, user);

  res.set("X-Cache", "MISS");
  res.json(user);
});

app.listen(3000, () => console.log("Server running on port 3000"));
What’s happening here:
- First request for /users/42 takes ~100ms (DB hit)
- Subsequent requests are served from the LRU cache in under 1ms
- If more than 500 distinct users are fetched, the least recently accessed one gets evicted automatically
Real-World Use Case 2: API Response Caching with TTL
LRU alone doesn’t handle expiration. Let’s extend it to support a time-to-live (TTL) — so stale data doesn’t sit in cache forever.
interface CacheEntry<V> {
  value: V;
  expiresAt: number; // timestamp in ms
}

class LRUCacheWithTTL<K, V> {
  private inner: LRUCache<K, CacheEntry<V>>;

  constructor(capacity: number) {
    this.inner = new LRUCache<K, CacheEntry<V>>(capacity);
  }

  set(key: K, value: V, ttlMs: number): void {
    this.inner.set(key, {
      value,
      expiresAt: Date.now() + ttlMs,
    });
  }

  get(key: K): V | undefined {
    const entry = this.inner.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // Expired — treat as a miss and clean up
      this.inner.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

// Usage: cache GitHub repo data for 5 minutes
const repoCache = new LRUCacheWithTTL<string, unknown>(200);

async function getGithubRepo(owner: string, repo: string) {
  const key = `${owner}/${repo}`;
  const cached = repoCache.get(key);
  if (cached) return cached;

  const data = await fetch(`https://api.github.com/repos/${owner}/${repo}`).then(
    (r) => r.json()
  );
  repoCache.set(key, data, 5 * 60 * 1000); // Cache for 5 minutes
  return data;
}
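To see the expiry rule in isolation, the same lazy-expiration check can be sketched with an injectable clock. The `now` parameter here is purely an illustration device (not part of the class above); it makes expiry testable without real waiting:

```typescript
// Standalone sketch of lazy TTL expiration, with an injectable clock.
// `now` is an illustration device, not part of the article's LRUCacheWithTTL.
interface TimedEntry<V> {
  value: V;
  expiresAt: number; // timestamp in ms
}

class TTLMap<K, V> {
  private entries = new Map<K, TimedEntry<V>>();

  // Defaults to the real clock; tests can pass a fake one
  constructor(private now: () => number = Date.now) {}

  set(key: K, value: V, ttlMs: number): void {
    this.entries.set(key, { value, expiresAt: this.now() + ttlMs });
  }

  get(key: K): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      // Expired: treat as a miss and clean up lazily, just like the LRU+TTL class
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```

Note that expiration is lazy in both versions: an expired entry is only removed when someone asks for it, which is usually fine because the LRU bound caps memory regardless.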
Real-World Use Case 3: NestJS with a Caching Service
In a NestJS application, you’d typically wrap the cache in an injectable service:
import { Injectable } from "@nestjs/common";

@Injectable()
export class CacheService<K = string, V = unknown> {
  private cache: LRUCache<K, V>;

  constructor(capacity = 1000) {
    this.cache = new LRUCache<K, V>(capacity);
  }

  get(key: K): V | undefined {
    return this.cache.get(key);
  }

  set(key: K, value: V): void {
    this.cache.set(key, value);
  }

  invalidate(key: K): boolean {
    return this.cache.delete(key);
  }
}

import { Controller, Get, Param } from "@nestjs/common";
import { CacheService } from "./cache.service";
import { UsersService, User } from "./users.service"; // assuming User is exported here

@Controller("users")
export class UsersController {
  constructor(
    private readonly cache: CacheService<string, User>,
    private readonly users: UsersService
  ) {}

  @Get(":id")
  async getUser(@Param("id") id: string) {
    const key = `user:${id}`;

    const cached = this.cache.get(key);
    if (cached) return cached;

    const user = await this.users.findById(id);
    this.cache.set(key, user);
    return user;
  }
}
Redis and LRU: Production-Grade Caching
When your application scales beyond a single server, an in-memory cache isn’t enough — you need a shared, distributed cache. That’s where Redis comes in. Redis natively supports LRU eviction as a memory management policy.
How Redis Implements LRU
Redis uses an approximated LRU algorithm (not exact LRU) for performance reasons. When memory is full and maxmemory-policy is set to an LRU variant, Redis samples a small set of keys and evicts the one with the oldest lru_clock timestamp. It’s slightly less accurate than a true LRU but much faster and memory-efficient.
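The sampling idea can be sketched in a few lines. This is only an illustration of the approach; Redis's actual implementation uses a shared 24-bit LRU clock and an eviction pool, not this exact logic:

```typescript
// Sketch of approximated (sampled) LRU eviction, in the spirit of Redis.
// Illustrative only: on eviction, compare a small random sample of keys
// instead of scanning every key for the true least-recently-used one.
class SampledLRU<K, V> {
  private store = new Map<K, { value: V; lastAccess: number }>();
  private clock = 0; // logical clock, bumped on every access

  constructor(private capacity: number, private sampleSize = 5) {}

  get(key: K): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    entry.lastAccess = ++this.clock;
    return entry.value;
  }

  set(key: K, value: V): void {
    if (!this.store.has(key) && this.store.size >= this.capacity) {
      this.evictOne();
    }
    this.store.set(key, { value, lastAccess: ++this.clock });
  }

  get size(): number {
    return this.store.size;
  }

  private evictOne(): void {
    // Sample up to `sampleSize` random keys and evict the one with the
    // oldest lastAccess. Cheaper than scanning all keys; usually close
    // enough to true LRU for caching workloads.
    const keys = [...this.store.keys()];
    let victim: K | undefined;
    let oldest = Infinity;
    for (let i = 0; i < Math.min(this.sampleSize, keys.length); i++) {
      const k = keys[Math.floor(Math.random() * keys.length)];
      const age = this.store.get(k)!.lastAccess;
      if (age < oldest) {
        oldest = age;
        victim = k;
      }
    }
    if (victim !== undefined) this.store.delete(victim);
  }
}
```

The trade-off mirrors Redis's `maxmemory-samples` setting: a larger sample approximates true LRU more closely, at the cost of more work per eviction.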
Configuring Redis LRU
In redis.conf:
# Set the maximum memory Redis can use
maxmemory 256mb
# Eviction policy
# allkeys-lru → evict ANY key using LRU (recommended for pure cache use)
# volatile-lru → evict only keys with TTL set, using LRU
maxmemory-policy allkeys-lru
# Number of samples to check for LRU (higher = more accurate, more CPU)
maxmemory-samples 10
Or at runtime via CLI:
redis-cli CONFIG SET maxmemory 256mb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
Using Redis as an LRU Cache from TypeScript
import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });
await redis.connect();

// --- SET with TTL ---
async function cacheSet(key: string, value: unknown, ttlSeconds = 300): Promise<void> {
  await redis.setEx(key, ttlSeconds, JSON.stringify(value));
}

// --- GET ---
async function cacheGet<T>(key: string): Promise<T | null> {
  const raw = await redis.get(key);
  if (!raw) return null;
  return JSON.parse(raw) as T;
}

// --- DELETE (cache invalidation) ---
async function cacheDelete(key: string): Promise<void> {
  await redis.del(key);
}
Real Example: Caching Product Listings
import express from "express";
import { createClient } from "redis";

const app = express();
const redis = createClient({ url: "redis://localhost:6379" });
await redis.connect();

interface Product {
  id: string;
  name: string;
  price: number;
}

// Simulated DB fetch
async function fetchProductsFromDB(category: string): Promise<Product[]> {
  await new Promise((r) => setTimeout(r, 200)); // Simulate DB latency
  return [
    { id: "1", name: `${category} Item A`, price: 29.99 },
    { id: "2", name: `${category} Item B`, price: 49.99 },
  ];
}

app.get("/products/:category", async (req, res) => {
  const { category } = req.params;
  const cacheKey = `products:${category}`;

  // 1. Try Redis cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    res.set("X-Cache", "HIT");
    return res.json(JSON.parse(cached));
  }

  // 2. Cache miss — query DB
  const products = await fetchProductsFromDB(category);

  // 3. Store in Redis with 10-minute TTL
  // Redis will evict this via LRU if memory pressure is high
  await redis.setEx(cacheKey, 600, JSON.stringify(products));

  res.set("X-Cache", "MISS");
  res.json(products);
});

app.listen(3000);
Cache Invalidation with Redis
// Invalidate single key when a product is updated
async function onProductUpdated(productId: string, category: string): Promise<void> {
await redis.del(`products:${category}`);
await redis.del(`product:${productId}`);
console.log(`Cache invalidated for product ${productId}`);
}
// Invalidate all keys matching a pattern (use with care in large datasets)
async function invalidateCategory(category: string): Promise<void> {
const keys = await redis.keys(`products:${category}*`);
if (keys.length > 0) {
await redis.del(keys);
}
}
Checking Redis LRU Stats
# Check memory usage and eviction stats
redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human|mem_fragmentation"
# Check how many keys have been evicted
redis-cli INFO stats | grep evicted_keys
# Check how long a key has been idle (seconds since last access)
redis-cli OBJECT IDLETIME mykey
LRU vs Other Eviction Policies
| Policy | Evicts | Best For |
|---|---|---|
| LRU | Least Recently Used | General caching, hot-data patterns |
| LFU | Least Frequently Used | Skewed access patterns (some items always popular) |
| FIFO | First In, First Out | Simple, fair eviction (not access-aware) |
| Random | Random item | Very low overhead when access pattern is uniform |
| TTL-based | Expired items first | Data with natural expiration (sessions, tokens) |
Redis also offers allkeys-lfu and volatile-lfu if LFU fits your workload better.
Common Pitfalls
1. Cache Stampede (Thundering Herd)
When a popular cached item expires, many concurrent requests may all miss the cache at the same time, hammering your database.
// Fix: use a mutex / single-flight pattern
const inflight = new Map<string, Promise<unknown>>();

async function getWithSingleFlight<T>(
  key: string,
  fetchFn: () => Promise<T>
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as T;

  // If already in-flight, wait for it
  if (inflight.has(key)) {
    return inflight.get(key) as Promise<T>;
  }

  // Start the fetch and share the promise
  const promise = fetchFn()
    .then(async (data) => {
      await redis.setEx(key, 300, JSON.stringify(data));
      return data;
    })
    .finally(() => {
      // Clean up even if the fetch fails, so a rejected promise
      // isn't served to future callers forever
      inflight.delete(key);
    });
  inflight.set(key, promise);
  return promise;
}
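The deduplication at the heart of this pattern works the same without Redis. A self-contained in-memory sketch, useful for observing that concurrent callers share one underlying fetch:

```typescript
// In-memory single-flight: concurrent calls for the same key share one
// underlying fetch. A Redis-free sketch of the pattern, for illustration.
const inflightLocal = new Map<string, Promise<unknown>>();

async function singleFlight<T>(key: string, fetchFn: () => Promise<T>): Promise<T> {
  const existing = inflightLocal.get(key);
  if (existing) return existing as Promise<T>;

  const promise = fetchFn().finally(() => {
    // Clean up whether the fetch succeeded or failed
    inflightLocal.delete(key);
  });
  inflightLocal.set(key, promise);
  return promise;
}
```

If three requests for the same key arrive while the first fetch is still pending, `fetchFn` runs once and all three callers resolve with the same result.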
2. Over-Caching
Don’t cache everything. Cache items that are:
- Expensive to compute or fetch (DB queries, API calls)
- Accessed frequently
- Unlikely to change often
Caching items that change every request is worse than no cache — you pay the overhead without any benefit.
3. Unbounded Cache Growth
Always set a capacity limit. An LRU cache without a bound will grow until your process runs out of memory.
// Bad — no capacity limit
const cache = new Map<string, unknown>();
// Good — bounded LRU
const cache = new LRUCache<string, unknown>(1000);
Quick Performance Reference
| Operation | In-Memory LRU | Redis (local) | Redis (network) |
|---|---|---|---|
| GET | ~0.001ms | ~0.1ms | ~1-5ms |
| SET | ~0.001ms | ~0.1ms | ~1-5ms |
| Eviction | automatic | automatic | automatic |
| Persistence | ❌ | ✅ (RDB/AOF) | ✅ (RDB/AOF) |
| Shared across servers | ❌ | ✅ | ✅ |
Conclusion
LRU Cache is a beautifully simple idea with deep practical value. By keeping recently used data close and automatically evicting stale items, it strikes a natural balance between speed and memory efficiency.
Here’s what to take away:
- Build your own LRU with a HashMap + Doubly Linked List for O(1) get/set
- Use in-memory LRU for single-process, low-latency caching needs
- Use Redis with allkeys-lru for shared, distributed, production-grade caching
- Add TTL to prevent serving stale data
- Watch for stampedes and set sensible capacity limits
The next time your application is slow because it keeps hitting the database for the same popular records, you know exactly what to reach for.
Happy caching!