How It Works: In-Memory & Redis‑Backed Caching for Performance
Updated: 2025-08-24
Summary
Cache hot data to cut latency and offload databases. Use in‑process memory for microsecond reads; use Redis for cross‑pod sharing, eviction policies, and persistence options.
When to Use Which
- In‑memory: ultra‑hot keys, per‑pod working set, no cross‑pod consistency needed.
- Redis: shared cache across pods, rate limits, sessions, feature flags, large TTL windows.
- Both: check local first → fall back to Redis → DB.
Core Patterns
- Read‑through: miss → load DB → set cache → return.
- Write‑through: on write, update DB and cache together.
- Write‑behind: enqueue write, update DB later (careful with loss on crash).
- Negative caching: cache 404s briefly to avoid repeated misses (write‑through and negative caching are sketched after this list).
- Cache stampede protection: single‑flight + locks + jitter on TTL.
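Write‑through and negative caching, sketched in Go under the same assumptions as the example in the next section (it reuses the local, rdb, dbLoadUser, and mustJSON names defined there; dbSaveUser, errNotFound, the tombstone sentinel, and User.ID are hypothetical):
var errNotFound = errors.New("user not found") // assumed sentinel from the DB layer

const tombstone = "__nil__" // hypothetical marker for negative caching

// Write-through: persist to the DB first, then refresh both cache tiers.
func SaveUser(ctx context.Context, u User) error {
	if err := dbSaveUser(ctx, u); err != nil { // dbSaveUser is hypothetical
		return err
	}
	key := "user:" + u.ID
	local.Set(key, u, time.Minute)
	return rdb.Set(ctx, key, mustJSON(u), 5*time.Minute).Err()
}

// Negative caching: on a DB miss, store a short-lived tombstone so repeated
// lookups for the same missing ID don't keep hitting the database.
func loadWithNegativeCache(ctx context.Context, id string) (User, error) {
	key := "user:" + id
	if s, err := rdb.Get(ctx, key).Result(); err == nil {
		if s == tombstone {
			return User{}, errNotFound // known-missing, answered from cache
		}
		var u User
		if err := json.Unmarshal([]byte(s), &u); err == nil {
			return u, nil
		}
	}
	u, err := dbLoadUser(ctx, id)
	if errors.Is(err, errNotFound) {
		rdb.Set(ctx, key, tombstone, 30*time.Second) // brief negative TTL
		return User{}, err
	}
	if err != nil {
		return User{}, err
	}
	rdb.Set(ctx, key, mustJSON(u), 5*time.Minute)
	return u, nil
}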
Go: local TTL cache + singleflight + Redis fallback
// go.mod: github.com/patrickmn/go-cache, github.com/redis/go-redis/v9, golang.org/x/sync/singleflight
// stdlib imports assumed: context, encoding/json, math/rand, time
var (
	local = gocache.New(5*time.Minute, 10*time.Minute) // in-memory: default TTL, cleanup interval
	rdb   = redis.NewClient(&redis.Options{Addr: "redis:6379"})
	sf    = singleflight.Group{}
)

func GetUser(ctx context.Context, id string) (User, error) {
	key := "user:" + id
	if v, ok := local.Get(key); ok {
		return v.(User), nil
	}
	// singleflight collapses concurrent loads within this process;
	// the Redis mutex below limits stampedes across pods.
	v, err, _ := sf.Do(key, func() (any, error) {
		lockKey := "lock:" + key
		got, _ := rdb.SetNX(ctx, lockKey, 1, 5*time.Second).Result()
		if got {
			defer rdb.Del(ctx, lockKey) // release only a lock we actually acquired
		} else {
			// another pod is loading; give it a moment, then re-check Redis
			time.Sleep(100 * time.Millisecond)
		}
		// check Redis before the DB
		if b, err := rdb.Get(ctx, key).Bytes(); err == nil {
			var u User
			if err := json.Unmarshal(b, &u); err == nil {
				local.Set(key, u, time.Minute)
				return u, nil
			}
		}
		// load from the DB, then set both caches; jitter the Redis TTL
		u, err := dbLoadUser(ctx, id)
		if err != nil {
			return User{}, err
		}
		local.Set(key, u, time.Minute)
		ttl := time.Minute + time.Duration(rand.Intn(30))*time.Second
		rdb.Set(ctx, key, mustJSON(u), ttl)
		return u, nil
	})
	if err != nil {
		return User{}, err
	}
	return v.(User), nil
}
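Two details worth calling out: the lock is released only when this pod actually acquired it (an unconditional DEL could drop a mutex another pod still holds mid-load), and the Redis TTL is jittered so a popular key's entries don't all expire in the same instant. An ownership-checked release is sketched in the locking section below.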
Redis Keys, TTLs, and Locking
# Basic ops
SET user:123 '{"id":123,"name":"Ada"}' EX 300
GET user:123
DEL user:123
# Simple mutex (avoid stampede); NX + EX in one atomic command, expires in 5s
SET lock:user:123 1 NX EX 5
# Jitter TTL to avoid thundering herds
# e.g., set EX randomly between 240..360 seconds
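The mutex above has a caveat: a plain DEL can release a lock that already expired and was re-acquired by another pod. A common fix, sketched here with go-redis (acquireLock/releaseLock are illustrative names), stores a random token as the lock value and releases via a Lua script that deletes the key only if the token still matches:
var unlockScript = redis.NewScript(`
if redis.call("GET", KEYS[1]) == ARGV[1] then
  return redis.call("DEL", KEYS[1])
end
return 0
`)

// acquireLock issues SET key token NX EX 5: an atomic acquire with expiry.
func acquireLock(ctx context.Context, key, token string) (bool, error) {
	return rdb.SetNX(ctx, key, token, 5*time.Second).Result()
}

// releaseLock deletes the key only if this caller still owns it.
func releaseLock(ctx context.Context, key, token string) error {
	return unlockScript.Run(ctx, rdb, []string{key}, token).Err()
}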
Eviction, Memory, and Sizing
- Choose an eviction policy: allkeys-lru, allkeys-lfu, or volatile-ttl (evicts only keys with TTLs). Monitor hit rate and evictions.
- Keep values compact: JSON with trimmed fields or MessagePack.
- Don’t cache everything—target the 20% of reads causing 80% of load.
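Eviction is configured on the server; a typical setup for a pure-cache instance (the values here are placeholders, not recommendations):
# redis.conf, or CONFIG SET at runtime
maxmemory 2gb
maxmemory-policy allkeys-lfu
# watch effectiveness
INFO stats     # keyspace_hits, keyspace_misses, evicted_keys
INFO memory    # used_memory, maxmemory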
Invalidation Strategies
- Key‑scoped: `DEL user:123` on update.
- Versioned keys: `v2:user:123`; bump prefix on incompatible schema.
- Time‑based: short TTLs + background refresh jobs for critical keys.
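A sketch of versioned keys in Go: keep a version counter in Redis (seed it once, e.g. SET ver:user 1), compose it into every cache key, and bump it to invalidate the whole namespace at once; the helper names are illustrative.
// userKey folds the current schema version into the cache key.
func userKey(ctx context.Context, id string) (string, error) {
	ver, err := rdb.Get(ctx, "ver:user").Int64()
	if err != nil {
		return "", err // includes redis.Nil if the counter was never seeded
	}
	return fmt.Sprintf("v%d:user:%s", ver, id), nil
}

// bumpUserVersion invalidates every user entry without a single DEL:
// old vN:user:* keys are never read again and age out via their TTLs.
func bumpUserVersion(ctx context.Context) error {
	return rdb.Incr(ctx, "ver:user").Err()
}
In practice the version is usually cached in-process for a few seconds so each lookup doesn't pay an extra Redis round-trip.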
Observability
- Export hit/miss/evictions/latency metrics per tier (local/Redis/DB).
- Sample big payloads; log key patterns not raw values.
- Alert on miss storms and rising Redis CPU/mem.
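One way to export the per-tier counters, sketched with prometheus/client_golang (metric name and labels are assumptions):
import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// cache_requests_total{tier="local|redis|db", result="hit|miss"}
var cacheRequests = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "cache_requests_total",
		Help: "Cache lookups by tier and result.",
	},
	[]string{"tier", "result"},
)

// Instrument each tier in GetUser, for example:
//   cacheRequests.WithLabelValues("local", "hit").Inc()
//   cacheRequests.WithLabelValues("redis", "miss").Inc()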
Security
- Don’t cache secrets or raw PII. Redact.
- mTLS to Redis in prod; rotate credentials; restrict network access.
- For sessions/tokens in Redis: short TTLs, server‑side invalidation on logout.
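go-redis takes a standard tls.Config, so mTLS is mostly plumbing; a minimal sketch (file paths and the 6380 port are placeholders):
// stdlib: crypto/tls, crypto/x509, log, os
cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
if err != nil {
	log.Fatal(err)
}
caPEM, err := os.ReadFile("ca.crt")
if err != nil {
	log.Fatal(err)
}
caPool := x509.NewCertPool()
caPool.AppendCertsFromPEM(caPEM)

rdb := redis.NewClient(&redis.Options{
	Addr: "redis:6380",
	TLSConfig: &tls.Config{
		MinVersion:   tls.VersionTLS12,
		Certificates: []tls.Certificate{cert}, // client cert for mTLS
		RootCAs:      caPool,                  // verify the server against our CA
	},
})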
Pitfalls
- Global invalidation blasts that melt the DB; stagger refreshes in phases.
- Over‑serialization cost exceeds query savings.
- Using Redis as your primary database accidentally.