Two npm packages for Valkey-backed LLM caching: semantic caching with vector search, and multi-tier agent caching with LangGraph support. Written in TypeScript, with built-in OpenTelemetry and Prometheus instrumentation.
Valkey-native semantic cache · LangChain & Vercel AI adapters · valkey-search required
Install, connect to Valkey, write an embed function, and serve semantically similar prompts from cache.
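The core lookup idea can be sketched without the package: embed the incoming prompt, score it against cached entries by cosine similarity, and return the cached response when the best score clears a threshold. This is a minimal, self-contained illustration; the names (`CacheEntry`, `lookup`) and the in-memory scan are assumptions, not the package's API (the real package delegates the vector search to valkey-search).

```typescript
// Illustrative types only -- not the package's actual API surface.
type CacheEntry = { embedding: number[]; response: string };

// Cosine similarity between two equal-length embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the best-scoring cached response, or null if nothing clears
// the similarity threshold (i.e. a cache miss).
function lookup(
  entries: CacheEntry[],
  query: number[],
  threshold = 0.85,
): string | null {
  let best: CacheEntry | null = null;
  let bestScore = -1;
  for (const e of entries) {
    const s = cosine(e.embedding, query);
    if (s > bestScore) { bestScore = s; best = e; }
  }
  return best !== null && bestScore >= threshold ? best.response : null;
}
```

A near-duplicate prompt embeds close to a cached one and is served from cache; an unrelated prompt falls below the threshold and goes to the model.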
BetterDBSemanticCache for LangChain and createSemanticCacheMiddleware for the Vercel AI SDK.
Threshold tuning, uncertain hit strategies, per-category overrides, TTL management, and Prometheus metrics.
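One common shape for "uncertain hit" handling is a two-threshold band: scores above the hit threshold are served from cache, scores in the middle band are flagged for a fallback strategy (e.g. re-query the model and compare), and everything below is a miss. This sketch is an assumption about the general technique, not the package's configuration schema.

```typescript
// Illustrative decision bands for semantic-cache scoring.
type HitDecision = "hit" | "uncertain" | "miss";

// hitThreshold / uncertainThreshold are hypothetical tuning knobs:
// scores in [uncertainThreshold, hitThreshold) fall into the band
// where an "uncertain hit" strategy applies.
function classify(
  score: number,
  hitThreshold = 0.9,
  uncertainThreshold = 0.8,
): HitDecision {
  if (score >= hitThreshold) return "hit";
  if (score >= uncertainThreshold) return "uncertain";
  return "miss";
}
```

Per-category overrides then amount to swapping in different threshold pairs per prompt category (e.g. stricter bands for billing questions than for small talk).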
Multi-tier agent cache · LangGraph BetterDBSaver · vanilla Valkey 7+, no modules
LLM cache, tool cache, and session store on vanilla Valkey 7+: three tiers, one connection.
Exact-match LLM caching with cost tracking. Per-tool TTL policies and toolEffectiveness() recommendations.
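Exact-match caching with cost tracking can be pictured as: key the cache on a hash of (model, prompt), store the response together with what the original call cost, and credit that amount as "saved" on every hit. The class below is an in-memory analogue under those assumptions; the real package keys into Valkey, and `ExactMatchCache` is an illustrative name, not its API.

```typescript
import { createHash } from "node:crypto";

// In-memory analogue of an exact-match LLM cache with cost accounting.
class ExactMatchCache {
  private store = new Map<string, { response: string; costUsd: number }>();
  // Running total of spend avoided by serving hits instead of calling the model.
  savedUsd = 0;

  // Deterministic key: identical (model, prompt) pairs always collide.
  private key(model: string, prompt: string): string {
    return createHash("sha256").update(`${model}\u0000${prompt}`).digest("hex");
  }

  set(model: string, prompt: string, response: string, costUsd: number): void {
    this.store.set(this.key(model, prompt), { response, costUsd });
  }

  get(model: string, prompt: string): string | null {
    const hit = this.store.get(this.key(model, prompt));
    if (!hit) return null;
    this.savedUsd += hit.costUsd; // each hit avoids re-spending the call cost
    return hit.response;
  }
}
```

Unlike the semantic tier, this tier only fires on byte-identical prompts, which makes it safe as the first lookup before any vector search.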
Per-field TTL, sliding-window refresh, and multi-step agent state patterns.
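Sliding-window refresh means each read pushes the entry's expiry forward, so an actively used session never expires mid-conversation; in Valkey terms this is a re-`PEXPIRE`-on-access pattern. The class below is a minimal single-entry sketch of that behavior; the name and injectable clock are illustrative, not the package's API.

```typescript
// One cache entry whose TTL window slides forward on every successful read.
class SlidingEntry<T> {
  private expiresAt: number;

  // `now` is injectable so expiry behavior is testable without real time.
  constructor(private value: T, private ttlMs: number, now = Date.now()) {
    this.expiresAt = now + ttlMs;
  }

  get(now = Date.now()): T | null {
    if (now >= this.expiresAt) return null; // window elapsed: entry expired
    this.expiresAt = now + this.ttlMs;      // refresh the window on access
    return this.value;
  }
}
```

Per-field TTL generalizes this: each field of a session carries its own `ttlMs`, so short-lived scratch state can expire while long-lived identity fields persist.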
BetterDBSaver on vanilla Valkey 7+: no Redis 8, RedisJSON, or RediSearch required.