
Conversation Memory for Chatbots

Scalable, low-latency session storage for chatbot and agent conversations. Five Valkey data structures (LIST, HASH, JSON, STRING, STREAM) plus FT.SEARCH vector search power a complete AI memory system.

Chatbots · Sessions · Vector Search · Semantic Cache · Agent State

Cookbooks

Five step-by-step guides, each introducing a new Valkey data structure for a different aspect of conversation memory

Live Demo

Watch Valkey commands fire in real time as conversations are stored, searched, and cached

How Valkey Powers Conversation Memory

CHAT HISTORY
RPUSH chat:{session} {msg}
LRANGE chat:{session} -20 -1
EXPIRE chat:{session} 3600

LIST for ordered messages. O(1) append, O(N) tail read. ~0.1ms.
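The commands above can be wrapped in two small helpers. This is a minimal sketch assuming any client that exposes `rpush`/`lrange`/`expire` in the valkey-py style (the cookbooks themselves use Valkey GLIDE, whose method names differ slightly); `append_message` and `recent_messages` are hypothetical helper names.

```python
def append_message(client, session_id, message, ttl=3600):
    """Append one message to the session's history and refresh its TTL."""
    key = f"chat:{session_id}"
    client.rpush(key, message)   # O(1) append to the tail of the LIST
    client.expire(key, ttl)      # sliding expiry: idle sessions disappear

def recent_messages(client, session_id, n=20):
    """Read the last n messages; negative LRANGE indices count from the tail."""
    return client.lrange(f"chat:{session_id}", -n, -1)
```

Refreshing the TTL on every write means only abandoned sessions are evicted, while active conversations never expire mid-chat.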

SESSION METADATA
HSET meta:{session} user_id ..
HINCRBY meta:{session} tokens 150
HGETALL meta:{session}

HASH for metadata. Atomic field updates. ~0.1ms.
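A sketch of the metadata pattern, again assuming a valkey-py-style client (`hset` with a `mapping` argument, `hincrby`, `hgetall`); the function names are illustrative, not from the cookbooks.

```python
def init_session(client, session_id, user_id):
    # One HASH per session; all fields land in a single HSET call
    client.hset(f"meta:{session_id}", mapping={"user_id": user_id, "tokens": 0})

def add_token_usage(client, session_id, tokens):
    # HINCRBY is atomic server-side, so concurrent writers never lose an update
    return client.hincrby(f"meta:{session_id}", "tokens", tokens)

def session_meta(client, session_id):
    return client.hgetall(f"meta:{session_id}")
```

The atomic increment is the point: two workers billing tokens to the same session can both call `add_token_usage` without a read-modify-write race.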

SEMANTIC SEARCH
JSON.SET mem:{id} $ {doc}
FT.SEARCH memory_idx
  "*=>[KNN 5 @emb $vec]"

JSON + HNSW vector index. Find by meaning. ~1-3ms.
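Server-side, the HNSW index answers the KNN clause approximately in ~1-3ms. The brute-force sketch below shows the computation it stands in for: rank stored embeddings by cosine similarity to the query vector and keep the top k (`knn` is a hypothetical illustration, not the actual index algorithm).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def knn(query_vec, memories, k=5):
    """memories: {mem_id: embedding}. Return the k ids most similar to the
    query -- the same result the KNN clause retrieves via the HNSW graph."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, memories[m]),
                    reverse=True)
    return ranked[:k]
```

HNSW trades this O(N) scan for a logarithmic graph walk, which is what keeps search latency flat as the memory store grows.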

SEMANTIC CACHE
FT.SEARCH cache_idx KNN 1
  → score ≥ 0.90 → HIT
JSON.SET llmcache:{id} $ ..

Cache LLM responses by meaning. Cut costs 40%+. ~1-3ms.
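The hit/miss decision above reduces to a similarity threshold. A minimal sketch, with the cache modeled as in-memory `(embedding, response)` pairs rather than the real `llmcache:{id}` JSON documents:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def cache_lookup(query_vec, cache, threshold=0.90):
    """cache: list of (embedding, llm_response) pairs. Return the cached
    response if the nearest entry clears the threshold, else None (miss)."""
    if not cache:
        return None
    best_vec, best_resp = max(cache, key=lambda e: cosine(query_vec, e[0]))
    return best_resp if cosine(query_vec, best_vec) >= threshold else None
```

On a miss, the caller invokes the LLM and stores the new `(embedding, response)` pair, so rephrasings of the same question ("how do I reset my password" vs "password reset steps") hit the cache instead of the model.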

AGENT CHECKPOINTS
HSET agent:state:{run} step 2
HGETALL agent:state:{run}
SET tool:cache:{hash} .. EX 300

HASH for state, STRING+TTL for tool cache. Resume on crash.
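The checkpoint-and-cache pattern sketched, assuming a valkey-py-style client (`hset`/`hgetall`/`get`/`set(..., ex=ttl)`); the helper names and the `compute` callback are hypothetical.

```python
def checkpoint(client, run_id, step, state):
    # Persist progress so a crashed agent can resume mid-plan
    client.hset(f"agent:state:{run_id}", mapping={"step": step, **state})

def resume(client, run_id):
    # Returns the last saved step, or 0 for a fresh run
    saved = client.hgetall(f"agent:state:{run_id}")
    return int(saved["step"]) if saved else 0

def cached_tool_call(client, args_hash, compute, ttl=300):
    # STRING + TTL memoizes an idempotent tool call for 5 minutes
    key = f"tool:cache:{args_hash}"
    hit = client.get(key)
    if hit is not None:
        return hit
    result = compute()
    client.set(key, result, ex=ttl)
    return result
```

After a crash, the agent calls `resume` and re-enters its loop at the saved step; repeated tool calls within the TTL window are served from the STRING cache instead of re-executing.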

EVENT LOG
XADD agent:log:{run} *
  action "search" details ..
XREAD STREAMS agent:log:{run} 0

STREAM for ordered audit trail. Replay and debug. ~0.1ms.
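The append/replay pair sketched below assumes a valkey-py-style client where `xadd` takes a field dict and auto-assigns the entry id (the `*` in the raw command), and reading from id `0` returns the whole stream; helper names are illustrative.

```python
def log_event(client, run_id, action, **details):
    # The server assigns a monotonic entry id, preserving order under
    # concurrent writers
    return client.xadd(f"agent:log:{run_id}", {"action": action, **details})

def replay(client, run_id):
    # Reading from id 0 returns the full audit trail in insertion order
    return client.xread({f"agent:log:{run_id}": "0"})
```

Because entries are immutable and ordered, a failed run can be replayed step by step to see exactly which action and arguments preceded the failure.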

Complete source code on GitHub

All cookbooks use Valkey GLIDE (official client) with Python examples. Compatible with ElastiCache for Valkey.