LangChain + Valkey

Four cookbooks for using Valkey as the complete persistence layer for LangGraph agents: checkpointing, semantic caching, and vector search, all through the official langgraph-checkpoint-aws package.

01. Getting Started

Install langgraph-checkpoint-aws[valkey], connect to Valkey, and persist your first LangGraph agent with ValkeySaver.

Beginner · ~15 min · Python
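The checkpointing pattern behind ValkeySaver can be sketched without any external service. Below is a toy in-memory checkpointer (all names are illustrative, not the langgraph-checkpoint-aws API): each conversation thread's state is saved under a thread_id so a later turn can resume from the latest snapshot. ValkeySaver does the same thing with Valkey as the backing store, so state survives process restarts.

```python
# Toy sketch of the checkpointing idea behind ValkeySaver.
# A real saver serializes state into Valkey; a dict stands in here.
class ToyCheckpointer:
    def __init__(self):
        self._store = {}  # thread_id -> list of state snapshots

    def put(self, thread_id, state):
        """Append a checkpoint for this conversation thread."""
        self._store.setdefault(thread_id, []).append(dict(state))

    def get(self, thread_id):
        """Return the latest checkpoint, or None if the thread is new."""
        snapshots = self._store.get(thread_id)
        return snapshots[-1] if snapshots else None

saver = ToyCheckpointer()
saver.put("thread-1", {"messages": ["hi"]})
saver.put("thread-1", {"messages": ["hi", "hello!"]})
print(saver.get("thread-1"))  # latest snapshot for the thread
```

Because every turn is keyed by thread_id, two users (or two browser tabs) get independent histories, and crashing mid-conversation loses nothing past the last checkpoint.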
02. LLM Response Caching

Cache expensive Bedrock LLM calls with ValkeyCache. Benchmark cache miss (~4s) vs cache hit (~1ms) and slash your inference costs.

Intermediate · ~20 min · Python
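The miss-vs-hit gap comes from skipping the model call entirely when a prompt repeats. Here is a minimal exact-match sketch of that idea (toy names, not the ValkeyCache API; the cookbook's semantic caching goes further by matching similar prompts, which this sketch omits):

```python
import hashlib

class ToyLLMCache:
    """Exact-match response cache keyed by a hash of the prompt."""
    def __init__(self):
        self._cache = {}

    def _key(self, prompt):
        return hashlib.sha256(prompt.encode()).hexdigest()

    def lookup(self, prompt):
        return self._cache.get(self._key(prompt))

    def update(self, prompt, response):
        self._cache[self._key(prompt)] = response

def cached_call(cache, model, prompt):
    """Return a cached response on a hit; call the model only on a miss."""
    hit = cache.lookup(prompt)
    if hit is not None:
        return hit                    # hit path: no inference cost
    response = model(prompt)          # miss path: full inference latency
    cache.update(prompt, response)
    return response
```

A shared Valkey-backed cache extends this across processes, so one worker's expensive Bedrock call benefits every other worker asking the same question.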
03. Semantic Search with ValkeyStore

Store documents with vector embeddings and search by meaning using HNSW indexes, BedrockEmbeddings, and ValkeyStore.

Intermediate · ~20 min · Python
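Search-by-meaning boils down to embedding the query and ranking stored document vectors by similarity; an HNSW index makes that ranking fast at scale instead of scanning every vector. A brute-force cosine-similarity sketch of the same ranking (illustrative only, with tiny hand-written vectors standing in for BedrockEmbeddings output):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(index, query_vec, k=2):
    """Brute-force top-k neighbors by cosine similarity.
    An HNSW index returns (approximately) the same ranking in sub-linear time."""
    scored = [(cosine(vec, query_vec), doc) for doc, vec in index]
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

index = [("reset password", [1.0, 0.0]),
         ("order pizza", [0.0, 1.0]),
         ("change login", [1.0, 0.2])]
print(search(index, [1.0, 0.1]))  # documents closest in meaning rank first
```

The trade-off the cookbook exercises: brute force is exact but O(n) per query, while HNSW is approximate but scales to millions of stored embeddings.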
04. Full Agent: All Three Components

Wire ValkeySaver, ValkeyStore, and ValkeyCache together in an IT help desk agent that combines checkpointing, semantic response caching, and long-term memory search.

Advanced · ~25 min · Python
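The division of labor among the three components can be sketched as a single toy agent turn (plain dicts stand in for the Valkey-backed classes; every name below is illustrative, not the real API):

```python
def handle_turn(thread_id, prompt, *, checkpoints, store, cache, model):
    """One agent turn showing the three persistence roles.

    checkpoints: per-thread conversation state   (ValkeySaver's role)
    store:       long-term shared knowledge      (ValkeyStore's role)
    cache:       previously answered prompts     (ValkeyCache's role)
    """
    state = checkpoints.get(thread_id, {"messages": []})
    answer = cache.get(prompt)              # reuse an earlier answer if any
    if answer is None:
        context = store.get("kb", "")       # stand-in for vector retrieval
        answer = model(prompt, context)     # only pay for inference on a miss
        cache[prompt] = answer
    state["messages"] = state["messages"] + [prompt, answer]
    checkpoints[thread_id] = state          # resume this thread later
    return answer
```

The key design point the cookbook builds on: the cache and store are shared across all threads, while checkpoints are scoped per conversation, so repeated help desk questions are answered once and every user still keeps their own history.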

🎮 Try It Live

See LangGraph checkpointing, LLM caching, and semantic search in action with Valkey.

Open Interactive Demo
⭐ LangGraph on GitHub · 📦 valkeyforai repo