Four cookbooks for using Valkey as the complete persistence layer for LangGraph agents: checkpointing, semantic caching, and vector search, all through the official langgraph-checkpoint-aws package.
Install langgraph-checkpoint-aws[valkey], connect to Valkey, and persist your first LangGraph agent with ValkeySaver.
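A minimal quickstart might look like the sketch below. The import path and the `from_conn_string` constructor are assumptions based on the package name and on sibling LangGraph checkpointers; check the cookbook for the exact API, and note that it requires a running Valkey server.

```python
# Install first (shell): pip install "langgraph-checkpoint-aws[valkey]"
from langgraph.graph import StateGraph, MessagesState, START
from langgraph_checkpoint_aws import ValkeySaver  # assumed import path

def respond(state: MessagesState):
    # Placeholder node; the cookbook calls a Bedrock LLM here.
    return {"messages": [("ai", "hello")]}

builder = StateGraph(MessagesState)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")

# The checkpointer persists every step of the conversation in Valkey,
# keyed by thread_id, so the agent can resume where it left off.
# from_conn_string is an assumed constructor, mirroring other savers.
with ValkeySaver.from_conn_string("valkey://localhost:6379") as saver:
    graph = builder.compile(checkpointer=saver)
    config = {"configurable": {"thread_id": "demo-thread"}}
    graph.invoke({"messages": [("user", "hi")]}, config)
```

Re-invoking with the same `thread_id` resumes the stored conversation instead of starting fresh.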
Cache expensive Bedrock LLM calls with ValkeyCache. Benchmark a cache miss (~4 s) against a cache hit (~1 ms) and slash your inference costs.
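The caching pattern can be sketched as follows. LangGraph's node-level `CachePolicy` and `compile(cache=...)` are real APIs; the ValkeyCache import path and constructor shown here are assumptions, and a live Valkey server plus Bedrock access are required to see the miss/hit timings.

```python
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.types import CachePolicy
from langgraph_checkpoint_aws import ValkeyCache  # assumed import path

def call_llm(state: MessagesState):
    # Expensive Bedrock call in the real cookbook; stubbed here.
    return {"messages": [("ai", "answer")]}

builder = StateGraph(MessagesState)
# Cache this node's result for an hour, keyed by its input state.
builder.add_node("call_llm", call_llm, cache_policy=CachePolicy(ttl=3600))
builder.add_edge(START, "call_llm")

cache = ValkeyCache("valkey://localhost:6379")  # assumed constructor
graph = builder.compile(cache=cache)
# First invoke: cache miss, full Bedrock round trip (~4 s).
# Identical second invoke: cache hit served from Valkey (~1 ms).
```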
Store documents with vector embeddings and search by meaning using HNSW indexes, BedrockEmbeddings, and ValkeyStore.
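As a sketch of the vector-search pattern: `put` and `search` follow LangGraph's standard BaseStore interface, while the ValkeyStore import path, its constructor, and the `index` options are assumptions; the Bedrock model id and embedding dimensions are illustrative.

```python
from langchain_aws import BedrockEmbeddings
from langgraph_checkpoint_aws import ValkeyStore  # assumed import path

embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")

# Store configured for semantic search backed by an HNSW vector index.
# The constructor and index options here are assumptions.
store = ValkeyStore(
    "valkey://localhost:6379",
    index={"embed": embeddings, "dims": 1024},
)

# put() writes a document and its embedding under a namespace;
# search() retrieves by meaning rather than by keyword.
store.put(("docs",), "reset-password", {"text": "How to reset a password"})
hits = store.search(("docs",), query="I forgot my login credentials")
```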
Wire ValkeySaver + ValkeyStore + ValkeyCache together in an IT help desk agent with semantic caching and checkpointing.
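Wiring all three together could look like this. `compile(checkpointer=..., store=..., cache=...)` are real LangGraph parameters; the three Valkey constructors and import paths are assumptions, and the help-desk logic is stubbed.

```python
from langgraph.graph import StateGraph, MessagesState, START
from langgraph_checkpoint_aws import (  # assumed import paths
    ValkeyCache,
    ValkeySaver,
    ValkeyStore,
)

url = "valkey://localhost:6379"
saver = ValkeySaver(url)   # conversation checkpoints (assumed constructor)
store = ValkeyStore(url)   # long-term knowledge base (assumed constructor)
cache = ValkeyCache(url)   # semantic LLM-response cache (assumed constructor)

def answer(state: MessagesState):
    # The cookbook searches the store for relevant KB articles,
    # then calls Bedrock (with cached responses) to draft a reply.
    return {"messages": [("ai", "ticket response")]}

builder = StateGraph(MessagesState)
builder.add_node("answer", answer)
builder.add_edge(START, "answer")

# One Valkey instance backs checkpointing, memory, and caching.
graph = builder.compile(checkpointer=saver, store=store, cache=cache)
```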
See LangGraph checkpointing, LLM caching, and semantic search in action with Valkey.
Open Interactive Demo