How to give your agents memory that survives across conversations using LangGraph's Store API

Checkpoints are not long-term memory

This is the most common LangGraph misconception. Checkpoints save the state of a single thread — they let a graph pause and resume mid-run. But they are tied to one conversation thread. When a user starts a new conversation, checkpoints from previous sessions are not accessible.

Long-term memory requires the Store API: a key-value store that lives outside any thread, accessible from any run, for any user.

Feature        | Checkpointer                      | Store
Scope          | Single thread                     | Cross-thread, cross-session
Purpose        | Pause/resume, short-term context  | Long-term facts, preferences, history
Auto-populated | Yes — every state update is saved | No — you call store.put() explicitly
Backends       | PostgreSQL, SQLite, Redis         | PostgreSQL, Redis (+ InMemoryStore for dev)
Typical use    | Agent resumes after interruption  | User preferences, learned facts, summaries

Store API basics

The Store organises data in namespaces — think of them as folders. Each item has a string key and a dict value.

from langgraph.store.memory import InMemoryStore
 
store = InMemoryStore()
 
# Write a memory
store.put(
    namespace=('user_preferences', 'user-123'),
    key='communication_style',
    value={'style': 'concise', 'format': 'bullet_points', 'updated': '2026-04-10'}
)
 
# Read it back
item = store.get(
    namespace=('user_preferences', 'user-123'),
    key='communication_style'
)
print(item.value)  # {'style': 'concise', ...}
 
# Search by namespace
results = store.search(('user_preferences', 'user-123'))  # namespace prefix is positional
for r in results:
    print(r.key, r.value)
 
# Delete
store.delete(namespace=('user_preferences', 'user-123'), key='communication_style')
 

Connecting Store to a LangGraph graph

Pass the store to compile(). Inside any node, receive it by declaring a keyword-only store parameter; LangGraph injects it automatically.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.store.base import BaseStore
from langgraph.store.memory import InMemoryStore
from langchain_core.messages import BaseMessage
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
 
store = InMemoryStore()
llm = ChatOpenAI(model='gpt-4o-mini')
 
class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    user_id: str
 
def chat_node(state: State, config: RunnableConfig, *, store: BaseStore):
    user_id = state['user_id']

    # Load long-term memories for this user (namespace prefix is positional)
    memories = store.search(('memories', user_id))
    memory_text = '\n'.join(
        f'- {m.value["fact"]}' for m in memories
    ) if memories else 'No memories yet.'
 
    system_prompt = (
        f'You are a helpful assistant.\n'
        f'What you remember about this user:\n{memory_text}'
    )
    msgs = [{'role': 'system', 'content': system_prompt}] + state['messages']
    response = llm.invoke(msgs)
 
    # Extract and save new facts (simplified — use a dedicated extraction step in prod)
    if 'my name is' in state['messages'][-1].content.lower():
        name = state['messages'][-1].content.split('my name is')[-1].strip().split()[0]
        store.put(
            namespace=('memories', user_id),
            key='name',
            value={'fact': f'User name is {name}'}
        )
 
    return {'messages': [response]}
 
builder = StateGraph(State)
builder.add_node('chat', chat_node)
builder.add_edge(START, 'chat')
builder.add_edge('chat', END)
 
graph = builder.compile(store=store)  # <-- pass store here
 
The store parameter is automatically injected into node functions by LangGraph when you include it as a keyword-only argument and pass the store to compile(). No extra wiring needed.

Production backend: PostgresStore

pip install langgraph-checkpoint-postgres psycopg
 
from langchain_core.messages import HumanMessage
from langgraph.store.postgres import PostgresStore

DB_URI = 'postgresql://user:password@localhost:5432/mydb'

# Drop-in replacement for InMemoryStore
with PostgresStore.from_conn_string(DB_URI) as store:
    store.setup()  # creates tables on first run
 
    graph = builder.compile(store=store)
 
    # Run the graph — memories persist across restarts
    result = graph.invoke(
        {'messages': [HumanMessage('My name is Alice.')], 'user_id': 'user-123'}
    )
 

Namespace design patterns

Namespaces are tuples of strings. Design them around your access patterns, not your data structure.

Pattern            | Namespace example                    | Use case
Per-user facts     | ('memories', user_id)                | User preferences, biography, past decisions
Per-user per-topic | ('memories', user_id, 'projects')    | User's project list, task history
Global knowledge   | ('knowledge', 'company_faq')         | Facts all agents share
Agent scratchpad   | ('scratchpad', agent_id, session_id) | Temporary working memory for a long session

Memory summarisation pattern

As memories accumulate, searching them becomes expensive and the context window fills up. Periodically summarise old memories into a single compressed entry.

async def summarise_memories(store, user_id: str, llm):
    """Compress all memories for a user into a single summary."""
    # search() returns at most 10 items by default, so raise the limit
    memories = await store.asearch(('memories', user_id), limit=100)
    if len(memories) < 20:
        return  # not enough to summarise yet

    facts = '\n'.join(f'- {m.value["fact"]}' for m in memories)
    summary = await llm.ainvoke(
        f'Summarise these facts about a user into 5-10 concise bullet points:\n{facts}'
    )

    # Delete old memories
    for m in memories:
        await store.adelete(('memories', user_id), key=m.key)

    # Write the compressed summary
    await store.aput(
        ('memories', user_id),
        key='summary',
        value={'fact': summary.content}
    )
 

Semantic search in Store

PostgresStore and some other backends support vector similarity search — find memories by meaning, not just by key lookup.

from langchain_openai import OpenAIEmbeddings
from langgraph.store.postgres import PostgresStore
 
embeddings = OpenAIEmbeddings(model='text-embedding-3-small')
 
with PostgresStore.from_conn_string(
    DB_URI,
    index={'embed': embeddings, 'dims': 1536}
) as store:
    store.setup()
 
    # Memories are automatically embedded on put()
    store.put(
        namespace=('memories', 'user-123'),
        key='preference_1',
        value={'fact': 'User prefers concise bullet-point answers over long paragraphs'}
    )
 
    # Semantic search — finds relevant memories even with different wording
    results = store.search(
        ('memories', 'user-123'),
        query='How should I format my response?',
        limit=3
    )
    for r in results:
        print(r.value['fact'], '| score:', r.score)