Make your app remember — support multi-turn interactions, follow-ups, and personalization with memory.
Out of the box, LLMs forget everything after each message. That's fine for isolated prompts, but frustrating in real conversations.
LangChain's memory modules fix this. They let you track past exchanges, maintain continuity, and build smarter chat systems.
LangChain supports several memory strategies, including:

- `ConversationBufferMemory` — stores the full conversation history verbatim.
- `ConversationBufferWindowMemory` — keeps only the last *k* exchanges.
- `ConversationSummaryMemory` — uses the LLM to summarize older messages.
- `ConversationTokenBufferMemory` — trims history to fit a token budget.

For this chapter, we'll use `ConversationBufferMemory`.
```python
from langchain.llms import GooglePalm
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = GooglePalm()
memory = ConversationBufferMemory()

# ConversationChain injects the running history into every prompt
chain = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True,
)

chain.run("Hi, I'm Mark.")
chain.run("What's my name?")
```
The second call will return something like:

"You just told me your name is Mark."
Boom — your app now has memory.
LangChain memory brings a human touch to LLM interactions.
In Chapter 4, we'll add tool use — so your LLM can calculate, search, or hit APIs as part of its reasoning.
👉 Continue to Chapter 4: Using LangChain Tools — Add Web Search, Math, and Custom Actions