Build your first working AI app — wire up a question-answering bot powered by LangChain + Gemini
Now that you understand LangChain's core components, it's time to build something real: a basic Q&A app.
This project will help you wire up a complete prompt-to-answer pipeline from scratch. You'll create a workflow like this:
User input → Prompt template → Gemini response → Output answer
It's simple, powerful, and foundational for more complex chains later.
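Before wiring in the real model, the data flow above can be sketched offline with a stand-in for Gemini. Everything here (`fake_llm`, `TEMPLATE`, `answer`) is a hypothetical stub for illustration, not part of LangChain:

```python
# Hypothetical stub standing in for the Gemini call, so the flow runs offline
def fake_llm(prompt_text: str) -> str:
    return f"(model answer to: {prompt_text.splitlines()[-1]})"

# Same template shape used in the real app below
TEMPLATE = "Answer the following question in 2-3 concise sentences:\n\n{question}"

def answer(question: str) -> str:
    # user input -> prompt template -> model -> output answer
    return fake_llm(TEMPLATE.format(question=question))

print(answer("What is LangChain?"))
```

Swapping `fake_llm` for a real model call is all the chain abstraction does for you, plus reusable plumbing.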
pip install langchain langchain-google-genai
Set your Gemini API key:
export GOOGLE_API_KEY="your-key"
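If a shell export is inconvenient (for example, in a notebook), you can set the key from Python before constructing the model. "your-key" is a placeholder, not a real credential:

```python
import os

# Placeholder credential: replace "your-key" with your actual Gemini API key.
# setdefault keeps an existing value if the variable is already exported.
os.environ.setdefault("GOOGLE_API_KEY", "your-key")
```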
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Gemini model served through the langchain-google-genai integration
# (the older GooglePalm wrapper targets the deprecated PaLM API, not Gemini)
llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.7)

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question in 2–3 concise sentences:\n\n{question}",
)

chain = LLMChain(llm=llm, prompt=prompt)

response = chain.run("What is Retrieval-Augmented Generation?")
print(response)
This setup introduces key components: a Gemini LLM wrapper, a reusable prompt template, and a chain that connects them.
It also gives you a starting point to integrate this into chatbots, APIs, or web frontends.
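As a taste of the API direction, here is a minimal sketch of an HTTP endpoint using only the standard library. The `answer` function is a stub standing in for a real `chain.run` call, and the server/port setup is illustrative, not prescriptive:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub standing in for chain.run(question) so the sketch runs offline
def answer(question: str) -> str:
    return f"Stub answer for: {question}"

class QAHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse {"question": ...} from the request body
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"answer": answer(body["question"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in the background
server = HTTPServer(("127.0.0.1", 0), QAHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"question": "What is RAG?"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result["answer"])
server.shutdown()
```

In a production setting you would reach for FastAPI or Flask instead, but the request/response shape stays the same.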
In Chapter 3, we'll turn your stateless app into a conversation-ready chatbot using LangChain memory.
👉 Continue to Chapter 3: LangChain Memory & State — Building Context-Aware Conversations