Give your app real-world skills — connect to APIs, fetch results, and let your LLM take action.
LLMs can reason. But they can't calculate, search the web, or hit APIs — unless you give them tools.
LangChain lets you attach tools to your LLMs so they can take actions, not just give answers.
This is the foundation for agents, copilots, and advanced LLM workflows.
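Before reaching for LangChain, it helps to see that a "tool" is conceptually just a named function paired with a description the model can read when deciding what to call. Here is a minimal, framework-free sketch of that idea — the names (`TOOLS`, `call_tool`) are illustrative, not LangChain's API:

```python
# Minimal illustration of the "tool" idea: a named function plus a
# description the model reads when choosing what to call.
# These names are illustrative only -- not LangChain's actual API.

def multiply(expression: str) -> str:
    """Evaluate a simple product like '17 * 23'."""
    a, _, b = expression.partition("*")
    return str(int(a) * int(b))

TOOLS = {
    "calculator": {
        "func": multiply,
        "description": "Multiplies two integers, e.g. '17 * 23'.",
    },
}

def call_tool(name: str, arg: str) -> str:
    # A real agent would pick the tool by matching the description
    # against the user's request; here we call it directly.
    return TOOLS[name]["func"](arg)

print(call_tool("calculator", "17 * 23"))  # -> 391
```

LangChain wraps exactly this pattern: each tool carries a name and a description, and the LLM's job is to pick the right one and supply the input string.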
from langchain.agents import initialize_agent
from langchain.agents.agent_types import AgentType
from langchain.llms import OpenAI
from langchain.tools import DuckDuckGoSearchRun
# In recent LangChain releases, PythonREPLTool lives in langchain_experimental:
from langchain_experimental.tools.python.tool import PythonREPLTool

tools = [
    PythonREPLTool(),                    # for math or Python logic
    DuckDuckGoSearchRun(name="Search"),  # for live web lookups
]

llm = OpenAI()  # requires OPENAI_API_KEY in your environment

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("What is 17 * 23, and who won the 2023 World Cup?")
The agent will reason about the question, use the Python REPL tool to compute 17 * 23, call the search tool to look up the 2023 World Cup winner, and combine both results into a single answer.
LangChain agents follow a think → act → observe → repeat cycle.
This architecture allows your LLM to use tools as functions, enabling complex workflows.
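The cycle above can be sketched in plain Python. The `fake_llm` stub below scripts the decisions a real model would make at each turn; every name here is an illustrative assumption, not LangChain internals:

```python
# Hedged sketch of the think -> act -> observe -> repeat cycle behind a
# ReAct-style agent. "fake_llm" scripts what a real model would decide;
# the search tool returns a placeholder instead of a live result.

def fake_llm(scratchpad: list) -> dict:
    """Stand-in for the model: pick the next action from prior observations."""
    if not scratchpad:
        return {"action": "python", "input": "17 * 23"}
    if len(scratchpad) == 1:
        return {"action": "search", "input": "2023 World Cup winner"}
    return {"action": "finish",
            "input": f"17 * 23 = {scratchpad[0]}; winner: {scratchpad[1]}"}

TOOLS = {
    "python": lambda expr: str(eval(expr)),            # act: run code
    "search": lambda q: f"<search result for {q!r}>",  # act: placeholder lookup
}

def run_agent(question: str) -> str:
    observations = []
    while True:
        step = fake_llm(observations)                  # think
        if step["action"] == "finish":
            return step["input"]
        result = TOOLS[step["action"]](step["input"])  # act
        observations.append(result)                    # observe, then repeat

print(run_agent("What is 17 * 23, and who won the 2023 World Cup?"))
```

A real agent replaces `fake_llm` with an LLM call that reads the tool descriptions and the scratchpad of observations, which is exactly what `initialize_agent` wires up for you.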
By now, you've built an LLM app that can reason, call tools, and act on real-world information.
In Chapter 5, we'll bring it all together with vector search and knowledge retrieval — powering apps like "Chat with Your Docs."