ReAct Agents - Introduction

Beyond the Brain: Why ReAct is the Key to Smarter, Action-Oriented AI Agents
As data scientists and AI practitioners, we’ve all been mesmerized by the incredible fluency of Large Language Models (LLMs). They can write poetry, debug code, and explain quantum physics. But for a long time, they’ve been just that: brains in a vat. They could talk about the world, but they couldn't interact with it.
If you asked an early LLM, "What's the weather in San Francisco and who won the Warriors game last night?" it would either hallucinate an answer or, if well-behaved, tell you it didn't have access to real-time information.
This limitation was a critical barrier. To build truly useful assistants, we don't just need a model that knows things; we need one that can do things.
Enter ReAct. This simple yet powerful framework is fundamentally changing the game, transforming LLMs from passive text generators into autonomous agents that can reason, plan, and act.
🤔 What is ReAct?
ReAct, introduced in the 2022 paper "ReAct: Synergizing Reasoning and Acting in Language Models", is a framework whose name stands for "Reasoning and Acting."
The core idea is to combine the impressive reasoning capabilities of LLMs (often seen in Chain-of-Thought (CoT) prompting) with the ability to take actions—that is, to use external tools.
Instead of just thinking its way to a final answer in one go, a ReAct agent operates in a continuous loop:
- Thought: The LLM reasons about the task. It breaks down the problem, forms a plan, and identifies what it needs to know or do next.
- Action: Based on its thought, the agent selects and uses a tool. This could be a web search, a calculator, a database query, or an API call.
- Observation: The agent receives the result (the "observation") from the tool. This is new information from the "outside world."
- Repeat: The agent takes this new observation, feeds it back into its "Thought" step, and refines its plan.
This Thought-Action-Observation cycle repeats until the agent has gathered enough information and reasoned its way to a final, complete answer.
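The cycle above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the LLM is replaced with scripted turns, the `search` tool is a stand-in, and the "Final Answer" stopping convention is an assumption borrowed from common ReAct prompt formats.

```python
def search(query: str) -> str:
    """Stand-in for a real web-search tool."""
    return f"(search results for: {query})"

TOOLS = {"search": search}

def react_loop(scripted_llm_turns, max_steps=5):
    """Run the Thought -> Action -> Observation cycle until a final answer."""
    transcript = []
    for _, turn in zip(range(max_steps), scripted_llm_turns):
        transcript.append(f"Thought: {turn['thought']}")
        if "final_answer" in turn:              # reasoning is complete
            transcript.append(f"Final Answer: {turn['final_answer']}")
            return turn["final_answer"], transcript
        tool, arg = turn["action"]              # e.g. ("search", "weather in SF")
        observation = TOOLS[tool](arg)          # act, then observe
        transcript.append(f"Action: {tool}({arg!r})")
        transcript.append(f"Observation: {observation}")
    return None, transcript

answer, log = react_loop([
    {"thought": "I need the weather first.",
     "action": ("search", "weather in San Francisco")},
    {"thought": "I have everything I need.",
     "final_answer": "58\u00b0F and clear."},
])
```

In a real agent, the scripted turns are replaced by calls to an LLM, which reads the growing transcript and emits the next Thought and Action itself.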
🧐 Why is This a Game-Changer?
To grasp why ReAct is so important, let's look at what it fixes.
The Problem with "Reason-Only" (Chain-of-Thought)
Chain-of-Thought (CoT) prompting was a huge leap forward. By asking a model to "think step-by-step," we got more accurate and logical answers.
- Prompt: What is the capital of France?
- Answer: Paris.
- Prompt: A jogger runs 3 miles east and 4 miles north. How far is he from his starting point? Please think step-by-step.
- CoT Answer: Thought: The user is asking for the hypotenuse of a right triangle. The two legs are 3 miles and 4 miles. I need to use the Pythagorean theorem: a² + b² = c². So, 3² + 4² = c². That is 9 + 16 = 25. The square root of 25 is 5. Final Answer: The jogger is 5 miles from his starting point.
This is great for self-contained logic, but it has a massive flaw: it can't access new information. If its internal knowledge is outdated or incomplete, it will confidently hallucinate. CoT is all "reason," no "act."
The Problem with "Act-Only"
Other approaches focused just on "acting"—using tools. You could train a model to recognize that "weather in SF" maps to a get_weather(city="San Francisco") API call.
This is useful but rigid. It struggles with complex, multi-step problems that require planning. If the user asks, "Book me a flight to the cheapest warm-weather destination in Europe next week," an "act-only" model breaks down. It doesn't know how to reason about "warm-weather," "cheapest," or "next week" in a structured way.
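The rigidity is easy to see in code. Below is a hedged sketch of an "act-only" pipeline: a fixed mapping from recognized intents to tool calls (the intent strings and `get_weather` function are illustrative assumptions). Anything outside the mapping simply fails, because there is no reasoning step to decompose an open-ended goal.

```python
def get_weather(city: str) -> str:
    """Stand-in for a real weather API call."""
    return f"Weather for {city}: 58\u00b0F and clear"

# Fixed intent-to-tool table: the whole "brain" of an act-only system.
INTENT_TABLE = {
    "weather in sf": lambda: get_weather("San Francisco"),
}

def act_only(request: str) -> str:
    handler = INTENT_TABLE.get(request.lower())
    if handler is None:
        # No planning ability: unrecognized requests cannot be broken down.
        return "Sorry, I can't handle that request."
    return handler()
```

A ReAct agent, by contrast, would reason its way from "cheapest warm-weather destination" to a sequence of concrete tool calls it was never explicitly programmed to make.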
ReAct: The Best of Both Worlds
ReAct synergizes these two approaches. The Reasoning (Thought) provides the planning and flexibility, while the Acting (Action) provides the real-world grounding.
Let's revisit our earlier question: "What's the weather in San Francisco and who won the Warriors game last night?"
A ReAct agent would tackle it like this:
Thought: The user has two distinct questions. First, the weather in San Francisco. Second, the result of the last Warriors game. I need to find this information. I'll start with the weather.
Action: search(query="weather in San Francisco")
Observation: "San Francisco is currently 58°F and clear."
Thought: Great, I have the weather. Now I need the Warriors game result. I should search for that.
Action: search(query="Golden State Warriors game result last night")
Observation: "The Golden State Warriors defeated the Los Angeles Lakers 121-114 last night."
Thought: I have all the information needed to answer the user's request.
Final Answer: The weather in San Francisco is currently 58°F and clear. The Golden State Warriors won their game last night, defeating the Los Angeles Lakers 121-114.
This isn't just a better answer; it's a more trustworthy and auditable one. You can see the agent's exact line of reasoning and verify the information it retrieved.
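Because the trace is plain text with a predictable structure, it can also be audited mechanically. A small sketch using Python's `re` module is below; the `Action: tool(arg="...")` format is an assumption for illustration, and real frameworks typically define their own (often JSON-based) formats.

```python
import re

# A fragment of an agent trace in the Thought/Action/Observation format.
TRACE = """Thought: I'll start with the weather.
Action: search(query="weather in San Francisco")
Observation: San Francisco is currently 58\u00b0F and clear."""

# Extract every tool call: (tool name, argument name, argument value).
actions = re.findall(r'^Action:\s*(\w+)\((\w+)="([^"]*)"\)', TRACE, re.MULTILINE)
```

This kind of post-hoc inspection is exactly what makes ReAct traces more trustworthy than a single opaque completion: every retrieval the agent made is on the record.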
🛠️ Where are ReAct Agents Being Used?
The ReAct framework is the foundation for many of the most advanced AI agent systems being built today, including those in popular frameworks like LangChain. Its applications are broad:
- Complex Question Answering: Answering "multi-hop" questions that require finding and combining multiple pieces of information (e.g., "Who is the director of the movie that won Best Picture the year the lead actress from Barbie was born?").
- Autonomous Assistants: Handling complex commands like "Plan a 3-day itinerary for a trip to Tokyo, focusing on tech and food, and find me a hotel near the Shinjuku station for under $200 a night."
- Data Analysis: An agent that can be given a CSV file and a question. It can "think" about the question, "act" by writing and executing Python/SQL code to analyze the data, "observe" the results (like a chart or table), and then "think" again to form a final conclusion.
- Customer Support Bots: A bot that can "think" about a user's problem (e.g., "My order #12345 hasn't arrived"), "act" by querying the company's order database via an API, "observe" the order status, and then "think" to provide a helpful, accurate update.
🚀 The Future is Agentic
ReAct is more than just a prompting technique; it's a mental model for building a new class of AI. It's the bridge between the LLM's "world model" and the real world.
By giving models the ability to stop, think, and gather more information, we're moving from simple chatbots to true autonomous agents. These agents can solve complex problems, learn from their environment, and, most importantly, get things done. As developers and data scientists, this framework isn't just something to know about—it's something to start building with.
The brain is out of the vat. Now, let's see what it can do.