
A Step-by-Step Guide to Building Customizable Multi-Tool AI Agents with LangGraph and Claude

In this tutorial, we walk through building a customizable multi-tool AI agent with LangGraph and Claude, capable of handling tasks including mathematical calculation, web search, weather queries, text analysis, and real-time information retrieval. We start with a simple dependency-installation step so even beginners can get set up quickly, then build structured implementations of the individual tools: a safe calculator, a web search utility backed by DuckDuckGo, a simulated weather provider, a detailed text analyzer, and a current-time helper. The tutorial then shows how to wire these tools into an agent architecture built with LangGraph, and illustrates practical usage through interactive examples and clear explanations, so that both beginners and advanced developers can quickly deploy custom, multi-functional AI agents.

import subprocess
import sys


def install_packages():
    packages = [
        "langgraph",
        "langchain",
        "langchain-anthropic",
        "langchain-community",
        "requests",
        "python-dotenv",
        "duckduckgo-search"
    ]
   
    for package in packages:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f"✓ Installed {package}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to install {package}")


print("Installing required packages...")
install_packages()
print("Installation complete!\n")

This block automates installation of the Python packages required to build the LangGraph-based multi-tool agent. It uses subprocess to run pip quietly and reports whether each package, from the LangChain components to the web-search and environment-handling tools, installed successfully. This keeps environment setup simple and makes the notebook portable and beginner friendly.

import os
import json
import math
import requests
from typing import Dict, List, Any, Annotated, TypedDict
from datetime import datetime
import operator


from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from duckduckgo_search import DDGS

Next we import the libraries and modules needed to build the agent. These include Python standard-library modules (os, json, math, datetime) for common functionality, plus external packages such as requests for HTTP calls and duckduckgo_search for web search. The LangChain and LangGraph ecosystems supply the message types, tool decorator, state-graph components, and checkpoint utilities, while ChatAnthropic integrates the Claude model for conversational intelligence. Together these imports are the building blocks for defining tools, the agent workflow, and interactions.

os.environ["ANTHROPIC_API_KEY"] = "Use Your API Key Here"


ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")

Here we set and retrieve the Anthropic API key needed to authenticate with the Claude model. The os.environ assignment stores the key (replace the placeholder with a valid key), while os.getenv reads it back safely for later use in model initialization. This way the key is accessible throughout the script without being hardcoded in multiple places.
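A safer variant of this pattern is to prefer a key that is already set in the environment and only fall back to a placeholder. This is a sketch of our own, not from the tutorial; the placeholder string must still be replaced with a real key:

```python
import os

# Prefer a key already present in the environment; only fall back to a
# placeholder (which must be replaced before real use) if none is set.
os.environ.setdefault("ANTHROPIC_API_KEY", "replace-with-your-key")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
```

With `setdefault`, running the notebook with `ANTHROPIC_API_KEY` exported in the shell never overwrites the real key.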

from typing import TypedDict


class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]


@tool
def calculator(expression: str) -> str:
    """
    Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.
   
    Args:
        expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sin(3.14159/2)")
   
    Returns:
        Result of the calculation as a string
    """
    try:
        allowed_names = {
            'abs': abs, 'round': round, 'min': min, 'max': max,
            'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
            'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
            'log': math.log, 'log10': math.log10, 'exp': math.exp,
            'pi': math.pi, 'e': math.e
        }
       
        expression = expression.replace('^', '**')  
       
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error in calculation: {str(e)}"

We then define the agent's internal state and implement the calculator tool. The AgentState class uses TypedDict to describe the agent's memory, specifically the list of messages exchanged during a conversation; the Annotated operator.add reducer tells LangGraph to append new messages rather than overwrite the list. The calculator function, registered as an agent-callable utility with the @tool decorator, evaluates mathematical expressions safely: it restricts the names available to eval to a whitelist of math functions and constants, strips builtins, and rewrites the ^ operator to Python's ** syntax. This lets the tool handle both simple arithmetic and functions such as trigonometry and logarithms while preventing unsafe code execution.
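The restricted-eval idea can be checked in isolation. A minimal sketch (the `safe_eval` helper name and the smaller whitelist are ours, not from the tutorial):

```python
import math

# Whitelist of names the expression may reference; everything else,
# including __builtins__, is unavailable inside eval.
ALLOWED = {"sqrt": math.sqrt, "sin": math.sin, "pi": math.pi, "abs": abs}

def safe_eval(expression: str):
    expression = expression.replace("^", "**")  # accept caret as exponent
    return eval(expression, {"__builtins__": {}}, ALLOWED)

print(safe_eval("sqrt(16) + 2^3"))  # 12.0
```

Attempting something like `safe_eval("__import__('os')")` raises NameError, because builtins are stripped from the evaluation environment.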

@tool
def web_search(query: str, num_results: int = 3) -> str:
    """
    Search the web for information using DuckDuckGo.
   
    Args:
        query: Search query string
        num_results: Number of results to return (default: 3, max: 10)
   
    Returns:
        Search results as formatted string
    """
    try:
        num_results = min(max(num_results, 1), 10)  
       
        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))
       
        if not results:
            return f"No search results found for: {query}"
       
        formatted_results = f"Search results for '{query}':\n\n"
        for i, result in enumerate(results, 1):
            formatted_results += f"{i}. **{result['title']}**\n"
            formatted_results += f"   {result['body']}\n"
            formatted_results += f"   Source: {result['href']}\n\n"
       
        return formatted_results
    except Exception as e:
        return f"Error performing web search: {str(e)}"

The web_search tool fetches real-time information from the internet via the duckduckgo_search package. It accepts a search query and an optional num_results parameter, which it clamps to between 1 and 10, then opens a DDGS session, retrieves the results, and formats them neatly with each hit's title, snippet, and source URL. If no results are found or an error occurs, it handles the situation gracefully by returning an informative message instead of raising. This gives the agent real-time search capability, enhancing its responsiveness and practicality.
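The `min(max(...))` expression used to bound `num_results` is a common clamping idiom worth seeing on its own. A tiny illustration (the `clamp` helper name is ours):

```python
def clamp(n: int, lo: int = 1, hi: int = 10) -> int:
    # max() raises n to the floor, then min() caps it at the ceiling
    return min(max(n, lo), hi)

print(clamp(0), clamp(3), clamp(25))  # 1 3 10
```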

@tool
def weather_info(city: str) -> str:
    """
    Get current weather information for a city using OpenWeatherMap API.
    Note: This is a mock implementation for demo purposes.
   
    Args:
        city: Name of the city
   
    Returns:
        Weather information as a string
    """
    mock_weather = {
        "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
        "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
        "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
        "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
    }
   
    city_lower = city.lower()
    if city_lower in mock_weather:
        weather = mock_weather[city_lower]
        return (f"Weather in {city}:\n"
                f"Temperature: {weather['temp']}°C\n"
                f"Condition: {weather['condition']}\n"
                f"Humidity: {weather['humidity']}%")
    else:
        return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)"

The weather_info tool simulates retrieving current weather data for a given city. Rather than calling a live weather API, it draws from a hardcoded dictionary of mock data for New York, London, Tokyo, and Paris. It lowercases the city name, checks the mock dataset, and if found returns the temperature, condition, and humidity in a readable format; otherwise it tells the user that data is unavailable. The tool is a placeholder that can later be upgraded to fetch real-time data from an actual weather API.
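As noted, the mock can later be swapped for a real call. A sketch of building the request URL for OpenWeatherMap's current-weather endpoint (the `q`, `appid`, and `units` query parameters follow that API's documented conventions; the helper name and key are placeholders of ours):

```python
from urllib.parse import urlencode

def build_weather_url(city: str, api_key: str) -> str:
    # OpenWeatherMap current-weather endpoint with metric units
    base = "https://api.openweathermap.org/data/2.5/weather"
    return f"{base}?{urlencode({'q': city, 'appid': api_key, 'units': 'metric'})}"

url = build_weather_url("Tokyo", "demo-key")
# The URL can then be fetched with requests.get(url).json()
```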

@tool
def text_analyzer(text: str) -> str:
    """
    Analyze text and provide statistics like word count, character count, etc.
   
    Args:
        text: Text to analyze
   
    Returns:
        Text analysis results
    """
    if not text.strip():
        return "Please provide text to analyze."
   
    words = text.split()
    import re  # local import: re is not among the top-level imports
    # Split on runs of '.', '!', '?' in one pass, then drop empty fragments
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
   
    analysis = f"Text Analysis Results:\n"
    analysis += f"• Characters (with spaces): {len(text)}\n"
    analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
    analysis += f"• Words: {len(words)}\n"
    analysis += f"• Sentences: {len(sentences)}\n"
    analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
    analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"
   
    return analysis

The text_analyzer tool provides detailed statistics for a given text input: character counts (with and without spaces), word count, sentence count, average words per sentence, and the most common word. It handles empty input gracefully by asking the user for valid text, and uses simple string operations along with Python's max with a key function to extract its metrics. It is a handy utility for language analysis or quick content-quality checks in the agent's toolkit.
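Sentence counting is easy to get subtly wrong; a regex split that handles all three terminators in a single pass is the robust approach. A standalone check (the sample text is ours):

```python
import re

text = "LangGraph is great. Is it flexible? Yes!"
# Split on runs of '.', '!', '?' and drop empty fragments
sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
print(len(sentences))  # 3
```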

@tool
def current_time() -> str:
    """
    Get the current date and time.
   
    Returns:
        Current date and time as a formatted string
    """
    now = datetime.now()
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"

The current_time tool provides an easy way to retrieve the current system date and time in a human-readable format. Using Python's datetime module, it captures the current moment and formats it as YYYY-MM-DD HH:MM:SS. This is especially useful for timestamping responses or answering user queries about the current date and time in the agent's interactive flow.
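The format string maps directly onto the components of a timestamp, which is easiest to see with a fixed datetime (a small check of our own):

```python
from datetime import datetime

dt = datetime(2024, 3, 9, 14, 5, 30)
formatted = dt.strftime("%Y-%m-%d %H:%M:%S")  # zero-padded fields
print(formatted)  # 2024-03-09 14:05:30
```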

tools = [calculator, web_search, weather_info, text_analyzer, current_time]


def create_llm():
    if ANTHROPIC_API_KEY:
        return ChatAnthropic(
            model="claude-3-haiku-20240307",  
            temperature=0.1,
            max_tokens=1024
        )
    else:
        class MockLLM:
            def invoke(self, messages):
                last_message = messages[-1].content if messages else ""

                if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']):
                    import re
                    numbers = re.findall(r'[\d+\-*/.()\s\w]+', last_message)
                    expr = numbers[0] if numbers else "2+2"
                    return AIMessage(content="I'll help you with that calculation.",
                                     tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}])
                elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']):
                    query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip()
                    if not query or len(query) < 3:  # fall back to the raw message if stripping left nothing useful
                        query = last_message
                    return AIMessage(content="Let me search for that.",
                                     tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                else:
                    return AIMessage(content="I can help with calculations, web searches, weather info, text analysis, and the current time.")

            def bind_tools(self, tools):
                return self  # the mock routes to tools itself, so binding is a no-op

        return MockLLM()


llm = create_llm()
llm_with_tools = llm.bind_tools(tools)

create_llm initializes the language model that powers the agent. If a valid Anthropic API key is present, it returns a Claude 3 Haiku model for high-quality responses. Without a key, it falls back to a mock LLM that simulates basic tool routing via keyword matching, so the notebook can still run offline with limited functionality. The bind_tools call then attaches the defined tools to the model so they can be invoked as needed.

def agent_node(state: AgentState) -> Dict[str, Any]:
    """Main agent node that processes messages and decides on tool usage."""
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}


def should_continue(state: AgentState) -> str:
    """Determine whether to continue with tool calls or end."""
    last_message = state["messages"][-1]
    if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
        return "tools"
    return END

These two functions define the agent's core decision logic. agent_node passes the conversation so far to the tool-bound model and returns the model's response as a state update. should_continue then evaluates whether that response contains tool calls: if so, it routes control to the tool-execution node; otherwise it directs the process to end the interaction. Together they enable dynamic, conditional transitions within the agent's workflow.
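The routing decision itself is independent of LangGraph and can be sketched with a stand-in message class (all names here are illustrative, not from the framework):

```python
class FakeMessage:
    """Minimal stand-in for an LLM response message."""
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

END = "__end__"  # stand-in for langgraph.graph.END

def route_after_agent(last_message) -> str:
    # Route to the tool node only when the model requested tool calls
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return END

print(route_after_agent(FakeMessage([{"name": "calculator"}])))  # tools
print(route_after_agent(FakeMessage()))                          # __end__
```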

def create_agent_graph():
    tool_node = ToolNode(tools)
   
    workflow = StateGraph(AgentState)
   
    workflow.add_node("agent", agent_node)
    workflow.add_node("tools", tool_node)
   
    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
    workflow.add_edge("tools", "agent")
   
    memory = MemorySaver()
   
    app = workflow.compile(checkpointer=memory)
   
    return app


print("Creating LangGraph Multi-Tool Agent...")
agent = create_agent_graph()
print("✓ Agent created successfully!\n")

create_agent_graph constructs the LangGraph workflow that defines the agent's operational structure. It wraps the tools in a ToolNode to handle tool execution and uses a StateGraph to organize the flow between agent decision-making and tool use. Nodes and edges manage the transitions: execution starts at the agent, routes conditionally to the tools, and loops back as needed. A MemorySaver checkpointer provides persistent state tracking across turns. Compiling the graph yields an executable app, a structured, memory-backed multi-tool agent ready for deployment.
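Conceptually, the compiled graph behaves like an agent→tools loop. A plain-Python sketch of that control flow with toy step functions (the real graph additionally merges message state and checkpoints it; everything below is illustrative):

```python
def run_graph(agent_step, tool_step, state, max_turns=5):
    # agent_step returns (state, wants_tools); tool_step returns state.
    for _ in range(max_turns):
        state, wants_tools = agent_step(state)
        if not wants_tools:
            return state          # conditional edge to END
        state = tool_step(state)  # edge from tools back to the agent
    return state

# Toy steps: the "agent" requests a tool once, then finishes.
def agent_step(state):
    if "tool_result" in state:
        return {**state, "answer": state["tool_result"]}, False
    return state, True

def tool_step(state):
    return {**state, "tool_result": 42}

final = run_graph(agent_step, tool_step, {})
print(final["answer"])  # 42
```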

def test_agent():
    """Test the agent with various queries."""
    config = {"configurable": {"thread_id": "test-thread"}}
   
    test_queries = [
        "What's 15 * 7 + 23?",
        "Search for information about Python programming",
        "What's the weather like in Tokyo?",
        "What time is it?",
        "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
    ]
   
    print("🧪 Testing the agent with sample queries...\n")
   
    for i, query in enumerate(test_queries, 1):
        print(f"Query {i}: {query}")
        print("-" * 50)
       
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
           
            last_message = response["messages"][-1]
            print(f"Response: {last_message.content}\n")
           
        except Exception as e:
            print(f"Error: {str(e)}\n")

The test_agent function is a verification utility that exercises the agent across different use cases. It runs predefined queries covering arithmetic, web search, weather, time, and text analysis, invoking the agent with a consistent thread_id and printing each response. This helps developers confirm tool integration and conversational logic before moving to interactive or production use.

def chat_with_agent():
    """Interactive chat function."""
    config = {"configurable": {"thread_id": "interactive-thread"}}
   
    print("🤖 Multi-Tool Agent Chat")
    print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time")
    print("Type 'quit' to exit, 'help' for available commands\n")
   
    while True:
        try:
            user_input = input("You: ").strip()
           
            if user_input.lower() in ['quit', 'exit', 'q']:
                print("Goodbye!")
                break
            elif user_input.lower() == 'help':
                print("\nAvailable commands:")
                print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'")
                print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'")
                print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'")
                print("• Text Analysis: 'Analyze this text: [your text]'")
                print("• Current Time: 'What time is it?' or 'Current date'")
                print("• quit: Exit the chat\n")
                continue
            elif not user_input:
                continue
           
            response = agent.invoke(
                {"messages": [HumanMessage(content=user_input)]},
                config=config
            )
           
            last_message = response["messages"][-1]
            print(f"Agent: {last_message.content}\n")
           
        except KeyboardInterrupt:
            print("\nGoodbye!")
            break
        except Exception as e:
            print(f"Error: {str(e)}\n")

The chat_with_agent function provides an interactive command-line interface for real-time conversation with the LangGraph multi-tool agent. It supports natural-language queries and recognizes commands such as 'help' for usage guidance and 'quit' to exit. Every other input is processed by the agent, which dynamically selects and calls the appropriate tool for its response. This demonstrates the agent's ability to handle a range of queries, from math and web search to weather, text analysis, and time lookups, in a conversational setting.

if __name__ == "__main__":
    test_agent()
   
    print("=" * 60)
    print("🎉 LangGraph Multi-Tool Agent is ready!")
    print("=" * 60)
   
    chat_with_agent()


def quick_demo():
    """Quick demonstration of agent capabilities."""
    config = {"configurable": {"thread_id": "demo"}}
   
    demos = [
        ("Math", "Calculate the square root of 144 plus 5 times 3"),
        ("Search", "Find recent news about artificial intelligence"),
        ("Time", "What's the current date and time?")
    ]
   
    print("🚀 Quick Demo of Agent Capabilities\n")
   
    for category, query in demos:
        print(f"[{category}] Query: {query}")
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            print(f"Response: {response['messages'][-1].content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")


print("\n" + "="*60)
print("🔧 Usage Instructions:")
print("1. Add your ANTHROPIC_API_KEY to use Claude model")
print("   os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'")
print("2. Run quick_demo() for a quick demonstration")
print("3. Run chat_with_agent() for interactive chat")
print("4. The agent supports: calculations, web search, weather, text analysis, and time")
print("5. Example: 'Calculate 15*7+23' or 'Search for Python tutorials'")
print("="*60)

Finally, the script coordinates execution of the multi-tool agent. Run directly, it first calls test_agent() to verify functionality against the sample queries, then starts the interactive chat_with_agent() loop for real-time interaction. The quick_demo() function offers a briefer demonstration across math, search, and time queries, and the printed usage instructions walk the user through configuring the API key, running the demo, and chatting with the agent. This gives users a smooth on-ramp for exploring and extending the agent's capabilities.

In summary, this step-by-step tutorial shows how to combine LangGraph and Claude into an effective multi-tool AI agent. With direct explanations and hands-on demonstrations, it walks through integrating several utilities into a cohesive, interactive system. The agent's flexibility across tasks, from calculation to dynamic information retrieval, illustrates the versatility of modern AI development frameworks, and the built-in testing and interactive chat features make the material easy to understand in practice and apply immediately. With this foundation, developers can confidently extend and customize their own agents.


View the notebook on GitHub. All credit for this work goes to the researchers on the project.


Asif Razzaq is the CEO of Marktechpost Media Inc. and the founder of Marktechpost, an AI media platform covering machine learning and deep learning news.
