
LangGraph Multi-Agent Swarm: A Python library for building swarm-style multi-agent systems with LangGraph

LangGraph Multi-Agent Swarm is a Python library designed to coordinate multiple AI agents as a cohesive "swarm". It builds on LangGraph, a framework for constructing robust, stateful agent workflows, to implement a dedicated multi-agent architecture. In a swarm, agents with different specializations dynamically hand off control to one another as the task requires, rather than a single monolithic agent attempting everything. The system tracks which agent was last active, so that when the user provides the next input, the conversation resumes seamlessly with that same agent. This approach addresses the problem of building cooperative AI workflows in which the most qualified agent can handle each subtask without losing context or continuity.

LangGraph Swarm's goal is to make this multi-agent coordination easier and more reliable. It provides abstractions that link individual language-model agents (each with its own tools and prompt) into one integrated application. Thanks to its LangGraph foundation, the library supports streaming responses, short-term and long-term memory integration, and even human-in-the-loop intervention out of the box. By leveraging LangGraph (a low-level orchestration framework) and fitting naturally into the broader LangChain ecosystem, LangGraph Swarm lets machine learning engineers and researchers build complex AI agent systems while maintaining clear control over information and decision flows.

LangGraph Swarm architecture and key features

At its core, LangGraph Swarm represents multiple agents as nodes in a directed state graph, with edges defining the handoff paths and a shared state tracking the "active_agent". When an agent invokes a handoff, the library updates that field and transfers the necessary context, so the next agent continues the conversation seamlessly. This setup supports collaborative specialization: each agent can focus on a narrow domain, while customizable handoff tools enable flexible workflows. Built on LangGraph's streaming and memory modules, Swarm retains short-term conversational context and long-term knowledge, ensuring coherent multi-turn interactions even as control passes between agents.
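To make the routing idea concrete, here is a simplified plain-Python stand-in (not the actual LangGraph Swarm API) in which a shared state dict carries the message history and an "active_agent" field, and a router dispatches each user turn to whichever agent is currently active:

```python
# Simplified stand-in for swarm routing: the real library models agents as
# nodes in a LangGraph state graph; here they are plain functions that
# mutate a shared state dict. Names and structure are illustrative only.

def alice(state):
    # A specialist agent appends its reply to the shared history.
    state["messages"].append(("Alice", "Handling the math part."))
    return state

def bob(state):
    state["messages"].append(("Bob", "Arr, handling the chit-chat."))
    return state

AGENTS = {"Alice": alice, "Bob": bob}

def route(state, user_input):
    # The swarm always resumes with the last active agent.
    state["messages"].append(("user", user_input))
    return AGENTS[state["active_agent"]](state)

state = {"messages": [], "active_agent": "Alice"}
state = route(state, "What is 2 + 2?")
```

Until some agent rewrites "active_agent", every new user turn lands on the same specialist, which is exactly the continuity property described above.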

Agent coordination through handoff tools

LangGraph Swarm's handoff tools let one agent transfer control to another by issuing a Command that updates the shared state, switching the "active_agent" and passing along context such as relevant messages or a custom summary. While the default tool hands over the full conversation and inserts a notification, developers can implement custom tools to filter context, add instructions, or rename the operation to influence the LLM's behavior. Unlike fully autonomous AI routing patterns, Swarm's routing is explicitly defined: each handoff tool specifies which agent may take over, ensuring predictable flows. This mechanism supports collaboration patterns such as a "travel planner" delegating medical questions to a "medical advisor", or a coordinator distributing technical and billing queries to specialists. An internal router keeps directing user messages to the current agent until another handoff occurs.
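The handoff mechanics can be sketched in plain Python. The helper below is hypothetical (the real `create_handoff_tool` returns a tool that emits a LangGraph Command), but it shows the essential state update: switch the active agent and annotate the conversation with a transfer notice:

```python
# Hypothetical sketch of a handoff tool: it returns a command-like state
# update that switches "active_agent" and records the transfer. The real
# library's tool returns a LangGraph Command object instead of a dict.

def make_handoff_tool(agent_name, description=""):
    def handoff(state):
        return {
            "active_agent": agent_name,
            "messages": state["messages"]
            + [("system", f"Transferred to {agent_name}. {description}".strip())],
        }
    return handoff

to_alice = make_handoff_tool("Alice", "Delegate math questions to Alice.")
state = {"active_agent": "Bob", "messages": [("user", "What is 2 + 2?")]}
update = to_alice(state)
```

A custom tool would differ only in what it puts into the update, e.g. forwarding a filtered slice of the history instead of the whole thing.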

State management and memory

Managing state and memory is critical to preserving context across tasks. By default, LangGraph Swarm maintains a shared state containing the conversation history and the "active_agent" tag, and uses a checkpointer (such as an in-memory saver or a database-backed store) to persist this state across turns. It also supports a memory store for long-term knowledge, allowing the system to record facts or past interactions for future sessions while keeping a window of recent messages for immediate context. Together, these mechanisms ensure the swarm never "forgets" which agent is active or what has been discussed, enabling seamless multi-turn dialogue and the accumulation of user preferences and other key data over time.
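A checkpointer's job can be illustrated with a minimal dictionary-backed stand-in (hypothetical class; the real library ships `InMemorySaver` and database-backed savers keyed by a thread ID):

```python
# Minimal stand-in for a checkpointer: it persists the full swarm state
# (messages plus active_agent) per conversation thread, so a later turn
# resumes with the same agent. Illustrative only, not the LangGraph API.

class DictCheckpointer:
    def __init__(self):
        self._store = {}

    def save(self, thread_id, state):
        # Copy so later mutations don't corrupt the saved checkpoint.
        self._store[thread_id] = {"messages": list(state["messages"]),
                                  "active_agent": state["active_agent"]}

    def load(self, thread_id):
        return self._store.get(thread_id,
                               {"messages": [], "active_agent": "Alice"})

saver = DictCheckpointer()
state = saver.load("thread-1")           # fresh thread: default state
state["messages"].append(("user", "hi"))
state["active_agent"] = "Bob"            # a handoff happened this turn
saver.save("thread-1", state)
resumed = saver.load("thread-1")         # next turn resumes with Bob
```

The key property is the last line: reloading the thread restores both the history and the identity of the active agent.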

When finer-grained control is needed, developers can define custom state schemas so that each agent keeps a private message history. By wrapping an agent's invocation to map the global state into agent-specific fields before the call, then merging the updates back afterwards, teams can tailor the degree of context sharing. This approach supports workflows ranging from fully collaborative agents to isolated reasoning modules, while still leveraging LangGraph Swarm's orchestration, memory, and state management infrastructure.
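The project/merge pattern can be sketched as a wrapper, again as a hypothetical stand-in rather than the library's actual API: the wrapper hands the agent only its private channel, then merges the reply into both the private history and the shared view:

```python
# Sketch of per-agent private histories: a wrapper projects the global
# state into an agent-specific field before the call and merges updates
# back afterwards. Hypothetical scheme, not the real LangGraph Swarm API.

def wrap_with_private_history(agent_name, agent_fn):
    def wrapped(global_state):
        # Project: the agent sees only its own message channel.
        private = global_state["histories"].setdefault(agent_name, [])
        reply = agent_fn(private)
        # Merge: record the reply privately and in the shared transcript.
        private.append(reply)
        global_state["shared"].append((agent_name, reply))
        return global_state
    return wrapped

echo_alice = wrap_with_private_history(
    "Alice", lambda msgs: f"seen {len(msgs)} msgs")
state = {"histories": {}, "shared": []}
state = echo_alice(state)
state = echo_alice(state)
```

Because the agent function only ever receives its private list, its internal reasoning never leaks into other agents' contexts.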

Customization and extensibility

LangGraph Swarm offers broad flexibility for custom workflows. Developers can override the default handoff tool, which passes all messages and switches the active agent, to implement specialized logic such as summarizing the context or attaching extra metadata. A custom tool simply returns a LangGraph Command to update the state, and the agent must be configured to handle those commands with the appropriate node types and state keys. Beyond handoffs, LangGraph's typed state schemas can redefine how agents share or isolate memory: the global swarm state is mapped into each agent's fields before invocation, and the results are merged afterwards. This lets an agent keep a private conversation history or use a different message format without exposing its internal reasoning. For full control, you can bypass the high-level API and assemble the StateGraph manually: add each compiled agent as a node, define the transition edges, and wire in the active-agent router. While most use cases benefit from the simplicity of create_swarm and create_react_agent, the ability to drop down to LangGraph primitives ensures that practitioners can inspect, adjust, or extend any aspect of multi-agent coordination.
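As one example of overriding the default handoff behavior, a custom handoff might forward a digest instead of the full history. The sketch below uses hypothetical helpers (with the real library, the function would return a LangGraph Command carrying this update):

```python
# Sketch of a context-trimming handoff: rather than forwarding the full
# conversation, it passes a short digest plus the most recent messages.
# Hypothetical helper; illustrative of the idea, not the library's API.

def summarizing_handoff(state, target, keep_last=2):
    recent = state["messages"][-keep_last:]
    digest = f"{len(state['messages'])} earlier messages summarized."
    return {
        "active_agent": target,
        "messages": [("system", digest)] + recent,
    }

state = {"active_agent": "Bob",
         "messages": [("user", f"msg {i}") for i in range(5)]}
update = summarizing_handoff(state, "Alice")
```

The receiving agent gets a bounded context window regardless of how long the conversation has run, which is useful when token budgets are tight.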

Ecosystem integration and dependencies

LangGraph Swarm integrates closely with LangChain and can be observed with tools such as LangSmith. It uses langchain_openai for model access alongside LangGraph's orchestration features such as persistence and caching. Its model-agnostic design lets it coordinate agents across any LLM backend (OpenAI, Hugging Face, or others), and it is available in both Python ('pip install langgraph-swarm') and JavaScript/TypeScript ('@langchain/langgraph-swarm'), making it suitable for web or serverless environments. Released under the MIT license and under active development, it continues to benefit from community contributions and improvements across the LangChain ecosystem.

Sample implementation

Here is a minimal setup for a two-agent swarm:

from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import create_react_agent
from langgraph_swarm import create_handoff_tool, create_swarm

model = ChatOpenAI(model="gpt-4o")

# Agent "Alice": math expert with a simple addition tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

alice = create_react_agent(
    model,
    [add, create_handoff_tool(agent_name="Bob")],
    prompt="You are Alice, an addition specialist.",
    name="Alice",
)

# Agent "Bob": pirate persona who defers math to Alice
bob = create_react_agent(
    model,
    [create_handoff_tool(agent_name="Alice", description="Delegate math to Alice")],
    prompt="You are Bob, a playful pirate.",
    name="Bob",
)

workflow = create_swarm([alice, bob], default_active_agent="Alice")
app = workflow.compile(checkpointer=InMemorySaver())

Here, Alice handles addition and can hand off to Bob; Bob answers playfully but hands math problems back to Alice. The InMemorySaver checkpointer ensures that the conversation state persists across turns.

Use cases and applications

LangGraph Swarm unlocks advanced multi-agent collaboration by enabling a central coordinator to dynamically delegate subtasks to dedicated agents, whether handing off emergency triage to medical, safety, or disaster-response specialists, splitting a travel-booking workflow among flight, hotel, and itinerary agents, or dividing content creation among researchers, writers, and fact-checkers. Beyond these examples, the framework can power customer-support bots that route queries to departmental experts, interactive storytelling with distinct character agents, scientific pipelines with stage-specific processing agents, or any scenario where distributing work among members of an expert "swarm" improves reliability and clarity. Throughout, LangGraph Swarm handles the underlying message routing, state management, and smooth handoffs.

In short, LangGraph Swarm marks a leap toward truly modular, cooperative AI systems. Structuring multiple specialized agents as a directed graph makes it possible to solve tasks that a single model struggles with: each agent handles its area of expertise and then hands control off seamlessly. The design keeps individual agents simple and focused, while the swarm's collective management handles complex workflows involving reasoning, tool use, and decision making. Built on LangChain and LangGraph, the library plugs into a mature ecosystem of LLMs, tools, memory stores, and debugging utilities. Developers retain explicit control over agent interaction and state sharing for reliability, while still leveraging the LLM's flexibility to decide when to call tools or delegate to another agent.


Check out the GitHub page. All credit for this research goes to the researchers on the project.


Sana Hassan, a consulting intern at Marktechpost and a dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. He is very interested in solving practical problems, and he brings a new perspective to the intersection of AI and real-life solutions.
