
Using mcp-use to Build a Tool-Enabled LLM Agent

mcp-use is an open-source library that lets you connect any LLM to any MCP server, giving your agent access to tools such as web browsing and file operations without relying on closed-source clients. In this tutorial, we will use LangChain-Groq and MCPAgent's built-in conversation memory to build a simple chatbot that can interact with tools through MCP.

Install UV package manager

We will first set up the environment, starting with the uv package manager. For Mac or Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

For Windows (PowerShell):

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

Create a new directory and activate the virtual environment

We will then create a new project directory and initialize it with uv:

uv init mcp-use-demo
cd mcp-use-demo

Now we can create and activate a virtual environment. For Mac or Linux:

uv venv
source .venv/bin/activate

For Windows:

uv venv
.venv\Scripts\activate

Install Python dependencies

Now we will install the required dependencies

uv add mcp-use langchain-groq python-dotenv

Groq API key

This tutorial uses Groq as the LLM provider:

  1. Open the Groq console and generate an API key.
  2. Create a .env file in the project directory and add the following line:

GROQ_API_KEY=your_api_key_here

Replace your_api_key_here with the key you just generated.
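To see what loading that line actually does, here is a hypothetical stand-in for python-dotenv's behavior: it reads KEY=VALUE pairs from a .env file and exports them into the process environment (the function name load_env_file and the demo key are illustrative, not part of the tutorial's code):

```python
import os
from tempfile import TemporaryDirectory

# Hypothetical stand-in for what python-dotenv does: read KEY=VALUE
# pairs from a .env file and export them into the process environment.
def load_env_file(path: str) -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip()

with TemporaryDirectory() as tmp:
    env_path = os.path.join(tmp, ".env")
    with open(env_path, "w") as f:
        f.write("GROQ_API_KEY=demo-key-123\n")
    load_env_file(env_path)
    print(os.environ["GROQ_API_KEY"])
```

In the actual app we rely on python-dotenv's load_dotenv() instead, which handles quoting and comments properly.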

Brave Search API key

This tutorial uses the Brave Search MCP server.

  1. Get a Brave Search API key from the Brave Search API page.
  2. Create a file named mcp.json in the project root with the following content:
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-brave-search"
      ],
      "env": {
        "BRAVE_API_KEY": ""
      }
    }
  }
}

Replace the empty BRAVE_API_KEY value with your actual Brave API key.
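Before launching the agent, it can help to confirm that the config parses and the server entry is wired up. A minimal sketch, assuming the structure shown above (the inline config_text string stands in for reading mcp.json from disk):

```python
import json

# Sanity check: parse an mcp.json-style config and confirm the
# brave-search server entry exists before launching the agent.
config_text = """
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {"BRAVE_API_KEY": "your-key-here"}
    }
  }
}
"""
config = json.loads(config_text)
servers = config["mcpServers"]
assert "brave-search" in servers
print(sorted(servers["brave-search"].keys()))  # → ['args', 'command', 'env']
```

A check like this catches a malformed mcp.json early, with a clearer error than a failed server launch.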

Node.js

Some MCP servers (including Brave Search) require npx, which ships with Node.js.

  • Download the latest version of Node.js from nodejs.org
  • Run the installer.
  • Leave all settings at their defaults and complete the installation.

Use other servers

If you want to use a different MCP server, just replace the contents of mcp.json with that server's configuration.
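For example, a local filesystem server could be configured like this (a sketch assuming the @modelcontextprotocol/server-filesystem package; /path/to/allowed/dir is a placeholder you would replace with a real directory):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/dir"
      ]
    }
  }
}
```

The rest of the tutorial's code is unchanged; the agent simply discovers whatever tools the configured server exposes.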

Create an app.py file in the project directory and add the following:

Import libraries

from dotenv import load_dotenv
from langchain_groq import ChatGroq
from mcp_use import MCPAgent, MCPClient
import os
import sys
import warnings

warnings.filterwarnings("ignore", category=ResourceWarning)

This section imports the modules required for LangChain, mcp-use, and Groq, and sets up loading of environment variables. It also suppresses ResourceWarning messages for cleaner output.

Setting up a chatbot

async def run_chatbot():
    """Run a chat session using MCPAgent's built-in conversation memory."""
    load_dotenv()
    if not os.getenv("GROQ_API_KEY"):
        sys.exit("GROQ_API_KEY is not set; add it to your .env file.")

    config_file = "mcp.json"
    print("Starting chatbot...")

    # Create the MCP client and the LLM instance
    client = MCPClient.from_config_file(config_file)
    llm = ChatGroq(model="llama-3.1-8b-instant")

    # Create an agent with memory enabled
    agent = MCPAgent(
        llm=llm,
        client=client,
        max_steps=15,
        memory_enabled=True,
        verbose=False
    )

This section loads the Groq API key from the .env file and initializes the MCP client using the configuration in mcp.json. It then sets up the LangChain Groq LLM and creates a memory-enabled agent to handle the conversation.

Implement the chat loop

# Add this inside the run_chatbot function
    print("\n----- Interactive MCP Chat -----")
    print("Type 'exit' or 'quit' to end the conversation")
    print("Type 'clear' to clear conversation history")

    try:
        while True:
            user_input = input("\nYou: ")

            if user_input.lower() in ["exit", "quit"]:
                print("Ending conversation....")
                break

            if user_input.lower() == "clear":
                agent.clear_conversation_history()
                print("Conversation history cleared....")
                continue

            print("\nAssistant: ", end="", flush=True)

            try:
                response = await agent.run(user_input)
                print(response)

            except Exception as e:
                print(f"\nError: {e}")

    finally:
        if client and client.sessions:
            await client.close_all_sessions()

This section implements the interactive chat loop, letting users enter queries and receive responses from the assistant. It also supports clearing the conversation history on demand. The assistant's response is printed as soon as it is ready, and the code ensures that all MCP sessions are closed cleanly when the conversation ends or is interrupted.

Run the application

if __name__ == "__main__":
    import asyncio
    try:
        asyncio.run(run_chatbot())
    except KeyboardInterrupt:
        print("Session interrupted. Goodbye!")

    finally:
        # Silence noisy teardown messages from subprocess cleanup on exit
        sys.stderr = open(os.devnull, "w")

This section runs the asynchronous chatbot loop and handles keyboard interrupts gracefully, so no traceback is shown when the user terminates the session.
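The same pattern can be shown in isolation: an async entry point driven by asyncio.run(), with Ctrl+C caught so the program exits cleanly (main and the sleep call are illustrative stand-ins for the chatbot coroutine):

```python
import asyncio

# Minimal sketch: an async entry point driven by asyncio.run(),
# with KeyboardInterrupt caught so Ctrl+C exits without a traceback.
async def main() -> str:
    await asyncio.sleep(0)  # stand-in for the chatbot's await points
    return "done"

try:
    result = asyncio.run(main())
    print(result)
except KeyboardInterrupt:
    print("Session interrupted. Goodbye!")
```

asyncio.run() creates and closes the event loop for you, which is why run_chatbot can simply await the agent without any manual loop management.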

You can find the entire code here

To run the app, run the following command:

uv run app.py

This will launch the application; you can then chat with the assistant and let it call tools through the MCP server.


I am a civil engineering graduate from Jamia Millia Islamia, New Delhi (2022), and I am deeply interested in data science, especially neural networks and their applications across fields.
