
Using Fireworks AI to Build a ReAct-Style Agent with LangChain

In this tutorial, we explore how to leverage Fireworks AI to build an agent with tool support in LangChain. Starting from installing the langchain-fireworks package and configuring your Fireworks API key, we set up a ChatFireworks LLM instance powered by the high-performance llama-v3-70b-instruct model and integrate it with LangChain's agent framework. Along the way, we define custom tools, such as a URL fetcher for scraping web-page text and an SQL generator for converting natural-language requirements into executable BigQuery queries. By the end, we have a fully functional ReAct agent that dynamically calls tools, maintains conversational memory, and carries out complex end-to-end workflows powered by Fireworks AI.

!pip install -qU langchain langchain-fireworks requests beautifulsoup4

This installs all the required Python packages, including LangChain, the Fireworks integration, and common utilities such as requests and BeautifulSoup4, ensuring we have up-to-date versions of every component needed to run the rest of the notebook seamlessly.

import os
from getpass import getpass

import requests
from bs4 import BeautifulSoup

from langchain.agents import AgentType, initialize_agent
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.tools import BaseTool
from langchain_fireworks import ChatFireworks

We bring in all the necessary imports: HTTP utilities (requests, BeautifulSoup), LangChain's agent framework (BaseTool, initialize_agent, AgentType), the Fireworks-powered LLM (ChatFireworks), prompt and memory helpers (LLMChain, PromptTemplate, ConversationBufferMemory), and the standard os and getpass modules.

os.environ["FIREWORKS_API_KEY"] = getpass("🚀 Enter your Fireworks API key: ")

This prompts you to enter your Fireworks API key securely via getpass and sets it in the environment, so the ChatFireworks model can authenticate later without your key ever appearing in plain text.
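If you re-run the notebook, a small guard like the sketch below (our own convenience addition, not part of the original flow) avoids prompting for the key twice:

import os
from getpass import getpass

# Only prompt if the key isn't already set in this session's environment.
if "FIREWORKS_API_KEY" not in os.environ:
    os.environ["FIREWORKS_API_KEY"] = getpass("🚀 Enter your Fireworks API key: ")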

llm = ChatFireworks(
    model="accounts/fireworks/models/llama-v3-70b-instruct",
    temperature=0.6,
    max_tokens=1024,
    stop=["nn"]
)

We instantiate a ChatFireworks LLM configured for instruction following, pointing it at the llama-v3-70b-instruct model with a moderate temperature, a 1024-token output cap, and a stop sequence, so we can start prompting the model right away.

prompt = [
    {"role":"system","content":"You are an expert data-scientist assistant."},
    {"role":"user","content":"Analyze the sentiment of this review:\n\n"
                             "\"The new movie was breathtaking, but a bit too long.\""}
]
resp = llm.invoke(prompt)
print("Sentiment Analysis →", resp.content)

Next, we run a simple sentiment-analysis example: we construct the prompt as a list of role-annotated messages, call llm.invoke(), and print the model's sentiment assessment of the provided movie review.
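Chat models also accept LangChain's typed message classes in place of raw role dicts; here is a minimal equivalent sketch (the review text is our own example):

from langchain_core.messages import HumanMessage, SystemMessage

# The same invoke() call expressed with message objects instead of dicts
messages = [
    SystemMessage(content="You are an expert data-scientist assistant."),
    HumanMessage(content="Analyze the sentiment of: 'Great battery life, but the app feels clunky.'"),
]
print(llm.invoke(messages).content)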

template = """
You are a data-science assistant. Keep track of the convo:


{history}
User: {input}
Assistant:"""


prompt = PromptTemplate(input_variables=["history","input"], template=template)
memory = ConversationBufferMemory(memory_key="history")


chain = LLMChain(llm=llm, prompt=prompt, memory=memory)


print(chain.run(input="Hey, what can you do?"))
print(chain.run(input="Analyze: 'The product arrived late, but support was helpful.'"))
print(chain.run(input="Based on that, would you recommend the service?"))

We then add conversational memory: we define a prompt template that incorporates past exchanges, set up ConversationBufferMemory, and link everything together with LLMChain. Running a few example inputs shows how the model retains context from one turn to the next.
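To see exactly what gets substituted for {history} on the next turn, you can inspect the memory object directly; a quick debugging sketch:

# Print the accumulated conversation as the prompt template will receive it
print(memory.load_memory_variables({})["history"])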

class FetchURLTool(BaseTool):
    name: str = "fetch_url"
    description: str = "Fetch the main text (first few paragraphs) from a webpage."

    def _run(self, url: str) -> str:
        resp = requests.get(url, timeout=10)
        doc = BeautifulSoup(resp.text, "html.parser")
        paras = [p.get_text() for p in doc.find_all("p")][:5]
        return "\n\n".join(paras)

    async def _arun(self, url: str) -> str:
        raise NotImplementedError

We define a custom FetchURLTool by subclassing BaseTool. The tool uses requests and BeautifulSoup to grab the first few paragraphs from any URL, making it easy for the agent to retrieve real-time web content.
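Before wiring the tool into an agent, it is worth sanity-checking it on its own; a minimal sketch (the URL is an arbitrary placeholder):

# Invoke the tool directly, outside any agent loop
snippet = FetchURLTool().run("https://example.com")
print(snippet[:200])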

class GenerateSQLTool(BaseTool):
    name: str = "generate_sql"
    description: str = "Generate a BigQuery SQL query (with comments) from a text description."

    def _run(self, text: str) -> str:
        prompt = f"""
-- Requirement:
-- {text}

-- Write a BigQuery SQL query (with comments) to satisfy the above.
"""
        return llm.invoke([{"role":"user","content":prompt}]).content

    async def _arun(self, text: str) -> str:
        raise NotImplementedError


tools = [FetchURLTool(), GenerateSQLTool()]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

result = agent.run(
    "Fetch  "
    "and then generate a BigQuery SQL query that counts how many times "
    "the word 'model' appears in the page text."
)

print("\n🔍 Generated SQL:\n", result)

Finally, GenerateSQLTool is another BaseTool subclass that wraps the LLM to convert plain-English requirements into commented BigQuery SQL. We then wire both tools into a ReAct agent via initialize_agent, run a combined fetch-and-generate example, and print the resulting SQL query.
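You can also exercise GenerateSQLTool in isolation to inspect the SQL it produces; a minimal sketch (the requirement and table name below are made-up examples):

# Call the SQL tool directly with a hypothetical requirement
sql = GenerateSQLTool().run(
    "Count how many rows in `my_project.analytics.events` were created in the last 7 days."
)
print(sql)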

In short, we integrate Fireworks AI with LangChain's modular tools and agent ecosystem to unlock a versatile platform for building AI applications that go beyond simple text generation. We can extend the agent's capabilities by adding domain-specific tools, customizing prompts, and fine-tuning memory behavior, all while leveraging Fireworks' scalable inference engine. As next steps, explore advanced features such as function calling, orchestrating multiple agents, or incorporating vector-based retrieval to create more dynamic and context-aware assistants.

