Step-by-step coding guide to integrate Dappier AI’s real-time search and recommendation tools with OpenAI’s chat API

In this tutorial, we learn how to harness Dappier AI's suite of real-time search and recommendation tools to enhance our conversational applications. By combining Dappier's real-time search tool with its AI recommendation engine, we can query the latest information from across the web and surface personalized article suggestions from custom data models. We walk step by step through setting up our Google Colab environment, installing dependencies, securely loading API keys, and initializing each Dappier module. We then integrate these tools with an OpenAI chat model (e.g., GPT-3.5 Turbo) to build a composable prompt chain and execute end-to-end queries, all within a handful of concise notebook cells. Whether we need up-to-the-minute news retrieval or AI-driven content curation, this tutorial provides a flexible framework for building intelligent, data-driven chat experiences.
!pip install -qU langchain-dappier langchain langchain-openai langchain-community langchain-core openai
We bootstrap our Colab environment by installing the core LangChain libraries, the Dappier and community integrations, and the official OpenAI client. With these packages in place, we can seamlessly access Dappier's real-time search and recommendation tools, the latest LangChain runtimes, and the OpenAI API, all in one environment.
import os
from getpass import getpass
os.environ["DAPPIER_API_KEY"] = getpass("Enter your Dappier API key: ")
os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
We securely capture our Dappier and OpenAI API credentials at runtime, avoiding hard-coding sensitive keys in the notebook. Using getpass, the prompts keep our input hidden, and setting the keys as environment variables makes them available to all subsequent cells without exposing them in logs.
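As a minimal sketch of this fail-fast pattern, the helper below (a hypothetical addition, not part of the original notebook) checks that each required key is actually present before any network calls are made; the placeholder values stand in for the real keys that getpass would supply.

```python
import os

def require_key(name: str) -> str:
    """Fetch a required API key from the environment, failing fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Placeholder values stand in for real keys here; in the notebook,
# getpass has already populated these variables.
os.environ.setdefault("DAPPIER_API_KEY", "demo-key")
os.environ.setdefault("OPENAI_API_KEY", "demo-key")

print("Keys loaded:", bool(require_key("DAPPIER_API_KEY")) and bool(require_key("OPENAI_API_KEY")))
```

Failing early with a clear message is friendlier than letting a downstream client raise an opaque authentication error mid-chain.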
from langchain_dappier import DappierRealTimeSearchTool
search_tool = DappierRealTimeSearchTool()
print("Real-time search tool ready:", search_tool)
We import Dappier's real-time search module and create an instance of DappierRealTimeSearchTool, enabling our notebook to perform live web queries. The print statement confirms that the tool has initialized successfully and is ready to handle search requests.
from langchain_dappier import DappierAIRecommendationTool
recommendation_tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=3,
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="most_recent",
)
print("Recommendation tool ready:", recommendation_tool)
We set up Dappier's AI-driven recommendation engine by specifying our custom data model, the number of similar articles to retrieve (similarity_top_k=3), the reference source domain, and how many articles to draw from that domain (num_articles_ref=2). The DappierAIRecommendationTool instance will now use the "most_recent" algorithm to surface top-k related articles from our specified reference, ready for query-driven content suggestions.
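To make the "most_recent" and top-k parameters concrete, here is a tiny pure-Python mimic of that ranking (a conceptual sketch only; the articles are invented and Dappier's actual service handles retrieval and ranking server-side against the configured data model):

```python
from datetime import date

# Hypothetical candidate articles; in practice, Dappier pulls these from the
# configured data model and reference domain.
articles = [
    {"title": "Trade deadline recap", "published": date(2025, 5, 1)},
    {"title": "Playoff preview", "published": date(2025, 5, 3)},
    {"title": "Draft rumors", "published": date(2025, 4, 28)},
    {"title": "Injury report", "published": date(2025, 5, 2)},
]

def most_recent_top_k(items, k):
    """Mimic a 'most_recent' ranking: sort by publish date, keep the top k."""
    return sorted(items, key=lambda a: a["published"], reverse=True)[:k]

for art in most_recent_top_k(articles, k=3):
    print(art["published"].isoformat(), art["title"])
```

The real tool returns richer article payloads, but the shape of the operation is the same: rank candidates, keep the top k.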
from langchain.chat_models import init_chat_model
llm = init_chat_model(
    model="gpt-3.5-turbo",
    model_provider="openai",
    temperature=0,
)
llm_with_tools = llm.bind_tools([search_tool])
print("✅ llm_with_tools ready")
We instantiate the OpenAI chat model with GPT-3.5 Turbo at temperature 0 to keep responses deterministic, then bind the previously initialized search tool so the LLM can invoke real-time search. The final print statement confirms that our LLM is ready to call Dappier's tool within our conversation flow.
import datetime
from langchain_core.prompts import ChatPromptTemplate
today = datetime.datetime.today().strftime("%Y-%m-%d")
prompt = ChatPromptTemplate([
    ("system", f"You are a helpful assistant. Today is {today}."),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])
llm_chain = prompt | llm_with_tools
print("✅ llm_chain built")
We build the conversational chain by first constructing a ChatPromptTemplate that injects the current date into the system prompt and defines slots for the user input and prior messages. Piping the template (|) into llm_with_tools creates an llm_chain that formats the prompt, invokes the LLM (with real-time search capability), and handles the response in one seamless workflow. The final print confirms the chain is ready to drive end-to-end interactions.
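The | composition above can be demystified with a minimal, hypothetical re-implementation of LangChain-style piping (MiniRunnable and the toy stages here are illustrations, not LangChain APIs): each stage wraps a callable, and | composes stages left to right.

```python
# A minimal, hypothetical sketch of LangChain-style piping.
class MiniRunnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # (a | b).invoke(x) == b.invoke(a.invoke(x))
        return MiniRunnable(lambda value: other.invoke(self.invoke(value)))

# A toy "prompt" stage that formats the input dict into message text ...
mini_prompt = MiniRunnable(
    lambda d: f"System: Today is {d['today']}.\nHuman: {d['user_input']}"
)
# ... and a toy "LLM" stage that just reports what it received.
mini_llm = MiniRunnable(lambda text: f"[model received {len(text.splitlines())} lines]")

mini_chain = mini_prompt | mini_llm
print(mini_chain.invoke({"today": "2025-05-20", "user_input": "Hello"}))
```

The real LCEL runnables add batching, streaming, and config propagation on top, but the core idea is exactly this left-to-right function composition.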
from langchain_core.runnables import RunnableConfig, chain
@chain
def tool_chain(user_input: str, config: RunnableConfig):
    ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
    tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke(
        {"user_input": user_input, "messages": [ai_msg, *tool_msgs]},
        config=config,
    )
print("✅ tool_chain defined")
We define an end-to-end tool_chain that first sends our prompt to the LLM and captures any requested tool calls, then executes those calls via search_tool.batch, and finally feeds the initial AI message and the tool outputs back into the LLM for a final, grounded response. The @chain decorator turns the function into a runnable pipeline, so we can simply call tool_chain.invoke(...) to handle both reasoning and search in a single step.
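The two-pass pattern can be sketched with stand-in stubs (fake_llm and fake_search below are hypothetical stand-ins for llm_chain and search_tool, used only to show the control flow):

```python
# A stand-in sketch of the two-pass tool-calling loop.
def fake_llm(user_input, messages=None):
    if messages is None:
        # First pass: the model emits a tool call instead of a final answer.
        return {"content": None,
                "tool_calls": [{"name": "search", "args": {"query": user_input}}]}
    # Second pass: the model answers using the tool results it was handed.
    return {"content": f"Answer grounded in: {messages[-1]}", "tool_calls": []}

def fake_search(call):
    return f"search results for '{call['args']['query']}'"

def sketch_tool_chain(user_input):
    ai_msg = fake_llm(user_input)                                # pass 1: plan
    tool_msgs = [fake_search(c) for c in ai_msg["tool_calls"]]   # run each tool call
    final = fake_llm(user_input, messages=[ai_msg, *tool_msgs])  # pass 2: answer
    return final["content"]

print(sketch_tool_chain("latest sports news"))
```

The real chain follows the same shape: plan, execute tools, then answer with the tool results appended to the message history.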
res = search_tool.invoke({"query": "What happened at the last Wrestlemania"})
print("🔍 Search:", res)
We send a direct query, "What happened at the last Wrestlemania," to Dappier's real-time search engine and immediately print the structured results. This demonstrates how easily we can use search_tool.invoke to fetch up-to-the-minute information and inspect the raw response in the notebook.
rec = recommendation_tool.invoke({"query": "latest sports news"})
print("📄 Recommendation:", rec)
out = tool_chain.invoke("Who won the last Nobel Prize?")
print("🤖 Chain output:", out)
Finally, we showcase both the recommendation and full-chain workflows. First, we call recommendation_tool.invoke with "latest sports news" to retrieve model-driven article suggestions. Then we run tool_chain.invoke("Who won the last Nobel Prize?") to perform an end-to-end LLM query combined with real-time search, printing a synthesized answer that integrates live data.
In short, we now have a reliable baseline for embedding Dappier AI capabilities into any conversational workflow. We have seen how Dappier's real-time search gives our LLM access to fresh facts, and how the recommendation tool lets us surface context-relevant insights from proprietary data sources. From here, we can customize search parameters (e.g., refining query filters) or tune the recommendation settings (e.g., adjusting similarity_top_k and reference domains) to fit our domain.
Check out the Dappier platform and the notebook for this tutorial.
Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who researches applications in fields such as biomaterials and biomedical science. With a strong background in materials science, he explores new advancements and creates opportunities to contribute.
