A Coding Implementation of an Intelligent AI Assistant with Jina Search, LangChain, and Gemini for Real-Time Information Retrieval
In this tutorial, we demonstrate how to build an intelligent AI assistant by integrating LangChain, Gemini 2.0 Flash, and the Jina Search tool. By combining a powerful large language model (LLM) with an external search API, we create an assistant that can provide up-to-date information with citations. This step-by-step tutorial covers setting up API keys, installing the necessary libraries, binding tools to the Gemini model, and building a custom LangChain chain that dynamically calls external tools whenever the model needs fresh or specific information. By the end, we will have a fully functional, interactive AI assistant that responds to user queries with accurate, current, and well-sourced answers.
%pip install --quiet -U "langchain-community>=0.2.16" langchain langchain-google-genai
We install the required Python packages for this project: the LangChain framework for building AI applications, the LangChain community tools (version 0.2.16 or later), and LangChain's integration with Google's Gemini models. Together, these packages let us use Gemini models and external tools seamlessly within a LangChain pipeline.
import getpass
import os
import json
from typing import Dict, Any
We import the core modules for the project. getpass allows secure entry of API keys without displaying them on screen, while os manages environment variables and file paths. json handles JSON data structures, and typing provides type hints for dictionaries and function parameters, improving code readability and maintainability.
if not os.environ.get("JINA_API_KEY"):
    os.environ["JINA_API_KEY"] = getpass.getpass("Enter your Jina API key: ")

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google/Gemini API key: ")
We set the required API keys for Jina and Google Gemini as environment variables. If a key is not already defined in the environment, the script prompts for it securely via the getpass module, keeping the keys hidden for security. This approach grants seamless access to both services without hard-coding sensitive information into the script.
from langchain_community.tools import JinaSearch
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage
print("🔧 Setting up tools and model...")
We import the key modules and classes from the LangChain ecosystem: the JinaSearch tool for web search, ChatGoogleGenerativeAI for accessing Google's Gemini models, and core LangChain classes including ChatPromptTemplate, RunnableConfig, and the message types HumanMessage, AIMessage, and ToolMessage. Together, these components integrate external tools with Gemini for dynamic, AI-driven information retrieval. The print statement confirms that the setup process has begun.
search_tool = JinaSearch()
print(f"✅ Jina Search tool initialized: {search_tool.name}")
print("n🔍 Testing Jina Search directly:")
direct_search_result = search_tool.invoke({"query": "what is langgraph"})
print(f"Direct search result preview: {direct_search_result[:200]}...")
We initialize the Jina search tool by creating a JinaSearch() instance and confirming it is ready. The tool handles web search queries within the LangChain ecosystem. The script then runs a direct test query, "what is langgraph", using the invoke method and prints a preview of the results. This step verifies that the search tool works properly before integrating it into the larger assistant workflow.
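The tool returns its results as one long string. As a minimal, hedged sketch (the exact result schema is an assumption here, not guaranteed by the source; JinaSearch often returns a JSON-encoded list of result objects, but this can vary by langchain-community version), we can try to parse the raw result into structured entries using the json module imported earlier:
# Hedged sketch: attempt to parse the raw result string; the schema
# ("title"/"link" keys in a list of dicts) is an assumption that may
# vary by version, so we fall back to plain text on failure.
try:
    parsed = json.loads(direct_search_result)
    if isinstance(parsed, list):
        for item in parsed[:3]:
            if isinstance(item, dict):
                print(f"- {item.get('title', 'untitled')}: {item.get('link', 'no link')}")
except (json.JSONDecodeError, TypeError):
    print(direct_search_result[:300])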
gemini_model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0.1,
    convert_system_message_to_human=True
)
print("✅ Gemini model initialized")
We initialize the Gemini 2.0 Flash model using LangChain's ChatGoogleGenerativeAI class. The model is set to a low temperature (0.1) for more deterministic responses, and convert_system_message_to_human=True ensures that system-level prompts are converted into a form the Gemini API handles correctly. A final print statement confirms that the Gemini model is ready.
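Before wiring the model into a chain, a quick sanity check can confirm that the API key and model name are valid. This optional snippet simply invokes the model with a plain string; the prompt text is illustrative:
# Optional sanity check: a plain-string invoke returns an AIMessage
sanity = gemini_model.invoke("Reply with the single word: ready")
print(f"Model sanity check: {sanity.content}")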
detailed_prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an intelligent assistant with access to web search capabilities.
When users ask questions, you can use the Jina search tool to find current information.

Instructions:
1. If the question requires recent or specific information, use the search tool
2. Provide comprehensive answers based on the search results
3. Always cite your sources when using search results
4. Be helpful and informative in your responses"""),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])
We define a prompt template with ChatPromptTemplate.from_messages() to guide the AI's behavior. It includes a system message outlining the assistant's role, a human message slot for the user's query, and a placeholder for the tool messages generated during tool calls. This structured prompt ensures the AI provides helpful, informative, and well-cited responses while seamlessly weaving search results into the conversation.
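To inspect exactly what the template produces, you can render it with sample values using format_messages(); since no tool calls have happened yet, the messages placeholder takes an empty list. The sample query here is illustrative:
# Render the template to see the message list it produces
preview = detailed_prompt.format_messages(user_input="what is langgraph", messages=[])
for msg in preview:
    print(f"[{msg.type}] {str(msg.content)[:80]}")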
gemini_with_tools = gemini_model.bind_tools([search_tool])
print("✅ Tools bound to Gemini model")
main_chain = detailed_prompt | gemini_with_tools
def format_tool_result(tool_call: Dict[str, Any], tool_result: str) -> str:
    """Format tool results for better readability"""
    return f"Search Results for '{tool_call['args']['query']}':\n{tool_result[:800]}..."
We bind the Jina search tool to the Gemini model with bind_tools(), so the model can invoke the search tool whenever it is needed. main_chain combines the structured prompt template with the tool-enhanced Gemini model, creating a seamless workflow for handling user input and dynamic tool calls. In addition, the format_tool_result helper formats search results for clear, readable display, so users can easily interpret the output of search queries.
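When the tool-bound model decides a search is needed, its reply carries a tool_calls list instead of final prose. This small sketch shows how to inspect that structure; each entry is a dict with name, args, and id keys, and the probe question is illustrative (the model may also answer directly without any tool call):
# Probe the tool-bound model and inspect any requested tool calls
probe = gemini_with_tools.invoke("What changed in LangChain this month?")
if probe.tool_calls:
    for tc in probe.tool_calls:
        print(f"tool: {tc['name']}, args: {tc['args']}, id: {tc['id']}")
else:
    print("Model answered directly:", probe.content[:100])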
@chain
def enhanced_search_chain(user_input: str, config: RunnableConfig):
    """
    Enhanced chain that handles tool calls and provides detailed responses
    """
    print(f"\n🤖 Processing query: '{user_input}'")

    input_data = {"user_input": user_input}

    print("📤 Sending to Gemini...")
    ai_response = main_chain.invoke(input_data, config=config)

    if ai_response.tool_calls:
        print(f"🛠️ AI requested {len(ai_response.tool_calls)} tool call(s)")

        tool_messages = []
        for i, tool_call in enumerate(ai_response.tool_calls):
            print(f" 🔍 Executing search {i+1}: {tool_call['args']['query']}")

            tool_result = search_tool.invoke(tool_call)

            tool_msg = ToolMessage(
                content=tool_result,
                tool_call_id=tool_call['id']
            )
            tool_messages.append(tool_msg)

        print("📥 Getting final response with search results...")
        final_input = {
            **input_data,
            "messages": [ai_response] + tool_messages
        }
        final_response = main_chain.invoke(final_input, config=config)
        return final_response
    else:
        print("ℹ️ No tool calls needed")
        return ai_response
We define enhanced_search_chain using LangChain's @chain decorator, enabling it to handle user queries with dynamic tool use. It takes the user input and a configuration object, passes the input through the main chain (the prompt plus Gemini with bound tools), and checks whether the AI requested any tool calls (e.g., a web search via Jina). If tool calls are present, it executes each search, wraps the result in a ToolMessage, and re-invokes the chain with those results appended to obtain a final, context-enriched response. If no tool calls are made, it returns the AI's response directly.
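Note that the chain above handles a single round of tool calls: if the model requested another search after reading the first results, that second request would go unanswered. One possible extension, sketched here under the assumption that the same main_chain and search_tool are in scope (multi_round_search and its max_rounds safety cap are illustrative additions, not part of the original code), loops until the model stops requesting tools:
# Hypothetical multi-round variant: keep answering tool calls until the
# model produces a final response, capped at max_rounds iterations.
def multi_round_search(user_input: str, max_rounds: int = 3):
    messages = []
    response = main_chain.invoke({"user_input": user_input, "messages": messages})
    for _ in range(max_rounds):
        if not response.tool_calls:
            break
        messages.append(response)
        for tool_call in response.tool_calls:
            result = search_tool.invoke(tool_call["args"])
            messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))
        response = main_chain.invoke({"user_input": user_input, "messages": messages})
    return response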
def test_search_chain():
    """Test the search chain with various queries"""
    test_queries = [
        "what is langgraph",
        "latest developments in AI for 2024",
        "how does langchain work with different LLMs"
    ]

    print("\n" + "="*60)
    print("🧪 TESTING ENHANCED SEARCH CHAIN")
    print("="*60)

    for i, query in enumerate(test_queries, 1):
        print(f"\n📝 Test {i}: {query}")
        print("-" * 50)

        try:
            response = enhanced_search_chain.invoke(query)
            print(f"✅ Response: {response.content[:300]}...")

            if hasattr(response, 'tool_calls') and response.tool_calls:
                print(f"🛠️ Used {len(response.tool_calls)} tool call(s)")

        except Exception as e:
            print(f"❌ Error: {str(e)}")

        print("-" * 50)
The test_search_chain() function validates the entire assistant setup by running a series of test queries through enhanced_search_chain. It defines test prompts covering tools, AI topics, and LangChain integrations, then prints each response along with whether any tool calls were used. This confirms that the assistant can trigger web searches, process the results, and return useful information, ensuring a robust and interactive system.
if __name__ == "__main__":
    print("\n🚀 Starting enhanced LangChain + Gemini + Jina Search demo...")

    test_search_chain()

    print("\n" + "="*60)
    print("💬 INTERACTIVE MODE - Ask me anything! (type 'quit' to exit)")
    print("="*60)

    while True:
        user_query = input("\n🗣️ Your question: ").strip()
        if user_query.lower() in ['quit', 'exit', 'bye']:
            print("👋 Goodbye!")
            break
        if user_query:
            try:
                response = enhanced_search_chain.invoke(user_query)
                print(f"\n🤖 Response:\n{response.content}")
            except Exception as e:
                print(f"❌ Error: {str(e)}")
Finally, we run the assistant as a script when the file is executed directly. It first calls test_search_chain() to verify the system with predefined queries, confirming the setup works properly. It then enters interactive mode, letting the user type custom questions and receive AI-generated responses enriched with dynamic search results when needed. The loop continues until the user types 'quit', 'exit', or 'bye', providing an intuitive, hands-on way to interact with the system.
In short, we have successfully built an enhanced AI assistant that combines LangChain's modular framework, Gemini 2.0 Flash's generation capabilities, and Jina Search's real-time web retrieval. This hybrid approach shows how an AI model can extend its knowledge beyond static training data, providing users with timely, relevant, and sourced information. You can extend this project further by integrating additional tools, customizing the prompts, or deploying the assistant as an API or web application. This foundation is a solid starting point for building intelligent systems that are both capable and context-aware.
Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project.