
A Coding Guide to Different Function Calling Methods for Creating Real-Time, Tool-Enabled Conversational AI Agents

Function calling lets an LLM act as a bridge between natural-language prompts and real code or APIs. Instead of simply generating text, the model decides when to invoke a predefined function, emits a structured JSON call with the function name and arguments, and then waits for your application to execute that call and return the result. This back-and-forth can repeat, with multiple functions called in sequence, enabling rich multi-step interactions entirely under conversational control. In this tutorial, we implement a weather assistant with Gemini 2.0 Flash to demonstrate how to set up and manage that function-calling cycle, and we walk through several function calling variants. By integrating function calls, we turn a chat interface into a dynamic tool for real-time tasks, whether fetching live weather data, checking order status, scheduling appointments, or updating databases. Users no longer fill out complex forms or navigate multiple screens; they simply describe what they need, and the LLM seamlessly orchestrates the underlying actions. This natural-language automation makes it easy to build AI agents that can access external data sources, perform transactions, or trigger workflows within a single conversation.
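
To make this loop concrete, here is a minimal sketch of the payload shape involved; the values are illustrative, and the real SDK objects appear later when we print function_call:

# Illustrative only: the model emits a structured call like this...
call = {"name": "get_weather_forecast", "args": {"location": "Berlin, Germany", "date": "2025-03-04"}}
# ...your application executes the named function with those args, sends the
# result back to the model, and the model writes the final natural-language answer.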

Using the Google Gemini 2.0 Flash Function Calling Feature

!pip install "google-genai>=1.0.0" geopy requests

We install the Gemini Python SDK (google-genai ≥ 1.0.0), along with geopy (to convert location names into coordinates) and requests (for HTTP calls), ensuring that all the core dependencies for our Colab weather assistant are in place.

import os
from google import genai


GEMINI_API_KEY = "Use_Your_API_Key"  


client = genai.Client(api_key=GEMINI_API_KEY)


model_id = "gemini-2.0-flash"

We import the Gemini SDK, set the API key, and create a genai.Client instance configured to use the gemini-2.0-flash model, establishing the foundation for all subsequent function calling requests.

res = client.models.generate_content(
    model=model_id,
    contents=["Tell me 1 good fact about Nuremberg."]
)
print(res.text)

We send a user prompt (“Tell me 1 good fact about Nuremberg.”) to the Gemini 2.0 Flash model via generate_content, then print the model’s text reply, demonstrating a basic, end-to-end text generation call with the SDK.

Calling functions using a JSON schema

weather_function = {
    "name": "get_weather_forecast",
    "description": "Retrieves the weather using Open-Meteo API for a given location (city) and a date (yyyy-mm-dd). Returns a list dictionary with the time and temperature for each hour.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g., San Francisco, CA"
            },
            "date": {
                "type": "string",
                "description": "the forecasting date for when to get the weather format (yyyy-mm-dd)"
            }
        },
        "required": ["location","date"]
    }
}

Here we define the JSON schema for our get_weather_forecast tool: its name, a description that guides Gemini on when to use it, and its exact input parameters (location and date) with their types, descriptions, and required fields, so the model can emit valid function calls.
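
Since this declaration is plain JSON Schema, one defensive option (a sketch using the third-party jsonschema package, which is not among this tutorial’s dependencies) is to validate model-emitted arguments before executing anything:

from jsonschema import validate, ValidationError  # pip install jsonschema

def validate_args(args: dict) -> bool:
    # Check the model's arguments against the tool's parameter schema.
    try:
        validate(instance=args, schema=weather_function["parameters"])
        return True
    except ValidationError as e:
        print(f"Invalid tool arguments: {e.message}")
        return False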

from google.genai.types import GenerateContentConfig


config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that use tools to access and retrieve information from a weather API. Today is 2025-03-04.",
    tools=[{"function_declarations": [weather_function]}],
)

We create a GenerateContentConfig that tells Gemini it is a weather assistant and registers the weather function under tools, so the model knows to generate a structured call when asked for forecast data.

response = client.models.generate_content(
    model=model_id,
    contents="Whats the weather in Berlin today?"
)
print(response.text)

This call sends the bare prompt (“Whats the weather in Berlin today?”) without our config (and therefore without the function declaration), so Gemini falls back to plain-text completion, offering generic advice instead of invoking the weather forecast tool.

response = client.models.generate_content(
    model=model_id,
    config=config,
    contents="Whats the weather in Berlin today?"
)


for part in response.candidates[0].content.parts:
    print(part.function_call)

By passing config (which includes the JSON-schema tool), Gemini recognizes that it should call get_weather_forecast instead of replying in plain text. The loop over response.candidates[0].content.parts then prints each part’s .function_call object, showing exactly which function the model decided to call and with which arguments.
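
For the Berlin prompt, the printed object looks roughly like this (illustrative output; the exact arguments and repr depend on the model and SDK version):

FunctionCall(id=None, args={'location': 'Berlin', 'date': '2025-03-04'}, name='get_weather_forecast')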

from google.genai import types
from geopy.geocoders import Nominatim
import requests


geolocator = Nominatim(user_agent="weather-app")
def get_weather_forecast(location, date):
    location = geolocator.geocode(location)
    if location:
        try:
            response = requests.get(f"
            data = response.json()
            return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"])}
        except Exception as e:
            return {"error": str(e)}
    else:
        return {"error": "Location not found"}


functions = {
    "get_weather_forecast": get_weather_forecast
}


def call_function(function_name, **kwargs):
    return functions[function_name](**kwargs)


def function_call_loop(prompt):
    contents = [types.Content(role="user", parts=[types.Part(text=prompt)])]
    response = client.models.generate_content(
        model=model_id,
        config=config,
        contents=contents
    )
    for part in response.candidates[0].content.parts:
        contents.append(types.Content(role="model", parts=[part]))
        if part.function_call:
            print("Tool call detected")
            function_call = part.function_call
            print(f"Calling tool: {function_call.name} with args: {function_call.args}")
            tool_result = call_function(function_call.name, **function_call.args)
            function_response_part = types.Part.from_function_response(
                name=function_call.name,
                response={"result": tool_result},
            )
            contents.append(types.Content(role="user", parts=[function_response_part]))
            print(f"Calling LLM with tool results")
            func_gen_response = client.models.generate_content(
                model=model_id, config=config, contents=contents
            )
            contents.append(types.Content(role="model", parts=[func_gen_response]))
    return contents[-1].parts[0].text.strip()
   
result = function_call_loop("Whats the weather in Berlin today?")
print(result)

We implement a complete agentic loop: it sends the prompt to Gemini, checks the response for a function call, executes get_weather_forecast (geocoding with geopy, then an HTTP request to Open-Meteo), and feeds the tool’s result back into the model to produce and return the final conversational answer.
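
The loop above handles one tool call per turn. As a sketch that goes beyond the original snippet, the same pattern generalizes to the sequential multi-call interactions mentioned in the introduction by looping until the model stops requesting tools:

def agent_loop(prompt, max_turns=5):
    # Keep exchanging tool calls and results until the model answers in plain text.
    contents = [types.Content(role="user", parts=[types.Part(text=prompt)])]
    for _ in range(max_turns):
        response = client.models.generate_content(
            model=model_id, config=config, contents=contents
        )
        content = response.candidates[0].content
        contents.append(content)
        calls = [p.function_call for p in content.parts if p.function_call]
        if not calls:
            return response.text.strip()  # no tool requested: this is the final answer
        result_parts = [
            types.Part.from_function_response(
                name=c.name, response={"result": call_function(c.name, **c.args)}
            )
            for c in calls
        ]
        contents.append(types.Content(role="user", parts=result_parts))
    return "Stopped after max_turns without a final answer."

print(agent_loop("Whats the weather in Berlin today?"))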

Calling functions using Python functions

from geopy.geocoders import Nominatim
import requests


geolocator = Nominatim(user_agent="weather-app")


def get_weather_forecast(location: str, date: str) -> dict:
    """
    Retrieves the weather using Open-Meteo API for a given location (city) and a date (yyyy-mm-dd). Returns a dictionary with the time and temperature for each hour.
   
    Args:
        location (str): The city and state, e.g., San Francisco, CA
        date (str): The forecasting date for when to get the weather format (yyyy-mm-dd)
    Returns:
        Dict[str, float]: A dictionary with the time as key and the temperature as value
    """
    location = geolocator.geocode(location)
    if location:
        try:
            response = requests.get(f"
            data = response.json()
            return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"])}
        except Exception as e:
            return {"error": str(e)}
    else:
        return {"error": "Location not found"}

The get_weather_forecast function first uses geopy’s Nominatim to convert the city/state string into coordinates, then sends an HTTP request to the Open-Meteo API for the hourly temperature data on the given date, and returns a dictionary mapping each timestamp to its temperature. It also handles errors gracefully, returning an error message if the location is not found or the API call fails.
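
Before wiring the function into Gemini, you can sanity-check it on its own (requires network access; the returned keys are ISO timestamps such as 2025-03-04T00:00):

print(get_weather_forecast("Berlin, Germany", "2025-03-04"))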

from google.genai.types import GenerateContentConfig


config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that can help with weather related questions. Today is 2025-03-04.", # to give the LLM context on the current date.
    tools=[get_weather_forecast],
    automatic_function_calling={"disable": True}
)

This configuration registers your Python get_weather_forecast function directly as a tool. It sets a clear system prompt (including the date) for context, and disables automatic_function_calling so that Gemini emits the function call payload instead of executing the tool internally.

r = client.models.generate_content(
    model=model_id,
    config=config,
    contents="Whats the weather in Berlin today?"
)
for part in r.candidates[0].content.parts:
    print(part.function_call)

By sending the prompt with this custom configuration (Python tool included, auto-calling disabled), the snippet captures Gemini’s raw function-call decision. It then loops over each response part to print the .function_call object, letting you inspect exactly which tool the model wants to invoke and with which arguments.

from google.genai.types import GenerateContentConfig


config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that use tools to access and retrieve information from a weather API. Today is 2025-03-04.", # to give the LLM context on the current date.
    tools=[get_weather_forecast],
)


r = client.models.generate_content(
    model=model_id,
    config=config,
    contents="Whats the weather in Berlin today?"
)


print(r.text)

Using this configuration (which includes your get_weather_forecast function and leaves automatic function calling enabled by default), calling generate_content makes Gemini invoke your weather tool behind the scenes and then return a natural-language reply. print(r.text) outputs that final response, including the actual temperature forecast for Berlin on the specified date.
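
If you keep automatic calling but want guardrails, the SDK’s automatic function calling config also accepts (to the best of our knowledge; verify against your installed google-genai version) a maximum_remote_calls limit, and the response records the tool turns the SDK executed:

config_capped = GenerateContentConfig(
    system_instruction="You are a helpful assistant that use tools to access and retrieve information from a weather API. Today is 2025-03-04.",
    tools=[get_weather_forecast],
    automatic_function_calling={"maximum_remote_calls": 3},  # assumed field name; check your SDK version
)

r = client.models.generate_content(
    model=model_id,
    config=config_capped,
    contents="Whats the weather in Berlin today?"
)
print(r.text)
print(r.automatic_function_calling_history)  # tool calls made on our behalf (assumed attribute)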

from google.genai.types import GenerateContentConfig


config = GenerateContentConfig(
    system_instruction="You are a helpful assistant that use tools to access and retrieve information from a weather API.",
    tools=[get_weather_forecast],
)


prompt = f"""
Today is 2025-03-04. You are chatting with Andrew, you have access to more information about him.


User Context:
- name: Andrew
- location: Nuremberg


User: Can i wear a T-shirt later today?"""


r = client.models.generate_content(
    model=model_id,
    config=config,
    contents=prompt
)


print(r.text)

We extend the assistant with personal context, telling Gemini Andrew’s name and location (Nuremberg) and asking whether it is T-shirt weather, while still using the get_weather_forecast tool under the hood. It then prints the model’s natural-language recommendation based on the day’s actual forecast.
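
All the calls so far were one-shot generate_content requests. For a stateful conversation, a natural extension (a sketch using the SDK’s chats interface; verify the details against your installed google-genai version) keeps the tool available across turns:

chat = client.chats.create(model=model_id, config=config)

print(chat.send_message(
    "Today is 2025-03-04. I'm Andrew and I live in Nuremberg. Can I wear a T-shirt later today?"
).text)

print(chat.send_message("And what about tomorrow?").text)  # follow-up reuses the conversation context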

In short, we now know how to define functions (via a JSON schema or a Python signature), configure Gemini 2.0 Flash to detect and emit function calls, and implement the agentic loop that executes those calls and composes the final response. With these building blocks, we can extend any LLM into a tool-capable assistant that automates workflows, retrieves real-time data, and interacts with your code or APIs as easily as chatting with a colleague.

