
A Comprehensive Tutorial on the Five Levels of Agentic AI Architecture: From Basic Prompt Responses to Fully Autonomous Code Generation and Execution

In this tutorial, we explore five levels of agentic architecture, from the simplest language-model calls to a fully autonomous code-generating system. The tutorial is designed to run seamlessly on Google Colab. Starting with a basic "simple processor" that merely echoes the model's output, you will progressively build routing logic, integrate external tools, orchestrate multi-step workflows, and ultimately empower the model to plan, validate, refine, and execute its own Python code. Throughout each section you will find detailed explanations, self-contained demo functions, and clear guidance on how to balance human control and machine autonomy in real AI applications.

import os
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
import re
import json
import time
import random
from IPython.display import clear_output

We import the core Python and third-party libraries: os, time, and torch for environment and execution control, and Hugging Face's Transformers (pipeline, AutoTokenizer, AutoModelForCausalLM) for model loading and inference. We also use re and json to parse LLM outputs, random to seed simulated data, and clear_output to keep the Colab interface tidy.

MODEL_NAME = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
def get_model_and_tokenizer():
    if not hasattr(get_model_and_tokenizer, "model"):
        print(f"Loading model {MODEL_NAME}...")
        tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
        model = AutoModelForCausalLM.from_pretrained(
            MODEL_NAME,
            torch_dtype=torch.float16,
            device_map="auto",
            low_cpu_mem_usage=True
        )
        get_model_and_tokenizer.model = model
        get_model_and_tokenizer.tokenizer = tokenizer
        print("Model loaded successfully!")
   
    return get_model_and_tokenizer.model, get_model_and_tokenizer.tokenizer

Here we define MODEL_NAME to point to the TinyLlama 1.1B Chat model and implement the lazy helper get_model_and_tokenizer(), which downloads and initializes the tokenizer and model only once on the first call, caches them as attributes on the function object to minimize overhead, and simply returns the cached instances to all subsequent callers.


def generate_text(prompt, max_length=512):
    model, tokenizer = get_model_and_tokenizer()
   
    messages = [{"role": "user", "content": prompt}]
    formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False)
   
    inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
   
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_length,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
        )
   
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
   
    response = generated_text.split("ASSISTANT: ")[-1].strip()
    return response

The generate_text function wraps the TinyLlama inference workflow: it retrieves the cached model and tokenizer, formats the user prompt with the chat template, tokenizes the input and moves it to the model's device, and then samples a response using the temperature and top-p settings. After generation, it decodes the output and splits on the "ASSISTANT: " marker to return only the assistant's reply.
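As a quick sanity check, the helper can be called directly once it is defined; the prompt below is only an illustrative example, and the output will vary because sampling is enabled:

# Illustrative one-off call; the first invocation triggers the model download.
reply = generate_text("Explain what a tokenizer does in one sentence.", max_length=64)
print(reply)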

Level 1: Simple Processor

At the simplest level, the code defines a direct text-generation pipeline that treats the model purely as a language processor. When the user provides a prompt, the simple_processor function calls the generate_text helper built on the TinyLlama 1.1B Chat model to produce a free-form response, which is then displayed directly. Under the hood, generate_text ensures the model and tokenizer are loaded only once via get_model_and_tokenizer, formats the prompt with the chat template, samples with the diversity parameters, and extracts the assistant's reply by splitting on the "ASSISTANT: " marker. This level demonstrates the most basic interaction pattern: input is received, output is generated, and program flow remains entirely under human control.

def simple_processor(prompt):
    """Level 1: Simple Processor - Model has no impact on program flow"""
    response = generate_text(prompt)
    return response


def demo_level1():
    print("n" + "="*50)
    print("LEVEL 1: SIMPLE PROCESSOR DEMO")
    print("="*50)
    print("At this level, the AI has no control over program flow.")
    print("It simply takes input and produces output.n")
   
    user_input = input("Enter your question or prompt: ") or "Write a short poem about artificial intelligence."
    print("nProcessing your request...n")
   
    output = simple_processor(user_input)
    print("OUTPUT:")
    print("-"*50)
    print(output)
    print("-"*50)

The simple_processor function embodies the simple-processor level of our agent hierarchy by treating the model purely as a text generator: it accepts the user-provided prompt, delegates to generate_text, and returns whatever the model produces without any branching or decision logic. The accompanying demo_level1 routine provides a minimal interactive loop: it prints a clear header, solicits user input (with a sensible default), calls simple_processor, and displays the raw output, illustrating the most basic prompt-and-response workflow in which the AI has no influence on program flow.

Level 2: Router

The second level introduces conditional routing based on the model's classification of the user query. The router_agent function first asks the model to classify the query as "technical", "creative", or "factual", and then normalizes the model's response to one of those categories. Depending on which category is detected, the query is dispatched to a specialized handler, handle_technical_query, handle_creative_query, or handle_factual_query, each of which wraps the user's query in a system-style prompt tailored to the chosen tone and purpose. This routing mechanism gives the model partial control over program flow, letting it steer the subsequent interaction path while still relying on human-defined handlers to generate the final output.

def router_agent(user_query):
    """Level 2: Router - Model determines basic program flow"""
   
    category_prompt = f"""Classify the following query into one of these categories:
    'technical', 'creative', or 'factual'.
   
    Query: {user_query}
   
    Return ONLY the category name and nothing else."""
   
    category_response = generate_text(category_prompt)
   
    category = category_response.lower()
    if "technical" in category:
        category = "technical"
    elif "creative" in category:
        category = "creative"
    else:
        category = "factual"
   
    print(f"Query classified as: {category}")
   
    if category == "technical":
        return handle_technical_query(user_query)
    elif category == "creative":
        return handle_creative_query(user_query)
    else:  
        return handle_factual_query(user_query)


def handle_technical_query(query):
    system_prompt = f"""You are a technical assistant. Provide detailed technical explanations.
   
    User query: {query}"""
   
    response = generate_text(system_prompt)
    return f"[Technical Response]n{response}"


def handle_creative_query(query):
    system_prompt = f"""You are a creative assistant. Be imaginative and inspiring.
   
    User query: {query}"""
   
    response = generate_text(system_prompt)
    return f"[Creative Response]n{response}"


def handle_factual_query(query):
    system_prompt = f"""You are a factual assistant. Provide accurate information concisely.
   
    User query: {query}"""
   
    response = generate_text(system_prompt)
    return f"[Factual Response]n{response}"


def demo_level2():
    print("n" + "="*50)
    print("LEVEL 2: ROUTER DEMO")
    print("="*50)
    print("At this level, the AI determines basic program flow.")
    print("It decides which processing path to take.n")
   
    user_query = input("Enter your question or prompt: ") or "How do neural networks work?"
    print("nProcessing your request...n")
   
    result = router_agent(user_query)
    print("OUTPUT:")
    print("-"*50)
    print(result)
    print("-"*50)

The router_agent function asks the model to classify the user's query as "technical", "creative", or "factual", normalizes the answer, and dispatches the query to the corresponding handler (handle_technical_query, handle_creative_query, or handle_factual_query). The demo_level2 routine provides a clean CLI-style interface: it prints a header, accepts input (with a default), calls router_agent, and displays the classified response, showing how the model can exercise basic control over program flow by selecting the processing path.

Level 3: Tool Calling

At the third level, the model decides which of several external tools to invoke by following a JSON-based function-selection protocol embedded in the prompt. tool_calling_agent presents the user's question alongside a menu of potential tools, including a weather lookup, a simulated web search, a current date-and-time lookup, or a direct response, and instructs the model to reply with a valid JSON message naming the selected tool and its parameters. A regular expression then extracts the first JSON object from the model's output, and if parsing fails the code safely falls back to a direct response. Once the tool and parameters are determined, the corresponding Python function is executed and its result captured, and a final model call integrates that result into a coherent answer. This pattern brings external APIs and utilities into the LLM's reasoning by letting the model orchestrate which ones to call.

def tool_calling_agent(user_query):
    """Level 3: Tool Calling - Model determines how functions are executed"""
   
    tool_selection_prompt = f"""Based on the user query, select the most appropriate tool from the following list:
    1. get_weather: Get the current weather for a location
    2. search_information: Search for specific information on a topic
    3. get_date_time: Get current date and time
    4. direct_response: Provide a direct response without using tools
   
    USER QUERY: {user_query}
   
    INSTRUCTIONS:
    - Return your response in valid JSON format
    - Include the tool name and any required parameters
    - For get_weather, include location parameter
    - For search_information, include query and depth parameter (basic or detailed)
    - For get_date_time, include timezone parameter (optional)
    - For direct_response, no parameters needed
   
    Example output format: {{"tool": "get_weather", "parameters": {{"location": "New York"}}}}"""
   
    tool_selection_response = generate_text(tool_selection_prompt)
   
    try:
        json_match = re.search(r'({.*})', tool_selection_response, re.DOTALL)
        if json_match:
            tool_selection = json.loads(json_match.group(1))
        else:
            print("Could not parse tool selection. Defaulting to direct response.")
            tool_selection = {"tool": "direct_response", "parameters": {}}
    except json.JSONDecodeError:
        print("Invalid JSON in tool selection. Defaulting to direct response.")
        tool_selection = {"tool": "direct_response", "parameters": {}}
   
    tool_name = tool_selection.get("tool", "direct_response")
    parameters = tool_selection.get("parameters", {})
   
    print(f"Selected tool: {tool_name}")
   
    if tool_name == "get_weather":
        location = parameters.get("location", "Unknown")
        tool_result = get_weather(location)
    elif tool_name == "search_information":
        query = parameters.get("query", user_query)
        depth = parameters.get("depth", "basic")
        tool_result = search_information(query, depth)
    elif tool_name == "get_date_time":
        timezone = parameters.get("timezone", "UTC")
        tool_result = get_date_time(timezone)
    else:
        return generate_text(f"Please provide a helpful response to: {user_query}")
   
    final_prompt = f"""User Query: {user_query}
    Tool Used: {tool_name}
    Tool Result: {json.dumps(tool_result)}
   
    Based on the user's query and the tool result above, provide a helpful response."""
   
    final_response = generate_text(final_prompt)
    return final_response


def get_weather(location):
    weather_conditions = ["Sunny", "Partly cloudy", "Overcast", "Light rain", "Heavy rain", "Thunderstorms", "Snowy", "Foggy"]
    temperatures = {
        "cold": list(range(-10, 10)),
        "mild": list(range(10, 25)),
        "hot": list(range(25, 40))
    }
   
    location_hash = sum(ord(c) for c in location)
    condition_index = location_hash % len(weather_conditions)
    season = ["winter", "spring", "summer", "fall"][location_hash % 4]
   
    temp_range = temperatures["cold"] if season in ["winter", "fall"] else temperatures["hot"] if season == "summer" else temperatures["mild"]
    temperature = random.choice(temp_range)
   
    return {
        "location": location,
        "temperature": f"{temperature}°C",
        "conditions": weather_conditions[condition_index],
        "humidity": f"{random.randint(30, 90)}%"
    }


def search_information(query, depth="basic"):
    mock_results = [
        f"First result about {query}",
        f"Second result discussing {query}",
        f"Third result analyzing {query}"
    ]
   
    if depth == "detailed":
        mock_results.extend([
            f"Fourth detailed analysis of {query}",
            f"Fifth comprehensive overview of {query}",
            f"Sixth academic paper on {query}"
        ])
   
    return {
        "query": query,
        "results": mock_results,
        "depth": depth,
        "sources": [f"source{i}.com" for i in range(1, len(mock_results) + 1)]
    }


def get_date_time(timezone="UTC"):
    current_time = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
    return {
        "current_datetime": current_time,
        "timezone": timezone
    }


def demo_level3():
    print("n" + "="*50)
    print("LEVEL 3: TOOL CALLING DEMO")
    print("="*50)
    print("At this level, the AI selects which tools to use and with what parameters.")
    print("It can process the results from tools to create a final response.n")
   
    user_query = input("Enter your question or prompt: ") or "What's the weather like in San Francisco?"
    print("nProcessing your request...n")
   
    result = tool_calling_agent(user_query)
    print("OUTPUT:")
    print("-"*50)
    print(result)
    print("-"*50)

In the Level 3 implementation, the tool_calling_agent function prompts the model to choose from a predefined set of utilities, such as a weather lookup, a simulated web search, or a date/time lookup, by returning a JSON object with the selected tool name and its parameters. It then safely parses that JSON, calls the corresponding Python function to obtain structured data, and makes a follow-up model call that integrates the tool's output into a coherent, user-facing response.
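To make the parsing step concrete, here is a small, self-contained illustration of the same regex-plus-json.loads fallback used above, applied to a hypothetical model reply (the reply text is invented for demonstration):

import json
import re

# Hypothetical raw model reply; real replies vary and may contain extra prose.
raw_reply = 'Sure! {"tool": "get_weather", "parameters": {"location": "San Francisco"}}'

match = re.search(r'({.*})', raw_reply, re.DOTALL)
try:
    selection = json.loads(match.group(1)) if match else {"tool": "direct_response", "parameters": {}}
except json.JSONDecodeError:
    selection = {"tool": "direct_response", "parameters": {}}

print(selection["tool"])                    # get_weather
print(selection["parameters"]["location"])  # San Francisco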

Level 4: Multi-Step Agent

Level 4 extends the tool-calling pattern into a complete multi-step agent that manages its own workflow and state. The MultiStepAgent class maintains an internal memory of user input, tool outputs, and agent actions. Each iteration builds a planning prompt that summarizes the entire memory and asks the model either to select one of several tools, such as a simulated web search, information extraction, text summarization, or report creation, or to finish the task with a final output. After executing the selected tool and appending its result to memory, the process repeats until the model signals completion or the maximum number of steps is reached. Finally, the agent synthesizes the accumulated memory into a cohesive final response. This structure shows how an LLM can orchestrate a complex multi-stage process, consulting external functions and refining its plan based on earlier results.

class MultiStepAgent:
    """Level 4: Multi-Step Agent - Model controls iteration and program continuation"""
   
    def __init__(self):
        self.tools = {
            "search_web": self.search_web,
            "extract_info": self.extract_info,
            "summarize_text": self.summarize_text,
            "create_report": self.create_report
        }
        self.memory = []
        self.max_steps = 5
   
    def run(self, user_task):
        self.memory.append({"role": "user", "content": user_task})
       
        steps_taken = 0
        while steps_taken < self.max_steps:
            # Reconstructed loop body (an assumption; the source excerpt is truncated here):
            # summarize memory, ask the model for its next action as JSON, then run the tool.
            context = "\n".join(f"{m['role']}: {m['content']}" for m in self.memory)
            plan_prompt = f"""{context}

Available tools: {', '.join(self.tools.keys())}
Decide the next step. Respond ONLY with JSON in the form
{{"action": "<tool name or 'complete'>", "input": "<input for the tool, or the final answer>"}}"""
            decision = generate_text(plan_prompt)
            match = re.search(r'({.*})', decision, re.DOTALL)
            try:
                action = json.loads(match.group(1)) if match else {"action": "complete", "input": decision}
            except json.JSONDecodeError:
                action = {"action": "complete", "input": decision}
           
            if action.get("action") not in self.tools:
                break
            tool_output = self.tools[action["action"]](action.get("input", ""))
            self.memory.append({"role": "system", "content": f"{action['action']} result: {tool_output}"})
            steps_taken += 1
       
        final_context = "\n".join(f"{m['role']}: {m['content']}" for m in self.memory)
        return generate_text(f"{final_context}\n\nUsing everything above, provide a final, cohesive response to the original task.")

The MultiStepAgent class maintains an evolving memory of user input and tool outputs, then repeatedly prompts the LLM to decide on its next action, whether to search the web, extract information, summarize text, create a report, or finish, executing the selected tool and appending its result until the task is complete or the step limit is reached. In doing so, it demonstrates a Level 4 agent that coordinates a multi-step workflow by letting the model control iteration and program continuation (a sketch of the simulated tools and a matching demo routine follows below).
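The excerpt above does not show the four simulated tool methods referenced in __init__, nor the demo_level4 routine that the main menu calls later. A minimal sketch, assuming lightweight simulated tools in the same spirit as Level 3, could look like the following (these exact implementations are assumptions, not the article's original code):

    # Simulated tool methods; place these inside the MultiStepAgent class.
    def search_web(self, query):
        # Simulated search: returns canned text instead of calling a real API.
        return f"Simulated search results about '{query}': three relevant articles were found."
   
    def extract_info(self, text):
        return generate_text(f"Extract the key facts from the following text as bullet points:\n{text}")
   
    def summarize_text(self, text):
        return generate_text(f"Summarize the following text in 2-3 sentences:\n{text}")
   
    def create_report(self, content):
        return generate_text(f"Write a short, structured report based on the following notes:\n{content}")


def demo_level4():
    print("\n" + "="*50)
    print("LEVEL 4: MULTI-STEP AGENT DEMO")
    print("="*50)
    print("At this level, the AI controls iteration and program continuation.\n")
   
    user_task = input("Enter a research or writing task: ") or "Research the benefits of regular exercise and prepare a short report"
    print("\nProcessing your request... (this may take a minute)\n")
   
    agent = MultiStepAgent()
    result = agent.run(user_task)
    print("OUTPUT:")
    print("-"*50)
    print(result)
    print("-"*50)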

Level 5: Fully Autonomous Agent

At the highest level, the AutonomousAgent class implements a closed loop in which the model not only plans and executes but also generates, validates, refines, and runs new Python code. After recording the user task, the agent asks the model to produce a detailed plan and then prompts it to generate a self-contained solution, automatically stripping any markdown formatting from the code. A subsequent validation step asks the model to flag any syntax or logic problems; if issues are found, the agent asks the model to refine the code. The validated code is then wrapped with sandboxing utilities, such as a safe print function, a captured-output buffer, and result-capture logic, and executed in a constrained local environment. Finally, the agent synthesizes a professional report explaining what was done, how it was done, and the final results. This level embodies a genuinely autonomous AI system that extends its own capabilities through dynamic code creation and execution.

class AutonomousAgent:
    """Level 5: Fully Autonomous Agent - Model creates & executes new code"""
   
    def __init__(self):
        self.memory = []
   
    def run(self, user_task):
        self.memory.append({"role": "user", "content": user_task})
       
        print("🧠 Planning solution approach...")
        planning_message = self.plan_solution(user_task)
        self.memory.append({"role": "assistant", "content": planning_message})
       
        print("💻 Generating solution code...")
        generated_code = self.generate_solution_code()
        self.memory.append({"role": "assistant", "content": f"Generated code: ```pythonn{generated_code}n```"})
       
        print("🔍 Validating code...")
        validation_result = self.validate_code(generated_code)
        if not validation_result["valid"]:
            print("⚠️ Code validation found issues - refining...")
            refined_code = self.refine_code(generated_code, validation_result["issues"])
            self.memory.append({"role": "assistant", "content": f"Refined code: ```pythonn{refined_code}n```"})
            generated_code = refined_code
        else:
            print("✅ Code validation passed")
       
        try:
            print("🚀 Executing solution...")
            execution_result = self.safe_execute_code(generated_code, user_task)
            self.memory.append({"role": "system", "content": f"Execution result: {execution_result}"})
           
            # Generate a final report
            print("📝 Creating final report...")
            final_report = self.create_final_report(execution_result)
            return final_report
           
        except Exception as e:
            return f"Error executing the solution: {str(e)}nnGenerated code was:n```pythonn{generated_code}n```"
   
    def plan_solution(self, task):
        prompt = f"""Task: {task}


        You are an autonomous problem-solving agent. Create a detailed plan to solve this task.
        Include:
        1. Breaking down the task into subtasks
        2. What algorithms or approaches you'll use
        3. What data structures are needed
        4. Any external resources or libraries required
        5. Expected challenges and how to address them


        Provide a step-by-step plan.
        """
       
        return generate_text(prompt)
   
    def generate_solution_code(self):
        context = "Task and planning information:n"
        for item in self.memory:
            if item["role"] == "user":
                context += f"USER TASK: {item['content']}nn"
            elif item["role"] == "assistant":
                context += f"PLANNING: {item['content']}nn"
       
        prompt = f"""{context}


        Generate clean, efficient Python code that solves this task. Include comments to explain the code.
        The code should be self-contained and able to run inside a Python script or notebook.
        Only include the Python code itself without any markdown formatting.
        """
       
        code = generate_text(prompt)
       
        code = re.sub(r'^```python\n|```$', '', code, flags=re.MULTILINE)
       
        return code
   
    def validate_code(self, code):
        prompt = f"""Code to validate:
        ```python
        {code}
        ```


        Examine the code for the following issues:
        1. Syntax errors
        2. Logic errors
        3. Inefficient implementations
        4. Security concerns
        5. Missing error handling
        6. Import statements for unavailable libraries


        If the code has any issues, describe them in detail. If the code looks good, state "No issues found."
        """
       
        validation_response = generate_text(prompt)
       
        if "no issues" in validation_response.lower() or "code looks good" in validation_response.lower():
            return {"valid": True, "issues": None}
        else:
            return {"valid": False, "issues": validation_response}
   
    def refine_code(self, original_code, issues):
        prompt = f"""Original code:
        ```python
        {original_code}
        ```


        Issues identified:
        {issues}


        Please provide a corrected version of the code that addresses these issues.
        Only include the Python code itself without any markdown formatting.
        """
       
        refined_code = generate_text(prompt)
       
        refined_code = re.sub(r'^```python\n|```$', '', refined_code, flags=re.MULTILINE)
       
        return refined_code
   


    def safe_execute_code(self, code, user_task):
       
        safe_imports = """
    # Standard library imports
    import math
    import random
    import re
    import time
    import json
    from datetime import datetime


    # Define a function to capture printed output
    captured_output = []
    original_print = print


    def safe_print(*args, **kwargs):
        output = " ".join(str(arg) for arg in args)
        captured_output.append(output)
        original_print(output)
       
    print = safe_print


    # Define a result variable to store the final output
    result = None


    # Function to store the final result
    def store_result(value):
        global result
        result = value
        return value
    """
       
        result_capture = """
    # Store the final result if not already done
    if 'result' not in locals() or result is None:
        try:
            # Look for variables that might contain the final result
            potential_results = [var for var in locals() if not var.startswith('_') and var not in
                                ['math', 'random', 're', 'time', 'json', 'datetime',
                                'captured_output', 'original_print', 'safe_print',
                                'result', 'store_result']]
            if potential_results:
                # Use the last defined variable as the result
                store_result(locals()[potential_results[-1]])
        except:
            pass
    """
       
        full_code = safe_imports + "n# User code starts heren" + code + "nn" + result_capture
       
        code_lines = code.split('n')
        first_lines = code_lines[:3]
        print(f"nExecuting (first 3 lines):n{first_lines}")
       
        # Use a single dictionary for globals and locals so that names defined at the
        # top level of the generated code remain visible inside its helper functions.
        local_env = {}
       
        try:
            exec(full_code, local_env)
           
            return {
                "output": local_env.get('captured_output', []),
                "result": local_env.get('result', "No explicit result returned")
            }
        except Exception as e:
            return {"error": str(e)}
       
    def create_final_report(self, execution_result):
        if isinstance(execution_result.get('output'), list):
            output_text = "n".join(execution_result.get('output', []))
        else:
            output_text = str(execution_result.get('output', ''))
       
        result_text = str(execution_result.get('result', ''))
        error_text = execution_result.get('error', '')
       
        context = "Task history:n"
        for item in self.memory:
            if item["role"] == "user":
                context += f"USER TASK: {item['content']}nn"
       
        prompt = f"""{context}
       
        EXECUTION OUTPUT:
        {output_text}
       
        EXECUTION RESULT:
        {result_text}
       
        {f"ERROR: {error_text}" if error_text else ""}
       
        Create a final report that explains the solution to the original task. Include:
        1. What was done
        2. How it was accomplished
        3. The final results
        4. Any insights or conclusions drawn from the analysis
       
        Format the report in a professional, easy to read manner.
        """
       
        return generate_text(prompt)


def demo_level5():
    print("n" + "="*50)
    print("LEVEL 5: FULLY AUTONOMOUS AGENT DEMO")
    print("="*50)
    print("At this level, the AI generates and executes code to solve complex problems.")
    print("It can create, validate, refine, and run custom code solutions.n")
   
    user_task = input("Enter a data analysis or computational task: ") or "Analyze a dataset of numbers [10, 45, 65, 23, 76, 12, 89, 32, 50] and create visualizations of the distribution"
    print("nProcessing your request... (this may take a minute or two)n")
   
    agent = AutonomousAgent()
    result = agent.run(user_task)
    print("nFINAL REPORT:")
    print("-"*50)
    print(result)
    print("-"*50)

The AutonomousAgent class realizes the autonomy of a fully autonomous agent by maintaining a running memory of the user task and moving systematically through five core stages: planning, code generation, validation, safe execution, and reporting. When started, the agent prompts the model to produce a detailed plan for solving the task and stores that plan in memory. Next, it asks the model to create self-contained Python code based on the plan, strips away any markdown formatting, and then validates the code by querying the model for syntax, logic, performance, and security issues. If validation flags problems, the agent instructs the model to refine the code until it passes the check. The finished code is then wrapped in a sandboxed execution harness, equipped with a captured-output buffer and automatic result extraction, and executed in an isolated local environment. Finally, the agent composes a polished, professional report by feeding the execution results back into the model, producing a narrative that explains what was done, how it was accomplished, and what insights emerged. The accompanying demo_level5 function provides a simple interactive loop that accepts a user task, runs the agent, and presents the comprehensive final report.


Main function: bringing the levels together

def main():
    while True:
        clear_output(wait=True)
        print("n" + "="*50)
        print("AI AGENT LEVELS DEMO")
        print("="*50)
        print("nThis notebook demonstrates the 5 levels of AI agents:")
        print("1. Simple Processor - Model has no impact on program flow")
        print("2. Router - Model determines basic program flow")
        print("3. Tool Calling - Model determines how functions are executed")
        print("4. Multi-Step Agent - Model controls iteration and program continuation")
        print("5. Fully Autonomous Agent - Model creates & executes new code")
        print("6. Quit")
       
        choice = input("nSelect a level to demo (1-6): ")
       
        if choice == "1":
            demo_level1()
        elif choice == "2":
            demo_level2()
        elif choice == "3":
            demo_level3()
        elif choice == "4":
            demo_level4()
        elif choice == "5":
            demo_level5()
        elif choice == "6":
            print("nThank you for exploring the AI Agent levels!")
            break
        else:
            print("nInvalid choice. Please select 1-6.")
       
        input("nPress Enter to return to the main menu...")


if __name__ == "__main__":
    main()

Finally, the main function presents a simple interactive menu loop that clears the Colab output for readability, lists all five agent levels alongside a quit option, dispatches the user's selection to the corresponding demo function, and then waits for input before returning to the menu. This structure provides a cohesive CLI-style interface that lets you explore each agent level in sequence without manually running individual cells.

All in all, by working through these five levels we have gained practical insight into the principles of agentic AI and the trade-offs between control, flexibility, and autonomy. We have seen how systems evolve from simple prompt-response behavior to complex decision-making pipelines and even self-modifying code execution. Whether you are building smart assistants, constructing data pipelines, or experimenting with emerging AI capabilities, this progression framework offers a roadmap for designing powerful and scalable agents.




