Building the Internet of Agents: A Technical Look at AI Agent Protocols and Their Role in Scalable Intelligent Systems

As large language model (LLM) agents gain traction across enterprise and research ecosystems, a fundamental gap has emerged: communication. Although today’s agents can autonomously reason, plan, and act, their ability to coordinate with other agents or with external tools remains limited by the lack of standardized protocols. This communication bottleneck not only fragments the agent landscape but also limits the emergence of scalable, interoperable, and collaborative AI systems.
A recent survey by researchers from Shanghai Jiao Tong University and the ANP community provides the first comprehensive taxonomy and assessment of AI agent protocols. The work introduces a principled classification scheme, examines existing protocol frameworks, and outlines future directions for a scalable, secure, and intelligent agent ecosystem.
The Communication Problem in Modern AI Agents
LLM agents are being deployed faster than the mechanisms that would let them interact with one another or with external resources can mature. In practice, most agent interactions rely on ad hoc APIs or brittle function-calling conventions – approaches that lack generality, security guarantees, and cross-vendor compatibility.
The problem resembles the early days of the Internet, when the absence of standardized transport- and application-layer protocols prevented seamless information exchange. Just as TCP/IP and HTTP catalyzed global connectivity, standardized AI agent protocols are expected to become the backbone of a future “Internet of Agents”.
A Framework for Agent Protocols: Context and Collaboration
The authors propose a two-dimensional classification scheme that organizes agent protocols along two axes:
- Context-oriented vs. inter-agent protocols
  - Context-oriented protocols govern how an agent interacts with external data, tools, or APIs.
  - Inter-agent protocols enable peer-to-peer communication, task delegation, and coordination among multiple agents.
- General-purpose vs. domain-specific protocols
  - General-purpose protocols are designed to operate across diverse environments and agent types.
  - Domain-specific protocols are optimized for particular applications such as human-agent dialogue, robotics, or IoT systems.
This classification helps clarify design tradeoffs across flexibility, performance, and specialization.
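As a rough illustration, the taxonomy can be captured in a few lines of Python. The placement of MCP, A2A, and ANP below follows the survey's discussion; the data structure itself is just an assumed convenience, not anything defined by the paper.

```python
from enum import Enum

class Orientation(Enum):
    CONTEXT = "context-oriented"   # agent <-> external data, tools, APIs
    INTER_AGENT = "inter-agent"    # agent <-> agent

class Scope(Enum):
    GENERAL = "general-purpose"
    DOMAIN = "domain-specific"

# Placement of the three surveyed protocols along the two axes.
TAXONOMY = {
    "MCP": (Orientation.CONTEXT, Scope.GENERAL),
    "A2A": (Orientation.INTER_AGENT, Scope.GENERAL),
    "ANP": (Orientation.INTER_AGENT, Scope.GENERAL),
}

for name, (orientation, scope) in TAXONOMY.items():
    print(f"{name}: {orientation.value}, {scope.value}")
```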
Key Protocols and Design Principles
1. Model Context Protocol (MCP) – Anthropic
MCP is a context-oriented, general-purpose protocol that facilitates structured interactions between LLM agents and external resources. Its architecture separates inference (the host) from execution (clients and servers), enhancing both security and scalability. Notably, MCP mitigates privacy risks by ensuring that sensitive user data is processed locally rather than being embedded directly in LLM-generated function calls.
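To make the host/client/server separation concrete, here is a minimal Python sketch of the pattern. The class names, the `lookup` tool, and the sample data are all illustrative assumptions; this is not the actual MCP SDK or wire format.

```python
# Hypothetical sketch of MCP's host/client/server split (not the real SDK API).
# The host runs the LLM loop; the client relays structured requests; the
# server executes tools next to the user's data, so sensitive content is
# handled locally rather than embedded in model-generated function calls.

class ToolServer:
    """Runs beside the user's data and executes tool calls locally."""
    def __init__(self, local_data: dict):
        self.local_data = local_data  # sensitive data never leaves this side

    def call(self, tool: str, args: dict) -> dict:
        if tool == "lookup":
            return {"value": self.local_data.get(args["key"], "<missing>")}
        raise ValueError(f"unknown tool: {tool}")

class Client:
    """Bridges exactly one host to one server."""
    def __init__(self, server: ToolServer):
        self.server = server

    def request(self, tool: str, args: dict) -> dict:
        return self.server.call(tool, args)

class Host:
    """Owns inference; emits structured tool requests instead of raw access."""
    def __init__(self, client: Client):
        self.client = client

    def run(self) -> str:
        # A real host would let the LLM choose this call; hard-coded here.
        result = self.client.request("lookup", {"key": "patient_42"})
        return f"Summary of locally fetched record: {result['value']}"

server = ToolServer({"patient_42": "glucose: 5.1 mmol/L"})
print(Host(Client(server)).run())
```

The point the sketch mirrors is that raw data crosses only the client-server boundary; it reaches the model's context only if the server explicitly returns it.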
2. Agent-to-Agent Protocol (A2A) – Google
A2A is designed for secure, asynchronous collaboration, allowing agents to exchange tasks and artifacts in enterprise settings. It emphasizes modularity, multimodal support (e.g., files, streams), and opaque execution, preserving intellectual property while enabling interoperability. The protocol defines standardized entities such as the Agent Card, Task, and Artifact for robust workflow orchestration.
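These standardized entities can be pictured as simple records. The sketch below paraphrases the concepts (Agent Card for capability discovery, Task for a unit of work, Artifact for outputs); the exact field names, state values, and endpoint URL are assumptions, not the A2A schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    """Advertises an agent's identity and capabilities for discovery."""
    name: str
    endpoint: str
    skills: list[str] = field(default_factory=list)

@dataclass
class Artifact:
    """An output produced while working on a task (e.g. text, file, stream)."""
    kind: str
    content: bytes

@dataclass
class Task:
    """A unit of work exchanged between a client agent and a remote agent."""
    task_id: str
    state: str = "submitted"  # e.g. submitted -> working -> completed
    artifacts: list[Artifact] = field(default_factory=list)

# A remote agent advertises itself, accepts a task, and returns artifacts
# without exposing its internal reasoning ("opaque execution").
card = AgentCard("report-writer", "https://agents.example.com/a2a", ["summarize"])
task = Task(task_id="t-001")
task.artifacts.append(Artifact("text", b"Quarterly summary..."))
task.state = "completed"
print(card.name, task.state, len(task.artifacts))
```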
3. Agent Network Protocol (ANP) – Open source
ANP envisions a decentralized, web-scale agent network. Built on decentralized identifiers (DIDs) and a semantic meta-protocol layer, it facilitates trustless, encrypted communication between agents across heterogeneous domains. It introduces layered abstractions for discovery, negotiation, and task execution, and positions itself as the foundation of an open “Internet of Agents”.
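As a rough illustration of the layered idea, the sketch below walks through identity, negotiation, and task exchange in turn. All names and message shapes here, including the example DID and the `negotiate` helper, are assumptions rather than ANP's actual wire format.

```python
import json

# Hypothetical three-layer ANP-style exchange: identity -> negotiation -> task.
# Real ANP uses W3C DIDs and an LLM-driven meta-protocol layer; this only
# mimics the overall shape of such an exchange.

def did_document(did: str, public_key: str) -> dict:
    """Minimal stand-in for a decentralized identifier document."""
    return {"id": did, "verificationMethod": [{"publicKeyHex": public_key}]}

def negotiate(offered: list[str], supported: list[str]) -> str | None:
    """Meta-protocol layer: pick the first mutually supported protocol."""
    return next((p for p in offered if p in supported), None)

alice = did_document("did:example:alice", "04ab...")
bob_supported = ["task-exchange/v1", "chat/v1"]

chosen = negotiate(["task-exchange/v2", "task-exchange/v1"], bob_supported)
if chosen:
    message = {"from": alice["id"], "protocol": chosen,
               "task": {"goal": "translate document", "lang": "en->fr"}}
    print(json.dumps(message, indent=2))
```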

Performance Dimensions: A Holistic Evaluation Framework
To assess protocol robustness, the survey introduces a comprehensive framework based on seven evaluation criteria:
- Efficiency – throughput, latency, and resource utilization (e.g., token cost in LLMs)
- Scalability – support for additional agents, denser communication, and dynamic task allocation
- Security – fine-grained authentication, access control, and context desensitization
- Reliability – robust message delivery, flow control, and connection persistence
- Extensibility – ability to evolve without breaking compatibility
- Operability – ease of deployment, observability, and platform-agnostic implementation
- Interoperability – cross-system compatibility across languages, platforms, and vendors
This framework reflects both classic network-protocol principles and agent-specific challenges such as semantic coordination and multi-turn workflows.
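One way to operationalize these dimensions is as a simple weighted rubric. The sketch below is purely illustrative: the weights and scores are invented placeholders, not values from the survey.

```python
# Illustrative rubric over the survey's seven dimensions. The weights and
# example scores are invented placeholders, not values from the paper.
DIMENSIONS = ["efficiency", "scalability", "security", "reliability",
              "extensibility", "operability", "interoperability"]

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate per-dimension scores in [0, 1] into one weighted figure."""
    assert set(scores) == set(DIMENSIONS), "score every dimension"
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total_weight

scores = {d: 0.8 for d in DIMENSIONS}   # placeholder scores for some protocol
weights = {d: 1.0 for d in DIMENSIONS}  # uniform weighting by default
weights["security"] = 2.0               # e.g. emphasize security for enterprise
print(f"overall: {weighted_score(scores, weights):.2f}")
```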

Toward Emergent Collective Intelligence
One of the most compelling arguments for protocol standardization is the potential for collective intelligence. By aligning communication strategies and capabilities, agents can form dynamic coalitions to solve complex tasks – from swarm robotics to modular cognitive systems. Protocols such as Agora push this further: routines and structured documents generated by LLMs let agents negotiate and adopt new protocols in real time.
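Agora's core trick can be sketched as follows: fall back to natural-language negotiation only when no shared routine exists, then cache the negotiated protocol document and reuse it by its hash. The `llm_negotiate` helper below is a hypothetical stand-in for an actual LLM call, and the document format is assumed.

```python
import hashlib

# Hypothetical Agora-style flow: negotiate once in natural language, then
# reuse a cached structured routine identified by its hash.
routine_cache: dict[str, str] = {}

def llm_negotiate(task_description: str) -> str:
    """Stand-in for an LLM call that drafts a structured protocol document."""
    return f'{{"fields": ["query", "max_results"], "task": "{task_description}"}}'

def get_routine(task_description: str) -> tuple[str, str]:
    key = hashlib.sha256(task_description.encode()).hexdigest()[:12]
    if key not in routine_cache:
        # Expensive path: natural-language negotiation via the LLM.
        routine_cache[key] = llm_negotiate(task_description)
    # Cheap path afterwards: both agents reference the routine by its hash.
    return key, routine_cache[key]

key, doc = get_routine("web search")
print(key, doc)
```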
Similarly, protocols such as LOKA embed ethical reasoning and identity management into the communication layer, helping ensure that agent ecosystems can evolve responsibly, transparently, and securely.
The Way Forward: From Static Interfaces to Adaptive Protocols
Looking ahead, the authors outline three phases of protocol evolution:
- Short term: transition from rigid function calls to dynamic, evolving protocols.
- Medium term: movement from rule-based APIs to agent ecosystems capable of self-organization and negotiation.
- Long term: emergence of layered infrastructure supporting privacy-preserving, collaborative, and intelligent agent networks.
These trends suggest a shift from traditional software design toward a more flexible, agent-native computing paradigm.
Conclusion
The future of AI will not be shaped by model architectures or training data alone, but by how agents interact, coordinate, and learn from one another. Protocols are more than technical specifications; they are the connective tissue of intelligent systems. By formalizing these communication layers, we unlock the possibility of decentralized, secure, and interoperable agent networks whose capabilities scale far beyond any single model or framework.
Check out the paper for full details.

Sana Hassan, a consulting intern at Marktechpost and a dual-degree student at IIT Madras, is passionate about applying technology and AI to real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
