Farewell to APM – The future of observability is the MCP tool

The past year has been an absolute roller coaster (or joyride) of rapidly evolving generative AI technology. In my twenty-five years as a software developer, I can't remember a comparable level of disruptive transformation, one that has fundamentally changed the way software is written.
But it would be short-sighted to believe that this revolution stops at generating code. With AI agents being let loose on new integrations and ecosystems, the foundations of how we monitor, understand, and optimize software are also being disrupted. Tools that served us well in a human-centered world were built around concepts such as manual alerts, log queries, and dashboards, which are becoming irrelevant and outdated. Application Performance Monitoring (APM) platforms, especially in how they leverage logs, metrics, and traces, will need to be rethought: users with the time and resources to browse, filter, and set thresholds are no longer available, as teams delegate much of that work to AI.
Intelligent agents are becoming an integral part of the SDLC (Software Development Lifecycle), autonomously analyzing, diagnosing, and improving systems in real time. This emerging paradigm requires a new perspective on old problems. For observability data to make agents and teams more productive, it must be structured for machines rather than humans. The technology that makes this possible is one that has received plenty of buzz recently: the Model Context Protocol (MCP).

In short
The Model Context Protocol (MCP) provides a communication layer between AI agents and other applications, allowing agents to access external data sources and perform operations where they see fit. More importantly, MCP opens new horizons for agents to intelligently select actions beyond their immediate scope, expanding the range of use cases they can solve.
The technology is not new, but the ecosystem is. In my opinion, it is equivalent to the shift from custom mobile app development to having an app store. It is no coincidence that MCP is experiencing a Cambrian explosion only now: a rich and standardized ecosystem has opened up a market of new opportunities. Broadly speaking, MCP represents an agent-centric model for building applications that can change how applications are created and how they deliver value to end users.
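Concretely, an MCP server advertises its capabilities as typed tools that any agent can discover and call over JSON-RPC. As a rough illustration, a `tools/list` response from a hypothetical observability server might look like the fragment below; the tool name and input fields are made up for the example, not taken from any real product:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_trace_summary",
        "description": "Summarize traces for a service over a time window",
        "inputSchema": {
          "type": "object",
          "properties": {
            "service": { "type": "string" },
            "window_minutes": { "type": "integer" }
          },
          "required": ["service"]
        }
      }
    ]
  }
}
```

The agent never needs a bespoke integration: it reads the schema, decides when the tool is relevant, and calls it with arguments of its own choosing.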
Limitations of a human-centered model
Most software applications revolve around humans as their main users. Generally speaking, the vendor decides to invest in certain product features which it believes will match the needs and wants of the end user. The user then tries to apply that given set of features to their own specific needs.

This approach has three main limitations, and they increasingly become obstacles as teams adopt AI agents to streamline their processes:
- Fixed interface – Product managers must predict and generalize use cases to design the right interface into the application. The UI or API set is fixed and cannot adapt to each unique requirement. As a result, users may find that certain features are useless for their specific needs; other times, even a combination of features cannot get them everything they need.
- Cognitive load – Extracting the information a user needs from application data requires manual effort, resources, and sometimes expertise. Taking APM as an example, understanding the root cause of a performance problem and solving it may require real investigation, because each problem is different. The lack of automation and the reliance on voluntary manual processes often mean the data is not used at all.
- Limited scope – Each product usually covers only part of the solution to a specific need. For example, an APM may hold trace data but cannot access code, GitHub history, Jira issues, infrastructure data, or customer tickets. This leaves users stitching together multiple sources to get to the root of each problem.
Agent-centric: the MCP-inverted application
With the advent of MCP, software developers can now choose a different model for building software. Instead of pinning hard-coded usage patterns to specific UI elements tailored to anticipated use cases, an application can turn itself into a resource for AI-driven processes. This marks a transition from supporting a few predefined interactions to supporting countless emergent use cases. The application no longer has to bet on one specific feature; instead, it can expose data and actions that can be used opportunistically, even indirectly, wherever they are relevant.

As this model spreads, an agent can seamlessly combine data and operations from different applications and domains, such as GitHub, Jira, observability platforms, analytics tools, and the code base itself. The agent can then automate the analysis process itself, synthesizing data, removing manual steps, and reducing the need for specialized expertise.
Observability is not a web application; it is data expertise

Let’s look at a practical example of how the agent-centric model can open new neural pathways in engineering.
Every developer knows that code reviews require a lot of effort; worse, reviewers often have to switch context away from other tasks, further draining the team’s productivity. On the surface, this seems like an opportunity for observability to shine. After all, the code being reviewed has already accumulated meaningful data from running in test and preproduction environments. In theory, this information could help explain more about the changes, what they affect, and how the system’s behavior might shift. In practice, the high cost of piecing together all that data across multiple applications and data streams makes it almost useless.
In an agent-centric flow, however, the whole process becomes autonomous: whenever an engineer asks the AI agent to assist in reviewing new code, the agent runs investigation steps across multiple applications and MCP servers behind the scenes, including observability tools, and brings back actionable insights about the code changes. The agent can access relevant runtime data (e.g., traces and logs from staging runs), feature-usage analytics, GitHub commit metadata, and even Jira ticket history. It can then correlate the diff with the associated runtime spans, flag latency regressions or failed interactions, and point out recent events that might be related to the modified code.
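To make the correlation step concrete, here is a minimal sketch of the kind of matching an agent could perform: it checks which runtime spans were exercised by the files in a diff and flags those running noticeably slower than their baseline. The record shapes (`file`, `operation`, `duration_ms`) are hypothetical, not any particular APM’s schema:

```python
def flag_risky_changes(changed_files, spans, baseline_ms):
    """Correlate a review diff with runtime spans and flag latency regressions.

    changed_files: iterable of file paths touched by the diff
    spans: list of dicts with 'file', 'operation', and 'duration_ms' keys
    baseline_ms: dict mapping operation name -> baseline latency in ms
    """
    changed = set(changed_files)
    findings = []
    for span in spans:
        if span["file"] not in changed:
            continue  # span was not exercised by the modified code
        baseline = baseline_ms.get(span["operation"])
        # Flag spans running more than 20% slower than their baseline
        if baseline is not None and span["duration_ms"] > baseline * 1.2:
            findings.append({
                "operation": span["operation"],
                "file": span["file"],
                "latency_ms": span["duration_ms"],
                "baseline_ms": baseline,
            })
    return findings
```

In a real flow, the span data and baselines would come back from an observability MCP server, and the findings would feed directly into the review summary.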

Developers no longer need to sift through different tools or tabs, nor spend time trying to connect the dots – the agent brings it all together behind the scenes, identifying problems and possible solutions. Even the response itself is generated dynamically: it may start with a concise text summary, extend to a table of metrics, include a GitHub link with the affected file changes highlighted, and even embed a chart visualizing the error timeline before and after the release.

While the above workflow emerges organically from the agent, some AI clients allow users to cement the desired workflow by adding rules to the agent’s memory. For example, this is the memory file I currently use with Cursor to make sure every code review prompt consistently triggers checks against the test environment and usage checks based on production data.
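I don’t have the author’s actual file at hand, but a rules entry of this kind might look roughly like the sketch below; the referenced MCP servers and environment names are placeholders for whatever is configured in your client:

```markdown
# Code review rules
When asked to review code:
1. Query the observability MCP server for traces and logs from the
   test environment that cover the changed files.
2. Check production usage data for the affected endpoints before
   suggesting refactors or removals.
3. Summarize any latency regressions or new error patterns found,
   linking to the relevant spans and commits.
```

Because the rules live in the agent’s memory rather than in any one tool, the same checks fire regardless of which editor tab or repository the review happens in.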
A thousand use cases bloom
The code review scenario is just one of many emergent use cases demonstrating how AI can quietly use relevant MCP data to help users achieve their goals. More importantly, users do not even need to know which applications the agent draws on. From the user’s point of view, they just describe what they need.
Emergent use cases like these can improve user productivity with data that would otherwise go untouched. Here are some other examples where observability data can make a huge difference without anyone ever visiting an APM webpage:
- Test generation based on real usage
- Choosing the right code areas to refactor based on performance impact
- Preventing breaking changes while the code is still checked out
- Pruning unused code
Products need to change
However, making observability useful to agents takes a little more than slapping an MCP adapter on top of an APM. Indeed, many current-generation tools have rushed to support the new technology, with very limited results.
Smart and powerful as they are, agents cannot replace every application by interacting with raw data on demand. At least in their current iteration, they are constrained by dataset size and stop short of applying complex ML algorithms or even higher-order math. If an observability tool is to become a useful data provider for agents, the data must be prepared in advance to compensate for these limitations. More broadly, this defines the role of products in the AI era: islands of non-trivial domain expertise serving AI-driven processes.

There are many good posts on how best to prepare data for consumption by generative AI agents, and some links are included at the end of this article. Broadly, though, we can describe a few requirements for MCP output:
- Structured (consistent schemas, typed entities)
- Preprocessed (aggregated, deduplicated, tagged)
- Contextualized (grouped by session, lifecycle, or intent)
- Linked (cross-references between code spans, logs, commits, and tickets)
Instead of surfacing raw query results, the MCP must feed the agent a coherent, pre-analyzed data narrative – not just rendered dashboard views. At the same time, it must also expose the related raw data on demand, so that agents can run further investigations to support their own independent reasoning.
Given only raw data access, it is nearly impossible for an agent to pinpoint a problem that manifests in just 5% of the available traces, let alone prioritize it based on its system impact or determine whether a pattern is anomalous.
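This is where preprocessing earns its keep. The sketch below shows the kind of aggregation a tool could run ahead of time so an agent never has to scan raw traces: thousands of traces are rolled up into per-endpoint error rates, making a fault that affects only a small slice of traffic immediately visible. The trace schema (`endpoint`, `ok`) is illustrative only:

```python
from collections import Counter

def error_rates(traces):
    """Aggregate raw traces into per-endpoint error rates.

    traces: list of dicts with 'endpoint' (str) and 'ok' (bool) keys.
    Returns {endpoint: error_rate}, so an agent can read off a 5%
    failure rate directly instead of inspecting individual traces.
    """
    totals, errors = Counter(), Counter()
    for trace in traces:
        totals[trace["endpoint"]] += 1
        if not trace["ok"]:
            errors[trace["endpoint"]] += 1
    return {endpoint: errors[endpoint] / totals[endpoint] for endpoint in totals}
```

A real product would go much further (baselines, anomaly scores, impact ranking), but even this trivial rollup turns a needle-in-a-haystack question into a single lookup.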
To bridge this gap, many products will likely evolve into “AI preparation layers,” running dedicated ML pipelines and advanced statistical analysis infused with domain expertise.
Say goodbye to APM
Ultimately, APMs are not just legacy tools – they represent a legacy mentality that is slowly but surely being replaced. The industry may take more time to readjust, but the shift will ultimately affect many of the products we currently use, especially in a software industry racing to adopt generative AI.
As AI becomes more dominant in software development, it will no longer be limited to human-initiated interactions. Generative AI inference will run as part of CI processes, and in some cases even consume data and perform operations continuously as background processes. With this in mind, a growing number of tools will offer agent-centric models, sometimes replacing their human-facing approach outright – or risk being dropped by customers.
Links and resources
- Airbyte: Normalization is key – schema consistency and relationship links improve cross-source reasoning.
- Harrison Clark: Preprocessing must hit a sweet spot – rich enough for reasoning, structured enough for accuracy.
- DigitalOcean: Semantic boundaries (user sessions, flows) unlock better decomposition and story-based reasoning.
Want to connect? You can reach me on Twitter @doppleware or via LinkedIn.
Follow my work on MCP and dynamic code analysis using observability.