Model Context Protocol (MCP): Foundation for AI or a Looming Risk? — AI Innovations and Insights 37
Welcome to the 37th installment of this elegant series.
Model Context Protocol (MCP) was introduced by Anthropic in 2024 to tackle the complexity and scalability challenges of manually connecting AI applications to external APIs. It has been quite a hot topic lately.
I wrote an introductory article about it a while back (MCP = HTTP or USB-C for the AI World?). Recently, though, I came across a new survey (Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions), which appears to be the first serious, comprehensive academic overview of MCP, so I thought it would be worth sharing with you all.
Overview
Before MCP came along, AI applications relied on various methods — manual API wiring, plugin-based interfaces, agent frameworks — just to connect with external tools.

As shown in Figure 1 (left), every external service meant dealing with a specific API, adding complexity and making it harder to scale.
As shown in Figure 1 (right), MCP changes that by introducing a standard protocol that streamlines how AI models connect with external tools — standardizing the way they retrieve data and perform actions.
Core Components
The MCP architecture has three main components: MCP host, client, and server. They work together to make communication between AI applications, external tools, and data sources smooth, secure, and well-managed.

As shown in Figure 2, a typical workflow looks like this: the user sends a prompt to the MCP client, which interprets the intent and selects the appropriate tools via the MCP server; the server then invokes the external APIs, processes the results, and returns them to the user.
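To make that round trip a bit more concrete, here is a minimal sketch of the messages involved. MCP communication is built on JSON-RPC 2.0, and the method names below (tools/list, tools/call) follow the protocol specification, but the weather tool and its arguments are purely hypothetical, and this deliberately skips the official SDK.

```python
# A minimal sketch of the client -> server round trip described above, using
# plain JSON-RPC 2.0 dictionaries instead of the official MCP SDK.
# "tools/list" and "tools/call" follow the MCP specification; the weather tool
# and its arguments are hypothetical.
import itertools
import json

_ids = itertools.count(1)

def jsonrpc_request(method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 request, as an MCP client would send to a server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

# 1. The client asks the server which tools it exposes.
list_tools = jsonrpc_request("tools/list", {})

# 2. After the model picks a tool for the user's intent, the client invokes it.
call_tool = jsonrpc_request("tools/call", {
    "name": "get_weather",                                  # hypothetical tool
    "arguments": {"city": "Berlin", "unit": "celsius"},
})

print(list_tools)
print(call_tool)
```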
MCP Server
The MCP server manages external tools, data sources, and workflows — giving AI models the resources they need to work efficiently and securely.
As shown in the upper half of Figure 3, it’s built around a few key components that keep everything running smoothly: Metadata, Configuration, Tool List, Resources List, Prompts and Templates.

As shown in the lower half of Figure 3, the MCP server goes through three main phases: creation, operation, and update. Each phase lays out the essential steps needed to keep the server running securely and efficiently, making sure AI models can connect smoothly with external tools, resources, and prompts.
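To make those components a little more tangible, here is a hedged sketch of the kind of information an MCP server might bundle together. The field names and values are hypothetical rather than a normative MCP schema; they simply mirror the Metadata, Configuration, Tool List, Resources List, and Prompts and Templates categories from the survey.

```python
# Illustrative only: a hypothetical server manifest mirroring the component
# categories in Figure 3. None of these field names are mandated by MCP.
server_manifest = {
    "metadata": {"name": "weather-server", "version": "1.2.0", "author": "example.org"},
    "configuration": {"transport": "stdio", "auth": "api_key", "sandbox": True},
    "tools": [
        {"name": "get_weather", "description": "Current weather for a city",
         "input_schema": {"city": "string", "unit": "string"}},
    ],
    "resources": [
        {"uri": "weather://history/berlin", "description": "Historical readings"},
    ],
    "prompts": [
        {"name": "summarize_forecast", "template": "Summarize the forecast for {city}."},
    ],
}
```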
Ecosystem

As shown in Figure 4, MCP is picking up serious momentum across a wide range of industries — a clear sign that seamless AI-to-tool integration is becoming more important than ever.

Anthropic hasn’t launched an official MCP marketplace yet, but the community hasn’t waited. As shown in Figure 5, a growing number of independent server collections and platforms have popped up to fill the gap.
Security and Privacy Analysis
The MCP server faces security risks throughout its lifecycle:
At the creation stage, risks include server name collisions, installer spoofing, and code injection or backdoors.
During operation, issues like tool name conflicts, overlapping slash commands, and sandbox escapes can lead to sensitive data leaks or even full system breaches.
In the update phase, key risks include post-update privilege persistence, re-deployment of vulnerable versions, and configuration drift.
This survey outlines mitigation strategies for each category of threat, highlighting the need for stronger verification, isolation, and update mechanisms to keep the MCP ecosystem secure.
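As one concrete example of the kind of verification the survey calls for, here is a minimal sketch of a creation-stage mitigation: checking a downloaded server package against a pinned checksum before installing it, which helps guard against installer spoofing. The package path and expected digest are placeholders.

```python
# A minimal sketch of one creation-stage mitigation: verify that a downloaded
# MCP server package matches a pinned checksum before installing it, guarding
# against installer spoofing. The path and expected digest are placeholders.
import hashlib
import sys

EXPECTED_SHA256 = "0" * 64                       # pin this from a trusted registry entry
PACKAGE_PATH = "weather-server-1.2.0.tar.gz"     # hypothetical package

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large packages are not loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(PACKAGE_PATH) != EXPECTED_SHA256:
    sys.exit("Checksum mismatch: refusing to install possibly spoofed server package.")
print("Checksum verified; package is safe to install.")
```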
Discussion
MCP’s rapid growth has made tool integration much easier for developers — but it also brings new responsibilities around security and version control.
For users, MCP delivers a smoother, more automated workflow experience, though it comes with the need to stay cautious about unverified tools.
Because MCP servers are managed in a decentralized way, maintainers face tough challenges in enforcing security standards and keeping things consistent.
While MCP unlocks powerful new agent capabilities for the AI community, it also raises fresh concerns around ethics and operational risks.
To support MCP's long-term growth and security, the survey offers practical recommendations for each stakeholder group.
For MCP maintainers: Set up package management and a centralized server registry; enforce cryptographic signatures, security audits, and sandbox protections.
For developers: Prioritize secure coding, automate configuration, enforce version control, validate tool names (see the sketch after this list), and monitor systems at runtime.
For researchers: Analyze vulnerabilities, strengthen sandboxing, improve versioning and package management, and develop automated security tools.
For end-users: Stick to verified MCP servers, avoid unofficial installers, update regularly, monitor configurations, and enforce strong access controls.
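Here is a small sketch of the tool-name validation recommended for developers: before exposing tools from multiple MCP servers to a model, check whether any names collide and could let one server shadow another. The server and tool names below are hypothetical.

```python
# A hedged sketch of the "validate tool names" recommendation: detect duplicate
# tool names across registered MCP servers before exposing them to a model.
from collections import defaultdict

def find_tool_name_conflicts(servers: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each tool name to the servers exposing it, keeping only collisions."""
    owners = defaultdict(list)
    for server, tools in servers.items():
        for tool in tools:
            owners[tool.lower()].append(server)
    return {name: srvs for name, srvs in owners.items() if len(srvs) > 1}

registered = {
    "weather-server": ["get_weather", "get_forecast"],
    "untrusted-server": ["get_weather"],          # would shadow the trusted tool
}
print(find_tool_name_conflicts(registered))
# {'get_weather': ['weather-server', 'untrusted-server']}
```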
Thoughts and Insights
Here are some thoughts I've been exploring.
Model-Native OS
MCP can be seen as a key component — or even a protocol layer — for handling external services and capabilities in a future Model-Native OS.
Just as traditional operating systems manage hardware access, a Model-Native OS will need a standardized runtime environment through which AI models call APIs and interact with external tools.
MCP feels like an early but solid foundation for what that environment might become.
Model Supply Chain Security
MCP could spark a new field: Model Supply Chain Security.
Traditional software supply chain security frameworks — like SLSA and SBOM — barely scratch the surface of what's needed for AI models. With MCP making tool invocation chains a reality, we’re likely to see entirely new areas emerge, like MCP chain integrity monitoring and MCP call trace audits.
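To illustrate what an MCP call trace audit might look like, here is a speculative sketch of a single trace record: who invoked which tool, through which server, with what arguments, plus a hash of the result so the invocation chain can be reconstructed later. Every field name here is an assumption; nothing like this is standardized today.

```python
# Speculative sketch of an MCP call trace audit record. All field names are
# hypothetical; no such record format is part of MCP today.
import hashlib
import json
import time

def audit_record(agent: str, server: str, tool: str, arguments: dict, result: str) -> dict:
    """Build a trace entry, hashing the tool output to make tampering detectable."""
    return {
        "timestamp": time.time(),
        "agent": agent,
        "server": server,
        "tool": tool,
        "arguments": arguments,
        "result_sha256": hashlib.sha256(result.encode()).hexdigest(),
    }

entry = audit_record(
    agent="finance-assistant",
    server="weather-server",
    tool="get_weather",
    arguments={"city": "Berlin"},
    result='{"temp_c": 7}',
)
print(json.dumps(entry, indent=2))
```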
My Concerns
As MCP moves from the lab to real-world deployment, it still lacks a robust system for security governance and resilience control.
Right now, MCP places too much idealistic trust in servers and the invocation chain — a weakness that's easy to exploit in practice.
Without stronger supply chain integrity checks, dynamic trust management, and cross-tool transaction consistency, large-scale deployments could easily run into catastrophic failures.
If MCP is going to support critical AI agents in sectors like finance, healthcare, and defense, it will need a much tighter trust framework, finer-grained security protocols, and mechanisms to maintain context consistency across agents.
Over the next 3–5 years, we’re likely to see a wave of new standards, new attack vectors, and new governance models emerge around MCP and its evolving ecosystem — bringing both big opportunities and serious challenges for the AI industry.