OpenAI has announced the adoption of Anthropic’s Model Context Protocol (MCP) across its platforms, aiming to enhance how ChatGPT and other AI models interact with external data sources.
The decision, confirmed by CEO Sam Altman, introduces MCP support in OpenAI’s Agents SDK, with plans to expand integration to additional products.
This move is expected to standardize AI system connectivity, improve response consistency, and streamline integrations with cloud-based services.
Addressing AI Data Integration Challenges
MCP, introduced as an open-source protocol by Anthropic in November 2024, was developed to tackle a key challenge in AI development: efficient access to external data sources.
While large language models rely on internal knowledge bases, many applications require real-time data retrieval from external databases, cloud servers, and business applications.
Without a standardized method, AI systems may experience latency issues, inconsistent data formatting, and unreliable outputs.
By integrating MCP, OpenAI aims to improve AI accuracy and reliability by enabling a more structured approach to fetching and processing external data.
Developers using OpenAI’s Agents SDK can now leverage MCP to create more seamless AI interactions, reducing the complexity of custom-built integrations.
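As a rough illustration, the sketch below wires a local MCP server into an Agents SDK agent. It assumes the Python openai-agents package and Anthropic's reference @modelcontextprotocol/server-filesystem server as the data source; exact class and parameter names may differ across SDK versions.

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio  # MCP helper shipped with the Agents SDK


async def main() -> None:
    # Launch a local MCP server over stdio; the filesystem server is just a
    # stand-in for any MCP-compatible data source.
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"],
        }
    ) as fs_server:
        agent = Agent(
            name="docs-assistant",
            instructions="Answer questions using the files exposed by the MCP server.",
            mcp_servers=[fs_server],  # tools published by the server become available to the agent
        )
        result = await Runner.run(agent, "Summarize README.md")
        print(result.final_output)


asyncio.run(main())
```

The agent discovers the server's tools at run time, so swapping in a different data source is a configuration change rather than a new custom integration.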
Expanding AI Capabilities with Model Context Protocol
Beyond its initial adoption in the Agents SDK, OpenAI plans to introduce MCP integration in the ChatGPT desktop app and Responses API.
The company is also developing a feature for Team subscribers, allowing ChatGPT to connect with Google Drive and Slack, potentially using MCP to facilitate secure data retrieval and processing. Further details on these enhancements are expected in the coming months.
Standardizing AI Integration Across Platforms
MCP serves as a universal interface, enabling AI assistants to securely access various data sources, including content repositories, business applications, and development environments.
By adopting MCP, OpenAI seeks to replace fragmented, custom-built integrations with a single, standardized protocol, improving the efficiency and scalability of AI-powered solutions.
With MCP, AI models can:
- Retrieve real-time data from business tools like GitHub, allowing tasks such as repository management and pull request handling (see the sketch after this list).
- Connect with enterprise systems to fetch customer support data, CRM records, or internal knowledge bases for context-aware AI assistance.
- Streamline interactions between AI-powered assistants and cloud-based services, ensuring consistent and reliable performance.
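To make the "universal interface" idea concrete, the following sketch uses Anthropic's MCP Python SDK (the `mcp` package) to talk to a GitHub MCP server directly, outside any agent framework. The @modelcontextprotocol/server-github package and the `search_repositories` tool name are assumptions based on the reference server implementations; substitute whichever MCP server and tools your deployment exposes.

```python
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Any MCP-compatible server speaks the same protocol; the reference GitHub
# server is assumed here purely as an example data source.
server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-github"],
    env={"GITHUB_PERSONAL_ACCESS_TOKEN": os.environ["GITHUB_TOKEN"]},
)


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server offers
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(   # invoke a tool through the protocol
                "search_repositories",           # assumed tool name on the GitHub server
                arguments={"query": "model context protocol"},
            )
            print(result.content)


asyncio.run(main())
```

Because every server exposes the same initialize / list_tools / call_tool surface, the same client code works against a content repository, a CRM, or a development tool.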
Industry Adoption and Ecosystem Growth
Since its introduction, MCP has been adopted by companies such as Block, Apollo, Replit, Codeium, and Sourcegraph, which have integrated the protocol into their AI platforms.
OpenAI’s adoption signals a broader industry shift toward standardized AI connectivity, promoting a collaborative and scalable AI ecosystem.
Enhancing OpenAI’s Agents SDK
OpenAI’s Agents SDK provides a framework for developing multi-step reasoning and execution capabilities in AI-powered applications.
Combined with MCP, the SDK lets developers create AI systems that connect to external data sources efficiently, without requiring custom integration work for each one.
For example, an AI customer support assistant built with the Agents SDK could handle inquiries, manage conversations, and retrieve account details from an internal database via MCP, improving response accuracy and user experience.
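A sketch of that scenario might look like the following, assuming a hypothetical in-house crm_mcp_server.py script that exposes account-lookup tools over MCP; the server command and its tool set are illustrative, not something OpenAI has published.

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main() -> None:
    # Hypothetical internal MCP server exposing CRM lookups (account status,
    # billing history, open tickets) to the agent as tools.
    async with MCPServerStdio(
        params={"command": "python", "args": ["crm_mcp_server.py"]}
    ) as crm:
        support_agent = Agent(
            name="support-assistant",
            instructions=(
                "Handle customer inquiries. Look up account details through the "
                "CRM tools before answering billing questions."
            ),
            mcp_servers=[crm],
        )
        result = await Runner.run(
            support_agent,
            "Customer jane@example.com asks why her invoice doubled this month.",
        )
        print(result.final_output)


asyncio.run(main())
```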
Anthropic’s Chief Product Officer, Mike Krieger, acknowledged OpenAI’s adoption of MCP, stating:
“Excited to see the MCP love spread to OpenAI! MCP has become a thriving open standard with thousands of integrations and growing. LLMs are most useful when connecting to the data you already have and software you already use.”