---
title: "MCP Transport Protocols"
description: "A complete guide to MCP transport protocols — stdio, Server-Sent Events (SSE), and Streamable HTTP. Learn when to use each transport and how they compare."
order: 4
keywords:
  - MCP transport
  - MCP stdio
  - MCP SSE
  - MCP Streamable HTTP
  - MCP server-sent events
  - MCP transport comparison
  - MCP remote server
  - MCP local server
  - model context protocol transport
date: "2026-04-01"
---

# MCP Transport Protocols
MCP supports three transport protocols: stdio (for local process communication), SSE (legacy HTTP streaming), and Streamable HTTP (the modern HTTP transport). The transport layer is abstracted from the protocol logic, so the same MCP server code works across transports. Choose stdio for local tools, and Streamable HTTP for remote or multi-client deployments.
## What Are MCP Transports?
An MCP transport is the communication layer that carries JSON-RPC 2.0 messages between MCP clients and servers. The transport handles connection establishment, message framing, and delivery semantics. MCP separates the transport layer from the protocol logic, allowing the same server to work over different transports without code changes.
The Model Context Protocol is transport-agnostic by design. The protocol defines what messages are exchanged (JSON-RPC 2.0 requests, responses, and notifications), while the transport defines how those messages are delivered. This separation means you can build an MCP server once and run it over any supported transport.
## Transport Comparison
| Feature | stdio | SSE (Legacy) | Streamable HTTP |
|---|---|---|---|
| Communication model | Bidirectional via stdin/stdout | HTTP POST + SSE stream | HTTP POST + optional SSE |
| Deployment | Local process only | Remote (HTTP server) | Remote (HTTP server) |
| Multiple clients | One client per process | Multiple clients | Multiple clients |
| Streaming | Continuous streams | Server-to-client streaming | Bidirectional streaming |
| Firewall friendly | N/A (local only) | Yes (HTTP/HTTPS) | Yes (HTTP/HTTPS) |
| Session management | Process lifecycle | SSE connection lifecycle | HTTP session headers |
| Resumability | No | No | Yes (via session tokens) |
| Status | Stable, widely supported | Legacy (being replaced) | Current recommended HTTP transport |
| Best for | Local dev tools, CLI tools | Legacy deployments | Production remote servers |
## stdio Transport
The stdio transport is the simplest and most widely used transport for local MCP servers. The host application spawns the MCP server as a child process and communicates with it through standard input (stdin) and standard output (stdout).
### How stdio Works
- The host spawns the server process (e.g., `node my-server.js`)
- The client writes JSON-RPC messages to the server's stdin
- The server writes JSON-RPC responses to stdout
- The server uses stderr for logging (not protocol messages)
- When the session ends, the host terminates the process
```
Host Application
└─ spawns server process
   ├─ stdin  ← Client sends requests
   ├─ stdout → Server sends responses
   └─ stderr → Server logs (not protocol)
```
### When to Use stdio
- Your server runs locally on the same machine as the host
- You are building developer tools (filesystem access, git integration, local databases)
- You need zero network configuration — no ports, no firewalls, no TLS
- Each client gets its own isolated server instance
- You want the simplest possible setup for development and testing
### stdio Configuration
Most MCP hosts configure stdio servers through a JSON configuration file. Here is a typical Claude Desktop configuration:
```json
{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["/path/to/my-server/dist/index.js"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}
```
If you are building an MCP server for personal use or team distribution where users run the server locally, stdio is almost always the right choice. It requires no infrastructure, works out of the box with all major MCP hosts, and provides natural process isolation. Both mcp-framework and the @modelcontextprotocol/sdk default to stdio transport.
## SSE Transport (Legacy)
The Server-Sent Events (SSE) transport was the original HTTP-based transport for MCP. It uses two HTTP channels: a long-lived SSE connection for server-to-client messages, and HTTP POST requests for client-to-server messages.
### How SSE Works
- Client opens an SSE connection to `GET /sse`
- Server sends an `endpoint` event with the URL for client-to-server messages
- Client sends JSON-RPC requests via `POST /messages`
- Server sends JSON-RPC responses and notifications via the SSE stream
```
Client                              Server
  │                                   │
  ├─ GET /sse ──────────────────────→ │  (SSE connection opened)
  │ ←── endpoint: /messages ──────────┤  (server tells client where to POST)
  │                                   │
  ├─ POST /messages ────────────────→ │  (client sends request)
  │ ←── SSE: response ────────────────┤  (server sends response via SSE)
```
### SSE Limitations
The SSE transport is being superseded by Streamable HTTP. Existing SSE deployments continue to work, but new MCP servers should use Streamable HTTP for HTTP-based communication. The main limitations of SSE are:
- Two separate channels increase complexity and failure modes
- No session resumability — if the SSE connection drops, the session is lost
- Unidirectional streaming — only server-to-client via SSE; client-to-server uses POST
- Connection management — SSE connections can be dropped by proxies and load balancers
## Streamable HTTP Transport
Streamable HTTP is the modern, recommended HTTP transport for MCP. It simplifies the SSE transport's two-channel design into a single HTTP endpoint that supports optional streaming.
### How Streamable HTTP Works
Streamable HTTP uses a single HTTP endpoint for all communication:
- Client sends JSON-RPC requests via `POST /mcp`
- Server responds with either a standard HTTP response or upgrades to an SSE stream
- Server can optionally accept `GET /mcp` for server-initiated notifications via SSE
- Sessions are managed via the `Mcp-Session-Id` header
```
Client                              Server
  │                                   │
  ├─ POST /mcp ─────────────────────→ │  (initialize request)
  │ ←── Response + session ID ────────┤  (Mcp-Session-Id header)
  │                                   │
  ├─ POST /mcp ─────────────────────→ │  (tools/call request)
  │ ←── SSE stream (optional) ────────┤  (streaming response with progress)
  │ ←── Final result ─────────────────┤
```
### Key Advantages
- Single endpoint — Simpler architecture, easier to deploy behind load balancers and proxies
- Session resumability — Session tokens allow clients to reconnect after disconnection
- Flexible response mode — Server can choose between immediate responses and SSE streaming per-request
- Better proxy support — Standard HTTP POST requests work reliably through corporate proxies and CDNs
- Stateless option — Servers can operate without persistent sessions for simple use cases
### Streamable HTTP with mcp-framework
mcp-framework provides built-in support for Streamable HTTP transport:
```typescript
import { MCPServer } from "mcp-framework";

const server = new MCPServer({
  transport: {
    type: "httpStream",
    options: {
      port: 3000,
      endpoint: "/mcp",
    },
  },
});

server.start();
```
### Streamable HTTP with the Official SDK
The @modelcontextprotocol/sdk also supports Streamable HTTP:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

// Stateless mode; pass a sessionIdGenerator function to enable sessions
const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
await server.connect(transport);

// Route incoming HTTP requests on your /mcp endpoint to
// transport.handleRequest(req, res, parsedBody)
```

Note that the SDK transport does not start an HTTP server itself; you mount it inside your own HTTP framework (Express, Fastify, etc.) and forward requests to `transport.handleRequest`.
### When to Use Streamable HTTP

- Your server needs to be accessible over the network
- You are building a shared service that multiple clients connect to
- You need production-grade deployment with load balancing and monitoring
- Your server is hosted in the cloud (AWS, GCP, Azure, etc.)
- You want session resumability for reliable connections
## Transport Selection Guide
| Scenario | Recommended Transport | Why |
|---|---|---|
| Local development tool | stdio | Simplest setup, no network config needed |
| Personal filesystem server | stdio | Runs locally, natural process isolation |
| Team-shared API server | Streamable HTTP | Network accessible, multi-client support |
| Cloud-hosted MCP service | Streamable HTTP | Production-ready, session management |
| CI/CD integration | stdio | Runs as a subprocess in the pipeline |
| Legacy HTTP deployment | SSE | If migrating from existing SSE, plan move to Streamable HTTP |
| Multi-tenant SaaS | Streamable HTTP | Session management, authentication, scalability |
## Transport Security
When using SSE or Streamable HTTP, always deploy behind HTTPS in production. The MCP protocol itself does not encrypt messages — that responsibility falls on the transport layer. Without TLS, JSON-RPC messages including tool arguments and results are transmitted in plaintext.
For stdio transport, security is inherent in the process model — the host and server communicate through OS-level pipes that are not accessible to other processes.
For HTTP transports, security should include:
- TLS/HTTPS for all production deployments
- Authentication headers for client identity verification
- CORS configuration to restrict browser-based access
- Rate limiting to prevent abuse
See MCP Authentication Guide and MCP Security Model for more details.