MCP Transport Protocols

A complete guide to MCP transport protocols — stdio, Server-Sent Events (SSE), and Streamable HTTP. Learn when to use each transport and how they compare.


title: "MCP Transport Protocols" description: "A complete guide to MCP transport protocols — stdio, Server-Sent Events (SSE), and Streamable HTTP. Learn when to use each transport and how they compare." order: 4 keywords:

  • MCP transport
  • MCP stdio
  • MCP SSE
  • MCP Streamable HTTP
  • MCP server-sent events
  • MCP transport comparison
  • MCP remote server
  • MCP local server
  • model context protocol transport date: "2026-04-01"

Quick Summary

MCP supports three transport protocols: stdio (for local process communication), SSE (legacy HTTP streaming), and Streamable HTTP (the modern HTTP transport). The transport layer is abstracted from the protocol logic, so the same MCP server code works across transports. Choose stdio for local tools, and Streamable HTTP for remote or multi-client deployments.

What Are MCP Transports?

MCP Transport

An MCP transport is the communication layer that carries JSON-RPC 2.0 messages between MCP clients and servers. The transport handles connection establishment, message framing, and delivery semantics. MCP separates the transport layer from the protocol logic, allowing the same server to work over different transports without code changes.

The Model Context Protocol is transport-agnostic by design. The protocol defines what messages are exchanged (JSON-RPC 2.0 requests, responses, and notifications), while the transport defines how those messages are delivered. This separation means you can build an MCP server once and run it over any supported transport.
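Because every transport carries the same JSON-RPC 2.0 messages, an exchange looks identical regardless of how it is delivered. A minimal sketch (the `get_weather` tool name and its arguments are illustrative, not part of the protocol):

```typescript
// A JSON-RPC 2.0 request as carried by any MCP transport.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "get_weather", arguments: { city: "Berlin" } },
};

// The matching response reuses the same id so the client can
// correlate it with the request.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "18°C, cloudy" }] },
};
```

Only the delivery mechanism below these objects changes between stdio, SSE, and Streamable HTTP.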

Transport Comparison

| Feature | stdio | SSE (Legacy) | Streamable HTTP |
| --- | --- | --- | --- |
| Communication model | Bidirectional via stdin/stdout | HTTP POST + SSE stream | HTTP POST + optional SSE |
| Deployment | Local process only | Remote (HTTP server) | Remote (HTTP server) |
| Multiple clients | One client per process | Multiple clients | Multiple clients |
| Streaming | Continuous streams | Server-to-client streaming | Bidirectional streaming |
| Firewall friendly | N/A (local only) | Yes (HTTP/HTTPS) | Yes (HTTP/HTTPS) |
| Session management | Process lifecycle | SSE connection lifecycle | HTTP session headers |
| Resumability | No | No | Yes (via session tokens) |
| Status | Stable, widely supported | Legacy (being replaced) | Current recommended HTTP transport |
| Best for | Local dev tools, CLI tools | Legacy deployments | Production remote servers |

stdio Transport

The stdio transport is the simplest and most widely used transport for local MCP servers. The host application spawns the MCP server as a child process and communicates with it through standard input (stdin) and standard output (stdout).

How stdio Works

  1. The host spawns the server process (e.g., node my-server.js)
  2. The client writes JSON-RPC messages to the server's stdin
  3. The server writes JSON-RPC responses to stdout
  4. The server uses stderr for logging (not protocol messages)
  5. When the session ends, the host terminates the process
Host Application
  └─ spawns server process
       ├─ stdin  ← Client sends requests
       ├─ stdout → Server sends responses
       └─ stderr → Server logs (not protocol)
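The flow above can be sketched from the host side. Messages over stdio are newline-delimited JSON-RPC, one message per line; for illustration the "server" here is an inline echo process rather than a real MCP server (a real host would spawn something like `node my-server.js` and keep the pipes open for the whole session):

```typescript
import { spawnSync } from "node:child_process";

// Step 1: spawn the server process. The inline echo script stands in
// for a real MCP server so the framing is visible end to end.
const request = { jsonrpc: "2.0", id: 1, method: "ping" };

const result = spawnSync("node", ["-e", "process.stdin.pipe(process.stdout)"], {
  input: JSON.stringify(request) + "\n", // step 2: client writes to stdin
  encoding: "utf8",
});

// Step 3: server writes newline-delimited JSON to stdout.
const reply = JSON.parse(result.stdout.trim());
```

`ping` is a real MCP method, but the echoed reply is of course not what an actual server would return; the point is the one-message-per-line framing over stdin/stdout.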

When to Use stdio

Choose stdio When
  • Your server runs locally on the same machine as the host
  • You are building developer tools (filesystem access, git integration, local databases)
  • You need zero network configuration — no ports, no firewalls, no TLS
  • Each client gets its own isolated server instance
  • You want the simplest possible setup for development and testing

stdio Configuration

Most MCP hosts configure stdio servers through a JSON configuration file. Here is a typical Claude Desktop configuration:

{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["/path/to/my-server/dist/index.js"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}
stdio is the Default

If you are building an MCP server for personal use or team distribution where users run the server locally, stdio is almost always the right choice. It requires no infrastructure, works out of the box with all major MCP hosts, and provides natural process isolation. Both mcp-framework and the @modelcontextprotocol/sdk default to stdio transport.

SSE Transport (Legacy)

The Server-Sent Events (SSE) transport was the original HTTP-based transport for MCP. It uses two HTTP channels: a long-lived SSE connection for server-to-client messages, and HTTP POST requests for client-to-server messages.

How SSE Works

  1. Client opens an SSE connection to GET /sse
  2. Server sends an endpoint event with the URL for client-to-server messages
  3. Client sends JSON-RPC requests via POST /messages
  4. Server sends JSON-RPC responses and notifications via the SSE stream
Client                          Server
  │                               │
  ├─ GET /sse ──────────────────→ │  (SSE connection opened)
  │ ←── endpoint: /messages ──────┤  (server tells client where to POST)
  │                               │
  ├─ POST /messages ────────────→ │  (client sends request)
  │ ←── SSE: response ───────────┤  (server sends response via SSE)
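The server-to-client half of this handshake uses the standard SSE wire format: each event is an `event:` line, a `data:` line, and a blank line. A small sketch of the two frames from the diagram above (the `/messages` path is this section's example endpoint):

```typescript
// Format one Server-Sent Event frame per the SSE wire format.
function sseEvent(event: string, data: string): string {
  return `event: ${event}\ndata: ${data}\n\n`;
}

// Step 2: the server tells the client where to POST requests.
const endpointFrame = sseEvent("endpoint", "/messages");

// Step 4: a JSON-RPC response delivered over the SSE stream.
const responseFrame = sseEvent(
  "message",
  JSON.stringify({ jsonrpc: "2.0", id: 1, result: {} })
);
```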

SSE Limitations

SSE is Legacy

The SSE transport is being superseded by Streamable HTTP. While existing SSE deployments continue to work, new MCP servers should use Streamable HTTP for HTTP-based communication. The SSE transport has limitations including no session resumability, more complex connection management, and a less efficient two-channel design.

  • Two separate channels increase complexity and failure modes
  • No session resumability — if the SSE connection drops, the session is lost
  • Unidirectional streaming — only server-to-client via SSE; client-to-server uses POST
  • Connection management — SSE connections can be dropped by proxies and load balancers

Streamable HTTP Transport

Streamable HTTP is the modern, recommended HTTP transport for MCP. It simplifies the SSE transport's two-channel design into a single HTTP endpoint that supports optional streaming.

How Streamable HTTP Works

Streamable HTTP uses a single HTTP endpoint for all communication:

  1. Client sends JSON-RPC requests via POST /mcp
  2. Server responds with either a single JSON response or an SSE stream (signaled by Content-Type: text/event-stream)
  3. Server can optionally accept GET /mcp for server-initiated notifications via SSE
  4. Sessions are managed via the Mcp-Session-Id header
Client                          Server
  │                               │
  ├─ POST /mcp ─────────────────→ │  (initialize request)
  │ ←── Response + session ID ────┤  (Mcp-Session-Id header)
  │                               │
  ├─ POST /mcp ─────────────────→ │  (tools/call request)
  │ ←── SSE stream (optional) ───┤  (streaming response with progress)
  │ ←── Final result ─────────────┤
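From the client side, the session handling above comes down to request headers: the client echoes back the `Mcp-Session-Id` value the server returned during initialization, and its `Accept` header lets the server choose JSON or SSE per request. A minimal helper sketch:

```typescript
// Build the headers for a Streamable HTTP request. The
// Mcp-Session-Id header name comes from the MCP spec; the session
// ID itself is whatever the server issued on initialize.
function mcpHeaders(sessionId?: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    // The server may answer with plain JSON or an SSE stream.
    Accept: "application/json, text/event-stream",
    ...(sessionId ? { "Mcp-Session-Id": sessionId } : {}),
  };
}
```

The first `initialize` POST omits the header; every subsequent request includes it, which is what makes reconnection and resumability possible.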

Key Advantages

  • Single endpoint — Simpler architecture, easier to deploy behind load balancers and proxies
  • Session resumability — Session tokens allow clients to reconnect after disconnection
  • Flexible response mode — Server can choose between immediate responses and SSE streaming per-request
  • Better proxy support — Standard HTTP POST requests work reliably through corporate proxies and CDNs
  • Stateless option — Servers can operate without persistent sessions for simple use cases
One endpoint is needed for Streamable HTTP, versus two for SSE.

Streamable HTTP with mcp-framework

mcp-framework provides built-in support for Streamable HTTP transport:

import { MCPServer } from "mcp-framework";

const server = new MCPServer({
  transport: {
    type: "httpStream",
    options: {
      port: 3000,
      endpoint: "/mcp",
    },
  },
});

server.start();

Streamable HTTP with the Official SDK

The @modelcontextprotocol/sdk also supports Streamable HTTP. Its transport does not listen on a port itself; you mount it inside an HTTP framework (Express in this sketch) and forward incoming requests to it:

import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

// Stateless mode; supply a sessionIdGenerator to enable sessions
const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
await server.connect(transport);

const app = express();
app.use(express.json());
app.post("/mcp", (req, res) => transport.handleRequest(req, res, req.body));
app.listen(3000);
Choose Streamable HTTP When
  • Your server needs to be accessible over the network
  • You are building a shared service that multiple clients connect to
  • You need production-grade deployment with load balancing and monitoring
  • Your server is hosted in the cloud (AWS, GCP, Azure, etc.)
  • You want session resumability for reliable connections

Transport Selection Guide

| Scenario | Recommended Transport | Why |
| --- | --- | --- |
| Local development tool | stdio | Simplest setup, no network config needed |
| Personal filesystem server | stdio | Runs locally, natural process isolation |
| Team-shared API server | Streamable HTTP | Network accessible, multi-client support |
| Cloud-hosted MCP service | Streamable HTTP | Production-ready, session management |
| CI/CD integration | stdio | Runs as a subprocess in the pipeline |
| Legacy HTTP deployment | SSE | If migrating from existing SSE, plan move to Streamable HTTP |
| Multi-tenant SaaS | Streamable HTTP | Session management, authentication, scalability |

Transport Security

Always Use TLS for Remote Transports

When using SSE or Streamable HTTP, always deploy behind HTTPS in production. The MCP protocol itself does not encrypt messages — that responsibility falls on the transport layer. Without TLS, JSON-RPC messages including tool arguments and results are transmitted in plaintext.

For stdio transport, security is inherent in the process model — the host and server communicate through OS-level pipes that are not accessible to other processes.

For HTTP transports, security should include:

  • TLS/HTTPS for all production deployments
  • Authentication headers for client identity verification
  • CORS configuration to restrict browser-based access
  • Rate limiting to prevent abuse
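As a rough illustration of the authentication and CORS points, a server might gate every request on a bearer token and an origin allowlist before any MCP handling. This is a sketch only, with example values, not a complete security layer; TLS is assumed to terminate at a reverse proxy in front of the server:

```typescript
// Pre-flight checks for an HTTP MCP endpoint. The allowlist entry
// and token format are illustrative placeholders.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]);

function checkRequest(
  headers: Record<string, string | undefined>
): { ok: boolean; reason?: string } {
  // Require a bearer token for client identity verification.
  if (!(headers["authorization"] ?? "").startsWith("Bearer ")) {
    return { ok: false, reason: "missing bearer token" };
  }
  // Restrict browser-based access to known origins.
  const origin = headers["origin"];
  if (origin && !ALLOWED_ORIGINS.has(origin)) {
    return { ok: false, reason: "origin not allowed" };
  }
  return { ok: true };
}
```

In a real deployment the token would be validated against an auth provider, and rate limiting would be layered on top.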

See MCP Authentication Guide and MCP Security Model for more details.

Frequently Asked Questions