Prompt Engineering for MCP

Master MCP prompt primitives including dynamic prompt arguments, multi-message prompt flows, prompt composition patterns, and embedded resource references for building intelligent prompt templates.


title: "Prompt Engineering for MCP"
description: "Master MCP prompt primitives including dynamic prompt arguments, multi-message prompt flows, prompt composition patterns, and embedded resource references for building intelligent prompt templates."
order: 12
level: "intermediate"
duration: "20 min"
keywords:

  • "MCP prompts"
  • "MCP prompt engineering"
  • "MCP dynamic prompts"
  • "MCP multi-message prompts"
  • "MCP prompt arguments"
  • "mcp-framework prompts"
  • "@modelcontextprotocol/sdk prompts"
  • "MCP prompt templates"
  • "MCP prompt composition"

date: "2026-04-01"

Quick Summary

MCP prompts are reusable message templates that help AI models perform specific tasks with your server. Unlike tools (which execute code) or resources (which provide data), prompts define structured conversations. This lesson covers dynamic prompt arguments with Zod schemas, multi-message prompt flows, embedding resource references in prompts, and composition patterns for building a library of intelligent prompt templates.

What Are MCP Prompts?

MCP Prompt

A server-defined message template that AI clients can present to users or invoke programmatically. Prompts return an array of messages (with roles like "user" and "assistant") and can accept dynamic arguments. They are discovered via the prompts/list protocol call and invoked with prompts/get.

Think of MCP prompts as saved workflows. Instead of the user typing a detailed instruction every time, they select a prompt from the server and fill in the arguments. The server generates a tailored set of messages that guide the AI model toward the desired output.
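Under the hood, discovery and invocation map to those two protocol calls. A wire-level sketch of the request and response shapes (the argument values are illustrative; field names follow the MCP JSON-RPC spec):

```typescript
// prompts/list: ask the server which prompts it offers.
const listRequest = { jsonrpc: "2.0", id: 1, method: "prompts/list" };

// prompts/get: invoke one prompt by name with its arguments.
const getRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "prompts/get",
  params: {
    name: "explain-code",
    arguments: { code: "const x = 1;", language: "typescript" },
  },
};

// The server replies with the generated messages array.
const getResponse = {
  messages: [
    {
      role: "user",
      content: { type: "text", text: "Explain the following typescript code..." },
    },
  ],
};

console.log(listRequest.method, "→", getResponse.messages.length, "message(s)");
```

The client renders or forwards the returned messages; the server never talks to the model directly.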

Primitive | Purpose | Analogy
--- | --- | ---
Tools | Execute actions, return results | API endpoints
Resources | Provide read-only data | GET endpoints / files
Prompts | Structure conversations | Saved templates / macros

Basic Prompt Registration

Using the Official SDK

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "prompt-server",
  version: "1.0.0",
});

server.prompt(
  "explain-code",
  "Explain a piece of code in plain language",
  {
    code: z.string().describe("The code to explain"),
    language: z.string().describe("Programming language"),
    audience: z.enum(["beginner", "intermediate", "expert"])
      .default("intermediate")
      .describe("Target audience level"),
  },
  async ({ code, language, audience }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Explain the following ${language} code for a ${audience} audience. Be clear and thorough.\n\n\`\`\`${language}\n${code}\n\`\`\``,
        },
      },
    ],
  })
);

Using mcp-framework

import { MCPPrompt } from "mcp-framework";
import { z } from "zod";

class ExplainCodePrompt extends MCPPrompt {
  name = "explain-code";
  description = "Explain a piece of code in plain language";

  schema = {
    code: { type: z.string(), description: "The code to explain" },
    language: { type: z.string(), description: "Programming language" },
    audience: {
      type: z.enum(["beginner", "intermediate", "expert"]).default("intermediate"),
      description: "Target audience level",
    },
  };

  async generateMessages({ code, language, audience }: {
    code: string;
    language: string;
    audience: string;
  }) {
    return [
      {
        role: "user" as const,
        content: {
          type: "text" as const,
          text: `Explain the following ${language} code for a ${audience} audience. Be clear and thorough.\n\n\`\`\`${language}\n${code}\n\`\`\``,
        },
      },
    ];
  }
}

export default ExplainCodePrompt;

With mcp-framework, prompt classes are auto-discovered from the prompts/ directory alongside your tools and resources:

my-mcp-server/
  src/
    prompts/
      ExplainCodePrompt.ts
      ReviewCodePrompt.ts
      DebugPrompt.ts
    tools/
      SearchTool.ts
    resources/
      ConfigResource.ts
    index.ts

Dynamic Prompt Arguments

The real power of MCP prompts comes from dynamic arguments that customize the generated messages.

Conditional Content Based on Arguments

server.prompt(
  "code-review",
  "Generate a thorough code review",
  {
    code: z.string().describe("Code to review"),
    language: z.string().describe("Programming language"),
    focus: z.array(z.enum([
      "security",
      "performance",
      "readability",
      "testing",
      "architecture",
    ])).default(["readability", "security"])
      .describe("Areas to focus the review on"),
    strictness: z.enum(["lenient", "standard", "strict"])
      .default("standard")
      .describe("How strict the review should be"),
  },
  async ({ code, language, focus, strictness }) => {
    const focusInstructions = focus.map(f => {
      switch (f) {
        case "security": return "- Check for injection vulnerabilities, unsafe data handling, and authentication issues";
        case "performance": return "- Identify performance bottlenecks, unnecessary allocations, and O(n^2) patterns";
        case "readability": return "- Evaluate naming conventions, code organization, and comment quality";
        case "testing": return "- Assess testability, suggest missing test cases, and identify untestable code";
        case "architecture": return "- Review design patterns, coupling, cohesion, and SOLID principles";
      }
    }).join("\n");

    const strictnessGuide = {
      lenient: "Focus on critical issues only. Minor style issues can be ignored.",
      standard: "Flag important issues and suggest improvements. Balance thoroughness with pragmatism.",
      strict: "Flag all issues regardless of severity. Enforce best practices rigorously.",
    }[strictness];

    return {
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: [
              `Review the following ${language} code.`,
              "",
              `**Review strictness:** ${strictnessGuide}`,
              "",
              "**Focus areas:**",
              focusInstructions,
              "",
              "For each issue found, provide:",
              "1. The problematic code",
              "2. Why it is an issue",
              "3. A suggested fix",
              "",
              `\`\`\`${language}`,
              code,
              "```",
            ].join("\n"),
          },
        },
      ],
    };
  }
);
Prompt Argument Design

Design prompt arguments to be meaningful but not overwhelming. Use sensible defaults for optional parameters. Enum types are better than free-form strings when the options are known. Each argument should meaningfully change the prompt output.
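The guidance above can be sketched without a schema library: prefer a closed set of values over free-form strings, and apply a sensible default when the argument is omitted. (The names here are illustrative, not part of any MCP API.)

```typescript
// Closed set of allowed values — the hand-rolled equivalent of z.enum(...).default(...).
const AUDIENCES = ["beginner", "intermediate", "expert"] as const;
type Audience = (typeof AUDIENCES)[number];

interface ExplainArgs {
  code: string;
  language: string;
  audience?: string; // optional: default applied below
}

function normalizeArgs(raw: ExplainArgs): { code: string; language: string; audience: Audience } {
  const audience = raw.audience ?? "intermediate"; // sensible default
  if (!(AUDIENCES as readonly string[]).includes(audience)) {
    throw new Error(`audience must be one of: ${AUDIENCES.join(", ")}`);
  }
  return { code: raw.code, language: raw.language, audience: audience as Audience };
}

console.log(normalizeArgs({ code: "x", language: "ts" }).audience); // → "intermediate"
```

In practice Zod gives you all of this (plus argument descriptions the client can display) from the schema alone, which is why the examples in this lesson lean on it.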

Multi-Message Prompts

Prompts can return multiple messages to set up a conversation context.

System-User Pattern

server.prompt(
  "sql-assistant",
  "Set up an SQL query assistant with schema context",
  {
    dialect: z.enum(["postgresql", "mysql", "sqlite"])
      .describe("SQL dialect"),
    schema: z.string().describe("Database schema DDL"),
    question: z.string().describe("What to query"),
  },
  async ({ dialect, schema, question }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: [
            `You are an expert ${dialect} developer. Here is the database schema:`,
            "",
            "```sql",
            schema,
            "```",
            "",
            "Rules:",
            `- Write only valid ${dialect} syntax`,
            "- Use parameterized queries to prevent SQL injection",
            "- Optimize for readability first, then performance",
            "- Include comments explaining complex joins or conditions",
          ].join("\n"),
        },
      },
      {
        role: "assistant",
        content: {
          type: "text",
          text: `I understand the schema. I'll write ${dialect} queries following best practices with parameterized queries and clear comments. What would you like to query?`,
        },
      },
      {
        role: "user",
        content: {
          type: "text",
          text: question,
        },
      },
    ],
  })
);
Multi-Message Role Alternation

When using multi-message prompts, alternate between "user" and "assistant" roles. This establishes a conversation history that primes the AI model with the right context and behavior before it generates its final response.

Few-Shot Examples in Prompts

server.prompt(
  "generate-test",
  "Generate unit tests with examples",
  {
    code: z.string().describe("Function or class to test"),
    framework: z.enum(["jest", "vitest", "mocha"])
      .default("vitest")
      .describe("Testing framework"),
  },
  async ({ code, framework }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Write ${framework} tests for this function:\n\n\`\`\`typescript\nfunction add(a: number, b: number): number {\n  return a + b;\n}\n\`\`\``,
        },
      },
      {
        role: "assistant",
        content: {
          type: "text",
          text: `\`\`\`typescript\nimport { describe, it, expect } from '${framework}';\n\ndescribe('add', () => {\n  it('adds two positive numbers', () => {\n    expect(add(2, 3)).toBe(5);\n  });\n\n  it('handles negative numbers', () => {\n    expect(add(-1, 1)).toBe(0);\n  });\n\n  it('handles zero', () => {\n    expect(add(0, 0)).toBe(0);\n  });\n});\n\`\`\``,
        },
      },
      {
        role: "user",
        content: {
          type: "text",
          text: `Now write ${framework} tests for this code:\n\n\`\`\`typescript\n${code}\n\`\`\``,
        },
      },
    ],
  })
);

Embedding Resource References

Prompts can reference MCP resources, giving the AI model access to dynamic data alongside the prompt instructions:

server.prompt(
  "debug-with-context",
  "Debug an issue with access to relevant logs and config",
  {
    errorMessage: z.string().describe("The error message to debug"),
    component: z.string().describe("The component experiencing the error"),
  },
  async ({ errorMessage, component }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `I am seeing this error in the ${component} component:\n\n> ${errorMessage}\n\nPlease analyze the error using the following context data and suggest a fix.`,
        },
      },
      {
        role: "user",
        content: {
          type: "resource",
          resource: {
            uri: `logs://${component}/recent`,
            text: "Recent logs will be loaded from the server",
            mimeType: "text/plain",
          },
        },
      },
      {
        role: "user",
        content: {
          type: "resource",
          resource: {
            uri: `config://${component}`,
            text: "Component configuration will be loaded",
            mimeType: "application/json",
          },
        },
      },
    ],
  })
);
Resource Embedding

MCP prompts can include messages with type: "resource" content, which tells the AI client to resolve the resource URI and include its data in the conversation. This connects prompts to your server's data layer seamlessly.
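A sketch of what resolution looks like on the client side. The `readResource` callback here is a stand-in for the client's actual resources/read call, not the SDK API; the content shapes mirror the prompt messages above.

```typescript
// Minimal content types mirroring the prompt messages above.
type Content =
  | { type: "text"; text: string }
  | { type: "resource"; resource: { uri: string; text?: string; mimeType?: string } };

interface Message {
  role: "user" | "assistant";
  content: Content;
}

// Replace each embedded resource reference with its fetched contents
// before the conversation is handed to the model.
function resolveResources(
  messages: Message[],
  readResource: (uri: string) => string
): Message[] {
  return messages.map((m) =>
    m.content.type === "resource"
      ? {
          role: m.role,
          content: { type: "text", text: readResource(m.content.resource.uri) },
        }
      : m
  );
}

// Example: a stub reader that returns canned data for any URI.
const resolved = resolveResources(
  [
    { role: "user", content: { type: "text", text: "Debug this." } },
    { role: "user", content: { type: "resource", resource: { uri: "logs://api/recent" } } },
  ],
  (uri) => `(contents of ${uri})`
);
console.log(resolved[1].content);
```

Because the server owns both the prompt and the resources it references, the URIs can be generated dynamically from prompt arguments, as the `logs://${component}/recent` example shows.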

Prompt Composition Patterns

Template Builders

Create reusable building blocks for prompt construction:

// Reusable prompt sections
function codeContext(code: string, language: string): string {
  return `\`\`\`${language}\n${code}\n\`\`\``;
}

function roleInstruction(role: string, constraints: string[]): string {
  return [
    `You are ${role}.`,
    "",
    "Constraints:",
    ...constraints.map(c => `- ${c}`),
  ].join("\n");
}

function outputFormat(format: string, example?: string): string {
  let text = `\n**Output format:** ${format}`;
  if (example) {
    text += `\n\nExample:\n${example}`;
  }
  return text;
}

// Compose them into prompts
server.prompt(
  "refactor-code",
  "Suggest refactoring improvements for code",
  {
    code: z.string().describe("Code to refactor"),
    language: z.string().describe("Programming language"),
    goals: z.array(z.string()).describe("Refactoring goals"),
  },
  async ({ code, language, goals }) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: [
          roleInstruction("a senior software engineer performing a code review", [
            "Suggest only meaningful refactorings",
            "Preserve existing behavior",
            "Explain the benefit of each change",
          ]),
          "",
          `Refactoring goals: ${goals.join(", ")}`,
          "",
          codeContext(code, language),
          outputFormat(
            "Numbered list of suggestions with before/after code blocks",
          ),
        ].join("\n"),
      },
    }],
  })
);

Prompt Factories

For servers with many similar prompts, use a factory function:

function createAnalysisPrompt(
  name: string,
  description: string,
  analysisType: string,
  additionalInstructions: string[] = []
) {
  server.prompt(
    name,
    description,
    {
      code: z.string().describe("Code to analyze"),
      language: z.string().describe("Programming language"),
    },
    async ({ code, language }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: [
            `Perform a ${analysisType} analysis on the following ${language} code.`,
            ...additionalInstructions,
            "",
            `\`\`\`${language}`,
            code,
            "```",
          ].join("\n"),
        },
      }],
    })
  );
}

// Create multiple analysis prompts from a shared pattern
createAnalysisPrompt(
  "security-audit",
  "Audit code for security vulnerabilities",
  "security",
  ["Focus on OWASP Top 10 vulnerabilities.", "Rate each finding as Low/Medium/High/Critical."]
);

createAnalysisPrompt(
  "performance-review",
  "Review code for performance issues",
  "performance",
  ["Identify Big-O complexity.", "Suggest concrete optimizations with benchmarks."]
);

createAnalysisPrompt(
  "accessibility-check",
  "Check UI code for accessibility compliance",
  "accessibility",
  ["Reference WCAG 2.1 guidelines.", "Check for ARIA attributes and keyboard navigation."]
);

The three MCP primitives work together: tools for actions, resources for data, prompts for structure.

Prompts in Practice

When to Use Prompts vs Tools

Scenario | Use Prompt | Use Tool
--- | --- | ---
Generate code based on a pattern | Yes — structure the request | No
Search a database | No | Yes — execute the query
Set up a debugging session | Yes — provide context + instructions | No
Deploy to production | No | Yes — perform the action
Summarize a codebase | Yes — with embedded resource refs | Maybe — if data gathering is needed

Combine Prompts with Tools

The most powerful pattern is using prompts to structure the initial request and tools to gather data. For example, a "debug-issue" prompt sets up the context, and the AI model then uses tools to read logs, check configs, and search code.
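A sketch of that division of labor. The tool names ("read-logs", "search-code") are hypothetical; the prompt only structures the request and points the model at the tools, which do the data gathering during the conversation:

```typescript
// Message body a hypothetical "debug-issue" prompt might generate.
// The prompt sets up context and directs the model to the server's tools.
function debugIssueMessages(errorMessage: string, component: string) {
  return [
    {
      role: "user" as const,
      content: {
        type: "text" as const,
        text: [
          `Debug this error in the ${component} component:`,
          "",
          `> ${errorMessage}`,
          "",
          "You have these tools available:",
          "- read-logs: fetch recent logs for a component",
          "- search-code: find definitions and call sites",
          "",
          "Gather evidence with the tools before proposing a fix.",
        ].join("\n"),
      },
    },
  ];
}

const [msg] = debugIssueMessages("ECONNREFUSED 127.0.0.1:5432", "api");
console.log(msg.content.text.split("\n")[0]); // → "Debug this error in the api component:"
```

The prompt stays small and declarative; the heavy lifting happens in tool calls the model decides to make.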

Frequently Asked Questions