After submitting PRs to both the MCP TypeScript SDK and the official servers repo, I've picked up some patterns that aren't always obvious from the docs. Here's what I learned.

Tool Annotations Matter

MCP 2025-03-26 introduced tool annotations—metadata hints that help clients understand what a tool does before calling it. The four key annotations:

  • readOnlyHint: The tool only reads data and never modifies it
  • destructiveHint: The tool can delete or irreversibly modify data
  • idempotentHint: Calling the tool multiple times has the same effect as calling it once
  • openWorldHint: The tool interacts with external systems (network, APIs)

When I added annotations to the fetch server, I marked fetch as:

annotations: {
  readOnlyHint: true,
  openWorldHint: true
}

Why this matters: AI clients can use these hints to make smarter decisions. A client might auto-approve read-only tools but require confirmation for destructive ones.
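Here's a rough sketch of what such a client-side policy could look like. The annotation field names come from the spec; the needsConfirmation helper and the policy itself are hypothetical, just one way a client might use the hints:

```typescript
// Annotation hints as defined in the MCP 2025-03-26 spec.
interface ToolAnnotations {
  readOnlyHint?: boolean;
  destructiveHint?: boolean;
  idempotentHint?: boolean;
  openWorldHint?: boolean;
}

// Hypothetical client policy: auto-approve read-only tools, ask the user
// before anything that might destroy data. Note the spec treats a missing
// destructiveHint as true, so unannotated tools get the cautious path.
function needsConfirmation(annotations: ToolAnnotations = {}): boolean {
  if (annotations.readOnlyHint) return false;
  return annotations.destructiveHint !== false;
}
```

With this policy, the fetch tool above (readOnlyHint: true) would be auto-approved, while an unannotated tool would still prompt the user.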

The Empty Schema Trap

Here's a gotcha that cost me some debugging time: OpenAI's strict mode requires every object schema to have a required field, even if it's empty.

Converting Zod's z.object({}) to JSON Schema yields a valid schema, but one without a required: [] field. Pass that to OpenAI with strict mode enabled and it fails validation.

My fix in the TypeScript SDK recursively walks the schema and adds required: [] to any object type missing it. The key insight: you need to handle nested schemas too—in properties, additionalProperties, items, and combiner keywords like anyOf.
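The recursive walk can be sketched roughly like this. This is a simplified illustration, not the SDK's actual code, and ensureRequired is a name I made up:

```typescript
type Schema = Record<string, any>;

// Walk a JSON Schema and give every object schema a `required` array,
// returning copies rather than mutating the input.
function ensureRequired(schema: Schema): Schema {
  if (typeof schema !== "object" || schema === null) return schema;
  const out: Schema = { ...schema };
  if (out.type === "object" && !Array.isArray(out.required)) {
    out.required = [];
  }
  // Nested schemas hide in properties, additionalProperties, items,
  // and combiner keywords like anyOf/oneOf/allOf.
  if (out.properties) {
    out.properties = Object.fromEntries(
      Object.entries(out.properties).map(([k, v]) => [k, ensureRequired(v as Schema)]),
    );
  }
  if (typeof out.additionalProperties === "object" && out.additionalProperties !== null) {
    out.additionalProperties = ensureRequired(out.additionalProperties);
  }
  if (out.items && typeof out.items === "object" && !Array.isArray(out.items)) {
    out.items = ensureRequired(out.items);
  }
  for (const key of ["anyOf", "oneOf", "allOf"]) {
    if (Array.isArray(out[key])) out[key] = out[key].map(ensureRequired);
  }
  return out;
}
```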

Server Implementation Patterns

From reviewing how the official servers are built, some patterns stand out:

1. Descriptions are documentation

Tool descriptions aren't just for show—they're what the AI reads to understand when to use a tool. Be specific:

// Bad
description: "Fetches a URL"
 
// Good  
description: "Fetches a URL and returns the content. Supports HTML, JSON, and plain text. Use for reading web pages or API responses."

2. Input schemas should be tight

Define exactly what you accept. Don't use z.any() or z.unknown() unless you really mean it. Tight schemas help the AI generate valid inputs.

3. Error messages are part of the UX

When a tool fails, the error message goes back to the AI. Make it actionable:

// Bad
throw new Error("Failed");
 
// Good
throw new Error(`Failed to fetch ${url}: ${response.status} ${response.statusText}`);

The Review Process

Both my PRs got thoughtful reviews. Key feedback I received:

  1. Don't mutate input objects — Create copies instead of modifying in place
  2. Cover edge cases — Reviewers will ask about anyOf, not, conditional schemas
  3. Test the actual use case — Unit tests are good, but show it fixes the real problem
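The first point in concrete terms, as a minimal sketch (withDefaults is a made-up helper, not code from either PR):

```typescript
// Return a modified copy instead of editing the caller's object in place.
function withDefaults(schema: Record<string, any>): Record<string, any> {
  // Bad:  schema.required = schema.required ?? [];  // mutates the argument
  // Good: spread into a fresh object; the caller's schema is untouched.
  return { required: [], ...schema };
}
```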

The MCP maintainers are responsive and constructive. If your PR is solid, it'll get attention.

What I'd Do Differently

If I were starting fresh:

  1. Read the spec first — The MCP specification answers most questions
  2. Look at existing servers — The patterns are consistent; copy what works
  3. Start with annotations — They're the easiest contribution with clear value

Contributing to MCP has been a good way to understand how AI tooling actually works under the hood. The ecosystem is still early enough that meaningful contributions are accessible.
