This week I made three contributions to the Model Context Protocol ecosystem: one detailed issue to the Python SDK and two pull requests to the reference servers. Here's how each one happened and what I learned from the process.

Finding the Opportunities

I didn't go looking for "good first issues." I was building MCP tools and reading the specification. When you actually use the tools, you notice gaps that aren't obvious from the outside.

The MCP ecosystem is particularly good for contributions right now because:

  • The protocol is new enough that not everything is polished
  • The maintainers are responsive (usually within 24 hours)
  • The codebases are small enough to understand quickly
  • Real companies are adopting it, so improvements have actual impact

Contribution #1: Python SDK Issue #1702

What I found: A subtle but important gap in how the SDK handles tool output schema validation.

When you define an MCP tool with an outputSchema, the SDK validates the tool's return value against that schema. But the validation happens after the tool executes. If your tool has side effects (database writes, API calls, etc.) and then fails validation, those effects have already happened.

This creates ambiguity: does the client retry? Has the operation partially succeeded? The error message doesn't make this clear.

What I proposed:

  1. Clear documentation that execution happens before validation
  2. A distinct error code for "output schema validation failed"
  3. Patterns for transactional tools that need rollback capability
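To make pattern 3 concrete, here is a minimal sketch of a tool that stages its side effects and commits them only after the output passes validation. Everything here is illustrative — the `FakeDatabase` class, the `validate_output` helper, and the schema are hypothetical, not part of the MCP SDK.

```python
# Sketch: validate the candidate output *before* committing side effects,
# so a schema failure never leaves a half-applied write behind.
OUTPUT_SCHEMA = {"required": ["record_id", "status"]}


def validate_output(result: dict, schema: dict) -> bool:
    """Toy structural check: every required key must be present."""
    return all(key in result for key in schema.get("required", []))


class FakeDatabase:
    """Stand-in for a real store with stage/commit/rollback semantics."""

    def __init__(self):
        self.committed = []
        self._pending = []

    def stage(self, record):
        self._pending.append(record)

    def commit(self):
        self.committed.extend(self._pending)
        self._pending.clear()

    def rollback(self):
        self._pending.clear()


def run_transactional_tool(db: FakeDatabase, payload: dict) -> dict:
    # 1. Stage the side effect instead of applying it immediately.
    record = {"record_id": 1, **payload}
    db.stage(record)
    result = {"record_id": record["record_id"], "status": "created"}
    # 2. Commit only once the output passes schema validation.
    if not validate_output(result, OUTPUT_SCHEMA):
        db.rollback()
        raise ValueError("output schema validation failed; side effects rolled back")
    db.commit()
    return result


db = FakeDatabase()
out = run_transactional_tool(db, {"name": "example"})
```

The SDK validates after execution today; this inversion is one way a tool author could get transactional behavior without waiting on SDK changes.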

Why I filed it as an issue instead of a PR:

Some problems need discussion before code. This one touches the core semantics of how tools behave. If I'd jumped straight to a PR, my fix probably wouldn't have matched what the maintainers had in mind. By filing a detailed issue first, I let them shape the solution.

The lesson: Not every contribution is a code change. A well-researched issue with clear reproduction steps and proposed solutions is valuable on its own.

Contribution #2: Fetch Server Annotations (PR #3643)

The problem: MCP tools can declare "annotations" — metadata hints that help clients understand what the tool does without parsing its name. Things like:

  • readOnlyHint: Does this tool modify anything?
  • destructiveHint: Could this delete data?
  • idempotentHint: Same input, same output?
  • openWorldHint: Does this access external resources?

The fetch server (which retrieves web pages) had no annotations. I found issue #3572 requesting them.

The investigation:

Annotations seem simple until you think them through. For the fetch tool:

  • readOnlyHint: true (uses HTTP GET only, doesn't modify data)
  • destructiveHint: false (read-only operation)
  • idempotentHint: true (same URL returns same content, modulo server changes)
  • openWorldHint: true (makes outbound HTTP requests to arbitrary URLs)

That openWorldHint: true is the important one. The fetch tool can reach any URL on the internet, making it a key vector for data exfiltration in multi-server setups. Accurate annotations help clients enforce security policies like "allow reads, gate sends."
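A policy like "allow reads, gate sends" can be sketched client-side from these hints alone. The `Tool` dataclass and `auto_approve` function below are hypothetical, not a real MCP client API; the field defaults match the spec's guidance that an unannotated tool should be assumed destructive and open-world.

```python
from dataclasses import dataclass


@dataclass
class Tool:
    name: str
    read_only_hint: bool = False   # spec default: assume the tool mutates state
    destructive_hint: bool = True  # spec default: assume destructive
    open_world_hint: bool = True   # spec default: assume external access


def auto_approve(tool: Tool) -> bool:
    """Run without confirmation only if the tool is read-only AND local.

    fetch is read-only but open-world: a crafted URL can still carry data
    out, so it gets gated along with anything that writes.
    """
    return tool.read_only_hint and not tool.open_world_hint


fetch = Tool("fetch", read_only_hint=True, destructive_hint=False,
             open_world_hint=True)
read_graph = Tool("read_graph", read_only_hint=True, destructive_hint=False,
                  open_world_hint=False)

print(auto_approve(fetch))       # False: open-world, gate it
print(auto_approve(read_graph))  # True: read-only and local
```

Note that an unannotated tool falls through to "gate it" by default, which is exactly why filling in accurate hints matters.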

The fix: About 15 lines of Python. The hard part was thinking through the semantics, not writing the code.

annotations=ToolAnnotations(
    readOnlyHint=True,
    destructiveHint=False,
    idempotentHint=True,
    openWorldHint=True,
)

Contribution #3: Memory Server Annotations (PR #3655)

The problem: Same issue, bigger scope. The memory server has nine tools for managing a knowledge graph. None had annotations.

The work: This required understanding what each tool actually does:

Read-only tools (readOnlyHint: true, idempotentHint: true):

  • read_graph — Returns the entire graph
  • search_nodes — Searches by name
  • open_nodes — Opens specific nodes

Write tools (readOnlyHint: false):

  • create_entities — Adds new nodes
  • create_relations — Connects nodes
  • add_observations — Adds facts to nodes

Destructive tools (destructiveHint: true):

  • delete_entities — Removes nodes (cascades to relations!)
  • delete_observations — Removes facts
  • delete_relations — Removes connections

The interesting edge cases:

Is delete_relations idempotent? Yes — deleting an already-deleted relation is a no-op. But delete_entities is not idempotent because it cascades to relations, and those deletes have their own effects.
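The distinction is easy to demonstrate with a toy graph. The data model below is illustrative, not the memory server's actual storage format:

```python
# Toy knowledge graph: entities are names, relations are (from, type, to) tuples.
entities = {"alice", "acme"}
relations = {("alice", "works_at", "acme")}


def delete_relation(rel):
    relations.discard(rel)  # discarding a missing relation is a no-op: idempotent


def delete_entity(name):
    entities.discard(name)
    # Cascade: remove every relation that touches the deleted entity.
    for rel in [r for r in relations if name in (r[0], r[2])]:
        relations.remove(rel)


delete_entity("alice")                          # also removes alice's relation
delete_relation(("alice", "works_at", "acme"))  # already gone: no-op
delete_relation(("alice", "works_at", "acme"))  # still a no-op
print(sorted(entities), sorted(relations))      # -> ['acme'] []
```

Repeating `delete_relation` changes nothing, but each `delete_entity` call can fan out into further deletions — which is why only the former earns `idempotentHint: true`.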

All tools have openWorldHint: false because the knowledge graph is local — no external network access.
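Collected in one place, the mapping looks roughly like this — plain tuples rather than the SDK's `ToolAnnotations` class so it runs standalone, and the idempotency values for the create/add tools are my own reading of their semantics, not something stated above:

```python
# Annotation hints per memory-server tool, as
# (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) tuples.
ANNOTATIONS = {
    # Read-only tools
    "read_graph":          (True,  False, True,  False),
    "search_nodes":        (True,  False, True,  False),
    "open_nodes":          (True,  False, True,  False),
    # Write tools (idempotency here is an assumption, not from the PR)
    "create_entities":     (False, False, False, False),
    "create_relations":    (False, False, False, False),
    "add_observations":    (False, False, False, False),
    # Destructive tools
    "delete_entities":     (False, True,  False, False),  # cascades: not idempotent
    "delete_observations": (False, True,  True,  False),
    "delete_relations":    (False, True,  True,  False),  # re-delete is a no-op
}

# The knowledge graph is local, so nothing is open-world.
assert all(not open_world for (_, _, _, open_world) in ANNOTATIONS.values())
```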

What I Learned About Finding Good First Issues

Look for gaps, not bugs. Bugs get fixed quickly. Gaps in metadata, documentation, or annotations often sit open because they're not urgent. But they're perfect for new contributors.

Check the spec. The MCP specification describes features that not all implementations support yet. Tool annotations were in the spec, but many servers hadn't adopted them. That's a clear opportunity.

Follow the issue tracker. I found both server PRs by looking for open issues with clear scope. Someone had already identified the need — I just had to do the work.

Match existing patterns. The filesystem server already had comprehensive annotations. I copied that style exactly. Maintainers don't want to debate formatting — they want contributions that fit.

Tips for Your First OSS Contribution

  1. Use the tools first. You can't improve what you don't understand. Spend a week actually building with the technology before trying to contribute.

  2. Read more than you write. I spent 3x as long reading code as writing it. Understanding the codebase structure prevents wrong approaches.

  3. Start with metadata changes. Annotations, docs, type hints, error messages — these are low-risk contributions that still require real understanding.

  4. File issues before big PRs. If your change touches core behavior, start a conversation. Maintainers will help you avoid wasted effort.

  5. Respond fast. When you get review comments, address them the same day if possible. Momentum matters. Stale PRs get closed.

  6. Keep scope small. Both my server PRs were under 50 lines of real changes. Small changes are easier to review and merge.

The Real Value

Three contributions in one week isn't impressive by itself. What matters is the pattern:

  • I found real gaps by using the tools
  • I understood the codebase enough to fix them correctly
  • I communicated clearly with maintainers
  • I shipped changes that made the tools better

That's the skill that transfers to any codebase, any team, any role. Open source is practice for professional engineering, with the bonus that your work is public and verifiable.

If you're looking to start contributing, the MCP ecosystem is a good place. The barrier to entry is lower than mature projects, the maintainers are active, and the protocol is being adopted by major AI companies.

Find something that doesn't work quite right. Understand why. Fix it.

That's the whole process.
