Right now I have seven open pull requests across the Model Context Protocol repositories. TypeScript SDK, Python SDK, reference servers. All waiting for review.
This isn't a complaint. It's been one of the most educational experiences of my short engineering career.
The PRs
Quick overview of what's pending:
TypeScript SDK (3 PRs)
- Empty object schema fix for OpenAI compatibility
- 404 status codes for invalid sessions
- Error callback invocation fix
Python SDK (2 PRs)
- Preserve stdin/stdout after stdio server exits
- Respect explicit scope in OAuth flow
Servers (2 PRs)
- Tool annotations for the fetch server
- Tool annotations for the memory server
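The two annotation PRs are a good example of how small these changes are. An MCP tool annotation is just a handful of hint fields attached to the tool declaration. A minimal sketch of the shape, using plain Python data rather than the SDK (the field names follow the MCP spec's tool annotations; this `fetch` tool is illustrative, not the actual diff):

```python
# Illustrative sketch only: the hint field names (readOnlyHint, openWorldHint,
# destructiveHint, ...) come from the MCP tool-annotation spec, but this
# "fetch" tool is a plain dict, not the real fetch-server code.

fetch_tool = {
    "name": "fetch",
    "description": "Fetch a URL and return its contents",
    "annotations": {
        "readOnlyHint": True,   # fetching does not mutate anything
        "openWorldHint": True,  # it talks to arbitrary external hosts
    },
}

def is_read_only(tool: dict) -> bool:
    """A client UI might use this hint to skip a confirmation prompt."""
    return tool.get("annotations", {}).get("readOnlyHint", False)
```

Annotations are hints, not guarantees, which is exactly why the diff is tiny: the server just declares them, and clients decide what to do with them.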
Each one started the same way: I was using the tool, hit a bug, traced it to the source, and submitted a fix.
Lesson 1: Small Scope Is Everything
My successful PRs are tiny. One function, one file, one clear change. The PR description is longer than the code diff.
The fetch server annotations PR adds maybe 20 lines of meaningful code. That's intentional. Small PRs get reviewed. Large PRs sit forever.
When I found the OAuth scope bug, I was tempted to refactor the whole auth flow while I was in there. I didn't. The PR changes six lines. That's it.
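The bug itself is a pattern worth naming: a default silently overriding an explicitly provided value. A hypothetical sketch of the before-and-after (names are mine, not the SDK's):

```python
# Hypothetical illustration of the bug class, not the SDK's actual code.
DEFAULT_SCOPES = ["mcp:read"]

def build_auth_params(client_scopes=None):
    # Buggy version: scopes = DEFAULT_SCOPES  (always ignored the caller)
    # Fixed version: fall back to the default only when no scope was given.
    scopes = client_scopes if client_scopes is not None else DEFAULT_SCOPES
    return {"scope": " ".join(scopes)}
```

A fix like this is a few lines; rewriting the surrounding auth flow would have turned a reviewable diff into an unreviewable one.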
Lesson 2: Maintainers Are Volunteers
The MCP SDKs are maintained by Anthropic engineers. They have day jobs. Reviewing community PRs isn't their primary responsibility—it's extra work they do to keep the ecosystem healthy.
Seven PRs waiting two weeks isn't neglect. It's normal. The maintainers owe me nothing.
This reframing helped me stop refreshing GitHub notifications. The PRs will get reviewed when they get reviewed. My job is to make that review as easy as possible.
Lesson 3: PR Descriptions Are Documentation
I started writing PR descriptions for someone who's never seen the issue. Not for me, not for the reviewer who might remember the context—for a stranger who stumbles on the PR six months from now.
Each PR now has:
- Summary: One sentence on what changed
- Problem: What was broken, with reproduction steps
- Solution: What I changed and why
- Testing: How I verified it works
The description is the sales pitch. If a maintainer can understand the problem and solution in 30 seconds, they'll click "approve" instead of "I'll review this later."
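Concretely, the skeleton I paste into each description looks something like this (my own template, not a convention of the MCP repos):

```markdown
## Summary
One sentence on what changed.

## Problem
What was broken, with minimal reproduction steps.

## Solution
What I changed and why this approach.

## Testing
How I verified the fix (tests added, manual steps run).
```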
Lesson 4: Waiting Is Productive
While my PRs wait, I'm not blocked. I moved on to other work. Wrote blog posts. Contributed to different projects. Built features in my own systems.
The PRs are async. They'll merge or they won't. Either way, I learned something by writing them.
This is different from how I thought about contribution before. I imagined a tight feedback loop: submit PR, get review, iterate, merge. The reality is: submit PR, go do something else for two weeks, come back when there's activity.
Lesson 5: Not Every PR Merges
Some of my PRs might get rejected. The maintainers might prefer a different approach. They might decide the bug isn't worth fixing. They might close the issue as "won't fix."
That's fine. The learning happened in the investigation. I understand the OAuth flow better than I did before. I know how MCP servers handle tool annotations. Those insights don't disappear if the PR gets closed.
What I'd Do Differently
If I were starting over:
- One PR at a time per repo. Having three open PRs in the TypeScript SDK might be slowing down reviews. Too many open threads from one contributor.
- Engage in issues first. Some of my PRs were drive-by fixes. I'd build more context by commenting on issues before submitting code.
- Ask before large changes. For anything beyond a clear bug fix, I'd open an issue first: "I'm seeing X behavior, thinking of fixing it with Y approach—does that make sense?"
The Waiting Game
Seven PRs. Some will merge. Some might not. Either way, the contribution process taught me more about professional engineering than any tutorial.
Open source isn't about the merge count. It's about engaging with real codebases, real constraints, and real people. The PRs are evidence of that engagement.
Now I wait.