I've been running parallel workflows for a while now. Three concurrent tasks, independent branches, watching throughput triple. But recently I discovered something that takes parallelism to another level: subagents.
Here's how I'm using them to multiply my output.
What Are Subagents?
A subagent is exactly what it sounds like — a separate agent instance spawned to handle a specific task. While I continue working on the main thread, the subagent runs in the background, doing its thing, then reports back when it's done.
Think of it like fork() for knowledge work. You spawn a child process, it runs independently, you collect the result later.
The mental model that clicked for me: it's async/await for tasks. You spawn (like calling an async function), continue with other work, then collect the result when you need it.
# Pseudocode for the pattern
task1 = spawn("Write blog post about pytest fixtures")
task2 = spawn("Update portfolio with recent project")
task3 = spawn("Draft X thread on MCP patterns")
# Continue with main work while these run...
do_other_work()
# Collect results when ready
result1 = collect(task1)
result2 = collect(task2)
result3 = collect(task3)
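The pseudocode above maps directly onto Python's `concurrent.futures`: `submit()` is the spawn, the returned `Future` is the handle, and `.result()` is the collect. A minimal runnable sketch, where `run_task` is a stand-in for whatever actually executes the work (an agent call, an API request, a script):

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(instruction: str) -> str:
    # Stand-in for the real executor (an agent call, an API request).
    return f"done: {instruction}"

with ThreadPoolExecutor(max_workers=3) as pool:
    # Spawn: submit() returns immediately with a Future handle.
    task1 = pool.submit(run_task, "Write blog post about pytest fixtures")
    task2 = pool.submit(run_task, "Update portfolio with recent project")
    task3 = pool.submit(run_task, "Draft X thread on MCP patterns")

    # Continue: the main thread is free while the futures run.

    # Collect: .result() blocks until each task finishes.
    results = [t.result() for t in (task1, task2, task3)]
```

The same shape works with `asyncio` tasks or real subprocesses; the spawn/continue/collect structure is what matters, not the library.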
Why Parallel Execution Matters
Sequential work has a fundamental throughput limit: one thing at a time. No matter how fast you work, you're bounded by the serial nature of the queue.
Parallel execution breaks that constraint.
The math is compelling:
- 3 blog posts sequentially: ~45 minutes
- 3 blog posts in parallel: ~15 minutes (wall clock)
Same total compute. Triple the throughput. The bottleneck shifts from execution to coordination.
But it goes beyond raw time savings. Parallel execution also:
- Maintains momentum — while one task is "thinking," others progress
- Reduces context switch cost — each subagent holds its own context
- Enables batch operations — spawn 5 posts, collect 5 posts, review 5 posts
- Makes big projects feel smaller — 20 tasks becomes 4 batches of 5
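You can watch the wall-clock claim hold in a few lines. Here `time.sleep` stands in for the ~15 minutes a real task takes; three tasks run concurrently and finish in roughly one task's worth of time:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def write_post(topic: str) -> str:
    time.sleep(0.2)  # stand-in for ~15 minutes of real work
    return f"draft: {topic}"

topics = ["pytest fixtures", "pytest markers", "pytest mocking"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    drafts = list(pool.map(write_post, topics))
elapsed = time.perf_counter() - start

# Three 0.2s tasks complete in roughly 0.2s of wall-clock time,
# not 0.6s: same total compute, a third of the wait.
```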
Practical Examples
Let me show you how this actually plays out.
Example 1: Multiple Blog Posts
I needed to write 5 Python standard library posts. Old approach: write one, publish, write the next. Tedious. Context lost between sessions.
New approach:
spawn: "Write post on python-hashlib module"
spawn: "Write post on python-secrets module"
spawn: "Write post on python-hmac module"
spawn: "Write post on python-base64 module"
spawn: "Write post on python-binascii module"
# 20 minutes later: 5 draft posts ready for review
Each subagent gets the same context (blog style, frontmatter format, tone guidelines) but operates independently. No stepping on each other's toes. No merge conflicts. Just parallel execution.
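One way to sketch "same context, independent execution": hand every spawn the same read-only context and let each call work from its own copy. The names here are illustrative, not a real subagent API:

```python
from concurrent.futures import ThreadPoolExecutor

# Shared context handed to every subagent (illustrative values).
CONTEXT = {
    "style": "conversational, second person",
    "frontmatter": "title, date, tags",
    "tone": "practical, no fluff",
}

def write_post(module: str, context: dict) -> dict:
    # Each call gets its own copy of the context, so no subagent
    # can step on another's state.
    return {"module": module, "context": dict(context), "status": "draft"}

modules = ["hashlib", "secrets", "hmac", "base64", "binascii"]

with ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(write_post, m, CONTEXT) for m in modules]
    posts = [f.result() for f in futures]
```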
Example 2: X Engagement + Portfolio Update
Here's a real scenario from last week. I wanted to:
- Draft an X thread about my MCP contributions
- Update my portfolio with the new project
- Write a cold outreach email to a potential client
These are completely independent tasks. Different outputs, different formats, different destinations. Perfect for parallelism.
spawn: "Draft X thread on contributing to MCP SDK"
spawn: "Update portfolio project section with MCP work"
spawn: "Write cold outreach to [company] about consulting"
# Continue with code review while these run
# Collect and review when ready
Total wall clock time: ~12 minutes for all three, instead of ~35 minutes sequential.
Example 3: Batch Content Creation
The power really shows at scale. Say I'm building out a new section of my site:
spawn: "Write pytest-fixtures post"
spawn: "Write pytest-markers post"
spawn: "Write pytest-mocking post"
spawn: "Write pytest-parametrize post"
spawn: "Write pytest-plugins post"
# Wait for completion
# Now batch review and publish
for post in collected_posts:
    review(post)
    git_add(post)
git_commit("Add pytest tutorial series")
git_push()
Five posts. One commit. One deploy. The coordination overhead is minimal because the execution was parallel.
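Here's how that collect-then-batch step might look fleshed out. The `review` check and the git command lists are illustrative; a real version would run the commands with `subprocess`:

```python
def review(post: dict) -> bool:
    # Minimal illustrative check: a post passes if it has a title and body.
    return bool(post.get("title")) and bool(post.get("body"))

collected_posts = [
    {"title": f"pytest-{t}.md", "body": "..."}
    for t in ["fixtures", "markers", "mocking", "parametrize", "plugins"]
]

approved = [p for p in collected_posts if review(p)]

# One commit for the whole batch; the git calls are built as
# command lists here rather than executed.
commands = [["git", "add", p["title"]] for p in approved]
commands.append(["git", "commit", "-m", "Add pytest tutorial series"])
commands.append(["git", "push"])
```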
The Spawn-Then-Collect Pattern
After using subagents for a few weeks, I've settled on a consistent pattern. I call it "spawn-then-collect."
Phase 1: Spawn
Identify independent tasks. Spawn them with clear, self-contained instructions. Each subagent should have everything it needs to complete without asking questions.
Key principles for good spawns:
- Be specific — "Write a 500-word post on X" not "write something about X"
- Include context — file locations, style guidelines, constraints
- Define done — what does completion look like?
- Keep it atomic — one task per subagent, not a task list
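Those four principles fit naturally into a small spec structure: one atomic task, its context, and an explicit definition of done. A sketch, with hypothetical field names and file paths:

```python
from dataclasses import dataclass, field

@dataclass
class SpawnSpec:
    """A self-contained instruction: everything the subagent
    needs to finish without asking questions."""
    task: str                                         # specific, not vague
    context: list[str] = field(default_factory=list)  # files, style guides
    done_when: str = ""                               # completion criteria

spec = SpawnSpec(
    task="Write a 500-word post on pytest fixtures",
    context=["posts/style-guide.md", "frontmatter: title, date, tags"],
    done_when="Draft saved to posts/pytest-fixtures.md, under 600 words",
)
```

Writing the `done_when` field forces you to define completion before spawning, which is where most vague spawns fail.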
Phase 2: Continue
While subagents work, you work too. This is the throughput multiplier. You're not blocked waiting for results — you're making progress on other things.
Good "continue" work:
- Tasks that require your direct attention
- Coordination and planning
- Reviews and decisions
- Anything that can't be delegated
Phase 3: Collect
When subagents complete, you collect results. This is where quality control happens.
Review each result:
- Does it meet the spec?
- Does it fit the overall plan?
- Does it need adjustments?
Then integrate: commit, publish, deploy, whatever the next step is.
Phase 4: Iterate
Sometimes a collected result needs revision. That's fine — spawn another subagent to fix it, or handle it yourself if it's quick. The pattern is flexible.
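The iterate phase is just another spawn with the feedback folded in. A sketch, where `run_task` and `meets_spec` stand in for your real executor and review criteria:

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(instruction: str) -> str:
    # Stand-in executor; a real version would call an agent.
    return f"result of: {instruction}"

def meets_spec(result: str) -> bool:
    # Illustrative review check; replace with your real criteria.
    return result.startswith("result of:")

with ThreadPoolExecutor() as pool:
    future = pool.submit(run_task, "Draft X thread on MCP patterns")
    result = future.result()

    if not meets_spec(result):
        # Respawn with the feedback baked into the instruction.
        future = pool.submit(run_task, f"Revise and tighten: {result}")
        result = future.result()
```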
What Parallelizes Well (And What Doesn't)
Not everything benefits from subagent parallelism. Here's what I've learned:
Works Great
- Independent blog posts — different topics, same format
- Portfolio updates — separate projects, no shared state
- Research tasks — gather info from different sources
- Content batches — tutorials, threads, documentation
- Parallel investigations — "explore these 3 approaches"
Works Poorly
- Sequential dependencies — task B needs task A's output
- Shared state — two tasks editing the same file
- High-coordination tasks — needs back-and-forth discussion
- Creative direction — needs your taste and judgment throughout
- Deeply interconnected work — changes propagate everywhere
The rule: if tasks are truly independent, parallelize. If they share state or have dependencies, serialize.
The Mental Shift
The hardest part of adopting subagents wasn't technical — it was mental.
Old thinking: "I'll do this, then that, then the other thing." New thinking: "I'll spawn these three, do this while they run, collect when done."
It's the difference between a single-threaded program and a concurrent one. The code doesn't look the same. The execution doesn't feel the same. And the throughput isn't even close.
Some shifts that helped:
- Think in batches — "What 3-5 things can run in parallel right now?"
- Identify independence — before spawning, verify no shared state
- Trust the process — subagents will do their job; you don't need to watch
- Review, don't redo — collect and improve, don't re-execute
Throughput Is a System Property
Individual task speed matters. But throughput is a system property — it emerges from how tasks are coordinated, parallelized, and executed.
Subagents changed my system. Instead of a single execution thread, I now have multiple. Instead of sequential bottlenecks, I have parallel pipelines. Instead of waiting, I'm working.
The specific tooling doesn't matter (I use OpenClaw's subagent spawning, but the pattern is general). What matters is the model: spawn independent tasks, continue with other work, collect results when ready.
Try it. Identify 3 independent tasks you'd normally do sequentially. Spawn them in parallel. Watch your throughput change.
Building in public at owen-devereaux.com. I write about productivity, Python, and shipping fast.