The Parallel Claude Workflow
Running Multiple AI Agents Like It's 1995 Multi-Tasking (But Actually Good)
From the engineer who remembers when running two programs simultaneously meant your Commodore 64 would freeze. Now I orchestrate multiple AI agents in parallel terminals, each building different features across isolated git worktrees.
The Problem Space
You know what takes me back? The first time I tried to run multiple programs simultaneously on my Commodore 64. Spoiler alert: it didn't work. The machine would just... freeze. Or crash. Or produce some beautiful, chaotic screen artifacts that looked more like modern generative art than anything functional.
Fast forward forty-odd years, and here I am orchestrating multiple AI agents in parallel terminals, each one working on different features across isolated git worktrees. The technology has changed, but that same fundamental challenge remains: how do you make multiple "things" work at the same time without them stepping on each other's toes?
I'll be honest—when I first heard about developers running multiple Claude Code instances simultaneously, I was skeptical. Having worked with AI coding assistants for a while now, I know the bottleneck isn't usually the AI's speed—it's my review bandwidth. These things generate code faster than I can think about whether it's any good.
The Serial Workflow Problem
I'd kick off Claude to refactor some authentication logic, and while it was churning away, I'd just... sit there. Waiting. Context-switching to Slack. Grabbing coffee. Coming back to find it done, reviewing it, then starting the next task.
The serial approach felt like running a single-core processor when I had multiple cores available.
Three Parallel Patterns
There are essentially three approaches to running multiple Claude instances in parallel. Each has its trade-offs, and the "right" choice depends on your project structure and tolerance for operational overhead.
[Interactive pattern comparison table: choose the right approach for your workflow]
💡 Recommendation
For most professional workflows, Git Worktrees offer the best balance of isolation and practicality. Add tmux when you need session persistence or remote development capabilities. Start with 2 parallel agents maximum until you understand your review bandwidth.
Pattern Two: Git Worktrees
Now we're talking. Git worktrees are one of those features that's been around since 2015 but somehow still feels like secret knowledge. (Kind of like how SIMD instructions existed for years before most developers actually used them.)
The concept is elegant: instead of cloning your repo multiple times, git lets you check out different branches into different directories, all sharing the same underlying git object store. It's like having multiple working directories that are all "views" into the same repository history.
Setup Commands
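The commands themselves didn't survive the page export, so here's a minimal sketch; the paths and branch names (~/my-project, project-auth, feature-payment) are illustrative:

```shell
# From the main checkout, create sibling worktrees, one per feature.
cd ~/my-project

# Create a new branch and a new worktree for it in one step
git worktree add -b feature-auth ../project-auth

# Or check an existing branch out into its own directory
git worktree add ../project-payment feature-payment

# Every directory is a full working tree backed by the same object store
git worktree list
```

I like the sibling-directory convention (../project-feature) because it keeps worktrees out of the main repo's tree and out of its .gitignore.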
Why This Works Better
Each worktree has its own filesystem. Agent 1 can modify ../project-auth/api/auth.go while Agent 2 modifies ../project-payment/api/auth.go — and they're different files in different directories, even though they represent the same logical file on different branches.
Git also prevents you from checking out the same branch twice. Try to create two worktrees on feature-auth and git will stop you. This is the kind of safety mechanism I appreciate—it's like having a mutex at the git level.
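You can watch that mutex fire with two commands (branch and paths illustrative):

```shell
# The first checkout of feature-auth into a worktree succeeds...
git worktree add ../project-auth feature-auth

# ...but a second worktree on the same branch is refused: git exits
# non-zero with a fatal error saying the branch is already checked out
git worktree add ../project-auth-two feature-auth
```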
[Diagram: git worktree isolation. Multiple worktrees act as views into one main repository with a shared git object store.]
Quick Reference
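The reference card didn't survive the export either, so here are the three worktree housekeeping commands I reach for day to day (the path is illustrative):

```shell
# List every worktree and the branch it has checked out
git worktree list

# Remove a worktree you're finished with (the branch itself survives)
git worktree remove ../project-auth

# Clean up metadata for worktree directories that were deleted by hand
git worktree prune
```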
The Operational Overhead
Here's what that ZX Spectrum kid in me wishes someone had told me upfront: worktrees require bootstrapping. Each one needs:
- Dependencies installed (npm install, poetry install, whatever your stack uses)
- Configuration files that aren't in git (.env files, local secrets)
- Build artifacts if your project has a compilation step
- Database migrations run if you're working on schema changes
For a Node.js project with a hefty node_modules directory, this can mean significant disk space. Yes, git efficiently shares objects, but those dependencies? They're duplicated.
Pattern Three: Tmux + Automation
If you're old enough to remember GNU Screen (or still use it), tmux will feel familiar. If you're younger and never had to deal with unreliable SSH connections that would kill your entire dev session... well, lucky you.
For parallel Claude workflows, tmux offers something valuable: persistent, detachable sessions. You can launch three Claude instances in three tmux sessions, detach from all of them, close your terminal, come back hours later, and they're all still running.
The Power of Persistence
Tmux sessions survive SSH disconnects, terminal closures, and laptop suspends. For remote development or long-running AI tasks, this resilience is invaluable. Your agents keep working even when you're not watching.
[Session manager view: three active, detachable Claude sessions: Auth Refactor, Payment API, API Restructure]
Tmux Commands
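The command list was lost in the export; this is the handful I actually use, assuming a session named claude-auth and a worktree at ~/project-auth:

```shell
# Start a named, detached session in a worktree directory
tmux new-session -d -s claude-auth -c ~/project-auth

# See every running session
tmux list-sessions

# Attach to check on an agent; detach again with Ctrl-b d
tmux attach -t claude-auth

# Tear a session down once its branch is merged
tmux kill-session -t claude-auth
```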
Automation Scripts
The real power comes when you automate the entire setup. Here's a battle-tested script pattern:
#!/bin/bash
set -euo pipefail

PROJECT_ROOT=~/my-project
FEATURES=("auth-refactor" "payment-api" "api-restructure")

cd "$PROJECT_ROOT"
for feature in "${FEATURES[@]}"; do
    worktree="${PROJECT_ROOT}-${feature}"

    # Create a worktree on a fresh branch for this feature
    # (drop -b if the branch already exists)
    git worktree add -b "feature-${feature}" "$worktree"

    # Bootstrap in a subshell so the cd doesn't leak into the next iteration
    (
        cd "$worktree"
        npm install
        # .env isn't tracked by git; copy it from the main checkout
        cp "${PROJECT_ROOT}/.env" .env
    )

    # Launch Claude in a detached tmux session
    tmux new-session -d -s "claude-${feature}" "cd ${worktree} && claude"
done

echo "Launched ${#FEATURES[@]} Claude instances"
tmux list-sessions

Run once, get three parallel agents working on three different features. Each in its own tmux session, each with its own worktree, each completely isolated.
The Token Reality
Here's the part nobody mentions in the glossy "parallelize everything!" tutorials: token consumption is real, and it scales non-linearly.
Running three Claude instances isn't 3× the tokens; it can be more, because each agent explores its own context, makes its own mistakes, and backs out of its own dead ends. I've hit my Claude Pro subscription limits multiple times doing this.
[Token consumption monitor: real-time tracking across parallel agents, showing 15,234, 23,451, and 18,932 tokens consumed]
💡 Key Insight
Running 3 parallel agents isn't 3× the tokens—it can be more. Each agent explores its own context, makes its own mistakes, and backtracks independently. Monitor your usage closely and consider your review bandwidth as the true bottleneck.
The Realities Nobody Mentions
Review Bandwidth Is Still Your Bottleneck
Remember my earlier skepticism? It's still valid. You can parallelize implementation, but you can't parallelize your own critical thinking. Three agents producing code simultaneously means three sets of changes to review, understand, and integrate.
Context Switching Has Cognitive Cost
Jumping between three different feature contexts while each Claude asks you questions is like trying to debug three different production incidents simultaneously. Possible? Yes. Pleasant? Not particularly.
Merge Conflicts Are Still Merge Conflicts
Worktrees prevent agents from literally overwriting each other's files during implementation. They don't prevent merge conflicts when you try to integrate all those feature branches. If Agent 1 and Agent 2 both refactored the same module, you're still going to spend time resolving that manually.
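Integration stays a serial, human job. A typical end-of-session merge sequence looks like this (branch names illustrative, and assuming your default branch is main):

```shell
cd ~/my-project
git checkout main

# Merge the finished features back one at a time
git merge feature-auth-refactor    # lands cleanly
git merge feature-payment-api      # conflicts if both touched the same module

# On a conflict, resolve by hand as usual, then:
git add .
git merge --continue
```

Merging one branch at a time keeps each conflict small and attributable to a single agent's work.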
Sweet Spot: Two Agents
I've found my sweet spot is two agents for active work, maybe a third for long-running research or refactoring tasks I can review asynchronously. Beyond that, the coordination overhead exceeds the parallelization benefits.
What I'd Recommend to Past Me
- Start small: learn the workflow with manageable complexity before adding agents.
- Automate the bootstrap: that 5 minutes of bash scripting saves you 30 minutes every time you spawn a new agent.
- Use tmux from day one: even if you think you don't need persistent sessions, you do. Your SSH connection will drop. Your laptop will suspend. Tmux saves you from starting over.
- Watch your token usage: set up alerts or just check the dashboard regularly. Surprise subscription limits are no fun.
- Know your limits: don't try to scale beyond your review capacity. Two well-reviewed implementations are better than five half-reviewed ones.
The Meta-Question: Should You Do This?
Here's the thing—just because you can run multiple AI agents in parallel doesn't mean you should. It's a power tool, and like any power tool, it can cut through work efficiently or it can cause spectacular disasters.
✓ Valuable For
- Large features that can be cleanly decomposed
- Refactoring work across different modules
- Exploratory research on multiple approaches
- Long-running tasks while you focus on high-priority work
✗ Not Valuable For
- Closely related features with merge conflicts
- Work where you need tight feedback loops
- Complex features with evolving architecture
- Teams where coordination overhead exceeds benefits
Your mileage will vary based on project structure, team size, and personal work style. The current state of parallel Claude workflows is like shell scripting in 1995—functional but primitive.
We're figuring this out together,
one parallel workflow at a time.