Claude Code Agent Teams vs Subagents: When to Use Which (From Someone Who Uses Both Daily)

Agent teams and subagents solve different problems. After running 6-8 parallel agents daily for months, here is the decision framework that actually works.

March 4, 2026 Β· 2 min read

Summary

Quick Decision Matrix (March 2026)

  • 1-3 parallel tasks, no coordination needed: Subagents. Cheapest, fastest to set up.
  • 4-8 parallel tasks, you want control: Manual panes. You are the coordinator.
  • 8+ tasks, agents need to talk to each other: Agent Teams. They self-coordinate.
  • Research and read-heavy work: Subagents. Smallest context windows, lowest cost.
  • Multi-file coordinated writes: Agent Teams. Agents share discoveries in real time.
  • 3-4x — token cost of an agent team vs a single session
  • ~5x — per-teammate cost vs an equivalent subagent
  • 2-16 — supported agent team size
  • 100K — lines of code built by 16 coordinated agents

The core question is not "which is better." It is "do your agents need to talk to each other?" If yes, use agent teams. If no, subagents are cheaper and simpler. Everything else is detail.

The Three Approaches to Parallel Work in Claude Code

Claude Code supports three distinct patterns for running work in parallel. They differ in communication model, cost, and how much control you retain as the human operator.

πŸ”€ Subagents

Fast parallel workers, no inter-agent chat.

Best for: research queries, parallel file reads, independent writes, quick focused tasks.

"Cheapest parallel execution. No coordination overhead, no coordination capability."

🀝 Agent Teams

Full sessions that message each other.

Best for: multi-file features, cross-layer changes, coordinated refactors, large content pipelines.

"Maximum coordination at maximum cost. Agents share findings in real time."

πŸ–₯️ Manual Panes

You are the coordinator.

Best for: experienced users, 4-8 parallel tasks, watching each agent work, ultrawide setups.

"Zero abstraction. You see everything, you decide everything."

| Aspect | Subagents | Agent Teams | Manual Panes |
| --- | --- | --- | --- |
| Communication | Report to parent only | Direct peer messaging | You relay manually |
| Context windows | Smaller, scoped | Full session each | Full session each |
| Task coordination | Parent dispatches | Shared task list with deps | Your brain |
| Setup | Built-in, zero config | Env var + TeamCreate | Open terminal splits |
| Max parallelism | ~5-8 per session | 2-16 teammates | Limited by your attention |
| Token cost | Lowest | Highest (~5x per agent) | Medium (independent sessions) |
Head-to-head:

  • Independent research tasks: subagents over teams
  • Multi-file coordinated edits: teams over subagents
  • Cost efficiency: subagents over teams
  • Human control: panes over teams

Subagents: The Agent Tool

Subagents are spawned within a single Claude Code session using the Agent tool. Each subagent gets its own context window, runs its task, and returns a summary to the parent. The parent is the only agent that sees all results. Subagents cannot message each other.

How Subagents Work

  • Parent agent calls the Agent tool with a prompt and (optionally) a custom system prompt
  • Subagent gets its own context window, separate from the parent
  • Subagent has access to the same tools (file read, write, grep, bash)
  • When finished, the subagent returns a text summary to the parent
  • Parent receives the summary and continues its work

Subagent Pattern: Parallel Research

# Parent agent spawns 5 subagents in parallel
# Each researches a different topic and returns a summary

Agent(prompt="Research the Stripe SDK payment intent flow")
Agent(prompt="Research the Plaid link token exchange pattern")
Agent(prompt="Find all existing payment-related files in src/")
Agent(prompt="Check the database schema for payment tables")
Agent(prompt="Read the current test coverage for billing/")

# All 5 run simultaneously
# Each gets its own context window (no cross-pollination)
# Parent receives 5 summaries, synthesizes, and acts
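The shape of this pattern can be sketched in plain Python. This is not Claude Code's implementation, just an illustration of the fan-out: isolated workers with no shared state, and a parent that is the only one to see every result.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the Agent tool: each worker gets only its
# own prompt (its isolated "context") and returns a text summary.
def run_subagent(prompt: str) -> str:
    # A real subagent would read files, grep, run bash, etc. Here we
    # only demonstrate the isolation: no worker sees another's state.
    return f"summary of: {prompt}"

prompts = [
    "Research the Stripe SDK payment intent flow",
    "Research the Plaid link token exchange pattern",
    "Find all existing payment-related files in src/",
    "Check the database schema for payment tables",
    "Read the current test coverage for billing/",
]

# All five run simultaneously, each fully independent.
with ThreadPoolExecutor(max_workers=5) as pool:
    summaries = list(pool.map(run_subagent, prompts))

# Only the parent holds all five results and synthesizes them.
plan = "\n".join(summaries)
```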

When Subagents Win

Independent Research

5 subagents each research a different topic. No agent needs to know what the others found. Parent synthesizes all 5 summaries into a plan.

Parallel File Analysis

Read 10 files simultaneously, each in its own context window. No risk of context pollution from one file affecting the analysis of another.

When Subagents Break Down

The parent bottleneck becomes painful when agents need to share intermediate results. If subagent A discovers a type definition that subagent B needs, the information must flow: A returns summary to parent, parent parses it, parent spawns B (or re-prompts B) with that context. With 8 agents, this coordination overhead dominates the parent's context window.

Custom Subagents (Feb 2026)

Claude Code now supports custom subagent definitions via Markdown files with YAML frontmatter. You can define specialized subagents with custom system prompts, tool access restrictions, and lifecycle hooks. These live in your project's .claude/agents/ directory. This is useful for creating reusable, scoped workers, but they still cannot message each other.
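A sketch of what such a definition might look like. The frontmatter field names here are illustrative; check the Claude Code documentation for the exact schema your version supports.

```markdown
---
# .claude/agents/schema-reader.md — field names are illustrative;
# consult the Claude Code docs for the exact frontmatter schema
name: schema-reader
description: Read-only researcher for database schema questions
tools: Read, Grep, Glob
---

You are a read-only research agent. Inspect migration files and schema
definitions under db/, and return a concise summary of the tables and
relationships you find. Never modify files.
```

Restricting tools to Read, Grep, and Glob keeps the worker scoped: it can investigate but cannot write, which is exactly the guarantee you want from a reusable research subagent.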

Agent Teams: TeamCreate + SendMessage

Agent teams are independent Claude Code sessions that coordinate through a shared task list and direct messaging. Each teammate is a full session with its own context window, tool access, and permissions. The team lead orchestrates, but teammates can message each other without going through the lead.

How Agent Teams Work

  • Enable with CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
  • Team lead calls TeamCreate to set up the team directory and config
  • Lead spawns teammates, each as an independent Claude Code process
  • Teammates share a task list at ~/.claude/tasks/{team-name}/
  • Any agent can message any other agent via SendMessage
  • SendMessage supports direct messages, broadcasts, shutdown requests, and plan approvals

Agent Teams: Coordinated Feature Build

# Team lead creates the team
TeamCreate(name="payment-feature")

# Spawn specialized teammates
# Each is a FULL Claude Code session with its own context

# API teammate: builds the payment endpoints
# UI teammate: builds the payment form
# Test teammate: writes integration tests

# The coordination difference:
# API teammate finishes type definitions β†’
#   SendMessage(to="ui-teammate", "PaymentIntent types are at src/types/payment.ts")
# UI teammate picks it up immediately, no round-trip through the lead

# Test teammate asks API teammate directly:
#   SendMessage(to="api-teammate", "Can you start the dev server on port 3001?")
# API teammate responds, test teammate proceeds

# Shared task list tracks dependencies:
# Task "Build UI" is blocked by "Define API types"
# When API teammate marks types complete, UI task unblocks automatically

When Agent Teams Win

Cross-Layer Features

Frontend, backend, and test agents work in parallel. When the API agent changes a response shape, it messages the UI agent directly. No coordinator bottleneck.

Content Pipelines

Morph runs 22 writer agents in a team. The team lead forwards research data via SendMessage. Writers auto-commit and report back. Each writer produces an independent page but shares keyword and linking data with siblings.

The Cost Reality

Every teammate is a full Claude Code session. A 3-agent team uses roughly 3-4x the tokens of doing the same work sequentially in one session. With 8 agents, that multiplies further. Anthropic built their 100,000-line C compiler with 2,000 sessions at $20,000 in API cost. For most teams, the practical sweet spot is 3-5 teammates.

Cost Optimization

Use Sonnet 4.6 for worker teammates and Opus 4.6 only for the team lead or critical tasks. Sonnet scores 79.6% on SWE-bench Verified, only 1.2% behind Opus, at roughly half the token cost. For a 5-agent team, this can reduce total cost by 40-50%.
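The arithmetic behind that 40-50% figure can be checked with a back-of-envelope estimate. Prices for Opus come from the API pricing discussed below ($5/$25 per 1M input/output tokens); the Sonnet prices and per-agent token counts are assumptions for illustration.

```python
# Back-of-envelope model-mix estimate.
OPUS = {"in": 5.00, "out": 25.00}    # $ per 1M tokens (from API pricing)
SONNET = {"in": 2.50, "out": 12.50}  # assumption: roughly half of Opus

def agent_cost(prices, in_tok_m, out_tok_m):
    """Cost in dollars for one agent, token counts in millions."""
    return prices["in"] * in_tok_m + prices["out"] * out_tok_m

# Assume each of 5 agents consumes ~1.0M input / 0.2M output tokens.
all_opus = 5 * agent_cost(OPUS, 1.0, 0.2)
mixed = agent_cost(OPUS, 1.0, 0.2) + 4 * agent_cost(SONNET, 1.0, 0.2)

savings = 1 - mixed / all_opus
print(f"all-Opus ${all_opus:.2f}, mixed ${mixed:.2f}, saves {savings:.0%}")
# With one Opus lead and four Sonnet workers, savings land at 40%;
# cheaper Sonnet assumptions push it toward 50%.
```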

Manual Panes: You Are the Coordinator

No coordination layer. No shared task list. No SendMessage. You open 6 terminal panes, start a Claude Code session in each, and relay information between them yourself. This is the approach that experienced power users often prefer.

The Setup

  • iTerm2 splits, tmux panes, or multiple terminal windows
  • Each pane is an independent Claude Code session
  • You monitor all panes and copy-paste between them as needed
  • No token overhead for coordination, you do it with your eyes and clipboard

Manual Panes: Ultrawide Workflow

# Terminal layout on ultrawide monitor:
# β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
# β”‚ Pane 1  β”‚ Pane 2  β”‚ Pane 3  β”‚
# β”‚ Backend β”‚ Frontendβ”‚ Tests   β”‚
# β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
# β”‚ Pane 4  β”‚ Pane 5  β”‚ Pane 6  β”‚
# β”‚ Docs    β”‚ DevOps  β”‚ Review  β”‚
# β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
#
# Pane 1 finishes the API endpoint β†’
# You see it, copy the route path, paste into Pane 2's prompt
# Pane 3 watches both and you feed it the code paths to test
#
# No tokens spent on coordination
# No waiting for SendMessage delivery
# Maximum situational awareness

When Manual Panes Win

Manual panes win when you value control over automation. You see every agent's output in real time. You decide what information to share and when. There is no risk of an agent team lead making a bad coordination decision that cascades through the whole team. The tradeoff: your attention is the bottleneck. Beyond 6-8 panes, most people lose track.

Decision Framework

After months of using all three approaches daily, the decision reduces to two questions.

| Scenario | Best Approach | Why |
| --- | --- | --- |
| Research 5 topics in parallel | Subagents | Independent tasks, no coordination needed, cheapest |
| Read and analyze 10 files | Subagents | Each file is independent, smallest context per agent |
| Build feature across frontend + backend + tests | Agent Teams | Agents need to share type defs, API shapes, test data |
| Produce 20 SEO pages from shared data | Agent Teams | Writers need shared keyword data and internal links |
| Debug 4 separate issues | Manual Panes | Independent tasks, you want to watch each fix |
| Prototype 3 approaches to compare | Manual Panes | You want to read each output and pick the best |
| Refactor auth across 30 files | Agent Teams | Changes cascade, agents need to agree on patterns |
| Write tests for existing code | Subagents | Each test file is independent, no coordination |

The Two Questions

Do agents need to share discoveries?

If agent A's output changes what agent B should do, you need agent teams or manual panes. Subagents cannot react to each other's findings mid-task.

How many agents are running?

1-3 agents: subagents. 4-8 agents: manual panes if you want control, agent teams if you want automation. 8+ agents: agent teams (no human can track 8+ panes effectively).

Cost Comparison: Real Token Numbers

Token costs vary by task complexity, but the ratios are consistent. Here are representative numbers from production usage.

| Approach | Tokens Used | Relative Cost | Wall Clock Time |
| --- | --- | --- | --- |
| Single session (sequential) | ~200K | 1x (baseline) | ~15 min |
| 3 subagents (parallel) | ~350K | 1.75x | ~6 min |
| 3-agent team (coordinated) | ~700K | 3.5x | ~5 min |
| 3 manual panes | ~400K | 2x | ~7 min (includes your relay time) |

Why Agent Teams Cost More

Each teammate maintains a full context window. A subagent gets a scoped prompt and returns a summary. A teammate persists for the entire team session, accumulating context as it receives messages from other agents, reads shared task updates, and processes its own work. The coordination messages alone can add 30-50% overhead.

API vs Subscription

On Claude Max ($200/mo, 20x Pro usage), agent teams are effectively "free" within your allocation. On API pricing (Opus 4.6 at $5/$25 per 1M input/output tokens), a 5-agent team running for 2 hours can cost $15-30. The subscription absorbs agent team costs better than the API for most workflows.
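To see where the $15-30 figure comes from, here is the arithmetic. The per-agent token volumes are assumptions for illustration, not measurements; the prices are the Opus 4.6 API rates quoted above.

```python
# Rough check of the $15-30 estimate for a 5-agent team over 2 hours.
PRICE_IN, PRICE_OUT = 5.00, 25.00  # Opus 4.6, $ per 1M tokens

agents = 5
in_tokens_m = 0.6    # assumed ~600K input tokens per agent over 2 hours
out_tokens_m = 0.1   # assumed ~100K output tokens per agent

per_agent = PRICE_IN * in_tokens_m + PRICE_OUT * out_tokens_m
team_cost = agents * per_agent
print(f"per agent ${per_agent:.2f}, team ${team_cost:.2f}")
```

With these assumptions the team lands at $27.50, near the top of the quoted range; lighter sessions drift toward the $15 floor.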

Cost Optimization Strategies

  • Mix models: Sonnet 4.6 for workers, Opus 4.6 for the lead. Cuts cost 40-50% with minimal quality loss.
  • Scope tasks tightly: A teammate that finishes early still holds its context window open. Give each agent exactly one well-defined task.
  • Use subagents within teams: A teammate can spawn its own subagents for sub-tasks. Use teams for coordination, subagents for execution within each teammate.
  • Plan first: Run plan mode before committing tokens to a team. A 5-minute planning pass can prevent a 30-minute wasted team run.

Real Examples from Production

Example 1: SEO Content Pipeline (Agent Teams)

Morph's SEO pipeline runs 22 writer agents in a team. A trend-scout agent identifies keywords. The team lead distributes topics via SendMessage. Each writer agent produces a complete page (React component, metadata, structured data) and commits independently. Writers share internal link targets with siblings so pages cross-reference each other.

This would not work with subagents. The cross-linking requirement means writers need to know what URLs other writers are creating. With subagents, the parent would need to track 22 pages' worth of URLs and relay them, which would overflow its context window.

Example 2: Research Pipeline (Subagents)

Before writing each SEO page, the pipeline spawns 5-8 research subagents. Each searches for different aspects of the topic: competitor content, community discussions, documentation, benchmarks. Each subagent returns a 200-300 word summary. The parent synthesizes these into a research brief that gets forwarded to the writer agent.

Subagents are correct here because each research query is independent. The subagent searching Reddit does not need to know what the subagent reading documentation found. The parent handles synthesis.

Example 3: Multi-Issue Debugging (Manual Panes)

When facing 4-6 unrelated bugs across different parts of the codebase, manual panes win. Each pane gets one bug. You watch all of them, intervene when one gets stuck, and can quickly redirect an agent that goes off track. No coordination overhead because the bugs are independent.

Choosing the Right Approach: Mental Model

# Ask yourself:
#
# 1. Are the tasks independent?
#    YES β†’ Subagents (cheapest) or Manual Panes (most control)
#    NO  β†’ Agent Teams (coordination required)
#
# 2. How many parallel tasks?
#    1-3  β†’ Subagents
#    4-8  β†’ Manual Panes or Agent Teams
#    8+   β†’ Agent Teams (you can't watch 8+ panes)
#
# 3. Do agents need real-time shared state?
#    YES β†’ Agent Teams (SendMessage + shared tasks)
#    NO  β†’ Subagents
#
# 4. Is cost a concern?
#    YES β†’ Subagents (1.75x) > Panes (2x) > Teams (3.5x)
#    NO  β†’ Pick based on coordination needs

Common Mistakes

Mistake 1: Using Agent Teams for Independent Tasks

If your agents do not need to talk to each other, agent teams waste tokens on coordination infrastructure. The shared task list, message routing, and team config all consume context. For 5 independent research tasks, subagents cost 1.75x baseline. An agent team doing the same work costs 3.5x. You pay double for coordination you do not use.

Mistake 2: Using Subagents When Coordination Is Needed

The opposite mistake is equally expensive. If you use subagents for a cross-layer feature, the parent becomes a bottleneck. It receives results from the API subagent, then must re-prompt the UI subagent with that context, then feed test results back to both. The parent's context window fills with relay traffic instead of actual work.

Mistake 3: Too Many Agents

More agents does not mean faster results. Agent teams have diminishing returns past 5-6 teammates because coordination overhead grows with team size. Each additional agent adds messages to every other agent's context window. Anthropic's C compiler project used 16 agents, but that was a 100,000-line project over two weeks. For a typical feature, 3-5 agents is the sweet spot.
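A toy model makes the compounding visible. Assuming every teammate sends one status update to every other teammate per round (an assumption for illustration, not a claim about how agent teams batch messages), the message count grows quadratically:

```python
# Toy model of coordination overhead in a fully connected team.
def coordination_messages(n_agents: int) -> int:
    # each agent sends one update to each of the (n - 1) others
    return n_agents * (n_agents - 1)

for n in (3, 5, 8, 16):
    print(f"{n:2d} agents -> {coordination_messages(n):3d} messages/round")
# 3 agents -> 6 messages per round; 16 agents -> 240 per round.
# Every message lands in some agent's context window, so each extra
# teammate makes every round more expensive for everyone.
```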

Mistake 4: Not Planning First

Starting a 5-agent team without a clear task breakdown wastes the first 10-15 minutes of every agent's time on discovery. Run plan mode first. Define what each agent will do. Then spawn the team with clear assignments.

The Hierarchy of Efficiency

Single session > Subagents > Manual panes > Agent teams. Start at the left. Move right only when the task genuinely requires it. Most tasks that feel like they need agent teams actually work fine with 3 subagents.

Frequently Asked Questions

What is the difference between agent teams and subagents in Claude Code?

Subagents (the Agent tool) run within a single session and report results to the parent. They cannot message each other. Agent teams (TeamCreate + SendMessage) are independent Claude Code sessions that message each other directly, share a task list, and self-coordinate. Subagents are isolated workers. Agent teams are a coordinated group.

When should I use agent teams instead of subagents?

Use agent teams when agents need to share discoveries mid-task, when you have 8+ parallel tasks, or when one agent's output affects another agent's approach. Use subagents for independent research, parallel file reads, or any work where each agent can finish without knowing what the others found.

How much do agent teams cost compared to subagents?

A 3-teammate agent team uses roughly 3-4x the tokens of a single session. Each teammate is a full Claude Code session with its own context window, costing approximately 5x per teammate versus a subagent. Subagents have smaller, focused context windows. For cost-sensitive work, use Sonnet 4.6 for worker agents and Opus 4.6 only for the team lead.

Can I use manual terminal panes instead of agent teams?

Yes. Manual panes (iTerm2 splits, tmux) give you maximum control with zero coordination overhead. You are the coordinator. This works well for 4-8 parallel tasks where you want to monitor each agent. The tradeoff: your attention is the bottleneck. Beyond 6-8 panes, most people lose track.

How do I enable agent teams in Claude Code?

Set CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 in your shell or settings.json. This unlocks TeamCreate, TaskCreate, TaskUpdate, TaskList, and SendMessage. Agent teams support 2-16 agents working on shared codebases. Released with Claude Opus 4.6 on February 5, 2026.
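In settings.json form, the flag might look like the fragment below. The `env` block is the usual way Claude Code settings pass environment variables, but verify the exact key against the current docs.

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```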

Can subagents within agent teams talk to teammates?

No. A subagent spawned by a teammate still follows subagent rules: it reports to its parent (the teammate), not to the team. But the teammate can relay findings to other teammates via SendMessage. This is a valid pattern: use subagents for execution within each teammate, agent teams for coordination between teammates.

What is the maximum number of agents in a team?

Agent teams support 2-16 teammates. In practice, diminishing returns start around 5-6 agents because coordination overhead grows quadratically with team size. Anthropic's 16-agent C compiler project is the upper bound of what has been demonstrated publicly.

WarpGrep Makes Every Agent Faster

Whether you use subagents, agent teams, or manual panes, every agent spends time searching code. WarpGrep replaces that search with 8 parallel tool calls per turn, returning results in under 6 seconds. Opus 4.6 + WarpGrep scores 57.5% on SWE-bench Pro, up from 55.4% stock.
