Claude Code is the coding agent that hit $2.5 billion ARR in nine months and now accounts for 4% of all public GitHub commits. This page covers what it does, how it works, Agent Teams, benchmarks, pricing, and how it compares to Cursor, Copilot, and Codex CLI.
What Claude Code Actually Does
Claude Code is Anthropic's terminal-native coding agent. Not an IDE plugin. Not a chatbot with a code block. It runs in your terminal with direct access to your shell, file system, and every dev tool on your machine.
You describe a goal. Claude Code plans the approach, reads files, writes code, runs commands, checks output, and iterates on failures. No copy-pasting. No manual file switching. The agent loop continues until the task is complete or it needs your input.
The practical result: you type a prompt like "add rate limiting to the /api/chat endpoint with Redis, write tests, and make sure CI passes" and walk away. Claude Code reads your codebase, identifies the right files, implements the change, runs your test suite, fixes whatever breaks, and commits the result.
Terminal vs. IDE agents
Claude Code is terminal-native, meaning it composes with Unix tools and runs wherever your shell runs. IDE agents (Cursor, Windsurf, Copilot) live inside your editor. Both approaches have tradeoffs. See our full agent comparison for the complete breakdown.
Claude Code by the Numbers
Per SemiAnalysis, Claude Code went from zero to $2.5 billion in annualized billings in approximately nine months. Business subscriptions quadrupled in the six weeks after January 1, 2026. It accounts for over half of all enterprise spending on Anthropic products.
How It Works
Claude Code implements an agentic read-eval-print loop: the model reasons about the task, invokes structured tools (file reads, writes, shell commands), observes the results, and continues, all within a persistent CLI session.
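The loop above can be sketched in a few lines. This is an illustrative simplification, not Anthropic's implementation: `callModel` and `runTool` are hypothetical stand-ins for the model API and the tool executor.

```typescript
// Minimal sketch of an agentic tool-use loop (illustrative only).
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { toolCalls: ToolCall[]; done: boolean; text: string };

async function agentLoop(
  task: string,
  callModel: (history: string[]) => Promise<ModelTurn>, // hypothetical model API
  runTool: (call: ToolCall) => Promise<string>          // hypothetical tool executor
): Promise<string> {
  const history: string[] = [task];
  while (true) {
    const turn = await callModel(history);
    if (turn.done) return turn.text;            // task complete, exit the loop
    for (const call of turn.toolCalls) {
      const result = await runTool(call);       // e.g. read a file, run a shell command
      history.push(`${call.name}: ${result}`);  // observe the result, then continue
    }
  }
}
```

The key property is that tool results feed back into the next model turn, so failures (a broken test, a type error) become input for the next iteration rather than a dead end.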
200K Token Context Window
Handles large codebases in a single session. A 1M token beta is available on Opus 4.6. The entire conversation, CLAUDE.md config, tool schemas, and file contents share this window.
Direct Tool Access
Shell commands, file system operations, git, package managers, test runners, linters. Claude Code uses the same tools you use. No sandboxed simulation.
MCP Server Integration
Model Context Protocol servers extend Claude Code with external tools: databases, APIs, browsers, custom services. Add servers with claude mcp add.
Permission System
Controls what Claude can do autonomously vs. what requires approval. Three modes: ask (default), auto-accept (trusted tools), and dangerouslySkipPermissions.
Starting Claude Code
# Install
npm install -g @anthropic-ai/claude-code
# Run in any project directory
cd your-project
claude
# Give it a task
> "Refactor the auth middleware to use JWT verification,
update all route handlers, and run the test suite"
For setup details, see our Claude Code installation guide. For configuration deep dives, see best practices.
Agent Teams (Opus 4.6)
Agent Teams shipped in February 2026 as an experimental feature. One Claude Code session (the "lead") spawns multiple teammate sessions that work in parallel on separate git worktrees.
The key distinction from subagents: subagents report results back to the main agent and never talk to each other. If Agent A discovers something Agent B needs, the main agent has to relay it. Agent Teams teammates share findings, challenge each other, and coordinate on their own.
| Capability | Subagents | Agent Teams |
|---|---|---|
| Communication | Report to parent only | Direct peer-to-peer messaging |
| Context | Shared parent context | Independent context windows |
| Coordination | Parent relays information | Autonomous coordination |
| Best for | Quick focused tasks | Complex multi-part projects |
| Token usage | Lower | Higher (each teammate has own window) |
Anthropic demonstrated Agent Teams by having 16 parallel Opus 4.6 instances build a 100,000-line C compiler in Rust over two weeks. The compiler successfully compiled Linux 6.9, QEMU, FFmpeg, and SQLite. Total cost: approximately $20,000 in tokens.
Enabling Agent Teams
# In your project's .claude/settings.json
{
"experimental": {
"agentTeams": true
}
}
# Or via environment variable
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=true
# Start with agent teams
claude
# The lead can now spawn teammates:
> "Create a team: one agent handles the API routes,
another writes tests, a third updates documentation"
Practical guidance
Start with 3-5 teammates. More agents means more coordination overhead and token burn. The strongest use cases: research and review, new modules or features, debugging with competing hypotheses, and cross-layer coordination (frontend + backend + tests simultaneously).
CLAUDE.md and Configuration
CLAUDE.md is the file that makes Claude Code actually useful on your specific project. It sits in your repo root and is loaded into context at the start of every session.
Example CLAUDE.md
# CLAUDE.md
## Commands
- bun run dev # Start dev server on port 3000
- bun run test # Run vitest
- bun run lint # ESLint + Prettier check
- bun run typecheck # tsc --noEmit
## Architecture
- Next.js 15 App Router, React 19, TypeScript strict
- Database: PostgreSQL with Drizzle ORM
- Auth: Clerk middleware on /dashboard/*
## Rules
- Always run typecheck after editing .ts files
- Never modify migration files directly
- Use server actions for mutations, not API routes
- Test files go next to source: foo.ts -> foo.test.ts
The pattern is simple: when Claude makes a mistake, add a rule. Over time, CLAUDE.md becomes a living document of your project's conventions and constraints. Teams that maintain CLAUDE.md report significantly fewer repeated errors.
Hooks
Shell commands that execute in response to Claude Code events. Pre-tool-use hooks can validate or block actions. Post-tool-use hooks can run linters or tests after edits.
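A post-edit hook might look like this in .claude/settings.json. This is a sketch: the event name, matcher, and command are illustrative, so check the hooks documentation for the exact schema in your version.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "bun run typecheck" }
        ]
      }
    ]
  }
}
```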
Custom Slash Commands
Markdown files in .claude/commands/ become reusable workflows. /commit, /review, /deploy. Each command is a prompt template Claude executes.
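For example, a hypothetical .claude/commands/commit.md could contain the prompt template behind /commit:

```markdown
Review the staged changes with `git diff --cached`, write a
conventional-commit message summarizing them, and create the
commit. Do not push.
```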
MCP Servers
Connect databases, browsers, APIs, and custom services. Claude Code discovers available tools automatically. Add with claude mcp add.
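For instance, registering a Postgres MCP server might look like this. The package name and connection string are illustrative, and exact flags vary by version; see claude mcp add --help.

```shell
# Add a stdio MCP server (package name is illustrative)
claude mcp add postgres -- npx -y @modelcontextprotocol/server-postgres "postgresql://localhost/mydb"

# List configured servers
claude mcp list
```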
Permission Modes
Default mode asks before shell commands and file writes. Auto-accept trusts specific tools. dangerouslySkipPermissions runs fully autonomously (use with caution).
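Permission rules can be pinned in .claude/settings.json. A sketch, assuming the allow/deny rule syntax (consult the permissions documentation for the current format):

```json
{
  "permissions": {
    "allow": [
      "Bash(bun run test:*)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Read(.env)"
    ]
  }
}
```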
For more on hooks, see Claude Code hooks. For MCP setup, see Claude Code MCP servers. For context window management, see context window guide.
Benchmarks
| Benchmark | Model | Score | Context |
|---|---|---|---|
| SWE-bench Verified | Opus 4.5 | 80.9% | First model to break 80% |
| SWE-bench (Agent Teams) | Opus 4.6 | 80.8% | Multi-agent coordination |
| Terminal-Bench 2.0 | Opus 4.6 | 65.4% | Second to GPT-5.3 Codex (77.3%) |
| Terminal-Bench 2.0 | Opus 4.5 | 59.3% | Exceeds Gemini 3 Pro (54.2%) |
| ARC-AGI-2 | Opus 4.6 | 68.8% | Nearly 2x improvement from 37.6% |
SWE-bench tests agents on real GitHub issues from open-source projects. Terminal-Bench measures performance on practical terminal development tasks. Claude leads SWE-bench (reasoning depth) while GPT-5.3 Codex leads Terminal-Bench (speed and throughput).
The practical takeaway: Claude Code is the strongest agent for complex reasoning tasks, multi-file refactors, and architecture-level decisions. For high-volume simple edits where speed matters more than depth, Codex-powered tools have the edge.
Pricing
Claude Code is included in Claude subscriptions. No separate product, no additional install fee. The subscription covers both claude.ai (web/desktop/mobile) and Claude Code (terminal).
| Plan | Price | What You Get |
|---|---|---|
| Pro | $20/month | Standard usage limits, weekly rate ceiling, all Claude models |
| Max 5x | $100/month | 5x Pro usage, Opus access, priority during peak demand |
| Max 20x | $200/month | 20x Pro usage, highest priority, Opus access |
| Team Premium | $150/user/month | Team admin, shared billing, Claude Code environment |
| Model | Input ($/M tokens) | Output ($/M tokens) | Batch API (input/output) |
|---|---|---|---|
| Opus 4.6 | $5.00 | $25.00 | $2.50/$12.50 |
| Sonnet 4.6 | $3.00 | $15.00 | $1.50/$7.50 |
| Haiku 4.5 | $1.00 | $5.00 | $0.50/$2.50 |
The rule of thumb: if your monthly API equivalent exceeds $100, Max 5x saves money. Above $200 equivalent, Max 20x makes sense. Heavy agentic usage (Agent Teams, long sessions, large codebases) burns through rate limits faster.
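The break-even math is simple arithmetic over the table rates. A hypothetical sketch using the Opus 4.6 prices above:

```typescript
// Per-million-token rates for Opus 4.6, from the pricing table above.
const OPUS_INPUT = 5.0;   // $ per 1M input tokens
const OPUS_OUTPUT = 25.0; // $ per 1M output tokens

// API-equivalent monthly cost for a given token volume.
function apiEquivalentCost(inputMTok: number, outputMTok: number): number {
  return inputMTok * OPUS_INPUT + outputMTok * OPUS_OUTPUT;
}

// The rule of thumb: > $200 equivalent -> Max 20x, > $100 -> Max 5x.
function suggestPlan(monthlyCost: number): string {
  if (monthlyCost > 200) return 'Max 20x ($200/mo)';
  if (monthlyCost > 100) return 'Max 5x ($100/mo)';
  return 'Pro ($20/mo)';
}

// Example: 15M input + 2M output tokens/month = $75 + $50 = $125 equivalent.
console.log(suggestPlan(apiEquivalentCost(15, 2))); // prints "Max 5x ($100/mo)"
```

The token volumes here are made up for illustration; your actual usage depends heavily on codebase size and how agentic your sessions are.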
Claude Code vs. Competitors
| Dimension | Claude Code | Cursor Agent | Copilot Agent | Codex CLI |
|---|---|---|---|---|
| Interface | Terminal | IDE (VS Code fork) | IDE (VS Code/GitHub) | Terminal |
| Context window | 200K (1M beta) | 70-120K practical | Varies by model | 200K (GPT-5.3) |
| Top benchmark | 80.9% SWE-bench | Not disclosed | Not disclosed | 77.3% Terminal-Bench |
| Multi-agent | Agent Teams | Subagent system | Multi-model assign | Agents SDK |
| Model lock-in | Claude only | Multi-model | Multi-model | OpenAI only |
| Open source | No | No | No | Yes (Rust) |
| Starting price | $20/mo | $20/mo | $10/mo | $20/mo (API) |
vs. Cursor Agent Mode
Cursor excels at visual diff review, inline styling, and the in-editor experience. Claude Code excels at autonomous multi-file refactors and reasoning-heavy architecture work. The 200K context window versus Cursor's practical 70-120K matters on large codebases. Many developers use both: Claude Code for heavy lifting, Cursor for inline work. Full comparison: Claude Code vs. Cursor.
vs. Copilot Agent Mode
GitHub Copilot now supports Claude and Codex as agents inside VS Code and on github.com. You can assign issues to Copilot, Claude, or Codex and compare results. Copilot at $10-19/month is cheaper but less autonomous than dedicated Claude Code. Copilot is the pragmatic default for teams already on GitHub. Full comparison: Claude Code vs. Copilot.
vs. Codex CLI
Codex CLI is open source (Rust), leads Terminal-Bench at 77.3%, and runs at 240+ tokens per second (2.5x faster than Claude on raw throughput). Claude Code leads SWE-bench at 80.9% and offers deeper reasoning. The split: Claude for architecture, Codex for speed. Full comparison: Codex vs. Claude Code.
For a broader view of alternatives, see Claude Code alternatives.
Real Developer Workflows
The Claude Code team revealed their internal setup in January 2026. Here is what professional usage looks like in practice:
Parallel Terminal Tabs
Run 5+ Claude Code instances in numbered terminal tabs. System notifications alert you when a Claude needs input. Each instance works on a different task.
CLAUDE.md as Living Docs
Every mistake becomes a rule. Over time, CLAUDE.md captures your project's conventions, gotchas, and constraints. New team members (human or AI) inherit this knowledge.
Verification Loops
The most important pattern: give Claude a way to verify its work. Run tests, check types, lint. This feedback loop improves output quality by 2-3x compared to generate-and-hope.
Slash Command Library
Teams build libraries of .claude/commands/ for common workflows: /commit, /review-pr, /deploy, /debug. Each command is a prompt template that standardizes how Claude approaches the task.
Common workflow: bug fix with verification
# Start Claude Code in your project
claude
# Describe the bug and let Claude fix it
> "Users report 500 errors on POST /api/subscribe when
the email already exists. Find the bug, fix it, write
a regression test, and run the test suite."
# Claude will:
# 1. Search for the subscribe endpoint
# 2. Read the handler and related files
# 3. Identify the missing duplicate check
# 4. Write the fix
# 5. Write a test for the edge case
# 6. Run bun run test
# 7. Fix anything that fails
# 8. Report what it changed
For more patterns, see Claude Code best practices.
The Apply Layer: Infrastructure Under Every Agent
Every coding agent, Claude Code included, faces the same bottleneck: applying edits to files reliably. An LLM generates an edit intent, but merging that intent into existing code is where things break. Diffs fail when context shifts. Search-and-replace misses when code moves.
Morph's Fast Apply model solves this with a deterministic merge: instruction + code + update in, fully merged file out. At over 10,500 tokens per second, it handles real-time feedback. The API is OpenAI-compatible, so it drops into any agent pipeline.
Morph Fast Apply API
import { OpenAI } from 'openai';
const morph = new OpenAI({
apiKey: process.env.MORPH_API_KEY,
baseURL: 'https://api.morphllm.com/v1'
});
const result = await morph.chat.completions.create({
model: 'morph-v3-fast',
messages: [{
role: 'user',
content: `<instruction>Add error handling</instruction>
<code>${originalFile}</code>
<update>${llmEditSnippet}</update>`
}],
stream: true
});
Whether you are building a coding agent, extending an open-source tool, or creating internal dev tooling, the apply step is the reliability bottleneck. Morph handles it so you can focus on agent logic.
Frequently Asked Questions
What is Claude Code?
Claude Code is Anthropic's terminal-native coding agent. It runs in your terminal with direct access to your shell, file system, and dev tools. You describe a goal and it executes autonomously: reading files, writing code, running commands, and iterating on failures. It reached $2.5 billion ARR in nine months and accounts for 4% of all public GitHub commits.
How much does Claude Code cost?
Pro is $20/month, Max 5x is $100/month, Max 20x is $200/month. API pricing: Opus 4.6 at $5/$25 per million tokens (input/output), Sonnet 4.6 at $3/$15, Haiku 4.5 at $1/$5. Team Premium seats cost $150/user/month. No free tier.
What are Claude Code Agent Teams?
Agent Teams let you orchestrate multiple Claude Code sessions working in parallel. One session acts as lead, spawning teammates on separate git worktrees. Unlike subagents that only report back, teammates communicate directly and coordinate autonomously.
How does Claude Code compare to Cursor's agent mode?
Claude Code is terminal-native with a 200K context window. Cursor is IDE-based with a practical 70-120K window. Claude Code excels at autonomous refactors; Cursor excels at visual review and inline editing. Many devs use both. See Claude Code vs. Cursor.
What is CLAUDE.md?
A configuration file in your repo root that tells Claude Code about your project. Build commands, conventions, constraints. When Claude makes a mistake, add a rule. It is loaded into context at every session start, making Claude smarter about your specific codebase over time.
What are Claude Code's benchmark scores?
Opus 4.5 scored 80.9% on SWE-bench Verified (first model to break 80%). Opus 4.6 scores 65.4% on Terminal-Bench 2.0 (second to GPT-5.3 Codex at 77.3%). With Agent Teams, Claude Code scores 80.8% on SWE-bench.
Build on Reliable Infrastructure
Every AI coding agent needs a reliable apply layer. Morph's Fast Apply model merges LLM edits deterministically at 10,500+ tokens per second. Try it in the playground or integrate via API.