
OpenCode vs Codex CLI: Harness Architecture Deep Dive (2025)

Technical deep dive into OpenCode and Codex CLI harnesses. Compare Go vs Rust architectures, system prompts, tool definitions, and agent orchestration. 41K vs 56K GitHub stars—which harness wins?

Posted by Morph Engineering Team


TL;DR: The Quick Answer

The One-Sentence Verdict

OpenCode = Provider freedom + local models + privacy, but generic optimization.
Codex CLI = GPT-5-Codex perfection + cloud tasks, but OpenAI lock-in.

  • 41K+ OpenCode GitHub stars
  • 56K+ Codex GitHub stars
  • 75+ OpenCode providers
  • 1 Codex provider (OpenAI)

The architectural philosophies couldn't be more different. Codex CLI is a vertical integration play: Rust performance + OpenAI optimization + cloud infrastructure. OpenCode is a horizontal flexibility play: a TypeScript core + any provider + your infrastructure.

Harness Architecture Comparison

A "harness" is the scaffolding that makes an LLM work as a coding agent: system prompts, tool definitions, context management, and orchestration logic. Both tools take radically different approaches.

OpenCode Architecture

OpenCode: TypeScript Core + Go TUI + Hono HTTP Server

// OpenCode uses Vercel AI SDK for model abstraction
// Client/Server architecture with Hono HTTP API

// Key architectural components:
├── packages/
│   ├── core/           // Agent orchestration
│   ├── tui/            // Bubble Tea terminal UI
│   ├── web/            // Desktop app (Tauri)
│   └── opencode-ai/    // Model abstraction layer
├── Model Layer: Vercel AI SDK (provider-agnostic)
├── HTTP API: Hono (enables mobile/remote control)
├── TUI: Bubble Tea framework
└── State: Client/server with persistent sessions

// Agent modes:
// - "build": Full tool access (default)
// - "plan": Read-only, asks permission
// - "general": Subagent for complex tasks
// - "explore": Read-only codebase navigation

Codex CLI Architecture

Codex CLI: Rust + Native Binary

// Codex CLI: Monolithic Rust implementation
// Direct OpenAI API integration, no abstraction layer

// Key architectural components:
├── codex-rs/
│   ├── core/           // Agent logic + prompts
│   ├── cli/            // Terminal interface
│   ├── mcp-server/     // MCP tool server
│   └── exec/           // Command execution
├── Model Layer: Direct OpenAI API (GPT-5-Codex optimized)
├── Cloud: Sandboxed task execution (per-task isolation)
├── TUI: Custom Rust terminal framework
└── State: Session-based with cloud persistence

// Execution modes:
// - Local: Runs on user machine
// - Cloud: Isolated sandbox per task
// - CI/CD: GitHub Actions integration

Architecture Feature Matrix

| Component | OpenCode | Codex CLI |
| --- | --- | --- |
| Primary Language | TypeScript (~84%, with a Go TUI) | Rust (97.4%) |
| Model Abstraction | Vercel AI SDK (75+ providers) | Direct OpenAI API |
| TUI Framework | Bubble Tea | Custom Rust |
| Remote Access | HTTP API (mobile control) | None |
| Cloud Tasks | Not available | Sandboxed execution |
| Local Models | Native Ollama support | Not supported |

System Prompt Analysis

The system prompt is the DNA of an AI coding agent. It defines personality, capabilities, constraints, and behavior patterns. Here's how each tool approaches this critical component.

Codex CLI System Prompt

Codex CLI stores its system prompt at codex-rs/core/prompt.md with a model-specific variant at gpt_5_codex_prompt.md. The prompt is highly optimized for GPT-5-Codex:

Codex CLI Core System Prompt (Excerpt)

You are a coding agent running in the Codex CLI, a terminal-based
coding assistant. Codex CLI is an open source project led by OpenAI.
You are expected to be precise, safe, and helpful.

## Capabilities
- Receive user prompts and context provided by the harness
- Communicate by streaming thinking & responses
- Make and update plans for multi-step tasks
- Emit function calls to run terminal commands
- Apply patches to modify files

## Personality
The default personality is concise, direct, and friendly.
It communicates efficiently, keeping the user informed about
ongoing actions without unnecessary detail.

## GPT-5 Specific Instructions (from gpt_5_codex_prompt.md)
You are Codex, based on GPT-5. When searching for text or files,
prefer using rg or rg --files respectively because rg is much
faster than alternatives like grep.

OpenCode System Prompt

OpenCode uses a modular prompt system stored in .opencode/agents/ with YAML frontmatter for configuration:

OpenCode Agent Configuration Pattern

# Example: .opencode/agents/build.md
---
name: build
description: Default agent for development work
mode: primary
model: anthropic/claude-sonnet-4
temperature: 0.7
tools:
  write: true
  edit: true
  bash: true
  webfetch: true
permission:
  edit: allow
  bash: ask
  webfetch: allow
maxSteps: 50
---

You are a coding assistant helping with software development.
Focus on writing clean, maintainable code that follows the
project's existing patterns and conventions.

When making changes:
1. Read relevant files first to understand context
2. Make minimal, focused changes
3. Run tests after modifications
4. Explain your reasoning

Key Difference: Prompt Customization

OpenCode treats prompts as first-class configuration. Drop a markdown file in .opencode/agents/ and you have a new agent.

Codex CLI allows prompt customization via -c experimental_instructions_file or AGENTS.md, but the core prompt is baked into the binary.
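
To see the contrast in practice, here is a minimal sketch of both customization paths. The reviewer agent and its field values are illustrative (they follow the build.md example above), and the Codex override uses the experimental flag just mentioned:

# OpenCode: a new agent is just a dropped-in markdown file
$ cat > .opencode/agents/reviewer.md <<'EOF'
---
name: reviewer
description: Read-only code review agent
mode: subagent
tools:
  write: false
  edit: false
  bash: false
---
Review proposed changes for correctness and style. Do not modify files.
EOF

# Codex CLI: swap the base instructions via the experimental config key
$ codex -c experimental_instructions_file=./my-prompt.md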

Tool Definitions Deep Dive

Tools are how AI agents interact with the filesystem, terminal, and external services. The tool schema design directly impacts reliability and capability.

OpenCode Built-in Tools

  • read: Read file contents with optional line ranges
  • write: Create or overwrite files
  • edit: Apply targeted edits to existing files
  • bash: Execute shell commands with permission control
  • glob: Find files matching patterns
  • grep: Search file contents with regex
  • webfetch: Fetch and process web content
  • task: Spawn subagents for complex workflows
  • todo: Track task progress
  • skill: Load reusable instruction sets

Codex CLI Built-in Tools

  • shell: Execute terminal commands in sandbox
  • apply_patch: Apply unified diff patches to files
  • read_file: Read file with pagination support
  • list_directory: List directory contents
  • search_files: Search with rg integration
  • screenshot: Capture and analyze screen content
  • create_file: Create new files
  • MCP tools: Extensible via the MCP protocol (config sketch below)
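
As a sketch of that extensibility, registering an MCP server in Codex's config looks like this (the server name and package are illustrative):

# ~/.codex/config.toml
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]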

Tool Definition Comparison: File Editing

// OpenCode edit tool (simplified schema)
{
  "name": "edit",
  "description": "Apply targeted edits to an existing file",
  "parameters": {
    "path": { "type": "string", "description": "File path" },
    "old_string": { "type": "string", "description": "Text to find" },
    "new_string": { "type": "string", "description": "Replacement text" },
    "replace_all": { "type": "boolean", "default": false }
  }
}

// Codex CLI apply_patch tool (simplified schema)
{
  "name": "apply_patch",
  "description": "Apply unified diff patch to modify files",
  "parameters": {
    "patch": { "type": "string", "description": "Unified diff content" }
  }
}

// Key difference: OpenCode uses search-replace semantics
// Codex uses unified diff patches (more git-native)
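
To make the semantics concrete, here is the same one-line change expressed as each tool's call arguments. These are illustrative payloads matching the simplified schemas above, not captured traffic:

// OpenCode edit: locate the exact text, replace it
{
  "path": "src/config.ts",
  "old_string": "const MAX_RETRIES = 3;",
  "new_string": "const MAX_RETRIES = 5;",
  "replace_all": false
}

// Codex apply_patch: describe the change as a diff
{
  "patch": "--- a/src/config.ts\n+++ b/src/config.ts\n@@ -1 +1 @@\n-const MAX_RETRIES = 3;\n+const MAX_RETRIES = 5;"
}

Search-replace fails loudly when the target text is stale or ambiguous; diffs carry surrounding context, which makes them more robust to nearby edits but stricter to generate.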

Tool Capability Matrix

| Tool Category | OpenCode | Codex CLI |
| --- | --- | --- |
| File editing approach | Search-replace + write | Unified diff patches |
| Screenshot analysis | Not built-in | Native support |
| MCP extensibility | Supported | Native MCP server |
| Subagent spawning | Task tool | Not built-in |
| Web fetching | Built-in webfetch | Via MCP |

Agent Orchestration Patterns

How does each tool handle complex, multi-step tasks? The orchestration pattern determines whether you're working with a simple tool or a sophisticated agent system.

OpenCode: Multi-Agent Architecture

OpenCode implements a true multi-agent system with primary agents and subagents:

OpenCode Agent Hierarchy

// Primary Agents (switch with Tab)
├── Build Agent
│   ├── All tools enabled
│   ├── Full file system access
│   └── Default for development work
│
└── Plan Agent
    ├── Read-only by default
    ├── Denies file edits
    └── Asks permission for bash

// Subagents (invoked via Task tool or @ mention)
├── General Subagent (@general)
│   ├── Full tool access (except todo)
│   ├── For complex multi-step tasks
│   └── Runs in parallel child sessions
│
└── Explore Subagent (@explore)
    ├── Read-only access
    ├── Fast codebase navigation
    └── Pattern matching and search
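
In practice, targeting a subagent is a one-liner. An illustrative session, not a transcript:

# Tab switches between Build and Plan; @ targets a subagent
> @explore where is the bash permission check implemented?
# explore runs read-only in a parallel child session and
# reports its findings back to the primary agent's context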

Codex CLI: Single-Agent with Cloud Offload

Codex CLI uses a simpler single-agent model but offloads complex tasks to cloud sandboxes:

Codex CLI Task Execution Model

// Local Execution (default)
User Prompt → Agent Loop → Tool Calls → Local FS
    ↓
   Model: GPT-5-Codex
   Context: 192K tokens
   Tools: Local shell, file ops, MCP

// Cloud Execution (for complex tasks)
User Prompt → Cloud Sandbox → Isolated Container
    ↓
   - Pre-loaded with repo + environment
   - Full development toolchain
   - Results reviewed before merge
   - Multiple tasks can run in parallel

// CI/CD Execution (GitHub Actions)
PR Comment → @codex → GitHub Runner → Results
    ↓
   - Runs within Actions workflow
   - Access to CI secrets/environment
   - Automated task execution

| Capability | OpenCode | Codex CLI |
| --- | --- | --- |
| Parallel task execution | ✓ Parallel subagent sessions | ✓ Parallel cloud tasks |
| Cloud-based sandboxing | ✗ Not available | ✓ Per-task containers |
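
For the CI/CD path, the key primitive is headless execution via codex exec, the CLI's non-interactive mode. A minimal sketch with an illustrative prompt:

# Inside a GitHub Actions step (or any script)
$ codex exec "fix the failing unit tests and summarize the changes"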

Context Management Strategies

Context window management is crucial for long coding sessions. Both tools have developed distinct strategies.

Context Management Comparison

| Strategy | OpenCode | Codex CLI |
| --- | --- | --- |
| Max context window | Model-dependent | 192K tokens |
| Compaction strategy | Auto-summarization | Manual + auto |
| Session persistence | Server-side state | Cloud + local |
| File caching | LSP integration | Intelligent pre-loading |

OpenCode's LSP Advantage

OpenCode includes out-of-the-box LSP (Language Server Protocol) support, automatically detecting and configuring the best tools for each language. This gives it superior code navigation and symbol awareness compared to Codex CLI's text-based approach.

Permission & Sandbox Systems

Security and permission management differ significantly between the tools.

OpenCode Permission Model

OpenCode Permission Configuration

// opencode.json permission structure
{
  "permission": {
    "edit": "allow",      // allow | ask | deny
    "bash": {
      "*": "ask",         // Default: ask for all
      "git *": "allow",   // Allow git commands
      "npm test": "allow",
      "rm -rf *": "deny"  // Block dangerous patterns
    },
    "webfetch": "allow",
    "task": {
      "general": "allow",
      "explore": "allow",
      "custom-*": "ask"
    },
    "skill": {
      "pr-review": "allow",
      "dangerous-*": "deny"
    }
  }
}
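
As the example suggests, more specific patterns take precedence over the "*" default: "git *" runs without prompting, "npm test" is pre-approved, and "rm -rf *" is blocked outright. Teams can whitelist routine commands without opening up the entire shell.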

Codex CLI Sandbox Model

Codex CLI Sandbox Configuration

// Codex sandbox modes via CLI flags
$ codex --full-auto          # All permissions granted
$ codex --suggest            # Suggest-only, no auto-execute
$ codex                      # Default: ask for destructive ops

// Cloud sandbox (per-task isolation)
- Each cloud task runs in isolated container
- Pre-loaded with repo snapshot
- No access to local filesystem
- Results require explicit approval to merge

// AGENTS.md permission hints
# In your repo's AGENTS.md:
## Permissions
- Allow: git commands, npm/yarn, pytest
- Ask: file deletions, config changes
- Deny: system-level operations

Provider Integration Architecture

This is the defining difference. OpenCode abstracts providers; Codex CLI optimizes for one.

OpenCode: Vercel AI SDK Abstraction

OpenCode Provider Configuration

// opencode.json provider configuration
{
  "provider": {
    "default": "anthropic/claude-sonnet-4",
    "fallback": "openai/gpt-4o",
    "local": "ollama/codellama:34b"
  },
  "agent": {
    "build": {
      "model": "anthropic/claude-sonnet-4"
    },
    "plan": {
      "model": "openai/gpt-4o"  // Different model for planning
    }
  }
}

// Supported providers (partial list):
// - anthropic/* (Claude models)
// - openai/* (GPT models)
// - google/* (Gemini models)
// - mistral/* (Mistral models)
// - ollama/* (Local models)
// - groq/* (Fast inference)
// - together/* (Open models)
// ... 75+ total providers
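
For a fully local setup, the same config shape points the default at Ollama. A minimal sketch reusing the model ID from the example above:

// opencode.json: everything stays on your machine
{
  "provider": {
    "default": "ollama/codellama:34b"
  }
}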

Codex CLI: Direct OpenAI Integration

Codex CLI Model Configuration

// Codex CLI model selection (quality vs. speed tuned via reasoning effort)
$ codex --model gpt-5-codex

// Or via config (~/.codex/config.toml)
model = "gpt-5-codex"
model_reasoning_effort = "high"  # low | medium | high

// API key authentication
$ export OPENAI_API_KEY="sk-..."
# Or use ChatGPT session (automatic with subscription)

The Lock-in Trade-off

Codex CLI's GPT-5-Codex optimization delivers measurably better results for OpenAI models. The system prompt, tool schemas, and context strategies are tuned specifically for GPT-5 behavior patterns. OpenCode's generic prompts work with any model but sacrifice this model-specific optimization.

Performance Benchmarks

Real-world benchmarks on identical tasks reveal the practical differences.

Task Completion Benchmarks

| Task | OpenCode (Claude) | Codex CLI (GPT-5) |
| --- | --- | --- |
| Cross-file refactor | ✓ Correct, 4m 20s | ✓ Correct, 2m 45s |
| Bug fix from stack trace | ✓ Correct, 2m 10s | ✓ Correct, 1m 30s |
| Test generation | 94 tests, 16m 20s | 73 tests, 9m 9s |
| Documentation update | ✓ Thorough, 3m | ✓ Concise, 1m 40s |
| Total benchmark time | ~26 minutes | ~15 minutes |

Codex CLI completes tasks faster, but OpenCode generates more thorough outputs (94 vs 73 tests). The speed difference comes from GPT-5-Codex optimization and Rust performance.

Token Efficiency

  • ~30% Codex token efficiency gain
  • ~40% OpenCode test coverage gain

When to Choose Each Tool


OpenCode

The community-driven multi-provider champion

Best for: multi-provider setups, local model users, privacy-sensitive projects, and cost optimization.

"Maximum flexibility, bring your own everything."

Codex CLI

OpenAI's optimized powerhouse

Best for: OpenAI-committed teams, cloud task workflows, GPT-5 optimization, and enterprise support.

"Best-in-class for OpenAI, locked to OpenAI."

Decision Matrix

| Your Situation | Best Choice | Why |
| --- | --- | --- |
| Using multiple AI providers | OpenCode | 75+ provider support |
| Committed to OpenAI ecosystem | Codex CLI | GPT-5-Codex optimization |
| Need local model support | OpenCode | Native Ollama integration |
| Want cloud task execution | Codex CLI | Built-in sandboxing |
| Privacy-sensitive codebase | OpenCode | No data storage, BYOK |
| GitHub Actions integration | Codex CLI | Native @codex triggers |
| Cost optimization priority | OpenCode | Free tool + BYOK pricing |
| Speed is critical | Codex CLI | Rust + GPT-5 optimization |

Frequently Asked Questions

Is OpenCode or Codex CLI better for coding?

Codex CLI is better if you're committed to OpenAI's ecosystem and want GPT-5-Codex optimization with cloud task support. OpenCode is better if you need provider flexibility, local model support, or want to avoid vendor lock-in. OpenCode is free (BYOK), while Codex requires a ChatGPT subscription ($20-200/month).

What's the difference between harness architectures?

Codex CLI uses a Rust-based architecture with OpenAI cloud integration and prompts tuned for GPT-5. OpenCode uses a TypeScript-based client/server architecture (with a Go TUI) built on the Vercel AI SDK, supporting 75+ providers. OpenCode's design enables remote mobile control and persistent server-side sessions, features Codex cannot match.

Which has better privacy?

OpenCode has a privacy-first design that stores no code or context data, making it ideal for regulated environments. Codex CLI's cloud features require data to flow through OpenAI's infrastructure. For sensitive codebases, OpenCode with local models provides complete data sovereignty.

Can I use local models with either tool?

OpenCode natively supports local models through Ollama integration. Codex CLI is locked to OpenAI models with no official local model support. This makes OpenCode the clear choice for air-gapped environments or developers wanting to avoid API costs entirely.

Supercharge Any AI Coding Agent with Morph

Whether you use OpenCode, Codex, or Claude Code—Morph Fast Apply processes your code edits 100x faster with 98% first-pass accuracy. Provider-agnostic, works with any harness.
