Cloudflare Workers vs AWS Lambda: Edge vs Cloud Serverless (2026)

Cloudflare Workers run V8 isolates across 310+ edge locations with zero cold starts. AWS Lambda runs containers in 30+ regions with 200+ service integrations. Workers bills CPU time only. Lambda bills wall-clock time. A full 2026 comparison covering performance, pricing, ecosystem, and when to use each.

April 4, 2026

Quick Verdict: Workers vs Lambda

Bottom Line

Choose Cloudflare Workers for globally distributed, I/O-heavy workloads where cold starts and latency matter. Choose AWS Lambda for compute-heavy workloads, long-running processes, or anything deeply integrated with the AWS ecosystem. Workers is cheaper for most web-facing functions. Lambda is more capable for backend processing.

<5ms: Workers isolate startup (zero cold starts)
310+: Cloudflare edge locations
200+: AWS service integrations with Lambda

Feature Comparison: Workers vs Lambda

| Feature | Cloudflare Workers | AWS Lambda |
| --- | --- | --- |
| Runtime model | V8 isolates | Containers (microVM) |
| Cold start | <5ms (effectively zero) | 100ms to 1s+ |
| Deployment locations | 310+ edge locations (auto) | 30+ AWS regions (manual) |
| Billing model | CPU time only | Wall-clock duration |
| Max memory | 128MB | 10GB |
| Max CPU time | 5 minutes | 15 minutes |
| Native languages | JS, TS, WebAssembly | JS, Python, Java, Go, C#, Ruby, PowerShell |
| Key-value store | Workers KV (built-in) | DynamoDB (separate service) |
| Object storage | R2 (S3-compatible, no egress fees) | S3 |
| SQL database | D1 (SQLite-based) | RDS/Aurora (separate service) |
| Stateful compute | Durable Objects | Step Functions |
| Event triggers | HTTP, cron, queue, email | 200+ AWS event sources |
| Observability | Workers Analytics, Logpush | CloudWatch, X-Ray |
| Free tier | 100K requests/day | 1M requests/month + 400K GB-seconds |

Performance and Cold Starts

Cold starts are the defining performance difference. Lambda runs your code in a container. When that container is not warm, AWS needs to provision a microVM, load the runtime, initialize your code, and start the handler. This takes 100ms to over a second depending on the runtime and memory allocation. SnapStart (available for Java and Python) reduces this, but does not eliminate it.

Workers run in V8 isolates, the same execution environment that powers Chrome. An isolate is not a container. There is no VM to boot, no OS to load, no runtime to initialize. V8 creates a new isolate in under 5 milliseconds within an already-running process. In practice, this means Workers have zero perceptible cold starts.
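
The programming model mirrors this: a Worker is just an object with a `fetch` handler that V8 can instantiate in a fresh isolate. A minimal sketch (the `/health` route is illustrative; in a real Worker this object is the module's `export default`):

```javascript
// Minimal Worker-style handler: no server bootstrap, no runtime init,
// just an object V8 can spin up in a new isolate in milliseconds.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/health") {
      return new Response(JSON.stringify({ ok: true }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("not found", { status: 404 });
  },
};
```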

Workers: Edge-First Architecture

Code deploys to all 310+ locations automatically. A request from Tokyo hits a Tokyo edge node. A request from São Paulo hits a São Paulo edge node. No region selection, no multi-region configuration. At the 95th percentile, Workers respond roughly 4x faster than Lambda in cold-start scenarios.

Lambda: Region-Bound Compute

Lambda functions run in specific AWS regions. A function in us-east-1 serves global users from Virginia. Lambda@Edge runs at CloudFront locations but has stricter limits (5s timeout, 1MB response body) and slower deployments. For users near your chosen region, Lambda warm invocations are fast. For global distribution, the architecture works against you.

Why This Matters for AI Infrastructure

Coding agents make hundreds of API calls per session. Each call to a serverless function that cold-starts at 500ms adds up. Over a 200-request session, that is 100 seconds of dead time. Edge functions with zero cold starts eliminate this entirely. This is why Morph runs on Cloudflare infrastructure: when your API serves coding agents worldwide, every millisecond of latency compounds.

Pricing: CPU Time vs Wall-Clock Time

The billing model is the most underappreciated difference between these platforms. Lambda charges for total execution duration. Workers charges for CPU time only. For I/O-heavy functions, this distinction changes the cost by an order of magnitude.

The I/O Wait Problem

Consider a function that receives a webhook, validates it, queries a database (150ms network round-trip), and returns a response. Total wall-clock time: ~170ms. CPU time: ~5ms. Lambda bills you for 170ms. Workers bills you for 5ms. At 10 million requests per month, that difference is the gap between a $15 bill and a $170 bill.
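
That gap falls straight out of the two billing formulas. A sketch of the arithmetic, using the per-unit rates quoted in this comparison (the rates and inputs are illustrative; real bills add request fees, free-tier allowances, and minimum billed granularity):

```javascript
// Back-of-envelope cost model for the two billing schemes. Per-unit
// rates are taken from this comparison, not live price quotes.
const WORKERS_PER_MILLION_CPU_MS = 0.02;   // $ per million ms of CPU time
const LAMBDA_PER_GB_SECOND = 0.0000166667; // $ per GB-second of wall time

// Workers: pay only for CPU milliseconds, regardless of I/O wait.
function workersComputeCost(requests, cpuMsPerRequest) {
  return (requests * cpuMsPerRequest * WORKERS_PER_MILLION_CPU_MS) / 1e6;
}

// Lambda: pay for wall-clock seconds scaled by allocated memory (GB),
// including every millisecond spent waiting on the network.
function lambdaComputeCost(requests, wallMsPerRequest, memoryGb) {
  return requests * (wallMsPerRequest / 1000) * memoryGb * LAMBDA_PER_GB_SECOND;
}
```

Plug in the webhook profile above (170ms wall, 5ms CPU): the Workers charge scales only with the 5ms of CPU, while the Lambda charge scales with the full 170ms multiplied by allocated memory, so the gap widens as I/O wait or memory grows.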

| Component | Cloudflare Workers | AWS Lambda |
| --- | --- | --- |
| Requests | $0.30/million (after 10M included) | $0.20/million |
| Compute | $0.02/million ms CPU time | $0.0000166667/GB-second |
| Billing granularity | CPU time (excludes I/O wait) | Wall-clock time (includes I/O wait) |
| Free tier | 100K requests/day | 1M requests/month |
| Paid plan minimum | $5/month (includes 10M requests) | Pay-as-you-go (no minimum) |
| Data transfer out | Free (R2), or standard rates | $0.09/GB (S3) |

Real-world cost comparisons vary by workload. For high-volume, I/O-heavy HTTP endpoints, Workers is typically 50-80% cheaper. Baselime reported 80% lower cloud costs after migrating from AWS to Cloudflare. For CPU-heavy workloads that need more than 128MB of memory, Lambda can be more cost-effective because you are not paying for edge distribution you do not need.

Ecosystem and Integrations

Lambda's ecosystem is broader. Workers' ecosystem is more cohesive. This distinction matters more than feature counts.

Cloudflare: Zero-Latency Bindings

Every Cloudflare service (KV, R2, D1, Durable Objects, Queues, Vectorize, AI) runs on the same network as Workers. Bindings are in-process, not network calls. There is no data transfer cost between Workers and R2. No VPC configuration. No IAM role chaining. The tradeoff: fewer services total, but each one is designed for edge-first access patterns.
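
A sketch of what an in-process binding looks like from Worker code, assuming a KV namespace bound as `CACHE` (a hypothetical binding name you would declare in wrangler.toml; `get`/`put` follow the Workers KV API):

```javascript
// Worker sketch using a KV binding. The `env.CACHE` calls are
// in-process bindings: no connection setup, no IAM role, no egress fee.
const worker = {
  async fetch(request, env) {
    const key = new URL(request.url).pathname.slice(1);
    const cached = await env.CACHE.get(key);
    if (cached !== null && cached !== undefined) {
      return new Response(cached, { headers: { "x-cache": "hit" } });
    }
    // Hypothetical origin lookup elided; store a placeholder value.
    const value = `value-for-${key}`;
    await env.CACHE.put(key, value);
    return new Response(value, { headers: { "x-cache": "miss" } });
  },
};
```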

AWS: 200+ Service Integrations

Lambda triggers from S3, DynamoDB Streams, SQS, SNS, EventBridge, Kinesis, API Gateway, ALB, CloudFront, IoT, and dozens more. If your architecture is AWS-native, Lambda is the glue that connects everything. Step Functions orchestrate multi-step workflows. The tradeoff: configuration complexity, IAM policies, VPC networking, and cross-service latency.
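
For comparison, a sketch of the Lambda side of that glue: an SQS-triggered handler using the partial-batch-response contract (`processOrder` is a hypothetical helper, and returning `batchItemFailures` assumes `ReportBatchItemFailures` is enabled on the event source mapping):

```javascript
// Hypothetical domain function; throws on malformed messages.
async function processOrder(body) {
  const order = JSON.parse(body);
  if (!order.id) throw new Error("order id missing");
  return order.id;
}

// SQS-triggered handler (deployed as the module's exported `handler`).
const handler = async (event) => {
  const batchItemFailures = [];
  for (const record of event.Records) {
    try {
      await processOrder(record.body);
    } catch {
      // Report only the failed message so SQS retries it alone
      // instead of redelivering the whole batch.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
};
```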

Storage Comparison

Cloudflare R2 is S3-compatible with zero egress fees. For applications that serve large files globally, the egress savings alone can justify migration. D1 provides SQLite at the edge for read-heavy workloads. Durable Objects offer strongly consistent, globally coordinated state, something that has no direct equivalent in Lambda's ecosystem without provisioning a database.
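
A sketch of that coordination, assuming a hypothetical `Counter` Durable Object (the storage calls follow the `state.storage` interface; all requests for a given object id are serialized onto one instance):

```javascript
// Durable Object sketch: because one object id maps to one live
// instance, this read-modify-write is race-free without locks. The
// class would be exported and bound in wrangler.toml.
class Counter {
  constructor(state) {
    this.state = state; // state.storage is transactional key-value storage
  }
  async fetch(request) {
    let value = (await this.state.storage.get("count")) ?? 0;
    value += 1;
    await this.state.storage.put("count", value);
    return new Response(String(value));
  }
}
```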

AWS counters with DynamoDB (single-digit millisecond reads at any scale), Aurora Serverless (auto-scaling relational database), and ElastiCache. These are more powerful individually, but each requires separate provisioning, IAM configuration, and VPC setup. The operational overhead is real.

Language Support and Limits

Lambda supports seven runtimes natively: Node.js, Python, Java, Go, C#, Ruby, and PowerShell. Custom runtimes extend this to any language that can run on Amazon Linux. Workers supports JavaScript, TypeScript, and WebAssembly. Through WASM, you can run Rust, C, C++, and other compiled languages, but the experience is not as turnkey as Lambda's native runtimes.

| Limit | Cloudflare Workers | AWS Lambda |
| --- | --- | --- |
| Memory | 128MB | 128MB to 10GB |
| CPU time / execution | Up to 5 min CPU time | Up to 15 min wall time |
| Request body size | 100MB | 6MB (sync), 20MB (async) |
| Concurrent executions | No hard limit | 1,000 default (adjustable) |
| Deployment package | 10MB (compressed) | 50MB (zip), 250MB (uncompressed) |
| Environment variables | Via wrangler.toml + secrets | 4KB total via console/API |
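
The Workers side of that last row lives in wrangler.toml. A minimal hypothetical sketch (the name, entry point, and binding id are placeholders):

```toml
# Hypothetical wrangler.toml: plain vars ship with the deploy; secrets
# are set out-of-band with `wrangler secret put API_KEY`.
name = "example-worker"
main = "src/index.js"
compatibility_date = "2026-01-01"

[vars]
ENVIRONMENT = "production"

[[kv_namespaces]]
binding = "CACHE"
id = "<namespace-id>"
```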

The 128MB memory cap on Workers is the most meaningful constraint. It rules out memory-intensive operations like large image processing, ML model loading, or in-memory data aggregation. If your function needs more than 128MB, Lambda is the only option here. For typical API handlers, validation logic, and edge routing, 128MB is more than sufficient.

When Cloudflare Workers Wins

Global APIs and Webhooks

Zero cold starts, automatic deployment to 310+ locations, and CPU-time billing make Workers the clear choice for HTTP-facing functions that serve users worldwide. No region selection, no multi-region replication strategy, no CloudFront configuration.

I/O-Heavy Functions

Any function that spends most of its time waiting on network requests (database queries, third-party APIs, file fetches) pays drastically less on Workers. CPU-time billing means you only pay for the milliseconds your code actually runs, not the milliseconds it waits.
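
One idiom this enables: return the response immediately and let slow I/O finish afterwards via `ctx.waitUntil`, since the wait itself is not billed as CPU time (`recordHit` is a hypothetical logging call):

```javascript
// Hypothetical async logging sink; in practice this might be a fetch
// to an analytics endpoint or a write to a queue.
const hits = [];
async function recordHit(url) {
  hits.push(url);
}

const worker = {
  async fetch(request, env, ctx) {
    // waitUntil keeps the isolate alive for the logging promise
    // without delaying the response; the I/O wait costs no CPU time.
    ctx.waitUntil(recordHit(request.url));
    return new Response("ok");
  },
};
```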

Edge Routing and Middleware

Authentication checks, A/B testing, header manipulation, geo-routing, bot detection. Workers execute before the request reaches your origin. Sub-millisecond decisions at the edge, no round-trip to a cloud region required.
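
As one concrete sketch of an edge decision, deterministic A/B bucketing from a stable user id (the hash choice and 50/50 parity split are illustrative, not a prescribed technique):

```javascript
// Hash a stable user id (FNV-1a) so the same user always lands in the
// same bucket, decided at the edge with no origin round-trip.
function bucketFor(userId) {
  let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < userId.length; i++) {
    hash ^= userId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV-1a 32-bit prime
  }
  // Parity split for illustration; real experiments would use more bits.
  return hash % 2 === 0 ? "control" : "variant";
}
```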

Cost-Sensitive Scaling

At high volume, the pricing gap widens. Baselime reported 80% lower cloud costs after migrating from AWS to Cloudflare. R2 eliminates egress fees. No API Gateway costs (Workers handle routing natively). The $5/month paid plan includes 10 million requests.

When AWS Lambda Wins

Compute-Heavy Workloads

Image processing, video transcoding, ML inference, data transformation. Lambda allocates up to 10GB memory and 6 vCPU. Workers caps at 128MB. If your function needs heavy computation, Lambda is the only viable option between these two.

AWS-Native Architectures

If your stack runs on DynamoDB, SQS, S3 event triggers, Step Functions, and EventBridge, Lambda is the native glue. Workers cannot trigger on S3 events or consume DynamoDB streams. Migrating away from Lambda means migrating away from the AWS event-driven ecosystem.

Long-Running Processes

Lambda runs up to 15 minutes per invocation with full wall-clock time. Workers caps CPU time at 5 minutes. For batch jobs, ETL pipelines, or any function that needs sustained execution, Lambda provides more headroom.

Multi-Language Teams

Lambda natively supports seven languages. A team with Python data scientists, Java backend engineers, and Go microservices can all deploy to Lambda without compilation to WebAssembly. Workers requires JavaScript/TypeScript or WASM compilation.

Frequently Asked Questions

Is Cloudflare Workers faster than AWS Lambda?

For globally distributed requests, yes. Workers run on 310+ edge locations with zero cold starts (V8 isolates start in under 5ms). Lambda cold starts range from 100ms to over 1 second. At the 95th percentile, Workers respond roughly 4x faster than Lambda in cold-start scenarios. For warm Lambda invocations in the same region as the caller, the gap narrows.

Is Cloudflare Workers cheaper than AWS Lambda?

For I/O-heavy workloads, Workers is typically 50-80% cheaper because it bills CPU time only. A function that waits 200ms on a database but uses 15ms of CPU costs 15ms on Workers vs 200ms+ on Lambda. For CPU-heavy workloads, Lambda can be more cost-effective because it offers up to 10GB memory and 6 vCPU, while Workers caps at 128MB.

Can Cloudflare Workers replace AWS Lambda?

For web-facing workloads (APIs, webhooks, edge logic), Workers is a strong replacement. For workloads that depend on AWS ecosystem services like DynamoDB, SQS, Step Functions, and S3 event triggers, Lambda remains the better choice. Workers supports JavaScript, TypeScript, and WebAssembly. Lambda additionally supports Python, Java, Go, C#, Ruby, and PowerShell natively.

What are the main Cloudflare Workers limits?

128MB memory, up to 5 minutes of CPU time (configurable), and JavaScript/TypeScript/WebAssembly only. Lambda supports up to 10GB memory, 15-minute execution, and 7 native runtimes. The memory limit is the most common blocker for migration. For typical web API handlers, 128MB is rarely a constraint.

Does Morph use Cloudflare Workers?

Yes. Morph runs on Cloudflare infrastructure for edge routing and global distribution. Coding agents make hundreds of API calls per session from locations worldwide. Zero cold starts and edge presence mean every request resolves in the nearest data center, keeping latency low across the entire session.

Build on Cloudflare with Morph

Morph runs on Cloudflare infrastructure, delivering sub-10ms edge routing for AI code generation APIs. Zero cold starts, global distribution, and the fastest code apply engine available.