workerd: The C++ Runtime Quietly Powering Millions of Edge Functions

Cloudflare open-sourced the exact engine behind Workers. Not a rewrite. Not a reference implementation. The same code, running on your laptop or in 300+ data centers worldwide.

cloudflare/workerd · 12 min read

A massive turbine engine labeled workerd sits at the center of a global network. Tiny V8 isolate chambers glow inside it. Wires extend outward to hundreds of small server nodes arranged in a globe shape. The engine is partially transparent revealing intricate internal gears and channels.
The engine that powers Cloudflare Workers, now available to everyone.

Not Another JavaScript Runtime

The JavaScript runtime landscape is crowded. Node.js dominates server-side. Deno pitched security-first defaults. Bun bet on raw speed. Each carved out territory by rethinking how JavaScript should run on a server.

workerd took a different path entirely. It was not built to be a general-purpose runtime. It was built to run Cloudflare Workers at global scale, then open-sourced in September 2022 so anyone could run that same code locally or self-hosted.

The distinction matters. Node, Deno, and Bun are designed for developers building applications. workerd is designed for infrastructure operators running thousands of untrusted scripts in the same process without them interfering with each other.

"workerd is not just a way to run JavaScript on a server. It is a way to run many peoples' JavaScript on the same server."

-- Kenton Varda, Principal Engineer at Cloudflare, creator of workerd and Cap'n Proto

The Architecture That Makes It Different

Traditional runtimes give each application its own process. That works fine for a handful of services. It falls apart when you need to run tens of thousands of independent scripts on the same machine with sub-millisecond cold starts.

workerd solves this with V8 isolates. Each Worker gets its own isolate: separate code, separate global scope, separate memory. But all isolates share the same process, the same native API implementations, and critically, the same thread when they call each other.

Cross-section of a single workerd process showing multiple V8 isolate chambers side by side. Each chamber contains a small Worker script. Shared native API code runs along the bottom like a foundation. Arrows between chambers show same-thread calls with zero latency labels.
Multiple V8 isolates sharing a single process. The secret to sub-millisecond cold starts.

This is not theoretical. Cloudflare runs this architecture across more than 300 data centers worldwide. Every Workers request you have ever made hit this exact code.

Nanoservices: Microservices Without the Network Tax

Microservices revolutionized how teams build software. They also introduced network latency, serialization overhead, and operational complexity that nobody asked for.

workerd introduces a concept called nanoservices. Split your application into independently deployable components, just like microservices. But when one nanoservice calls another, the callee runs in the same thread and process. No HTTP hop. No JSON serialization. Just a function call.

Two architectural diagrams side by side. Left shows traditional microservices with network hops between boxes connected by wavy lines representing latency. Right shows nanoservices as chambers within a single box connected by straight arrows with a zero ms label.
Microservices vs. nanoservices. Same decomposition, radically different performance characteristics.
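In Worker code, a call to another nanoservice looks like an ordinary fetch. The sketch below is illustrative: the binding name BACKEND and the greeting logic are invented, and the last lines fake the binding with a plain object so the snippet runs anywhere with Web-standard Request and Response (Node 18+, Deno, Bun). Inside workerd, env.BACKEND would be a service binding and the call would be dispatched on the same thread, with no network hop.

```javascript
// The callee nanoservice: an ordinary fetch handler.
const backend = {
  async fetch(request) {
    const name = new URL(request.url).searchParams.get("name");
    return new Response(`hello, ${name}`);
  },
};

// The caller. In workerd, env.BACKEND is a capability injected from the
// config; the fetch() below is dispatched as a same-thread call, not an
// HTTP request over the network.
const frontend = {
  async fetch(request, env) {
    return env.BACKEND.fetch(new Request("http://backend.internal/?name=edge"));
  },
};

// Outside workerd, simulate the binding by passing the callee directly.
const res = await frontend.fetch(new Request("http://example.com/"), {
  BACKEND: backend,
});
const body = await res.text(); // "hello, edge"
```

Note that neither side knows whether the other end is in-process or remote: the interface is just fetch. That is what lets workerd collapse the network hop without changing application code.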

The key insight is homogeneous deployment. Instead of deploying different services to different machines, you deploy all your nanoservices to every machine. Load balancing becomes trivial. Any request can be handled by any server, because every server has every service.

This only works because V8 isolates are lightweight enough to run thousands of them in a single process. A container-based approach would collapse under its own weight.

Capability Bindings: Security by Construction

Most runtimes let code access anything the process can access. A bug in one module can reach the filesystem, the network, or environment variables belonging to a completely different service. This is the root cause of Server-Side Request Forgery (SSRF) attacks.

workerd takes a fundamentally different approach. Configuration uses capability bindings instead of global namespaces. A Worker can only access the resources explicitly listed in its configuration. There is no global process.env. There is no ambient filesystem access. Each binding is a named capability granted to a specific Worker.

A Worker script in the center with labeled keys on keyrings extending outward. Each key connects to one specific resource like KV AUTH or R2. A wall separates the Worker from resources it does not have keys for. Crossed-out arrows show blocked access attempts.
Capability bindings make unauthorized access structurally impossible, not just policy-prohibited.

Consider an authentication service running as a nanoservice. It does not need a network address. Other Workers reach it through a binding. The auth service does not need to verify that requests came from allowed clients, because only Workers with that specific binding can send requests to it in the first place.
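In a workerd config, such a grant might look like the following sketch (the service name auth, the binding name AUTH, and the file names are hypothetical):

```capnp
const frontendWorker :Workerd.Worker = (
  compatibilityDate = "2024-01-01",
  modules = [ (name = "frontend.js", esModule = embed "frontend.js") ],
  bindings = [
    # Grant this Worker the capability to call the "auth" service.
    # Inside frontend.js it appears as env.AUTH, with a fetch() method.
    (name = "AUTH", service = "auth"),
  ],
);
```

A Worker whose config lacks that binding has no name by which to reach the auth service at all.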

This is not a firewall rule. It is not an ACL. It is a structural property of the runtime itself.

The Backwards Compatibility Promise

Every server runtime eventually ships a breaking change. Node.js went through painful major version transitions. Deno 2.0 reversed course on Node compatibility. These upgrades force developers to choose between security patches and application stability.

workerd solved this with a date-based compatibility system. The version number is a date: v1.20260318.1. Each Worker declares a compatibility date in its configuration. When you update workerd, it emulates the API surface as it existed on that date.

```capnp
worker = (
  compatibilityDate = "2024-01-01",
  modules = [ (name = "main.js", esModule = embed "main.js") ]
)
```

Want new APIs? Move your compatibility date forward. Want stability? Keep your date pinned. Either way, the runtime update itself will never break your code. This is a guarantee, not a best-effort policy.

Cloudflare ships a new release of workerd essentially every day. That cadence would be impossible without this compatibility system. It decouples runtime security from application compatibility in a way no other runtime has managed.

What Is Actually in the Codebase

The repository is large and meticulously organized. By GitHub's language statistics, C++ is the primary language at roughly 8.6 MB of source, alongside about 3.6 MB of JavaScript and 3.1 MB of TypeScript for the Web API implementations and type definitions.

| Directory | Purpose | Key Details |
|---|---|---|
| src/workerd/server | Core server binary | The main entry point. Handles config parsing, socket listeners, and V8 platform setup. |
| src/workerd/api | Web API implementations | fetch, WebSocket, Streams, Crypto, Cache, URL, HTMLRewriter, KV, R2, Durable Objects, and more. |
| src/workerd/io | I/O layer | The event loop, promise integration, and network I/O that bridges V8 and the OS. |
| src/workerd/jsg | JavaScript glue | The binding layer between C++ implementations and JavaScript APIs. Handles type marshaling. |
| src/node | Node.js compat | Partial Node.js API compatibility layer including Buffer, streams, and crypto. |
| src/pyodide | Python support | Pyodide integration for running Python Workers via WebAssembly. |
A layered architectural diagram showing the workerd codebase from bottom to top. V8 engine at the base. C++ server and IO layer above it. JSG binding layer in the middle. Web APIs and Node compat at the top. Pyodide and Rust modules as side extensions.
The geological layers of workerd, from V8 bedrock to Web API surface.

The build system is Bazel, which handles the complexity of compiling C++, linking V8, and managing the dependency tree. Building from source requires Clang 19+, libc++, and LLD on Linux, or Xcode 16.3 on macOS. Not a trivial setup, but the Bazelisk wrapper automates most of it.

Cap'n Proto, also created by Kenton Varda, serves as both the configuration format and the internal RPC mechanism. This is not accidental. Cap'n Proto was designed for zero-copy deserialization, which matters when you are parsing configuration for thousands of Workers in a single process.
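For concreteness, a minimal complete config might look like this sketch (the file and service names are hypothetical; the schema import path follows workerd's own samples):

```capnp
using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [ (name = "main", worker = .mainWorker) ],
  sockets = [
    # Listen for HTTP on port 8080 and hand requests to "main".
    (name = "http", address = "*:8080", http = (), service = "main"),
  ],
);

const mainWorker :Workerd.Worker = (
  compatibilityDate = "2024-01-01",
  modules = [ (name = "main.js", esModule = embed "main.js") ],
);
```

A config like this is what workerd serve takes as input: every service, socket, and binding the process will ever use is declared up front.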

The Kenton Varda Thread

The creator of workerd has an unusual resume. Kenton Varda spent 7.5 years at Google where he open-sourced Protocol Buffers, one of the most widely used serialization formats in the world. He then created Cap'n Proto as a successor with zero-copy reads and a capability-based RPC system.

He founded Sandstorm, a startup building fine-grained sandboxing for self-hosted web apps, whose core team later joined Cloudflare. There, he designed and built the Workers runtime from scratch as Principal Engineer.

"Rather than use a container-based approach where each customer's code runs in its own container, we instead run code from many different customers in the same process using V8 isolates."

-- Kenton Varda, Introducing workerd

The design fingerprints are everywhere. Capability bindings come from Sandstorm's object-capability security model. Cap'n Proto powers the configuration. The nanoservice concept reflects years of thinking about how to decompose applications without paying the network tax.

With 870 commits, Varda is the second most prolific contributor after James Snell (1,626 commits), who leads much of the Web Standards API work.

How workerd Compares

| Feature | workerd | Node.js | Deno | Bun |
|---|---|---|---|---|
| Engine | V8 (isolates) | V8 (process) | V8 (process) | JavaScriptCore |
| Primary use | Multi-tenant edge | General server | General server | General server |
| Cold start | Sub-millisecond | ~100ms+ | ~50ms+ | ~30ms+ |
| Multi-tenancy | Built-in (isolates) | Separate processes | Separate processes | Separate processes |
| API surface | Web Standards | Node APIs | Web Standards + Node | Web Standards + Node |
| Config format | Cap'n Proto | package.json | deno.json | bunfig.toml |
| Breaking changes | Never (compat dates) | Major versions | Major versions | Major versions |
| Language | C++ / Bazel | C++ / GYP | Rust / Cargo | Zig / CMake |

The comparison reveals that workerd is not trying to win the same race. Node, Deno, and Bun compete on developer experience for individual applications. workerd competes on operational density for platform operators running thousands of applications simultaneously.

If you are building one web app, any of these runtimes will serve you well. If you are building a platform that runs other people's code at the edge, workerd is the only production-proven option in the open-source ecosystem.

Python, Wasm, and the Expanding Surface

workerd started as a JavaScript and WebAssembly runtime. It has grown significantly since.

In 2024, Cloudflare integrated Pyodide directly into workerd. Python Workers run via WebAssembly without any precompilation step. You write Python. It runs. Packages like FastAPI, NumPy, and LangChain work out of the box. By 2025, cold start performance improved dramatically and a uv-first workflow made the Python developer experience feel native.

A Python snake and a JavaScript lightning bolt intertwined inside a V8 isolate chamber. The Python snake passes through a WebAssembly prism before entering the chamber. Both coexist peacefully inside the same isolate.
Python joins JavaScript inside workerd via WebAssembly. No precompilation required.

Rust support arrived through WebAssembly as well. The src/rust directory contains tooling for compiling Rust modules to Wasm targets that run inside Workers. This positions workerd as a polyglot runtime, though JavaScript remains the primary citizen.

In June 2025, Cloudflare launched Workers Containers, running Firecracker microVMs alongside workerd processes. This hybrid lets platform users choose between the lightweight isolate model and full container compatibility when they need it.

The Security Caveat You Should Know

The README is refreshingly honest about one thing: workerd is not a hardened sandbox on its own.

"workerd tries to isolate each Worker so that it can only access the resources it is configured to access. However, workerd on its own does not contain suitable defense-in-depth against the possibility of implementation bugs."

-- workerd README

Cloudflare's production setup layers multiple additional defenses on top of workerd: virtual machines, process isolation, Spectre mitigations, and more. If you self-host workerd to run untrusted code, you need to provide those layers yourself.

This honesty is rare in open-source security tooling. The project explicitly tells you what it does and does not protect against, rather than implying safety through marketing.

Who Should Care About This

If you deploy to Cloudflare Workers, you are already running workerd. Using it locally through wrangler dev gives you exact production parity, a significant improvement over the Miniflare simulator that preceded it.

If you are building a platform that runs user-submitted code, workerd is the most battle-tested open-source option for multi-tenant JavaScript execution. The V8 isolate model, capability bindings, and compatibility dates solve real problems that you would otherwise need to engineer from scratch.

If you are evaluating edge runtimes, workerd represents the infrastructure-operator end of the spectrum. It sacrifices developer-experience niceties like built-in package managers and test runners in favor of operational density, security isolation, and backwards compatibility.

A developer at a desk with a laptop running workerd locally. The same workerd engine also appears inside a massive globe of data centers. A double-headed arrow labeled same code connects the two scenes.
The same runtime on your laptop and in 300+ data centers. That is the promise.

The Bigger Picture

Open-sourcing workerd was a strategic masterstroke. It eliminated the biggest objection to Cloudflare Workers: vendor lock-in. You can always take your code and self-host it. In practice, most teams will not bother because the managed platform is easier. But the option existing changes the buying conversation entirely.

The daily release cadence signals something important about Cloudflare's engineering culture. This is not a side project dumped over the wall. It is an active, living codebase with more than 3,500 commits and contributions from dozens of engineers. The runtime that runs in production is the runtime on GitHub.

workerd is not trying to replace Node.js or compete with Bun on benchmarks. It is solving a different problem: how do you run the world's code at the edge, safely, with sub-millisecond starts, and never break anything? Three years in, the answer seems to be working.