Declaw is a monorepo of Go services and Python/TypeScript SDKs that together provide secure, isolated code execution for AI agents. The system has three conceptual layers: a client layer (SDKs), a control plane (sandbox-manager, node-collector, guardrails) deployed via Helm, and an execution layer (Firecracker microVMs managed by an orchestrator on bare metal).

## System diagram

## Component breakdown

### Control plane (Helm-deployed)

| Component | Path | Role |
| --- | --- | --- |
| Sandbox Manager | `infra/sandbox-manager/` | REST API (Gin framework) on port 8080; authentication, sandbox + template CRUD, tier enforcement |
| Node Collector | `infra/node-collector/` | Worker state repository on port 8090; tracks live sandboxes across orchestrator nodes |
| Guardrails | `infra/guardrails/` | FastAPI ML scanner service on port 8000; PII, prompt injection, toxicity, code security, invisible text |
| Shared | `infra/shared/` | Common Go library: models, auth, blobstore, telemetry |
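As an illustration of how a client talks to the sandbox-manager, the sketch below builds (but does not send) a sandbox-creation request with the standard library. The `/v1/sandboxes` route and the payload shape are assumptions for illustration, not taken from the actual API spec; only the port and the bearer-token authentication come from the description above.

```python
import json
import urllib.request

def build_create_sandbox_request(base_url: str, token: str, template: str) -> urllib.request.Request:
    """Build (but do not send) a sandbox-creation request to the sandbox-manager."""
    body = json.dumps({"template": template}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/sandboxes",          # hypothetical route
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",  # the manager handles authentication
            "Content-Type": "application/json",
        },
    )

req = build_create_sandbox_request("http://localhost:8080", "my-api-key", "python")
print(req.full_url, req.get_method())
```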

### Orchestration (bare metal)

| Component | Path | Role |
| --- | --- | --- |
| Orchestrator | `infra/orchestrator/` | Firecracker VM lifecycle on port 9090; network namespace management, per-sandbox security proxy, template caching |
| envd | `infra/orchestrator/envd/` | In-VM daemon, ConnectRPC on port 49983; filesystem API, process management, PTY support; bundled into the rootfs image |

### SDKs

| Module | Path | Role |
| --- | --- | --- |
| Python — sync | `python-sdk/declaw/sandbox_sync/` | Synchronous sandbox API |
| Python — async | `python-sdk/declaw/sandbox_async/` | Async mirror of all sync APIs |
| Python — security | `python-sdk/declaw/security/` | SecurityPolicy, PIIConfig, NetworkPolicy, etc. |
| TypeScript | `ts-sdk/src/` | Promise-based sandbox API with full TS types |
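The sync/async "mirror" relationship can be shown with a toy pair of classes: the async variant exposes the same surface as the sync one, but its methods are awaitable. The class and method names here are stand-ins for illustration, not the real SDK API.

```python
import asyncio

class CommandsSync:
    """Toy stand-in for the synchronous command API."""
    def run(self, cmd: str) -> str:
        return f"ran: {cmd}"

class CommandsAsync:
    """Async mirror: same signature and semantics, but awaitable."""
    async def run(self, cmd: str) -> str:
        return f"ran: {cmd}"

sync_result = CommandsSync().run("echo hi")
async_result = asyncio.run(CommandsAsync().run("echo hi"))
print(sync_result == async_result)
```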

## Request flow

End-to-end flow for Sandbox.create() followed by sandbox.commands.run():
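The hops in that flow can be mocked as plain function calls, one per service. This is a sketch of the layering only; the function names, return shapes, and the registration step are illustrative assumptions, while the ports come from the component tables above.

```python
# Client -> sandbox-manager -> orchestrator -> envd, with node-collector tracking state.

_live: dict = {}  # stand-in for the node-collector's worker state repository

def orchestrator_boot(template: str) -> dict:
    """sandbox-manager -> orchestrator (port 9090): boot a Firecracker microVM."""
    return {"id": f"sbx-{template}", "envd_port": 49983}

def node_collector_register(vm: dict) -> None:
    """orchestrator -> node-collector (port 8090): record the live sandbox."""
    _live[vm["id"]] = "running"

def sandbox_manager_create(token: str, template: str) -> dict:
    """SDK -> sandbox-manager (port 8080): authenticate, enforce tier, create the sandbox."""
    assert token, "authentication happens here"
    vm = orchestrator_boot(template)
    node_collector_register(vm)
    return {"sandbox_id": vm["id"]}

def envd_run(sandbox_id: str, cmd: str) -> str:
    """SDK -> envd inside the VM (ConnectRPC, port 49983): execute the command."""
    assert _live[sandbox_id] == "running"
    return f"[{sandbox_id}] {cmd}"

sandbox = sandbox_manager_create("api-key", "python")   # Sandbox.create()
print(envd_run(sandbox["sandbox_id"], "echo hello"))    # sandbox.commands.run()
```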

## Monorepo structure

```
declaw/
├── python-sdk/                 # Python SDK (sync + async)
│   └── declaw/
│       ├── sandbox_sync/       # Synchronous implementation
│       ├── sandbox_async/      # Async mirror
│       ├── security/           # SecurityPolicy, PII, injection, etc.
│       └── template/           # Template management
├── ts-sdk/                     # TypeScript SDK (@declaw/sdk)
├── cookbook/                   # 49 runnable examples + integration tests
├── templates/                  # Firecracker rootfs definitions (base, python, node, …)
├── spec/                       # OpenAPI + gRPC/proto definitions
├── docs/                       # Architecture & design docs
├── client_docs/                # Mintlify documentation site
└── infra/                      # All deployable services
    ├── shared/                 # Go shared library (models, auth, blobstore, telemetry)
    ├── sandbox-manager/        # REST API (Helm-deployed, port 8080)
    ├── node-collector/         # Worker state repo (Helm-deployed, port 8090)
    ├── orchestrator/           # Firecracker VM manager (bare metal, port 9090)
    │   └── envd/               # In-VM daemon — baked into rootfs
    ├── guardrails/             # ML security scanner (Helm-deployed, port 8000)
    ├── mock-guardrails/        # Regex-based guardrails drop-in
    ├── postgres/               # PostgreSQL (Helm-deployed)
    ├── redis/                  # Redis cache (Helm-deployed)
    └── service-discovery/      # Consul configs (bare metal)
```

## Key design decisions

**Firecracker microVMs, not containers.** Docker containers share the host kernel, so a container-escape exploit gives an attacker access to the host. Firecracker microVMs have a hardware isolation boundary: each VM has its own kernel, memory space, and I/O devices. A compromised sandbox cannot escape to the host or to other sandboxes.

**Per-sandbox network namespaces.** Each sandbox gets its own Linux network namespace with a dedicated veth pair and TAP device. Sandboxes cannot see each other's network traffic, cannot reach each other's IPs, and cannot intercept host-level network interfaces.

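The namespace plumbing just described maps onto a handful of `ip` commands. The sketch below only builds those commands as strings; the interface-naming scheme is an assumption, and the orchestrator's real naming and addressing may differ.

```python
def netns_commands(sandbox_id: str) -> list:
    """Return the `ip` commands implied by per-sandbox network isolation (illustrative)."""
    ns = f"sbx-{sandbox_id}"
    veth_host = f"veth-{sandbox_id}-h"
    veth_ns = f"veth-{sandbox_id}-n"
    tap = f"tap-{sandbox_id}"
    return [
        f"ip netns add {ns}",                                      # isolated namespace
        f"ip link add {veth_host} type veth peer name {veth_ns}",  # dedicated veth pair
        f"ip link set {veth_ns} netns {ns}",                       # move one end inside
        f"ip netns exec {ns} ip tuntap add dev {tap} mode tap",    # TAP device for the microVM
        f"ip netns exec {ns} ip link set {tap} up",
    ]

for cmd in netns_commands("a1b2"):
    print(cmd)
```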
**Security proxy inside the VM.** The security proxy runs inside each Firecracker VM, not on the host, so policy enforcement is co-located with the workload and cannot be bypassed by the orchestrator or other components. The per-sandbox CA certificate is generated fresh for each sandbox and injected into the VM trust store at boot.

**Copy-on-write rootfs.** The base rootfs image is read-only and shared across all sandboxes; each sandbox gets a writable overlay layer (an ext4 image) on top. This makes sandbox creation fast (no full copy) and guarantees filesystem isolation. Destroying a sandbox deletes only the overlay layer.

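That layout can be sketched as a per-sandbox plan: one shared read-only base image plus a small writable overlay that is created on boot and removed on destroy. The paths and the sparse-file creation commands are illustrative assumptions, not the orchestrator's actual implementation.

```python
from pathlib import PurePosixPath

# Shared, read-only base image (hypothetical path): never copied, never written.
BASE_ROOTFS = PurePosixPath("/var/lib/declaw/rootfs/base.ext4")

def overlay_plan(sandbox_id: str, size_mb: int = 512) -> dict:
    """Describe the copy-on-write layout for one sandbox (illustrative)."""
    overlay = PurePosixPath(f"/var/lib/declaw/overlays/{sandbox_id}.ext4")
    return {
        "base": str(BASE_ROOTFS),
        "overlay": str(overlay),
        "create": [
            f"truncate -s {size_mb}M {overlay}",  # sparse file: creation is fast, no full copy
            f"mkfs.ext4 -q {overlay}",            # writable per-sandbox layer
        ],
        "destroy": [f"rm -f {overlay}"],          # destroying a sandbox removes only this
    }

plan = overlay_plan("a1b2")
print(plan["overlay"])
```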
**Fail-closed proxy startup.** If the security proxy fails to start or to configure iptables, sandbox creation fails with an error. There is no "log and continue" path: a sandbox without a functioning security proxy is considered unsafe and never reaches the running state.
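The fail-closed rule amounts to letting proxy errors propagate out of sandbox creation. The names below (`ProxyError`, the start/configure hooks) are hypothetical; only the behavior, creation fails rather than continuing without the proxy, comes from the text above.

```python
class ProxyError(RuntimeError):
    """Raised when the in-VM security proxy cannot start or program iptables."""

def create_sandbox(start_proxy, configure_iptables) -> str:
    start_proxy()          # any failure propagates: there is no log-and-continue path
    configure_iptables()   # iptables rules must be in place before "running"
    return "running"

def broken_proxy():
    raise ProxyError("proxy failed to start")

state = None
try:
    state = create_sandbox(broken_proxy, lambda: None)
except ProxyError:
    state = "failed"       # the sandbox never reaches the running state
print(state)
```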

## Architecture sub-pages

| Page | What it covers |
| --- | --- |
| Firecracker | MicroVM internals: rootfs, TAP networking, boot process, envd |
| Security Proxy | MITM TLS, certificate generation, scanning pipeline |
| Packet Flow | Network packet diagrams: iptables, TCP proxy, HTTP/HTTPS flows |