What this section covers

End-to-end examples that wire the OpenAI Agents SDK to a declaw sandbox. Every bash, file, and PTY tool call the agent makes is dispatched through a declaw microVM with the platform's full security posture applied at the network edge: PII redaction, prompt-injection detection, per-sandbox domain allowlists, audit logging, and env-var masking.

Install

pip install "declaw[openai]"

Credentials

export DECLAW_API_KEY=dcl_...
export DECLAW_DOMAIN=api.declaw.ai
export OPENAI_API_KEY=sk-...

Import surface

Every declaw knob — sandbox config, security policy, network policy, lifecycle — is re-exported from declaw.openai so recipes import from a single place:
from declaw.openai import (
    DeclawSandboxClient,
    DeclawSandboxClientOptions,
    SecurityPolicy,
    PIIConfig,
    InjectionDefenseConfig,
    ToxicityConfig,
    CodeSecurityConfig,
    InvisibleTextConfig,
    NetworkPolicy,
    TransformationRule,
    SandboxLifecycle,
    SandboxNetworkOpts,
)
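A minimal wiring sketch built from the names on the import surface above. Every keyword argument and constructor shape here is an assumption inferred from those names, not a confirmed signature; treat it as a starting point and adjust to the SDK's actual API:

```python
from declaw.openai import (
    DeclawSandboxClient,
    DeclawSandboxClientOptions,
    SecurityPolicy,
    PIIConfig,
    InjectionDefenseConfig,
    NetworkPolicy,
)

# Hypothetical wiring: parameter names below mirror the re-exported
# config classes but are assumptions, not documented signatures.
options = DeclawSandboxClientOptions(
    security_policy=SecurityPolicy(
        pii=PIIConfig(rehydrate_response=True),  # redact outbound, restore inbound
        injection_defense=InjectionDefenseConfig(enabled=True),
    ),
    network_policy=NetworkPolicy(allow_out=["api.openai.com"]),  # default-deny otherwise
    envs={"DB_URL": "postgres://..."},  # real process env vars inside the microVM
)
client = DeclawSandboxClient(options)
```

The point of the single import surface is exactly this: one `from declaw.openai import ...` line pulls in the client plus every policy knob a recipe needs.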

Template coverage

Recipe | Template | What it shows
Data analyst | python | pandas + matplotlib + PII rehydration
Code reviewer | ai-agent | git clone, ruff, structured output, env-driven config
Customer support | base | Multi-agent handoffs, PII redact + rehydrate
Web scraper | python | Single-host network allowlist, BeautifulSoup
TypeScript API | node | Background server, curl, compile + run
DevOps audit | devops | Static checks, transformation rules, hadolint
ML training | code-interpreter | scikit-learn + matplotlib, zero-install
Custom transformations | python | End-to-end proof of regex-based directional rewrites at the edge proxy

Two layers of isolation every recipe relies on

  1. Filesystem isolation: every sandbox boots with a fresh /workspace overlay. Artifacts an agent writes (reports, logs, compiled binaries, trained models) disappear when the sandbox terminates. No host bleed-through, no scratch cleanup to manage.
  2. Environment isolation: envs={...} pushes key/value pairs into the microVM as real process env vars. The agent reads them with printenv — they never need to appear in the prompt, so secrets stay out of the LLM trace.
Plus the security posture enforced at the VM’s edge proxy:
  • PIIConfig(rehydrate_response=True) — redact PII on the way out, restore it on the way back in, so the agent code works unchanged while the upstream model never sees real PII.
  • NetworkPolicy(allow_out=[...]) — default-deny outbound; only the listed hosts reach the internet.
  • InjectionDefenseConfig(enabled=True) — flag prompt-injection attempts in HTTP bodies before they hit the upstream LLM.
  • TransformationRule(match=..., replace=...) — directional regex rewrites, e.g. redact AWS keys before they leave the VM.
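To make the last bullet concrete, here is a plain-Python sketch of what a directional rewrite does to an outbound body. The regex, the replacement token, and the `redact_outbound` helper are illustrative stand-ins for a `TransformationRule(match=..., replace=...)` applied at the edge proxy, not the proxy's actual implementation:

```python
import re

# Stand-in for an egress TransformationRule: match AWS access key IDs
# (AKIA followed by 16 uppercase alphanumerics) and rewrite them before
# the request leaves the VM.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def redact_outbound(body: str) -> str:
    """Apply the directional rewrite to an outbound request body."""
    return AWS_KEY_RE.sub("[REDACTED_AWS_KEY]", body)

redact_outbound("creds: AKIAIOSFODNN7EXAMPLE plus harmless text")
```

Because the rule is directional, inbound responses pass through untouched; only traffic leaving the VM is rewritten.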

Getting started

Start with the quickstart. Once that runs cleanly, any recipe above is a drop-in copy — each script is self-contained and under 150 lines.