What You’ll Learn
- How to create one sandbox per agent for maximum isolation
- How to pass data between agents by reading from one sandbox and writing to the next
- How to verify that agents cannot access each other’s files
- The orchestrator-mediated data flow pattern for multi-agent systems
Prerequisites
- Declaw running locally or in the cloud (see Deployment)
- DECLAW_API_KEY and DECLAW_DOMAIN set in your environment
This example is available in Python. TypeScript support coming soon.
Pipeline Architecture
Orchestrator (host process)
│
├─── Creates sbx1 (Agent 1: Collector)
│       Runs COLLECTOR_SCRIPT
│       Reads /home/user/output.json ──────────────── raw_data
│
├─── Creates sbx2 (Agent 2: Processor)
│       Writes raw_data → /home/user/input.json
│       Runs PROCESSOR_SCRIPT
│       Reads /home/user/output.json ──────────────── processed_data
│
└─── Creates sbx3 (Agent 3: Reporter)
        Writes processed_data → /home/user/input.json
        Runs REPORTER_SCRIPT
        Reads /home/user/output.txt ───────────────── final_report
Sandboxes never communicate with each other. The orchestrator reads the output of one sandbox and supplies it as the input to the next.
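Stripped of the sandbox machinery, this data flow is just a fold over the list of agents. The sketch below is purely illustrative: plain functions stand in for sandboxed agents, and `run_pipeline` is a hypothetical helper, not part of the Declaw API.

```python
# Hedged sketch of orchestrator-mediated data flow: each stage only ever
# sees what the orchestrator hands it, never another stage's workspace.
# Plain functions stand in for sandboxed agents here.
def collector(_):
    return [1, 2, 3]

def processor(data):
    return sum(data)

def reporter(total):
    return f"total={total}"

def run_pipeline(stages, initial=None):
    data = initial
    for stage in stages:
        # The orchestrator reads one stage's output and supplies it
        # as the next stage's input; stages never call each other.
        data = stage(data)
    return data

print(run_pipeline([collector, processor, reporter]))  # total=6
```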
Code Walkthrough
Agent scripts
Each agent is a self-contained Python script uploaded to its sandbox:
Agent 1 — Data Collector generates 20 sales records and writes them to output.json:
COLLECTOR_SCRIPT = """\
import json, random

random.seed(42)
products = ["Laptop", "Phone", "Tablet", "Monitor", "Keyboard"]
regions = ["North", "South", "East", "West"]

records = []
for i in range(20):
    records.append({
        "id": i + 1,
        "product": random.choice(products),
        "region": random.choice(regions),
        "quantity": random.randint(1, 50),
        "unit_price": round(random.uniform(10.0, 500.0), 2),
        "returned": random.random() < 0.15,
    })

with open("/home/user/output.json", "w") as f:
    json.dump(records, f, indent=2)

print(f"Collected {len(records)} sales records")
"""
Agent 2 — Data Processor filters out returned items, computes per-record revenue, and aggregates by product:
PROCESSOR_SCRIPT = """\
import json
from collections import defaultdict

with open("/home/user/input.json") as f:
    records = json.load(f)

valid_records = [r for r in records if not r["returned"]]
for r in valid_records:
    r["revenue"] = round(r["quantity"] * r["unit_price"], 2)

product_stats = defaultdict(lambda: {"quantity": 0, "revenue": 0.0, "count": 0})
for r in valid_records:
    ps = product_stats[r["product"]]
    ps["quantity"] += r["quantity"]
    ps["revenue"] += r["revenue"]
    ps["count"] += 1

output = {
    "total_records": len(records),
    "valid_records": len(valid_records),
    "returned_count": len(records) - len(valid_records),
    "total_revenue": round(sum(r["revenue"] for r in valid_records), 2),
    "product_stats": dict(product_stats),
}

with open("/home/user/output.json", "w") as f:
    json.dump(output, f, indent=2)

print(f"Processed {len(valid_records)} valid records ({len(records) - len(valid_records)} returned filtered)")
"""
Agent 3 — Report Generator reads the processed data and produces a formatted text report.
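The reporter script is not reproduced in full in this walkthrough. A minimal sketch consistent with the processor's output schema above might look like the following; the shipped script's exact formatting may differ.

```python
# Hedged sketch of REPORTER_SCRIPT, assuming the processor's output schema
# shown above (total_records, valid_records, returned_count, total_revenue,
# product_stats). The real script's layout may differ.
REPORTER_SCRIPT = """\
import json

with open("/home/user/input.json") as f:
    data = json.load(f)

lines = []
lines.append("=" * 50)
lines.append("SALES ANALYSIS REPORT")
lines.append("=" * 50)
lines.append(f"Total records analyzed: {data['total_records']}")
lines.append(f"Valid sales: {data['valid_records']}")
lines.append(f"Returned items: {data['returned_count']}")
lines.append(f"Total revenue: ${data['total_revenue']:,.2f}")
lines.append("-" * 50)
lines.append("REVENUE BY PRODUCT")
lines.append("-" * 50)
for product, ps in sorted(data["product_stats"].items()):
    lines.append(
        f"{product:<10} qty={ps['quantity']:>4} "
        f"revenue=${ps['revenue']:>9,.2f} orders={ps['count']}"
    )

with open("/home/user/output.txt", "w") as f:
    f.write("\\n".join(lines) + "\\n")

print("Report generated successfully")
"""
```

Because it reads `input.json` and writes `output.txt`, it slots into the same `run_agent` helper as the other two scripts.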
The run_agent helper
A shared helper handles the upload-run-read cycle for each agent:
def run_agent(name: str, sbx: Sandbox, script: str, input_data: str | None = None) -> str:
    if input_data is not None:
        sbx.files.write("/home/user/input.json", input_data)
    sbx.files.write("/home/user/agent.py", script)
    result = sbx.commands.run("python3 /home/user/agent.py 2>&1")
    print(f"  {name}: {result.stdout.strip()}")
    # Agents write to output.json or output.txt
    try:
        return sbx.files.read("/home/user/output.json")
    except Exception:
        return sbx.files.read("/home/user/output.txt")
Orchestrating the pipeline
from declaw import Sandbox
sbx1 = Sandbox.create(template="python", timeout=300)
sbx2 = Sandbox.create(template="python", timeout=300)
sbx3 = Sandbox.create(template="python", timeout=300)
try:
    raw_data = run_agent("Data Collector", sbx1, COLLECTOR_SCRIPT)
    processed_data = run_agent("Data Processor", sbx2, PROCESSOR_SCRIPT, raw_data)
    report = run_agent("Report Generator", sbx3, REPORTER_SCRIPT, processed_data)
    print(report)

    # Verify isolation: Agent 1 never received input.json
    check = sbx1.commands.run(
        "python3 -c \"import os; print(os.path.exists('/home/user/input.json'))\""
    )
    print(f"Agent 1 has input.json from Agent 2? {check.stdout.strip()}")  # False
finally:
    sbx1.kill()
    sbx2.kill()
    sbx3.kill()
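The try/finally above guarantees all three sandboxes are killed even if a stage raises. With a variable number of agents, `contextlib.ExitStack` gives the same guarantee without hand-written cleanup. A self-contained sketch, where the hypothetical `StubSandbox` stands in for `Sandbox`:

```python
from contextlib import ExitStack

# Hedged sketch: ExitStack replaces the explicit try/finally and still
# kills every sandbox if an agent fails mid-pipeline. StubSandbox is a
# hypothetical stand-in for Sandbox so the sketch runs anywhere.
class StubSandbox:
    killed = []

    def __init__(self, name):
        self.name = name

    def kill(self):
        StubSandbox.killed.append(self.name)

def run_pipeline(names):
    with ExitStack() as stack:
        sandboxes = []
        for name in names:
            sbx = StubSandbox(name)
            stack.callback(sbx.kill)  # registered cleanups run LIFO on exit
            sandboxes.append(sbx)
        raise RuntimeError("agent failed mid-pipeline")

try:
    run_pipeline(["sbx1", "sbx2", "sbx3"])
except RuntimeError:
    pass

print(StubSandbox.killed)  # all three killed, last-created first
```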
Expected Output
--- Creating Agent Sandboxes ---
Agent 1 (Collector): sbx-aaa111
Agent 2 (Processor): sbx-bbb222
Agent 3 (Reporter): sbx-ccc333
--- Running Pipeline ---
Data Collector: Collected 20 sales records
Data Processor: Processed 17 valid records (3 returned filtered)
Report Generator: Report generated successfully
==================================================
SALES ANALYSIS REPORT
==================================================
Total records analyzed: 20
Valid sales: 17
Returned items: 3
Total revenue: $21,483.62
--------------------------------------------------
REVENUE BY PRODUCT
--------------------------------------------------
Keyboard qty= 82 revenue=$ 4,218.50 orders=4
Laptop qty= 67 revenue=$ 8,942.10 orders=3
Monitor qty= 115 revenue=$ 5,103.22 orders=4
Phone qty= 48 revenue=$ 2,191.80 orders=3
Tablet qty= 39 revenue=$ 1,028.00 orders=3
--- Verifying Isolation ---
Agent 1 has input.json from Agent 2? False
Isolation Guarantees
Each sandbox is a separate Firecracker microVM with its own:
- Filesystem — Agent 1 cannot read Agent 2’s files or vice versa
- Process tree — Agents cannot list or signal each other’s processes
- Network namespace — Agents cannot connect to each other’s ports
This makes the pattern safe for untrusted agent code: even if an agent is compromised, it cannot reach the other agents or the host.
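The verification step earlier only exercises the filesystem guarantee. The same upload-and-run technique could probe the network namespace; the sketch below is hedged, with a purely hypothetical peer address, and would be run via `sbx.commands.run` exactly like the agent scripts.

```python
# Hedged sketch: a network-isolation probe, uploaded and run inside a
# sandbox the same way the agent scripts are. The address below is a
# hypothetical placeholder; microVMs have no route to each other, so the
# connection attempt should fail.
NETWORK_CHECK = """\
import socket

s = socket.socket()
s.settimeout(2)
try:
    s.connect(("10.0.0.2", 8000))  # hypothetical address of a sibling sandbox
    print("reachable")
except OSError:
    print("unreachable")
finally:
    s.close()
"""

# Syntax-check the probe before uploading it, as a cheap sanity test.
compile(NETWORK_CHECK, "network_check.py", "exec")
print("network check script compiles")
```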