What You’ll Learn
- How to write an agent script into a sandbox filesystem using sbx.files.write()
- How to upload a task definition (JSON) for the agent to consume
- How to run the agent with sbx.commands.run() and capture its output
- How to read structured results back from the sandbox after execution
- The core pattern of isolating autonomous code execution inside a microVM
Prerequisites
- Declaw running locally or in the cloud (see Deployment)
- DECLAW_API_KEY and DECLAW_DOMAIN set in your environment
This example uses Python only. The agent script itself is a plain Python string uploaded to the sandbox — no external agent framework is required.
Code Walkthrough
1. Define the agent script
The agent script is a Python string defined in the outer (host) process. It will be written into the sandbox and executed there. It reads a task file, runs each shell command in the task, and writes structured results to /tmp/result.json.
```python
import json
import textwrap

AGENT_SCRIPT = textwrap.dedent("""\
    import json
    import subprocess

    TASK_FILE = "/tmp/task.json"
    RESULT_FILE = "/tmp/result.json"

    with open(TASK_FILE) as f:
        task = json.load(f)

    print(f"Agent received task: {task['name']}")
    print(f"Description: {task['description']}")

    results = []
    for i, step in enumerate(task["steps"], 1):
        print(f"Running step {i}: {step}")
        proc = subprocess.run(
            step, shell=True, capture_output=True, text=True, timeout=30
        )
        results.append({
            "step": i,
            "command": step,
            "stdout": proc.stdout.strip(),
            "exit_code": proc.returncode,
        })

    output = {"task_name": task["name"], "status": "completed", "step_results": results}
    with open(RESULT_FILE, "w") as f:
        json.dump(output, f, indent=2)

    print(f"Agent finished. Results written to {RESULT_FILE}")
    """)
```
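Because the agent is plain Python with no sandbox-specific dependencies, its logic can be smoke-tested on the host before uploading. The sketch below uses a condensed copy of the agent logic (same /tmp file protocol as AGENT_SCRIPT above) so it is self-contained; running commands locally defeats the isolation, so only do this with trusted steps:

```python
import json
import subprocess
import textwrap

# Condensed copy of the agent logic so this sketch is self-contained.
agent_src = textwrap.dedent("""\
    import json, subprocess
    with open("/tmp/task.json") as f:
        task = json.load(f)
    results = []
    for i, step in enumerate(task["steps"], 1):
        proc = subprocess.run(step, shell=True, capture_output=True, text=True)
        results.append({"step": i, "command": step,
                        "stdout": proc.stdout.strip(),
                        "exit_code": proc.returncode})
    with open("/tmp/result.json", "w") as f:
        json.dump({"task_name": task["name"], "status": "completed",
                   "step_results": results}, f)
    """)

# Write a one-step task, run the agent code in-process, read the result back.
with open("/tmp/task.json", "w") as f:
    json.dump({"name": "smoke-test", "steps": ["echo hello"]}, f)

exec(compile(agent_src, "agent.py", "exec"), {})

with open("/tmp/result.json") as f:
    result = json.load(f)
print(result["step_results"][0]["stdout"])  # -> hello
```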
2. Define the task
The task is a Python dict that will be serialized to JSON and uploaded alongside the agent:
```python
TASK_PAYLOAD = {
    "name": "system-info-gathering",
    "description": "Collect basic system information inside the sandbox",
    "steps": [
        "uname -a",
        "python3 --version",
        "whoami",
        "ls /tmp",
    ],
}
```
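Since the agent assumes "name" and "steps" exist and that every step is a string, it can help to check the payload shape before serializing. The validate_task helper below is hypothetical (not part of the Declaw SDK), shown only as a sketch:

```python
import json

def validate_task(task: dict) -> None:
    # Hypothetical helper: ensure the payload matches what the
    # agent script expects before it is uploaded.
    assert isinstance(task.get("name"), str) and task["name"]
    assert isinstance(task.get("steps"), list) and task["steps"]
    assert all(isinstance(s, str) for s in task["steps"])

TASK_PAYLOAD = {
    "name": "system-info-gathering",
    "description": "Collect basic system information inside the sandbox",
    "steps": ["uname -a", "python3 --version", "whoami", "ls /tmp"],
}

validate_task(TASK_PAYLOAD)
task_json = json.dumps(TASK_PAYLOAD, indent=2)
print(len(json.loads(task_json)["steps"]))  # -> 4
```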
3. Create the sandbox and upload files
```python
sbx = Sandbox.create(template="python", timeout=300)
try:
    # Upload the agent script
    sbx.files.write("/tmp/agent.py", AGENT_SCRIPT)

    # Upload the task definition
    task_json = json.dumps(TASK_PAYLOAD, indent=2)
    sbx.files.write("/tmp/task.json", task_json)
```
4. Run the agent and read results
```python
    # Execute the agent inside the isolated sandbox
    # (continues the try block opened in step 3)
    result = sbx.commands.run("python3 /tmp/agent.py", timeout=60)
    print(result.stdout)

    # Read structured output back to the host process
    result_content = sbx.files.read("/tmp/result.json")
    results = json.loads(result_content)
    print(f"Task: {results['task_name']}")
    print(f"Status: {results['status']}")
    for step in results["step_results"]:
        print(f"Step {step['step']}: {step['command']}")
        print(f"  stdout: {step['stdout']}")
        print(f"  exit_code: {step['exit_code']}")
finally:
    sbx.kill()
```
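Note that the agent reports "status": "completed" whenever it finishes the loop, even if individual commands failed; the host should inspect the per-step exit codes itself. A sketch, using an assumed sample of the /tmp/result.json contents (in the walkthrough this comes from sbx.files.read):

```python
# Assumed sample of the agent's result payload; one step fails.
results = {
    "task_name": "system-info-gathering",
    "status": "completed",
    "step_results": [
        {"step": 1, "command": "uname -a", "stdout": "Linux", "exit_code": 0},
        {"step": 2, "command": "badcmd", "stdout": "", "exit_code": 127},
    ],
}

# "completed" only means the agent ran every step; check exit codes too.
failed = [s for s in results["step_results"] if s["exit_code"] != 0]
overall = "ok" if not failed else f"{len(failed)} step(s) failed"
print(overall)  # -> 1 step(s) failed
```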
Expected Output
```
--- Creating Sandbox ---
Sandbox created: sbx-abc123

--- Uploading Agent Script ---
Wrote /tmp/agent.py

--- Uploading Task ---
Task: system-info-gathering
Steps: 4

--- Running Agent ---
Agent stdout:
Agent received task: system-info-gathering
Description: Collect basic system information inside the sandbox
Running step 1: uname -a
Running step 2: python3 --version
Running step 3: whoami
Running step 4: ls /tmp
Agent finished. Results written to /tmp/result.json
Exit code: 0

--- Reading Agent Results ---
Task: system-info-gathering
Status: completed
Step 1: uname -a
  stdout: Linux ... x86_64 GNU/Linux
  exit_code: 0
Step 2: python3 --version
  stdout: Python 3.x.x
  exit_code: 0
```
Key Pattern
The outer script orchestrates; the agent script executes. This separation means:
- The host process controls what the agent can do (via the task definition)
- Agent code never runs on the host machine — only inside the isolated microVM
- Results are returned by reading files from the sandbox, keeping the interface clean
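Because the host controls the task definition, it can also constrain what the agent will run. The sketch below is illustrative only (ALLOWED_COMMANDS and filter_steps are hypothetical, not part of the Declaw SDK): reject any step whose binary is not on an explicit allowlist before uploading the task.

```python
# Hypothetical allowlist of binaries the agent is permitted to invoke.
ALLOWED_COMMANDS = {"uname", "python3", "whoami", "ls"}

def filter_steps(steps):
    # Keep only steps whose first token is an allowed binary.
    return [s for s in steps if s.split()[0] in ALLOWED_COMMANDS]

steps = ["uname -a", "rm -rf /", "whoami"]
print(filter_steps(steps))  # -> ['uname -a', 'whoami']
```

Even with the microVM providing isolation, filtering on the host keeps destructive commands out of the task payload in the first place.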