What You’ll Learn
- How to start a long-running server process as a background subprocess inside the sandbox
- How to use urllib from inside the sandbox to exercise the server’s API
- How to poll for server readiness before running tests
- How to verify process cleanup after the server stops
Prerequisites
- Declaw running locally or in the cloud (see Deployment)
- DECLAW_API_KEY and DECLAW_DOMAIN set in your environment
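Both variables can be exported in your shell before running the example. The values below are placeholders, not real credentials:

```shell
# Placeholder values -- substitute your own key and domain
export DECLAW_API_KEY="your-api-key"
export DECLAW_DOMAIN="api.declaw.ai"
```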
This example is available in Python. TypeScript support coming soon.
Code Walkthrough
1. Define the server
The server uses only Python’s stdlib http.server and json modules — no pip installs needed:
```python
SERVER_SCRIPT = """\
import json
from http.server import HTTPServer, BaseHTTPRequestHandler

items = {}
next_id = 1

class APIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self._respond(200, {"status": "ok"})
        elif self.path == "/items":
            self._respond(200, {"items": list(items.values())})
        elif self.path.startswith("/items/"):
            item_id = self.path.split("/")[-1]
            if item_id in items:
                self._respond(200, items[item_id])
            else:
                self._respond(404, {"error": "not found"})
        else:
            self._respond(404, {"error": "not found"})

    def do_POST(self):
        global next_id
        if self.path == "/items":
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length)) if length else {}
            item = {"id": str(next_id), "name": body.get("name", "unnamed")}
            items[str(next_id)] = item
            next_id += 1
            self._respond(201, item)
        else:
            self._respond(404, {"error": "not found"})

    def _respond(self, status, data):
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

    def log_message(self, format, *args):
        pass  # Suppress default request logging

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8000), APIHandler)
    print("Server running on port 8000", flush=True)
    server.serve_forever()
"""
```
2. Define the client script
The client starts the server as a subprocess, waits for readiness, exercises the API, then shuts down:
```python
CLIENT_SCRIPT = """\
import json
import subprocess
import sys
import time
import urllib.request

# Start the server as a background subprocess
server_proc = subprocess.Popen(
    [sys.executable, "/home/user/server.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

BASE = "http://127.0.0.1:8000"

def get(path):
    req = urllib.request.Request(f"{BASE}{path}")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read().decode())

def post(path, data):
    body = json.dumps(data).encode()
    req = urllib.request.Request(
        f"{BASE}{path}", data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read().decode())

# Wait for the server to start; bail out if it never becomes ready
print("Waiting for server to start...")
for attempt in range(10):
    try:
        get("/health")
        print("Server is ready!")
        break
    except Exception:
        time.sleep(0.5)
else:
    server_proc.kill()
    sys.exit("Server did not become ready in time")

# Exercise the API
print("\\n1. Health check:")
print(f"   {get('/health')}")

print("\\n2. Creating items:")
item1 = post("/items", {"name": "Widget Alpha"})
print(f"   Created: {item1}")
item2 = post("/items", {"name": "Widget Beta"})
print(f"   Created: {item2}")

print("\\n3. Listing all items:")
print(f"   {get('/items')}")

print("\\n4. Getting item 1:")
print(f"   {get('/items/1')}")

# Shut the server down and reap the process
server_proc.kill()
server_proc.wait()
print("\\nServer stopped. All tests passed!")
"""
```
3. Upload and run from the orchestrator
The outer Python script uploads both files and runs the client (which manages the server internally):
```python
from declaw import Sandbox

sbx = Sandbox.create(template="python", timeout=300)
try:
    sbx.files.write("/home/user/server.py", SERVER_SCRIPT)
    sbx.files.write("/home/user/client.py", CLIENT_SCRIPT)

    result = sbx.commands.run("python3 /home/user/client.py", timeout=30)
    print(result.stdout)
    if result.exit_code != 0:
        print(f"stderr: {result.stderr}")

    # Verify cleanup
    processes = sbx.commands.list()
    if not processes:
        print("No running processes (all cleaned up).")
finally:
    sbx.kill()
```
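The cleanup check relies on Declaw’s sbx.commands.list(). The same idea can be verified locally with the stdlib, where Popen.poll() returns None while a child is alive and its return code once the process has been reaped:

```python
import subprocess
import sys

# Spawn a child that would run for a long time if left alone
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
assert proc.poll() is None  # still running

proc.kill()
proc.wait()  # reap the child so no zombie is left behind

print(proc.poll())  # non-None once reaped (e.g. -9 on POSIX after SIGKILL)
```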
Expected Output
```
--- Creating Sandbox ---
Sandbox created: sbx-abc123

--- Running Server + Client ---
Waiting for server to start...
Server is ready!

1. Health check:
   {'status': 'ok'}

2. Creating items:
   Created: {'id': '1', 'name': 'Widget Alpha'}
   Created: {'id': '2', 'name': 'Widget Beta'}

3. Listing all items:
   {'items': [{'id': '1', 'name': 'Widget Alpha'}, {'id': '2', 'name': 'Widget Beta'}]}

4. Getting item 1:
   {'id': '1', 'name': 'Widget Alpha'}

Server stopped. All tests passed!

--- Verifying Cleanup ---
No running processes (all cleaned up).
```
Adapting for MCP
This pattern directly applies to Model Context Protocol (MCP) servers. An MCP server is an HTTP or stdio API that exposes tools to an LLM agent. Running it inside a Declaw sandbox means:
- The MCP server’s filesystem access is isolated from the host
- Network egress from the MCP server can be restricted to a specific allowlist
- PII in the MCP server’s HTTP responses can be redacted before reaching the agent
The supported way to talk to an MCP server running inside a sandbox is stdio over the sandbox’s command/PTY APIs: start the server with sbx.commands.run("your-mcp-server --stdio", background=True) and drive it from an agent that proxies stdio through sbx.pty or sbx.commands.send_stdin. Per-sandbox public URLs (e.g. <port>-<id>.api.declaw.ai) are not part of the Declaw platform; path-based APIs under api.declaw.ai/sandboxes/<id>/... are the only customer-facing surface.
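The stdio pattern itself can be sketched independently of Declaw with the stdlib, using a toy echo process in place of a real MCP server: write one JSON message per line to the child’s stdin and read one JSON line back. The "tools/list" method name is illustrative only:

```python
import json
import subprocess
import sys

# Toy stdio "server": echoes each JSON line back with an added field
CHILD = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    msg = json.loads(line)\n"
    "    msg['ok'] = True\n"
    "    print(json.dumps(msg), flush=True)\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# One request/response round trip over stdio
proc.stdin.write(json.dumps({"method": "tools/list"}) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
print(reply)  # {'method': 'tools/list', 'ok': True}

proc.stdin.close()
proc.wait()
```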