Every sandbox has an independent ext4 rootfs — a copy of the base image. Writes in one sandbox never affect another. The filesystem API lets you read and write files, list directories, watch for changes, and upload or download data.
Write a file
```python
# Write bytes
sbx.files.write("/workspace/hello.py", b"print('hello')")

# Write a string
sbx.files.write("/workspace/config.json", '{"key": "value"}')
```
```javascript
await sbx.files.write('/workspace/hello.py', "print('hello')");
```
Write multiple files at once
write_files() uploads multiple files in a single request.
```python
from declaw import WriteEntry

sbx.files.write_files([
    WriteEntry(path="/workspace/main.py", data=b"print('main')"),
    WriteEntry(path="/workspace/utils.py", data=b"def helper(): pass"),
    WriteEntry(path="/workspace/config.yaml", data=b"debug: true"),
])
```
```javascript
await sbx.files.writeFiles([
  { path: '/workspace/main.py', data: "print('main')" },
  { path: '/workspace/utils.py', data: 'def helper(): pass' },
]);
```
WriteEntry model
| Field | Type | Description |
|---|---|---|
| `path` | `str` | Absolute path inside the sandbox |
| `data` | `bytes \| str` | File content |
| `user` | `str \| None` | User to write as (default: `root`) |
WriteInfo model
write() returns a WriteInfo with metadata about the written file.
| Field | Type | Description |
|---|---|---|
| `path` | `str` | Absolute path of the written file |
| `size` | `int` | Bytes written |
Read a file
```python
# Read as bytes (default)
content = sbx.files.read("/workspace/output.txt")
print(content)  # b'...'

# Read as text
text = sbx.files.read("/workspace/output.txt", format="text")
print(text)  # '...'
```
```javascript
const content = await sbx.files.read('/workspace/output.txt');
console.log(content);
```
List a directory
```python
entries = sbx.files.list("/workspace")
for entry in entries:
    print(entry.name, entry.type, entry.size)
```
```javascript
const entries = await sbx.files.list('/workspace');
for (const entry of entries) {
  console.log(entry.name, entry.type, entry.size);
}
```
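The returned entries are plain objects, so filtering and sorting happen client-side. A minimal sketch that picks out the largest files in a listing, using dicts in place of the SDK's entry objects (in practice `entries` would come from `sbx.files.list("/workspace")`):

```python
# Illustrative stand-ins for listing results; field names match the
# EntryInfo model (name, type, size).
entries = [
    {"name": "data.csv", "type": "file", "size": 5_242_880},
    {"name": "results", "type": "dir", "size": 0},
    {"name": "notes.txt", "type": "file", "size": 1_024},
]

# Skip directories, then rank remaining files by size, largest first.
largest = sorted(
    (e for e in entries if e["type"] == "file"),
    key=lambda e: e["size"],
    reverse=True,
)
for e in largest:
    print(e["name"], e["size"])
```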
EntryInfo model
| Field | Type | Description |
|---|---|---|
| `name` | `str` | Filename or directory name |
| `type` | `FileType` | `file` or `dir` |
| `size` | `int` | Size in bytes (0 for directories) |
| `path` | `str` | Full absolute path |
| `modified` | `datetime` | Last modification time |
Check if a path exists
```python
if sbx.files.exists("/workspace/output.csv"):
    data = sbx.files.read("/workspace/output.csv")
```
```javascript
if (await sbx.files.exists('/workspace/output.csv')) {
  const data = await sbx.files.read('/workspace/output.csv');
}
```
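A common use of `exists()` is waiting for a job to produce its output file. A minimal polling helper, sketched with a generic predicate; the commented `sbx` calls show how it would be wired up in practice:

```python
import time


def wait_for(predicate, timeout=30.0, interval=0.5):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    Returns True if the predicate succeeded, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


# In practice:
# if wait_for(lambda: sbx.files.exists("/workspace/output.csv")):
#     data = sbx.files.read("/workspace/output.csv")
```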
Get file info
```python
info = sbx.files.get_info("/workspace/model.pkl")
print(info.size)      # 4194304 (bytes)
print(info.modified)  # 2024-01-15T10:30:00Z
print(info.type)      # FileType.file
```
```javascript
const info = await sbx.files.getInfo('/workspace/model.pkl');
console.log(info.size);
console.log(info.type);
```
Create a directory
```python
sbx.files.make_dir("/workspace/results")
```

```javascript
await sbx.files.makeDir('/workspace/results');
```
Rename or move a file
```python
sbx.files.rename("/workspace/temp.csv", "/workspace/final.csv")
```

```javascript
await sbx.files.rename('/workspace/temp.csv', '/workspace/final.csv');
```
Remove a file or directory
```python
sbx.files.remove("/workspace/temp.txt")

# Remove directory (recursive)
sbx.files.remove("/workspace/old-results")
```

```javascript
await sbx.files.remove('/workspace/temp.txt');
```
Watch a directory for changes
watch_dir() / watchDir() registers a watcher on the directory and returns
a WatchHandle. The handle buffers FilesystemEvent objects internally —
drain them with get_new_events() / getNewEvents(), and call stop() when
you’re done.
```python
import time

from declaw import FilesystemEventType

handle = sbx.files.watch_dir("/workspace")
sbx.commands.run("touch /workspace/output.txt")

# Poll the buffered events
time.sleep(0.5)
for event in handle.get_new_events():
    if event.type == FilesystemEventType.create:
        print(f"Created: {event.path}")

handle.stop()
```
```javascript
const handle = await sbx.files.watchDir('/workspace');
await sbx.commands.run('touch /workspace/output.txt');

// Poll the buffered events
await new Promise((r) => setTimeout(r, 500));
for (const event of handle.getNewEvents()) {
  console.log(event.type, event.path);
}
handle.stop();
```
Full SSE streaming from envd into the WatchHandle buffer is still landing.
The current release registers the watcher server-side and exposes the poll
API; events may not populate until the streaming change ships.
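Because events arrive asynchronously, a single poll can miss them. A small helper that drains the buffer repeatedly over a fixed window can make the examples above more robust. A sketch; `get_new_events` is any zero-argument callable returning a list, in practice `handle.get_new_events`:

```python
import time


def drain_events(get_new_events, duration=2.0, interval=0.2):
    """Poll for `duration` seconds, collecting everything the buffer yields.

    `get_new_events` is a zero-argument callable returning a list of
    events (e.g. handle.get_new_events from watch_dir).
    """
    collected = []
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        collected.extend(get_new_events())
        time.sleep(interval)
    return collected


# In practice:
# handle = sbx.files.watch_dir("/workspace")
# events = drain_events(handle.get_new_events, duration=2.0)
# handle.stop()
```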
FilesystemEvent model
| Field | Type | Description |
|---|---|---|
| `type` | `FilesystemEventType` | `create`, `modify`, or `delete` |
| `path` | `str` | Absolute path of the changed file |
| `timestamp` | `datetime` | When the event occurred |
Upload and download patterns
Upload a local file to the sandbox
```python
with open("local_dataset.csv", "rb") as f:
    sbx.files.write("/workspace/dataset.csv", f.read())

result = sbx.commands.run("python3 analyze.py /workspace/dataset.csv")
```
Download a file from the sandbox
```python
import json

# Run a job that produces output
sbx.commands.run("python3 -c \"import json; json.dump({'result': 42}, open('/workspace/out.json','w'))\"")

# Read results back to the host
content = sbx.files.read("/workspace/out.json")
data = json.loads(content)
print(data)  # {'result': 42}
```
Upload multiple files efficiently
```python
import os

from declaw import WriteEntry

files = []
for fname in os.listdir("./scripts"):
    with open(f"./scripts/{fname}", "rb") as f:
        files.append(WriteEntry(
            path=f"/workspace/scripts/{fname}",
            data=f.read(),
        ))

sbx.files.write_files(files)
```
The write_files() batch call is more efficient than calling write() in a loop. It sends all files in a single HTTP request.
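The loop above only handles a flat directory. For a nested tree, `os.walk` can build the batch while preserving relative paths. A sketch that collects `(remote_path, data)` pairs; the commented lines show how they would map onto `WriteEntry` and `write_files()`:

```python
import os


def collect_tree(local_root, remote_root):
    """Walk `local_root` and pair each file's contents with its
    destination path under `remote_root` (POSIX-style separators)."""
    pairs = []
    for dirpath, _dirnames, filenames in os.walk(local_root):
        for fname in filenames:
            local_path = os.path.join(dirpath, fname)
            rel = os.path.relpath(local_path, local_root)
            remote_path = remote_root + "/" + rel.replace(os.sep, "/")
            with open(local_path, "rb") as f:
                pairs.append((remote_path, f.read()))
    return pairs


# In practice:
# entries = [WriteEntry(path=p, data=d)
#            for p, d in collect_tree("./scripts", "/workspace/scripts")]
# sbx.files.write_files(entries)
```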
Streaming upload and download
sbx.files.read() and sbx.files.write() buffer the whole payload in memory, which is fine up to ~10 MiB. For larger files — model weights, datasets, snapshot archives — use the raw streaming endpoints. sbx.upload_url() and sbx.download_url() return path-based URLs under api.declaw.ai that accept binary bodies up to 500 MiB, streamed end-to-end.
Requests must include your X-API-Key header. The URLs themselves embed no credentials, so a URL alone grants no access; they are meant for your own processes (CI jobs, local scripts, agents) that already hold the key, and there is no presigned form you can safely hand to third parties.
```python
# PUT a large binary to the sandbox
upload_url = sbx.upload_url("/workspace/model.bin")
# Send the bytes with curl, requests, or any HTTP client.

# GET the file back
download_url = sbx.download_url("/workspace/output.zip")
```
Example binary upload with curl:
```shell
curl -X PUT "$UPLOAD_URL" \
  -H "X-API-Key: $DECLAW_API_KEY" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @model.bin
```
Example download:
```shell
curl "$DOWNLOAD_URL" \
  -H "X-API-Key: $DECLAW_API_KEY" \
  -o output.zip
```
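After a large transfer it is worth verifying integrity end-to-end. A minimal sketch that hashes a file in chunks (so large files never sit fully in memory) and compares SHA-256 digests of the local source and the downloaded copy; the file names in the comment are illustrative:

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Compute a file's SHA-256 incrementally, 1 MiB at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# After uploading model.bin and downloading it back:
# assert sha256_of("model.bin") == sha256_of("downloaded_model.bin")
```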