Every sandbox has an independent ext4 rootfs — a copy of the base image. Writes in one sandbox never affect another. The filesystem API lets you read and write files, list directories, watch for changes, and upload or download data.

Write a file

# Write bytes
sbx.files.write("/workspace/hello.py", b"print('hello')")

# Write a string
sbx.files.write("/workspace/config.json", '{"key": "value"}')

Write multiple files at once

write_files() uploads multiple files in a single request.
from declaw import WriteEntry

sbx.files.write_files([
    WriteEntry(path="/workspace/main.py", data=b"print('main')"),
    WriteEntry(path="/workspace/utils.py", data=b"def helper(): pass"),
    WriteEntry(path="/workspace/config.yaml", data=b"debug: true"),
])

WriteEntry model

Field   Type          Description
path    str           Absolute path inside the sandbox
data    bytes | str   File content
user    str | None    User to write as (default: root)

WriteInfo model

write() returns a WriteInfo with metadata about the written file.
Field   Type   Description
path    str    Absolute path of the written file
size    int    Bytes written

Read a file

# Read as bytes (default)
content = sbx.files.read("/workspace/output.txt")
print(content)  # b'...'

# Read as text
text = sbx.files.read("/workspace/output.txt", format="text")
print(text)  # '...'

List a directory

entries = sbx.files.list("/workspace")
for entry in entries:
    print(entry.name, entry.type, entry.size)

EntryInfo model

Field      Type       Description
name       str        Filename or directory name
type       FileType   file or dir
size       int        Size in bytes (0 for directories)
path       str        Full absolute path
modified   datetime   Last modification time
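
As a small illustration of the model above, here is a hedged helper that totals the bytes a listing reports. It assumes only that each entry exposes the type and size fields from the table; the string-based type check is an assumption that keeps the sketch independent of the SDK's FileType enum:

```python
def total_file_bytes(entries):
    """Sum the sizes of the file entries in one directory listing.

    Assumes each entry exposes .type and .size as in the EntryInfo model.
    The check compares the string form of the type, so it works with
    either an enum value (FileType.file) or a plain "file" string.
    """
    return sum(e.size for e in entries if str(e.type).endswith("file"))

# total = total_file_bytes(sbx.files.list("/workspace"))
```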

Check if a path exists

if sbx.files.exists("/workspace/output.csv"):
    data = sbx.files.read("/workspace/output.csv")

Get file info

info = sbx.files.get_info("/workspace/model.pkl")
print(info.size)      # 4194304 (bytes)
print(info.modified)  # 2024-01-15T10:30:00Z
print(info.type)      # FileType.file

Create a directory

sbx.files.make_dir("/workspace/results")

Rename or move a file

sbx.files.rename("/workspace/temp.csv", "/workspace/final.csv")

Remove a file or directory

sbx.files.remove("/workspace/temp.txt")

# Remove directory (recursive)
sbx.files.remove("/workspace/old-results")

Watch a directory for changes

watch_dir() / watchDir() registers a watcher on the directory and returns a WatchHandle. The handle buffers FilesystemEvent objects internally — drain them with get_new_events() / getNewEvents(), and call stop() when you’re done.
import time
from declaw import FilesystemEventType

handle = sbx.files.watch_dir("/workspace")
sbx.commands.run("touch /workspace/output.txt")

# Poll the buffered events
time.sleep(0.5)
for event in handle.get_new_events():
    if event.type == FilesystemEventType.create:
        print(f"Created: {event.path}")

handle.stop()
Full SSE streaming from envd into the WatchHandle buffer is still landing. The current release registers the watcher server-side and exposes the poll API; events may not populate until the streaming change ships.

FilesystemEvent model

Field       Type                  Description
type        FilesystemEventType   create, modify, or delete
path        str                   Absolute path of the changed file
timestamp   datetime              When the event occurred
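
In practice you often want to block until a particular path changes rather than poll by hand. A minimal helper under the model above (get_events stands in for handle.get_new_events, and each event is assumed to expose .path as described; the names here are illustrative, not SDK API):

```python
import time

def wait_for_path(get_events, want_path, timeout=5.0, interval=0.25):
    """Poll a zero-argument event source until an event for want_path
    arrives; return that event, or None once timeout seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for event in get_events():
            if event.path == want_path:
                return event
        time.sleep(interval)
    return None

# event = wait_for_path(handle.get_new_events, "/workspace/output.txt")
```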

Upload and download patterns

Upload a local file to the sandbox

with open("local_dataset.csv", "rb") as f:
    sbx.files.write("/workspace/dataset.csv", f.read())

result = sbx.commands.run("python3 analyze.py /workspace/dataset.csv")

Download a file from the sandbox

# Run a job that produces output
sbx.commands.run("python3 -c \"import json; json.dump({'result': 42}, open('/workspace/out.json','w'))\"")

# Read results back to the host
content = sbx.files.read("/workspace/out.json")
import json
data = json.loads(content)
print(data)  # {'result': 42}

Upload multiple files efficiently

import os

files = []
for fname in os.listdir("./scripts"):
    with open(f"./scripts/{fname}", "rb") as f:
        files.append(WriteEntry(
            path=f"/workspace/scripts/{fname}",
            data=f.read(),
        ))

sbx.files.write_files(files)
The write_files() batch call is more efficient than calling write() in a loop. It sends all files in a single HTTP request.
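
To push a whole local directory tree in one batch call, collect (remote path, bytes) pairs with a plain os.walk helper and wrap them in WriteEntry objects. The helper below is pure stdlib; only the commented last line touches the SDK, and it assumes the WriteEntry/write_files names shown earlier:

```python
import os

def collect_entries(local_dir, remote_dir):
    """Return (remote_path, data) pairs for every file under local_dir,
    preserving the relative layout with forward-slash sandbox paths."""
    pairs = []
    for root, _dirs, files in os.walk(local_dir):
        for fname in sorted(files):
            local_path = os.path.join(root, fname)
            rel = os.path.relpath(local_path, local_dir)
            remote_path = remote_dir.rstrip("/") + "/" + rel.replace(os.sep, "/")
            with open(local_path, "rb") as f:
                pairs.append((remote_path, f.read()))
    return pairs

# sbx.files.write_files([WriteEntry(path=p, data=d)
#                        for p, d in collect_entries("./scripts", "/workspace/scripts")])
```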

Streaming upload and download

sbx.files.read() and sbx.files.write() buffer the whole payload in memory, which is fine up to roughly 10 MiB. For larger files (model weights, datasets, snapshot archives) use the raw streaming endpoints: sbx.upload_url() and sbx.download_url() return path-based URLs under api.declaw.ai that accept binary bodies up to 500 MiB, streamed end-to-end. Every request must include your X-API-Key header. Because the URLs carry no credentials themselves, they are safe to use from your own processes (CI jobs, local scripts, agents); even so, avoid sharing them with third parties, since the API key they would need is yours.
# PUT a large binary to the sandbox
upload_url = sbx.upload_url("/workspace/model.bin")
# Send the bytes with curl, requests, or any HTTP client.

# GET the file back
download_url = sbx.download_url("/workspace/output.zip")
Example binary upload with curl:
curl -X PUT "$UPLOAD_URL" \
  -H "X-API-Key: $DECLAW_API_KEY" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @model.bin
Example download:
curl "$DOWNLOAD_URL" \
  -H "X-API-Key: $DECLAW_API_KEY" \
  -o output.zip
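
The same transfers work from Python; keeping memory bounded just needs a chunked reader. The generator below is pure stdlib, and the commented line sketches how it could feed a PUT with the third-party requests library (an assumption for illustration, not a documented dependency):

```python
def stream_chunks(path, chunk_size=1 << 20):
    """Yield a file's contents in chunk_size pieces so an HTTP client
    can stream the request body instead of loading it all into memory."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk

# requests.put(upload_url, data=stream_chunks("model.bin"),
#              headers={"X-API-Key": api_key,
#                       "Content-Type": "application/octet-stream"})
```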