The devops template ships Go 1.23, the docker CLI, terraform 1.9.8, kubectl, helm, the AWS CLI v2, and ansible. Pick it whenever an agent needs to lint infrastructure-as-code, render a Helm chart, or assemble a multi-tool pipeline without pulling binaries on every run.

What you’ll learn

  • Picking template="devops" to skip a long apt install / binary download sequence
  • Running terraform validate on a generated HCL file
  • Using kubectl client-side against a local manifest (no cluster needed)

Prerequisites

Export your Declaw credentials before running the example:

export DECLAW_API_KEY="your-api-key"
export DECLAW_DOMAIN="your-declaw-instance.example.com:8080"

Code

import textwrap

from declaw import Sandbox


TF_MAIN = textwrap.dedent("""
    terraform {
      required_version = ">= 1.5.0"
    }

    variable "bucket_name" { type = string }

    resource "null_resource" "example" {
      triggers = { name = var.bucket_name }
    }

    output "bucket" { value = var.bucket_name }
""")

K8S_MANIFEST = textwrap.dedent("""
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo
    spec:
      replicas: 2
      selector: { matchLabels: { app: demo } }
      template:
        metadata:
          labels: { app: demo }
        spec:
          containers:
          - name: web
            image: nginx:1.27
            ports:
            - containerPort: 80
""")


def main() -> None:
    sbx = Sandbox.create(template="devops", timeout=180)
    try:
        for tool in ("terraform -version", "kubectl version --client",
                     "helm version --short", "docker --version"):
            r = sbx.commands.run(tool, timeout=15)
            print(f"{tool:<30} => {r.stdout.splitlines()[0] if r.stdout else r.stderr.strip()}")

        # Terraform: init + validate a tiny module.
        sbx.files.mkdir("/tmp/tf")
        sbx.files.write("/tmp/tf/main.tf", TF_MAIN)
        sbx.files.write(
            "/tmp/tf/terraform.tfvars",
            'bucket_name = "declaw-demo"\n',
        )
        r = sbx.commands.run(
            "cd /tmp/tf && terraform init -input=false -backend=false "
            "-no-color >/dev/null && terraform validate -no-color",
            timeout=90,
        )
        print("\nterraform validate:", r.stdout.strip() or r.stderr.strip())

        # Kubernetes: client-side dry-run against the manifest.
        sbx.files.write("/tmp/deploy.yaml", K8S_MANIFEST)
        r = sbx.commands.run(
            "kubectl apply --dry-run=client -f /tmp/deploy.yaml",
            timeout=15,
        )
        print("kubectl dry-run:  ", r.stdout.strip() or r.stderr.strip())
    finally:
        sbx.kill()


if __name__ == "__main__":
    main()

Expected output

terraform -version             => Terraform v1.9.8
kubectl version --client       => Client Version: v1.31.x
helm version --short           => v3.x.x+g...
docker --version               => Docker version 24.x.x, build ...

terraform validate: Success! The configuration is valid.
kubectl dry-run:    deployment.apps/demo created (dry run)
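The intro mentions rendering a Helm chart, which the main example doesn't exercise. Here is a minimal sketch that reuses the same sbx.commands.run API from the example above; the chart name "demo" is illustrative, and helm create / helm template are both client-side, so no cluster is needed:

```python
# Sketch: scaffold a chart with `helm create`, then render it client-side
# with `helm template`. Assumes the same `sbx.commands.run` API used above;
# the chart name is illustrative.
def render_chart(sbx, name: str = "demo") -> str:
    """Return the rendered manifests for a freshly scaffolded chart."""
    r = sbx.commands.run(
        f"cd /tmp && helm create {name} >/dev/null "
        f"&& helm template {name} ./{name}",
        timeout=30,
    )
    return r.stdout
```

Call it with the sandbox from main() (e.g. print(render_chart(sbx))) to dump the rendered Deployment, Service, and ServiceAccount manifests that helm's default chart produces.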
No outbound network is required for these validations; they are purely client-side. To actually run terraform apply against AWS or talk to a real cluster, attach a SecurityPolicy with a domain allowlist and pass credentials via the envs= kwarg.
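A minimal sketch of that credentialed setup. The envs= kwarg and SecurityPolicy come from the note above, but the exact SecurityPolicy constructor arguments (allowed_domains, the security_policy= kwarg name) are assumptions — check your SDK reference for the real signatures:

```python
import os

# Forward AWS credentials from the host environment into the sandbox.
AWS_VARS = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION")

def aws_envs() -> dict:
    """Collect AWS credential variables from the host, skipping unset ones."""
    return {k: os.environ[k] for k in AWS_VARS if k in os.environ}

# Usage sketch (kwarg and field names are assumptions, not confirmed API):
# policy = SecurityPolicy(allowed_domains=["sts.amazonaws.com", "*.amazonaws.com"])
# sbx = Sandbox.create(template="devops", security_policy=policy, envs=aws_envs())
```

Keeping the allowlist to the AWS endpoints your module actually calls limits what a compromised or misbehaving plan can reach.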