A realistic health-tech use of the ai-agent template: a LangGraph prior-authorization graph runs on the host, but two sensitive steps hop into Declaw sandboxes under different SecurityPolicy postures:
  1. Payer clearinghouse submit (python sandbox): untrusted third-party payer API, egress locked to the payer domain only.
  2. Appeal-letter draft with GPT-4.1 (ai-agent sandbox): PHI (member_id, SSN, email, phone) is redacted on outbound, OpenAI sees [REDACTED_*] tokens, then Declaw rehydrates the originals in the response so the letter your agent reads back contains the real values.
This is a distilled version of health-tech/sandboxed/01-prior-auth-langgraph/run.py. Two sandboxes, two policies, one graph.
The appeal-draft step spends real OpenAI credits — one pass against gpt-4.1 costs roughly $0.05–$0.15. Set OPENAI_API_KEY before running.

What you’ll learn

  • Running a LangGraph workflow on the host while sandboxing only the steps that touch untrusted inputs or external LLMs
  • Using two different SecurityPolicy objects in one workflow — loose for the clearinghouse, LLM-grade for the GPT-4.1 appeal
  • Using rehydrate_response=True so the agent code is oblivious to the redact/rehydrate round-trip — it sees original PHI in the letter, while OpenAI only ever saw tokens

Prerequisites

export DECLAW_API_KEY="your-api-key"
export DECLAW_DOMAIN="your-declaw-instance.example.com:8080"
pip install declaw langgraph
export OPENAI_API_KEY=sk-...

Code

import json
import os
import textwrap
from typing import Annotated, Literal, TypedDict

from langgraph.graph import END, START, StateGraph

from declaw import (
    Sandbox,
    SecurityPolicy,
    PIIConfig,
    NetworkPolicy,
    AuditConfig,
    ALL_TRAFFIC,
)


# --- Mock PHI (one denied case — missing A1c, drives the appeal path) ---
PATIENT = {
    "patient_id": "p-003",
    "patient_name": "Riya Singh",
    "member_id": "MBR-7781432",
    "ssn": "512-88-4401",
    "email": "riya.singh@example.com",
    "phone": "+1-415-555-0188",
    "diagnoses": ["severe eosinophilic asthma"],
    "medications": ["ICS-LABA (high dose)", "montelukast"],
    "a1c": None,   # absent → payer denies → appeal drafted
    "notes": "Exacerbation in last 12 months despite high-dose ICS-LABA.",
}

POLICY_CRITERIA = [
    "severe eosinophilic phenotype confirmed",
    "trial on high-dose ICS-LABA with continued symptoms",
    "eosinophil count documented",
]


# --- LangGraph state shape ---
class PAState(TypedDict, total=False):
    patient_id: str
    requested_drug: str
    evidence: dict
    packet: dict
    submission_id: str
    status: Literal["pending", "approved", "denied"]
    denial_reasons: list[str]
    appeal_letter: str
    audit_log: Annotated[list[dict], "append-only audit trail"]


# --- Policy factories ---
def untrusted_api_policy(allow_domains: list[str]) -> SecurityPolicy:
    """Outbound call to the payer clearinghouse — third-party untrusted API."""
    return SecurityPolicy(
        pii=PIIConfig(
            enabled=True,
            types=["ssn", "email", "phone", "person_name", "address"],
            action="redact",
            rehydrate_response=False,   # we don't trust responses from here
        ),
        network=NetworkPolicy(allow_out=allow_domains, deny_out=[ALL_TRAFFIC]),
        audit=AuditConfig(enabled=True),
    )


def llm_appeal_policy() -> SecurityPolicy:
    """LLM appeal drafting — PHI redacted outbound, rehydrated inbound."""
    return SecurityPolicy(
        pii=PIIConfig(
            enabled=True,
            types=["ssn", "email", "phone", "person_name", "address",
                   "api_key", "ip_address"],
            action="redact",
            rehydrate_response=True,    # originals restored on response
        ),
        network=NetworkPolicy(
            allow_out=["api.openai.com", "pypi.org",
                       "*.pythonhosted.org", "files.pythonhosted.org"],
            deny_out=[ALL_TRAFFIC],
        ),
        audit=AuditConfig(enabled=True),
    )


# --- Sandbox 1: untrusted payer submit (python template) ---
PAYER_SCRIPT = textwrap.dedent("""
    import json
    with open("/tmp/in.json") as f:
        packet = json.load(f)
    # Mock clearinghouse logic: denies when A1c is missing.
    has_a1c = packet.get("evidence", {}).get("a1c") is not None
    out = {
        "submission_id": "PA-9001",
        "status": "approved" if has_a1c else "denied",
        "reasons": [] if has_a1c else ["missing_a1c"],
    }
    with open("/tmp/out.json", "w") as f:
        json.dump(out, f)
""")


def submit_to_payer(packet: dict) -> dict:
    sbx = Sandbox.create(
        template="python",
        timeout=120,
        security=untrusted_api_policy(["*.payer-clearinghouse.com"]),
    )
    try:
        sbx.files.write("/tmp/in.json", json.dumps(packet))
        sbx.files.write("/tmp/payer.py", PAYER_SCRIPT)
        r = sbx.commands.run("python3 /tmp/payer.py", timeout=60)
        if r.exit_code != 0:
            raise RuntimeError(f"payer submit failed: {r.stderr}")
        return json.loads(sbx.files.read("/tmp/out.json"))
    finally:
        sbx.kill()


# --- Sandbox 2: LLM appeal draft (ai-agent template, PII redact+rehydrate) ---
APPEAL_SCRIPT = textwrap.dedent("""
    import json
    from openai import OpenAI
    with open("/tmp/in.json") as f:
        inp = json.load(f)
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system", "content": (
                "You are a clinical appeals specialist. Draft a concise, "
                "professional prior-authorization appeal letter justifying "
                "medical necessity. Cite the specific policy criteria the "
                "patient meets. Plain text, no markdown. If you see "
                "REDACTED_* tokens, treat them as opaque placeholders for "
                "patient identifiers."
            )},
            {"role": "user", "content": json.dumps(inp)},
        ],
        max_completion_tokens=600,
    )
    with open("/tmp/out.json", "w") as f:
        json.dump({"letter": resp.choices[0].message.content}, f)
""")


def draft_appeal(packet: dict, reasons: list[str]) -> str:
    if not os.getenv("OPENAI_API_KEY"):
        raise SystemExit("Set OPENAI_API_KEY before running this example.")
    sbx = Sandbox.create(
        template="ai-agent",
        timeout=300,
        security=llm_appeal_policy(),
        envs={"OPENAI_API_KEY": os.environ["OPENAI_API_KEY"]},
    )
    try:
        sbx.files.write("/tmp/in.json", json.dumps({
            "submission_id": packet["submission_id"],
            "denial_reasons": reasons,
            "packet": packet,
        }))
        sbx.files.write("/tmp/appeal.py", APPEAL_SCRIPT)
        r = sbx.commands.run("python3 /tmp/appeal.py", timeout=240)
        if r.exit_code != 0:
            raise RuntimeError(f"appeal draft failed: {r.stderr}")
        return json.loads(sbx.files.read("/tmp/out.json"))["letter"]
    finally:
        sbx.kill()


# --- LangGraph nodes ---
def gather(state: PAState) -> PAState:
    return {"evidence": PATIENT,
            "audit_log": [{"node": "gather"}]}


def assemble_packet(state: PAState) -> PAState:
    return {"packet": {
        "patient_id": state["patient_id"],
        "drug": state["requested_drug"],
        "evidence": state["evidence"],
        "policy_criteria": POLICY_CRITERIA,
    }, "audit_log": [{"node": "assemble_packet"}]}


def submit(state: PAState) -> PAState:
    print("[node submit] entering python sandbox (untrusted clearinghouse)")
    result = submit_to_payer(state["packet"])
    # Stash the submission_id on the packet so the appeal-draft node
    # can reference it when prompting the LLM.
    state["packet"]["submission_id"] = result["submission_id"]
    return {
        "submission_id": result["submission_id"],
        "status": result["status"],
        "denial_reasons": result["reasons"],
        "audit_log": [{"node": "submit", "sandboxed": True,
                       "result": result["status"]}],
    }


def appeal(state: PAState) -> PAState:
    print("[node appeal] entering ai-agent sandbox (gpt-4.1, PHI redacted+rehydrated)")
    letter = draft_appeal(state["packet"], state["denial_reasons"])
    return {"appeal_letter": letter,
            "audit_log": [{"node": "appeal", "sandboxed": True,
                           "model": "gpt-4.1"}]}


def route_after_submit(state: PAState) -> str:
    return "appeal" if state["status"] == "denied" else END


def build_graph():
    g = StateGraph(PAState)
    g.add_node("gather", gather)
    g.add_node("assemble_packet", assemble_packet)
    g.add_node("submit", submit)
    g.add_node("appeal", appeal)
    g.add_edge(START, "gather")
    g.add_edge("gather", "assemble_packet")
    g.add_edge("assemble_packet", "submit")
    g.add_conditional_edges("submit", route_after_submit,
                            {"appeal": "appeal", END: END})
    g.add_edge("appeal", END)
    return g.compile()


def main() -> None:
    graph = build_graph()
    result = graph.invoke({
        "patient_id": "p-003",
        "requested_drug": "mepolizumab",
    })

    print("\n=== Prior Auth Result ===")
    print(f"Patient:        {result['patient_id']}")
    print(f"Drug:           {result['requested_drug']}")
    print(f"Submission ID:  {result.get('submission_id')}")
    print(f"Status:         {result.get('status')}")
    if result.get("status") == "denied":
        print(f"Reasons:        {result['denial_reasons']}")
        print("\n--- Appeal Letter (gpt-4.1, PHI rehydrated by declaw proxy) ---")
        print(result["appeal_letter"])


if __name__ == "__main__":
    main()

Expected output (shape)

[node submit] entering python sandbox (untrusted clearinghouse)
[node appeal] entering ai-agent sandbox (gpt-4.1, PHI redacted+rehydrated)

=== Prior Auth Result ===
Patient:        p-003
Drug:           mepolizumab
Submission ID:  PA-9001
Status:         denied
Reasons:        ['missing_a1c']

--- Appeal Letter (gpt-4.1, PHI rehydrated by declaw proxy) ---
To whom it may concern,

On behalf of Riya Singh (member ID MBR-7781432), I am submitting an appeal
for prior authorization of mepolizumab in connection with submission
PA-9001…
The key thing to notice in the letter: the patient’s name is present in cleartext, even though OpenAI only ever saw a [REDACTED_PERSON_NAME] token. rehydrate_response=True on the appeal sandbox’s PIIConfig restores the original from the outbound-redaction token before the response body is handed back to the agent code. (The MBR-7781432 member ID in this example is not one of the built-in PII entities and passes through as-is — see the note below on custom regex rules for payer identifiers.)

What Declaw is doing behind the scenes

  • Two SecurityPolicy objects, two trust postures. The payer-clearinghouse sandbox allows only *.payer-clearinghouse.com outbound and redacts PHI without rehydrating (you don’t trust the payer’s response). The appeal sandbox allows only api.openai.com + PyPI bootstrap and rehydrates responses (you do trust OpenAI not to be storing the tokens).
  • PII scanner runs on every outbound request body in either sandbox. The built-in entity set (ssn, email, phone, person_name, address) covers the common 45 CFR §164.514(b) Safe Harbor identifiers. Member IDs (e.g. MBR-7781432) are payer-specific formats — add a custom regex via a transformation rule to redact and rehydrate them alongside the built-in entities.
  • Rehydration is a proxy-side feature — the VM process sees original PHI in the response bytes even though OpenAI only ever sent tokens back. No agent code change is required.
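The custom-regex point can be illustrated with a plain-Python sketch of the redact/rehydrate round-trip. This is conceptual, not Declaw API: the `MBR-\d{7}` pattern and the helper names are assumptions standing in for whatever transformation rule your payer's member-ID format requires.

```python
import re

# Hypothetical member-ID pattern — payer formats vary; MBR-7781432
# from this example happens to match \bMBR-\d{7}\b.
MEMBER_ID = re.compile(r"\bMBR-\d{7}\b")


def redact(body: str) -> tuple[str, dict[str, str]]:
    """Replace each match with an opaque token; remember the originals."""
    mapping: dict[str, str] = {}

    def _sub(m: re.Match) -> str:
        token = f"[REDACTED_MEMBER_ID_{len(mapping)}]"
        mapping[token] = m.group(0)
        return token

    return MEMBER_ID.sub(_sub, body), mapping


def rehydrate(body: str, mapping: dict[str, str]) -> str:
    """Restore originals in the response (what rehydrate_response=True does)."""
    for token, original in mapping.items():
        body = body.replace(token, original)
    return body


outbound, mapping = redact("Appeal for member MBR-7781432.")
# The model only ever sees: "Appeal for member [REDACTED_MEMBER_ID_0]."
response = f"Letter re: {outbound}"   # model echoes the token back
print(rehydrate(response, mapping))   # prints "Letter re: Appeal for member MBR-7781432."
```

The proxy does the same dance for the built-in entities (ssn, email, phone, …); a custom rule just extends the pattern set so payer-specific identifiers get tokens too.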
For agent-in-sandbox (instead of host LangGraph + sandboxed steps), swap the host-side graph for one running entirely inside a single ai-agent sandbox — same policies, just one longer-lived sandbox. See Agent-in-Sandbox → Fully Secured.