Physical network layout
Flow 1: HTTPS with no security scanning
When only a domain allowlist is configured (no PII scanning, no injection scanning), the proxy uses TLS passthrough: it inspects the SNI without decrypting. The workload negotiates TLS directly with the destination, so the proxy is invisible.

Flow 2: HTTPS with PII scanning (MITM active)

When PIIConfig.enabled=True, the proxy performs full TLS interception.
Flow 3: Domain blocked
Flow 4: IP blocked by iptables (CIDR rule)
IP and CIDR rules bypass the userspace proxy entirely; they are enforced as kernel-level DROP rules.

Flow 5: Metadata service block (always-on)
The cloud metadata endpoint 169.254.169.254 is blocked by a hardcoded DROP rule, regardless of network policy.
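As a sketch, the always-on rule could be emitted like this; the chain, interface name, and flags are assumptions, since the document only states that the DROP is hardcoded:

```python
# The one address the document says is always blocked.
METADATA_IP = "169.254.169.254"

def metadata_drop_rule(interface: str = "eth0") -> str:
    """Render the hardcoded metadata DROP rule (hypothetical chain/flags)."""
    # Installed unconditionally, before any policy-derived rules.
    return f"iptables -A OUTPUT -o {interface} -d {METADATA_IP} -j DROP"
```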
Flow 6: envd traffic (API-to-sandbox)
SDK calls go through the orchestrator to envd via the private veth pair; this traffic never crosses the public network.

iptables rule structure
The orchestrator installs rules in two places for each sandbox:

1. In the sandbox network namespace (applied to the veth and TAP interfaces)
2. In the Firecracker VM (applied to eth0, set via envd at boot)
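A hedged sketch of generating rules for both locations; the interface names, chains, and helper functions here are invented for illustration and do not reflect the orchestrator's real code:

```python
from dataclasses import dataclass

@dataclass
class SandboxNetwork:
    veth: str  # host-side veth interface for this sandbox
    tap: str   # TAP interface backing the Firecracker VM

def namespace_rules(net: SandboxNetwork, blocked_cidrs: list[str]) -> list[str]:
    """Rules applied inside the sandbox network namespace (veth + TAP)."""
    rules = []
    for iface in (net.veth, net.tap):
        for cidr in blocked_cidrs:
            rules.append(f"iptables -A FORWARD -i {iface} -d {cidr} -j DROP")
    return rules

def vm_rules(blocked_cidrs: list[str]) -> list[str]:
    """Rules applied to eth0 inside the Firecracker VM, pushed via envd at boot."""
    return [f"iptables -A OUTPUT -o eth0 -d {c} -j DROP" for c in blocked_cidrs]
```

Keeping both rule sets derived from the same CIDR list is one way to ensure the namespace and in-VM views of the policy cannot drift apart.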
Summary of interception points
| Traffic type | Intercepted by | What happens |
|---|---|---|
| HTTP to blocked IP | Sandbox iptables (kernel) | DROP — no proxy involved |
| HTTP to blocked domain | Namespace proxy (userspace) | RST after Host header read |
| HTTP to allowed domain, no scan | Namespace proxy | Forward raw TCP |
| HTTPS to blocked IP | Sandbox iptables (kernel) | DROP — no proxy involved |
| HTTPS to blocked domain | Namespace proxy | RST after SNI peek |
| HTTPS to allowed domain, no scan | Namespace proxy | TLS passthrough |
| HTTPS to allowed domain, scan active | Namespace proxy (MITM) | Decrypt, scan, re-encrypt |
| API-to-envd traffic | Private veth pair | No interception, private network |
| Cloud metadata (169.254.169.254) | VM iptables (hardcoded) | DROP always |
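The decision table above can be condensed into a small dispatch function. This is a summary sketch of the table, not the proxy's real control flow:

```python
def classify(proto: str, ip_blocked: bool, domain_allowed: bool, scan: bool) -> str:
    """Map (protocol, IP verdict, domain verdict, scanning) to the table's action."""
    if ip_blocked:
        return "kernel DROP"                    # iptables; the proxy never sees it
    if not domain_allowed:
        return "RST"                            # after Host header read / SNI peek
    if proto == "https" and scan:
        return "MITM decrypt-scan-reencrypt"    # full TLS interception
    if proto == "https":
        return "TLS passthrough"                # SNI inspected, payload untouched
    return "forward raw TCP"                    # allowed HTTP, no scanning
```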