Sandboxes are short-lived, isolated environments that you can spin up instantly for code execution. They can be deployed within Buildfunctions (via CPU or GPU Functions) or created from your app code anywhere else (e.g., local scripts, Next.js apps, external workers).

Core Concepts

  • Simple to use: Sandboxes are created, used, and destroyed seamlessly.
  • Secure: They provide a safe boundary for running untrusted AI actions, like executing AI-generated code.
  • Nested: You can run a Sandbox inside a data processing pipeline or an AI agent workflow.

Supported Runtimes

Runtime    Supported Sandboxes
Python     CPU, GPU
Go         CPU only
Node.js    CPU only
Deno       CPU only
Bash       CPU only

CPU Sandboxes

CPUSandbox is ideal for running lightweight code, processing data, or securely executing user-submitted scripts.

Create Hardware-Isolated Sandbox and Run Code

JavaScript
import { CPUSandbox } from 'buildfunctions';

// Create a CPU Sandbox
const cpuSandbox = await CPUSandbox.create({
    name: "text-analyzer",
    runtime: "node",
    memory: "512MB",
    timeout: 120
});

try {
    const result = await cpuSandbox.run("console.log('Hello from Sandbox!');");
    console.log(result.stdout);
} finally {
    // Manually clean up
    await cpuSandbox.delete();
}

GPU Sandboxes

GPUSandbox provides instant access to secure, hardware-isolated VMs with GPUs. GPU Sandboxes include automatic storage for self-hosted models (perfect for agents) and support concurrent requests on the same GPU for significant cost savings.

Upload Script and Run Inference

You can upload scripts or files into the sandbox and execute them on the GPU.
JavaScript
...
// Create a GPU Sandbox
const sandbox = await GPUSandbox.create({
  name: 'secure-agent-action',
  memory: "65536MB",
  timeout: 300,
  language: 'python',
  requirements: ['transformers', 'torch', 'accelerate'],
  model: '/home/prod/Qwen/Qwen3-8B',
})

// Upload inference script from path (or just inline code)
await sandbox.upload({ filePath: 'inference_script.py' })

// Run script in a hardware-isolated virtual machine with full GPU access
const result = await sandbox.run(
  `python inference_script.py "${prompt}"`
)
...

Sandbox Management

Delete and Timeouts

You can call delete() manually to clean up a Sandbox when you're ready. If you don't call delete(), the sandbox is cleaned up automatically after the period you set with the timeout argument.
  • Default Timeout: If you don’t set a timeout argument, the default is 1 minute.
  • Auto-Cleanup: The sandbox is destroyed automatically after the timeout expires.
JavaScript
...
} finally {
    await gpuSandbox.delete();
}
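If you prefer to rely on auto-cleanup instead, a minimal sketch (reusing the CPUSandbox API from the earlier example, with an explicit timeout) could look like this:
JavaScript
import { CPUSandbox } from 'buildfunctions';

// No delete() call below: per the auto-cleanup rule, the sandbox
// is destroyed automatically once the 60-second timeout expires.
const sandbox = await CPUSandbox.create({
    name: "auto-cleanup-demo",
    runtime: "node",
    memory: "512MB",
    timeout: 60
});

const result = await sandbox.run("console.log('Cleaned up automatically');");
console.log(result.stdout);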

Sandbox Configuration

You can customize the resources and environment for your sandboxes.

Parameters

GPU Sandbox (Python SDK)
  • language: python (more coming soon).
  • memory: RAM allocation (e.g., "65536MB").
  • gpu: GPU Type (e.g., T4).
  • requirements: List of Python packages (e.g., ['transformers']).
  • model: Path to the model, either local or remote (e.g., Hugging Face Qwen/Qwen3-8B).
CPU Sandbox (Node.js SDK)
  • runtime: (e.g., node, python).
  • memory: RAM allocation.
  • timeout: Max execution time in seconds.
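For illustration, here is a hedged sketch that combines these parameters in a single GPUSandbox.create call (the gpu option is listed above for the Python SDK; whether the JavaScript SDK accepts it under the same name is an assumption, and the values shown are placeholders, not defaults):
JavaScript
// Sketch only: parameter values are illustrative placeholders.
const configuredSandbox = await GPUSandbox.create({
  name: 'configured-sandbox',
  language: 'python',            // only Python is supported today
  memory: "16384MB",             // RAM allocation
  gpu: 'T4',                     // GPU type (assumed to be accepted here)
  timeout: 300,                  // seconds before auto-cleanup
  requirements: ['transformers', 'accelerate'],
  model: 'Qwen/Qwen3-8B',        // remote (Hugging Face) model path
});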

Runtime Specifics

Python Requirements

You can specify dependencies in your code or via a requirements.txt:
transformers==4.47.1
accelerate

Deno Permissions

For Deno, you can pass run flags in your command:
deno run --allow-ffi my_script.ts
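As a rough sketch, and assuming the CPU Sandbox accepts upload() and shell-style commands in run() the same way the GPU example above does (not confirmed on this page), a Deno script needing FFI access might be run like this:
JavaScript
// Assumption: CPUSandbox supports upload() and shell-style run()
// commands like the GPUSandbox example above.
const denoSandbox = await CPUSandbox.create({
    name: "deno-ffi-demo",
    runtime: "deno",
    memory: "512MB",
    timeout: 120
});

try {
    await denoSandbox.upload({ filePath: 'my_script.ts' });
    // Deno permission flags are passed directly in the command
    const result = await denoSandbox.run("deno run --allow-ffi my_script.ts");
    console.log(result.stdout);
} finally {
    await denoSandbox.delete();
}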

Nested Sandboxes

One of the most powerful features of Buildfunctions is Nested Orchestration. You can deploy a top-level Function (e.g., a Node.js API) that spins up child Sandboxes (e.g., Python GPU workers) to handle requests.

Example Architecture

  1. Top-Level Function: Receives an HTTP request.
  2. Child Sandbox: The function spins up a GPUSandbox to run a customized model.
  3. Result: The sandbox returns the inference result to the function, which responds to the user.
  4. Cleanup: The sandbox is destroyed, ensuring clean resource usage.
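A minimal sketch of this flow, assuming a generic handleRequest entry point with Web-standard Request/Response objects (the handler shape is hypothetical; the GPUSandbox calls mirror the example above):
JavaScript
import { GPUSandbox } from 'buildfunctions'; // assumed to live in the same package as CPUSandbox

// Hypothetical top-level Function handler.
export async function handleRequest(request) {
    // 1. Top-level Function receives an HTTP request
    const { prompt } = await request.json();

    // 2. Spin up a child GPU Sandbox to run the model
    const sandbox = await GPUSandbox.create({
        name: 'nested-inference-worker',
        language: 'python',
        memory: "65536MB",
        timeout: 300,
        requirements: ['transformers', 'torch', 'accelerate'],
        model: '/home/prod/Qwen/Qwen3-8B',
    });

    try {
        await sandbox.upload({ filePath: 'inference_script.py' });
        const result = await sandbox.run(`python inference_script.py "${prompt}"`);

        // 3. Return the inference result to the caller
        return new Response(result.stdout);
    } finally {
        // 4. Destroy the sandbox for clean resource usage
        await sandbox.delete();
    }
}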