
DaytonaSandbox

Executes commands in isolated Daytona cloud sandboxes. Supports multiple runtimes, resource configuration, volumes, snapshots, streaming output, sandbox reconnection, filesystem mounting (S3, GCS), and network isolation.

info

For interface details, see WorkspaceSandbox interface.

Installation

npm install @mastra/daytona

Set your Daytona API key, either by passing apiKey to the constructor or via an environment variable:

export DAYTONA_API_KEY=your-api-key

Usage

Add a DaytonaSandbox to a workspace and assign it to an agent:

import { Agent } from '@mastra/core/agent'
import { Workspace } from '@mastra/core/workspace'
import { DaytonaSandbox } from '@mastra/daytona'

const workspace = new Workspace({
  sandbox: new DaytonaSandbox({
    language: 'typescript',
    timeout: 120_000,
  }),
})

const agent = new Agent({
  id: 'code-agent',
  name: 'Code Agent',
  instructions: 'You are a coding assistant working in this workspace.',
  model: 'anthropic/claude-sonnet-4-6',
  workspace,
})

const response = await agent.generate(
  'Print "Hello, world!" and show the current working directory.',
)

console.log(response.text)
// I'll run both commands simultaneously!
//
// Here are the results:
//
// 1. **Hello, world!** — Successfully printed the message.
// 2. **Current Working Directory** — `/home/daytona`
//
// Both commands ran in parallel and completed successfully!

With a snapshot

Use a pre-built snapshot to skip environment setup time:

const workspace = new Workspace({
  sandbox: new DaytonaSandbox({
    snapshot: 'my-snapshot-id',
    timeout: 60_000,
  }),
})

Custom image with resources

Use a custom Docker image with specific resource allocation:

const workspace = new Workspace({
  sandbox: new DaytonaSandbox({
    image: 'node:20-slim',
    resources: { cpu: 2, memory: 4, disk: 6 },
    language: 'typescript',
  }),
})

Ephemeral sandbox

For one-shot tasks where the sandbox should be deleted immediately on stop:

const workspace = new Workspace({
  sandbox: new DaytonaSandbox({
    ephemeral: true,
    language: 'python',
  }),
})

Streaming output

Stream command output in real time via onStdout and onStderr callbacks:

await sandbox.executeCommand('bash', ['-c', 'for i in 1 2 3; do echo "line $i"; sleep 1; done'], {
  onStdout: chunk => process.stdout.write(chunk),
  onStderr: chunk => process.stderr.write(chunk),
})

Both callbacks are optional and can be used independently.
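Because the callbacks fire as output arrives, they can also be used to accumulate the stream yourself. A minimal sketch, assuming `sandbox` is a started DaytonaSandbox:

```typescript
// Collect streamed stdout while also echoing it live.
let collected = ''
await sandbox.executeCommand('sh', ['-c', 'echo one; echo two'], {
  onStdout: chunk => {
    collected += chunk
    process.stdout.write(chunk)
  },
})
// `collected` now holds everything the command wrote to stdout.
```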

Reconnection

Reconnect to an existing sandbox by providing the same id. The sandbox resumes with its files and state intact:

const sandbox = new DaytonaSandbox({ id: 'my-persistent-sandbox' })

// First session
await sandbox._start()
await sandbox.executeCommand('sh', ['-c', 'echo "session 1" > /tmp/state.txt'])
await sandbox._stop()

// Later — reconnects to the same sandbox
const sandbox2 = new DaytonaSandbox({ id: 'my-persistent-sandbox' })
await sandbox2._start()
const result = await sandbox2.executeCommand('cat', ['/tmp/state.txt'])
console.log(result.stdout) // "session 1"

If the sandbox is in a stopped or archived state, it's restarted automatically. If it's in a dead state (destroyed, errored), a fresh sandbox is created instead.
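Because a dead sandbox is silently replaced with a fresh one, code that depends on prior state should verify it after reconnecting. A sketch, reusing the id and file from the example above and assuming the command result exposes an `exitCode` field:

```typescript
const sandbox = new DaytonaSandbox({ id: 'my-persistent-sandbox' })
await sandbox._start()

// `test -f` exits 0 only if the file survived from the previous session.
const check = await sandbox.executeCommand('test', ['-f', '/tmp/state.txt'])
if (check.exitCode !== 0) {
  // The old sandbox was dead and a fresh one was created: re-seed state.
  await sandbox.executeCommand('sh', ['-c', 'echo "session 1" > /tmp/state.txt'])
}
```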

Filesystem mounting

Mount S3 or GCS buckets as local directories inside the sandbox.

Via workspace mounts config

The simplest approach: filesystems are mounted automatically when the sandbox starts.

import { Workspace } from '@mastra/core/workspace'
import { DaytonaSandbox } from '@mastra/daytona'
import { GCSFilesystem } from '@mastra/gcs'
import { S3Filesystem } from '@mastra/s3'

const workspace = new Workspace({
  mounts: {
    '/s3-data': new S3Filesystem({
      bucket: process.env.S3_BUCKET!,
      region: 'auto',
      accessKeyId: process.env.S3_ACCESS_KEY_ID,
      secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
      endpoint: process.env.S3_ENDPOINT, // e.g. https://<account-id>.r2.cloudflarestorage.com
    }),
    '/gcs-data': new GCSFilesystem({
      bucket: process.env.GCS_BUCKET!,
      projectId: 'my-project-id',
      credentials: JSON.parse(process.env.GCS_SERVICE_ACCOUNT_KEY!),
    }),
  },
  sandbox: new DaytonaSandbox({ language: 'python' }),
})

When the workspace starts, the filesystems are automatically mounted at the specified paths. Code running in the sandbox can then access files at /s3-data and /gcs-data as if they were local directories.
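Once mounted, ordinary shell tools work against the bucket contents. A sketch, assuming `sandbox` is the started DaytonaSandbox from the workspace above and `input.csv` is a hypothetical object in the S3 bucket:

```typescript
// List the mounted S3 bucket like any local directory.
const listing = await sandbox.executeCommand('ls', ['-la', '/s3-data'])
console.log(listing.stdout)

// Plain `cp` moves data between buckets via the two mounts.
await sandbox.executeCommand('cp', ['/s3-data/input.csv', '/gcs-data/input.csv'])
```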

Via sandbox.mount()

Mount manually at any point after the sandbox has started:

S3

import { S3Filesystem } from '@mastra/s3'

await sandbox.mount(
  new S3Filesystem({
    bucket: process.env.S3_BUCKET!,
    region: 'us-east-1',
    accessKeyId: process.env.S3_ACCESS_KEY_ID,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
  }),
  '/data',
)

S3-compatible (Cloudflare R2, MinIO)

import { S3Filesystem } from '@mastra/s3'

await sandbox.mount(
  new S3Filesystem({
    bucket: process.env.S3_BUCKET!,
    region: 'auto',
    accessKeyId: process.env.S3_ACCESS_KEY_ID,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
    endpoint: process.env.S3_ENDPOINT, // e.g. https://<account-id>.r2.cloudflarestorage.com
  }),
  '/data',
)

GCS

import { GCSFilesystem } from '@mastra/gcs'

await sandbox.mount(
  new GCSFilesystem({
    bucket: process.env.GCS_BUCKET!,
    projectId: 'my-project-id',
    credentials: JSON.parse(process.env.GCS_SERVICE_ACCOUNT_KEY!),
  }),
  '/data',
)

Network isolation

Restrict outbound network access:

const workspace = new Workspace({
  sandbox: new DaytonaSandbox({
    networkBlockAll: true,
    networkAllowList: '10.0.0.0/8,192.168.0.0/16',
  }),
})
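With networkBlockAll plus an allow list, requests to hosts outside the listed CIDRs should fail from inside the sandbox. A hedged sketch for probing the policy, assuming curl is available in the image and the command result exposes an `exitCode` field:

```typescript
// curl exits non-zero when the host is unreachable, so a blocked
// destination shows up as a failed exit code rather than a response.
const probe = await sandbox.executeCommand('curl', [
  '-sS', '--max-time', '5', 'https://example.com',
])
console.log(probe.exitCode === 0 ? 'reachable' : 'blocked by policy')
```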

Constructor parameters

id?: string (default: auto-generated)
Unique identifier for this sandbox instance.

apiKey?: string
Daytona API key for authentication. Falls back to the DAYTONA_API_KEY environment variable.

apiUrl?: string
Daytona API endpoint. Falls back to the DAYTONA_API_URL environment variable.

target?: string
Runner region. Falls back to the DAYTONA_TARGET environment variable.

timeout?: number (default: 300000, i.e. 5 minutes)
Default execution timeout in milliseconds.

language?: 'typescript' | 'javascript' | 'python' (default: 'typescript')
Runtime language for the sandbox.

snapshot?: string
Pre-built snapshot ID to create the sandbox from. Takes precedence over image.

image?: string
Docker image for sandbox creation. Triggers image-based creation when set. Can be combined with resources. Ignored when snapshot is set.

resources?: { cpu?: number; memory?: number; disk?: number }
Resource allocation for the sandbox (CPU cores, memory in GiB, disk in GiB). Only used when image is set.

env?: Record<string, string> (default: {})
Environment variables to set in the sandbox.

labels?: Record<string, string> (default: {})
Custom metadata labels.

name?: string (default: the sandbox id)
Sandbox display name.

user?: string (default: 'daytona')
OS user to run commands as.

public?: boolean (default: false)
Make port previews public.

ephemeral?: boolean (default: false)
Delete the sandbox immediately on stop.

autoStopInterval?: number (default: 15)
Auto-stop interval in minutes. Set to 0 to disable.

autoArchiveInterval?: number (default: 7 days)
Auto-archive interval in minutes. Set to 0 for the maximum interval (7 days).

autoDeleteInterval?: number (default: disabled)
Auto-delete interval in minutes. Negative values disable auto-delete. Set to 0 to delete on stop.

volumes?: Array<{ volumeId: string; mountPath: string }>
Daytona volumes to attach at sandbox creation time.

networkBlockAll?: boolean (default: false)
Block all outbound network access from the sandbox.

networkAllowList?: string
Comma-separated list of allowed CIDR addresses when network access is restricted.
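The lifecycle intervals compose. For instance, a sandbox that stops after 10 idle minutes, archives after a day, and is never auto-deleted could be configured as follows (a sketch using the options above; the label values are illustrative):

```typescript
const sandbox = new DaytonaSandbox({
  language: 'python',
  autoStopInterval: 10,      // stop after 10 idle minutes
  autoArchiveInterval: 1440, // archive a stopped sandbox after 24 hours
  autoDeleteInterval: -1,    // negative values disable auto-delete
  labels: { team: 'data', purpose: 'etl' },
})
```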

Properties

id: string
Sandbox instance identifier.

name: string
Provider name ('DaytonaSandbox').

provider: string
Provider identifier ('daytona').

status: ProviderStatus
One of 'pending' | 'initializing' | 'ready' | 'stopped' | 'destroyed' | 'error'.

instance: Sandbox
The underlying Daytona Sandbox instance. Throws SandboxNotReadyError if the sandbox has not been started.

processes: DaytonaProcessManager
Background process manager. See the [SandboxProcessManager reference](/reference/workspace/process-manager).

Background processes

DaytonaSandbox includes a built-in process manager for spawning and managing background processes. Processes run in the Daytona cloud sandbox using session-based command execution.

const sandbox = new DaytonaSandbox({ language: 'typescript' })
await sandbox.start()

// Spawn a background process
const handle = await sandbox.processes.spawn('node server.js', {
  env: { PORT: '3000' },
  onStdout: data => console.log(data),
})

// Interact with the process
console.log(handle.stdout)
await handle.sendStdin('input\n')
await handle.kill()

See SandboxProcessManager reference for the full API.

Mounting cloud storage

Daytona sandboxes can mount S3 or GCS buckets, making cloud storage accessible as local directories inside the sandbox. This is useful for:

  • Processing large datasets stored in cloud buckets
  • Writing output files directly to cloud storage
  • Sharing data between sandbox sessions

For usage examples, see Filesystem mounting.

Daytona sandboxes use FUSE (Filesystem in Userspace) to mount cloud storage: s3fs for S3 buckets and gcsfuse for GCS buckets. The required FUSE tools are installed automatically at mount time if not already present in the sandbox image.
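To know up front whether a mount will trigger the extra install step, you can probe the image for the tools. A sketch, assuming `sandbox` has been started:

```typescript
// `which` prints the binary path when the tool is on PATH,
// and prints nothing when it is missing.
const s3fs = await sandbox.executeCommand('which', ['s3fs'])
const gcsfuse = await sandbox.executeCommand('which', ['gcsfuse'])
console.log('s3fs present:', s3fs.stdout.trim().length > 0)
console.log('gcsfuse present:', gcsfuse.stdout.trim().length > 0)
```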

S3 environment variables

| Variable | Description |
| --- | --- |
| S3_BUCKET | Bucket name |
| S3_REGION | AWS region, or auto for R2/MinIO |
| S3_ACCESS_KEY_ID | Access key ID |
| S3_SECRET_ACCESS_KEY | Secret access key |
| S3_ENDPOINT | Endpoint URL (S3-compatible only) |

GCS environment variables

| Variable | Description |
| --- | --- |
| GCS_BUCKET | Bucket name |
| GCS_SERVICE_ACCOUNT_KEY | Service account key JSON (full JSON string, not a path) |

Reducing cold start latency with a snapshot

By default, s3fs and gcsfuse are installed at first mount via apt, which adds startup time. To eliminate this, prebake them into a Daytona snapshot and pass the snapshot name via the snapshot option.

Option 1: Declarative image build

import { Daytona, Image } from '@daytonaio/sdk'

const template = Image.base('daytonaio/sandbox')
  .runCommands('sudo apt-get update -qq')
  .runCommands('sudo apt-get install -y s3fs')
  // gcsfuse requires the Google Cloud apt repository
  .runCommands(
    'sudo mkdir -p /etc/apt/keyrings && ' +
      'curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg -o /tmp/gcsfuse-key.gpg && ' +
      'sudo gpg --batch --yes --dearmor -o /etc/apt/keyrings/gcsfuse.gpg /tmp/gcsfuse-key.gpg && ' +
      // Use gcsfuse-jammy for Ubuntu, gcsfuse-bookworm for Debian
      'echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] https://packages.cloud.google.com/apt gcsfuse-jammy main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list',
  )
  .runCommands('sudo apt-get update -qq && sudo apt-get install -y gcsfuse')

const daytona = new Daytona()

await daytona.snapshot.create(
  {
    name: 'cloud-fs-mounting',
    image: template,
  },
  { onLogs: console.log },
)

Option 2: Dockerfile, using Image.fromDockerfile()

FROM daytonaio/sandbox
RUN sudo apt-get update -qq
RUN sudo apt-get install -y s3fs
# Use gcsfuse-jammy for Ubuntu, gcsfuse-bookworm for Debian
RUN sudo mkdir -p /etc/apt/keyrings && curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg -o /tmp/gcsfuse-key.gpg && sudo gpg --batch --yes --dearmor -o /etc/apt/keyrings/gcsfuse.gpg /tmp/gcsfuse-key.gpg && echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] https://packages.cloud.google.com/apt gcsfuse-jammy main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
RUN sudo apt-get update -qq && sudo apt-get install -y gcsfuse

import { Daytona, Image } from '@daytonaio/sdk'

const daytona = new Daytona()

await daytona.snapshot.create(
  {
    name: 'cloud-fs-mounting',
    image: Image.fromDockerfile('./Dockerfile'),
  },
  { onLogs: console.log },
)

Then use the snapshot name in your sandbox config:

const workspace = new Workspace({
  mounts: {
    '/s3-data': new S3Filesystem({
      /* ... */
    }),
    '/gcs-data': new GCSFilesystem({
      /* ... */
    }),
  },
  sandbox: new DaytonaSandbox({ snapshot: 'cloud-fs-mounting' }),
})

Direct SDK access

Access the underlying Daytona Sandbox instance for filesystem, git, and other operations not exposed through the WorkspaceSandbox interface:

const daytonaSandbox = sandbox.instance

// Upload a file
await daytonaSandbox.fs.uploadFile(Buffer.from('hello'), '/tmp/hello.txt')

// Run git operations
await daytonaSandbox.git.clone('https://github.com/org/repo', '/workspace/repo')

The instance getter throws SandboxNotReadyError if the sandbox hasn't been started yet.
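One way to avoid the throw is to gate direct SDK access on the status property listed above. A minimal sketch:

```typescript
// Only touch the underlying Daytona SDK once the sandbox reports ready.
if (sandbox.status === 'ready') {
  await sandbox.instance.fs.uploadFile(Buffer.from('hello'), '/tmp/hello.txt')
}
```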

Sandbox creation modes

DaytonaSandbox selects a creation mode based on the options provided:

| Options | Creation mode |
| --- | --- |
| snapshot set | Snapshot-based (snapshot takes precedence over image) |
| image set (no snapshot) | Image-based (optionally with resources) |
| Neither set | Default snapshot-based |

Resources are only applied when image is set. Passing resources without image has no effect.
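These precedence rules mean a config that sets everything still resolves to snapshot-based creation. A sketch annotating which options actually take effect:

```typescript
const sandbox = new DaytonaSandbox({
  snapshot: 'my-snapshot-id',       // wins: snapshot-based creation
  image: 'node:20-slim',            // ignored: snapshot takes precedence
  resources: { cpu: 2, memory: 4 }, // ignored: resources apply only with image
})
```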