DaytonaSandbox
Executes commands in isolated Daytona cloud sandboxes. Supports multiple runtimes, resource configuration, volumes, snapshots, streaming output, sandbox reconnection, filesystem mounting (S3, GCS), and network isolation.
For interface details, see WorkspaceSandbox interface.
Installation
- npm
- pnpm
- Yarn
- Bun
npm install @mastra/daytona
pnpm add @mastra/daytona
yarn add @mastra/daytona
bun add @mastra/daytona
Set your Daytona API key in one of three ways.
- Shell export
- .env file
- Constructor
export DAYTONA_API_KEY=your-api-key
DAYTONA_API_KEY=your-api-key
new DaytonaSandbox({ apiKey: 'your-api-key' })
Usage
Add a DaytonaSandbox to a workspace and assign it to an agent:
import { Agent } from '@mastra/core/agent'
import { Workspace } from '@mastra/core/workspace'
import { DaytonaSandbox } from '@mastra/daytona'
const workspace = new Workspace({
sandbox: new DaytonaSandbox({
language: 'typescript',
timeout: 120_000,
}),
})
const agent = new Agent({
id: 'code-agent',
name: 'Code Agent',
instructions: 'You are a coding assistant working in this workspace.',
model: 'anthropic/claude-sonnet-4-6',
workspace,
})
const response = await agent.generate(
'Print "Hello, world!" and show the current working directory.',
)
console.log(response.text)
// I'll run both commands simultaneously!
//
// Here are the results:
//
// 1. **Hello, world!** — Successfully printed the message.
// 2. **Current Working Directory** — `/home/daytona`
//
// Both commands ran in parallel and completed successfully!
With a snapshot
Use a pre-built snapshot to skip environment setup time:
const workspace = new Workspace({
sandbox: new DaytonaSandbox({
snapshot: 'my-snapshot-id',
timeout: 60_000,
}),
})
Custom image with resources
Use a custom Docker image with specific resource allocation:
const workspace = new Workspace({
sandbox: new DaytonaSandbox({
image: 'node:20-slim',
resources: { cpu: 2, memory: 4, disk: 6 },
language: 'typescript',
}),
})
Ephemeral sandbox
For one-shot tasks — sandbox is deleted immediately on stop:
const workspace = new Workspace({
sandbox: new DaytonaSandbox({
ephemeral: true,
language: 'python',
}),
})
Streaming output
Stream command output in real time via onStdout and onStderr callbacks:
await sandbox.executeCommand('bash', ['-c', 'for i in 1 2 3; do echo "line $i"; sleep 1; done'], {
onStdout: chunk => process.stdout.write(chunk),
onStderr: chunk => process.stderr.write(chunk),
})
Both callbacks are optional and can be used independently.
Reconnection
Reconnect to an existing sandbox by providing the same id. The sandbox resumes with its files and state intact:
const sandbox = new DaytonaSandbox({ id: 'my-persistent-sandbox' })
// First session
await sandbox.start()
await sandbox.executeCommand('sh', ['-c', 'echo "session 1" > /tmp/state.txt'])
await sandbox.stop()
// Later — reconnects to the same sandbox
const sandbox2 = new DaytonaSandbox({ id: 'my-persistent-sandbox' })
await sandbox2.start()
const result = await sandbox2.executeCommand('cat', ['/tmp/state.txt'])
console.log(result.stdout) // "session 1"
If the sandbox is in a stopped or archived state, it's restarted automatically. If it's in a dead state (destroyed, errored), a fresh sandbox is created instead.
Filesystem mounting
Mount S3 or GCS buckets as local directories inside the sandbox.
Via workspace mounts config
The simplest way — filesystems are mounted automatically when the sandbox starts:
import { Workspace } from '@mastra/core/workspace'
import { DaytonaSandbox } from '@mastra/daytona'
import { GCSFilesystem } from '@mastra/gcs'
import { S3Filesystem } from '@mastra/s3'
const workspace = new Workspace({
mounts: {
'/s3-data': new S3Filesystem({
bucket: process.env.S3_BUCKET!,
region: 'auto',
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
endpoint: process.env.S3_ENDPOINT, // e.g. https://<account-id>.r2.cloudflarestorage.com
}),
'/gcs-data': new GCSFilesystem({
bucket: process.env.GCS_BUCKET!,
projectId: 'my-project-id',
credentials: JSON.parse(process.env.GCS_SERVICE_ACCOUNT_KEY!),
}),
},
sandbox: new DaytonaSandbox({ language: 'python' }),
})
When the workspace starts, the filesystems are automatically mounted at the specified paths. Code running in the sandbox can then access files at /s3-data and /gcs-data as if they were local directories.
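Once mounted, code inside the sandbox treats the mount points as ordinary directories. A minimal sketch, assuming a started DaytonaSandbox is available as sandbox:

```typescript
// List the contents of the mounted S3 bucket from inside the sandbox
const listing = await sandbox.executeCommand('ls', ['-la', '/s3-data'])
console.log(listing.stdout)

// Write a result file straight back to cloud storage through the mount
await sandbox.executeCommand('sh', [
  '-c',
  'echo "processed" > /gcs-data/output.txt',
])
```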
Via sandbox.mount()
Mount manually at any point after the sandbox has started:
S3
import { S3Filesystem } from '@mastra/s3'
await sandbox.mount(
new S3Filesystem({
bucket: process.env.S3_BUCKET!,
region: 'us-east-1',
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
}),
'/data',
)
S3-compatible (Cloudflare R2, MinIO)
import { S3Filesystem } from '@mastra/s3'
await sandbox.mount(
new S3Filesystem({
bucket: process.env.S3_BUCKET!,
region: 'auto',
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
endpoint: process.env.S3_ENDPOINT, // e.g. https://<account-id>.r2.cloudflarestorage.com
}),
'/data',
)
GCS
import { GCSFilesystem } from '@mastra/gcs'
await sandbox.mount(
new GCSFilesystem({
bucket: process.env.GCS_BUCKET!,
projectId: 'my-project-id',
credentials: JSON.parse(process.env.GCS_SERVICE_ACCOUNT_KEY!),
}),
'/data',
)
Network isolation
Restrict outbound network access:
const workspace = new Workspace({
sandbox: new DaytonaSandbox({
networkBlockAll: true,
networkAllowList: '10.0.0.0/8,192.168.0.0/16',
}),
})
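networkAllowList is a single comma-separated string of IPv4 CIDR blocks. A small hypothetical helper, not part of @mastra/daytona and shown only to illustrate the expected format, could sanity-check the string before constructing the sandbox:

```typescript
// Hypothetical validator for the comma-separated CIDR allowlist format.
// Each entry must be four octets (0-255) followed by a /prefix (0-32).
function isValidCidrList(list: string): boolean {
  return list.split(',').every(entry => {
    const match = entry
      .trim()
      .match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\/(\d{1,2})$/)
    if (!match) return false
    const octets = match.slice(1, 5).map(Number)
    const prefix = Number(match[5])
    return octets.every(o => o <= 255) && prefix <= 32
  })
}

console.log(isValidCidrList('10.0.0.0/8,192.168.0.0/16')) // true
console.log(isValidCidrList('10.0.0.999/8')) // false
```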
Constructor parameters
- id?
- apiKey?
- apiUrl?
- target?
- timeout?
- language?
- snapshot?
- image?
- resources?
- env?
- labels?
- name?
- user?
- public?
- ephemeral?
- autoStopInterval?
- autoArchiveInterval?
- autoDeleteInterval?
- volumes?
- networkBlockAll?
- networkAllowList?
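Many of these options appear in the examples above; the rest pass through to the underlying Daytona SDK. As an illustration only, with placeholder values, and assuming the interval options follow the Daytona SDK convention of minutes:

```typescript
import { DaytonaSandbox } from '@mastra/daytona'

// Illustrative configuration sketch, not a recommended setup
const sandbox = new DaytonaSandbox({
  id: 'build-sandbox', // stable id enables reconnection across sessions
  language: 'typescript',
  timeout: 120_000, // milliseconds, as in the examples above
  env: { NODE_ENV: 'production' },
  labels: { team: 'platform' },
  autoStopInterval: 15, // assumed: minutes of inactivity before auto-stop
  ephemeral: false,
})
```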
Properties
- id
- name
- provider
- status
- instance
- processes
Background processes
DaytonaSandbox includes a built-in process manager for spawning and managing background processes. Processes run in the Daytona cloud sandbox using session-based command execution.
const sandbox = new DaytonaSandbox({ language: 'typescript' })
await sandbox.start()
// Spawn a background process
const handle = await sandbox.processes.spawn('node server.js', {
env: { PORT: '3000' },
onStdout: data => console.log(data),
})
// Interact with the process
console.log(handle.stdout)
await handle.sendStdin('input\n')
await handle.kill()
See SandboxProcessManager reference for the full API.
Mounting cloud storage
Daytona sandboxes can mount S3 or GCS buckets, making cloud storage accessible as local directories inside the sandbox. This is useful for:
- Processing large datasets stored in cloud buckets
- Writing output files directly to cloud storage
- Sharing data between sandbox sessions
For usage examples, see Filesystem mounting.
Daytona sandboxes use FUSE (Filesystem in Userspace) to mount cloud storage: s3fs for S3 buckets and gcsfuse for GCS buckets. The required FUSE tools are installed automatically at mount time if not already present in the sandbox image.
S3 environment variables
| Variable | Description |
|---|---|
| S3_BUCKET | Bucket name |
| S3_REGION | AWS region, or auto for R2/MinIO |
| S3_ACCESS_KEY_ID | Access key ID |
| S3_SECRET_ACCESS_KEY | Secret access key |
| S3_ENDPOINT | Endpoint URL (S3-compatible only) |
GCS environment variables
| Variable | Description |
|---|---|
| GCS_BUCKET | Bucket name |
| GCS_SERVICE_ACCOUNT_KEY | Service account key JSON (full JSON string, not a path) |
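For reference, a .env file covering both tables might look like this (all values are placeholders):

```shell
# S3 / S3-compatible
S3_BUCKET=my-bucket
S3_REGION=auto
S3_ACCESS_KEY_ID=placeholder-access-key-id
S3_SECRET_ACCESS_KEY=placeholder-secret
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com

# GCS: the full service account JSON, not a file path
GCS_BUCKET=my-gcs-bucket
GCS_SERVICE_ACCOUNT_KEY={"type":"service_account","project_id":"my-project-id"}
```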
Reducing cold start latency with a snapshot
By default, s3fs and gcsfuse are installed at first mount via apt, which adds startup time. To eliminate this, prebake them into a Daytona snapshot and pass the snapshot name via the snapshot option.
Option 1: Declarative image build
import { Daytona, Image } from '@daytonaio/sdk'
const template = Image.base('daytonaio/sandbox')
.runCommands('sudo apt-get update -qq')
.runCommands('sudo apt-get install -y s3fs')
// gcsfuse requires the Google Cloud apt repository
.runCommands(
'sudo mkdir -p /etc/apt/keyrings && ' +
'curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg -o /tmp/gcsfuse-key.gpg && ' +
'sudo gpg --batch --yes --dearmor -o /etc/apt/keyrings/gcsfuse.gpg /tmp/gcsfuse-key.gpg && ' +
// Use gcsfuse-jammy for Ubuntu, gcsfuse-bookworm for Debian
'echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] https://packages.cloud.google.com/apt gcsfuse-jammy main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list',
)
.runCommands('sudo apt-get update -qq && sudo apt-get install -y gcsfuse')
const daytona = new Daytona()
await daytona.snapshot.create(
{
name: 'cloud-fs-mounting',
image: template,
},
{ onLogs: console.log },
)
Option 2: Dockerfile — using Image.fromDockerfile()
FROM daytonaio/sandbox
RUN sudo apt-get update -qq
RUN sudo apt-get install -y s3fs
# Use gcsfuse-jammy for Ubuntu, gcsfuse-bookworm for Debian
RUN sudo mkdir -p /etc/apt/keyrings && curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg -o /tmp/gcsfuse-key.gpg && sudo gpg --batch --yes --dearmor -o /etc/apt/keyrings/gcsfuse.gpg /tmp/gcsfuse-key.gpg && echo "deb [signed-by=/etc/apt/keyrings/gcsfuse.gpg] https://packages.cloud.google.com/apt gcsfuse-jammy main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
RUN sudo apt-get update -qq && sudo apt-get install -y gcsfuse
import { Daytona, Image } from '@daytonaio/sdk'
const daytona = new Daytona()
await daytona.snapshot.create(
{
name: 'cloud-fs-mounting',
image: Image.fromDockerfile('./Dockerfile'),
},
{ onLogs: console.log },
)
Then use the snapshot name in your sandbox config:
const workspace = new Workspace({
mounts: {
'/s3-data': new S3Filesystem({
/* ... */
}),
'/gcs-data': new GCSFilesystem({
/* ... */
}),
},
sandbox: new DaytonaSandbox({ snapshot: 'cloud-fs-mounting' }),
})
Direct SDK access
Access the underlying Daytona Sandbox instance for filesystem, git, and other operations not exposed through the WorkspaceSandbox interface:
const daytonaSandbox = sandbox.instance
// Upload a file
await daytonaSandbox.fs.uploadFile(Buffer.from('hello'), '/tmp/hello.txt')
// Run git operations
await daytonaSandbox.git.clone('https://github.com/org/repo', '/workspace/repo')
The instance getter throws SandboxNotReadyError if the sandbox hasn't been started yet.
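Since the getter throws before startup, start the sandbox before touching instance. A sketch:

```typescript
import { DaytonaSandbox } from '@mastra/daytona'

const sandbox = new DaytonaSandbox({ language: 'typescript' })
await sandbox.start() // instance is only safe to read after start

try {
  const daytonaSandbox = sandbox.instance
  await daytonaSandbox.fs.uploadFile(Buffer.from('hello'), '/tmp/hello.txt')
} finally {
  await sandbox.stop()
}
```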
Sandbox creation modes
DaytonaSandbox selects a creation mode based on the options provided:
| Options | Creation mode |
|---|---|
| snapshot set | Snapshot-based (snapshot takes precedence over image) |
| image set (no snapshot) | Image-based (optionally with resources) |
| Neither set | Default snapshot-based |
Resources are only applied when image is set. Passing resources without image has no effect.
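The table maps to three constructor shapes:

```typescript
import { DaytonaSandbox } from '@mastra/daytona'

// Snapshot-based: snapshot wins even if image is also set
new DaytonaSandbox({ snapshot: 'my-snapshot-id' })

// Image-based: the only mode where resources takes effect
new DaytonaSandbox({
  image: 'node:20-slim',
  resources: { cpu: 2, memory: 4, disk: 6 },
})

// Neither set: Daytona's default snapshot is used
new DaytonaSandbox({ language: 'typescript' })
```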