S3Filesystem
Stores files in Amazon S3 or S3-compatible storage services like Cloudflare R2, MinIO, DigitalOcean Spaces, and Tigris.
For interface details, see WorkspaceFilesystem Interface.
Installation
```bash
# pick one, matching your package manager
npm install @mastra/s3
pnpm add @mastra/s3
yarn add @mastra/s3
bun add @mastra/s3
```
Usage
Add an S3Filesystem to a workspace and assign it to an agent:
```ts
import { Agent } from '@mastra/core/agent'
import { Workspace } from '@mastra/core/workspace'
import { S3Filesystem } from '@mastra/s3'

const workspace = new Workspace({
  filesystem: new S3Filesystem({
    bucket: 'my-bucket',
    region: 'us-east-1',
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  }),
})

const agent = new Agent({
  name: 'file-agent',
  model: 'anthropic/claude-opus-4-6',
  workspace,
})
```
AWS credential provider chain
When no credentials are provided, S3Filesystem uses the AWS SDK default credential provider chain. This discovers credentials automatically from environment variables, ~/.aws config files, ECS container credentials, EC2 instance profiles, and other standard sources.
```ts
import { S3Filesystem } from '@mastra/s3'

// SDK discovers credentials from the environment automatically
const filesystem = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'us-east-1',
})
```
Pass a credential provider function to get auto-refreshing credentials. This is useful on ECS or Lambda, or when using SSO or AssumeRole, where temporary credentials expire and must be refreshed.
Install the `@aws-sdk/credential-providers` package when calling `fromNodeProviderChain()` directly:

```bash
# pick one, matching your package manager
npm install @aws-sdk/credential-providers
pnpm add @aws-sdk/credential-providers
yarn add @aws-sdk/credential-providers
bun add @aws-sdk/credential-providers
```
```ts
import { S3Filesystem } from '@mastra/s3'
import { fromNodeProviderChain } from '@aws-sdk/credential-providers'

const filesystem = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'us-east-1',
  credentials: fromNodeProviderChain(),
})
```
Provider functions apply only to `S3Filesystem` API calls. When mounting the filesystem into an E2B sandbox, the mount configuration supports only static `accessKeyId`, `secretAccessKey`, and `sessionToken` values, so credential refresh must be handled outside the mount.
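One way to bridge this, as a sketch: resolve the provider chain once yourself and pass the resulting static snapshot to the filesystem you intend to mount. The resolve-and-remount step here is hypothetical glue code, not part of the `@mastra/s3` API:

```ts
import { S3Filesystem } from '@mastra/s3'
import { fromNodeProviderChain } from '@aws-sdk/credential-providers'

// fromNodeProviderChain() returns a provider function; calling it
// resolves the chain once and yields a static credential snapshot.
const creds = await fromNodeProviderChain()()

// These values will NOT auto-refresh. For temporary credentials
// (SSO, AssumeRole), recreate the filesystem and remount before expiry.
const mountable = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'us-east-1',
  accessKeyId: creds.accessKeyId,
  secretAccessKey: creds.secretAccessKey,
  sessionToken: creds.sessionToken,
})
```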
Cloudflare R2
```ts
import { S3Filesystem } from '@mastra/s3'

const filesystem = new S3Filesystem({
  bucket: 'my-r2-bucket',
  region: 'auto',
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  accessKeyId: process.env.R2_ACCESS_KEY_ID,
  secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
})
```
MinIO
```ts
import { S3Filesystem } from '@mastra/s3'

const filesystem = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'us-east-1',
  endpoint: 'http://localhost:9000',
  accessKeyId: 'minioadmin',
  secretAccessKey: 'minioadmin',
})
```
Tigris
```ts
import { S3Filesystem } from '@mastra/s3'

const filesystem = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'auto',
  endpoint: 'https://t3.storage.dev',
  accessKeyId: process.env.TIGRIS_ACCESS_KEY_ID,
  secretAccessKey: process.env.TIGRIS_SECRET_ACCESS_KEY,
  forcePathStyle: false,
})
```
Tigris uses virtual-hosted-style addressing, so `forcePathStyle` must be set to `false` (the default is `true` when a custom endpoint is provided). Create credentials from the Tigris Dashboard; access keys are prefixed `tid_` and secrets `tsec_`.
Constructor parameters
- `bucket`: Name of the S3 bucket to store files in.
- `region`: Region of the bucket (use `auto` for providers like R2 and Tigris).
- `credentials?`: Credential provider function (for example, `fromNodeProviderChain()`) for auto-refreshing credentials.
- `accessKeyId?`: Static access key ID. When omitted along with `secretAccessKey`, the AWS SDK default credential provider chain is used.
- `secretAccessKey?`: Static secret access key.
- `sessionToken?`: Session token for temporary credentials.
- `endpoint?`: Custom endpoint URL for S3-compatible services (R2, MinIO, Tigris).
- `forcePathStyle?`: Use path-style addressing. Defaults to `true` when a custom endpoint is provided, otherwise `false`.
- `prefix?`: Key prefix under which files are stored in the bucket.
- `id?`: Identifier for this filesystem instance.
- `displayName?`: Human-readable name for the filesystem.
- `icon?`: Icon shown for the filesystem.
- `description?`: Description of the filesystem.
- `readOnly?`: When `true`, write operations are rejected.
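As an illustrative sketch combining the optional parameters above (the exact semantics of `prefix` and `displayName` are assumed from their descriptions, not verified against the implementation), a read-only filesystem scoped to a key prefix might be configured like this:

```ts
import { S3Filesystem } from '@mastra/s3'

const filesystem = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'us-east-1',
  prefix: 'datasets/',            // store and resolve all files under this key prefix
  readOnly: true,                 // reject write operations
  displayName: 'Shared datasets', // human-readable label
})
```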
Properties
- `id`: Identifier of this filesystem instance.
- `name`: Filesystem name (`'S3Filesystem'`).
- `provider`: Provider identifier (`'s3'`).
- `bucket`: Name of the configured bucket.
- `readOnly`: Whether the filesystem is read-only.
Methods
S3Filesystem implements the WorkspaceFilesystem interface, providing all standard filesystem methods:
- `readFile(path, options?)` - Read file contents
- `writeFile(path, content, options?)` - Write content to a file
- `appendFile(path, content)` - Append content to a file
- `deleteFile(path, options?)` - Delete a file
- `copyFile(src, dest, options?)` - Copy a file
- `moveFile(src, dest, options?)` - Move or rename a file
- `mkdir(path, options?)` - Create a directory
- `rmdir(path, options?)` - Remove a directory
- `readdir(path, options?)` - List directory contents
- `exists(path)` - Check if a path exists
- `stat(path)` - Get file or directory metadata
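A minimal round trip through these methods might look like the following sketch. The return shapes are assumptions based on typical filesystem interfaces, not verified against the `WorkspaceFilesystem` types:

```ts
// Write, then append to, a file in the bucket
await filesystem.writeFile('/notes/todo.txt', 'buy milk')
await filesystem.appendFile('/notes/todo.txt', '\nwalk dog')

// Read it back and list the containing directory
const content = await filesystem.readFile('/notes/todo.txt')
const entries = await filesystem.readdir('/notes')

// Inspect metadata before renaming
if (await filesystem.exists('/notes/todo.txt')) {
  const meta = await filesystem.stat('/notes/todo.txt')
  await filesystem.moveFile('/notes/todo.txt', '/notes/done.txt')
}
```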
init()
Initialize the filesystem. Verifies bucket access and credentials.
```ts
await filesystem.init()
```
getInfo()
Returns metadata about this filesystem instance.
```ts
const info = filesystem.getInfo()
// { id: '...', name: 'S3Filesystem', provider: 's3', status: 'ready' }
```
getMountConfig()
Returns the mount configuration for sandboxes that support mounting this filesystem type.
```ts
const config = filesystem.getMountConfig()
// { type: 's3', bucket: 'my-bucket', region: 'us-east-1', ... }
```
Mounting in E2B sandboxes
S3Filesystem can be mounted into E2B sandboxes, making the bucket accessible as a local directory:
```ts
import { Workspace } from '@mastra/core/workspace'
import { S3Filesystem } from '@mastra/s3'
import { E2BSandbox } from '@mastra/e2b'

const workspace = new Workspace({
  mounts: {
    '/data': new S3Filesystem({
      bucket: 'my-bucket',
      region: 'us-east-1',
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    }),
  },
  sandbox: new E2BSandbox({ id: 'dev-sandbox' }),
})
```
See E2BSandbox reference for more details on mounting.