Deploy Mastra to AWS Lambda
Deploy your Mastra application to AWS Lambda using Docker containers and the AWS Lambda Web Adapter. This approach runs your Mastra server as a containerized Lambda function with automatic scaling.
This guide covers deploying the Mastra server. If you're using a server adapter or web framework, deploy the way you normally would for that framework.
Before you begin
You'll need:
- A Mastra application
- An AWS account with permissions for Lambda, ECR, and IAM
- AWS CLI installed — run `aws configure` to authenticate
- Docker installed and running
On AWS Lambda, the filesystem is ephemeral, so any local database file will be lost between invocations. If you're using LibSQLStore with a local file, configure it to use a remote LibSQL-compatible database (for example, Turso) instead.
Create a Dockerfile
Create a Dockerfile in your project root:
```dockerfile
FROM node:22-alpine

WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm ci

# Copy source and build Mastra
COPY src ./src
RUN npx mastra build

# Alpine compatibility for some native deps
RUN apk add --no-cache gcompat

# Add the Lambda Web Adapter
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.9.1 /lambda-adapter /opt/extensions/lambda-adapter

# Run as non-root
RUN addgroup -g 1001 -S nodejs && \
    adduser -S mastra -u 1001 && \
    chown -R mastra:nodejs /app
USER mastra

# Adapter / app configuration
ENV PORT=8080
ENV NODE_ENV=production
ENV AWS_LWA_READINESS_CHECK_PATH="/api"

# Start the Mastra server
CMD ["node", ".mastra/output/index.mjs"]
```
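Before pushing anything to AWS, you can smoke-test the image locally. This is a sketch, not part of the deployment itself: the container name and port mapping are arbitrary, and any provider API keys your agents need would be passed with additional `-e` flags.

```shell
# Build the image and run it locally, mapping the server port
docker build --platform linux/arm64 -t mastra-local-test .
docker run -d --name mastra-local-test -p 8080:8080 mastra-local-test

# Give the server a moment to boot, then hit the readiness path
# configured in the Dockerfile (AWS_LWA_READINESS_CHECK_PATH)
sleep 3
curl -sf http://localhost:8080/api && echo "server is up"

# Clean up the test container
docker rm -f mastra-local-test
```

If the `curl` check fails, inspect the container logs with `docker logs mastra-local-test` before tearing it down.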
This Dockerfile uses npm. If you're using pnpm, yarn, or another package manager, adjust the commands accordingly (e.g. `npm ci` won't work with pnpm lockfiles).
Build and push the Docker image
Set up environment variables for the deployment. Use the same region for all AWS services (ECR, Lambda) to avoid issues with resources not appearing across regions:
```shell
export PROJECT_NAME="your-mastra-app"
export AWS_REGION="us-east-1"
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
```

Build your Docker image. Disable BuildKit to ensure the image is compatible with Lambda's container runtime:

```shell
export DOCKER_BUILDKIT=0
docker build --platform linux/arm64 -t "$PROJECT_NAME" .
```

Note: BuildKit's default image format can cause compatibility issues with Lambda. Setting `DOCKER_BUILDKIT=0` uses the classic builder, which produces images in a format Lambda reliably accepts.

Create an Amazon ECR repository to store your Docker image:

```shell
aws ecr create-repository --repository-name "$PROJECT_NAME" --region "$AWS_REGION"
```

Log in to Amazon ECR:

```shell
aws ecr get-login-password --region "$AWS_REGION" | \
  docker login --username AWS --password-stdin \
  "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"
```

Tag your image and push it to ECR:

```shell
docker tag "$PROJECT_NAME":latest \
  "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROJECT_NAME":latest
docker push \
  "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROJECT_NAME":latest
```
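The fully qualified image URI follows a fixed pattern: `<account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>`. A quick sanity check before pushing — the values below are placeholders, substitute your own:

```shell
# Placeholder values for illustration
PROJECT_NAME="your-mastra-app"
AWS_REGION="us-east-1"
AWS_ACCOUNT_ID="123456789012"

# The URI Lambda will pull the container image from
IMAGE_URI="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROJECT_NAME:latest"
echo "$IMAGE_URI"
```

If a `docker push` fails with an authentication or "repository not found" error, comparing the tag you pushed against this constructed URI usually reveals a mismatched region or account ID.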
Create the Lambda function
Navigate to the AWS Lambda Console and click Create function. Make sure you have the same region selected as where you created your ECR repository.
Select Container image and configure the function:
- Function name: Enter a name (e.g. `mastra-app`)
- Container image URI: Click Browse images, select your ECR repository, and choose the `latest` tag
- Architecture: Select arm64

Click Create function to create the Lambda function.
Configure memory and timeout. Go to Configuration > General configuration and click Edit:
- Memory: 512 MB (adjust based on your application needs)
- Timeout: 1 minute or higher (the default 3 seconds is too low for most Mastra applications)
Click Save.
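If you prefer scripting over the console, the creation and memory/timeout steps above can be done in a single AWS CLI call. This is a sketch under two assumptions: the environment variables exported earlier are still set, and `mastra-lambda-role` is a hypothetical execution role you have already created (with the `AWSLambdaBasicExecutionRole` managed policy attached).

```shell
# Create the container-image Lambda function from the CLI.
# --memory-size and --timeout mirror the console settings above.
aws lambda create-function \
  --function-name mastra-app \
  --package-type Image \
  --code ImageUri="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROJECT_NAME:latest" \
  --role "arn:aws:iam::$AWS_ACCOUNT_ID:role/mastra-lambda-role" \
  --architectures arm64 \
  --memory-size 512 \
  --timeout 60 \
  --region "$AWS_REGION"
```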
Add environment variables. Go to Configuration > Environment variables and click Edit. Add the variables your Mastra application needs:
- `OPENAI_API_KEY`: Your OpenAI API key (if using OpenAI)
- `ANTHROPIC_API_KEY`: Your Anthropic API key (if using Anthropic)
- Other provider-specific API keys as needed (e.g. `TURSO_DATABASE_URL` and `TURSO_AUTH_TOKEN` if using LibSQL with Turso)
Click Save.
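The same environment variables can be set from the CLI with `aws lambda update-function-configuration`. The values below are placeholders:

```shell
# Set the function's environment variables (placeholder values).
aws lambda update-function-configuration \
  --function-name mastra-app \
  --region "$AWS_REGION" \
  --environment "Variables={OPENAI_API_KEY=your-key,TURSO_DATABASE_URL=libsql://your-db.turso.io,TURSO_AUTH_TOKEN=your-token}"
```

Note that this call replaces the function's entire environment, so include every variable each time you run it.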
Enable a Function URL for external access. Go to Configuration > Function URL and click Create function URL:
- Auth type: Select NONE for public access
- Under Additional settings, check Configure cross-origin resource sharing (CORS), then configure:
  - Allow origin: `*`
  - Allow headers: `content-type` (`x-amzn-request-context` is also required when used with services like CloudFront/API Gateway)
  - Allow methods: `*`
Click Save.
Verify your deployment
Copy the Function URL from the Lambda console and visit `<your-function-url>/api/agents` in your browser. You should see a JSON list of your agents. You can now call your Mastra endpoints over HTTP.
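The same check can be scripted with `curl`. The URL below is a placeholder — substitute the Function URL from your console:

```shell
# Placeholder Function URL -- replace with your own
FUNCTION_URL="https://your-url-id.lambda-url.us-east-1.on.aws"

# A 200 response with a JSON body indicates the server is reachable
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$FUNCTION_URL/api/agents")
if [ "$STATUS" = "200" ]; then
  echo "deployment OK"
else
  echo "got HTTP $STATUS -- check CloudWatch logs for the function"
fi
```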
Warning: For production deployments, set up authentication before exposing your endpoints publicly, restrict CORS origins to your trusted domains, use AWS IAM roles for secure access to other AWS services, and store sensitive environment variables in AWS Secrets Manager or Parameter Store.
Next steps
This guide provides a quickstart for deploying Mastra to AWS Lambda. For production workloads, consider enabling CloudWatch monitoring for your Lambda function, setting up AWS X-Ray for distributed tracing, and configuring provisioned concurrency for consistent performance.