How to Structure Projects for AI Agents and LLMs

Learn how Mastra is optimized for modern AI systems, and how to implement a similar approach.

Lennart Jörgens · Feb 16, 2026 · 8 min read

We recently announced Mastra Skills, a way for agents to discover best practices and up-to-date information about Mastra. Skills are one of several building blocks we have in place to support LLMs, and Mastra's documentation is the foundation everything else is built on. With the rise of AI, more projects are recognizing the importance of good, accessible documentation, and I'm glad to see it. Some of the practices below have been good ideas for a long time; they've simply become more important now.

In this post I want to share everything we do at Mastra to make it easier for LLMs to interact with our resources and to make our lives as maintainers easier. The checklist below is based on what we do at Mastra, but it is not meant to be a strict guideline. AI is changing rapidly and you need to be ready to adapt to new tools and techniques as they emerge.

Before we get into the details, I want to acknowledge the fatigue that open source maintainers are currently facing with AI-generated contributions. Most of this post is focused on how to enable more LLM usage, but at the end I'll share notes on how projects try to manage the influx.

Checklist

  • Documentation: Write well-structured, accessible, up-to-date documentation and ensure LLMs can easily access it
  • Skills: Create skills that define best practices and give the LLM current information
  • MCP Docs Server: Set up an MCP server for deeper documentation access and interactive workflows
  • Embedded docs: Embed relevant documentation into node_modules
  • Contributing setup: Based on your project's needs and your own stance, set up expectations and tooling to handle AI-generated contributions

Documentation

Your documentation is the foundation for everything. If your documentation is lacking, outdated, or hard to understand, it won't matter how good your skills or discoverability are. Everything that makes documentation good for humans also benefits LLMs.

Organize your documentation into a predictable structure with self-contained sections. You can use frameworks like Diátaxis to help you with this. It'll make it easier for anyone who contributes to your documentation to know where and how to add new content. With this systematic approach, each page has a distinct purpose so that readers can find the right page for their current needs.

Apply the same consistency to individual docs pages. Make each one standalone and use headings to break up the content and add context to each section. LLMs work with limited context windows, so a docs page shouldn't require reading three other pages before it makes sense.

Reduce ambiguity wherever you can. LLMs are good at filling in gaps, but they can also fill them with hallucinated information. Vague documentation leads to misunderstandings and incorrect assumptions. Use plain language and avoid jargon. If you need to use technical terms, make sure to define them clearly. Never assume prior knowledge. You might be surprised by what people don't know, and LLMs are no exception. Always provide enough context and explanations for someone who is new to the topic.

Keep it concise. If you use LLMs to assist with writing documentation, you'll find they often produce verbose content. Scratch all the fluff and get to the point.

Writing good documentation is half the equation. The other half is making sure LLMs can actually access it in formats they consume well.

Context files

Here's what you should implement:

  • A "Copy markdown" button on every page
  • An "Open in" dropdown to allow for quickly opening the page in different agents
  • Providing a root llms.txt file that contains an overview of all links (mastra.ai/llms.txt)
  • Serve your pages in markdown format (e.g. mastra.ai/docs.md) by adding .md or /llms.txt to the URL
  • Use content negotiation to serve those markdown files when requested by an LLM (when they send the Accept: text/markdown header)
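The content-negotiation step can be reduced to two small pure functions. This is a minimal, framework-agnostic sketch; the function names are my own, and you'd wire them into whatever middleware layer your docs site uses:

```typescript
// Decide whether a request should receive the markdown version of a docs page,
// based on its Accept header (e.g. "text/html;q=0.9, text/markdown;q=0.8").
export function prefersMarkdown(acceptHeader: string | undefined): boolean {
  if (!acceptHeader) return false;
  return acceptHeader
    .split(",")
    .map((part) => part.split(";")[0].trim().toLowerCase())
    .includes("text/markdown");
}

// Map a docs URL to its markdown variant, e.g. "/docs/agents/" -> "/docs/agents.md".
export function markdownPath(pathname: string): string {
  return pathname.endsWith(".md")
    ? pathname
    : `${pathname.replace(/\/$/, "")}.md`;
}
```

In a middleware you would then rewrite the request to `markdownPath(url.pathname)` whenever `prefersMarkdown(...)` returns true, and fall through to the HTML page otherwise.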

Beyond the docs site itself, provide an AGENTS.md/CLAUDE.md file in your getting started templates. Teach the LLM how to best use your documentation.
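As an illustration, a minimal AGENTS.md in a starter template might look like this (the exact wording is up to you; only the llms.txt index and the `.md` URL convention come from the setup above):

```markdown
# Agent instructions

- Docs index: https://mastra.ai/llms.txt — fetch this first to discover pages.
- Every docs page is available as markdown by appending `.md` to its URL.
- Prefer the documented APIs over guessing; when unsure, fetch the relevant page.
```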

If you're curious how to implement these things with Docusaurus, I've written about how to add llms.txt to Docusaurus.

Skills

Your documentation describes everything that's possible with your project. But an LLM reading it has no sense of priority — it can't distinguish recommended patterns from deprecated ones, or tell which approach is the happy path versus a footgun. Skills fill that gap.

Agent Skills are folders of instructions, scripts, and resources that agents can discover and use. Where docs say "here's how X works," a skill says "use X this way, avoid Y, and if you need Z, start with this doc page." Implement the Skills Specification and publish your skill on GitHub, and users can start installing it.

A skill file might include sections like:

  • How to set up a project from scratch
  • Which APIs to prefer and which are deprecated
  • Common mistakes and how to avoid them
  • Where to find more information for specific topics (pointing to specific doc pages)
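A skeleton for such a skill file might look like the following. The section contents are placeholders, and the `name`/`description` frontmatter fields follow the Skills Specification:

```markdown
---
name: my-project
description: Best practices and current guidance for building with my-project
---

## Project setup
Steps for scaffolding a new project and the recommended defaults.

## Preferred APIs
Which APIs to reach for first, and which are deprecated (with replacements).

## Common mistakes
Pitfalls agents run into repeatedly, each with the correct pattern.

## Further reading
Links to the specific docs pages that cover each topic in depth.
```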

Like documentation, skills are living documents. As your project evolves (e.g. new APIs, changed defaults, deprecated patterns), your skills need to reflect that.

Have a look at the Mastra Skills and our dedicated post for a working example.

MCP docs server

Mastra's MCP Docs Server accesses the documentation and provides tools to help with version migrations. MCP servers and skills live on different layers: MCP gives agents capabilities, and skills teach the agent how to use them. For pure documentation retrieval they feel pretty similar, but if you want to provide additional functionality (e.g. looking up an error in the docs, or interacting with a live running server), an MCP server is the way to go.
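For illustration, a docs-lookup tool exposed by such a server might be described like this in the MCP tool listing. The tool name and schema here are hypothetical, not Mastra's actual tool:

```json
{
  "name": "searchDocs",
  "description": "Search the documentation and return matching pages as markdown",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Search terms or an error message to look up"
      }
    },
    "required": ["query"]
  }
}
```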

Ideally, you'll also host your MCP docs server remotely so that anyone can access it. Users in cloud-based development environments or tools that don't support local MCP servers will benefit from not having to run anything on their machine.

Embedded docs

The last approach we use at Mastra to surface our documentation to an agent is by shipping it together with the package. Go ahead and open one of your projects using Mastra. In node_modules/@mastra/core/dist/docs you'll find a SKILL.md file and markdown files in a references folder.

This skill provides highly specific information for @mastra/core and nothing else from the broader docs. Rather than letting an agent explore the documentation on its own, it gets a curated list of files to search.

You can build this yourself by following what we do:

  • Add a packages frontmatter field to each docs page and add any relevant npm package names, for example:

    ```yaml
    ---
    packages:
      - "@mastra/core"
    ---
    ```
  • Build optimized markdown versions of your docs during the build process (as explained in context files)

  • Generate a manifest file that maps each package to the relevant context files (example). Alternatively, you can also link to the source files if you don't want to generate llms.txt/.md files.

    ```json
    {
      "version": "1.0.0",
      "generatedAt": "2026-02-16T08:53:13.317Z",
      "packages": {
        "@mastra/core": [
          {
            "path": "docs/agents/adding-voice/llms.txt",
            "title": "Voice",
            "category": "docs",
            "folderPath": "agents/adding-voice"
          }
        ]
      }
    }
    ```
  • Author a script that copies the relevant markdown files into your package during the prepack step of publishing, so they end up in consumers' node_modules on install. Examples: script, prepack. Our script also adds a SKILL.md file that contains a table of contents of all files copied over.
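The manifest-generation step above can be sketched as a pure function over your parsed docs pages. The `DocsPage` shape is an assumption about what your frontmatter parser emits; adapt it to your build pipeline:

```typescript
interface DocsPage {
  path: string;        // e.g. "docs/agents/adding-voice/llms.txt"
  title: string;
  category: string;    // e.g. "docs"
  folderPath: string;
  packages: string[];  // from the `packages` frontmatter field
}

interface Manifest {
  version: string;
  generatedAt: string;
  packages: Record<string, Omit<DocsPage, "packages">[]>;
}

// Group every docs page under each npm package named in its frontmatter.
export function buildManifest(pages: DocsPage[], version = "1.0.0"): Manifest {
  const packages: Manifest["packages"] = {};
  for (const page of pages) {
    // Strip the `packages` field; the manifest entry only needs page metadata.
    const { packages: pkgNames, ...entry } = page;
    for (const pkg of pkgNames) {
      (packages[pkg] ??= []).push(entry);
    }
  }
  return { version, generatedAt: new Date().toISOString(), packages };
}
```

A page listing two packages simply appears under both keys, which is what you want when a doc covers multiple entry points.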

Contributing setup

GitHub acknowledged the recent challenges OSS maintainers face in their "Eternal September" post. A pull request can be generated in seconds, but the cost of giving feedback, asking for changes, testing, and ultimately merging can be significant. Now that your project exists in a world of LLMs, you need to develop a stance on how to handle this. There is no right or wrong answer. Do what helps you build a healthy community and project. That means making sure you, the maintainer, don't burn out along the way.

Projects like Ghostty, pi-mono, and tldraw moved to invitation-only contributions. Others are building tools to evaluate the quality of contributions and filter out low-quality ones. Some projects are "read-only" and don't accept contributions at all.

In any case, you should write up clear CONTRIBUTING.md and AGENTS.md/CLAUDE.md files with your guidelines, style guides, expectations, and instructions. This will benefit both humans and agents.

Wrapping up

These are the building blocks we use at Mastra today. Some of them — good documentation, clear contribution guidelines — have been good practices for years. Others, like skills, MCP servers, and embedded docs, are newer and still evolving. The specifics will change as tooling matures, but the underlying principle won't: make your project's knowledge accessible, structured, and machine-readable, and both your users and their agents will have a better time working with it.

Lennart Jörgens, Software Engineer

Lennart is a designer turned software engineer who is passionate about working on open source projects & making the web more inclusive through them. When away from the keyboard, Lennart can be found hiking in the mountains, taking landscape photos, or reading fantasy & sci-fi books.
