Demystifying MCP: Build a Code Agent with MCP and Mastra
The Model Context Protocol (MCP) is an open standard developed by Anthropic that enables seamless integration between AI systems and various data sources and tools. By standardizing how applications provide context to large language models (LLMs), MCP simplifies the development of AI-powered applications, allowing for more efficient and scalable integrations.
In this hands-on workshop, you'll learn how to leverage MCP in conjunction with Mastra to build intelligent coding agents that streamline development workflows.
We'll start by covering the fundamentals of MCP, including its architecture and how it standardizes interactions between AI models and external data sources. Then, you'll dive into practical implementation—building a simple local coding agent that can assist with tasks like code generation, modification, and execution.
You'll gain hands-on experience with:
Understanding the core components of MCP and how it facilitates AI integration
Integrating MCP with Mastra to enhance development workflows
Building a local coding agent that automates and enhances coding tasks
Best practices for structuring AI agent interactions within a development environment
This workshop is ideal for developers and AI engineers looking to explore the intersection of AI and coding automation. Participants should have familiarity with JavaScript and be comfortable working with AI models.
Come prepared with a code editor—you'll leave with a functional coding agent that you can expand and integrate into your own development workflows.
Take your AI-powered development to the next level with MCP and Mastra. Join us for this hands-on session and start building smarter coding agents today!
Workshop Transcript
And I guess one other question right before we get started: if you've used MCP, can you tell us, maybe in one sentence, what your experience with it is? You know, are you just getting started? Have you built something with it? Have you used it in a project? It'd be cool to just hear from everyone.
We're going to have people that are all over the place as far as their experience with it. And I think a lot of people are just learning this stuff, right? A lot of this stuff is really new. It's changing every day. It's definitely not a set-in-stone standard yet either. So: Blender MCP demo, used MCP in Cursor. I think that's a pretty common one, either Cursor or Claude Desktop. And then some people that are just getting started with it. Supabase, used it with Cline.
Hello. All right. So, let's go ahead and get started. I'm going to share my screen and we'll get going here. So, we've got a lot of people on the call. That's good.
Yeah, normally, for some reason, I think when you're joining, people are not muted. Let me take one quick second here.
All right, we will start sharing and we will kick this thing off. Thank you everyone for coming today. Hopefully you can all see my screen, see some slides.
All right, let's go to the actual start and we'll get started. So, thank you for joining. We came with a goal today.
The goal is to build a code agent with MCP and Mastra. We're going to learn a lot about MCP. We're going to talk about what MCP is and why it matters. Tyler is going to help us learn how we can actually use MCP with Mastra, so you'll get an idea of how this stuff actually works, how it's kind of built under the hood. And then we'll have some time at the end for questions. But feel free to ask questions along the way. We have other people on the Mastra team here that are going to help field some questions, and of course we'll have time at the end where Tyler can answer some questions too.
So who are we? I'm Shane Thomas. I'm the chief product officer and a founder of Mastra. I was previously at Gatsby and Netlify. I built a site called Audiofeed a year and a half back now at this point, maybe a little longer. And I've been in open source for 15 years, back when I was doing a lot of Drupal stuff. So I've been in the open-source world for a long time. Please follow me on Twitter or LinkedIn and we can connect.
I also want to talk about the real star of the show, Tyler. He's a founding engineer at Mastra. He was also at Gatsby and Netlify with me, and he's kind of our resident memory and tools expert, so connect with him on Twitter as well. He had a post yesterday that got a lot of attention on the MCP docs server that we released with Mastra. So go check him out. But I'll go through this stuff really quickly, because I know you all want to just get into the code and start to see it. So we'll hand it off to Tyler soon, but I want to
talk just a little bit, at a high level, about what MCP is. It's an open standard, originally brought forth by Anthropic, for connecting LLMs to the systems where your data lives. You can kind of think of it like a USB-C port for AI applications: you have this AI, you have all these third-party APIs, and how do we connect them in a more standardized way? That's really the goal of MCP.
Before MCP, you often had to build your own connectors. You were basically writing the glue code between your AI application and a bunch of third-party tools. So the whole goal of MCP is to create a more unified, standardized way to hook your models into your different systems.
And how does it actually work? Well, you have some kind of MCP client, which, as some of you said in the chat, could be Cursor, Windsurf, Cline, or Claude Desktop. There's the MCP protocol, which is the standard. And then you have these different MCP servers it can connect to. It's kind of hard to see, but it could connect to GitHub or Slack or some kind of local file-system tools. So it's really this kind of bridge between third-party or local tools and some kind of MCP client.
As far as real-world application, there are companies using MCP internally to connect their AI agents with internal systems; it's a standardized way to connect internal tooling. There are of course all the dev platforms using MCP, like Cursor, Windsurf, Cline, and Claude Desktop, to make it really easy to integrate MCP with everything you're trying to do locally. And then there's a growing set of connectors for different services, and we'll show some of those today: Google Drive, Slack, GitHub, all kinds of different tools. You know, it's really just
that unified integration. It really allows for faster development, since you don't have to write all the glue code. The promise is you can pull an MCP server off the shelf and you don't have to write the code for how you connect to, say, Google Drive's API. And it's a separation of concerns: you split your AI application from your data and tool integrations. It keeps things a little cleaner. Instead of one big monolithic application, you get a nice, clean separation between the AI and your tools.
We'll also talk a little bit today about MCP registry options. MCP registries are kind of like npm, if you're familiar with the JavaScript world; they're really package managers for MCP servers. There are a lot of them popping up, and Anthropic has some ideas around this too, but there's Composio, Smithery, MCP Run, OpenTools, and a number of others that we're probably missing. Here are just a few screenshots: that one is from MCP Run, the bottom right is from Composio, and of course there are others too. But they make it easy for you to grab these different MCP servers and then use them in your application.
So let's talk a little bit about Mastra before I hand it over to Tyler. Mastra is an open-source AI agent framework in
TypeScript. And what's included? You can build agents with tools, memory, and tracing, and you can build agents that have access to MCP servers, as we'll see today. We have state-machine-based workflows, so you can build human-in-the-loop workflows; we try to make it a very simple API so it's easy to construct these complex workflows. We have an eval framework for tracking and measuring AI output. We have storage for RAG pipelines. We have a local development playground. The goal with Mastra is that we're opinionated but flexible, so you can get further faster, but it doesn't lock you in: you can swap out pieces as your application gets more complex and your needs change and grow.
So let's go ahead and get started. We're going to be building with Mastra and MCP. Here's the quick outline, and then I'll hand it over to Tyler. We'll talk about the Mastra docs MCP. We'll talk about some MCP registries. We'll build a code agent. We'll talk about building your own MCP server, and then we'll have a discussion on where MCP is headed. And with that, I'm going to kick it over to you, Tyler.
Awesome. Thanks, Shane. Hey everyone. I'm just going to share my
screen here. Okay. So yeah, we're going to be building a coding agent, and that agent itself is going to be able to build Mastra agents. There's going to be a bit of a progression for us to get there. We're going to start by scaffolding a new Mastra project, and we're going to install the MCP docs server, which we just launched yesterday. Some of you might have seen it; I recorded a demo video, but I'm going to do a quick run-through. Our coding agent is actually going to use it too, and that'll make it able to build other agents. So, let's jump into it.
Okay. Just going to head over to my terminal here. If you go to the Mastra homepage, you can click to copy the install script for scaffolding a new Mastra project. We'll call it mastra-project. This shouldn't take too long, but it does have to install all the dependencies and everything. Should be any moment now. Yep, there we go. Good. I'm just going to use some of the default options here; this part doesn't matter too much. And this is the part that matters more: now we get to choose to install the Mastra MCP into Cursor or Windsurf. I'm going to choose Cursor because that's the one that I use.
And let's open it up in Cursor. So the project is scaffolded here, and you can see there's this mcp.json. This is actually the Mastra docs MCP, and it's going to give Cursor the ability to understand what Mastra is, read the docs, and so on. So let's just enable it. I went to Cursor settings, then MCP servers, and I'm just going to click enable on it. This might take a little bit of time, since it has to actually install the docs server. This little yellow dot will turn green once it's installed. Sometimes you have to refresh it. And there we go. Okay.
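For reference, the generated mcp.json will look something like this. The exact package name and arguments may differ from what the installer actually writes, so treat this as a sketch:

```json
{
  "mcpServers": {
    "mastra": {
      "command": "npx",
      "args": ["-y", "@mastra/mcp-docs-server"]
    }
  }
}
```

Cursor reads this file to know how to launch the docs server as a local process over stdio; the yellow-to-green dot is it spawning that command.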
So, now that we have that set up, what can we do with it? We've got our Mastra project here, with an example weather agent. I'm just going to show you real quick what we can do. I'm going to tell it to add memory to this agent. Just going to click plus here. Sometimes you actually have to start a new chat after you add an MCP server in Cursor, so just to be sure, I'm going to do that. "Add memory to my agent."
All right. We can see it knows that it can check the docs, and it's actually doing that: it's going to check the memory reference and the memory guides. Memory is going to allow our agent to have a long-running conversation where it remembers all the previous messages that a user sends to it. Without that, it only remembers the current message that you send it, which is not quite as useful.
Okay. So it installed the memory package, @mastra/memory, and now it's actually going to add it to this file for us. You could do this without the Mastra docs MCP tool, but it would involve pasting documentation links into Cursor: you'd go to the mastra.ai docs website, copy a link in, and so on. This is just a lot more seamless. It can keep looking up more docs links as it needs to, and if it runs into some problem, it can generally fix it for you, which is cool. So, yep, it added memory here; we've got a new Memory class instance with some default options. I'll just delete those because we don't need all of that. And I'm going to delete this file because I have a global environment variable for OpenAI. Let's see here. Let's just run it to make sure that it worked. So I cd'd into the project and we'll run pnpm dev, which is the default dev script for starting Mastra. We'll open up the playground. We've got our weather agent, and it has a chat history, which means Cursor correctly added memory. Which is kind of cool. So let's actually head back over to Cursor.
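To make the idea concrete, here is a toy sketch of what memory buys you. This is an illustration only, not the real @mastra/memory API: without memory, each call sees only the current message; with it, prior turns are replayed into the context.

```typescript
// Toy illustration of agent memory (not the real @mastra/memory API):
// without memory each call sees only the current message; with it,
// previous turns are replayed into the model's context.
type Message = { role: "user" | "assistant"; content: string };

class ThreadMemory {
  private messages: Message[] = [];
  remember(msg: Message): void { this.messages.push(msg); }
  recall(): Message[] { return [...this.messages]; }
}

// Stand-in "agent" that just reports how much context it received.
function runAgent(memory: ThreadMemory, userText: string): string {
  const userMsg: Message = { role: "user", content: userText };
  const context = [...memory.recall(), userMsg]; // history + current turn
  const reply = `seen ${context.length} message(s)`;
  memory.remember(userMsg);
  memory.remember({ role: "assistant", content: reply });
  return reply;
}
```

In the real thing, that replayed history is what makes the weather agent's chat history show up in the playground across turns.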
So you could do other things too. I just added memory, right? But we could also ask, "What are the latest Mastra features I should know about?" and it's going to read the Mastra blog and check the changelog. And yeah, we've got some new enhancements to workflows, new voice features, new memory improvements. Agent network is an interesting one; that's a new one. Let's actually get it to add an agent network and see how well it does with that. This is a brand new alpha feature, agent network, that the team is iterating on right now. So we'll see.
Okay, so it's going to create an agent network for the weather application. It's going to add a second agent that can provide weather-related recommendations based on the data. Okay, interesting. Never seen this in Cursor before. Sure, I'll pay 20 bucks for faster completions. All right. So, what did we get? The weather agent is what we had already. Now we have a recommendation agent and we have a weather network. That looks good. Okay, it's realizing that it made a mistake here. See, it added this routing model to the agent network, and it realizes that's wrong. So it's checking the documentation again, and it actually fixed it. That's kind of cool. Let's accept the file.
And yeah, I hope you can get an idea of why this would be useful. We're going to use this in the coding agent: we're going to give that agent the ability to do what we just did in Cursor here.
Yeah. So, I guess it's just kind of talking about the agent network now. That's cool.
So let's move on now. In building up to this coding agent, the next thing I'm going to show you is how to use MCP with Mastra. And the first thing I'm going to talk about is using it with different registries. Composio is a really good one, so I'm going to show Composio, MCP Run, and Smithery. But there are some other really great ones too, like OpenTools, Pulse MCP, Graphit, and others.
Let's just take a look here. I have a terminal UI that I built for this demo, so we can see a demo of using each of these. We'll have our coding agent for later; we've got Composio, MCP Run, and Smithery like I was mentioning. So the code we're looking at here, this is for Composio. And you know what, I'm going to really quickly just show you what Composio looks like. When you go to their website, they have an MCP section, mcp.composio.dev, with all these really great MCP servers that you can install. For example, there's a Gmail one right here, right? You would come in here and click generate. I'm not going to click it, because it's actually a private URL, but you would then copy that URL and use it for your MCP server.
I've already done that. If you jump back over to the code, you can see I added it as an environment variable because, as I mentioned, these are private URLs. It'll look something like what you see in this comment here. But let's just run it. This terminal UI lists out all the available MCP servers that I have installed. So for Composio, I installed Google Sheets and Gmail. And let's just ask it a question here. This is a bit risky, but: based on my emails, what can you tell me about me? Please look at my five most recent emails to tell. Don't list the email contents and don't name any names. So risky. We'll see.
Hopefully it doesn't say anything weird. What have we got here? Okay. Based on your five most recent emails, here's what I can infer about you. Okay: professional engagement. Yeah. Event participation. We're doing that right now. Doing it live. Sort of.
So yeah, the main thing here is that it worked, right? We saw it using this tool, and then it finished with a tool result, and we can interact with it. I already set up Composio here, but what would normally happen when you interact with the Composio MCP is that it needs to authenticate with these services first, Gmail or Google Sheets or whatever. The agent actually gives you a URL, you open it in your browser, it goes through an OAuth flow, and then the agent is just good to go after that.
So let's look at the code a little bit. We have this MCPConfiguration class. What it does is let you pass multiple MCP server configurations, right? We were just looking at that; we added these two. You create your regular Mastra agent and you just pass tools onto it: your MCPConfiguration has a getTools method you can call, you just await that, and then the tools are wired up to your agent and it should all work. For the demo I'm showing this with a terminal UI, but I'm also going to show you that we can do this and look at it in the Mastra playground. I think this will take a little bit of time to start up, and the reason is that it's actually starting up three different registries: the Composio one, MCP Run, and the Smithery one. Should only take a few more seconds
here. Oh, we got an error. So, we have something misconfigured, probably. What have we got here? "Creating a new WebSocket connection failed," connecting to one of these servers. One thing about MCP is that it's quite new still, right? It just launched at the end of last year. So the error messages actually aren't always the best right now, and that's something we're iterating on. So let me do a little bit of quick debugging. I'll just remove a couple of these from my Mastra instance here. We'll start it up with just the Composio one, because we saw that that was working. Oh, interesting. Maybe it's the agent network code that was written earlier. Uh oh. You know what? You might be right. Let's see here. Oh, you know what? No, my bad. Okay, it worked. The problem is that I had the playground open already, on the weather agent page, so it was pinging my server. My bad. There is no weather agent there, so that's the error we were seeing. Let me just restart this quickly to show. This is what it should look like.
And now if I come in here, we see our Composio agent, and it has all the tools and everything. You can click on one. Clicked by accident there. But you could click on any of these to debug one: check active connections, click on a different one, make tool calls and pass in arguments yourself. So you can interact with these agents directly in the playground as well. Something we're working on adding right now is the ability to see tool calls and tool results in the playground, but we haven't shipped that quite yet; that's probably coming next week. So for now, that's actually the reason I'm going to show it in the terminal UI: there, I can log out the tool calls and tool results, which should make it easier to see what's happening. Okay. So let's take a
look at another one. So we've got MCP Run. MCP Run is another registry; I'll just quickly open it up in my browser. Similar to Composio, different platform: you have all these different MCP servers and you can choose one and install it. I've already installed a couple. For this one, I have the Perplexity and Sequential Thinking MCP servers installed. Similar to Composio, you would click this generate button right here in the top right corner and you'd get a URL. And if I head back over to the code, you can see that's what I put here. Again, that's a private URL, so that's why I have it as an environment variable.
But yeah, we've got our MCP Run example going over here on the side, and we can see those two tools are connected. And I have a prompt here. This one might not actually work; we'll see. I'm just asking it to plan a day trip, or whatever, a vacation to Kyoto. This is using GPT-4o, which is not a thinking or reasoning model, but using the Sequential Thinking MCP server, it can kind of act like one. That's what we'll see here: it's calling the Perplexity chat to search the web, but in between we'll actually see the sequential thinking as well. It's iterating on the prompt that I gave it. It does a sequential-thinking step, from that step makes a web search, adds another thinking step, and so on, kind of in a loop.
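That alternation can be caricatured in a few lines. This is purely an illustrative mock, not the Sequential Thinking server's actual protocol:

```typescript
// Mock of the think -> search -> think loop: each "thought" proposes
// the next query, a fake web search answers it, and the loop repeats.
function mockWebSearch(query: string): string {
  return `results for "${query}"`;
}

function planWithThinking(goal: string, steps: number): string[] {
  const transcript: string[] = [];
  let query = goal;
  for (let step = 1; step <= steps; step++) {
    transcript.push(`thinking: next I should search "${query}"`); // thinking step
    transcript.push(mockWebSearch(query));                        // search step
    query = `${goal} (follow-up ${step})`;                        // refine for next round
  }
  return transcript;
}
```

The real server just returns structured "thought" steps as tool results; it's the agent loop replaying them that produces the reasoning-like behavior on a non-reasoning model.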
I didn't spend too much time on this one, so I think it might run longer than we have; I'm just going to cancel it. But you get the idea. It would go for a while. I could probably have it run for a fixed number of steps or something like that, and then it would give us a summary.
The other one I wanted to show you is Smithery. I'm not going to run this one; you can run it yourself. All this code is going to be open source, a repo that you can clone and run. But for Smithery, it's similar. You don't sign in. I think they did add a sign-in feature recently, but I'm not using it for the example here. Let me just open it in my browser. Same thing with Smithery: you can search through and pick all the MCP servers that you want, and you'd be able to copy the server configuration from here and add it in here. The example here is exactly the same, so like I said, I'm not going to run it; it's just to give you an idea that you can use more than one registry. You can also combine them. That's not something I've shown here, but this one is pulling from Smithery, and we could easily take one of these, like the Composio one, and also add it to our MCPConfiguration. That's the nice thing about the MCPConfiguration class: you add as many MCP servers as you want and you don't need to change the rest of your code. The tools will just get connected automatically. Okay. So now that we've seen this MCPConfiguration, how to connect it to the agent, and the registries and everything, let's look at the actual coding agent.
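A minimal sketch of that aggregation pattern, as a mock rather than the real MCPConfiguration from @mastra/mcp (the real one is configured with commands and URLs per server, not in-memory tools):

```typescript
// Mock of the pattern: one object collects tools from any number of
// MCP "servers" into a single namespaced map handed to an agent, so
// adding or removing a server never changes the agent code itself.
type Tool = { description: string; execute: (args: unknown) => Promise<unknown> };

class MockMCPConfiguration {
  constructor(private servers: Record<string, Record<string, Tool>>) {}

  async getTools(): Promise<Record<string, Tool>> {
    const all: Record<string, Tool> = {};
    for (const [serverName, tools] of Object.entries(this.servers)) {
      for (const [toolName, tool] of Object.entries(tools)) {
        all[`${serverName}_${toolName}`] = tool; // namespace by server name
      }
    }
    return all;
  }
}
```

The agent-facing contract is the part that matters here: the talk's `tools: await mcp.getTools()` hands the whole namespaced map to the agent in one line.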
Actually, sorry, before that I want to show one quick thing. The terminal UI is also going to be in the repo that I mentioned. We're not going to write the code for it, because I think it's not super relevant, but I'll show you real quick. It's just a while loop that uses Node's readline. You can type in your prompt; when you send it, it sends the prompt to your agent and streams back the response. And as we're streaming that response, we have all these different parts on the response, right? Like a text delta, tool call, tool result, and so on. This is what I was mentioning before: we're just logging out the tool calls and the tool results.
Okay, so jumping over to the coder agent. I've got a to-do file in the repo here. All right, here's what we're going to do. We're going to build a code agent that can build other Mastra agents. That'll be interesting. This agent is going to need the Mastra docs MCP, right? Because it needs to understand how to build Mastra agents.
It's going to need a good system prompt and shell access, so it can run commands: installing pnpm packages, scaffolding Mastra projects, stuff like that. It needs file-system access so it can read and write code files. Web search it technically doesn't need, but we're going to add it anyway, just to show how easy that is. You know, maybe it needs to check some docs on an API we're integrating or something like that. And then there are these two that I added that we're actually not going to do today. Most of these coding agents index your code and integrate with an LSP or tree-sitter or something like that, so they can understand how all your code is contextually related to other code. That's a bit too complicated for this, so we're going to skip it, and as you'll see, the coding agent is going to work without it. And human in the loop: this would be the ability to approve or deny the actions that it's taking. It can be a little bit risky to just give it full shell and file-system access. I'm going to take the risk for the demo, but human in the loop would be really good to add. Maybe we can show that in the next MCP workshop.
But yeah, let's take a look. The coding agent I have here right now has no MCP servers at all; it just says that it's a helpful coding agent. We're going to use Claude, because Claude is very, very good at writing code, and we have memory attached to it. That's pretty important for coding; it needs to know what it's doing. So let's start it up and make sure that, in this basic form, we can chat with it. I'll ask it what Mastra is; it shouldn't know at this point. Yeah, it doesn't have any idea. What can you do, though? Probably nothing. Well, it can probably tell us about code. That's pretty smart, but yep. Okay, quit out of that. So,
let's take a look here. The first thing for the coding agent, we said, was the Mastra docs MCP. So let's install that; I'm just going to copy a code snippet in. All right. I could probably get Cursor to do this, but I don't have Cursor set up with the Mastra docs MCP for this repo, so we're just going to do it one code chunk at a time. That's really all it takes, though, right? We just use pnpx, or, you know, pnpm dlx, same thing; both of those are fine. Now when I run it, that question we asked before, it should know now what Mastra is. All right. So yeah, just by adding this little snippet, it's now able to call that tool, look at the Mastra docs, and it has a pretty good answer for us: it's an open-source TypeScript agent framework designed to help developers build AI applications. So, 100% right. Okay, that one's done. What's next? Good system prompt.
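As a rough sketch of what such a prompt-builder can look like (this is my own illustration, not the workshop's prompts file), embedding the working directory and file list he sets up next:

```typescript
// Compose a coding-agent system prompt that embeds the current working
// directory and a pre-computed file listing, as described in the talk.
// The exact wording here is illustrative only.
function buildInstructions(cwd: string, files: string[]): string {
  return [
    "You are a helpful coding agent with shell and file-system tools.",
    "Make changes to code directly instead of pasting large snippets.",
    "Always check the available documentation if you are not 100% sure.",
    `Current working directory: ${cwd}`,
    "Project files:",
    ...files.map((f) => `- ${f}`),
  ].join("\n");
}
```

Baking the cwd and file tree into the instructions is what lets the agent orient itself on the very first turn instead of burning tool calls on discovery.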
So, I'm going to copy a snippet again. I have this prompts file imported. I'm not going to show the first part of the prompt, and the reason is that I actually copied the leaked Cursor prompt, so I'm not going to show that to you guys; you can do whatever you want to do. All this prompt is saying is: you're a coding agent, you have access to tools, do what the user says, don't show a bunch of code snippets, just make changes to the code, and so on. I'm going to add my own little bit onto that, which is not much: make sure you check the available documentation if you're not 100% sure. We're also going to make sure this coding agent knows the directory that it's in, so I'll put the current working directory right in the system prompt, and we're going to list out all the files in the current working directory. I think I need another import for this as well. Again, this will be in the repo. I'm not going to show the function, but what it does is recursively list all the files in the current working directory, all the folders and all the individual files, so the agent can immediately get a sense of what it's looking at.
Okay. At this point we do have our good system prompt, but it's still not going to be able to do anything besides what we saw the first time. So let's move on to the next one before we test it again: we're going to add shell access, so it'll be able to run commands and so on. I'm going to copy this in, right up here, and I'll show it. We're doing a similar kind of thing: the npx package here is the name of the shell MCP server package, and we're saying the allowed path it can run commands in is the current working directory. I added this OpenAI key to it because later we'll need it when the agent is running its own agent; when it's doing that, that agent is going to need to make OpenAI calls. All right, now we've got that. Let's restart the terminal and see what it can do now. So what have we got here? A maximum call stack error. Yeah, I must have messed something up while I was doing this.
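For reference, the recursive lister he described a moment ago might look something like this. This is my sketch rather than the workshop's actual code, with the depth cap that prevents exactly this kind of runaway recursion:

```typescript
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

// Recursively list files, capped at a maximum depth so huge trees
// (node_modules, .git, ...) can't blow the stack or the system prompt.
function listFiles(dir: string, depth = 3): string[] {
  if (depth <= 0) return [];
  const files: string[] = [];
  for (const entry of readdirSync(dir)) {
    if (entry === "node_modules" || entry === ".git") continue; // skip noise
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) files.push(...listFiles(full, depth - 1));
    else files.push(full);
  }
  return files;
}
```

Without the `depth` guard, a symlink cycle or a deep dependency tree can recurse indefinitely, which is consistent with the error he hits here.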
What have we got? Oh, you know what, I think I was making some quick changes right before, and I must have changed something in this. We'll just do something quick and hacky: if the depth it's using to get the directory file list is greater than, I don't know, let's say three, since it doesn't need that much, we just return. And hopefully that fixes it. Oh, this needs to be a string. Okay. So that worked fine.
All right. It should now have access to everything we just added, so let's send it a prompt: check how much disk space I have. And it was able to run a shell command now, using the shell execute-command tool, and it got a response: I have half my hard drive left. Okay, so we're getting closer to an agent that's able to work as a coding agent, but it still cannot change any files or anything. So the next thing we're going to do is give it that ability. Again, I'll copy a code snippet in. We're going to give it a text editor MCP server; you can see this is @modelcontextprotocol/server-filesystem. There are actually a bunch of reference MCP servers made by Anthropic and other folks who contribute to that repo. Their filesystem one actually works pretty well. I tried a couple of different ones; all of them work decently well, but I ran into some bugs. This one, I have not seen any bugs, which is really nice. Similar to the shell MCP, we're just going to tell it that it can read and write files in the current working directory. So, looking at our to-do list: we've got our shell access, and we're going to try file-system access now. Start it up. And we can see a lot more tools; you have all these text editor tools. Excuse me. And just to test it, we'll say: list the files in the working directory. Cool. Yeah, it's seeing all the files for this MCP workshop repo. That's good.
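For reference, the server entry for that filesystem MCP looks roughly like this. The package name comes from the talk, but the surrounding keys and the path are my guess at the shape, so check the docs for whichever registration you're using:

```json
{
  "textEditor": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
  }
}
```

The trailing path argument is the directory the server is allowed to read and write; here it would be the project's working directory, mirroring how the shell server was scoped.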
But this is not the right Mastra project. So, well, it is a Mastra project, but we don't want it working on itself. It might break itself. So, let's jump over to the one that we scaffolded at the
very beginning. And see, oops, sorry. The zoom controls are over the bottom of my terminal here. Okay, we're in the right directory. And what I
have is this alias mcptui. All this does is run the terminal UI script from the workshop repo, but from somewhere else. So, we're in
that other project and I should be able to just run mcptui and we get the same terminal UI, but now we're in the other directory. So, let's see. Tell me about my project. So, you know, it's reading all the files in src/mastra. It's reading the specific
agents file. Um, it sees there's a weather agent. It's going to look at the tools.
See, it's kind of already able to understand a lot about this project. Um, let me just open this. Oh, you know what? Actually, I forgot I have this open in Cursor. So, I'll jump over there to look at it.
So, what I'm going to do, I don't know if this agent network works. I haven't used agent network myself. So, I'm just going to delete it real quick.
That kind of gets us back to the point we'd be at when we scaffolded a new project. Okay, so the weather agent has tools and memory. Okay, so let's tell it to add an eval to this agent. So, add
a Mastra eval to my agent. So one thing I'm noticing is that my agent is not actually using the Mastra documentation, although it seems to understand. No, wait, it is here. Hold on. Let's quit this for a second. We'll
go back over here. So, we have the docs server. So, that should be working. However, obviously our coding agent
is not as sophisticated as something like Cursor. So, we're going to just tell it in the system prompt: always check the Mastra docs before editing or adding Mastra features to an application. Okay. So, we'll restart it and we'll try that again. Oops.
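Concretely, that's just one extra line in the agent's instructions string. A sketch, where everything except the docs rule is placeholder text, not the workshop's exact prompt:

```typescript
// Sketch: the only change is the added docs rule in the system prompt.
const instructions = `
You are a coding agent with shell and filesystem access.
Always check the Mastra docs (via the docs MCP tools) before
editing or adding Mastra features to an application.
`;
// Passed to the agent as before, e.g. new Agent({ name, instructions, model, tools })
```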
Yeah, just to be sure, "What is Mastra?" We'll ask it that first. Yeah, so it can definitely access the Mastra docs MCP, and once it's done, we'll try what we did before: add an eval to my agent.
So with this terminal UI, every time I restart it, it's actually creating a new memory thread. So it can't remember what we talked about the first time we asked it to do that. So that's why it's checking the directories and everything again. It's reading the
files. Oh, you know what? It's actually seeing the eval that the first run added incorrectly. Oh, I probably shouldn't
have stopped it, because it was about to check the Mastra docs. Okay, let me just delete it real quick so we can start from a clean slate. I think it actually would have worked if I hadn't stopped it, but let's just make sure. And we have about 15 minutes left, Tyler. Okay,
all right, we'll do this one and then we're gonna have a guest demo real quick. So back to the coding agent. We'll try this one more time: add a Mastra eval to
my weather agent. Okay, there we go. That's what we wanted to see. So, it's checking the docs about evals to implement it properly. It's
reading the evals overview docs, and it looks like it's going to check the weather agent code and install the eval package. So I think it might have worked. I think we've got an agent here that can write code, run shell scripts, and actually create other Mastra agents.
Let's actually go to Cursor to see what it did. So the difference here with our agent is that it doesn't have access to see any of these linting errors. In Cursor we saw that, you know, with the agent network it used an incorrect key, and then it saw that, checked the docs again, and fixed it.
For us here, it doesn't have that capability. That's something we'd have to add in addition. But at least we can see that it was able to write code, read the docs, run shell scripts, and so on. So I think we have a coding agent. Not
quite as good as Cursor. We need a few more features, but pretty close. So, next I'm gonna call up Daniel Lou from Netlify. Daniel, are you here? Daniel made a pretty cool
MCP server. He used Mastra to kind of debug it and work on it. Um, oh, there you go. Yeah. Do you want to give us a quick demo? Yeah, for sure. And thank you. Uh, yeah.
So, I actually first heard about MCP servers through your blog, Tyler. I thought it was very interesting, and we started looking more into it. And so right now we're kind of exploring what MCP servers unlock.
So I guess a little background to understand what this MCP server that I wrote does. A lot of the focus so far has been on the agent side of things, like how do you create an agent to consume MCP servers. So I just wanted to show the other side of things: creating
an MCP server that can be used in Mastra. What this MCP server is doing is essentially just starting up a package called Content Engine. It's basically just a data syncing framework powered by connectors and plugins.
And so it's just pulling in data from multiple sources into a single data store, giving me a unified GraphQL API, starting up a server to aid with local development, and then, the interesting part, starting up an MCP server to give agents an interface to interact with the data and the schema in this data store.
So this is just the setup for the data store itself. I'm using a package called FastMCP. It's pretty easy to use. The implementation was
incredibly straightforward. This didn't take long at all to get up and running. So it exposes this addTool function here. I'm just going to show these two
tools that I wrote here: one is just how to get the schema for the GraphQL API, and the other one is how to execute queries against it. And so, using this, I started with Claude first to kind of test this out. It was kind of frustrating, and so
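Those two tools look roughly like this in FastMCP. A sketch under assumptions: `addTool` with a Zod `parameters` schema and an `execute` callback is FastMCP's shape as I understand it, and `loadSchemaSDL`/`runQuery` are hypothetical helpers standing in for Content Engine's internals, not real APIs from the demo.

```typescript
import { FastMCP } from "fastmcp";
import { z } from "zod";

// Hypothetical helpers standing in for Content Engine's GraphQL layer.
declare function loadSchemaSDL(): Promise<string>;
declare function runQuery(
  query: string,
  variables?: Record<string, unknown>,
): Promise<unknown>;

const server = new FastMCP({ name: "content-engine", version: "1.0.0" });

// Tool 1: let the agent read the GraphQL schema so it knows what to query.
server.addTool({
  name: "get-schema",
  description: "Return the GraphQL schema for the data store",
  parameters: z.object({}),
  execute: async () => loadSchemaSDL(),
});

// Tool 2: let the agent execute queries against the unified data store.
server.addTool({
  name: "execute-graphql",
  description: "Run a GraphQL query against the data store",
  parameters: z.object({
    query: z.string(),
    variables: z.record(z.unknown()).optional(),
  }),
  execute: async ({ query, variables }) =>
    JSON.stringify(await runQuery(query, variables)),
});

// Expose the tools over stdio so MCP clients (Mastra, Cursor, ...) can launch it.
server.start({ transportType: "stdio" });
```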
I ended up actually switching over to Mastra. And, like, one of the... this is such a small thing. Let me just open up the agent here. Where is it? Yeah, this was such a small
thing that just made development so much easier. So when I was trying to configure the server in Claude, the way that you provide it is just through unstructured JSON. And so it was kind of hard to figure out how you actually configure this. And because it's so new, documentation is kind of sparse and examples are sparse. But just having
type safety here is such a huge thing, because that doesn't exist even if you go into the Cursor settings for MCP. So if I edit this, there's just no type safety here. So when I was trying to use Claude, it just kept adding it as `endpoint`, and I couldn't figure out why my MCP server wasn't working. And
so, I had a lot of trouble with this too when I was first using MCP originally. Yeah. And it just kept hallucinating new properties to add here, and I just had no idea what actual properties existed.
And to go back to my MCP server. So I have my MCP server running here, and then I have a Next.js site running here. And so,
for the sake of time, I don't think I'm going to walk through creating it. What I basically did was take a screenshot of this, paste it into Cursor, and just say, "Hey, use my GraphQL API and give me a page that consumes all of this." And so if I go into
my agent here, it created all this. I'll delete these products just to quickly show... So, "What products do I have available?" or product types. And this is something I was running into before: it seems to stall sometimes. "Do you have access to my MCP server? What tools are available?" So, we're almost out of time. I guess let's just see, maybe give it that prompt one more time
and then I think we have to jump over to a quick Q&A. Oh, yeah. Okay. So, it's finally calling the tool. So, it's
pulling up my schema. I'm not going to show it creating components because that'll take a little bit of time, but basically it's just turning any GraphQL API into something that an agent can work with easily. Nice. Yeah, that's pretty cool. I think that's the power of MCP: you know, just having these essentially
plugins that you can install anywhere. Like Daniel installed it in Mastra and in Cursor, and as we saw, you can do the same with the Mastra docs MCP. So, yeah, it's pretty cool. There's
a lot of power in how it works. And with that, we're going to start to wrap up. So, we can talk a little bit about the future of MCP. We'll send
these slides. We have some links here that talk about, you know, a well-known MCP directory. There's a GitHub discussion, and the blog post that we wrote on why we're all in on MCP. We have some ideas on the future there. We
talk about Anthropic's thinking on an MCP server registry, and this idea around stateless MCP servers. And before we try to answer a few questions live, some links: please go to our GitHub and give us a star if you haven't already. Join our Discord if you want to chat about this
stuff after this. All of the team, you know, we're in and out of Discord all the time. And then follow me and Tyler on Twitter, connect with us on LinkedIn. And if you do want to join the Mastra Cloud beta, we're slowly rolling out access to our cloud product: go to the website, click on request or get access, drop your email in, and you will
eventually get access to it. We're slowly rolling it out. And with that, are there any questions that we want to answer?
Okay, there are questions. So, it looks like Abby, thanks for jumping in and helping out with all these questions. We're not doing Python ports of anything. Just letting you guys know straight up. Yeah. No, no plans on that.
No Python. I guess a question: what about support for other MCP features? Are prompts, server push messages, etc. planned? Uh,
yes. So we're going to support more of the MCP features that we don't today. And then when some of these new RFCs make it into MCP, like the well-known directory and so on, we'll also support those. The blog post
that Shane mentioned has some code snippets on how we think we want to use some of these features when they come out. One thing that we want to do is allow you to point at a registry from that MCPConfiguration class and then just have TypeScript autocomplete right in your IDE for all
the different servers that you can install. So, you'll just start typing something and then it'll autocomplete. You press whatever your IDE uses, tab or whatever it is, to autocomplete. You'll
see everything, and then you'll see all the required configuration for each server and so on. Um, there's a question. Oh, yeah. Go ahead. There's a question about setting up a Mastra MCP server to use in Visual
Studio Code, if that's possible. I haven't done it myself, but as long as it supports MCP servers, you should be able to install it in literally any IDE or client; anything that can install an MCP server will work with this. And we do have a docs page. So, you could show that real quick.
Actually, you just go to the homepage, click on docs, and then in the top part of the sidebar here there's "using with Cursor," Windsurf, but we have a manual installation section that just shows the snippet that you need to put into your MCP client. So you would use that. Awesome. Well, I think most of the questions have been answered now. Thanks
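That manual snippet is the standard `mcpServers` JSON that most MCP clients accept. A sketch: the `@mastra/mcp-docs-server` package name and exact args are assumptions here, so check the docs page for the current snippet:

```json
{
  "mcpServers": {
    "mastra": {
      "command": "npx",
      "args": ["-y", "@mastra/mcp-docs-server"]
    }
  }
}
```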
for everyone on the team jumping in. And with that, as we said before the demo, this recording and the slides will all be sent out in an email to the address you registered with. So, expect that in a few hours. It'll come out and you'll have access to everything
we did today. The GitHub repo will be there so you can download the code, play with it, and, you know, see if you're comfortable giving your agent access to your shell directly or not. Thank you everyone for attending. Tyler, thanks for showing all these code examples and running through this.
Daniel, thanks for the guest demo. And yeah, it was good seeing you all. Thank you all for coming.
Thanks everyone. If we didn't get to your questions, go to Discord. Yes, please, please come to Discord with your questions if we didn't answer them, and we will do our best to answer them there. See you all. If you're not in Discord, go to mastra.ai. You can join
it from the homepage. Yeah, it's in the footer of the homepage, or if you go to the docs, it's in the header of the docs as well. So, we'll see you all there. Thanks everybody. See you later.