Build and deploy your first agent with Mastra
2025 might be the year of AI agents, but this session is your chance to actually build and deploy an agent yourself.
AI agents are autonomous systems that can perceive their environments, make decisions, and independently take actions to achieve specific goals. With today’s tools and frameworks, it’s never been easier to build them, and Mastra.ai co-founder Shane Thomas will show you how to do it in less than an hour.
Use agents to build things like a personalized travel agent that puts together vacations for you. Or use multiple agents to simulate a 'team' of writers and editors that seamlessly collaborate on content.
This event is open to all devs and aspiring AI engineers, regardless of background, so feel free to share the invite link. It's recommended that you have a code editor and Node.js v20+ installed prior to the session. You should be comfortable with basic JavaScript and the command line.
This isn’t a talk; it’s a live workshop where you’ll walk away having built and deployed your first AI agent.
Workshop Transcript
Hey everybody, we will get started in just a few minutes here; I'll probably give everyone another two or three minutes. While we wait, for those of you that are here, if you want to drop where you're calling in from in the chat, that'd be kind of cool. I am calling in from San Francisco. "Hi Shane, good morning." Is it morning in San Francisco? It is, yep, it's 9:00 AM. "Good evening from France." It's 6 PM over there, so good evening to you. We have people calling in from all over: good evening to many of you, good afternoon, good morning New York City. All right, Paris, Boston; I used to live near Boston at one point. We'll probably wait another sixty seconds and then we'll slowly get this train running. Got Austin, Texas; I love Austin, Texas. Okay, that's really cool, I'd love to talk more about that, Graham. All right, I'm going to share my screen. Hey Sam, good to see you. I'll share my screen and we'll get this thing going. I know we'll have a lot of people that come and go; that's pretty normal. Let me make sure it is recording; one time I forgot. This will be recorded, so if you do have to leave, you'll get an email; it usually takes a few hours, so it'll come out later today. Let me get my screen shared and we will kick this thing off. All right, you should all be able to see my screen, I hope. Someone want to drop a note in the chat if it looks good? Sam says it's good, so we're going to say it's good.
Okay, so the goal today is to build and deploy our first AI agent. We're going to learn a little bit about what AI agents are, learn a little bit about Mastra, and talk through how you might actually build and deploy one. We'll walk through some common AI engineering concepts and how they map to core Mastra primitives: agents, workflows, and evals.

So who am I? My name is Shane Thomas. I was formerly in engineering and product at Gatsby and Netlify, I built a website called Audiofeed a couple of years ago, and I've spent 15 years in open source. Believe it or not, I started in Drupal about 15 years ago, so I spent a lot of time in the Drupal community building Drupal modules and the like, which is all open source if you're familiar with it. I'm one of the founders and chief product officer at Mastra, so go ahead and connect with me on Twitter or LinkedIn
and we can chat there. So let's talk about what agents are, because you've probably been hearing a lot about them, depending on where you're at in your journey building AI applications. There's a lot of hype around what is considered an agent and what's not, and unfortunately there's no single clear definition; marketing terms have taken over and now agents are everywhere, everything's an agent. The simplest way we like to describe it is that an agent is software with non-deterministic code. You've got to think about agents as a spectrum: there are things that are less agentic and things that are more agentic, so the question isn't whether something is an agent, it's how agentic it is. Something less agentic would be a mostly deterministic workflow with maybe a call or two out to an LLM. Maybe it's a simple trip planner: go get the flights, go get the hotels, and then an LLM puts those together into a package. That's maybe a single LLM call that combines data from multiple places; it's not a fully autonomous agent, but it is somewhat agentic. Then you have things that are a bit more agentic, where LLMs build and execute their own plans. Think of an AI customer service assistant with access to a knowledge base: you ask it a question, it can go look for information, and it can respond to you. Other things in this category: if you've used services like Lovable, Bolt.new, or v0, or editors like Cursor or Windsurf, they do these planning steps, so they fall on the more agentic end of the spectrum. And then there's the most agentic stuff, like a self-driving car. If you've ever been to San Francisco or LA, you've maybe seen the self-driving Waymos; those are much more agentic because they're much more hands-off, with very little
human input, although there is still some required sometimes.

A lot of people ask: do you really need an AI agent framework? I found this tweet kind of funny. It says frameworks are stupid, you just need an API key, and a loop, and a server, and good API design, and a tool execution sandbox, and a database, and some middleware code, and some file storage, and a context management system; and by the time you've added all that together, you've basically built a framework. That's why we built Mastra. Mastra is an open source AI agent framework for TypeScript. What's included with Mastra? We have tool calling, we have memory, we have tracing, and we have state-machine-based workflows, so you can do human-in-the-loop: you can suspend and resume, with actions a human can come in and interrupt, and there are simple APIs for it; you can just chain steps, you don't need to learn graph theory. We have evals, so you can track and measure AI output; last week we had an evals workshop, and we'll have one again in the future. We have support for RAG, and there's a RAG workshop next week. And then we have a local development playground, which you're going to get to see today. So really the goals of Mastra are to be opinionated but also flexible, because we don't want to lock you in: we want you to get further faster, but as you need different things, you can pull them in and use them right alongside Mastra. We're eight minutes in; I've done this workshop a few times, and every time I get faster through those first few steps, which normally take about ten minutes, so
we're going to have more time to play in code, which is good. Here's the plan: we're going to start looking at some code and create a new Mastra project; we'll create an agent that can tell us the weather; we'll create and run a simple eval, just to show that it's not that hard to run some off-the-shelf evals (we won't do a lot of custom eval metrics today, that's what the other workshop is for, but at least you'll see what an eval looks like and how it shows up in Mastra's local dev); we'll talk about a workflow that can plan activities based on the weather and look at how that works; we'll create an agent that can answer questions about personal finance transactions from a spreadsheet; and then we'll deploy and test our agent in a production environment.

Okay, with that, give me a second and we will get started. Let me share this. All right, so I'm going to go to the Mastra website and just copy the install command right from the
website; that's how we're going to get started. I'm going to drop into the CLI here. Let me make this a touch bigger; can you all see that? All right, we're going to run this, and it's going to install. Let me pull the chat up so I can see what's going on. Looks good. This runs the create-mastra command, and we're just going to call the project March 6 Workshop. This will take a little bit, but what it's doing is going out, installing Mastra, and getting all the required dependencies. Mastra is built on top of the AI SDK, if you're familiar with that; we use it under the hood for all the model routing, so if you've used it before, this will all feel familiar. It has to go through this install process, and now it's installing Mastra core. It asks where we want to create the Mastra files; we'll keep the default. We'll start by turning on agents and workflows so we can show everything, we'll add tools, and we can select our default provider; I think I'll use Anthropic, why not, and I'll skip entering an API key. We will add the example code, because that's what we'll look at first. All right, now we can cd into our March 6 Workshop, and I'm going to copy in an environment file; all it has in it is my environment variables, so that you don't need to see my API keys. Now we can immediately run npm run dev, which runs mastra dev, our local playground. The nice thing about this, if I can get it to pull up here (I'm going to put it on a bigger screen today), is that it allows you to immediately get feedback and see how to interact with
your agents, your tools, and your workflows. So we're going to click around in mastra dev first, and then from there we'll look at the code and see how all of this runs. Let's click on our weather agent. You can see its instructions: you're a weather agent, you have accurate information about the weather. It's using Sonnet via Anthropic, and it has a tool called get-weather; we could look at what that does, but let's just ask the agent what capabilities it has. Okay, that's great. So let's say: what is the weather in Sioux Falls? That's where I'm from, Sioux Falls, South Dakota, in the United States. It says it's going to go out and get the weather, and this is going to make a call. It's 5.4 Celsius, so it's cold but not too cold for this time of year; the weather is actually pretty nice, a little windy but not too bad. Now we can look at what actually happened, and this is the nice thing about mastra dev: in the traces, any time I called out to this agent, I can see the stream and every step that happened in it. There's a tool call here: first there's this stream-text step, where it's told the get-weather location; we can see it's going to check the weather conditions in Sioux Falls; it made this tool call from the weather agent; and down here we can see the actual result of that tool call. Right here it called the weather tool, this is the tool result, and then the LLM took that and synthesized it into a response. So you can dive in and see every
step this agent is making, which is pretty cool. We can immediately chat with this agent and test it. One other thing I'll note is that it's all available through an API too: once you're running the dev server, you can hit it from any frontend application; you can call stream or generate and interact with it from another application, not just this playground. In the tools tab, you can see this weather tool, the one the actual agent can call, so we can test it in isolation. Let's test Sioux Falls again and see how this works: you can see exactly the data I would get back if I were the agent and I called this tool. It's a good way to test your tools in isolation before you hand them off to your agents.
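To picture what that isolation testing is doing, here's a dependency-free sketch of a tool as a plain object, shaped loosely after the generated weather tool. The names and the hard-coded data are made up for illustration; the real generated tool validates schemas with zod and calls live APIs.

```typescript
// Hypothetical stand-in for a weather tool, shaped loosely like the
// generated Mastra tool. The real one geocodes the location and hits a
// weather API; this one returns fixed data so it can run anywhere.
type WeatherInput = { location: string };
type WeatherOutput = { location: string; temperatureC: number; conditions: string };

const weatherTool = {
  id: "get-weather",
  description: "Get current weather for a location",
  execute: async ({ location }: WeatherInput): Promise<WeatherOutput> => {
    return { location, temperatureC: 5.4, conditions: "windy" };
  },
};

// Testing in isolation: invoke execute directly, which is essentially
// what the playground's tools tab does, with no agent or LLM involved.
async function main() {
  const result = await weatherTool.execute({ location: "Sioux Falls" });
  console.log(result.temperatureC); // the raw data the agent would receive
}
main();
```

The point of the tab is exactly this: exercise the function on its own before wiring it into an agent.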
Lastly, there's a workflow here; let's look at what it does. This workflow has two steps, which is just breaking the problem down in a different way; it's just an example, but the first step fetches the weather and then it plans activities: based on the weather and the location you provide, it plans activities for that location. So if I type in Sioux Falls, where we've already seen the weather, you'll see it running, and it's now gotten the weather; in my CLI you can see it's starting to stream activities based on the weather and the location. It's talking about the Big Sioux bike trail, the downtown sculpture walk, the Old Courthouse Museum, all things in Sioux Falls, and it basically plans out a mock example of what I should do based on the weather. Okay, now that it's done running, if I come back here I can actually see the output of the weather step and the whole plan it made, which is great. So let's take a look at the code. We've seen all the UI stuff and we know it works; now let's look at the code. I'm using Windsurf; some of you have probably used Cursor or maybe Windsurf. If you've heard the term "vibe coding," which has been very popularized lately, we'll do some vibe coding together during this workshop and see how well Windsurf (or Cursor) actually does at writing code for us. I'll try to make this a touch bigger; hopefully you can all see it. Let's take a look at what's in this mastra folder. Remember, when we started, the CLI asked where we want to install Mastra, and we said the source folder, so there's this mastra directory with agents, tools, and workflows inside. The nice thing is that maps pretty closely to what we saw in the UI, so it's
very clear what's in each of these folders. Let's start by looking at our agent. You can see this agent is pretty simple: we have a weather agent with the instructions we saw in the playground, we're using the AI SDK with the Anthropic model, Claude 3.5 Sonnet, and we're passing in this tool. Let's look at what the tool does. These first bits are just some types, so we'll skip over them. The tool has an input schema, which is the location (we entered Sioux Falls in this case), an output schema, which is what it will return, and then it's just a function. All a tool is is a function call. And it's not that the LLM actually runs this code; the LLM just tells us, hey, you should call this code on your system and give me the results, and then I'll process them. The way the interchange works is: I request weather for Sioux Falls; the LLM says, okay, I'll get the weather, but I need you to run this tool for me; the request comes back to our local system, which runs the function; and the result is passed back to the LLM to synthesize a response for us based on the data we returned, which of course matches the output schema.
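That request/execute/respond round trip can be sketched in a few lines of dependency-free TypeScript. The "model" here is a hard-coded stand-in for an LLM, and the numbers are mocked; it only illustrates the control flow the transcript describes.

```typescript
// A toy model: on the first turn it asks the host to run a tool;
// once it has the tool result, it synthesizes a final answer.
type ToolCall = { tool: string; args: { location: string } };

function modelTurn(question: string, toolResult?: { tempC: number }): ToolCall | string {
  if (toolResult === undefined) {
    // "I'll get the weather, but I need you to run this tool for me."
    return { tool: "get-weather", args: { location: question } };
  }
  // Second turn: synthesize a response from the data handed back.
  return `It is ${toolResult.tempC}°C in ${question}.`;
}

// The host-side tool the model cannot run itself.
const tools = {
  "get-weather": (_args: { location: string }) => ({ tempC: 5.4 }),
};

function runAgent(question: string): string {
  const first = modelTurn(question);
  if (typeof first === "string") return first; // no tool needed
  const result = tools[first.tool as "get-weather"](first.args); // host executes locally
  return modelTurn(question, result) as string; // hand the result back to the model
}

console.log(runAgent("Sioux Falls"));
```

Frameworks automate exactly this loop (plus schema validation, retries, and tracing), but the shape of the exchange is no more than this.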
As for the function itself, all it does is call out to two APIs: first it geocodes the location, then it calls this weather API and returns the data in this format. So as you can see, it's pretty much just a simple function call to get the weather. Last, let's look at the workflow. It does some of this inline (we could really move parts into the agents folder), but the thing I want to show is how simple it is. Down here, the weather workflow is a new workflow whose trigger schema is a city; its first step is fetch-weather, then plan-activities. Plan-activities calls out to an agent that does the planning, and fetch-weather just makes the tool call we showed before to get the weather. So it's just another way to structure agentic workflows. What we see a lot of people do is start by giving their agent a bunch of tools, but sometimes that breaks down, and you usually get better results if you move things into a more deterministic workflow where you can guide the path. If there are defined steps, we usually recommend putting them into a workflow, because it will follow an execution path and you'll get better results; if the task is less defined and you need the agent to make decisions on its own, then use an agent with tools.
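The "defined steps go in a workflow" advice boils down to plain function composition. Here's a dependency-free sketch in that spirit; the step names mirror the example workflow, but the data and the planning rule are mocked, not Mastra's API.

```typescript
// Two deterministic steps with typed inputs/outputs, chained in a fixed
// order: the essence of the fetch-weather -> plan-activities workflow.
type Weather = { city: string; tempC: number };

function fetchWeather(city: string): Weather {
  // A real step would call the weather tool; mocked here for illustration.
  return { city, tempC: 5.4 };
}

function planActivities(weather: Weather): string[] {
  // A real step would ask a planning agent; a simple rule stands in.
  return weather.tempC < 10
    ? [`Visit a museum in ${weather.city}`]
    : [`Ride the bike trail in ${weather.city}`];
}

// The workflow is just the guaranteed execution path through the steps.
function weatherWorkflow(city: string): string[] {
  return planActivities(fetchWeather(city));
}

console.log(weatherWorkflow("Sioux Falls"));
```

The value of a workflow engine on top of this is suspend/resume, tracing, and branching, but the mental model stays: fixed steps, guided path.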
You have a lot of Lego blocks, as we like to say, that you can piece together: agents can call workflows, workflows can call out to agents, and you can define very complex multi-agent systems with just these few small primitives. Finally, the root of this mastra folder is where we pull it all together: we export this Mastra class, give it the workflows we want and the agents we have, and we can set up different loggers; there's a whole bunch of other things you can add in here, all covered in the documentation. The nice thing, and someone asked whether you can use this in Next.js (yes, though we won't go through that today), is that with this class you can import it anywhere inside your server components; it has to be server side, but you can import it and call getAgent, agent.generate, or agent.stream, and use everything in Mastra from Next.js or any kind of TypeScript Node project. Again, it needs to run on the server, but it's really easy to just import this Mastra instance and use it. I see a question on logs; it looks like Sam's going to answer it, otherwise we can get to it at the end.
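Besides importing the class server-side, recall that the running dev server exposes everything over HTTP. As a sketch, the function below only builds the request a separate frontend might send; the /api/agents/:id/generate route, the default port 4111, and the body shape are assumptions about the dev server of that era, so check your Mastra version's API reference before relying on them.

```typescript
// Hypothetical: construct the HTTP call a frontend would make to a
// running Mastra dev server. Route and payload shape are assumptions,
// not confirmed API; verify against your version's docs.
function buildGenerateRequest(baseUrl: string, agentId: string, message: string) {
  return {
    url: `${baseUrl}/api/agents/${agentId}/generate`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: [{ role: "user", content: message }] }),
    },
  };
}

const req = buildGenerateRequest(
  "http://localhost:4111",
  "weatherAgent",
  "What is the weather in Sioux Falls?",
);
console.log(req.url);
// A frontend would then do: await fetch(req.url, req.init)
```

Separating the request construction from the fetch makes the shape easy to test without a server running.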
Okay, now that we've looked through that code, let's actually write some stuff and expand the capabilities a little more. We will have time at the end for questions; Sam's answering a lot as we go, and for any that are longer or more involved, we'll spend ten or fifteen minutes at the end. So, I created this really simple fake transactions list. You could imagine it as some kind of financial API, or a spreadsheet you want to keep up to date, and I want my agent to be able to talk to it so I can ask it questions about my transaction history. The first thing I'm going to do is publish this thing. Normally you'd want to deal with auth and all that, but we've all dealt with auth before and we know it's not fun, so I'm just going to bypass that part and publish it to the web. I'll stop publishing and then republish just to make sure it's the same; yep, it's the same, so we'll publish it. This is a pretty contrived example, but the nice thing, and really the aha moment for me and I think for a lot of people I've talked to, is when you can see an LLM interacting with your data; that's the really cool part. Publishing gives me a URL, and I'm going to write a tool call that can go out and get this data, pass that tool into the agent, and then talk to it and see what it can do. So let's go into Windsurf and we're
going to try this. Our results may vary, we don't know how this is going to work, but let's say: I want to create another tool that gets transactions from a CSV; it should fetch the transactions from this URL and return the results as a CSV string in the output schema. I think that's all we really need; let's just let it run and see what happens. I'll give it a little more direction: use the get-weather tool as an example. This is of course a pretty simple example, but if you haven't used tools like this, it's kind of cool; the way I write code, and the way a lot of people on our team write code, has changed a lot. Instead of writing all the code yourself, a lot of the time it's like you're working with a partner. If you've used these tools, you also know they often go off the rails and don't always get it right, but we'll go ahead and accept this and take a look. It says it created a transactions tool, great; it's called get-transactions, it returns a CSV string containing transactions, and the description is "get transactions from a CSV." One thing I will caution you about when you start working with agents and tools: the tool description is very important, because that is the information that gets passed to the LLM. You've got to think about it as if you were handing these tools to a fairly smart, but maybe not very smart, person who decides when to call each tool based on the description you give it. So I'm going to change this description to "Get personal finance transactions from a CSV," just for a little more clarity. If you've ever spent a lot of time writing prompts, you'll probably end up spending just as much time writing tool descriptions when you start working with agents; you should be very descriptive, and you might even put examples of when to call the tool in there. This is a pretty simple example, so I think it's going to work, but if it doesn't, we'll come back and improve the description. You can see all this tool is doing is calling get-transactions: the input schema is an empty object, the output schema is just a string, and it calls out to this URL, fetches it, and just
returns it. A pretty simple example. We should be able to test this in isolation; that's kind of one of the points of the Mastra playground. Oh, I'm not running the dev server anymore; let me run npm run dev again. Okay, it's not showing up, because I'm not exporting it; let's give it to our agent. We'll pull the transaction tool into our agent so it shows up. (One thing I will say: it's no longer just a weather agent.) Now we can test this transaction tool in isolation, and you can see it pulls in all the data from the spreadsheet, which is great. Now I could ask the agent to get transactions; let's actually try it, though I don't think this is going to work: what did I spend at Amazon? Normally I'd only show the happy path, but let's show the unhappy one, and there we go, this is what I was hoping to see: it says it's a weather assistant. It does have the tool to get transactions, but it says, I'm a helpful weather assistant that provides accurate weather information (oops, didn't mean to click that). That makes sense: even though I did give it the tool, I need to update the instructions
to tell it when it can use this tool. In mastra dev we have a prompt enhancer for this kind of instruction tuning, which helps you write prompts, because one of the things we've realized, and maybe you have too, is that writing prompts is hard. It's kind of a black box; you don't always know what works or what's best. So we use the LLM to help improve the prompt, basically to improve itself. Let's ask it: you are now a general-purpose assistant that can fetch weather and help with personal finance questions. We'll see what this does; if it doesn't work, we can write it ourselves, but we have the tools, so let's use them, and this usually gives a better result. All right, now it says: you are a versatile digital assistant specializing in weather information and personal finance guidance; your core capabilities are weather services and personal finance support; and there's a whole bunch more. Of course I'd recommend you read all of it, but for the sake of time we'll click save, and I can set this as the active version. Now let's try: how much did I spend at Amazon? It says, based on the transaction data, you spent $45.99 at Amazon, paid using a credit card. If I look at the sheet, that's right: $45.99 at Amazon. So it's using my data, analyzing it, and helping me out.
We could also ask: how much did I spend through my PayPal account? We used it for Netflix and Spotify, so $26.98; let's double-check, Netflix plus Spotify, cool. As you can see, it was very easy to create an agent that can interact with data in the real world. One thing I do have to do here is copy the new instructions back in: we don't automatically save playground changes to your code, since it's a playground for experimenting with your instructions, and I actually want these, they're way better, I got better results. So I'll go back into my code, if I can click on the right link, and drop this in. I didn't click copy correctly; let's try again. Oh, I clicked on the wrong one, that's why. All right, I'll copy it in here. You can see the system prompt is significantly larger now, and I would probably never have written this level of detail myself, but when you give LLMs a higher level of detail like this, you get more accurate results. That's why it's often nice to end up using an LLM to help improve the prompts you send to that LLM; it's almost like it knows how to talk to itself in a way it's going to understand. That's not always the case, but it often is, so I would recommend trying it out. All right, let's go ahead and add a simple
eval to this, so we can actually see how evals work. Let's go to the Mastra docs and look at evals. For a quick overview if you haven't heard of them: the way to think about an eval is that it's like a software test, but one of the biggest differences is that you're testing a non-deterministic system. In most systems, if you have the same inputs you get the same outputs, but with LLM calls it's a little more unpredictable, so evals are a way to measure that. Rather than just a pass or fail, evals usually produce some kind of grade or score, and over time you want to track that your system is not degrading: it's either getting better, or at least holding a certain quality line. There are a lot of things you can eval, and we'll look through some of the ones that come off the shelf, as we like to call it: you can measure whether the LLM is hallucinating, whether it's providing a complete answer, whether the answer is relevant; you can measure for bias, or for prompt alignment. I'm going to use a really simple one. For most good evals, you'll use an LLM as the judge, so there are a couple of ways to think about it: someone has to score the results of the LLM. In the early days, you're often just testing it yourself, so you, the human, are deciding: when I look back at this response, did I get the right answer, was it complete, do I think it's good enough? But you can also use an LLM as the judge of whether the response is good enough: after the call happens, the LLM judges whether it was a good response and gives it a grade, and over time you can see how that's working.
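To make "a grade rather than pass/fail" concrete, here's a toy judge; it's nothing like Mastra's shipped metrics, just an invented keyword-coverage score in [0, 1] that you could track across runs. A real LLM-as-judge would replace the scoring function with a model call.

```typescript
// Toy eval: score how many expected keywords appear in a response.
// Returns a grade in [0, 1] instead of a binary pass/fail.
function keywordCoverage(response: string, expected: string[]): number {
  const text = response.toLowerCase();
  const hits = expected.filter((k) => text.includes(k.toLowerCase())).length;
  return expected.length === 0 ? 1 : hits / expected.length;
}

// Track scores across runs to catch regressions over time.
const runs = [
  keywordCoverage("It is 5.4°C and windy in Sioux Falls.", ["sioux falls", "windy"]),
  keywordCoverage("It is cold.", ["sioux falls", "windy"]),
];
console.log(runs); // the second run scores worse: a regression you'd investigate
```

The shape is the important part: a scorer takes input/output pairs, emits a number, and you watch the trend, exactly what the playground's evals tab visualizes.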
So we're going to use a very simple eval here. Tone consistency may be a good one, or prompt alignment; we'll use tone consistency because it's simple. We'll also add memory, because memory is another thing your agents have, so we'll do that first and then add the eval. We need to install some things: npm install @mastra/evals. If we look at the docs, you can also see how memory works; it's pretty easy to pass in thread IDs and resource IDs for memory. So I'll add it here, and I need to install @mastra/memory too. Okay, I've installed memory and installed evals. That import's not right; I think it's like this: we import Memory, and memory equals new Memory, I think that's how it works; let me check the syntax. All right, if I save that, let's see whether memory is working, and then we'll add the eval we were just talking about. To test this in the playground I of course need to run the dev server, npm run dev. Let's go ahead: you can see I now have chat history, so if I say "what is the weather in Austin," since I know someone was calling in from Austin, you can see I have a chat history. With one line, we have now added memory to our agent.
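What that one line buys you can be pictured as a toy version of the same idea: messages stored per thread and replayed into the model's context on the next turn. Mastra's Memory handles storage backends, recall, and resource scoping; this dependency-free sketch is only the concept.

```typescript
// Toy conversational memory: messages grouped by thread ID, so the
// agent can see prior turns when answering the next one.
type Message = { role: "user" | "assistant"; content: string };

class ThreadMemory {
  private threads = new Map<string, Message[]>();

  add(threadId: string, message: Message): void {
    const history = this.threads.get(threadId) ?? [];
    history.push(message);
    this.threads.set(threadId, history);
  }

  // Everything the model would be shown alongside the new user turn.
  history(threadId: string): Message[] {
    return this.threads.get(threadId) ?? [];
  }
}

const memory = new ThreadMemory();
memory.add("workshop", { role: "user", content: "What is the weather in Austin?" });
memory.add("workshop", { role: "assistant", content: "It's sunny in Austin." });
console.log(memory.history("workshop").length); // prior turns available to the agent
```

Keying by thread ID is why the playground shows separate chat histories per conversation; a resource ID adds one more level of grouping (per user, for example).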
Next, a simple eval: tone consistency, just to show how easy it is to start measuring this. Let's see, we want a new ToneConsistencyMetric. Let's grab that; we just need to import it from our evals package and call new ToneConsistencyMetric(). Okay. And we can of course pass a bunch of evals in here.
I would say these off-the-shelf evals are nice for certain things, but you really want to spend time writing your own evals (that's what we have a separate workshop for), because your evals should tie closely to your business results. What is a tone consistency metric? It might be useful for us, it might not; in this case we're just showing it as an example. But I'd really recommend spending a little time here; I think evals are underrated, and you should be spending time on them. So if we go here, let's start a new chat and say "how much did I spend at the Apple Store?" All right, I got an answer; they're spending quite a bit of money, unfortunately. But if we look at the evals here, you can see I now have this tone consistency metric, and it gave a score of 0.96. I can click in and see the input and the output. If you had an LLM-as-a-judge eval, it would also give you a reason for the score.
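For reference, here's roughly how that metric gets attached to the agent. The import path follows Mastra's docs, but treat it as an assumption to verify against your version.

```typescript
// Eval wiring sketch; @mastra/evals/nlp is the documented home of the
// off-the-shelf NLP metrics, but double-check the path for your version.
import { ToneConsistencyMetric } from "@mastra/evals/nlp";

export const agentEvals = {
  // The key ("tone") is just the label the score shows up under in the
  // playground; you can register several metrics side by side here.
  tone: new ToneConsistencyMetric(),
};

// Then on the agent definition:
//   new Agent({ ..., evals: agentEvals })
```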
That way you can see why it's scoring a certain way, and you can of course measure that over time. So that's evals. Now let's go ahead and deploy this thing so we can actually get it live. All right, we're just going to do a git init, and I'm going to create a new GitHub repo here; we'll call it "March 6
Workshop" and make it public, so anyone who wants to see this afterwards can. We'll push that up, and now we're going to go to Mastra Cloud. If you want early access to Mastra Cloud, go to the Mastra website and click "Request Access"; I already have access, so I'm going to log
in to Mastra Cloud and we'll just go ahead and create this. We're showing a deploy to Mastra Cloud here (hmm, maybe I'm not logged in... there we go), but you can also deploy this stuff to Cloudflare, Netlify, or Vercel; it runs on all those service providers, and we have docs for the different deployers. And as we mentioned before, you can basically deploy it alongside your app if you already have an existing app, or as a standalone service, which is often useful if you want to separate your AI logic from your actual frontend app. So this is the workshop project. I'm going to need to grab my Anthropic API key, which I'll do on a different screen; it's just the Anthropic API key. And we're going to deploy it. This will take two or three minutes, so I'll finish up my slides and then we'll come back and hopefully test that this thing is live on the internet. As you can see it's deploying, and we can see the deployment
logs start streaming in here. While that's building, let's go through the last few slides; we'll give this thing about two minutes and then come back and hopefully do a quick test. Okay, so what's next? Memory: we did show memory real quick, but there's a lot more you can do with it than just turning it on with the defaults. There are a lot of dials you can turn to tune how memories get stored. Think about yourself: you have short-term and you have long-term memory, and you can architect a system like that. We give you those dials to configure how you want memory management to work for your agents.
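Here's what some of those dials look like on the Memory constructor. This is a sketch based on Mastra's docs; the option names (lastMessages, semanticRecall, workingMemory) may differ across versions, so treat them as assumptions to check.

```typescript
// Illustrative Memory configuration; option names follow Mastra's docs
// but may differ by version, so treat this as a sketch rather than gospel.
import { Memory } from "@mastra/memory";

export const memory = new Memory({
  options: {
    lastMessages: 10,           // short-term: recent turns kept verbatim
    semanticRecall: {           // long-term: vector recall over older turns
      topK: 3,                  // how many matching messages to pull back
      messageRange: 2,          // surrounding context per match
    },
    workingMemory: { enabled: true }, // persistent notes the agent maintains
  },
});
```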
We didn't show human-in-the-loop workflows. We showed how to write your own tools, but there are other options: you might have heard of MCP, and we're going to have a workshop on that in a couple of weeks. We also didn't talk about Composio, which is another option for adding tools to your agents really quickly to
integrate with other services. And we didn't talk about RAG, which we're going to have a workshop on next week: if you have a large amount of data, that's how you can feed it into the context window of your LLM, of the agent. That's really what working with agents is: context management, deciding what information gets into the context window. There are also custom eval metrics, which I mentioned before; we've had a workshop on those, and that's where you should spend a lot of time if you're getting close to productionizing a system: writing some custom evals. We have some examples out there: a NotebookLM clone, a travel agent,
and a few others we'll be releasing soon. If you follow us on social media, you've probably seen some we've been talking about; I should probably update this slide, because I think we have a couple more we can add now. Note that we didn't deploy a front end; we were just looking at the Mastra dev server. For that we often recommend
assistant-ui, which is a really easy way to get one. I think it's built on Next.js, and if you use the example there, you can basically just replace the URL with the URL of your back end, and you'd have a front end that interacts with your Mastra agents. And lastly, there are some links to
the Mastra website; join us in Discord if you do have questions. Let's see if this thing is live real quick, and then we'll come back to this. All right, let's go see... it's still deploying... oh, it's a success. Okay, so if we look here, we should see our agents, and there's our agent, great. You'll notice this
looks a lot like the dev server; it's very similar in that we can see this agent, we can see our workflows, it all shows up. We can also access these agents through an API. In this case we have our weather agent, so let me go ahead and pull up Postman and see if we can test this out. All right, let's pull up Postman.
And what was our URL? Let's grab this nicely named one: "brown blue school", I guess, is our URL. And if you looked at the API path, it was /api/agents, then the slug for our agent, and then you can call generate or stream. In this case I'm going to pass in a
message: "what's the weather in Sioux Falls?" Actually, I already know what the weather is in Sioux Falls; let's see what it is outside here, since I haven't been outside yet today: San Francisco. Let's run this... the current weather in San Francisco is 11.4 degrees Celsius, about 52 degrees Fahrenheit. All right, so it worked; that request actually went out to Mastra Cloud.
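The same Postman call can be made from code. A minimal sketch, assuming the /api/agents/&lt;agentId&gt;/generate path we just used; the base URL is a placeholder and the request body shape is an assumption to check against the API docs.

```typescript
// Placeholder host: substitute your own deployment's URL.
const BASE_URL = "https://your-deployment.mastra.cloud";

// Builds the request for POST /api/agents/<agentId>/generate.
function buildGenerateRequest(agentId: string, message: string) {
  return {
    url: `${BASE_URL}/api/agents/${agentId}/generate`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: [{ role: "user", content: message }] }),
    },
  };
}

// Usage against a live deployment:
//   const { url, init } = buildGenerateRequest("weatherAgent", "What's the weather in Sioux Falls?");
//   const res = await fetch(url, init);
//   console.log(await res.json());
```

The stream endpoint works the same way with /stream in place of /generate.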
Hopefully, if I go in here, I should be able to see this trace and everything coming in. I see the generate call, I can see "what is the weather in San Francisco", and of course I can see the results. So there we go: an instantly deployed agent, or almost instantly. All right, let's go ahead and finish this up. Connect with me on Twitter if you haven't already,
and connect with me on LinkedIn. If you do start building stuff with Mastra, please come to the Discord; we have a pretty good community that helps each other out, and a lot of people from the team are there. And lastly, if I can ask you to do one thing right now: if you haven't already, please star us
on GitHub; that helps more people find Mastra. And if you want access to what I showed you with Mastra Cloud, to play around with it, join our beta. We're slowly rolling out access to people, so you'll get on the waitlist when you sign up using that link, and you'll get access very soon. And with that, we have about 10 to
12 minutes for some questions, so I'll stop sharing and we can start answering. Thanks, everybody. I don't know, Sam or Obby, if you're around, whether you've curated any questions yet; otherwise I'll go through the chat. I've answered a lot of them already, but this one from Marvin is really nice: "Do you allow people, the way LangGraph does, to pay for a
license to host it on our own servers? We have clients in space tech and defense who do only on-prem for data security." I really like this question because there's a little history to it. We got this question a lot when we built Gatsby Cloud, and the answer was always no, because we hadn't designed our software to be portable to on-prem or an on-prem cloud. Let's just say that this time we have
already thought about it that way. The problem, though (and this is for anybody who has this type of request), is that the juice isn't worth the squeeze yet. So if you're really serious about doing that, talk to us, but we're not going to go out of our way to do it. Honestly, though, yes, we will eventually do that,
Marvin; it's not today, but very soon. The one thing is that the actual Mastra framework is open source, so you can of course use that and host it anywhere. But if you wanted Mastra Cloud itself to be portable and self-hosted, we could talk about that; we don't support it quite yet. But that's a good
question. Okay, any other questions? "How long have you been building this?" That's a good question too. Renato asks: "I would like to know if the endpoints are accessible from a front-end service and what the response patterns are. Is there any documentation covering this, including input and output? This is particularly important for the stream and generate functionalities." The answer is that everything's in our docs in terms of
how it all works. The docs are decent, but they can always be better, so if you find anything missing, ping us in Discord; we're very active there. And the funny thing is, a lot of the questions people asked today are things we're actively working on, because we listen to people in Discord, and you all
are like the people in Discord. In terms of patterns with front-end services, we're doing so much work this week on Next.js, showing how you use Mastra there, so there will be more information; Discord gets the information before these workshops do. The next question, from the same person: the status of what the agent is doing is important; is
there a way to retrieve it directly? I did answer this question already, but I'll answer it again: we're going to improve this. Every agent call, as you can see in the trace, has a lot of stuff happening; these are treated like runs, and you should be able to observe
them by default. Today you can use the onStepFinish and onFinish callbacks, but those are callbacks that fire after things have already happened in your agent, and you really want the whole life cycle. So the answer is: not today, but it's top of mind.
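To make the callback shape concrete, here's a runnable sketch with a stand-in agent object. The real Mastra agent.generate is async and its callback payloads are richer, so treat the signatures here as simplified assumptions.

```typescript
// Stand-in for a Mastra agent, just to show *when* the callbacks fire.
// The real agent.generate is async and passes richer payloads.
const steps: string[] = [];

const agent = {
  generate(
    message: string,
    opts?: { onStepFinish?: (s: string) => void; onFinish?: (r: string) => void },
  ) {
    // Pretend the run made one tool call, then finished.
    opts?.onStepFinish?.("tool call: weatherTool");
    opts?.onFinish?.("done");
    return { text: "sunny" };
  },
};

const result = agent.generate("What's the weather in Paris?", {
  onStepFinish: (step) => steps.push(step),    // fires after each step completes
  onFinish: (r) => steps.push(`finish: ${r}`), // fires once at the end
});

console.log(steps, result.text);
```

The point of the question stands: these fire after each step completes, so you observe progress after the fact rather than the full life cycle.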
And to add to the previous front-end question: we do have a client SDK. If you want to call your agents from the front end, this is the pattern: you can call agent.generate or agent.stream from your front end using the Mastra client. So that goes back to that question.
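A minimal sketch of that client, assuming the @mastra/client-js package and method names from Mastra's docs; verify against your version.

```typescript
// Client SDK sketch; package name and methods follow Mastra's docs.
import { MastraClient } from "@mastra/client-js";

const client = new MastraClient({
  // Your deployment URL, or http://localhost:4111 against the dev server.
  baseUrl: "https://your-deployment.mastra.cloud",
});

const agent = client.getAgent("weatherAgent");

// One-shot generate (there's a streaming variant as well):
//   const res = await agent.generate({
//     messages: [{ role: "user", content: "What's the weather in Paris?" }],
//   });
//   console.log(res.text);
```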
I did see another question in here: "Can I replace LangChain with Mastra? I want to build AI agents and multi-agent workflows in TypeScript and JavaScript." Yes; in a lot of cases we do see this as a replacement for LangChain and LangGraph, but TypeScript-first and TypeScript-only, so if you're more familiar with JavaScript, we've focused on just
doing JavaScript and TypeScript really well. "Does Mastra Cloud support multi-agent workflow orchestration?" Oliver, yes it does. You can wire up multi-agent workflows, and you can have agents that call other agents as tool calls, so there are many ways you can architect this. We have some examples in
the examples part of our docs where you can see how to wire up multiple agents together. Anything you can do in local dev, you can do in Cloud: you can run it locally and see it all working, and you can deploy it anywhere, including Cloud, and it will run.
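One common way to wire that up is to expose one agent to another as a tool. A sketch, assuming Mastra's createTool helper and a hypothetical researcherAgent:

```typescript
// Multi-agent sketch: an "editor" agent can delegate to a "researcher"
// agent through a tool call. createTool/zod usage follows Mastra's docs;
// researcherAgent is hypothetical.
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import { researcherAgent } from "./agents"; // hypothetical module

export const askResearcher = createTool({
  id: "ask-researcher",
  description: "Delegate a research question to the researcher agent",
  inputSchema: z.object({ question: z.string() }),
  execute: async ({ context }) => {
    const result = await researcherAgent.generate(context.question);
    return { answer: result.text };
  },
});

// Then pass askResearcher in the editor agent's `tools` option.
```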
"Do we get the ability to know the cost of an agent run?" Leonardo, the answer is yes, very soon in Cloud; not quite yet, but probably in the next week or two. "Is there a workshop for integrating Mastra with Next.js?" That's actually a good idea; we might add that to the list. We're always looking for ideas like that, for what you all want to learn, so if you do have
ideas, you can respond to the email you'll get later, let us know in Discord, or find us on Twitter or wherever. Ron has a good question: "Obby, if you host Mastra in an API, on say a Next.js app on Vercel, what do you need to do to protect the endpoint? Is any of that specific to Mastra?" The answer is there's
nothing Mastra-specific; it's what you would do normally for a Next.js API route, if that's what you're doing. What we use at Mastra to build Mastra is a product called Oso; it's a permissioning library, I think it's very good, and I kind of want to make an example of how to use it with your Mastra endpoints. It's open source.
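Short of a full permissioning system, a shared bearer token is often enough. A minimal sketch for a Next.js route fronting a Mastra agent; the route handler and the "@/mastra" import path are illustrative, and the pure token check is split out so it's easy to test.

```typescript
// Pure check: does the Authorization header carry the expected token?
// (A constant-time comparison would be better in production.)
export function checkBearer(header: string | null, expected: string): boolean {
  if (!header || !header.startsWith("Bearer ")) return false;
  return header.slice("Bearer ".length) === expected;
}

// app/api/agent/route.ts (illustrative; the "@/mastra" path is assumed):
//
//   import { mastra } from "@/mastra";
//
//   export async function POST(req: Request) {
//     const ok = checkBearer(req.headers.get("authorization"), process.env.AGENT_API_KEY!);
//     if (!ok) return new Response("Unauthorized", { status: 401 });
//     const { message } = await req.json();
//     const result = await mastra.getAgent("weatherAgent").generate(message);
//     return Response.json({ text: result.text });
//   }
```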
It's worth a look if you want something very simple for that type of thing; otherwise, whatever you were doing before, just keep doing it. Sorry, the next question: "Can we deploy Mastra agents in our own AWS?" Yes, you can. One of our engineers is working on how to do EC2 deploys and so on, so there will be docs for that; you can read those and do your
thing. I just wouldn't do it personally, but if you have your own AWS setup and you're not in a position to move, do your thing, and yeah, it'll work. There are some things that get hard: running long-running workflow workloads at scale starts to get challenging, but it's totally possible to do it all yourself.
We want everything to be self-hostable, but the word "self" there is very bold; I just want everyone to know that, especially when you start to throw real traffic at it. Agents are unlike normal web requests: a web request typically finishes in a few seconds, while an
agent is often a longer-running request, and depending on the decisions it makes and the tools it calls, it could be a really long-running request. So there is some complexity when you're trying to run this stuff on your own, but it's totally possible to do. "I'd love it to replace Trigger.dev when deploying AI workflows to serverless." Yeah, that should be
possible as well, Wilson. A lot of times you run into using Trigger.dev, or Inngest is another one we've used, to get around function timeouts. You don't have to worry about that on Mastra Cloud, but in other places you still might. Could you deploy this to Trigger.dev or something,
Obby? Yeah, you could, but that's in the "self" category. You could definitely do it; this is just JavaScript at the end of the day, so you can do whatever you want. Okay, "how are you guys doing long-running tasks?" If you look at Vercel Fluid, we built that as well; we just didn't call it Fluid or make a big blog post
about it. So that's how it works. "How tight are the integrations into Cloudflare dev products?" We deploy to Cloudflare Workers. Once again, that's a "self" kind of thing, because bundle sizes are what you make of them, and getting Wrangler to build is what you make of it, like what's in your code base. But we're happy to help if you go to Discord and you have
a problem. Mohammad has a question; yes, we did answer this already, but maybe you joined late: you can integrate Mastra into an existing Next.js project. We're going to have an example coming out really soon, and someone asked if we could do a workshop on that in the future, but yes, you can run Mastra inside
Next.js, inside your server functions or your server API routes. You can just import the Mastra class, get your agents, and have them use tools that access the database or whatever, so it's totally possible. Keep an eye out for that guide, which should be coming out within the next
few days. "Looking forward to the MCP workshop": yeah, MCP is getting a lot of buzz right now. We've been supporting MCP since before MCP was even cool; we were early on the MCP train, and we're excited that everyone else is starting to come around to it, because there needs to be a way to
give agents tools really easily, and MCP is at least an open protocol for that. It's not perfect by any means, in my opinion, but it's something, and it's a nice protocol for the direction we need to go: giving more tools to agents, more rapidly and more easily. I have this one: "Can I build a
desktop agent which can access any website or app on the machine, etc.?" I actually built one of these myself with Mastra. Yes, you can, and with MCP running locally, because it's a desktop agent, it's all "local", quote-unquote. There are so many MCP servers you can use for file
system access, and a lot of Python tools exist in MCP land that you don't necessarily have to worry about writing yourself. So go try it out and have fun. The next question here is a really good one: "The framework version, in terms of semver, is currently a pre-release. Does this mean that backwards compatibility is not a priority in future versions, or are you planning to
keep things backwards compatible? I'm asking this because LangChain's docs and examples kept going out of sync, and they really did not care about it, and it sucked." Essentially, two weeks ago we screwed over some early users by not being thorough, by really just not caring enough about this, and we broke our Cloud
product, which was in alpha at the time. We broke everything, and no one actually saw it, because it was before we blew up on Hacker News; we just blew up right after that event. And we talked to each other and said: we can't be like LangChain. So we are doing deprecation warnings, and we are taking the time now: if we're going to deprecate something, we're going to put it in your face, and the docs are
going to be updated; docs are a product of this business. So anyway, that's the answer. I can't promise we'll never break anything, but we're going to be very diligent about deprecation warnings. We learned this when we were doing Gatsby, too. It does take more time
sometimes; it's a two-step process, because you have to deprecate something for a while and then remove it. But that's the path we want to take, because we know that if you build something, you might let it sit for a while, and when you come back you don't want to do an upgrade
and have all the APIs break, and then have to go back to the docs and spend hours or days rebuilding things that were working. All right, thanks everybody. If you do have any questions, please reach out and join the Discord, hopefully sign up for the Cloud beta, and we will see you all, I'm sure, another time. Thanks. Thank you. Peace.
