Live from AI Engineer Paris!
We are in Paris at the AI Engineer conference. Learn a bit about what we are seeing on the ground floor and hear from some of the people at the event.
Guests in this episode

Marvin Frachet
Mastra
Episode Transcript
Hey everybody. Hello folks. So, we are here live, kind of coming at you raw this week because we don't have a normal setup. So, the sound quality could be a little questionable. Uh, but glad you're all
here. We are live from AI Engineer Paris. So, we are at Station F. But for
those of you that don't know, Marvin, what is Station F? So, in France, Station F is a place, like, there's a big wall here. I mean, can we show that? This is a big place where uh there are a lot of French
startups, like 1,000, which seems pretty big, but I think it's a marketing number. Anyways, it's a pretty big number of startups, and yeah, you can see big names, like there is GTO right there. I've seen some maybe local AWS teams, some airport teams here, some other things happening here. So that's pretty nice. The tech scene is here in
Paris. So yeah, pretty good place. Yeah, it kind of reminds me of uh gives me a little bit of YC vibes. You know, it's kind of a really cool place where there's a lot of startups, a lot of people working on tech. From what we
heard, uh, a lot of AI, as you'd expect, a lot of AI companies. Also here with Abhi. Hello. And Leonard from the team. Hello. So, we have a whole bunch. Hello.
We have a whole bunch of people from the Mastra team here. And wanted to give you a little update on some of the stuff we're seeing on the conference floor, some of the stuff we've been learning, and maybe go meet some of the sponsors and see if we can find some cool AI demos. So that's the agenda for the day.
Yeah. Anything, I guess, Marvin, so far from the conference that stuck out to you? I mean, the place and the people obviously are amazing. You can have conversations with a lot of people. I've met a previous ex-colleague from Strapi,
which is apparently building some stuff around AI and might be using Mastra under the hood, which is pretty nice. Yeah. Uh, and then, yeah, you can connect with people and so on, and I think there are a lot of talks around MCPs, I've seen a bunch of them, uh, you've seen too, we've seen a lot of talks about Neo4j,
which is very present here, which is kind of nice, and yeah, a bunch of great things. What about you, what have you enjoyed so far? Yeah, so one of the things at the keynote yesterday, and it was actually swyx talking, was that the prediction was 2025 was going to be the year of agents, which I think we've all
probably heard. Everyone's asking what's an agent. They hear agent all the time, but one of the things that was mentioned is really it's 2025 to 2035: the expectation is it's going to be the decade of agents, because it's not going to be something that just comes and goes. It's likely, as we discover what an
agent is, what are the best practices to build them, how do you scale them, that's going to take a long time. And so I think that is, you know, good validation for the stuff we're doing at Mastra, of course, but also it's good to hear that others are kind of thinking long term about how do we actually make these things work? It's not something where you can just figure it out and it just goes away. I think
there's a lot for us still to learn. So I think that was more reassurance that other people are going through some of the same struggles. Um, but what we also wanted to do, Abhi, you know, we wanted to get you on the camera and we're going to go walk the floor a little bit and maybe talk to some sponsors and maybe some random people. It might be a
little noisy, so we'll see how the noise is, but we're going to do our best. And if you aren't able to attend the conference, hopefully you get a little taste of it and maybe learn some things about what some of the people are doing and why everyone's excited to be in Paris this week. They're building crazy things.
Let's go. All right. So, can you guys hear me? We we can hear you. Can you hear us?
I can hear you. Can you hear me now? All right. Well, here we go. Also, Shane, feel free to join me whenever you like.
Once you make sure, uh, it's going to actually work, then I'll come down. And he's probably gone into a hallway. I imagine he'll come back when he gets, uh, outside the stairwell.
Just frozen. That's funny. We see some frames of Abhi in one place and then in another one. Teleportation exists.
But if you are watching this, this is a live show. So, as you know, many of you have probably watched before. If not, welcome. And you can chat with us. We're
on X, we're on LinkedIn, we're on YouTube. So, please just, uh, drop us a comment and we'll try to answer questions along the way. If there's anything specific, if you're watching this, that you're curious about from AI Engineer Paris, or anything going on in the AI or agent world, let us know. We'll definitely answer
comments and questions along the way. All right, Abhi, you're back. All right, so I'm at the bottom of Station F. It's lunchtime here. I see a bunch of people. Um, I wonder if I
could turn this camera around. So, it's sponsored by Coab. Oh, wait.
Got to change that, too. Well, it's all mirrored, but I don't know, conference Wi-Fi, you know, it's like you can never trust conference Wi-Fi. Everybody's installing dependencies right now. I keep getting cut off, huh?
Yeah, there are just too many people here. It's fine. Figure it out. Too many people on the conference Wi-Fi.
Yeah. Well, you can see there's a bunch of people here. Um, and we're here at the discovery track. This is where all these vendors are. So,
that's cool. There's Sam over there. What's up? And, uh,
they've been giving out these books, as you know. That's cool. Sucks about the quality, but here we are. Yeah, the quality is questionable, too. Fine. Here's Alex. Hey, Alex. Here's Ryan from Pursu, on the ground, on the floor. Keep going. There we go.
Let's go to our um this was a good experiment. Yeah. But now we know conference Wi-Fi is not built for live streaming.
But sometimes you have to test that theory. And so that's what we did today. We are testing the theory of can conference Wi-Fi hold up to a live stream. And the answer is the same as it was 10 years
ago. No, it still can't. So, Abhi, we just decided, we're testing the theory that conference Wi-Fi can handle live streaming, and the answer is no.
That's so funny. Have we lost Abhi? I think we lost him.
I can see Alex from here, but I cannot see Abhi. I'm back. Excuse me.
Yeah. Is it better now? Not really. That's fun. Yeah, we might
have to call this, you know, we tried some experiments and some are good and some weren't. This one might, we might be testing the limits of conference Wi-Fi capabilities. Yeah, I'm sure all the other people using the Wi-Fi are really happy with us. Yeah,
for sure. Well, we're going to talk to our first guest. We'll see how it goes. What up?
Hi, guys. Hi. So, they're up there in one of the booths controlling everything.
Why don't you introduce yourself? Yeah. Hi, I'm Jacob from Apify, working as an engineer, and I'm personally building an MCP server, and we prepared a little demo that we will show you: publishing AI agents built using Mastra on Apify, monetizing them, and then using them via MCP. Awesome. Let me send you the Restream link so you can get on. Okay, we'll be one
second. All right. All right. It's on this laptop, actually. Uh, yeah. Okay. We're almost done with the
No, we're going to get him out of here. Steal that. All right, Abhi, I'm gonna mute you.
Let's do it. All right. While we wait, uh, if you are watching this, feel free to drop a comment: if you looked on the AI Engineer Paris site, are there any talks you wanted to see? Uh, because someone from our
team will probably have seen it and maybe we can give you an update. I do think everything, at least the mainstage stuff, is livestreamed. So if you are interested in seeing some of what's going on with AI Engineer Paris, you can tune in and see at least the keynotes, I think. And, uh, one of the things that I guess I'm impressed with is, I didn't know what to expect. Uh, you know, I'm
from the US, I'm in the US most of the time, and so I didn't know what kind of sponsors would be here, and it's a lot of the same faces I would expect to see at a New York or a San Francisco conference as well. So, you know, you have tons of really good high quality sponsors. You have tons of really good attendees. We've had some
really good conversations. Hey, Jacob, you're back. Hi, guys. Can you hear us?
Yes, we can. Yeah. Nice. So, may I share the screen? Yeah. Yes, please do. Yeah, let's go. So I'll share the
screen. Let me share the whole screen. So at Apify, we prepared a little demo. So
I'll open up apify.com and show you the agent that we prepared. So it's a travel agent for, uh, looking up flights on Kiwi.com. So we have, uh, MCP
servers deployed on Apify. So if you go to Apify's MCP servers page, let me just place the laptop,
uh, there are a bunch of MCP servers deployed. We used the Kiwi.com one, which allows you to search for flights, and I built a flight travel agent in Mastra. It was nice and easy. We could easily connect to the Apify MCP server. So I just
visited mcp.apify.com, selected a bunch of actors that I want to connect, got the URL, pasted it into Mastra, and it was pleasant and easily integrated. So this actor is deployed on Apify and it's published.
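(For readers following along: here's a minimal sketch of the Mastra side of what Jacob is describing, assuming the current `@mastra/mcp` MCPClient API. The Apify MCP URL below is a placeholder for whatever you get after selecting actors, and auth details are omitted.)

```typescript
import { MCPClient } from "@mastra/mcp";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Point the MCP client at the URL you copy from mcp.apify.com after selecting
// actors (placeholder below); an Apify API token is normally required as well.
const mcp = new MCPClient({
  servers: {
    apify: {
      url: new URL("https://mcp.apify.com/..."), // placeholder URL
    },
  },
});

// The MCP tools (e.g. the flight-search actor) become regular agent tools.
export const travelAgent = new Agent({
  name: "flight-travel-agent",
  instructions: "Help the user find flights using the connected Apify actors.",
  model: openai("gpt-4o"),
  tools: await mcp.getTools(),
});
```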
I priced it, so it costs, uh, 25 cents for completing the task, and I connected the Apify MCP server to my Claude Desktop instance. So, as you can see, it's connected in here. So, I'll ask it to search for flights. So, I'll ask: use a travel agent to find me
a flight from Paris to Prague, uh, on the 25th in the evening, for example, because we are leaving tomorrow in the evening, and let's hope for the best. I hope it will find the right actor and everything will work. Yeah, that's so cool. So now it's looking for the actor. So
hopefully it will find the right one that we published. Nice. Ah, it didn't find the right one. Not
the right one. Not the right one. Okay. Live demos.
Yeah, it's a live demo. So, I will tell him, uh, to use the one from me, because I just published it yesterday. It doesn't have much traction. So, let's go use the,
uh, use the... Yeah, not possible. Let's, uh, use the actor from me. Yesterday it worked, but yeah. So it will call this
actor, and it will now get the information about the actor, like what it can do, the input schema, and so on. So now it says, hey, perfect, I found the actor, and it will prompt the agent. So you can basically use this as agent-to-agent communication. So I hit approve. It will spawn the agent. The agent
internally, uh, spawns the MCP server. So we can see on the platform, uh, if I go to the console and see runs, it spawned the agent. We are also hosting an OpenRouter proxy, so you don't have to bring your own OpenRouter key. You can use the
Apify one, and the agent internally spawns the Kiwi.com MCP server, interacts with Kiwi.com, searches for the best flights that fit the use case, and then it should present me the result. That's so cool. Yeah. So hopefully in a moment we will
see, uh, we will see how it presents the flights. Actually, I can even show the previous conversation. It found our flight that we'll be using tomorrow in the evening to get back to Prague, to our headquarters. So hopefully in... Yeah. Nice. It found, uh,
Yeah, this one, uh, the first one, we will be using this flight actually. Nice. That's funny. So yeah, here is how
you can easily deploy and monetize an AI agent on Apify. We also provide templates. So if you go to apify.com/templates,
uh, here's, for example, a Mastra agent template right here. You can just clone it, uh, and modify it easily and deploy it on Apify and monetize it. Well, that's cool. Yeah. Yeah.
Guys, let's go back to the studio. Yeah. Yo, that was sick, right? That was sick.
Demo, come on Friday, demo day. Yeah, that was a really cool demo. Yeah, we always like to see, uh, cool use cases.
I did like, too, the monetization aspect of it, and yeah, it's always good to, you know, dogfood and use your own tools, so you use it to book your own flight, so that's good. Yeah, yeah, you can search and then easily book the flight through Kiwi.com. Yeah.
Sweet. Nice. Thanks. Thank you guys. Thanks Jacob. Thanks for coming on the
show. Yeah. That's so nice to see real use cases like this using Mastra. Yeah. Yeah, we definitely, uh, that's
one of the cool things that we've seen: quite a few people we met, like, just real world users that are using Mastra at the conference. It's always cool to see and ask questions like, what are you doing? And then of course that always leads to, what challenges are you having? What problems? Because there's always problems, right? Um, and I think then
we, yeah, we're able to, like, work with them; if there are issues, we fix them. But most of the time it's just nice to see that people are using Mastra in the wild and they're telling us all the cool things that they're building with it. Um, so we do have a, I feel like this question is a little bit like bait. Uh, Satwick says, uh, give me the quick TLDR. Why should people use
Mastra over Ligma chain? I'm assuming that means LangChain. So I'm gonna assume that's LangChain. If I'm wrong, Satwick, I don't know what Ligma chain is. So, why should you use
I think we are muted. And uh yeah, one of the actors put me back in. There we are. Hey, we got sponsor Wi-Fi, so we're
doing better than that. Nice. All right. See you. Thank you. Of course. All right, Abhi, I've got to answer a question. So, we got a question from
Satwick asking for the TLDR: why should people use Mastra over LangChain? So, I'm going to take this one.
Good. So, first of all, if you are using Python and you're happy with using Python, maybe you could still use LangChain. There are maybe other, I would argue better, Python agent frameworks, but it's a relatively well understood option. If you
want to use JavaScript and TypeScript, then I definitely would say you should be using Mastra. Uh, the documentation for LangChain is really confusing if you're trying to use JavaScript, because you always end up on the Python docs. I kind of have a hot take that I don't think frameworks should be multi-language. Most good frameworks, whether they're web frameworks, game design frameworks,
or AI agent frameworks, typically pick one language and are just very good at that one thing. And that's what we do at Mastra. So, if you want to use JavaScript and TypeScript, you should use us. Uh, the one other thing that we
have that I think our users really love is the fact that when you get started there's a very good getting started experience, and you get this live, uh, local playground where you can test agents, you can see how agents interact with workflows, and we have all the primitives where you can kind of connect things, and we have the eval
scorers, we have tracing, we have all the things. It's kind of like an agent development environment in a box. So, I would say that's the biggest differentiator: it's kind of an all-in-one solution. It has kind of everything you need to get started.
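(A quick aside for readers: the local playground mentioned here is what `mastra dev` serves, and it picks up whatever you register on your Mastra instance. A minimal sketch, assuming the usual `src/mastra/index.ts` layout and an agent module of your own:)

```typescript
// src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { travelAgent } from "./agents/travel-agent"; // your own agent module

// Anything registered here (agents, workflows) shows up in the local
// playground when you run `mastra dev`, along with traces and eval scores.
export const mastra = new Mastra({
  agents: { travelAgent },
});
```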
Yeah. But you should also try them side by side and see what the experience is for you. Yes. Just build a simple agent in both and see which one's better to use. I think then you might understand. Yeah. You'd understand. You'd be able to make your own decision, too. So,
I would highly recommend just doing the getting started: spend 15 minutes with each, and then, uh, I believe you'll agree with us, especially if you want JavaScript and TypeScript. Uh, that's where we really shine, because we don't support Python. All right, Abhi, where are we off to? Where are we going next?
Sentry, but the homie is not there yet. So, I have to come back for that one. What you see booth? That's awesome.
It's lunchtime here. See if I can find another familiar face. For those of you that are just tuning in, you know, we are doing this thing live from AI Engineer Paris. The sound
quality is not the best. We're doing it raw, but we're having fun, right? And so hopefully you're here having fun with us. If you have questions, um, you know,
please drop it in. And, you know, Satwick had another comment: currently using the AI SDK, wondering if we should do the Mastra jump. Um, so the nice thing about Mastra is it works really nicely with
the AI SDK. So this is actually very common: people start with the AI SDK, but you want a more defined set of primitives like agents and workflows and agent networks, things that, uh, you'd have to build yourself on top of the AI SDK, but we've kind of already done that work, and we obviously have the nice development playground and the tracing all baked in.
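(Concretely, a Mastra agent takes an AI SDK model provider directly, so an existing AI SDK setup carries over. A minimal sketch, assuming the `@ai-sdk/openai` provider; the agent name and instructions are just examples.)

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// The model comes straight from the AI SDK provider, so switching models or
// providers is the same one-line change you'd make in plain AI SDK code.
export const supportAgent = new Agent({
  name: "support-agent",
  instructions: "Answer support questions concisely.",
  model: openai("gpt-4o-mini"),
});
```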
But, uh, so Satwick, a lot of people use both, right? You use the AI SDK for your model routing, because Mastra is kind of built on the AI SDK. Hey, that's awesome. Do you want to
talk on it? Uh, yeah, sure. Hey, we're with Arize. Go ahead. Hey, really nice to meet you. Uh, we're
big fans of Mastra and Abhi. Um, it's, you know, uh, AI Engineer Paris. I've got the original AI Engineer shirt from the first one in San Francisco, but really excited to be on the live stream. So, what are you guys demoing here? Uh, what are we demoing? So, um, we want to
flip the camera, but what we do is, we're in the AI observability space and evals. Uh, we integrate with Mastra as well. So, uh, we've got Sally in the picture right here. If you want to beautifully show the TV behind you, wider please.
Okay. Prompt learning. Prompt learning is something that is a little bit unique to us. Uh, it's a lot of research that we've
done into the prompt optimization space. So you might be familiar with, like, DSPy and some of their optimizers. Uh, we've done some research and compared the most common optimization frameworks and really found that the explanations that come from evaluations are extremely valuable and really important for optimizing your
prompts. We're kind of working in the same modality as the models. What we're showing here is the prompt optimization task that lives in our platform. Uh, so this is something you can run on a dataset with a hub; it's automatically going to run,
iterate through that feedback, um, and then produce a new prompt version for you. So super cool. And was that released at, uh, your conference? Yeah, earlier this year. And we're still
making, like, more improvements; we're working on having, like, the evals iterating and improving. So I think there's a lot to come, and I definitely recommend checking out our blog post on it. A lot of stuff, and we're working on a Mastra and Arize observability package. So, I think we'll be a little bit closer, too.
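(To make the idea concrete, this is not Arize's actual API: just a rough sketch of the eval-explanation feedback loop described above, where evaluation explanations drive each new prompt version. The `runEvals` harness and the result shape are hypothetical; the LLM call uses the AI SDK.)

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Hypothetical eval result shape: a score plus a natural-language explanation.
type EvalResult = { input: string; output: string; score: number; explanation: string };

async function optimizePrompt(
  prompt: string,
  runEvals: (prompt: string) => Promise<EvalResult[]>, // your eval harness (hypothetical)
  iterations = 3,
): Promise<string> {
  let current = prompt;
  for (let i = 0; i < iterations; i++) {
    const results = await runEvals(current);
    // Collect the explanations from failing cases; these drive the rewrite.
    const feedback = results
      .filter((r) => r.score < 1)
      .map((r) => `- ${r.explanation}`)
      .join("\n");
    if (!feedback) break; // nothing left to fix
    const { text } = await generateText({
      model: openai("gpt-4o"),
      prompt:
        `Here is a system prompt:\n${current}\n\n` +
        `Evaluations of its outputs produced this feedback:\n${feedback}\n\n` +
        `Rewrite the system prompt to address the feedback. Return only the new prompt.`,
    });
    current = text;
  }
  return current;
}
```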
Absolutely. We're super excited for that. Of course.
Cool. Well, that was cool. Yeah. Did the sound come in that time?
Yeah, it was. I mean, dude, sponsor Wi-Fi is the way to go. Yeah, it is. The quality is much better than before, for sure. But, uh, you know, we're not
sponsoring, but I'm glad we got on the sponsor Wi-Fi. We got on the sponsor Wi-Fi which is great. Let's see if I can find some more people but you guys keep talking. Yeah. Uh so I think you know one of the
interesting things, you know, back to Satwick's question around the AI SDK and Mastra. So, you know, originally we were kind of built on top of the AI SDK, and we still are very... What's up, Nick? We're still very, uh, tied closely to the AI SDK. So we use it for model routing. So we do have a lot of
people that come to Mastra from the AI SDK, because they started using it and they wanted, you know, a little bit more of the batteries-included experience, which I guess is what Mastra provides. So it makes you have to make fewer decisions, but we still have a high level of flexibility and control.
I'm gonna come back to home base real quick. BRB. Yeah. All right. You gonna
bounce? Um, and so, you know, we'll probably keep this stream pretty short today. You know, normally we do this every day. Typically, it's, or
not every day, every week on Mondays, usually around noon Pacific time. But we're in the EU. So if you if you're watching and you're in Pacific time, it's really early for you. So, uh, welcome. But we we'd like to, uh, when
we do get the chance, do it in some different time zones to let people, you know, hopefully interact with us, ask questions, and, yeah, try to share knowledge about the things that we're seeing. Um, I did think that it's always nice to see, because I always feel like I spend
a lot of time in SF and I feel like we're in a little bit of a a bubble of where like AI is very very popular, right? But it's not it's sometimes hard to see outside of that bubble and you know, of course you hear about New York, but it is really validating to see that there's a lot of excitement even in, you
know, places that I haven't been. I've never been to Paris before and there's a ton of, as you can all see, you know, there's a ton of excitement. There's, you know, probably I don't know, maybe a thousand people here. So, it's it's a pretty big event.
Uh, Abhi's back. Welcome. What's up, y'all? That was fun. Question.
I'm gonna go back down there soon. Yeah, but just charging my devices, you know. Yeah. Well, we got a question here. I'll
leave this one to you, Abhi. What about using the LiveKit agentic framework and connecting Mastra for complex workflows and memory? So, I guess the more general question is, what about using another agentic framework with Mastra, with workflows and memory?
Yeah, it's a really good question. Um, so Mastra workflows are a very generic, um, tool in our tool belt. Uh, you don't have to use Mastra agents with those workflows. Um, so if you want to build a LiveKit agent that runs in Mastra
workflows, you can. If you want to use the OpenAI SDK, you can. Um, except the only difference right now is our agents use our memory primitive, and so you're not going to have that in the LiveKit
world for those agents. But maybe in the future we will build a generic memory so you can bring it to any other agents. But you can still run them in our workflows. But when you say LiveKit's an agent framework, I could
say that's somewhat true, somewhat not true. It was a voice platform that became an agent framework, not the other way around. So yeah, but you should use it if you're doing voice agents. It's pretty good. Vapi is also good.
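(A minimal sketch of what's described here, assuming Mastra's current createWorkflow/createStep API: the step body is plain async code, so it can call an agent you run elsewhere, e.g. a LiveKit-hosted voice agent. The `callVoiceAgent` helper is hypothetical.)

```typescript
import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";

// Hypothetical wrapper around an agent hosted outside Mastra (LiveKit, OpenAI SDK, etc.).
declare function callVoiceAgent(transcript: string): Promise<string>;

const respond = createStep({
  id: "respond",
  inputSchema: z.object({ transcript: z.string() }),
  outputSchema: z.object({ reply: z.string() }),
  execute: async ({ inputData }) => {
    // Any agent runtime can be invoked here; Mastra only sees the step's input/output.
    const reply = await callVoiceAgent(inputData.transcript);
    return { reply };
  },
});

export const voiceWorkflow = createWorkflow({
  id: "voice-workflow",
  inputSchema: z.object({ transcript: z.string() }),
  outputSchema: z.object({ reply: z.string() }),
})
  .then(respond)
  .commit();
```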
Yep. So great question. Uh yeah, I mean you you want to go back down? Should we should we keep this at a half hour today or do you want to do some more? Yeah, let's keep we can go down. Uh so
we talked to those two. There are more people to talk to, except for the technical challenges. Yeah,
of, uh, being down there. But maybe what we can do is, um, we can go back down there and then we can restream later as well. Yeah. Yeah, maybe what we'll do is we'll, uh, maybe give a stream for the people
that are in the, you know, the American time zones to maybe capture an end of the day stream and we'll get a few more. Um, but we did a half hour. We talked to some cool people. Yeah, we gave a little preview of what the
yesterday's keynote was about, I guess, and we'll maybe fill some people in later with some of the other stuff we learned from the sessions, from the keynotes, from just talking to people. I think, honestly, I enjoy the sessions, but I always know that I can watch the sessions after. Yeah.
The thing that I like about conferences is that you meet people that are actually, like, building interesting, cool things. You can ask them questions, figure out, you kind of get a consensus around what some of the biggest problems are that people are running into, and the nice thing is it's the same things that we hear from just talking to users generally. So, we're getting validation
that, yeah, people have a lot of questions on evals. How do you productionize and scale agents? How do you handle: my agent has 124 tool calls, what do I do when I need
to add, you know, a dozen more, and the quality starts to break down? Like, those are the things that we hear when we're, um, at conferences like this. And
it's nice to then, like, talk through strategies that we see working with other Mastra users and, you know, hopefully help them, but also get validation that these are the problems that we need to provide really good patterns and examples to solve. I think there's going to be a new class of companies out there. We
say big eval, but there's going to be something called big memory, because Neo4j has five talks trying to push their agenda on graph RAG and memory. Just throwing that out there. There might be a big memory thing in the future, right? Definitely can sense it here. Yeah, there is something very interesting about that. And also, something
swyx mentioned yesterday during the keynote: there was this big Excalidraw picture where, in the center, there was a square with LLM written in it, and everything around it, like voice, video, this kind of thing, memory with a question mark, all these kinds of things, and if you replace LLM with Mastra, this is basically what
we are. What we think is the future is actually here and it exists. It will just make it a beast. So, just wanted to
say that. But the fact that memory is a question mark in that diagram means big memory has an opportunity. No, dude, prediction. Dude, there's a memory company right there. A bunch of LLM platforms. We
got our homie Sentry. David, your booth looks tight. Um, you know, and our homies from Arize, who you saw earlier. So, like, it's just interesting that, you know, these are the types of companies that get
built from user problems, and some people are inventing problems to solve. So I'm just saying that. All right. Well, with that, uh, thank you all for tuning in. This has been kind of the weirdest AI Agents Hour as far
as like technical sophistication of trying to do a conference. Uh maybe we need some extra equipment next time, but then we'll be the guys walking around with all this extra sound equipment. But, uh, it was fun and we'll see you all. Probably do a live stream later. Yeah.