
OpenAI Dev Day recap and guests from Netlify and Weaviate

October 6, 2025

OpenAI Dev Day Recap, AI news, Sean from Netlify, and Daniel from Weaviate!

Guests in this episode

Sean Roberts


Netlify
Daniel Phiri


Weaviate

Episode Transcript

3:40

What's up everyone? Welcome to AI Agents Hour. I'm with Abhi. I'm Shane and today

4:17

we're gonna be, you know, it wasn't that long ago when we had the last AI agents hour. Normally we do this thing on Mondays just like today. We're a little early, but we did have uh some guests and we wanted to have a little extra time to talk about OpenAI's dev day, talk about some of the AI news since last Thursday when we last had an episode and yeah, just uh hang out for a

4:39

bit. Yeah, it's been an exciting day already. Yeah, I mean I think anytime there's a big, you know, launch, especially like a a launch keynote, we always try to tune in and so we will definitely give you our hot takes and our opinions as we go through the list of things. We'll try to

4:59

pull up uh as well maybe some opinions from others in the in the community so to speak and see what they're thinking about all the launches that happened specifically on OpenAI dev day because I think it's yeah there's some cool things some interesting things and we'll be interested to see and hear what you all

5:16

think. As always, this thing is live, so, you know — like Tim already said about the Apps SDK, he asked the previous question, how does it influence us? Oh, we'll talk about that. Yeah, we'll talk about that. But this is live, so if you have questions, drop them in the chat. Whether you're watching on X, LinkedIn,

5:35

YouTube, wherever, and we will try to answer questions as we go. This week is SF Tech Week, so happy tech week to all those who celebrate who are probably going to waste a lot of time going to events. But, uh, we have an event on Thursday. Uh, it's a book

5:52

signing. Um, if you have your book, bring it and we'll sign it. Um, and there are plenty of other events going on. I believe tomorrow we have a JS trivia night that we're sponsoring

6:05

that's at I forget. It's close to YC. So, if you want to play trivia, come on down. So, tons of things going on this week in SF, the other SF. Yeah. Right now, we are both located in

6:17

Sioux Falls, South Dakota. But Abhi's heading back to SF today. I'm heading there tomorrow. And then, yeah, we'll be

6:23

there all week. So, if you're in, uh, San Francisco, stop by one of the events and hang out with us. Cool. And yeah, should we, uh, talk about OpenAI

6:38

Dev Day a bit? I guess, uh, maybe we can go high level on what they announced, and then maybe we'll throw up some tweets and some, uh, posts so we can see some of the UI and some of the things that they released. I would say they kind of had — I mean, the one big thing for me was what they call — I guess they called it their, was it AgentKit?

7:01

AgentKit, yeah, AgentKit. But it had kind of three components that they wanted to really announce: they had Agent Builder, ChatKit, and Evals. So I guess, tell us about AgentKit, Abhi — what is it? So AgentKit is an agent-builder type of product. It's like a canvas where you can essentially create agentic flows. Um,

7:24

and how it works: you have, like, a nice UI, you can put things together, connect nodes, and you essentially have an agentic workflow. You can take that ID and go run it with your OpenAI SDK, you know — so that's one way. The other thing is they have ChatKit, which is an embeddable chat UI that you can then also use with their SDKs, and

7:48

then Evals. So, like, you know, they're trying to get into the game of a lot of different companies. So it's pretty interesting.

8:02

And yeah, let's pull this up here. We'll see, from the OpenAI developers account, their little promo 10-second video on AgentKit, which was just released. It doesn't say much more than what we just said, but it's kind of nice to see some graphics around this. So, wow, that's just a workflow chart. Okay. Well, it says AgentKit: build,

8:33

deploy, and optimize your agentic workflows. So, they have ChatKit, which is an embeddable, customizable chat UI. So, we'll talk about that a little bit more. Agent Builder, which is your WYSIWYG workflow creator. Definitely going

8:45

directly at your Zapier and n8n, Vellum. Yeah. Then they have Guardrails. Yeah, I can name all the companies. A ton of them. Yeah. A ton of people are

8:57

now probably a little worried after today, but, you know, this is the industry. This is startups, right? Then you have Guardrails and Evals, which are again all around improving, optimizing, and measuring your agent. So

9:11

it's a whole kit around what I would say is like lower code building of agents. Yeah. With the ability to kind of build your own chat UI on top of it.

9:22

Yeah. And I think this product really helps, like, developer-adjacent roles, product managers, etc., more so than a developer, but we'll see. You know, things like this can expand.

9:36

When we first started Mastra, there were a bunch of these agent builder type products out there. Um, and we actually attempted to build one, too. But we kind of realized that developers, like true developers, wanted more control, which is kind of why we built Mastra. But it's nice to see that those ideas we had back

9:53

then are still valid and are still being kind of worked on right now, and, you know, it doesn't mean that we're not going to work on them, but it's just awesome to see that, you know, good ideas are on the stage. OpenAI has great marketing and, you know, outreach, so it kind of validates a lot of the things that people are working on. I do see a lot of

10:13

tweets saying, you know, n8n is dead, and I don't know about that. Let's not, like, jump to conclusions. Yeah. But that's usually what happens on these developer days. You have, like, swyx saying, like, 100 startups are now

10:24

dead, and, like, I don't know. It's just to put FOMO into everybody. I'm not scared about it. So that's where it's at. Yeah. I think, you know, not everything

10:36

that OpenAI launches has turned out to be great. At these events, you always have to overhype what you have, and then there's this period where it actually settles and people realize: what can you actually do with it? How good is it? Is it something that they're just

10:52

launching, or is it something that they're going to continually invest time and effort into, which is also important? Yeah. And you have a big company that's trying to tackle, you know, a hundred different big projects at one time. You know,

11:02

they're building models. They've got an SDK. They've got, you know, now a workflow engine with a UI, a low-code environment. They have obviously, like, all their integrations. So they've got a lot of things. Of course they are very

11:14

well funded, but at some point, you know, it's very likely that individual startups like n8n or, you know, Zapier or whatever can still out-compete someone who isn't focused, who's trying to do a lot of things at once. So we will see how it all plays out. It is interesting, especially, you know, Replit released their Agent 3, which has an agent builder

11:32

built in. You know, Lovable had some stuff where they're doing things with AI and the ability to build agents right within your apps. So all these things kind of start to collide over time. And I guess

11:45

that's part of the game is everyone like carves out a niche and then expands and we kind of see where everything ends up uh going longer term. I mean, OpenAI wants to take over the world, right? So, naturally, they'll get into everything. there's like news about them having a device. So like, you know,

12:01

they're like the Apple of this industry. Yeah, I guess that positions us more as the Android in this case. You can bring whatever pieces you want. Uh, so let's jump in. We do have some comments

12:13

here. Uh so Tim did say, you know, moved over to YouTube as the X player sucks. Well, hopefully they can fix that, but I agree. Tim said, would love to have the book. Well, that's good, Tim, because

12:25

you can actually get the book if you go to mastra.ai/book. It will send you a digital copy, but I'm pretty sure, if I remember correctly, in that email there's actually a link where you can request a physical copy and we will send it to you. I'm pretty sure we can get it to Germany. Yeah, we ship there.

12:43

So, I think that's one of the places we can ship. We can't ship to every country, unfortunately. Although some of you that might be watching this in India, we used to not be able to ship to India. And I don't think we can yet, but within a week or two, we will be able

12:57

to. So, if you're in India and you've been requesting a book and we've said, "I'm sorry, we don't have a way to get it there." We figured out a way. We had a lot of people request it. So, we wanted to find out

13:07

how to get it there. And so, just heads up that that is coming. You know, if it's not already, it it will be very soon. Uh let's see. Brazil. I don't know if we can ship to Brazil. Maybe. Not sure about that

13:21

actually. And coando Carlos, uh, you want to work with Mastra AI and you already have experience with TypeScript? Cool. Go to mastra.ai

13:34

and copy the command and just start building stuff. Yeah. Now, if you want to work for Mastra AI, uh, we have some open roles. You can go apply and see how things work. Yeah.

13:47

All right. Uh, so what else on Dev Day? So this ChatKit — I didn't look into the details of ChatKit, but that's also interesting, because they have this Agent Builder, which is definitely targeted towards competing with Zapier, n8n, you know, Replit's new agent builder, but they also have this ChatKit,

14:06

which kind of sounds like they're coming for CopilotKit, assistant-ui, uh, AI Elements from Vercel — which is just a set of React components to let you build a chat interface. So if that's the case, then not only is OpenAI competing with some of the workflow builder companies, they're also trying to go after the UI as well. So I

14:32

didn't see much on that; it kind of went under the radar a little bit. They mentioned it, they showed it on stage — if you do watch OpenAI's, uh, keynote, there was an eight-minute "let's build an agent," and they showed it really quickly. They copy-pasted some React code and showed how it works, but it was very

14:49

quick. So, you didn't really get a good uh piece of it. I can share how it all looks. Yeah.

14:58

So, here's the ChatKit docs. You can see we can just run through this little demo they have here. So, you put your API key in, you know, and that's that.

15:16

And it's just a mini ChatGPT, essentially. Yeah. Does it give you a lot of ways to, like, customize it? I

15:27

believe you can embed this. Okay. From here. So you like embed the UI as like a component?

15:32

Yeah, it's pretty much bringing ChatGPT everywhere. Um, yeah, it's pretty cool. Yeah. So essentially, you can use, you know, kind of a simplified ChatGPT for

15:45

your agent, which makes sense. That's probably what you want to do. You have an application, you want to, if you build an agent, you likely want people to chat with it. I do think that's a little limiting because what we've

15:56

learned is it's not just a chat. People don't just want a chat interface to interact with agents. I think that's what people generally think of. That's

16:02

the most common way to interact, but oftentimes you're interacting with agents through external triggers or, honestly, even Slack or other external systems as well. Um, cool. Siobhan — uh, I don't know how to pronounce your name — Siobhan,

16:24

you're from Sioux Falls. Cool. I'm from Sioux Falls. We're in Sioux Falls. We're in Sioux Falls right now. So,

16:29

see you. Welcome. No, don't go there. Don't go to oilies. Um, and so Davi asks, which is a little

16:38

bit more Mastra-specific: can we — we probably won't demo the networks feature today, but we still have it documented as experimental, but that's changing, right? Yeah. I mean, November 6 will be our 1.0

16:52

release, where we won't have experimentals anymore. And, uh, we have a workshop on multi-agent systems coming up in the next couple weeks. So you should check that out. Yeah, I think it's going to be right

17:03

after November 6. So, November 6, we have the TSAI conference. Which reminds me, if you have not registered — even if you can't attend in San Francisco in person, you can register for the virtual option so you can watch the livestream. Uh, so it's tsmp.ai. I'm just going to drop that in.

17:24

We haven't really been talking. Oh, there it is. So, tsmp.ai.

17:29

Go register if you haven't already. We have a ton of great speakers. We have swyx. We have Paul from Browserbase. We

17:36

have Dex from HumanLayer. I mean, Michael Grinich from WorkOS. We have, uh, people from Langfuse. I mean, we've got all kinds of great speakers. If you go look on the website,

17:48

you'll see — you also might get to hear Abhi and me talk as well. You know, we'll be there. Oh, we will be there and we will be doing our thing. All right. So, we talked

18:00

about Agent Builder. We talked about ChatKit. The Evals and the Guardrails were also pretty interesting. So, again,

18:07

if you imagine this Agent Builder, it's just a drag-and-drop workflow builder. And what they basically demonstrated was that it was very easy to just drag in a guardrail as a node in your workflow. And then you can wire it up — you know, you want to make sure there's no personally identifiable information, and

18:25

with, uh, any other types of guardrails, you can basically block the execution. And then you can, of course, set up evals, which happen after the execution actually happens. And the eval products are encroaching on the companies that do evals, right? So they have datasets here, prompt optimization,

18:44

tracing — you can grade them with annotations. So, it seems like OpenAI hit everyone today. Yeah. Yeah. They're coming after, you know, Arize and LangSmith and

18:56

Langfuse and all the other, like, observability provider, dataset creation, and eval companies — Braintrust as well, to some extent. Um, you know, I do think that this is positioned much more towards technical-but-not-necessarily-developer people building agents, which is interesting, and a little bit more towards the n8n crowd, but they also have the Agents SDK. So, they're

19:20

really trying to own the entire thing from beginning to end. And so, next up, let's talk about some of the other stuff that they released. So Sam Altman came back on the stage a little later and he announced some new things for developers. So if you were actually

19:41

a developer and, you know, engineer, this is probably the only thing that really might impact your day-to-day, but it's tight. Yeah. So they did announce that they have GPT-5 Pro available in the API, so

19:52

that's a cool thing, because you get kind of that higher level of reasoning. It's a better model that can run for longer, more complex tasks. So now that's available in the API.
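For reference, a minimal sketch of what calling this from TypeScript might look like with the official `openai` package and the Responses API; the `gpt-5-pro` model id is the one mentioned on stage, so confirm the exact id and pricing in the docs before relying on it.

```ts
// Minimal sketch using the official `openai` npm package and the Responses API.
// The "gpt-5-pro" model id is the one mentioned on stage; verify it in the docs.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.responses.create({
  model: "gpt-5-pro", // assumed id from the Dev Day announcement
  input: "Outline a migration plan for moving our cron jobs to event triggers.",
});

console.log(response.output_text); // convenience getter for the concatenated text output
```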

20:03

This one was probably, uh, one of the best ones in my opinion — one of the best announcements for me and what we do — there's GPT Realtime Mini. So their voice model is now smaller and 70% cheaper than the existing Realtime API. So if you have considered real-time voice, what we've always told people is, like, OpenAI's

20:22

real-time voice is great, but it's too expensive for most use cases. We had people that build voice agents tell us that they would love to use it. It's just not financially feasible for like where their margins are in their business. 70% cheaper gets it closer for sure. I mean, they didn't say if it was better, but it

20:39

is cheaper. I think what he said was — he basically said the voice was just as good. Now, "just as good" — you be the judge. Uh, you go use it, but assuming it is just as

20:51

good and 70% cheaper and because it's a smaller model, it should be faster as well. So, ideally like slightly faster. If that is actually true, that's a pretty big game changer for voice agents. Yeah, because usually voice agent

21:04

companies use two different models for STT and TTS on the two sides, just because realtime is super expensive. And if that's no longer the case, then you'll see a bunch of voice agents come out. Also, they said that they're going to focus a lot on voice. It's a big deal to them.
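If you want to kick the tires on the cheaper realtime model, a rough sketch of the connection looks like the following; the `gpt-realtime-mini` model id is taken from the announcement and the handshake follows the existing Realtime API docs, so treat both as assumptions to verify.

```ts
// Rough sketch: connect to the Realtime API over WebSocket with the smaller
// voice model. The model id ("gpt-realtime-mini") is the one announced at Dev
// Day; the handshake follows the existing Realtime API docs. Verify both.
import WebSocket from "ws";

const model = "gpt-realtime-mini"; // assumption: id as announced
const ws = new WebSocket(`wss://api.openai.com/v1/realtime?model=${model}`, {
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "OpenAI-Beta": "realtime=v1",
  },
});

ws.on("open", () => {
  // Ask the model to respond; audio/text arrive back as streamed server events.
  ws.send(JSON.stringify({ type: "response.create" }));
});

ws.on("message", (raw) => {
  const event = JSON.parse(raw.toString());
  console.log(event.type); // e.g. audio deltas, transcripts, response.done
});
```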

21:22

So, we'll see what happens. Yeah. And last but not least, they announced — you know, if you've been paying attention, we talked about it last week — Sora came out, the new version of Sora, Sora 2, and they

21:33

announced that Sora 2 is now available in the API. And I think this is going to unlock a whole like new wave of startups that OpenAI will eventually just try to kill, of course, because that's what they do. But there's going to be this this idea of like video editors and all these tools that you know like Mosaic is

21:51

one, right — the people that we went to YC with. But all these other tools now have access to potentially use Sora in their video editing tools, whether you need B-roll for movies or, you know, commercials or whatever, or you want to actually build more comprehensive video editing software or creation

22:11

software really around a model like Sora 2. And so having it available through the API just opens up a whole bunch of different types of software, I think, that we're going to see — that can embed this stuff or even just add this capability to existing software.
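As a rough idea of the shape of that, here's a hedged sketch of kicking off a generation over plain HTTP; the `/v1/videos` endpoint, the field names, and the `sora-2` model id are assumptions based on the announcement, so check OpenAI's video generation docs for the real request and polling shape.

```ts
// Hedged sketch of starting a video generation job over plain HTTP. The
// `/v1/videos` endpoint, field names, and "sora-2" model id are assumptions
// based on the Dev Day announcement; check OpenAI's video docs before use.
const resp = await fetch("https://api.openai.com/v1/videos", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "sora-2",
    prompt: "Handheld B-roll of a rainy city street at dusk, neon reflections",
  }),
});

const job = await resp.json();
console.log(job); // generation is async: poll the job, then download the result
```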

22:30

Again, I do think that as OpenAI, you know, continues to evolve, they're going to add more features just to the Sora app that'll probably allow you to do this kind of creation. If you have used the Sora app — like, we tried to show it last week on the web — the web version is not very good. I wouldn't recommend using it. The app version is pretty good. It's pretty compelling, but they're almost trying to build like a

22:47

TikTok-type environment for AI-generated videos. And I did hear they've kind of rolled back some of the copyright things. You used to be able to get it to do anything. You could get Pikachu to shoot lightning

23:02

bolts at Mario, right? But I've heard they've kind of toned that back after the first week, because they had a lot of copyright complaints. Surprise, surprise. Yeah. Uh, so there might be some limitations on what you can actually have it do now, but they basically want

23:14

to make this TikTok feed of AI-generated video slop, in my opinion, but some of it's kind of compelling. It's interesting. It's kind of funny. There's a lot of slop on X already. Like,

23:25

Ken Wheeler is making so many Sora videos with him in it, and he's just a funny character. But yeah, the cameo feature is super cool. I think they put Dax in, like, so many Soras because, you know, you can make your cameo public so it can be used. Yeah. Um, which I guess is how you get out of the privacy concerns. So,

23:42

yeah, you can choose to make your cameo public, and then, yeah, you can obviously put multiple cameos in a video, get them interacting. It's good. It's not great. Like, what you see is definitely a selection of

23:56

many iterations. But I do imagine, just like a lot of people use the TikTok video editor to edit videos and stuff, OpenAI is probably going to add video editing features right into Sora. So they're going to, you know, be consuming and competing with a lot of these other

24:12

video editing software that's going to use Sora as an API eventually. They're just probably not there yet. But assuming that it keeps the interest it has had the last week and a half or whatever, I think that they'll

24:24

probably continue to, you know, improve that app, and they'll add some of those video editing features where you can splice things together and have more fine-grained control over the cuts that happen in the videos that are generated. So that's it. I mean, those are the big OpenAI Dev Day announcements. There

24:42

was, uh, you know, kind of a bunch of apps built right into ChatGPT — Spotify, Zillow, Coursera. So they're built right into the ChatGPT experience. So that's kind of a big deal as well. So if you're using ChatGPT, you know, you might notice that or be able to kind of

25:05

interact with those things. But besides that, I think that's most of the OpenAI dev day recap. Anything anything we missed? No, I think we'll see what happens. This is like one of those we'll see what happens to everybody else

25:17

type of situations. Yeah, if you're in the chat, if we miss something, what do we miss? Is there anything else interesting from OpenAI dev day we should talk about? Let us know. Drop us a comment. But we do have

25:29

some other AI news. So, let's talk about some of the other things. Speaking of, you know, voice agents and workflow builders, this is very, uh, similar. And it's funny, because ElevenLabs announced it before OpenAI's Dev

25:46

Day. So, I think they had some inside information. People were saying they have, like, this workflow agent builder that's going to launch. So it was kind of

25:54

funny to see that ElevenLabs introduced agent workflows about an hour or so before Dev Day started. So it's basically a visual editor for designing conversation flows in the agents platform. So instead of building all your business logic in a single agent, workflows enable you to handle more complex scenarios by routing to specialized sub-agents. Duh. Triggering.

26:17

ElevenLabs just took a major step towards agentic control. And it all starts here: our new visual editor for designing complex agent conversation flows. This

26:29

brand new functionality lets you orchestrate every scenario, blending artificial intelligence, complex logic, and deterministic action. Whether it's triggering real world events, verifying and authorizing callers, or transferring to a human operator, you decide the user flow for every scenario. And each sub agent can draw from its own tools, knowledge base, or language model,

26:54

optimizing for cost, latency, and accuracy at every stage. Control, precision, and escalation. All done at scale with agent testing integration, allowing you to depend on ElevenLabs agents with confidence.

27:10

So, take control today with workflows from ElevenLabs. Okay. I love how they think they invented workflows.

27:20

And, I mean, yeah, the canvas — same. Yeah. Everyone, you know, we've all seen this, right? Um, it's basically all the same thing. I thought it was interesting,

27:33

though, because obviously OpenAI released their cheaper real-time voice and their workflows. So, you know, OpenAI is also competing pretty heavily with ElevenLabs, right? Yeah, totally. And if you're ElevenLabs, it makes sense that you wanted to try to get your stuff out first, because if they

27:50

announced it now, it definitely wouldn't have the same impact. You know, it's still, I guess, some decent reach on this, but that's not the reach that you want. I don't think it is, five hours into the launch. I bet they were

28:07

hoping for more reach than that. Yeah. But it is cool. you know, if you're building voice agents, you know, this

28:14

feels like ElevenLabs is now trying to compete a little bit more directly with, you know, Vapi — of course they're on a collision course — and then OpenAI is of course right in there as well. Vapi has a workflow builder as well. Yeah. So everyone's coalescing on these patterns. I remember, though, people told us that workflows weren't necessary when

28:32

we first started. Joke's on y'all. Yeah. Well, honestly, one

28:38

of the funny things was how many people did ask, why do you recommend workflows? And it was basically because you can't just turn over control all the time to the LLM and hope that it's going to call the right tools in all situations. So what we almost always end up telling people is: yes, you pull out the stuff

28:57

that it gets wrong, turn those into workflows where you can be more prescriptive, and then, yeah, let the agent do what it can do well. But, you know, that's why — we did have some comments and questions around multi-agent networks — typically, even with multi-agent networks, it's always better if you do the routing yourself, or control the

29:16

routing with a workflow, because all our multi-agent network does under the hood is — it's basically a workflow that we've pre-wired up and said, here's a basic multi-agent network that allows you to call between different, uh, agents. But if you can, then you should control the control flow a bit more yourself with workflows.
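To make that concrete, here's a generic, framework-agnostic sketch of the pattern being described (all names here are illustrative): the cases the LLM kept getting wrong are routed deterministically in code, and only the ambiguous remainder is handed to a general-purpose agent.

```ts
// Generic sketch of the pattern described above; every name is illustrative.
// Known-problematic cases are routed with plain code, and only the ambiguous
// remainder is handed to a general agent.
type Task = { kind: "refund" | "password-reset" | "other"; text: string };

// Stand-in for whatever agent invocation your stack provides.
async function runAgent(agent: string, text: string): Promise<string> {
  return `[${agent}] handled: ${text}`;
}

async function route(task: Task): Promise<string> {
  switch (task.kind) {
    case "refund":
      // A case the LLM kept getting wrong: make it a prescriptive step.
      return runAgent("billingAgent", task.text);
    case "password-reset":
      return runAgent("supportAgent", task.text);
    default:
      // Only genuinely ambiguous work reaches the general-purpose agent.
      return runAgent("generalAgent", task.text);
  }
}

console.log(await route({ kind: "refund", text: "Charge me back for September" }));
```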

29:35

Yeah. And that's what all these tools are doing, right? If OpenAI was that confident, they wouldn't need a workflow engine, right? They would just say, tell the agent what to do and it'll just do it. But of course, that's not the reality

29:46

of how it works. All right. Uh, should we pause for the guest? Yeah, we do have a guest. So, we have a couple other, uh, pieces of AI

29:58

news. We'll come back to those at the end. We got a guest here we want to bring on and talk to, you know, if you are watching this and you're new, we try to bring on interesting guests from across the industry that are doing cool things with AI and we want to just ask questions, see what they're working on, get to know them a bit, and see if

30:16

there's anything they can either demo or talk about as far as what they're doing on a daily basis. So, we're going to bring out our first guest here. We are going to bring on Sean Roberts from Netlify. Sean, welcome to the show. Hey, good to see you all. How you doing?

30:34

Good. It's been a while, man. It's been a while. I always always love uh the skateboard decks in the

30:40

background. I remember, you know, having a few conversations with you while I was at Netlify with those in the background. So it feels familiar. I know. I need to swap them up. As a kid dreaming of just having boards

30:52

that I can just use for decor — I'm excited about it, but now I need to cover the rest of the wall. Yeah. Well, that's always why I have guitars on the wall, you know, in different places in my house. It's like,

31:04

I'm not gonna — that's my version of hanging art, right? So, yeah. Yeah, exactly.

31:10

But, uh, yeah, welcome to the show. Welcome to AI Agents Hour. Can you tell us a little bit just about yourself, and then of course I'm sure we're going to ask a lot of questions about what you're doing at Netlify as well. Yeah. Um, I am Sean. I live just south

31:24

of Atlanta, and, you know, just trying to make the web awesome. And with this kind of, um, rise in AI and just all that it kind of unlocks, just finding all the interesting ways to make web development better, the web better, and the integration between agents and AI and the web just better — because right now it's just kind of this, um, we're making it work, but there are so, so many

31:49

things that we can do to improve. Yeah, for sure. Um, so when did you start getting into AI? Um, I mean, I was definitely kind of on the consumer side of it. So, like, you know, after ChatGPT and all these kind of big, obvious

32:08

announcements happened in the market, that's when it was like, okay, now this is something I should really look at. Because, at least for me, um, having, you know, been in development for two decades, right, I've seen these things — you know, machine learning, uh, models that can do really incredible

32:27

things. I mean, that's not new. But when we started seeing this pretty consistent unstructured-to-structured kind of output experience, that's when it was just like, whoa — there are so many things I've always wanted to do, but it turns out it was, like, impossible to do before. Um, that really kind of just sparked

32:49

this kind of spiral of, like, oh, we can do this, and then that, and then, you know, it just kind of grew from there. So, um, but yeah, my background is primarily in kind of the web architecture space, and it really took over in the last several years, uh, just bringing AI into that fold.

33:08

Yeah. Well, I mean, I remember — you know, and for those that don't know, we worked with Sean. Abhi and I were both at Gatsby, we kind of came over to Netlify, and we both, you know, worked with Sean a bit. But I do remember — and we didn't work that closely together — you were always kind of bringing new and

33:26

interesting things to Netlify while I was there. And then AI obviously came, you know, while I was there, but started to pick up steam even after I left, and so it seems like now you've been leaning even more into that over the last two years or so. Yeah. Yeah, absolutely. It's, um — I've had the real privilege of being able to

33:45

kind of work on these, um, speculative projects and things that we're not sure if we should distract the larger team with yet, and things like that. And just my own curiosities have been, um, driving, you know, that intrinsic motivation of, you know, what would be cool to see. Um, and yeah, Netlify has really been a great place to, you know,

34:09

enable that type of creativity in the team. Yeah. So, what's Netlify's AI strategy now? Like, now that we're all here and we're doing it. Yeah. So, uh, I think the, you know, the biggest

34:23

thing is, it actually started, um — you know, it's almost been about a year now where, um, you know, Bolt — um, actually even before that, so Devin, you know, whenever it first launched, came to Netlify and was like, hey, we want to use you all to do the web deployment side of things. And we started kind of, um, down this path of,

34:50

like, how can we optimize Netlify to be really good with agents. And we started out pretty bad at that, and then we got a little bit better, and then we were like, actually, this whole problem space of just trying to have a great agent experience is not a Netlify-unique problem. And we stepped back and examined,

35:11

like, hey, you know, if we look at what the, um, next five, ten years are going to look like, it's going to be the businesses that optimize for having a great agent experience that are going to be preferred by agents, discoverable by agents, and actually, you know, working well with agents. So since,

35:33

you know, the beginning of this year, that's been our strategy: just focus on trying to be the best agent experience for the web and web deployments, um, that we can be, and then everything else has just kind of come from that, right? And even with our launch last week, um, we're now getting into this new era of trying to have a great agent experience for these more asynchronous

35:56

background agents, not just the synchronous, uh, agents that you use, like, um, Bolt.new, things like that, right? Those are more synchronous ones, and the background ones have kind of been, you know, self-contained, but now we can help run those on your behalf, and things like that. So, you know, that

36:14

strategy really comes down to building the best agent experience that we can, um, and helping our customers do the same. What are, like, the tenets of agent experience? Because I know you have — first of all, Netlify is the best at coining terms. So, for sure. Yeah. Yeah,

36:31

AX. AX. Um, and I think I posted it in the chat here for other people, but agentexperience.ax — I remember when that first came out. Yeah. What do you think of now, like, the

36:42

tenets of agent experience? So, it's been interesting, because, um, you know, that website has a lot of this information. And first of all, you know, Mathias gets the credit as the coiner of names. Uh, the chief name-coiner. Yeah, he's pretty good at that. Jamstack,

37:01

AX — that's not bad. Composable web. Yeah, there's a pretty good track record there of, like, you know, some names that are definitely sticky.

37:14

Yeah. So, the, uh — I mean, for me, right, this is something that me and Matt spent a lot of time thinking on before, uh, we actually really pushed and focused the company on it. You know, if we look back at the early web,

37:32

at some point, if you didn't have a website at all, you just weren't relevant. And then when search engines were like how people discovered content on the web, if you weren't ranking pretty high on search engines, you also weren't relevant, right? Like that's just the math that it turned out to be. And now, if agents are this new entry

37:49

point, then how are you actually going to show up? And how are you going to be useful? If you don't do that, you're not going to be relevant. So that itself is like this call to action. It's not meant to be, uh, one of

38:01

fear either. It's just like, hey, we've seen this now. This is at least the third time this has happened on the web. Let's not pretend like this isn't what's happening here, and go after it. And, you know, we've put

38:15

a lot of, you know, a lot of energy into studying and researching how agents work with our platform, working with partners to figure out, you know, what they're learning, and then coalescing a lot of that information into industry resources and stuff that we write around agent experience. And especially because we've

38:34

had this launch coming up within October, we haven't been able to focus on the site itself, you know, within the last month or so. But even over the last, you know, six months, right, where this stuff has been written, now we've needed to see how well it actually plays out. And 100% of it is still spot on. So, like, I think we've done a pretty

38:57

good job in terms of identifying those principles and seeing those, um, really become what is important for working with agents. Um, but even more so, it's really impressive to see the industry changing in different ways — I don't think they're intentionally trying to improve the state of AX, but they are, right? From agent, um, auditability

39:23

and, like, identity — that whole space — that's one of the core tenets, or principles, of AX: differentiating agent interactivity, agents broadcasting who they are in a trustworthy way, and then making sure those things are auditable. It's right there on it. So it's really coming, um —

39:43

all the stuff has really come to fruition now. Yeah. I think, like, a couple months ago, some people's definition of AX was, like, how do you write applications that are human-to-agent and then agent-to-external-system, and it was predominantly, like, enforced by chat applications and stuff. But I'm pretty sure that's

40:06

not necessarily the end-all-be-all for everything. What are you seeing out there in the Netlify world on, you know, just background agents versus — like, do people want chat still? Is that a thing?

40:18

Uh, definitely. So, for better or worse, at least from what I've seen — uh, the reality is, to your point, the market's going to decide. Like, what people actually end up, you know, voting for with their dollars and what they're investing in is going to decide it. However, the more of

40:40

these systems I use and try to integrate in my personal life, the more I actually crave more voice interactions, right? Like, there's a lot of times — you know, when I'm doing things that I don't want to do, like if I'm doing the dishes or cleaning or something — I'd love to just kind of

40:59

be, you know, working on projects — being able to talk to these agents and get them to do work while I'm doing other stuff and I can't use my hands, things like that. That ends up being where I want to do a lot of my personal exploration. So, all that to say is, I think that we're going to see, you know, a nice range of mediums

41:21

that people want to work with to fit what, you know, fits their needs at that moment in time. And and many times, right, if I'm like at my daughter's, you know, gymnastics, you know, uh, school, like I don't want to be talking out loud. I just want to be, you know, texting. So, like chat is very useful in that world. So, I

41:42

Yeah, I think we're gonna see all of them. Yeah. Well, I mean, I I think it's kind of like how you would traditionally work with another person. Sometimes you want to talk to them because you feel like

41:52

you can have a more direct conversation. Sometimes you want to send them a message and let them do their thing, come back to you. Other times there are external triggers that indicate something has to happen, you know, whether that's they got a Slack message or an email or some kind of external trigger that then dictates they need to do

42:10

something. So I think all those things are probably going to be true, and we'll eventually live in a reality where all of them are important. But I do agree with you on voice — I find myself more and more frequently, uh, just chatting with, uh, typically ChatGPT. I use all of them, but I probably talk to ChatGPT, at least voice-wise, the most,

42:29

and, yeah, more and more of my time is just having a conversation rather than just sending a message. Yeah, 100%. And I think — you know, Abhi, you asked about the Netlify side of things. I mean, obviously there are people building agents on Netlify,

42:49

and we absolutely support all that, and we want to support the different mediums and their ability to do so. But where, you know, Netlify gets the most kind of interactivity with agents is obviously on the extraction of data — so, like, they're trying to scrape data or search for data, or pulling llms.txt or whatever, they're trying to visit a customer website and pull information

43:12

out of it. So, we're, you know, a step two or three past that entry point and now we're trying to help provide that information in in a really great way. And I think for me, that's where a lot of my interest and kind of research focus is is like what is the best version of the web for an agent so that,

43:32

you know, we're not only leaning into what agents can do, right, and optimizing for them, but the web, you know, owner is also having a good experience. It's not super expensive for them to do these things, and they don't have to learn what Web3 is, maybe, to pay for all of it. Um, so yeah.

43:56

So, can you tell us a little bit — Netlify just had a launch. Yes. Last week, right? Maybe October 1st, if I'm

44:02

remembering my date correctly. Can you tell us and the audience a little bit more about what what did you launch? What are you excited about? What are the things that you've been cooking over there that maybe people missed or haven't quite seen yet?

44:15

Right. So, um, just a small bit of background context. So, Netlify, um, has been doing web infrastructure, um, for 10 years now. So, you know, we make

44:27

it as easy as possible for you to go from either a repo or just some files to having a published, um, website on the other side of it. And, you know, we do that not just for the static stuff, but the full-stack nature of sites as well. And we have, you know, spent this decade optimizing for the developer experience

44:46

and just having an opinion on not just like how to build any technology ever, but like how to build websites and web apps. And so that allows us to cut out a lot of noise and generic stuff. And so we can be very opinionated and develop a great, you know, uh, developer experience from that. And as soon as agents started generating

45:10

code — right, now, first of all, code is cool to look at, I guess, I like looking at code, but it's kind of useless unless you can run it somewhere. And so agents really needed, um, this ability to host and run that code on the web. Um, and so it's a full end-to-end experience, not just this whole isolated feature. So

45:37

that's where Netlify really got into supporting the AI community in this new AI development era. Um, supporting, like I had mentioned earlier, various different, um, what we call synchronous, um, uh, agents, where you're kind of collaborating with them in real time. So, like, if you're in Cursor or Copilot and

45:57

you're talking to the agent and it's generating code, and you're seeing a preview and doing some back and forth. Um, and building up to, you know, uh — last week we launched two really, um, important new parts of our offering. One is called Agent Runners and the other one is our AI Gateway. The AI Gateway, which is, you know, probably the easiest one to talk

46:23

about, is, you know, just a standard ability to, um, start using inference from providers like Anthropic and OpenAI through Netlify. Um, but our kind of take on this is, we wanted to optimize again for the developer experience. So with Netlify, if you write a serverless function that starts using, let's say, the OpenAI or Anthropic

46:50

SDK, it'll automatically start, you know, using that inference from them. There's no configuration needed on your side, no bring-your-own-token. Um, you obviously can, but, um, you don't have to. Um, and there's also no additional configuration layer on our side for you to set these up. It's literally:

47:11

create a serverless file, import their SDK — you can copy and paste their SDK code from their, um, from their websites — throw it in a serverless function, and it will just work. So that developer experience is huge. We've already seen, uh, over a billion, uh, tokens going through our AI Gateway, which is amazing, um, given that we launched it last, uh, Wednesday.
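As a sketch of the pattern Sean is describing, a Netlify Function might look roughly like this; the automatic credential injection is per his description of the AI Gateway, so treat the no-config behavior and the model id as assumptions and verify them against Netlify's docs.

```ts
// netlify/functions/chat.ts — sketch of the pattern described in the episode:
// a plain serverless function importing the OpenAI SDK. Per Sean's description,
// Netlify's AI Gateway supplies the inference credentials automatically, so no
// key is configured here; the model id is illustrative. Verify with Netlify's docs.
import OpenAI from "openai";

const client = new OpenAI(); // gateway-provided credentials assumed per the episode

export default async (req: Request) => {
  const { prompt } = await req.json();

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative; use whichever model the gateway supports
    messages: [{ role: "user", content: prompt }],
  });

  return Response.json({ reply: completion.choices[0].message.content });
};
```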

47:34

And then the Agent Runners side — this is my favorite part, right? So Agent Runners is our ability to orchestrate agents on behalf of customers. So, Netlify doesn't, you know, want to build our own agent. We believe that, in terms of, like —

47:54

we don't want to build our own, say, Bolt or, uh, Cursor competitor, things like that. We believe there are going to be many different types of agents in this market, and they're going to suit, you know, different people in different ways. So we want to support that ecosystem,

48:11

and with Agent Runners we're able to import these foundation model agents — um, that's what we support at launch. So Claude Code, Codex CLI, and Gemini CLI. Um, you know, you can go to Netlify, write a prompt, and we will run that prompt with some additional context against those agents on your behalf, do a build, and then do a

48:39

preview deploy of that. And what that means is you're not just getting the code generation, but you're also getting a validated uh build and a uh preview deploy um so that you can validate whether or not it's working the way you want and then decide to, you know, iterate on it or or merge it or create a PR from it. And I can even demo this one today. That one's pretty easy to show.

49:06

Um, but at least for me, there's this gap in the market right now where there are the Bolts and the Lovables of the world that are really easy to use, um, for prototypes and net-new, you know, sites and things like that, and they're amazing. Um, and then there are the Claude Codes and the Codex CLIs of the world, which are really, really

49:31

powerful and can work on much more complex sites and systems, but you have to be much more technical to use those. Yeah. So there's this gap between, wow, I want the best models — or, sorry, the best agents — but the team that really needs them can't use them. My marketing team can't use it. My finance team that wants to update the

49:51

finance page can't use them. So by us taking on the orchestration of this, you can go to Netlify, go to your site, and anyone on your team can ask, hey, update this page to do X, Y, and Z, and it's going to do all that for you. And you didn't need to know anything about what a CLI is, how to pull down a GitHub repo for it to work on, how to

50:17

push a PR, any of that stuff. It just kind of handles all the technical parts, because that's what Netlify does already. Yeah. So we just make it really easy to run these models — or these agents — on your behalf. And then the preview deployment is super clutch, because if they don't know what

50:34

they're doing, they can at least visually check it with their eyes. Yeah. So, that's tight.

50:40

Exactly. I'm not just gonna try to throw all the Kool-Aid at everybody here, but it's been really cool to see the team — how they've actually changed their work now that they have access to Agent Runners, because they actually used Agent Runners to build Agent Runners. At a certain point, it was like, okay, it's stable and it's doing mostly what I want, but

51:04

man, I wish it did this. And someone would throw that idea into an agent run, go to a meeting, whatever, and then come back and it's implemented. And instead of it being, like, hey, let's talk about this ticket — should we do this, should we do that? — someone just threw the experiment out there, showed the

51:22

demo of it, and it was like, we're doing this, right? Yeah. Yeah. Okay. Like like

51:28

that's — and a lot of times, yeah, it's so much easier to have throwaway prototypes, too. Like, you might not even use it; it might just be that it gets the idea flowing, and now you can say, okay, this part was right, these things were wrong, now we kind of know the direction we want to go. So the next time, you actually get more sophisticated and you can get it closer to

51:47

where you want, because you know what you're doing now. Absolutely. The sunk cost fallacy, like, drops to zero, and now you're not just deciding to do something because we invested a lot of time in prototyping an idea or researching it. It's like, let's just try this thing out. Like, you know, Matt, our CEO — he, uh, um, there was this

52:07

one problem where it was like, you know — DNS is hard. I think we all know that. Uh, and, you know, because we host domains and help people register them, we have to deal with a lot of DNS issues and people registering their domains on external providers. There are a thousand external providers,

52:27

and they all have their own version of how to do this. Um, and so whenever someone comes to our app, you know, there's this question of, like, hey, what if we just kind of taught them how to do this and register within other people's, you know, registrars, right? So, like, can you register this domain on GoDaddy or, you know, uh, Namecheap or

52:50

name.com? And the prospect of doing that before Agent Runners was like, okay, well, where are we going to source the data? How are we going to figure this thing out? And, like, it's a scale problem,

53:02

because we're not experts in all the systems, nor should we ever become that. But, you know, AI is trained on basically all of that data, and we can just get an agent runner to build out a proof of concept of this — like, would the experience be totally different if, whenever someone

53:22

hit an issue where it was like, hey, you're supposed to go to godaddy.com to register this and we said, here's exactly how you do it. And now these less technical users who are getting really into web development but still have to deal with DNS are really walked through this problem. And again the

53:40

decision to do this was not like okay how are we going to scale this? How are we going to figure out all this data? Blah blah blah. It was, "Hey, Matt

53:47

kicked off an agent run while we were going into a few meetings, iterated on it a few times, and now we're, like, 95% sure we should just ship this — you know, let's fix a few issues here and there, and we're done." That's a different world and a different workflow and velocity that the entire team gets to have now.

54:10

Absolutely. All right, Sean. Uh we we went a little long, but we do appreciate you coming on. Uh what's the best way

54:16

for people to get in touch with you or stay up to date with all the stuff that you're doing over at Netlify? Yeah, I am, uh, JavaScript — which is what my kids think I do for a living — uh, on all the things: X, uh, Bluesky. And also check out Netlify. We have a lot of updates coming and a lot more videos

54:35

and content to show you how to really, you know, get a lot from this, uh, new AI workflow. And we're going to see you at the conference, right? Yes, I will be there. I'm excited to see you all in person again, and, uh, yeah,

54:50

looking forward to it. See you. All right, we'll see you.

54:57

That's awesome. All right. Yeah. I wonder if Agent Runners pass the beer test. Can I go to the bar and make

55:05

some moves? That's how we always tested, like, Cursor background agents. It was, you know — can Abhi and I go have a drink, and then when we come back, we have a prototype ready for us to actually play with. So, yeah, we'll have to test the Agent Runners next time.

55:24

Yeah. And I think Sean said yes. So we can do that. Yeah. We're going to test it, though. We're going to try that out. Uh, but let's go

55:30

on. We are a little behind and we are on a tight schedule. So, we're going to bring on our next guest right away. We

55:37

have Daniel from Weaviate. Daniel, welcome to AI Agents Hour. Hey, Shane and Abhi, how are you? Doing really well. Doing really well.

55:47

Thanks for having me. Yeah, of course. Thanks for coming on. Uh, yeah. Well, we basically, you know, if you're not

55:54

familiar with the show, we just like to ask you some basic questions. What's Weaviate? But first, you know, who are you? What's your background, and how did you end up, uh, being at Weaviate as a

56:07

developer experience engineer? Yeah, that's a great question. Um, well, like it says on the screen, I'm Daniel Phiri. I'm a developer at heart, to be honest, and I guess developer experience

56:18

engineer is an extension of all of that. I have been building in some way, shape, or form for quite some time. I think I started out with, like, my uncle bugging me to do, like, Joomla sites I don't know how many years ago. Dude, for me it was Drupal, so I can relate to that.

56:38

Yeah. Um, so that was back then, and then, you know, I did a lot in, like, the GraphQL space. Um, and so I really liked tools like Gatsby. And so I was, like, a real web dev person. And then at some point I decided to

56:52

formalize my education, you know, as most people do. Um, and then I did like computer science and got into some data science and AI. Then I realized that wasn't for me. And so I went back to the web space and I I did a bunch of um developer relations work for some really

57:10

awesome open-source companies. Um, I worked with the team at Hasura for a bit. I worked with the team at Strapi. Um, and then I decided I needed some time to sort of, like,

57:23

recover, and so I spent a couple months just, like, traveling and rebuilding this vintage bike that I bought really cheap. Um, and then, like, riding it around. And this was sort of around the time when people started to realize that AI was not going anywhere. So around, like, GPT-3.5. So

57:44

that's, like, ChatGPT on GPT-3.5. Um, when people were past the "oh, this is impressive," and I think other companies started to open up APIs. Um, and at this point I was

57:58

done building my bike and I figured I should probably build something else. And so I started to build an expense management tool, um, for myself, because I realized I was spending way too much money on subscriptions and expenses in general. Um, and so I was building this app, and I realized I need a place to store my vectors, because I was doing a bunch of retrieval of, like,

58:26

documents that I was chunking — essentially getting a lot of, like, my receipts and invoices and putting them somewhere, storing them somewhere. And a lot of the tools I tried to use didn't make a lot of sense. And so that's how I found out about Weaviate, um, cuz I ended up using it. And, yeah,

58:43

then I ended up joining the team as a developer experience engineer. It was, um, it was wild. I feel like you would like exactly what I was working on. I joined to work specifically on TypeScript and JavaScript. Um,

58:57

nice. Yeah. Yeah. It's always good to hear there are more people like us coming

59:03

from the web world, right? I think a lot of people — this is a lot of our audience, you know, because we're formerly from the web world. So you start in web, you've worked with JavaScript, you've done some TypeScript, you've built some websites, you've probably played with React and a lot of other things, but now AI is everywhere,

59:21

right? I don't think you can really be a web developer without having to, like, dip your toes into building things with AI, right? Whether you're just using it personally or integrating it in the applications you're building. So it's cool to hear

59:33

the story — it's somewhat similar to, I think, Abhi and me, where we were both pretty heavily in the web world for a long time. When you joined, was Weaviate trying to make that, I guess, move to JavaScript and TypeScript, or were they Python first? Like, how was it set up when you got there? Yeah, it's a great question. I don't think I've talked about Weaviate much. So Weaviate

59:56

is, um — I like to call us, like, an AI infrastructure company, primarily through, like, our vector database. So we essentially try and help AI-native developers — anyone building with AI — to build sort of, like, these scalable applications. And the background of Weaviate actually comes way before this, like, AI 2.0, you know. I

1:00:19

would say it sort of came around the time when people were really into TensorFlow, and so our core user base was Python developers, data scientists. And I think at some point they realized there's a huge — excuse me — lack of effort, uh, or rather lack of involvement, in the web and JavaScript space, and they needed someone to sort of, like, set up base on the developer experience side. Um, so yeah, I got in

1:00:50

because they were actively trying to do that. So the first couple of months was really about refreshing whatever SDK we had and building, like, a foundation of resources and applications. So, like, a sample React application for a simple RAG app, for example, all the way to, like, agents when they

1:01:09

became a thing. So that was sort of, like, the reasoning behind me joining. Awesome. Yeah. What was

1:01:20

funny is, I think, you know, 2024 — I've heard a lot that that was the year of RAG, right? And you did see that, like, anytime you were starting to work with AI, you'd almost always end up with: okay, I need a vector DB, I need to set up semantic search. And now you hear all these people saying RAG is dead, but

1:01:38

we all know, like, okay, no, it is definitely not. It's just that it now becomes such a core part of what you're doing that you don't need to talk about it quite as frequently. People don't talk about it as much as they used to because it's, I think, now become an accepted, expected part of your stack — that you need this knowledge retrieval or this information retrieval

1:01:58

mechanism when you're building an agentic application, a workflow, or an AI agent. So can you tell me a little bit about what makes Weaviate special? There are a lot of vector DBs out there, and there are a lot of pretty good ones, and I put Weaviate in that bucket. What do you like about Weaviate that you think separates it a bit?
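To make that retrieval mechanism concrete, here is a minimal sketch in TypeScript of what the query-time step looks like. It is not Weaviate's API; `embed` is a hypothetical stand-in for whatever embeddings call you use, and the search is brute-force cosine similarity over pre-embedded chunks, which is exactly the part a vector database replaces with an ANN index once the data gets big.

```typescript
// Minimal sketch of the retrieval step in a RAG pipeline (not a real client API).
// `embed` is whatever embeddings call you already use; it is passed in here
// so the sketch stays self-contained.
type Embedder = (text: string) => Promise<number[]>;
type Chunk = { text: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute-force nearest-neighbor search: fine for a demo, replaced by an
// approximate index in a real vector database.
async function retrieve(query: string, chunks: Chunk[], embed: Embedder, topK = 5): Promise<Chunk[]> {
  const queryVector = await embed(query);
  return chunks
    .map((chunk) => ({ chunk, score: cosineSimilarity(queryVector, chunk.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(({ chunk }) => chunk);
}

// The retrieved text is then injected into the model's context window.
async function buildPrompt(question: string, chunks: Chunk[], embed: Embedder): Promise<string> {
  const top = await retrieve(question, chunks, embed);
  const context = top.map((c) => c.text).join("\n---\n");
  return `Answer using only this context:\n\n${context}\n\nQuestion: ${question}`;
}
```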

1:02:22

Yeah, that's a great question. Before I get into that, actually, there are a lot of people saying RAG is dead, which I think is a very cyclical thing. People have been saying that for, I think, two years, but what people are calling it

1:02:36

now is context engineering, right? Just because it's table stakes at this point. Exactly. And so I think everyone

1:02:43

got tired of how RAG wasn't this big, jarring concept anymore, so it wasn't fun to just mention and show off, and now everyone's like, oh, context engineering. And now it's, oh, what is that, how do I do this? Yeah, I think people thought RAG was very prescriptive: it means you have to have a vector DB, you have to do this, and

1:03:07

RAG was more general than that. But I think context engineering captures it better: there are components of context engineering, and that's incredibly important, and there are tools that you use when you're doing context engineering, and you can be one of those tools.

1:03:23

As a pessimist, I think people invent new words so they can make money. Maybe they couldn't make money off the word RAG anymore, so they had to change something up, and then people had to come up with context engineering. Yeah, I absolutely agree. I feel like it's the same thing with agents. And I don't know if you

1:03:41

might relate to this even more, coming from the web world and then discovering what all these AI people are calling things, and you're like, that's it? Yeah. Yeah.

1:03:52

Like, that's all you're describing. So yeah, I absolutely agree. I think people just try to make money out of it. But more on Weaviate, what makes us

1:04:03

special. We're primarily built to be as flexible as possible, and sometimes that can be a bad thing because people aren't really sure what you do really well. And to give

1:04:17

people an idea of the areas where we thrive: one, we're built in Go, so you can build your own modules for whatever you want to do. And when I said we're an infrastructure company, we're really trying to make your life as easy as possible, sort

1:04:33

of from ingestion to querying, right? A lot of databases only meet you at query and storage, but we really do our best to abstract away the difficulty of getting whatever data, in whatever form you have it, into the database, and then serving whatever needs you might have when you actually do query that database. So our ingestion process is extremely,

1:04:59

extremely robust, I would say. And we have some really cool features coming out that will make that even nicer. I'll tease those: they basically let you dump a bunch of data, go to sleep, and when you wake up, all the data is in your database. Because, to be honest, vector databases deal with a

1:05:25

huge amount of data, an unimaginable amount of data. Before I joined, I had no idea people were processing terabytes of data. That's another level of scale for me. I don't know about you two, right?

1:05:45

Yeah. I mean, it's the amount of data, especially if you think about some of these enterprises that we talk to: the amount of historical data that they need to get into a vector DB, and they need to figure out the right chunking strategy and how to inject that into their context in the right

1:06:03

way. It is a massive scale for some of these companies. Regular databases hit limits too, like too much concurrency. It becomes a big engineering problem, actually.
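Here is a rough sketch of what that ingestion side can look like, assuming hypothetical helpers: a fixed-size chunking strategy with overlap, plus bounded batch inserts with retries so a large backfill doesn't trip concurrency limits. The `insertBatch` callback stands in for whatever client call you actually use; only the shape of the loop is the point.

```typescript
// Rough sketch of an ingestion pipeline: chunk documents with some overlap,
// then insert in bounded batches so a huge backfill doesn't overwhelm the DB.
// `insertBatch` is a hypothetical stand-in for your client's batch-insert call.
type DocChunk = { docId: string; text: string };

function chunkDocument(docId: string, text: string, chunkSize = 800, overlap = 100): DocChunk[] {
  const chunks: DocChunk[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push({ docId, text: text.slice(start, start + chunkSize) });
  }
  return chunks;
}

async function ingest(
  docs: { id: string; text: string }[],
  insertBatch: (batch: DocChunk[]) => Promise<void>, // hypothetical client call
  batchSize = 200,
): Promise<void> {
  const allChunks = docs.flatMap((d) => chunkDocument(d.id, d.text));
  for (let i = 0; i < allChunks.length; i += batchSize) {
    const batch = allChunks.slice(i, i + batchSize);
    // Simple retry with backoff so transient failures don't sink an overnight backfill.
    for (let attempt = 0; attempt < 3; attempt++) {
      try {
        await insertBatch(batch);
        break;
      } catch (err) {
        if (attempt === 2) throw err;
        await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
      }
    }
  }
}
```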

1:06:19

Yeah, it's a huge engineering problem, and a lot of people forget that vector databases are databases: you have to have efficient ways of dealing with race conditions, scalability, sharding, multi-tenancy, and all of those are things that we bring to the table. Which, to be honest, you only

1:06:38

encounter when you really operate at that scale. Building your simple RAG pipeline, you probably won't encounter the need to have, I don't know, five replicas of your database just because you have so much throughput going

1:06:56

out to different clients. So those are the types of things, all the way to compression, because tokens, as cheap as they can be for embeddings, you still don't want to spend money when you don't have to, right? So we do a lot of really interesting things on the compression side, trying to save as much

1:07:18

as we can for our customers, but also trying to make things more efficient. I think a lot of people just run vector databases in default mode, not knowing that you could probably save something like 30% on storage with maybe a 5% loss in accuracy, maybe even less. And for a lot of use cases that's very negligible.
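To illustrate the trade-off being described, here is a toy scalar-quantization sketch in TypeScript. Real vector databases use schemes like product, binary, or scalar quantization with tuned parameters; this is just a compressed int8 representation of a float vector, enough to show why shrinking vectors saves a lot of storage while only slightly perturbing the values. The actual savings and accuracy numbers depend entirely on your data and configuration.

```typescript
// Toy scalar quantization: store each dimension as an int8 instead of a float.
// This illustrates the storage/accuracy trade-off; it is not how any particular
// database implements compression internally.
type Quantized = { codes: Int8Array; min: number; scale: number };

function quantize(vector: number[]): Quantized {
  const min = Math.min(...vector);
  const max = Math.max(...vector);
  const scale = (max - min) / 255 || 1; // map the value range onto 256 buckets
  const codes = new Int8Array(vector.length);
  for (let i = 0; i < vector.length; i++) {
    codes[i] = Math.round((vector[i] - min) / scale) - 128;
  }
  return { codes, min, scale };
}

function dequantize(q: Quantized): number[] {
  return Array.from(q.codes, (c) => (c + 128) * q.scale + q.min);
}

// A 1536-dimension float32 vector is ~6 KB; the int8 codes are ~1.5 KB plus two
// floats, roughly 4x less storage, at the cost of a small reconstruction error
// that usually nudges similarity scores only slightly.
const original = Array.from({ length: 1536 }, () => Math.random() * 2 - 1);
const restored = dequantize(quantize(original));
const maxError = Math.max(...original.map((v, i) => Math.abs(v - restored[i])));
console.log(`max per-dimension error: ${maxError.toFixed(4)}`);
```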

1:07:45

So it's a lot of these things that I'd call boring, because they're not flashy, but for applications built at a huge scale they're really important. Yeah, those are the things you don't appreciate right away until you need them. And then having the right dials to turn, being able to crank up the compression maybe at the cost of slight accuracy, but everything in

1:08:10

engineering almost ends up being trade-offs, right? It's cost versus latency versus accuracy. Having those dials and being able to understand them, I think, is important. With the defaults you definitely don't understand that at first, and once you do, you realize, okay, there's a lot of stuff I have to wire up

1:08:30

if I want to do it myself. And so we see a lot of different types of people who need some sort of retrieval tool, ranging from big enterprise companies with terabytes of data to basically a web developer just getting into this, and we try to cater to everyone exactly where they are. And because of that we

1:08:55

built this really cool thing. As everyone's building agents right now, we built what we call a query agent, because retrieval is a very funny engineering problem. It's also, I think, a very funny scientific problem, because getting a result is only half of the story; it's the relevance of that result, objectively, but also when you

1:09:23

contextualize it to what data you have, the relevance of that result matters, and all of these have very different answers essentially. One query you have might have a numeric answer as the valid response, where other times you have

1:09:47

a text answer. There's a lot of nuance that goes into making sure users get the right response from whatever query they put into an application or into a vector database. So we built a query agent to handle all of that overhead for people who don't want to set up all these knobs to make the

1:10:13

search or the retrieval work a certain way. And we're finding a lot of people in the web space find it quite useful. Yo, the query agent is tight, dude, because embedding, inserting, and querying, that's so much context that an engineer needs to know to then make

1:10:31

their own agent to do that type of stuff. But if I can just have an agent in front of my vector database that already knows how to query it, and I'm just talking to it, or even my own agent is prompting it, yeah, that is legit. So, we do have that, and I don't know if we have a lot of time. I could

1:10:48

probably demo it at some point. It's good to hear that you'd find that really useful, to be honest, because the goal we had with building this is that a lot of people want to interact with vector databases in very different ways. A lot of people want to have that pipeline. A lot of people just want to have this exposed over an API. And so we

1:11:06

realized this helps a lot of people, because we have a cloud offering where we've also built this really nice front end for the query agent. You essentially have a UI where you can chat with your data. And all of this works really well with whatever enterprise role and access management systems you have. So someone who doesn't have access to

1:11:29

specific things is not able to talk to, or at least access, those things. But even on the other side, there's a whole API. So you could expose this agent to your own agent, and then your agent is able to see these as specific tools it can use.
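That "expose the query agent to your own agent" pattern is essentially handing your agent one tool whose implementation happens to be another agent. Below is a hedged sketch of that shape, with hypothetical types and a hypothetical `QueryAgentClient`; the real query agent API and whatever agent framework you use will look different.

```typescript
// Sketch of wrapping a query agent as a single tool for another agent.
// `QueryAgentClient` is hypothetical: picture it calling a hosted query agent
// that already knows your collections, picks the right one, and runs the search.
type Tool = {
  name: string;
  description: string;
  execute: (input: { question: string }) => Promise<string>;
};

type QueryAgentClient = {
  ask: (question: string) => Promise<{ answer: string; sources: string[] }>;
};

function makeQueryTool(queryAgent: QueryAgentClient): Tool {
  return {
    name: "search_knowledge_base",
    description:
      "Ask a natural-language question over the vector database. " +
      "The query agent decides which collection to search and how.",
    execute: async ({ question }) => {
      const { answer, sources } = await queryAgent.ask(question);
      return `${answer}\n\nSources: ${sources.join(", ")}`;
    },
  };
}

// Your own agent then just receives this in its tool list; it never needs to
// know about embeddings, collections, filters, or ranking, e.g.:
// const agent = createAgent({ tools: [makeQueryTool(queryAgentClient)] });
```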

1:11:46

I've actually built a couple of multi-agent systems with this, integrating with, I think I did something with Twilio and Resend, where I had a bunch of data in my Weaviate cluster. This is just newsletter data or basic snippets of my day. And I'd be like, "Hey, I just came back from this trip. Could you send a little newsletter to my friends with

1:12:19

the highlights of this trip?" And the query agent would pick out exactly the right thing in the right collection, because it knows where everything is, and then trigger the email tool to send an email, and then boom, my friends have this thing in their inbox. That's so much time saved for me.
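As a sketch, that flow is just two calls chained together: ask the query agent for the right snippets, then hand the result to an email tool. Everything below is hypothetical glue code, not the actual Twilio or Resend integration.

```typescript
// Hypothetical glue for the "trip newsletter" flow: the query agent finds the
// relevant snippets, and an email tool sends them out. Both clients are stand-ins.
type QueryAgent = { ask: (q: string) => Promise<{ answer: string }> };
type EmailClient = { send: (msg: { to: string[]; subject: string; body: string }) => Promise<void> };

async function sendTripNewsletter(
  queryAgent: QueryAgent,
  email: EmailClient,
  friends: string[],
): Promise<void> {
  // Step 1: let the query agent pick the right collection and pull the highlights.
  const { answer } = await queryAgent.ask(
    "Summarize the highlights from my most recent trip as a short newsletter.",
  );
  // Step 2: hand the result to the email tool.
  await email.send({ to: friends, subject: "Trip highlights", body: answer });
}
```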

1:12:37

Not that I don't like to, you know, write. So, you don't like to send your friends messages. We won't let them know.

1:12:43

Yeah, we won't tell them. Hopefully they aren't watching this, but yeah, all the emails you receive with my trip updates, that's all an agent. Yeah. So we don't have time

1:12:55

necessarily for a demo today, but you kind of teased some things coming up in Weaviate. So once you know when those are going to launch, we should have you on again and make some time so you can demo the query agent a little more, or whatever else you might be cooking up over there that you haven't been able to announce. We'd definitely love to have you back.

1:13:14

Oh, that would be great. Yeah. Are you in SF, Daniel? No, not at the moment, but I should be

1:13:20

there next April. Nice. Okay. Anytime. Just let us know when you're there. We can

1:13:25

hang out. Yeah. When you're in town, hit us up. We definitely should hang out.

1:13:30

Yeah, absolutely. I didn't get the chance to chat with you, Abhi, when you were in Paris, but hopefully in SF. Yeah, definitely. All right, Daniel. Well, we do have to run. Not that we want to kick you off, but Abhi is probably going to miss a

1:13:44

flight if he doesn't leave in the next three minutes. We're in my hometown now, and he has to fly back to SF. But it was great to have you on, and next time you launch something,

1:13:55

reach out. We want to see it. We want to see a demo of some of this stuff, too, because I think it'd be really cool to see and our audience would love it.

1:14:01

Anytime. Yeah. Thanks so much for having me and good luck with your flight. Thanks. All right. See you, Daniel.

1:14:07

Bye. You'll wrap the show up? Yeah. Obby's got to take off here, so you're seeing him pack up. And while he's packing up, I'm going to wrap

1:14:19

this thing up. Thank you all for watching AI Agents Hour with Obby and myself. Please, if you haven't already signed up for the TS Comp, go to tsmp.ai and sign up. You can register virtually; it's free. Or coming in person is even

1:14:37

better. See you, dude. Obby's going to catch a flight. And yeah, today we

1:14:42

talked about OpenAI Dev Day. We went through all the news, all the announcements. We talked about a bit of other AI news. We talked to Sean from Netlify. We talked to Daniel from Weaviate. It was kind of rapid pace because we were on a tighter timeline

1:14:54

than normal. Sometimes we'll go for two hours if time permits, but unfortunately today time did not permit. We were in a hurry. But we do appreciate you all for watching. Please go to mastra.ai

1:15:07

if you are curious about what we do. Go to mastra.ai/book and grab the book. If you don't have your

1:15:14

copy, we will be in SF this week if you're around. And yeah, we look forward to chatting with you all then. Have a great day, and we'll see you