
AI should do chores, not the fun stuff

What’s the *right* use for AI? Laurie Voss thinks it’s great at doing boring chores, and in this episode we learn what that means and how we can put the robots to work so we have more time for the fun stuff.

Full Transcript


Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

JASON LENGSTORF: Hello, everyone. And welcome to another episode of Learn With Jason. Today, we are going to dig into something that's been -- it feels like everything anybody's talking about today: AI. We're going to dig into maybe a hot take? I don't even know if this is a hot take. I feel like I've been struggling with where AI fits into the landscape for developers. I don't think it's going to take our jobs, I don't think it's going to replace us. I do think it's going to stay. I wanted to talk to somebody who can help me navigate the way that I'm feeling about AI, and to do that, I wanted to bring on my friend, Laurie Voss. Laurie, how are you doing?

LAURIE VOSS: Hello. I'm doing great, thanks.

JASON LENGSTORF: I'm super happy to have you here. Can I ask you to give a bit of a background?

LAURIE VOSS: Sure. I identify primarily as a web developer. I've been a web developer for 27 years now. Which means I have, like, blog posts that are older than some people's entire careers.
For a while there, I was cofounder of npm, Inc., which is how a lot of people know me. These days, I head up developer relations at LlamaIndex.

JASON LENGSTORF: Excellent. So, I find this whole space to be sort of -- it's interesting to me because the more I learn about it, the more I realize a lot of my fears were unfounded. But I also find it to be very opaque. It seems like a lot of what's happening is either wrapped up in hype -- this is going to take our jobs, this is going to replace developers, humans will be obsolete -- or it's so densely academic that I can't really parse it, right.
I feel like a lot of us are sort of watching this happen and still sort of unsure how it fits into reality. So not the hypey, it's all over, but the more real, how does this settle into everyday life? And what that means from an AI standpoint.
And so as somebody who sort of came from the web dev world and transitioned into the AI and LLM space, what's your take on this?

LAURIE VOSS: Um...I mean, obviously, I have a lot of feelings about it. [Laughter]. I mostly agree with you about, um, the hype train. The hype train is AI's biggest enemy at the moment. People are right at the top of that curve of hype, where they're like, yeah, this is going to -- you know, it's going to take over the world. We've had serious international conferences about whether or not AI is going to kill us and whether or not that's good. [Laughter].

JASON LENGSTORF: Whether or not it's there yet. [Laughter].

LAURIE VOSS: It can't count the "r's" in strawberry. It's not self-replicating gray goo yet. One of the things it's particularly good at is code, and we should zoom out a little bit, right. LLMs are autocomplete. They are autocomplete on steroids. You've given them a huge corpus of everything they can find on the internet, and they autocomplete when they see something that looks similar, and it turns out that if you do that at large enough scale, it begins to look like reasoning. Can computers think, blah, blah, blah, blah, blah. The thing is, the gold standard for AI stuff, for a long time, has been the Turing test: can a human operator tell that it's talking to a computer when it can't see who it's talking to? And that test is way in the rearview mirror, at this point. You can't tell that this code -- or this language -- was written by AI half of the time.
But the reason it's so good at code is because the training data's full of code. Right. It's looked at the entire internet. It read web pages to learn to autocomplete the content of the page, but it's also reading the page itself -- a pile of HTML and JavaScript. So these models are extraordinarily overtrained on code in general, and web code in particular. They're really, really good at generating web code at this point.
One of the things about their training data is that it is full of extant web code, so they're not going to be good at the latest and greatest thing. It's not going to be good at this code you just invented yesterday because it's never seen it before.
One of the things you have to correct for when you're coding with an AI assist is that you have to make it look at new stuff, because it just has a lot more training data on how to do it the old way.

JASON LENGSTORF: That makes a lot of sense, actually. I think the argument I've heard -- the one that's sort of stuck in my brain -- is that it's very much a pattern-matching machine. It's using the sum of its training data to basically assign a probability to what the next thing in the chain would be, and so what you just said about using old data makes a lot of sense. Based on what it's read on the internet, most of the code is old, so the probability it would use older code is much higher because, yeah, most of it's done that way.

LAURIE VOSS: Right.

JASON LENGSTORF: Okay.

LAURIE VOSS: But one of the ways that's good is that it tends to gravitate towards, um -- I don't know if I'd call it "best practice." It does, by default, what everyone else is doing, because that is the majority of the stuff that it gets trained on.
As a coding assistant -- I think I was discussing this when we were talking about coming on the show -- it's really unlocked recreational coding for me. I had stopped building stuff on the weekend because every time I wanted to get started, I was like, ah, I have to figure out how this framework works and how this web API works these days and a whole bunch of stuff that is boilerplate, that is not the problem I was trying to solve. An AI assistant gets that out of the way. You can say, build me something that looks like that, and you can start iterating on it, and it gets you the first 50% of the way instantly, which means I can focus on the thing that I was excited to work on.

JASON LENGSTORF: That is, I think, a good point. In that case, a lot of the concerns I have about AI-generated code fall by the wayside, because if what you're trying to do is experiment with an idea, learn something new, try out a new workflow or thought or platform or whatever, there's a lot of boilerplate that gets in the way. The way we used to solve that was to look for a library or an npm create template. And the benefit of that is we didn't have to think about what that other stuff was, we just got to work on the thing we wanted to build. When we run npm create vite@latest, we're eliminating the need to think about SSR and hot reloading and all the things that used to be a huge amount of the work we did.
From that standpoint, I totally get that argument. Instead of having to hunt down the right boilerplate, I can say, give me a starter for "thing." So one of the big concerns I do have about this is, what happens as this code ages? Right. And, depending on who you talk to, there are reports that say AI is making us way faster at coding, and reports that say it's making us slower. Have you seen significant AI usage and seen firsthand results? I'm only going off secondhand reporting.

LAURIE VOSS: Sure. There's LlamaIndex itself. It's a framework for building AI applications. It's created by a company that is full of LLM enthusiasts, so an enormous amount of LLM usage goes into writing the framework itself, and the framework is pretty big at this point.
We find -- Logan, who's the head of our open source, he loves using LLMs for refactors. Okay, I wrote this six months ago, it's obviously garbage now. Let's restructure this so that I can add new methods that do this new thing. You feed an entire function body to the LLM and it will refactor the code to make it more extensible. Refactors are a thing they're extremely good at because there's no new thinking at all. It's just, rephrase this existing thing so that it's tidier. We use that all the time.

JASON LENGSTORF: Okay.

LAURIE VOSS: The, um -- the pitfall in a large codebase can be if you have a very specific house style or you have, like, a lot of internal APIs. The LLM isn't going to know about that stuff, but there are tools. The name of the biggest one is, of course, escaping me. Basically, there are tools that read your entire company's codebase and enhance your LLM assistant with knowledge of your house style and your existing APIs and all of that kind of stuff.

JASON LENGSTORF: And that's the -- if I remember correctly, that's the promise of tools like Cursor, right? They're sort of aware of your actual code. They're not just drawing on the whole body of the internet -- they are drawing on that -- but they also know your code.

LAURIE VOSS: Cursor -- I'm not 100% sure of the level. It is definitely aware of the files in a repo. It will refer to APIs that I haven't told it about explicitly. The tools I'm thinking about do something much deeper; they inhale all the code and keep it --

JASON LENGSTORF: Stuff like Sourcegraph. Their promise is they don't just look at the current repo, they will look at every repo in your private GitHub org and purport to understand how they fit together, so you can do not just, like, repo refactors, but org-wide, all-code refactors, is the way that I've seen people talk about it. I've never been at a company large enough to use Sourcegraph, but it sounds pretty cool, from what I've heard.

LAURIE VOSS: Speaking of refactors, the other use case I've heard a whole bunch about, for LLMs, is upgrading large legacy codebases from, like, one version of Java to the next version of Java. I've been told, by some of our mutual friends at Google -- I'm sure you can think who they are -- that Google has, you know, gigantic legacy Java codebases, and they have saved thousands upon thousands of person-hours by turning the task of upgrading old repos from, you know, we have to hire an intern and the intern has to manually change every API to the new API, to, we point the LLM at it, say switch it from Java 14 to Java 17 or whatever -- I don't know the current versions of Java -- and it just does that, and it's somebody's job to review the PR instead, which is much faster.

JASON LENGSTORF: That one seems very reasonable. Right. It's not fun to do an upgrade of legacy code to the next version of the legacy code, and that almost always leads to somebody not just wanting to upgrade, but starting to refactor, and when you get to refactors, there's scope creep. I used to work at IBM. A quick project turns into endless months of, we're just going to rebuild the whole thing.
Letting a bot do that feels like a good way to stay focused. Bots don't get bored. They don't have opinions about whether or not the work is below them. The review does put an interesting onus on people to, again, not get bored. That, you know, that is a challenge.

LAURIE VOSS: I mean, if the code has unit tests, then, you know, as long as the tests pass, you're probably fine. One of the other things you can get an LLM to do is point it at the old codebase and say, write the unit tests. And it will say, okay, I know how to write tests for things. It will write machine-generated tests that make sure the code works the way it says it works, and when you point the LLM at the code and say, upgrade it, you can use the tests it generated previously to make sure it doesn't break anything when it does the update.

JASON LENGSTORF: That's a pretty cool way to think about this stuff, too.

LAURIE VOSS: All of the best uses of LLMs, and GenAI in general, are getting it to do boring stuff. That was my thesis for coming on this show, right? It's not good at creative work. Don't ask it to write your thesis for you, because it's not going to know what the hell you want it to write. It's not going to have any original ideas, but it's really, really good at summarizing and rephrasing, and in code, that is refactoring or generating boilerplate.

JASON LENGSTORF: Right. And I think this is maybe the crux of why AI doesn't scare me and why I'm not worried about my job or about the industry. I think that the only jobs that are threatened by LLMs are jobs that were sort of designed to automate themselves out of existence. If the only thing you do is move code from one existing codebase to another existing codebase, or if there's no generative thought happening in your job, then your job is always at risk, whether that's because they find a cheaper contractor and offshore it or we outsource it to an LLM. If you're not doing the human part of the project -- thinking about problems, understanding context, thinking about how to expand or improve or otherwise alter things in novel ways -- then the value -- I don't want to get into a philosophical thing about what it is to be human. [Laughter]. This is the danger of any of these conversations, right, is it starts to tip down that path. But I do think there's something to be said here: if what you're doing is generating anything novel, AI's not coming for you, at least not for presumably quite a while, given the difficulty we're seeing getting past the current plateau on anything generative that isn't basic word salad. It can only do pattern-matching based on information in the model.
Actually, this is a really good question that Robert just asked, that is a little bit of a tangent. What are your thoughts about AI being a snake that will eat its own tail? I saw somebody describe it as a downward spiral, where it gets increasingly incomprehensible because it is machines talking to machines.

LAURIE VOSS: I'm not qualified to talk about the ML implications of training machines on, you know, machine output. I know that one of the things we've discovered recently, with training LLMs, is that synthetic data -- data that is explicitly generated for the purpose of training LLMs -- works quite well. It doesn't make things worse, it makes things better.
I think there is this -- I forget the technical name for it -- there is this pattern where if you endlessly train the data on itself, it will go into this spiral and destroy itself. But the machines are relatively good at detecting when that's happening to them, so you can say, okay, this data created a downward spiral in the data set, we can just undo it. Go back to how you were three versions ago. We'll throw away that new data. So I don't think it's some existential threat to how LLMs work.
As a user of the internet, the AI-generated slop is annoying. [Laughter].

JASON LENGSTORF: It is. That does pull us back to the core thesis of what we're talking about. I'm not offended by AI, but I'm offended when somebody says AI is going to do creative things -- when someone tries to pass off AI-generated text as great storytelling. They're like, AI, generate a story. Yeah, I can tell -- it's all derivative stuff, because that's the nature of AI. Or when somebody uses the image generation and they're like, look at my beautiful art. It's an AI image. I love it when it's used as part of the process, though.
I was trying to think of something and I wanted to get out of a rut, so I had AI generate -- give me a list of 15 funny app ideas. It helped me get a few more threads to tug on so I could get out of the local maximum I was at. Similarly with images. If I'm trying to think of something to draw but I'm locked on one thing, I say, all right, give me 15 images of this kind of fantastical thing, and it'll give me something I've never seen before.

LAURIE VOSS: Those uses go beyond what I do with AI. I'm very much, like, using it to transform existing stuff. I find it's very bad at blank-sheet-of-paper stuff. The one thing I've used it for, as blank-sheet-of-paper stuff: somebody asked me, when I first got hired at LlamaIndex, what my 90-day plan would be. So I asked, what would the 90-day plan for this role look like? It wrote me 15 points.

JASON LENGSTORF: Right. None of us are going to look at this after I turn it in. [Laughter].

LAURIE VOSS: I'm a year into my job now and I feel like I'm relatively competent at it, and when my boss sees this recording, he'll be like, that was AI? Fooled ya. [Laughter].

JASON LENGSTORF: I was just showing you I know how to use the tools, boss. [Laughter].

LAURIE VOSS: Exactly.

JASON LENGSTORF: There is a question I do want to address because I don't want to fly past it: some of us have been let go because of AI, and a manager fired an entire team because he thought AI would replace us. I'm going to preemptively say, the principal cause of pain in this industry is people being generally terrible at understanding problems and progress. That is not a new thing with AI. Like, I've seen people get laid off for the silliest reasons, because some manager read a report somewhere and panicked or decided everything was different. AI, I think, is maybe better at tricking people than some of the other technologies. I do think it's worth acknowledging that, yes, it is happening. I do not think it's sustainable, and we're seeing a lot of those jobs that people eliminated over the last few years starting to pop up on job boards again, because they're realizing, oh, it turns out that for a company to function, you need people who can do the work. [Laughter].

LAURIE VOSS: Yeah. There's a bunch of ways I could go with that. The first is that we are in a weird economic time. We were at zero interest rates for a long time. The pandemic caused this explosion in hiring. There are a lot of layoffs now where AI's an excuse rather than a reason. People are just like, Wall Street would like me to lay off a bunch of people and that will boost my stock price. There are a lot of jobs supposedly being lost to AI that aren't being lost to AI at all; they are just jobs being lost because companies are shortsighted and stupid.

JASON LENGSTORF: Want to hear my hottest take, one that I'll say on camera? I'm somewhat convinced a lot of these layoffs happened because, after the pandemic, companies felt like they'd lost leverage over their workers, and the layoffs are a reassertion of dominance, reminding workers that we need jobs -- they wanted to turn it from a sellers' market back into a buyers' market.

LAURIE VOSS: I mean, maybe that's true, although -- I don't know. If people were doing it just for that reason, you'd think the people who hung on to their employees would have this startling advantage: I didn't fire all of my employees for no reason, therefore my company's doing a lot better. So I guess we'll see how that shakes out -- whether the people who did the herd layoffs do any better than the people who held on to their employees. I think we both agree on how that's going to go.
The broader point to -- to that question, um, gets into philosophy a little bit, which is the creative destruction that accompanies any boost in productivity, right. You know, the famous one is, um, the Luddites of the 19th century, whose objection was to mechanical looms for weaving cloth versus hand-weaving cloth. And they were simultaneously right and wrong. Their objection was that this is going to generate a whole bunch of shitty cloth, made by crappy machines, that is not as good as the cloth made by humans, and that was true, and that is the same thing that's happening right now. But also, it turns out having a bunch of really cheap cloth means a bunch of people who couldn't afford clothing before can afford clothing now. A bunch of people who were naked are clothed now, and a bunch of people who previously were doing very expensive tailoring for rich people can now do, like, clothing for the masses, and everybody's life got better.
Productivity advances will cost people jobs, but they also create more jobs than were lost in the long term. That doesn't mean it doesn't suck to get laid off. It doesn't mean I'm expecting you to feel happy about getting laid off because your job got automated away. But net-net, more people are going to get jobs doing stuff that was previously uneconomical to do, because of the vast productivity increases.
Like I said, this has unlocked weekend coding for me because I'm going much faster. I'm developing a lot faster than I used to, and that is, like, an obvious, measurable productivity increase for me, personally, and that is being multiplied by millions and millions of devs across the industry. We're all going to code a whole lot faster because we are getting this assist. We'll do things we couldn't do before -- stuff that would have been too hard for companies to justify.

JASON LENGSTORF: That's a good question -- or a good point. If you take it in that light, it's sort of like every advancement always has a fairly painful turnover. The Industrial Revolution was a bloodbath for one kind of job and a huge creation of another type of job that we'd never really seen before.
I think you're probably right, this has a similar shape. One parallel that I just thought of is, at the beginning of the internet, there were a bunch of people saying, it's a fad, it's not going to last. But it created this ability for people who never had the leverage to start a business -- they didn't have the startup capital to get a brick-and-mortar store and manage all the things you need to do a thing in real life -- to suddenly start businesses online, and we saw this new explosion of different business models and things that could be done that weren't really feasible in other ways.
And we also saw, you know, some pretty tragic losses -- things that used to be really normal are pretty rare now.

LAURIE VOSS: Yeah. I always think about travel agents. Remember, the way you used to book a vacation was you'd go to a physical office and a human would be like, I'm going to book these flights for you and tell you where your hotel should be, and stuff like that. Now everybody does that online. I don't think anybody is wistful for the days of travel agents. That was a whole industry that got destroyed by the early internet.

JASON LENGSTORF: Uh-huh. And now, right, the advancement of technology on the internet has gotten to the point where it's becoming less feasible for a small business to easily get online and build something unique. Right. They can get Wix or Squarespace or the templating engines, and maybe get WordPress, but it's becoming less -- it feels like in the 90s, you could find somebody who could build you a website, which was the kid down the street, for a couple hundred bucks. I know that's true because I was the kid down the street who could build you a website for a couple hundred bucks.
My neighbors have kids that are in their teens and getting into their early 20s, and they're not focused on building websites for small businesses or starting startups. They're thinking about other stuff because it all feels too unapproachable to build something that is competitive. Who's going to stop Amazon from setting up a drop-shipping thing, or stop Etsy from selling the thing you want to sell? Who's going to stop OpenTable or Instagram from being the way restaurants run now, which is my least favorite part of the food scene?
All of these things have removed the pressure of getting a small website built. But with AI now, you can kind of say, hey, can I have a website for my restaurant, and it'll give you one. And it might not be perfect, but it's pretty good, and it's not $6,000 or a retainer or whatever you have to pay today -- costs that didn't exist 10-15 years ago. It is giving leverage back to a group of people who were running out of any sort of leverage, and -- I don't know, looking at it that way, it is kind of exciting, because we get to see what people do when they get access to tools and capabilities that they didn't have a few years ago.

LAURIE VOSS: Yeah. The buzzword I've heard is "personal software." When the cost of writing software begins to drop to zero, the number of users a piece of software needs to make it viable to write begins to approach one. If I need a piece of software that does this, I no longer have to sweat about whether or not there's a viable market for it; I can just churn it out. I have a number of pieces of personal software like that already. I have an app whose whole purpose is to post what time the sunset --

JASON LENGSTORF: I know this account. [Laughter].

LAURIE VOSS: Nobody uses it, except for me. I'm the only user of this software. It posts the sunset time to Twitter and Mastodon during daylight saving time. So, personal software is a thing that can happen. I've also met -- a LlamaIndex user, actually -- he was a domain expert in medical records. That was the thing he knew about, and he wanted a startup. Instead of hiring a team of contractors, he has been brute-forcing this with LLM assistance. He has no coding knowledge. He knows exactly what he wants this piece of software to do, and he's just been hammering at coding assistants going, make it do this, make it do this. It only works for him, like, half of the time, but he has managed to bootstrap a successful startup that makes it easier for doctors to read medical records -- on the basis of no coding knowledge whatsoever.

JASON LENGSTORF: That's kind of fascinating. I do think it points to what I was talking about earlier. If we want to remain valuable, the point isn't to become, like, the best at coding. It's to have, like, novel thoughts and to be able to sort of see connections and see opportunities and describe problems and describe outcomes.
That's always been a killer skill set, and it just seems like it's going to be more important than ever, especially as some of the harder skill sets are being abstracted over. We can be grumpy about that if we want, or embrace it. Let's say we're completely wrong and this whole AI thing blows up tomorrow. Learning how to be a better communicator -- of yourself, of your ideas -- it does not matter what industry you're in, that will 1,000% be a huge accelerant. It's a no-lose game to get better at that. So, you know, hey, why not hedge. [Laughter].

LAURIE VOSS: Sure.

JASON LENGSTORF: Okay. So I'm looking at the time and I realize -- I would easily talk about this for the rest of today. But I do want to look at some code, because we're doing things a little differently today and I'm really excited about that, and I want to make sure we have plenty of time for it.
Let me go into the pair programming view here. Before I get started, I just want to drop a quick shoutout, if you are not following Laurie on social, it's @seldo.
We have Vanessa here, from White Coat Captioning. We are pair programming with Tuple. It's good stuff. So, thank you, Tuple, for supporting the show, as well.
With that, I'm going to switch over to the Tuple window, where we are now looking at Laurie's computer. And so, today, Laurie's going to drive and I'm going to just stand here and be interested. [Laughter].

LAURIE VOSS: That's what we're going to try and do. Yeah. So, the challenge that you gave me, I think, was this: you've got this little API of yours that gets data about all of the episodes of Learn With Jason that have ever happened. It links to every YouTube video that you've created, and there are resources in the Show Notes for each show -- stuff that you've linked to -- but what you told me was that you talk about other stuff in the body of the show. You talk about resources, you talk about things that you point out, and those don't always make it into the Show Notes. So, let's see if we can do a thing that AI's good at -- summarizing -- and get the LLM to read the YouTube transcripts and find all of the resources that you mentioned that are not already in the Show Notes.
And if we're getting really fancy, if it works out, we'll also provide links to where in the episode you mentioned that resource, so you can link people directly to it: this is where we started talking about this, and this is where we started talking about that.
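One piece of the pipeline Laurie just described doesn't need an LLM at all: once the model has pulled candidate resource URLs out of a transcript, you still have to filter out the ones the Show Notes already cover. A rough sketch of that step -- all field names, URLs, and the normalization rules are illustrative assumptions, not the show's real data:

```javascript
// Treat http/https and trailing-slash variants as the same resource.
// (A hypothetical normalization; a real pipeline might also strip
// tracking parameters or "www.".)
function normalizeUrl(url) {
  return url.replace(/^http:/, "https:").replace(/\/+$/, "").toLowerCase();
}

// Return the transcript-mentioned URLs that the Show Notes don't cover.
function findMissingResources(transcriptUrls, showNoteUrls) {
  const known = new Set(showNoteUrls.map(normalizeUrl));
  return transcriptUrls.filter((url) => !known.has(normalizeUrl(url)));
}

const mentioned = [
  "https://www.llamaindex.ai/",
  "http://npmjs.com/package/youtube-transcript/",
  "https://example.com/some-tool",
];
const showNotes = ["https://npmjs.com/package/youtube-transcript"];

// The youtube-transcript link is filtered out; the other two remain.
console.log(findMissingResources(mentioned, showNotes));
```

Keeping the set-difference step in plain code like this also makes the LLM's job smaller: the model only has to extract URLs from the transcript, and deterministic code decides what's genuinely missing.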

JASON LENGSTORF: Right. And this is one of those things where, you know, I vaguely know how I would attempt to approach this without an LLM for the language processing. And I've tried things like this before: it is extremely difficult, it's extremely rickety, and it almost always has false positives. So I'm very interested to see what the quality bump is in trying this kind of natural language processing with an LLM, and I have a hunch it's going to be a lot better. [Laughter].

LAURIE VOSS: Let's see. I confess, I was nervous about how this was going to work, so I have already tried this out -- you can see it in backend tests. It did work. I did manage to produce something.

JASON LENGSTORF: That's good because I talked at you for 40 solid minutes.

LAURIE VOSS: I'm not going to refer to it unless I absolutely panic. But the other thing that I wanted this session to show was, like, this is Cursor. This is the AI-enabled VS Code clone. And I want to show how you use an AI assistant effectively. Let me show you how somebody who's using this every day uses it, so you get a sense of what AI-enabled coding looks like.

JASON LENGSTORF: Yeah. Somebody's asking, is Cursor going to use the contents of backend tests?

LAURIE VOSS: That is a good question. Do you want me to move it out of the way so it doesn't do that?

JASON LENGSTORF: I don't have strong feelings about that. I'm not particularly worried about the specifics of this. I'm more interested in the flow.

LAURIE VOSS: I think even if it does, it won't make that much difference because it's not a complicated piece of code. It's something we can get done in 30 minutes.

JASON LENGSTORF: I feel like I keep trying and failing to integrate LLMs into my coding flow. Probably due to my own habits, right -- the way that I write code -- I find the coding flow to be so interruptive with an LLM agent. It, like, takes me out of the flow, and I haven't found a way to actually integrate it where I feel like I'm more productive. I feel like I'm battling the tools when they're on. I'm interested to see how -- like you said -- somebody who uses this every day interacts with it.

LAURIE VOSS: With Copilot, I get annoyed because it keeps tabbing things in and I'm like, I'm not ready for you. Hold up. Cursor works a little bit differently.
Let's get started and see how things go, shall we?

JASON LENGSTORF: Let's go.

LAURIE VOSS: It's already telling me what I've done wrong. You can't just put things in there. [Laughter]. It's doing exactly what I would do, right. It's already figured out, this is what you were going to do: you were going to run the code and see what the API's spitting out.

JASON LENGSTORF: Sure.

LAURIE VOSS: Dododododo...

JASON LENGSTORF: Are we in the right folder?

LAURIE VOSS: No, probably not. No. Thank you. If I was being really lazy, I could have just pasted that into the thingy and it would have said, are you in the right folder? You're even better than an AI. There you go, my highest praise. [Laughter]. This is --

JASON LENGSTORF: We're getting asked if you can zoom in just a little bit. Maybe give it a couple taps on Command+.

LAURIE VOSS: How's that?

JASON LENGSTORF: That's great. And then they've asked if we can go fullscreen.

LAURIE VOSS: This is the data structure we're working with: we've got an ID, the YouTube data, the links. That's the thing we're primarily going to want. So, let's first...let's first just get, um...one episode and one transcript. So...this is where we pop up the chat, and this is where you start talking to the AI like it's an actual person -- like it's your intern. So: I've got an API that returns an object...that contains a YouTube URI, a YouTube video ID at...I think it was video ID. YouTube...

JASON LENGSTORF: Video...

LAURIE VOSS: YouTube ID. Oh, is it video?

JASON LENGSTORF: No. It's ID, you're right.

LAURIE VOSS: YouTube ID. It returns an array of objects that contain a YouTube video. Let's fetch the full array, sort it by date, which is at...date. So we get the most recent episode and then, fetch the YouTube transcripts for that video ID. Go...

JASON LENGSTORF: Okay.
[Laughter].

LAURIE VOSS: So, it's talking to me. "Get sorted episodes" function. Use this function to extract the most recent episode. Putting it all together, here's the complete code. It's already figured out about YouTube transcripts, which is relatively well-known. There's an npm package to do this for you. What I'm going to do is, I'm just going to hit "apply." It's going to think for a second...yeah. Apply that. It gives me a diff that I can look at and it's the whole fucking thing.

JASON LENGSTORF: I like the diff, though. I'll be honest, this is the first time I've seen Cursor. I hear everybody talk about it.

LAURIE VOSS: It's much better than copying and pasting from a window or having tabbing. It has changed basically everything, but I'm going to accept that diff. So, uh, it's imported YouTube transcripts, so let's make sure it has those. It doesn't run npm install for me.

JASON LENGSTORF: Sure. It doesn't update your package.json.

LAURIE VOSS: There's no package.json at all. It's assuming I will do that.

JASON LENGSTORF: Got it.

LAURIE VOSS: So, get sorted episodes. There's a sort function there. Date versus date, that looks like a reasonable function to do that. It's producing an async function, total episodes, most recent episode, title and date. Oh, does it know about title?
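For readers following along, the date comparator Laurie is eyeballing here typically looks something like the sketch below. This is not the code from the show; the `date` field name and sample data are assumptions based on the episode structure described earlier.

```javascript
// Hypothetical sketch of the generated sort: newest episode first.
const episodes = [
  { title: "Older episode", date: "2024-01-05" },
  { title: "Newest episode", date: "2024-09-20" },
  { title: "Middle episode", date: "2024-06-11" },
];

// Compare parsed dates (b - a sorts descending, i.e. newest first),
// and copy the array so the original list isn't mutated.
function getSortedEpisodes(list) {
  return [...list].sort((a, b) => new Date(b.date) - new Date(a.date));
}

const sorted = getSortedEpisodes(episodes);
console.log(sorted[0].title);
```

Getting the `a`/`b` order right in a comparator is exactly the "always have to do it twice" chore Laurie mentions later.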

JASON LENGSTORF: Looks like it figured it out. Yeah. It probably made an assumption, would be my guess, because that's a pretty common field.

LAURIE VOSS: Then it fetches the most recent episode and fetches a transcript.

JASON LENGSTORF: It got close, but I can see a couple things, for example, it split the URI instead of using the YouTube ID. There's a couple pieces where it -- it got there, but it --

LAURIE VOSS: Well, the funny thing is, it exists.

JASON LENGSTORF: It does exist, which is fascinating. A viewer said it's reading stuff in backend test, which is probably correct, because how else would it know about the URI?
Let's see what we get. That's how I code. All right. Excellent.
Look at that go. Okay. This YouTube transcript thing is kind of worth the price of admission. I have an endpoint where you can get the full transcript, but it's not time-coded.

LAURIE VOSS: This is a feature of the YouTube API, it turns out. I use it all the time because I -- [Laughter]. Because I'm posting to social media as part of my job and I often get videos and I'm like, what is this video about? I don't have time to watch a 30-minute video. I paste it into Claude and say, tell me what this is about.

JASON LENGSTORF: Not to flex here, but I'm going to flex. This is one of the main reasons I love having a human captioner and why I'm so happy Vanessa's here. Not only do we have this transcript, but we also know that the transcript is accurate and, like, the White Coat Captioning team knows technical terms so they spell things right, it's not, like, the inferred sort of correct transcription. Brand names are spelled right. I'm really excited about this. You're talking about personal software, this is about to be personal software for me. [Laughter].

LAURIE VOSS: All right. So, uh, we've got that. Now, what is the next thing -- like, the hardest thing when you're working with LLMs is just remembering what it is you want it to do next. All right. So we've got the transcript. Let's say -- oh, right, the other thing that's inside of the...that's inside of your output is, um, the, uh --

JASON LENGSTORF: Resources?

LAURIE VOSS: Resources. Thank you. So, let's...first...console.log the most recent episode -- thank you. Don't console.log the transcript. Just give me that thing, please. And then...what I'm going to do is, I'm going to take this...and I'm just going to do what I always do in this situation, which is [Indiscernible] -- .JSON.
So, you've got a thing called "links." And inside of links, you have "resources." Now, I guess the next thing is to get the LLM involved.

JASON LENGSTORF: Okay.

LAURIE VOSS: So...so, I've highlighted the whole thing. I don't actually think you need to do that, but that's what I do, and I hit Command+k, because what I'm asking for is a refactor. Change this so that Main is a function called Fetch Most Recent Transcript and it returns the transcript. It thinks about that and runs through each line of code. What do I need to do to get that done? Not very much. Cool.

JASON LENGSTORF: Okay. That was dope. [Laughter].

LAURIE VOSS: All right. So, now we've got Fetch Most Recent Transcript. And we're in MJS, so I can do top-level await. That transcript, thank you, that's exactly what I was going to do. [Laughter]. All right. That transcript, "await most recent transcript." Now bring in LlamaIndex using Claude 3.5. And pass the transcript to llm.complete. Now, this is tricky because I don't know that it knows LlamaIndex well enough to be able to do that. LlamaIndex is not -- all right, it's got it wrong. It's bringing in OpenAI, bringing it in from the wrong place.
[Laughter]. Oh, no. This is it, trying to do a RAG pipeline, because the most common thing you do with LlamaIndex is put together a RAG pipeline. Do you know what a RAG pipeline is?

JASON LENGSTORF: I feel like it's a word I've heard a lot.

LAURIE VOSS: Retrieval Augmented Generation. You store all your text in a vector store and then you run a query that fetches the most semantically relevant documents. If you say, what is the capital of South Dakota, it would feed those back to the LLM: here are two relevant documents, plus the query, answer the question.
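To make the retrieval step concrete, here is a toy sketch of the shape Laurie describes. Real RAG uses embedding vectors in a vector store; this deliberately fakes "semantic" relevance with word overlap just to show the fetch-then-prompt flow. None of this is LlamaIndex's API, and the documents and scoring function are invented for illustration.

```javascript
// A toy corpus standing in for a vector store.
const documents = [
  "Pierre is the capital of South Dakota.",
  "Astro is a web framework for content-driven sites.",
  "npm is the package manager for Node.js.",
];

// Fake relevance: count how many query words appear in the document.
// (A real system would compare embedding vectors instead.)
function score(query, doc) {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return doc.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length;
}

// Retrieval step: fetch the top-k most "relevant" documents.
function retrieve(query, docs, k = 2) {
  return [...docs].sort((a, b) => score(query, b) - score(query, a)).slice(0, k);
}

// Generation step: hand the retrieved context plus the question to the LLM.
const query = "What is the capital of South Dakota?";
const context = retrieve(query, documents);
const prompt = `Context:\n${context.join("\n")}\n\nQuestion: ${query}`;
console.log(prompt);
```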

JASON LENGSTORF: Oh, okay. Yeah. I didn't realize what that was called. Yeah, I did a whole episode of Web Dev Challenge where we were loading stuff into vector stores. Hey, I've built a RAG. [Laughter].

LAURIE VOSS: Excellent. So, instead, let's bring it in manually. Anthropic from LlamaIndex. And I'll just go ahead and install it. Dododo...

JASON LENGSTORF: Okay, so this is making me feel better, as you're sort of stepping through this process, where my -- my struggle has been that I -- I just kind of assumed that if it wasn't working, that I was doing it wrong. You're like, try it in the bot and if it screws up, do it the way you know how.

LAURIE VOSS: I knew I was getting out on a limb there to use LlamaIndex, which is not the world's most famous library. And also, I was kind of vague about what I wanted.
Now I've got Anthropic. We've got a transcript. Let's say...so, this is the other way that you can do stuff. So, notice that it's trying to guess what I'm going to do next, even to the point of autocompleting what my comments are going to be, which I wish it wouldn't do. The comments are where I tell you what I'm going to create. You fill in the code.
Instantiate LlamaIndex client with Anthropic 3.5. There we go. process.env, Anthropic API key, that's probably right. Let's see... are you there, Claude? We have to do a prompt here... and you see how it's just, like, yes, you're obviously going to assign this to a variable and you're going to await that thing.

JASON LENGSTORF: I do like that it's picking up those little bits -- those little bits of -- I get that wrong all the time, I forget to write "await" all the time. That little cleanup is really helpful.

LAURIE VOSS: It's just smart enough to get me somewhere with that.
Let's try node.try. Yes. Tada!
So now we get into prompting. So, let's say our prompt is going to be "prompt." And this is where we're not talking about using AI to do the coding. We're talking about using AI to actually get the work done.

JASON LENGSTORF: Right.

LAURIE VOSS: So there's a bunch of stuff that it's worth knowing about prompting Claude, in particular. Claude was trained on a ton of XML so it really likes XML. It's very good at dealing with things when you ask it to generate and understand XML, so that's kind of the way you talk to it: you give it XML tags around stuff, so that it knows what it's looking at.
So...in here we have links.resources. Let's say...that we're going to return...an object containing transcripts --

JASON LENGSTORF: Resources would be�

LAURIE VOSS: Yeah. MostRecentEpisodes.Resources. Thanks.
Claude knows who you are. You, specifically, Jason Lengstorf. It knows who you are. It knows what you talk about. So even if I accidentally leave out the transcript and ask it to summarize the most recent episode of Learn With Jason, it makes a pretty good guess. [Laughter]. It's, like, Jason talks about Astro all the time, so I'm going to assume everything is about Astro. [Laughter].

JASON LENGSTORF: I mean, that's not wrong.

LAURIE VOSS: Your task is to look for resources mentioned in a YouTube transcript of the podcast, Learn With Jason, specifically, you are looking for resources that are not mentioned in the existing Show Notes.
The transcript is in "transcripts" tags. The existing resources are in "resources" tags.
And, we're going to iterate on this. There are a bunch of things I learned about how this is going to work and not work. Let's give it -- this is pretty close to the first thing I tried. So...let's see, transcripts is equal to show.transcripts and resources is equal to show.resources. That is going to give me some [object Object]s, I think.

JASON LENGSTORF: Oh, yeah, you'll have to stringify that, right?

LAURIE VOSS: Yeah, we'll console.log to get that first. Dododododo... horrific. [Laughter]. So, we could just -- we could just lazily JSON.stringify this, is what we could do. And, why not? That's a good start. See what we get...

JASON LENGSTORF: What kind of stuff is happening? Oh, we commented out our response.

LAURIE VOSS: Yeah, yeah, yeah. I didn't want to waste tokens on it.
All right. So, we got the episode. We got the transcript. We've got -- at the end of this, we should have the resources. Yes. Okay. Cool. So now we know what those look like.
Let's say� let's do an edit. The transcript is an array of objects, each of which has text and offset keys. Map the array to a set of chunk tags with a text and offset tag for each one.

JASON LENGSTORF: Interesting. Okay.

LAURIE VOSS: Cool. Uh�

JASON LENGSTORF: And so, like you said, you're using XML, because Claude likes XML.

LAURIE VOSS: Yeah, if I was using OpenAI, I wouldn't be using all this Claude stuff because it doesn't care.
Let's do this...likewise, map this to an array of resource tags, each of which is one entry in the array. Like, this is so close to just -- I could just write this map myself, but it's still faster for me to get Claude to do it than for me to remember what I was going to do. Cool. Comment that out so we don't get an error every time we run it.
Cool. So now we've got resources. We've got offsets. Cool. Now let's give it to the LLM.
Let's see how it does...
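The chunk-and-resource mapping Laurie has just described can be sketched like this. The tag names (`chunk`, `text`, `offset`, `resource`) follow the conversation; the sample data and exact prompt wording are assumptions, not the code from the show.

```javascript
// Sample data shaped like the episode's: transcript chunks with text and
// offset keys, plus a list of already-known resources.
const transcript = [
  { text: "Welcome to the show", offset: 0 },
  { text: "Check out Astro", offset: 42 },
];
const resources = ["https://astro.build"];

// Map each transcript chunk to a <chunk> tag with nested text and offset.
const transcriptTags = transcript
  .map((c) => `<chunk><text>${c.text}</text><offset>${c.offset}</offset></chunk>`)
  .join("\n");

// Map each known resource to a <resource> tag.
const resourceTags = resources
  .map((r) => `<resource>${r}</resource>`)
  .join("\n");

// Assemble the prompt with XML wrappers, which Claude tends to handle well.
const prompt = [
  "Your task is to find resources mentioned in the transcript",
  "that are not in the existing show notes.",
  `<transcript>\n${transcriptTags}\n</transcript>`,
  `<resources>\n${resourceTags}\n</resources>`,
].join("\n\n");

console.log(prompt);
```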

JASON LENGSTORF: Okay.

LAURIE VOSS: Dododododododo. There's really nothing to do while it's thinking. Cool. All right. So, it gives me -- we're outputting the response instead of the response text. I'll fix that for next time. Here are the things mentioned: Doppler, Astro's Discord, the feed loader, the Notion loader for Astro. Does that sound right to you?

JASON LENGSTORF: Yep.

LAURIE VOSS: All right. And now, let's say that your output should be a JSON array of objects, with the following structure...resource, human-readable name, URL to the resource only if you know what it is, otherwise false.
Um...oh, and that's the other thing -- oops, dododododo. This is so realistic because I'm making all the typos I would usually make. [Laughter].

JASON LENGSTORF: You need a trailing comma on that, I think.

LAURIE VOSS: Yeah -- well, I don't necessarily, because it can just figure it out. The other thing I was going to do is the timestamp. Include the timestamp in the resources.

JASON LENGSTORF: Because we had the offset. I can see what you mean about getting lost in this. You're like, gah, what's it going to do next?
[Laughter].

LAURIE VOSS: I think one of the fun things here is it's just spit out a bunch of JSON for me, because I asked it to. So�

JASON LENGSTORF: This is very cool.

LAURIE VOSS: So, here's the thing, it's come up with a different list than it did last time. The previous version had a bunch of other stuff. It's also -- it's guessing what these URLs are.

JASON LENGSTORF: Is it guessing?

LAURIE VOSS: Yes, it's absolutely guessing, unless you spoke out loud what your thing was going to be -- whoops. No, not Discord.

JASON LENGSTORF: That one is correct, in fact. [Laughter].

LAURIE VOSS: Where did my window go? Yes. Yeah. Like I said, it knows about you, so it knows things about your show. Cool. Now, let's say...I'm just going to open up a window here, so that we can find this later.
Oops. Okay. Let's say...that's a good idea. Claude, thank you. Yes. Yes. Like that. Thank you.

JASON LENGSTORF: It's helping so much.

LAURIE VOSS: It's helping so much.

JASON LENGSTORF: This is -- honestly, this is what gets me to turn this off. You've helped. You're done helping. Cease with the help. [Laughter].

LAURIE VOSS: It doesn't always work. Now let's say...you know what the video ID is. All right. Direct link...

JASON LENGSTORF: Oooohhhhhh.

LAURIE VOSS: To the exact timestamp of the resource. Does it know enough about YouTube links to be able to just do that?

JASON LENGSTORF: That would be cool. Let's find out. [Laughter]. [Whistling].

LAURIE VOSS: There's nothing to do while you're waiting for it to think. And, there it is.

JASON LENGSTORF: Dang. Content layer loaders.

LAURIE VOSS: Where did it get the video ID from?

JASON LENGSTORF: That's a good question, where is it pulling it from?

LAURIE VOSS: We didn't pull that video ID.

JASON LENGSTORF: That's a risky thing. Is it going to Rick Roll us? I hope it Rick Rolls us.

LAURIE VOSS: It picked a random video. Well, then, let's fix that...because we know what the video ID is.

JASON LENGSTORF: Uhhuh.

LAURIE VOSS: The ID of the video for linking to it is -- is it "show video ID"?

JASON LENGSTORF: Show.video.ID.

LAURIE VOSS: Yes, that's right. Oh, right, because I didn't return the whole show.

JASON LENGSTORF: Oh, right, so we should maybe give it the video ID?

LAURIE VOSS: Do that and say [Away from mic] because that's where we just mapped that to. Dododoooo.
Part of this is a needless flex, right. We already have -- we already know what the link is going to look like.

JASON LENGSTORF: We could build that, yeah.

LAURIE VOSS: We could generate that link ourselves. Tada!
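Generating that deep link by hand is simple, as Laurie notes. A minimal sketch, assuming the transcript offsets are in milliseconds (some transcript libraries return milliseconds; check yours) and using the famous Rick Roll video ID purely as an example value:

```javascript
// Build a YouTube link that jumps straight to a timestamp.
// YouTube's t parameter takes whole seconds, so convert and floor.
function timestampLink(videoId, offsetMs) {
  const seconds = Math.floor(offsetMs / 1000);
  return `https://www.youtube.com/watch?v=${videoId}&t=${seconds}s`;
}

console.log(timestampLink("dQw4w9WgXcQ", 95500));
```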

JASON LENGSTORF: Amazing.

LAURIE VOSS: Oh, no, unskippable ad.

JASON LENGSTORF: There we go, we're talking about loaders. That is pretty dang cool. Okay. All right. I'm into this. I like this a lot. Being able to identify where a resource is mentioned, so you can get context about why we were talking about it -- because a lot of times, if you look at the notes on an episode, there are things that seemingly were not about the episode. Being able to jump in and see, what were they talking about when this was mentioned, is a useful context provider.

LAURIE VOSS: Yep. And I hope you're getting a sense of how this thing can be useful and also where its limitations are. I didn't have to figure out in what order A and B have to go for sorting to happen in the right order, which I always have to do twice to get it correct. It knew about YouTube transcripts and knew I wanted to pull in .env because it knew I would want the Anthropic API key. It knows. And then, again, we're talking about what the LLM can do, itself, to get the work done. We've given it a pretty short prompt here, right. We've said, this is your task. This is what it looks like. Produce this thing. And it's spitting out some perfectly-formatted JSON for us, that does exactly what we wanted.

JASON LENGSTORF: Right. This is pretty slick. I can see why this is pretty cool. Let me do a little celebration. [Laughter]. And just absolutely bog down the GPU. There were a couple of things that I thought were cool: highlight and Command+k and it showed you a diff. That, to me, is such a subtle thing. But being able to look at a diff versus having to look at the LLM output and look at the code I wrote, that's a little quality-of-life improvement that kind of -- I can see why people are stoked about Cursor, because that doesn't exist in other things.

LAURIE VOSS: Right. That's basically Cursor's central innovation. They have a whole model, like, when it does that diff, it's actually feeding the output of Claude through a model that they fine-tuned specifically, whose job is to create diffs.

JASON LENGSTORF: You said this is really good at refactors, so if we go into this one, for example, which is currently the, kind of, JavaScript-formatted stuff. You can hit Command+k and it'll make it JSON?

LAURIE VOSS: That's a great idea. This isn't valid JSON, can you fix it, please.

JASON LENGSTORF: Always good to be polite to the robots. I say this, I'm not joking. When I talk to Siri, please and thank you. [Laughter].

LAURIE VOSS: It fixed the JSON for us.

JASON LENGSTORF: This sort of stuff is really where it starts to feel worth the price of admission because I have burned so much time on just, like, quickly formatting stuff. I copy-pasted something from the web and need to clean up returns or correct a bunch of smart quotes that need to be not smart quotes because I copy-pasted code off a website. It burns a lot of time. Being able to highlight it and be like, remove the smart quotes -- that's the stuff that burns hours over the course of the average week.

LAURIE VOSS: So, do you want to try a flex, do we have time?

JASON LENGSTORF: We have about 10 minutes.

LAURIE VOSS: My biggest beef with Cursor is I never know when I should do Command+k or Command+l.
This currently works for only the latest episode, please refactor it so that it fetches the transcripts for all of the episodes.

JASON LENGSTORF: That's going to blow your entire budget for AI for the year. [Laughter].

LAURIE VOSS: For the most recent 10 episodes, make sure to do it in parallel.

JASON LENGSTORF: Do it in parallel...get sorted episodes. All right. So, we're popping through here.

LAURIE VOSS: So what I'm just going to do is hit "apply" and we'll look at the diff rather than what it said in the chat. So, instead of fetching each episode, recent episodes, slice of 10, that's good. That says, promise.all, transcript promises. All right. This is the�

JASON LENGSTORF: The await.

LAURIE VOSS: This is the "in parallel" I was talking about. Fetch recent transcripts and then for shows, it does that. All right. That's good. That's close.

JASON LENGSTORF: And then it's -- yeah, I got it. Okay.

LAURIE VOSS: So that would work. That's� that has done the episodefetching in parallel. So, let's say...also do the prompting and retrieval in parallel. And combine the results into a single JSON object. See, now it's doing shows.map.
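The parallel shape the refactor produced can be sketched as follows. `fetchTranscript` and `extractResources` are hypothetical stand-ins for the functions built during the episode, and the combined-output shape is an assumption.

```javascript
// Fetch the ten most recent transcripts in parallel with Promise.all,
// then run the LLM extraction for each in parallel too.
async function fetchAllResources(episodes, fetchTranscript, extractResources) {
  const recent = episodes.slice(0, 10);

  // Kick off every transcript fetch before awaiting any of them.
  const transcripts = await Promise.all(recent.map((ep) => fetchTranscript(ep)));

  // Same trick for the LLM calls, then combine into one object keyed by id.
  const results = await Promise.all(
    transcripts.map((t, i) => extractResources(recent[i], t))
  );
  return Object.fromEntries(results.map((r, i) => [recent[i].id, r]));
}

// Tiny demo with fake async workers standing in for the real fetchers.
const demoEpisodes = [{ id: "ep1" }, { id: "ep2" }];
fetchAllResources(
  demoEpisodes,
  async (ep) => `transcript of ${ep.id}`,
  async (ep, t) => [`resource from ${t}`]
).then((out) => console.log(out));
```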

JASON LENGSTORF: Coooool. Okay. All right. I'm seeing� I'm seeing the benefits here. [Laughter].

LAURIE VOSS: So I could run that, now, and it will probably not work, first try. But let's -- let's just see...but you can tell, even if -- even if we haven't got there, it has got me a lot, lot closer and it did that whole parallelization stuff in 10 seconds. As fast as I could think about it, it did it. Unexpected token here. Debug with AI.

JASON LENGSTORF: 195?

LAURIE VOSS: It's a syntax error. It's not valid JSON.

JASON LENGSTORF: Oh, because it's returning back, like, here are the resources so we need to tell it, only return the JSON and return nothing else.

LAURIE VOSS: Yes, you are correct. Thank you.

JASON LENGSTORF: Yeah, using structured output would solve this.

LAURIE VOSS: Your response should only be JSON, with no preamble or Markdown tags. Dododo...
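A belt-and-braces alternative when a model ignores that instruction is to strip the chatter before parsing. This is a sketch of that defensive pattern, not a substitute for a model's proper structured-output support:

```javascript
// Pull the first JSON value out of an LLM reply that may wrap it in
// prose or ```json fences despite being told not to.
function extractJson(text) {
  // Prefer the contents of a fenced block if one is present.
  const fence = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (fence) return JSON.parse(fence[1]);

  // Otherwise parse from the first bracket or brace onward.
  const start = text.search(/[[{]/);
  if (start === -1) throw new Error("no JSON found in response");
  return JSON.parse(text.slice(start));
}

const reply = 'Here are the resources:\n```json\n[{"resource":"Astro","url":"https://astro.build"}]\n```';
console.log(extractJson(reply));
```

The bare-bracket fallback assumes nothing follows the JSON; trailing prose would still break `JSON.parse`, which is why prompting for JSON-only output remains the first line of defense.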

JASON LENGSTORF: This -- this kind of stuff is, I think, where it starts to become a skill set, how to get the LLM to not do the -- the stuff that is helpful when you're building, but not helpful when you're trying to use it as part of a coding pipeline.

LAURIE VOSS: Exactly. The models are being specifically trained on this, right. They are not just automatically good at outputting JSON; it's one of the jumps they've made. It's in the release notes: you keep asking us to just generate JSON, so we've trained it specially to do that.

JASON LENGSTORF: Very, very interesting.

LAURIE VOSS: I think it's doing 10 episodes. If we wanted to wait ten times as long as before, it would get the right answer.

JASON LENGSTORF: This is super cool. While we're waiting for that to complete, let's talk about next steps for people who want to -- who want to learn more, and so I'm going to actually flip over into this window. So, the big thing that we talked about today is Cursor, itself. That's the IDE you were using today.

LAURIE VOSS: That's how it works. The thing that does diffs costs $20 a month. It's not -- it's not --

JASON LENGSTORF: It's not a free tool.

LAURIE VOSS: It's not completely free.

JASON LENGSTORF: Got it. Got it. Got it.

LAURIE VOSS: Because it's making tons of LLM calls for you, right. Like, it's costing them money.

JASON LENGSTORF: And that makes sense. If you're paying for a different AI thing to do coding, instead of paying for, like, chat -- like ChatGPT directly, you could switch to this and have coding through your IDE. Now it's sort of interesting because we're paying for the same AI models, but you get them in your apps and IDEs. Where you are getting it is important for triaging your costs.
Oh, oh, it's $20 a month, plus API usage?

LAURIE VOSS: So, yeah. What I did is I put in my -- you don't have to put in your own key. They will just charge you if you go over a certain amount of usage. They'll charge you by how many tokens you're using. What I did is, I'm already paying for Claude, so I put in my Claude key. If you put in your own key, they don't charge you anything extra. You still have to pay for the diff engine.

JASON LENGSTORF: Right. Right. Okay. Cool. Let me throw another link to your stuff here. Anything else I should be linking to, that I'm not?

LAURIE VOSS: LlamaIndex.

JASON LENGSTORF: LlamaIndex. And that is what we used to pull in that Anthropic model.
I see people talk about LlamaIndex and I -- I also see people talk about Ollama. Are those different things?

LAURIE VOSS: Those are different things. We work really well with them. We're big fans of them and vice versa. Ollama is a way of running local LLM models. We're an open-source framework. They're very different things.

JASON LENGSTORF: Got it. I love that different animals sort of get their spotlight. Llamas, in the AI community, there's a lot of llama puns and stuff.

LAURIE VOSS: So this might or might not be obvious: if you take the word LLM and add vowels to it, you get llama. That's why people keep using llamas.

JASON LENGSTORF: It was not obvious. It's very obvious now. [Laughter]. Cool. All right. Well, that was -- that's one of those ones where, now that I see it, oh, yeah, that explains everything. [Laughter].

LAURIE VOSS: That's why everybody keeps picking llamas. Everybody's doing the same pun.

JASON LENGSTORF: Love it. I love it. I mean, this is great. I -- I love -- I really do appreciate you taking the time on this because I feel like I've been dragging my feet on this stuff, not necessarily because I don't like it, but because it's been hard for me to understand how it fits. Right. And so, that -- getting a -- getting a guided tour -- did it finish?

LAURIE VOSS: It finished, but it -- it ran into rate limits, so it stopped.

JASON LENGSTORF: Oh, well, that makes sense. Each of my transcripts, because they're 90 minutes long, we throw a ton of data at the -- [Laughter]. At the LLM.

LAURIE VOSS: It's a good thing I didn't do it for every episode, otherwise Claude would be like, no, you're never allowed to use Claude again. [Laughter].

JASON LENGSTORF: All right. So, let's see, with that, we are approaching the end here. So, is there anything else that we didn't link to, that we didn't talk about, any "getting started" materials, next steps for people who want to learn more about this stuff?

LAURIE VOSS: I mean, obviously, I spend a lot of time writing the documentation for LlamaIndex. That's where I would send people. A lot of that is in Python; we have been using the TypeScript version.

JASON LENGSTORF: Actually, I feel like everybody's going to be -- this is definitely the -- the JavaScript crowd here, I think.

LAURIE VOSS: Yeah, I agree.

JASON LENGSTORF: If I'm wrong, Python fans, stand up. [Laughter]. But I don't think I'm wrong. So, this is great. We'll make sure this shows up in the notes and we also have -- there are additional docs. This is great. We've got -- we linked to Cursor. We threw a link to you in there, yes. And, let me do one more shoutout to our captioner. We've had Vanessa here all day. That's White Coat Captioning. That's sponsored by Netlify and we've been using Tuple all day, which is why I've been able to draw all over Laurie's screen, thank you to Tuple for sponsoring the show.
This little app, I'm very happy with. Let's ship it. It's so silly and it brings me so much joy.
Laurie, any parting words before we -- before we wrap this one up?

LAURIE VOSS: Um, no, I think -- I think I've -- I've achieved my mission here, which was to show you that, you know, AI can do the boring stuff and it makes you go faster and that's really, really great.

JASON LENGSTORF: Yeah. I -- I will agree, this got me further down the road of maybe trying to figure out how to use this stuff for real, because it was the first time that I've seen it in a way that didn't feel like it was intensely in the way. So, thank you very much for that. I hope you all had a good time learning about this stuff because I'm definitely leaving with a lot to think about. Go check out those links, go check out Laurie online and I think -- oh, while you're checking things out, check out what's going on on the site. We're going to do native apps with Tauri, which is a Rust framework, but we're not going to write any Rust. Every time somebody tells me about one of these, I get really excited and I haven't yet found one that fits.
We've got Matt to talk about TypeScript Generics.
Bree is going to come on and try to change my mind about Tailwind.
This is going to be a whole lot of fun. Make sure you get on the Learn With Jason Discord, subscribe on YouTube. Thank you, all, very much for being here. We'll see you next time.
Thanks, y'all.

Learn With Jason is made possible by our sponsors: