
Web Performance Auditing

We all know performance is important, but we *don't* always know how to find what's causing slowdowns. In this episode, Tim Kadlec will teach us how to audit websites to find the slowdowns!

Full Transcript


Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

Jason: Hello, everybody, and welcome to another episode of Learn With Jason. Today on the show, we are bringing on Mr. Performance himself, Tim Kadlec. How you doing?

Tim: Doing pretty good. How about you, Jason?

Jason: Doing well. As that came out of my mouth, I immediately regretted it. I'm sorry.

Tim: No, I appreciate being put on the spot like that. It's good.

Jason: So I am super excited to have you on the show. I've been following your work for a long time. I've seen you speak at conferences like Perf Matters. I've seen your work around the web. For those of us who are not familiar with your work, you want to give us a little bit of a background?

Tim: Yeah, sure. Like Jason said, my name is Tim Kadlec. I live in middle-of-nowhere Wisconsin, which is why I think I started getting into performance in the first place. I think, you know, we tend to be five, ten years behind everybody else when it comes to mobile connectivity and all that stuff. So I think I probably just got bored with things being so slow and, you know, that's how it all kind of started. But I've been doing performance now for 10, 12 years, I guess I've been focusing on it. You know, some of that was starting off at an agency. I did that in-house for a publisher. Then probably the bulk of that time was working for myself as a performance consultant, helping organizations, you know, do things like this, like audit and find where we can make improvements, as well as strategic, you know, how do we build up a culture of performance in the organization. You know, then now as of last December, I joined Catchpoint to work on WebPageTest. So that's awesome.

Jason: And WebPageTest, for those of y'all who have not seen it, is an invaluable tool. I use it all the time. It's a huge, huge resource in my tool belt. But what I will admit is I don't actually understand a lot of what's happening on WebPageTest. So I go for the high-level stuff. I tend to look at like, oh, that bar is really long. That's probably a problem. I should dig into that. But you know, I've never been one to have a strong grasp on flame charts or really being able to dig into the depths of those reports, which is one of the things that I think is maybe true for a lot of devs. I've talked to a lot of folks who feel pretty intimidated by anything but the console in dev tools. Hey, what's up, Ben? Ben and friends, I hope y'all had a good stream. Welcome, welcome. We're talking about performance today with Tim Kadlec. But so like, I guess is that something you run into a lot, where people want to but are kind of intimidated by even where to start?

Tim: Yeah, I mean, I think -- I've long felt that performance sits -- I think Paul Lewis called them like the boring pillars of the web, right. Performance, accessibility, and security. They all kind of sit in that little area where they're really, really important and not necessarily the thing, though, that you get to focus full time on as a developer. Most engineers, you're tasked with building the next feature out, you happen to also have to make that feature accessible and secure and performant and everything else that goes into building well. So developing some level of expertise to be able to jump through that stuff right away, that's tough. And it's also not, I don't think, a very fair expectation for engineers or developers to all have that level of depth and understanding. So I think the goal is to, like, get at least an awareness. Honestly, just being aware that performance is something you need to factor into your work is such a great starting point. Yeah, it's pretty typical for folks to maybe run a Lighthouse audit, but they're not entirely sure what to do with the recommendations. Maybe they can do a little here, a little there. They're just not sure how to take it to that next step. I think that's where most people sit.

Jason: I think so.

Tim: Most people don't sit with these tools open every day.

Jason: Kind of what you're talking about here with accessibility and performance and security being the, as you put it, the boring pillars of the web, I feel like that really taps into something just fundamentally human. It's like being told to eat your vegetables. Very rarely do people want to sit down and eat a whole plateful of vegetables, but we know if we don't do it, we end up getting scurvy. You know, we know we need it to survive. We know that it's critical for -- you feel better, things work better. It's definitely the sort of thing where when someone has spent the time on accessibility, when someone has spent the time on performance and when you have that level of trust that the security is there, apps feel better. Like when you use an app that is really fast, when you use an app that has all the right keyboard navigation pieces and all the things actually fit together the way they should to be accessible, it feels amazing. You're like, dang, this app is rock solid. Like, what a great app. It's hard to know why, but it's that subtle stuff like oh, it didn't take 500 milliseconds to see the next screen. Just, boom, it was there.

Tim: You definitely nailed something there. There are those fundamental pieces of user experience, but to your point, you know, you don't notice it until something is wrong. Even then, if it's working well when you load it, you're not exactly able to say, oh, this is because they optimize performance or whatever. It just seems to be working well and you maybe can't point your finger at it. The other thing is we miss a lot of this stuff if we're not proactively looking for it. From the accessibility perspective, right, if I never tried to navigate by a keyboard only, if I never use a screen reader, I have no idea that stuff is there. It's not something I encounter on a day-to-day basis. From a performance perspective, it's the same. If I'm using the latest M1 MacBook and the iPhone Pro, whatever it's called, if those are my primary devices I'm using, they're so powerful, they're going to skip over all of my JavaScript issues. I pay out the wazoo for a ridiculous monthly connection because I need that for my work and it makes me better at what I do. It also means if there's network-related issues, I'm not going to see them unless I'm actively finding ways to expose those. Those things just don't jump out at you unless you're looking for them.

Jason: Absolutely.

Tim: Or you're in a situation where, you know, that's your reality. You're in a situation where your connectivity isn't as great or your device isn't souped up. It's a regular, normal device that most folks would buy at the store. Then you start running into those problems.

Jason: Yes, yeah. And I think -- yeah, thinking about the where do you actually run into those problems, right. Unfortunately, you'll hear people make the argument, well, our users are all on iPhone, our users are all in, you know, the U.S. and have good connections. So there's this sense of like, well, the technology problems exist somewhere else and not here. It's not for our audience. But in my experience, at least, that's very much not the case. Go ahead.

Tim: Accessibility calls that -- like in the accessibility industry, they've got a term. Situational accessibility or something like that.

Jason: Situational disability, yeah.

Tim: Okay, right. So this idea you're not necessarily permanently in this bucket or this category. You're dipping in and out of it in different situations or you break your arm and suddenly you have to interact with a page in a different way, that kind of thing. It's the same with performance. You know, I just did it in my explanation. When we do the short, you know, brief sound snippets and stuff, it makes it sound like people forever live in one bucket or another. The reality is all of us are going to be in those situations where at some point, you know, we hit a situation where the network is lagging for whatever reason. It's overtaxed or just on a poor connection. Maybe we're at Starbucks or something. Or where our device, you know, slows down because it's being weighed down by a bunch of other stuff going on or background processes or whatever. All of us are going to be in that situation at some point or another. In fact, I think most of us, if we think hard enough, can probably remember a situation or two where we're like, oh, my gosh, this is painfully slow.

Jason: I think about it. I'm in those situations constantly. I have the latest iPhone, but I walk outside of my house, and there's an area of my neighborhood that just has bad cell reception. If I need to look something up, I will wait a full minute to load a website, unless I hit one that's really optimized. You know, if I go to, like, a restaurant, all the restaurants have QR codes for their menus now. If I don't have a strong signal or their Wi-Fi network is overloaded, it takes forever to look at their menu. Each click is seven or eight seconds to get between pages. Like, it's just text. I'm trying to buy a sandwich here. (Laughter) So I think, you know, there's never been a point in my life where I am always on a high-speed connection, where I'm always using the latest and greatest. You always feel those perf issues, but we can just rationalize it. Oh, whatever, I'm on a mobile network. Truly, my folks lived out in Montana for years, and then they lived in eastern Washington by a river where their only data connection was like through a tethered cell phone. So their whole experience is if it can't stream over 3G, that thing is not happening. You know, these are people who work in tech. My dad's a programmer, specifically, so he can get stuff to move. It's like, I don't need much data for that.

Tim: No, yeah, and Ryan, I think in the chat, mentioned something about asking if web performance is directly proportional to the probability of these other factors. You're not far off. In terms of at least the importance of performance for any given thing. It's about resiliency and reach, right. You're sort of -- by making your site or your application faster, lighter weight, you are making it more resilient to less than ideal situations, which means if you are going to hit those factors in real life or your users are, you know, the better, the more resilient and faster you've made your site, the more usable it's going to be. If you're targeting -- it's different for different folks, right. I think the example everybody always throws out when it comes to perf is Apple. When they announce a new MacBook or iPhone, if you look at their product pages, they're like 80 megs. But they can get away with it because Apple tends to go after a pretty highly affluent, technically savvy audience that tends to have a lot of purchasing power, tends to be coming in on those powerful devices and networks anyway. So reaching that audience means that, you know, maybe it's not quite as important to them as, say, Walmart, who's trying to reach everybody, sort of this mass population scale. They're going to have a much more diverse audience in terms of the network and the devices and things like that. That being said, I suspect Apple would still see benefits from making that product page lightweight, smaller, more performant. I don't think I've ever come across somebody who doesn't see at least some benefit from doing that. But looking at the probability of their users hitting those different factors, it's a little less important for them than, say, Walmart or something like that.

Jason: Yeah. Well, and this is the sort of thing where you bring up the business impact. So you worked as a consultant for years and years, so you've seen this firsthand over and over again. In your experience, how big is the impact? You read these banner studies. There's the big ones, like Google always says, hey, if you cut off a half second, you drop your bounce rate by whatever percent. These kind of big figures, or Amazon's for every 100 milliseconds, you add this many percent of conversion. But we're not Google. We're not Amazon. So what does it mean for, like, an average-sized business, somebody who's big but not FAANG?

Tim: First off, just as a shameless thing, Tammy Everts and I maintain a site, WPO Stats. It is literally just -- I think Netlify hosts it -- a site that shows off highlights of different business case studies around performance and links off to whatever source it happens to be, whether it's a blog post or video. So that's good for sort of getting those anecdotes and seeing the statistics from that perspective if you want to see how that's all working. As far as for actual impact, predicting business impact is going to be tricky. In fact, measuring the impact is tricky for folks. It's not always the easiest thing to draw a line between your business metrics and your performance impacts and improvements. Certainly, you can't -- I have seen folks say, you know, in fact there are tools built around the premise that Walmart and Amazon both, at different times, saw for every 100 milliseconds of improvement, they got something like a 1% increase in conversion. So there are tools that use that as an estimate for what are my conversion benefits going to be. That doesn't -- that's not super -- it's okay for like a rough estimate of a potential thing, I guess, to kind of get you in the ballpark. But if you go in and expect that this tool told me it's going to make $200,000 more by shaving 100 milliseconds, don't be surprised if you don't. Like, if it's very, very different, right. I do think, like, part of this is making sure that as an organization, you have real user performance data. So data that's recording real user traffic and real user performance. So you can connect that to those business metrics and see how they relate.

Jason: So when you're talking about that, the real user data, when you hear people talk about RUM data, is that what you're referencing? I see that kind of thrown around. And can you maybe take a second to explain what that means?

Tim: Yeah, absolutely. So there's two primary forms of, like, performance monitoring. We have real user monitoring, which is the RUM data. That's collecting performance data from real user sessions, which means what's happening is you're putting some JavaScript on your page, that page is recording on every page load these key performance metrics, and then beaconing them back somewhere, usually a service, you know, Catchpoint, SpeedCurve, mPulse, that's doing that, or you've rolled your own. But it's collecting those data on the fly for every single page load session for every page you have this script on. The other kind of measuring you have is synthetic. Synthetic is lab metrics. So synthetic is me saying, okay, I'm going to run a test against these pages, and I'm going to run the test from this location on this hardware or emulated hardware over this kind of a network, and I'm going to record everything that happens. So they both play very important -- like, I guess it was Harry Roberts' proactive and passive. I like that way of thinking about it. RUM is passive. You drop in the script and collect everything. Synthetic is proactive. You have to choose the pages you're going to test, and you have to trigger those tests and make them run. The RUM data is important because that's where the rubber meets the road. If the real user data is showing you have performance problems, you have performance problems. It doesn't matter what tests you're running elsewhere. If people are hitting problems, there's something wrong going on. On the other hand, the synthetic data, you can get very, very detailed, you can record the entire browser session, so from that perspective, it's helpful to dive deep, and it's also a better, consistent baseline or benchmark. RUM is going to be noisy because if you think about it, you have a thousand users, that's a thousand different situations they're all browsing from. The data is going to be all over the place.
Synthetic data, because I control all the variables, I can get a nice consistent comparison. So if I'm testing on deploy or if I just want to experiment with optimizations and see how they're impacting my site, synthetic data is going to be a lot easier to do that.
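A quick sketch of what that aggregation looks like in practice: noisy RUM samples get summarized with a percentile (CrUX reports the 75th), while a few controlled synthetic runs cluster tightly enough to compare almost directly. All the sample numbers below are invented for illustration.

```javascript
// Percentile summary, the way a RUM service condenses noisy field data.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// RUM: a thousand users means a thousand situations, so values spread widely.
const rumLcp = [900, 1200, 1300, 1500, 1800, 2400, 3100, 5200, 8000, 12000];
console.log(percentile(rumLcp, 75)); // p75 LCP in ms → 5200

// Synthetic: same page, fixed device and network, so runs cluster tightly.
const syntheticLcp = [1480, 1500, 1510];
console.log(percentile(syntheticLcp, 50)); // median → 1500
```

The spread in the RUM array is the "noise" Tim mentions; the synthetic array barely moves between runs, which is what makes it useful for before/after comparisons.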

Jason: For sure, yeah. That's super helpful. I also found an article that came from Catchpoint about RUM versus synthetic monitoring that I threw in the chat. And it'll be in the show notes. So with that, I feel like I could ask you 10 million questions about this. It would all be very abstract and philosophical, but I'd like to see this in practice. If you don't mind, I'm going to switch us over. Let's do a -- let's just start picking a site apart. We'll probably pick on me. So let me --

Tim: Put you under the microscope, man.

Jason: I'm ready. Okay. So before we start that, a quick shout out. We're going to give a shout out to Rachel from White Coat Captioning, who is here with us today doing live captioning so the show is more accessible. Make sure you go to the home page to check that out. That's where you can follow those captions. That's made possible through the support of our sponsors. We have Netlify, Fauna, Auth0, and Hasura all kicking in to make the show more accessible to more people, which means a whole lot to me. While you're clicking on things on the internet, make sure you go and follow Tim. Nonstop information, a very, very helpful account to be following. So I think I'm ready. So we have a couple sites that we can check. We can pick apart the Learn With Jason site. I've also got my personal site here. Now, both of these are relatively small and not, like, ad driven or anything like that. It also might be kind of fun to pick on some of those like very ad-heavy sites to start to look at how third-party scripts and things weigh in. But I'll let you take the lead here. If we want to start digging into this, what's the -- like which site do you want to start with, and what would our first step be?

Tim: Let's start with one of yours. You know, if we can, maybe we can get to something that has a little more complexity and ads so we can show the impact there. Might be fun just to get our bearings on something like yours. So which one do you want us to pick on? We'll be nice. We'll be kind, Jason. I'll be gentle.

Jason: Let's pick on this one. I feel like this is more likely to have issues.

Tim: Okay, all right. Cool. So the first thing, if I was doing this -- like if you had hired and it was a formal audit, the first thing I'd be asking is about the traffic itself. You want to start by focusing on a page or a couple pages that are going to be the most important from a traffic perspective. The other thing -- so that's part of the thing to factor in here, not necessarily just the home page but maybe an episode page or a schedule or whatever. The other thing is mobile versus desktop. If we were going to focus our energy, do you happen to know if it would be desktop users or mobile would be the primary audience?

Jason: Let's go and look at the data because I'm not 100% sure. If I go into this and then we look at analytics, then we can see where's everybody breaking down from. So here's kind of the country breakdown.

Tim: So U.S. dominant, yeah.

Jason: Yep. You know when you write a blog and you think you're taking a note for yourself and it ends up being the most popular content? (Laughter) So it's like individual pages. This is an episode page. These are all episode pages. So it looks like the home page, the schedule, and then independent episode pages.

Tim: And that's pretty typical. You're not, you know, like an e-com site. I'd be expecting a lot of product display pages. But you wouldn't necessarily have to test all of them. You find it's a page type, right, and you find one that's a fairly representative page type and start there maybe.

Jason: Yeah, okay. Great. So we've got like this is our page type. It looks like I don't have any breakdown between device types. So I don't know.

Tim: Okay, that's fine.

Jason: I don't have any analytics installed on my site.

Tim: I respect that. That's cool. That keeps it nice and light. Okay. Well, we can kind of do both. Boy, we got a lot of boops dropping in here. I dig it.

Jason: Oh, Cassidy has arrived. Hello, Cassidy. Thank you for the sub.

Tim: Yeah, so I guess from here, there's two places we could go to start, or a few different places. We're going to bury Jason here. I like it.

Jason: I know. We're going to have to work in the top third of the site.

Tim: Perfect. So there's no shortage of different performance tools you can go to. The recommendation I always give to folks, other than getting super worried or upset about the fact there are, you know, so many of them and kind of getting overwhelmed by that, is to pick one or two. If you get really good at one or two tools that can help you analyze performance, you're going to be in great shape. I would say 90% of the auditing I do sits inside of either Chrome DevTools or WebPageTest. Chrome DevTools I tend to use for a lot more in-the-moment analysis. Some of the things you can do to be able to immediately save a file and do a local override is super cool. Or like play with JavaScript in the flame chart is awesome. So that kind of thing is where Chrome DevTools comes in. WebPageTest is where I usually go when I want to get the picture of how things are performing, you know, network, what the browser is doing, but also have sort of a stable, consistent comparison point.

Jason: Yeah, yeah.

Tim: Because DevTools is super powerful, particularly Chrome has done an amazing job. But the ability to reproduce is tough. DevTools on my machine, DevTools on your machine, we've got different powerful machines, different kinds of networks. So I may run a test and see something here, and you may run a test on yours and not be able to reproduce it. Or even in how the throttling is applied. The CPU throttling is based on the actual device that you're on, so that's going to change a little bit. The network throttling -- actually, the way DevTools works, it sits in between the network layer of the browser and the render layer of the browser, which means it will throttle download times, for example, for content. But it will not have any impact on how long it takes to go connect to this URL, or how long it takes for a redirect to occur. That work all happens in the network layer. So even if you're doing throttling in the browser at sort of a 3G network, those connection speeds are going to look super, super fast in DevTools. So there's reproducibility issues and stuff, which is why I kind of bounce between the two. So for yours, if you don't mind, can we grab one of those episode pages and drop it into WebPageTest?

Jason: Let's do this one with Brian. That's got all the features. It's got the full transcript in here and everything. So we'll be able to see. It's a huge DOM because of that transcript. So let's go to WebPageTest.

Tim: Yeah, dot-org. We don't have the dot-com. Last time we asked, I think it was a lot. I thought that said boo performance in the chat, but I think it's supposed to be boop performance. That's good. I'll take that. If you drop it in there, before you run the test, if you could just scroll down a little bit. There's a ton of different locations. We know it's U.S. We can just keep it in Virginia. That's fine. Chrome is fine. If you don't mind clicking on advanced settings, we'll expand that out a little bit. Okay. So you can see there's a ton of stuff we could change. We'll probably fire off two tests. Let's slow the connection down from cable to 4G.

Jason: Okay.

Tim: The reason I do this is I like the higher latency. It's going to slow things down just a little bit. I like to stress test. The more less-than-ideal the scenario we're testing in, the more likely it is we're going to find underlying performance issues. So I like to go a little less powerful on the device if I can, higher latency on the network. That kind of thing. Let's do first view and repeat view. That'll load the page the first time as a fresh page view, as if nobody had ever come there before. Then it'll refresh the page so you can see what does it look like for people who are coming back to your site after they've already visited.

Jason: So like with a warm cache and everything.

Tim: Exactly, yeah. I think that's probably good for now. If you can do, like, a control or command click -- what are you on? -- on the start test to fire that off in the background.

Jason: Oh, I hit the wrong button.

Tim: Yeah, command click or whatever it is.

Jason: Whoops. I thought I command clicked. Didn't do the thing I wanted.

Tim: That's okay. Pull up WebPageTest again and we'll fire the same test off. This time let's run it on an Android device.

Jason: Okay. So here, here. Like here?

Tim: You're good for the test location. If you go under browser --

Jason: Oh, gotcha.

Tim: Test location is awesome, like if you were not U.S. dominant. We could go India or somewhere, Europe or whatever. Here let's do the Chrome Device Emulation for the G4. It's at the top, Motorola G4. And you can fire this test off so we can get it running. It'll take a second. A really good resource for anybody who's curious about why that particular device: Alex Russell, formerly of Google, now of Microsoft, wrote a fantastic post around device inequality and network stuff like that. He did a ridiculous amount of research to identify what is a good median test device and network condition and stuff. It's a highly recommended read. But the G4 has, for two years running, fit that profile. So what these will do is run these tests. In both of these cases, we're running off an AWS instance. Then, yeah, you get results like this. This is our desktop one.
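As an aside, the same tests can be scripted instead of clicked through. This sketch only builds the request URL for WebPageTest's public REST API; the API key is a placeholder, and the exact location label is an assumption you would check against the API's locations list.

```javascript
// Build a WebPageTest API request matching the UI settings used here:
// Virginia (Dulles) agent, Chrome, 4G profile, first + repeat view.
const params = new URLSearchParams({
  url: "https://www.learnwithjason.dev/",
  location: "Dulles:Chrome.4G", // agent:browser.connectivity (assumed label)
  runs: "3",
  fvonly: "0",                  // 0 = capture first view and repeat view
  f: "json",                    // machine-readable response
  k: "YOUR_API_KEY",            // placeholder
});
const testUrl = `https://www.webpagetest.org/runtest.php?${params}`;
console.log(testUrl);
```

Fetching that URL kicks off the same run you would get from the form, which is handy for testing on every deploy.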

Jason: Yeah, this is the desktop. So Chrome, 4G. We ran our two tests. And let's see. I like seeing green. Green makes me feel good.

Tim: That's the Core Web Vitals. If you had problems with any of the web vitals, we'd flag it as orange or red to tell you it needs improvement or is poor. On this summary, this metrics summary section, see how it says first view, run 2.

Jason: Here to the left.

Tim: Yeah, this is telling us the second run was the median run for the two runs that you ran, based on the Speed Index. If you click on that -- if it was a bigger site, and hopefully we can get a chance to run one of those later -- we'll actually pull data from Google's Chrome User Experience Report (CrUX) for that page to let you see, like, this test result is about in line with your 75th percentile, or it's way too fast or way too slow, to help you answer, did I get a realistic test result here.
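The median-run selection Tim describes can be sketched like this: rank the repeated runs by Speed Index and surface the middle one, so a single unusually fast or slow run doesn't skew the summary. How ties and even run counts are broken is an implementation detail not asserted here.

```javascript
// Pick the median run from repeated synthetic tests, ranked by Speed Index.
function medianRun(runs) {
  const sorted = [...runs].sort((a, b) => a.speedIndex - b.speedIndex);
  return sorted[Math.floor((sorted.length - 1) / 2)];
}

// Hypothetical results from three runs of the same test.
const runs = [
  { run: 1, speedIndex: 2100 },
  { run: 2, speedIndex: 1800 },
  { run: 3, speedIndex: 2600 },
];
console.log(medianRun(runs).run); // run 1 has the middle Speed Index → 1
```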

Jason: What's the -- like, when you -- what's the threshold you have to clear for that to start happening?

Tim: So it's got to be inside of the CrUX database. I can't remember, exactly. I don't know if they've published exactly how many page sessions they have to see. For different reasons, like security reasons, privacy reasons, if you're below a certain threshold of traffic, they're not going to include all that kind of stuff there. But, yeah. Okay. So let's see. Run 2. Bless you.

Jason: Sorry, trying to inhale my coffee over here.

Tim: All the way on the right, click on the filmstrip. This will give you the visual progression. We're taking screenshots as the page is loading. As you're scrolling back and forth, do you see the little red line all the way to the left there?

Jason: Mm-hmm.

Tim: If you scroll down a little bit on the page, we may need to go -- yeah, there's a red line on that waterfall as well. As you move back and forth through the filmstrip, that red line will shift over the waterfall. So what's nice is we can line up your filmstrip to the waterfall to see, all right, at the point that stuff starts appearing here, what was holding up -- you know, what were the resources up to that point. So if you scroll up just a hair again, we're going to change one thing on the filmstrip. I think it's a little nicer. For the thumbnail interval, which is there, yeah, change that to 0.1 seconds, just so it's a little more granular.

Jason: Oh, nice, okay.

Tim: If you get it close to that 1.6 when we start seeing content, there. Now if we go down here, we can see anything to the left of that red line, those are resources that, you know, seem to be potentially in the way of that initial render or at least loading before that initial render occurs. So if we know, for example, we want to speed up how quickly that first thing happens, we focus on that stuff. We don't worry about anything after.

Jason: Okay.

Tim: So there's a few things that look -- what did you say earlier about looking at the length of the bars?

Jason: Yeah, so this is the part where I would start looking and say, like, you know, here's one that looks like something weird is happening. It starts here and then nothing actually happens until here. So that seems like something I'd want to fix. Then I'm also looking at, like, there's a few things in here that I could probably pre-load, like I could try to get the browser to start downloading them before we need them so this moves in parallel here, even though it's not needed until here. Some of this I can't do because I use Toast, and Toast just kind of does stuff under the hood, and I don't have tight control over it. But some of this I feel like I do control. Like the fact that all my fonts are here. I have font display swap. Should I just, like, defer those? Something that'll let this all happen a little bit faster.

Tim: Sure. So I think you're further along than you think you are. That's a really good starting point. You zeroed in on a couple things right away that jumped out to me as well. So looking at the length of the bars or something that seems a little off is a really solid way of doing it. So for each of these, let's look at the first request for example, for your HTML. You have this color progression, and there's a key right above that. That teal part, that's the time it takes for the browser to resolve the DNS. So you typed in learnwithjason.dev; the browser goes out and figures out what IP address that exists at. The orange bit is establishing the connection. That's the TCP three-way handshake. We have the IP address. We go out and kind of send some packets back and forth and connect to the server. Then the purple part, that's SSL negotiation. So that's all warming up that connection. When we see it on line 13, which is that image that jumped out to you, that's because we're opening a connection to a different domain. So we have to go through that same process, figure out how to connect, open up that connection before we can start downloading the content.

Jason: So does that mean that -- actually, I think you just connected a dot in my head for something that I've seen before. I've seen the link rel preload and prefetch. Those make sense to me. I know I'm going to get some content on the next page, so I should preload it. I know I'm going to use this in a second, so let's prefetch it. Did I get those in the right order?

Tim: You flip-flopped them.

Jason: I do that every time.

Tim: That's all right. Preload is for the existing page load. Prefetch is that next one.

Jason: Got it, okay. But there's other ones I've seen, but I've never used because I don't know what they mean. So there's one for pre-connect. I think there was one for DNS, something like that.

Tim: Yep.

Jason: So looking at this, if I use the link rel, the DNS one, this theoretically would happen up here so that this bar would then move back. Is that right?

Tim: Well, so you're on the right track. Pre-connect and pre-fetch, that's -- so DNS pre-fetch would say I know I'm going to make a connection to this DNS, you know, this domain, go ahead and do the DNS resolution as soon as you possibly can. Like, do it before the browser discovers that resource. With pre-connect, it's basically DNS pre-fetch on steroids. It just has like the teeny-tiniest less good browser support. That was not a very well-constructed sentence, but whatever. Pre-connect is like the exact same use case. We're going to load something from a domain. In this case, we're saying go ahead, do the DNS resolution, the TCP connection, and the SSL negotiation. Like do all of that as soon as you can without -- you know, you don't have to wait until you find the resource.
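Sketched in markup, the difference Tim describes (the Cloudinary domain is just an example here):

```html
<!-- dns-prefetch: only resolve the domain's IP address early. -->
<link rel="dns-prefetch" href="https://res.cloudinary.com">

<!-- preconnect: DNS resolution, TCP handshake, and TLS negotiation,
     all done before the browser discovers the first resource. -->
<link rel="preconnect" href="https://res.cloudinary.com" crossorigin>
```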

Jason: Got it. So this whole section is the pre-connect, but the DNS pre-fetch would only be this bit here.

Tim: Exactly.

Jason: Got it, got it. Okay.

Tim: In your case, though, applying it to that resource isn't going to help. If you look at your first request, you see how we've got those blue bars, right. After the SSL negotiation. So each of these bars has a light-colored part and a shaded part, and each request type gets a different color. You know, HTML is blue, CSS is green, JavaScript is yellow/orange. Fonts are red because when Pat built this and added font support, he did not like font support, so he wanted to scare people off with scary, dangerous colors. So those are red.

Jason: (Laughter)

Tim: So when we look at that request thing, what we see is the light part. That's where we've made the request off to the server, but we're not actively downloading content. We're waiting for the server to come back. When we see a dark chunk, that's the server passing stuff back. We're actually downloading content. So if we look at where request 13 kind of lines up with that initial request, you're making that -- like that connection is happening, that DNS resolution is happening after that first bit of HTML comes back. We can't shift it any earlier.

Jason: Got you, okay. Now, I saw just now when I was looking at the -- here. Can you do anything with headers?

Tim: So headers would potentially give you a fractional improvement. But again, because it's coming back with that first chunk that's coming back from your HTML in this case, it's not -- the headers don't arrive any earlier than that first bit of content does. So if you had, like, a situation where we were seeing -- let's pretend it wasn't finding request 13 until after your second bit of content had downloaded. You see that second dark section on request 1 there.

Jason: I can see one way down here where we're doing like a Sentry request at the very end.

Tim: Yeah, so that's an example where if we wanted to make that connection earlier, we could open a pre-connect. That would shift that entire connection process all the way over to, you know, the same spot you see your Cloudinary request open. So Sia in the chat just hit on the optimization on the images, which is putting it on your own domain. We don't want to lose Cloudinary here. They're doing awesome stuff.

Jason: Yeah, I use it way more heavily on this site. All of these are the same image with like cropping and thumbnail stuff and a bunch of cool things. I use -- yeah, I actually, with this one, use Cloudinary to assemble the entire image. Each of these pieces is like, this is one image, this is another image, this is all like a placeholder. This is text that gets written by Cloudinary. I automated the thumbnail generation.

Tim: I remember reading an article, which you should drop a link to, the article you wrote on CSS tricks. I pinged you about this recently. It's awesome. I had never realized you could do that with Cloudinary. For me, the performance -- I viewed Cloudinary as like do all my image optimization. Compress it, get it to the right thing. But I never thought about the whole auto generate videos dynamically from images and stuff like that. The video cards and all that. It's just bonkers. It was really cool to read.

Jason: Yeah, I think this is the one where I talk about the social images specifically. Then I did one on CSS tricks -- I don't remember which one.

Tim: Just like a Netlify video thing.

Jason: Oh, that's right, yeah. The Cloudinary stuff. So this is also really cool because Cloudinary does the images, but it also lets you stitch together videos. So if we have like a video like this that's a standard piece of content -- oops. Then you can go down here and, like, add a bunch of stuff. This is generated by Cloudinary from a video and a piece of text. Then this is like an interstitial video. It'll cross fade. Like, check this out. That transition is actually another video. Then when you get to the end, you can do -- anyway. Cloudinary is dope. You could play with it.

Tim: It is. It is dope.

Jason: I absolutely rabbit holed. Sorry. (Laughter)

Tim: No, that's fine. I'm happy to go nuts about Cloudinary, too. So I guess the point is we don't want to self-host these images and completely eliminate all the benefits you're getting from Cloudinary. This conversation continues to have nice segues: I did write a post on my site about how to use Netlify to proxy requests to Cloudinary through my own domain, so I don't have that connection cost up front.

Jason: Oh, nice, okay.

Tim: So if you go down on this a little bit here, I have code somewhere. Right there in the netlify.toml file, basically proxying there. I decided to write it up for my own purposes. I don't have access to them yet, but Netlify's edge handlers seem like a good use case for this, too. The edge handlers are -- it's edge computing, basically.
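Tim's post has the real config; a minimal sketch of that kind of netlify.toml proxy rule (the path prefix and cloud name here are made up) might be:

```toml
# Rewrite rule (status 200 = proxy, not redirect), so the browser only
# ever sees our own domain and never pays the cross-origin connection cost.
[[redirects]]
  from = "/images/*"
  to = "https://res.cloudinary.com/my-cloud-name/:splat"
  status = 200
```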

Jason: They're very cool. Yeah, they're like little functions you can run as part of the request. You can do really, really interesting things with them. Soon. Early access now, but soon we'll be able to do more public content because people will be able to actually try them. For now, they're still invite only and very early access.

Tim: As long as my understanding of them isn't off, I expect them to work similarly to Cloudflare Workers or Fastly's edge compute: being able to manipulate the response from a request, or the request itself, on the edge, on Netlify's edge servers, before it gets passed down. That would be another really good place to do that. Either with the TOML or the handler, you could proxy your requests to Cloudinary through your own domain and get rid of that connection cost.

Jason: So one of the first things I did with edge handlers was re-created this experiment by Jake Archibald where it replaced every instance of the word cloud with butt. That was fun. So let's drop that on in there.

Tim: No, that's a good handy one. Useful. (Laughter)

Jason: I mean, streams in general are very cool, very head bendy. Probably worth a whole episode focusing on how they work.

Tim: For sure.

Jason: They enable some really cool stuff, and edge handlers will allow you to do stream-based processing of web pages. So in-place modification. You can detect where somebody is coming from and localize or detect which cohort somebody is part of and personalize. There's really fascinating stuff you can do. But that's not what we're talking about today.
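As an illustration of that stream-based, in-place modification (in the spirit of the cloud-to-butt experiment mentioned above), here is a naive sketch using the Web Streams API. This is not Netlify's actual edge handler API, which wasn't public at the time; it assumes a runtime with `TransformStream` (modern browsers, Node 18+, or an edge runtime), and it deliberately ignores matches that straddle chunk boundaries, which real code would buffer for:

```javascript
// Naive streaming text replacement: decode each chunk, replace, re-encode.
// A match split across two chunks will be missed; this is a sketch only.
function replaceStream(from, to) {
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  return new TransformStream({
    transform(chunk, controller) {
      // stream: true handles multi-byte characters split across chunks.
      const text = decoder.decode(chunk, { stream: true });
      controller.enqueue(encoder.encode(text.replaceAll(from, to)));
    },
  });
}
```

At an edge, you would pipe the origin response body through this transform before returning it, so the page is rewritten as it streams past rather than after it fully downloads.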

Tim: No, that's okay. So the edge handlers or the Netlify TOML approach I had there, either one of them, what that does is you're still using Cloudinary behind the scenes. It's still going to Cloudinary. You're still getting the benefits of the auto optimization and all that jazz. Now the way the browser sees it, it sees it as a request to your domain. That means that that connection cost is not occurring in the browser. It's occurring from, you know, Netlify's servers to edge servers to Cloudinary, which in most cases is going to be significantly faster. So this is one of those things where like the benefit of it is going to be more substantial for folks on those slower networks because they won't have to pay that cost. Whereas, you know, if you're already on a super souped up connection, you're probably not going to see much of an improvement doing this kind of thing.

Jason: But this is a solid half second here.

Tim: Oh, yeah. If you click on that, actually -- click on the -- yeah.

Jason: I didn't realize things were clickable. Oh, this is cool. Okay.

Tim: This is the thing with WebPageTest. There's a lot of stuff hidden. If you look here, you see the DNS lookup time, initial connection time, SSL negotiation time listed here. So that's, what, 340, 400, yeah, 520, a little over half a second. That image then gets down -- you know, cut from 718 milliseconds to 500 in this case.

Jason: Yeah, that improvement would be notable, right. If you look up here, this is what we're seeing. This is where we start downloading the image. This is where we actually get it. So it eliminates that need. You know, we would get at least a couple hundred milliseconds back. And going from two seconds to one and a half seconds feels pretty snappy in terms of your perception. I feel like if something loads in a second, it feels native. At least that's my perception. Holy crap, that was so fast.

Tim: I think there's been experiments that back that up, too. Isn't that about where that threshold is? Again, just to set the expectations, your site is already doing really well. It's not like you have terrible performance issues. But we are talking, like, you know, shaving off another 300, 400 milliseconds. It goes from that difference of, hey, this site is fast, to, this site is snappy, right. Like blazing. So that's one thing that jumps out. Another thing that I would look at here is -- so the fonts being so high is probably because you're preloading them, I'm guessing.

Jason: Yeah. Well, they're in the style sheet, which it looks like is the first resource that comes in.

Tim: But they're requested before the style sheet is ever discovered. So that tells me -- typically the way --

Jason: Let's look. What did I do?

Tim: I'm guessing you're preloading.

Jason: Let's see. So yeah, I've got some preloads in here. Let me open this in dev tools instead so it's parsed.

Tim: Yeah, nice and pretty.

Jason: So we're doing meta stuff. That's all the social media. Rel preload. Yeah, so we're preloading all the fonts. Then -- oh, I do pre-connect the Cloudinary and it doesn't do anything. You just explained why.

Tim: It's not hurting you, for what it's worth. It's not going to pre-connect somewhere and reconnect elsewhere. It's also not providing much there.

Jason: Okay.

Tim: So yeah, and there's your style sheet kind of at the end before the noscript, right. So with the style sheet, the way the browser is going to work normally is download the HTML, download the CSS, do any JavaScript work, parse -- like create the DOM, create what it's going to render onto the page. At that point, it says, oh, now I need a font. I'm going to go request it. So the fonts usually are going to come after. Yours are up front because of the preload, which is one of those things. For the longest time, especially when preload first came out, this is what everybody was saying to do with preload. In reality, we've sort of found out that might not be the right approach in many cases.

Jason: Okay.

Tim: What happens is whenever you choose preload, you're immediately promoting that preloaded resource basically to the top of the queue and saying this is more important than the other things. So by definition, you're pushing those other things out. So you can see, for example, your CSS: we need your CSS to display the page, but the browser is spending time and bandwidth downloading those fonts instead of the CSS up front.

Jason: Gotcha.

Tim: So it's actually pushing out the request, or the download, I should say, of that CSS until after.

Jason: Somewhere in here is my -- all my preload stuff. So what you're saying is with this here, we actually don't want -- I'm just going to take a note. So don't preload fonts.

Tim: You're going to hate this answer. The answer is maybe.

Jason: Maybe.

Tim: The answer is maybe. Preload is one of those things that sometimes it can be beneficial, and sometimes it can hurt you. This is one of those great ones for testing. Try a version of the page without the preloads in place and see what happens to a couple things. You're going to want to watch when that first paint occurs. I think it was Barry in there who was expressing his disdain at removing all the preloads, which is not wrong. The idea is that, like, by not preloading, you're also kind of ensuring that, depending on your font-display settings, you may get that fallback font first, and the real fonts sort of snap in after they arrive. So there's definitely a little bit of balance. It might not be getting rid of all the preloads. Maybe it is like preload one or two versions of the font.

Jason: So look at whichever font this is, for example. That's going to be the most notable one that pops in. So maybe we only preload this one, and the other ones can swap when they get in.

Tim: Exactly. That one is going to be a prominent thing. If your fallback isn't close, it might be disruptive. So exactly, that's a great way of looking at it. You know, that still gets a few of those files out of the way of the CSS, which is nice. The other thing that -- we have another thing we can look at on the CSS for an opportunity that may end up offloading the font stuff entirely.
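The "preload only the prominent font" approach might look like this; the font name and path are hypothetical, and font-display: swap is what lets the other faces pop in later without blocking text:

```html
<!-- Preload only the heading font, whose fallback is visually furthest off.
     Note: font preloads need crossorigin even for same-origin files,
     otherwise the browser fetches the font a second time. -->
<link rel="preload" href="/fonts/heading.woff2" as="font" type="font/woff2" crossorigin>

<style>
  @font-face {
    font-family: "Heading";
    src: url("/fonts/heading.woff2") format("woff2");
    font-display: swap; /* show fallback text immediately, swap when ready */
  }
</style>
```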

Jason: Got it, okay. So I'm just going to toss that in as a thing we should look at. Then we'll go back here. You said in the CSS?

Tim: Yes, yeah. So the CSS itself, if you can click on that once -- okay. So this is pretty small. Your CSS is neat and tidy. Is this the CSS you use across the site?

Jason: I think so. I've purposely stripped this site down to make it small. Otherwise, I have to maintain it. (Laughter)

Tim: (Laughter) Yeah, makes sense. Makes sense. All right. So we're talking about a couple K. You can close that one. Click on your HTML request. How big is that?

Jason: Pretty big, I think.

Tim: 27, okay. All right. You know, I've seen much worse. A lot of sites that inject state as those JSON objects, those balloon like bonkers. 27K is not that bad. The other thing you could look at here then is that CSS.

Jason: So this is the state data. That's another 45K. Sorry. Let me go back to the HTML.

Tim: Yeah, so I was going to say since the CSS is, what, 3K, it's pretty tiny. It's not going to add much to the HTML in size, in terms of forcing another entire round-trip back to the server or anything funky like that. Since it's so tiny, you don't even have to try to -- you know, if you've heard of the critical CSS optimization, which is pull out the subset of CSS for the content above the viewport and inject that inline in the page, you don't even need to go that fancy. Since your CSS is just 3K, you can inline all of it in the page. What's going to happen is the browser doesn't have to make another request for your CSS at all. As soon as that last bit of HTML arrives, it should be able to display the page pretty much immediately, as long as there's no blocking JavaScript in the way.
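Concretely, the change Tim is suggesting is just this (the stylesheet path here is hypothetical):

```html
<!-- Before: an extra render-blocking request the browser has to discover,
     queue, and wait for. -->
<link rel="stylesheet" href="/styles/main.css">

<!-- After: the same ~3 KB arrives with the HTML itself; zero extra requests.
     A build step, not a human, should paste the contents in. -->
<style>
  /* ...entire contents of main.css injected here at build time... */
</style>
```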

Jason: Theoretically then, that means this would all kind of scooch back this way a little bit.

Tim: Yeah, somewhere closer.

Jason: Okay, all right.

Tim: So if you go down a little bit, like below the waterfall here --

Jason: I'm just going to try inlining CSS. I'm going to link to all these issues. I'll label them or something so we can look at the issues. Because CSS is small, we might be able to inline for more perf.

Tim: Is this Eleventy, by the way?

Jason: It's all ESM, which is wonderful, but you definitely feel the pain of the ESM ecosystem not having fully reached -- it hasn't reached critical adoption yet. So half the time you pull something and you're like, oh, great, it doesn't work with ESM yet.

Tim: Living life on the edge. I dig it. Okay. Yeah, I haven't actually played with that before. I'm a big fan. I really like Preact. I'm a big fan of Preact. It's just like that whole philosophy. Let's see what we can do to still enable that kind of code syntax but do it in a way that actually prioritizes, you know, the user experience by having a small JavaScript. It makes me happy.

Jason: One of my bucket list guests is to get Jason Miller to come on and teach us Preact and just talk about the philosophy behind it. It's just such a cool project. I love the API compatibility with, what, 10% of the size, if that.

Tim: Yeah, it's huge.

Jason: It's amazing. A really, really cool setup.

Tim: In my dream world, that's what everybody would start with. Then only reach for React if they had to.

Jason: Agreed.

Tim: That would be awesome. By the way, if you hadn't seen it, Etsy did a whole migration from the React code base to Preact. They haven't talked about it a ton yet. Right at the top there. They haven't talked about it a ton. There was this tweet about it. Somewhere in here, he mentions a GitHub read me they had written up. Right there. There we go.

Jason: This one?

Tim: Yeah, nice finding skills, man. So if you go there, that walks through like sort of their thought process in terms of all the things they considered about, you know, what do we need to do to be able to do the migration, what the impacts were. It's a really nice walk through. But that's cool. I think we need more of that, too. We need more companies being like, we actually did the Preact thing and here's what we found out. Often it's not a very difficult transition.

Jason: Yeah, that's been my experience with Preact. After you get over the initial hurdle, which is like you have to -- to use Preact, you have to first learn what a pragma is. Then when you have to ruin that illusion and realize there's magic happening with JSX and dive back into it, you're like, that's a lot of work. It's really not. If you're interested, you should absolutely try it. It is amazing. Preact is a fascinating tool. So good.

Tim: Yeah, so on the CSS thing, that was the other thing we wanted to verify, that there was no other blocking JavaScript. If you could click on request 9, then we'll look at 10 and 12 as well. I think it's safe to say they're not. So there's this render-blocking status at the bottom. That's something Chrome started shipping in their DevTools protocol as of version 92. So Chrome is starting to indicate in the DevTools protocol whether or not a resource is blocking.

Jason: That's very cool.

Tim: So you're seeing up front this isn't blocking your rendering. I suspect if you click on the next request or two as well, it looked like you were loading all of these in a non-blocking manner. Yeah, non-blocking. That's JSON, so it doesn't really apply. Non-blocking. Okay. So yeah, if we can get that CSS down with that initial HTML, we're going to shift your first paint significantly earlier in the page load process here, too.

Jason: Interesting, okay.

Tim: Even if we don't remove the font preloads, we're going to see that benefit there.

Jason: Okay. Well, that's -- so, yeah, I wish I knew how to do that really fast so we could just try it.

Tim: I was going to say, if it was Eleventy, I could do this pretty quick. But I'm not familiar with -- I'm sorry, Toast? I'm not familiar with Toast.

Jason: I know enough about it that I feel like we could get there, but I'm worried it would eat the rest of the episode. So I'm going to -- here's what I'll do. I will try this. I will open a pull request, and anybody who wants to, I'll publish the WebPageTest run of that PR so that we can see the comparison. But yeah, I'll do a little follow-up because this is really interesting stuff. What you're recommending here, inlining CSS, all right. I need to add a step to my build where I'm going to find that CSS and just inject the contents of the file. Cool. I know how to do that. I can make that work. The preloads, I'm just going to comment those out and see what happens, right. This doesn't feel like I'm doing huge lifting. A lot of times I worry that I have to go in and carefully tune my images or convert all my animated GIFs to MP4s. That's a lot of work for perf. But this, this feels tractable. It doesn't feel like I'm out here saying, well, crap, I got to set a whole sprint aside for perf.

Tim: Sure. And I think that's the goal, I think, with perf. Images are kind of the perfect example of this, right. If you're going to sit there and manually optimize images, which I'm not naming names, but some people on this video call do that because they're sick, twisted individuals who just enjoy that process.

Jason: (Laughter)

Tim: But if you're going to manually sit there and optimize it, it takes time, it takes energy. You're going to forget to do it. Stuff like that. Whereas in your case, using Cloudinary, Cloudinary is going to do all that stuff for you. That smooths that friction away. Same with a good build process. I understand, you know -- certainly I've experienced build processes that I felt were overcomplicated. But at the same time, a build process that's well put together can do things like minify that CSS without you ever having to touch it.

Jason: You know what, I'm actually wondering if you can just -- inline CSS. No, what's that? What is that? That was a weird thing that just happened. So yeah, there's an inline critical CSS plugin. I might be able to install this plug-in and it'll just work.

Tim: You might be able to.

Jason: I think you have to add a critical attribute to the CSS you want to inline. This might be the sort of thing that because I'm on Netlify, I can click the button, I'll add one attribute, and hey, it's done. I'll start here and see if I can make this work. But if y'all want to give that a try, that could be an area to explore. But yeah, this is fascinating. But I want to be mindful. We've got about, I think, 30 minutes left here. So I would love to take a look at what I think is a scourge of a lot of sites: third-party scripts. Because I'm the only stakeholder on my site, I intentionally don't have any. I use Sentry because I want the error reporting and that's easy. But I don't have Google ads. I don't have -- what's the big one? Google Tag Manager. I don't use Segment or FullStory or any of the things you'll see in a lot of production sites. I also don't have any ads on the site. So what would be a good place to look if we wanted to get a sense of what does that look like, the impact that has, and maybe some of the strategies we can use to mitigate that without just saying, hey, no ads?

Tim: Well, let's try -- I guess somebody just dropped android.com in the chat. Do they have that stuff running? I was going to say -- I want to avoid -- the usual go-to is CNN. I feel bad because CNN gets -- they've had performance issues for so long, they're the go-to poster child whenever we perf people want to demonstrate how bad something is. And it's kind of mean.

Jason: You know what we could do is we could look -- let's look really quick at my Twitter ads. There's always one that's like a gossip mag. Those are just a wreck. Let's see. How long before they show me one of these gossip ads.

Tim: Shouldn't take too long. They're jacking those things all over the place there.

Jason: No, nothing. Today is the day you're not going to show me anything.

Tim: That's disappointing. The one time you want Twitter to show you ads.

Jason: You know you want to tell me there's a cute relationship between Ryan Reynolds and Blake Lively.

Tim: Oh, you've seen that one too.

Jason: They're pushing that one so hard.

Tim: I get that one a lot.

Jason: Today is the day your ad spend runs out. I'm so disappointed in Twitter. I know you're listening to me talk right now, Twitter. Where's my ad?

Tim: They don't do that, Jason.

Jason: Okay, fine. What are those things called? Is it like Pop Sugar? Is that one of them.

Tim: Maybe? I don't know. This is beyond my area.

Jason: This is going to be great. How about this one.

Tim: Rock on.

Jason: So we've got a pop-over. We've got whatever that is. Here are some ads popping in. There's another ad popping in. Is this going to be one where the ads jump down the screen into the content? No, but this is good enough. There's tons and tons of stuff on here.

Tim: There's a lot going on here.

Jason: Can you hear my computer? The fan just took off.

Tim: All right. Drop that in there. Let's make sure that's on the desktop version.

Jason: Okay. So I'm going to do this on Chrome. All right. Same otherwise settings?

Tim: Yeah. We can keep it at 4G. First repeat is good. Let's do that for now. We'll go back through in a second. We're going to run another test, I think, in a second to see the impact. This is good for the starting point, just to see where we're at.

Jason: All right. So this, I think, is going to be a much bigger -- like, we're going to see a lot more happening. While we're waiting for this to run, maybe we can talk about what is the -- we have this kind of tension between we want visibility into what's going on. We need, for a lot of these sites especially, ad revenue is the way they function. Thank you for the elevator music, chat. And thank you. I saw that Ben subscribed. Thank you. I saw earlier that other folks subscribed, and I forgot who it was and I'm really sorry, but thank you. Cassidy subscribed. That's right. But yeah, thank you all so much for subs. Okay. Third-party scripts enable us to do a lot. They're very important from a product standpoint, from a marketing standpoint for like business development. But they are hell on performance.

Tim: Yes, they are.

Jason: So what have you kind of found is -- I don't know. Do you have any general advice you give to people for heuristics to think about third-party scripts?

Tim: This goes back to, again, why we do performance optimization in the first place. We're not just making it faster for the sake of making it faster. We're making it faster for two reasons: we expect it to provide a better user experience, and pragmatically speaking, as a company, we expect it's going to help business metrics. So that's easy when those two align really well. E-com always feels like a good example. If I optimize performance, I'm making the revenue off of those sales; optimizing performance is going to lead to a better user experience, which is almost certainly going to lead to a higher conversion rate. Good for them, good for me. Where it gets interesting is when you start getting into those third-party situations, through ads or through affiliate marketing or something, where that's a big chunk of the company's revenue. Now, there is a little bit of tension, right, because the fastest experience we could have has all of that stuff stripped out. It's going to be amazing from a user experience perspective, but the business is going to make no money, go belly up, and that's not the goal here. There is a tension there to some extent and a balance we have to maintain. So the goal here, what I would always say, is we need to be able to prioritize the stuff that is most important to the people visiting that site. So in the case of something like Popsugar, I need to make sure the key story and image and text related to that comes up as early as possible. If I delay that too long, nobody is going to stick around to see the ads I've got. It does not matter. So I need to prioritize that, and then get those ads up as quickly as possible, too, to help the business side of things as well. So what I'm measuring is a combination of user experience metrics around, you know, largest contentful paint, et cetera, as well as some business metrics around ad visibility, when do those things pop up.
Then you're constantly looking to connect the dots between the two and find what is the balance. How far can I push in one direction without offsetting the other too much? There's no hard set answer. It's a thing you have to kind of experiment with and continue to keep an eye on as you're working on it.

Jason: For sure. Okay. Well, so we've got a real page to test here. We're looking at this site, which is one of the ones that's going to have all of the ads. This one absolutely makes its money off ad revenue. So looking at this test here, we can see it's got some yellow, got some red. We've got some issues. If you were going to start diagnosing this, where would you start?

Tim: So the first thing I'm looking at is the summary you were just looking at, as well as the Chrome field data. Now that we've got a page that's popular enough, that Chrome field data is WebPageTest going out to the CrUX database for this page and comparing, for a desktop run of this URL, you know, what's the P75. So I can see how it lines up to the test. This test we just ran actually shows a little bit slower first paint and largest contentful paint than what Chrome is experiencing at the 75th percentile. It's not obnoxiously slower, so I'm not too worried, but it is a little slower, just to keep in mind. But we can run with this. What I'm more concerned with is the opposite. If it looks amazing but CrUX data shows me it's awful, then I've got a problem. In this case, I can deal with that. So then the next thing I'm doing is looking at those metric summaries across the top. We know right away the largest contentful paint, CLS, and total blocking time need some work. I'm also looking for gaps between these metrics. So first byte time is 837 milliseconds. That's how long it takes to make the first request and get the first byte of something back from the server. But if you don't mind going back up to the metrics summary quick, actually --

Jason: Oh, yeah, sorry.

Tim: That's okay.

Jason: Getting ahead of myself here.

Tim: You're excited. I like it. So there's gaps that jump out here, right. I've got a 1.5 second gap between when that first byte comes back and when I start to show something on the page. That tells me I probably have render blocking resources on the page, and there's a gap we could try to close. The other gap I'm seeing is between that start render and that largest contentful paint, we have another, like, 1.2 seconds there. In a dream world, an ideal world, your largest piece of content is one of the first things to paint. So that gap, we want that as tight as possible. In fact, we want them firing at the same time if we can. That tells me largest contentful paint, knowing this page, is probably the image. There's something that probably delays that image coming out. Or it's down in the chain. Now if you click through -- actually, before you click through the waterfall, this is a good example since largest contentful paint we know is an issue, can you just click the largest contentful paint link in the metrics summary?

Jason: Okay, here we go.

Tim: So this is the web vitals diagnostics page that we're working on in the open. For the core web vitals, it'll zero in on diagnostic information related to these. In this case, yeah, we can see right away the largest contentful paint is that image. We get a little bit of information there to tell us what the image is. You know, how big it was, the source, all that stuff. Then right below that, we get a waterfall that's immediately truncated at the point that largest contentful paint fires.

Jason: Oh, okay.

Tim: So the request is highlighted automatically for us.

Jason: That's really nice.

Tim: Yeah, so we get again that zoomed-in view, the image right away that's causing the problem. A couple things jump out here. Actually, Jason, you want to -- you were kind of on the right path with the other waterfall. We kind of walked through a few things. I'm going to put you on the spot. If you were looking at this for that first contentful paint, you know, we knew we had a delay between first byte and start render from render-blocking resources, anything that jumps out?

Jason: Yeah, there's a couple things here. This one looks like it's blocking, right? So we have a blocking JavaScript resource that's pushing everything else back here.

Tim: Yep.

Jason: It also looks like -- let's see. What is that line?

Tim: That would be your first paint, your start render.

Jason: So that leads me to believe this is also blocking. In body parser blocking. Okay. Then is this one blocking? Yes. So basically, we've got some kind of a dependency chain where we need to not only download this, but then we also need to download this, and it looks like this and this are on the same domain, which is not the top-level domain.

Tim: Correct.

Jason: So we have to do this DNS negotiation here. So we load -- DNS and the whole negotiation, downloading HTML, parsing the HTML, then it gets displayed. Then we have to do all of the DNS negotiation again for the media1.popsugar assets. Then we have to download all this JavaScript and execute it. Then it gets more stuff. Then we finally start downloading our images. So it looks like a lot of things could potentially be preloaded or preconnected here.
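The hints Jason is describing might look something like this; the domain and file names here are illustrative, not taken from the actual page:

```html
<!-- Open the connection (DNS, TCP, TLS) to the asset domain early.
     crossorigin is needed if the domain also serves fonts. -->
<link rel="preconnect" href="https://media1.example.com" crossorigin>

<!-- Ask the browser to fetch a critical resource before the parser
     would otherwise discover it deep in the document. -->
<link rel="preload" as="image" href="https://media1.example.com/hero.jpg">
```

Preconnect saves the connection setup cost on the secondary domain; preload pulls a late-discovered resource forward in the request queue.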

Tim: Yeah, so you're on the right path here for sure. We've got that secondary domain, nailed it. We have to open that connection to be able to grab the fonts, which are presumably being preloaded, which is why they're up front, as well as the CSS and the JavaScript. So that's delaying us a little bit. It takes time for those requests to occur. We've got the CSS we know is going to block render. But you nailed that JavaScript that's blocking render. And those little pink bars, like if you zoom in, you'll see pink marks on that waterfall. I don't know how visible that is to folks watching. That's JavaScript execution.

Jason: Oh! Okay.

Tim: That script arrives, and then yeah, we see JavaScript execution right away after that.

Jason: A quick bit of clarification. This pale yellow is download, and the dark yellow is parse?

Tim: Sorry, no. The dark -- no. The pale yellow is the request has been made. We're waiting for the server to send something back. The dark yellow is the content is being downloaded. We don't show parse in the waterfall view here. We'd show it in some of the JavaScript stuff. We only show execution. So main thread execution.

Jason: Got it, got it, got it. But if it's a long parse time, we can see here's where we got the JavaScript, and here's where it was executed. That gap is pretty obviously like something else was happening other than running the JavaScript.

Tim: Yeah, the server was busy with something. Something delayed it. So we've got line 8, 9, 10 -- 10 is another domain entirely. Probably 11 and 12 are blocking. If they're not blocking, they're just oddly prioritized.

Jason: In body parser blocking.

Tim: Okay. Which means, yeah, it's blocking. It's in the body of the page, but it's blocking the parser below it. So those are all blocking. Then yeah, it looks like you're right, line 13 there, that request, if you click on it, can we see if the initiator -- loaded by -- it says it's loaded by the doc. But maybe it's not queued up.

Jason: Oh, you know what I bet it is? It's the in body parser blocking. If it can't get further down the document, this is 3,000 lines down the HTML.

Tim: Which is a big file to begin with.

Jason: And we can probably look at this. Let's see how big it is.

Tim: Oh, it's not that bad. 32 is not as bad as I would have thought. Okay. But there are a couple that do look like big resources on this page. Close that off. The image itself, look how much of that is dark shaded, line 16 request. So if you click on that, yeah, that's a 300K image right there that's triggering the largest contentful paint. So even -- first off, we'd want to shift that over by getting rid of some of this render blocking stuff. The other thing is with that 300K, almost guaranteed if we ran that through some sort of optimization process, we'd have that much lower.

Jason: I mean, honestly, we can just do this, right? Let's copy it and go to Squoosh. This is a really cool tool that will let you do a quick comparison. Let me reload. Then I'm going to paste. So here's our image. Looking at this image, we can then compress it. So we're able to bring it down. I guess this was showing us the original. 4.46 megabytes. If I take the quality down to 50, this is going to be 151. Then if I zoom in, we can even see -- let's maybe look at her face. This is the compressed image, and this is the original. So you can start to see artifacts, but it's not that big of a deal, right. This is pretty acceptable, and we just made this thing 97% smaller. So especially for big images, I use this tool all the time.

Tim: Yeah, that's awesome. Squoosh is great.

Jason: Let me drop a link for that. So back to here.

Tim: The other thing I'm wondering about that file itself -- can you click on that request again?

Jason: Yeah.

Tim: Line 16. Click on the object tab. Okay.

Jason: I wonder it loads a full-size one later on.

Tim: I'm wondering if it itself is just oversized. Just open the image in a new tab would be fine.

Jason: Okay. Open image in new tab. It's a pretty big image.

Tim: Yeah, it's big. So it's larger than it needs to be. If you go back to the web page test run, actually, and scroll all the way up, almost all the way up maybe. Right there. Oh, a little bit more. Little more. Little more. Stop. Yeah, there. Can you do a command click or control click on the analysis tab. Cloudinary is going to actually grab all the images on this page, apply optimizations, and see what they think the size could be. Critically, not just optimizing the image in terms of compression, but also sizing it more appropriately.

Jason: Oh, this is such a cool -- so basically what this is doing, if I'm understanding correctly, we're going to go through, it's going to look at this site, and it's going to say, here's an image, grab that, here's an image, grab that. For each of those, it'll figure out its size on the page and resize and optimize properly.

Tim: Yes.

Jason: What an amazing -- okay. That's a great idea.

Tim: So if you go back to that and scroll down, I think they're loading it up on a mobile version to test it, so keep that in mind. But look at the size difference here. It is being served at like 2048 wide, bringing that down to 728. They're showing you below, like at the most, even if you go with the PNG format, it's still less than 60%.

Jason: Right. So we could drop this thing down by 80% if we served WebP. If you're using Cloudinary, they'll do content negotiation where they check what your browser can support and do the right format. So you can do format auto. If your browser supports AVIF, you get 42%. It just kind of goes down the chain until you hit something you can support. But this is incredible. Look. We could go down to an 85% drop here, 80% drop here, 88% drop. You could save so much weight on this page.
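Outside of a service like Cloudinary, the same format negotiation can be sketched in plain HTML with a picture element; the file names and dimensions here are illustrative:

```html
<picture>
  <!-- The browser picks the first source whose type it supports... -->
  <source type="image/avif" srcset="/img/hero.avif">
  <source type="image/webp" srcset="/img/hero.webp">
  <!-- ...and falls back to the plain img for everything else. -->
  <img src="/img/hero.jpg" width="728" height="410" alt="Hero image">
</picture>
```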

Tim: Right. So some of that is ads, which maybe they don't have as much control over, but certainly for any of their own images, that's going to be a huge part of that experience, and bringing those down will absolutely help tremendously.

Jason: That would be a huge improvement. Again, this is the sort of thing that you can automate. I'll show you how I do it right on this page. I actually use, for each of these, the Cloudinary fetch thing. So you can go in and look at these, and basically what I'm doing is -- where's the image? It's in here somewhere. There's the image. So if we look at this URL, it grabs the fetch endpoint and applies these transformations, but then it just gets the URL. So I'm literally saying take whatever image I give you and then resize it to be, you know, 500 pixels by 250 pixels, fill the image when you crop it, focus on a face if there's a face, use the automatic quality, automatic format. So all of this is done, you know, basically right out of the gate based on this original image. So this is the original image. Cloudinary automatically gives me that. It's amazing how powerful that is. All I had to do was edit my image tags to use that fetch format. Basically, it's exactly what you're doing in your blog post.
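A sketch of the fetch URL Jason walks through, assuming a hypothetical cloud name of "demo" and a placeholder source image:

```html
<!-- w_500,h_250: resize; c_fill: crop to fill the box; g_face: focus on
     a face if present; q_auto,f_auto: automatic quality and format. -->
<img
  src="https://res.cloudinary.com/demo/image/fetch/w_500,h_250,c_fill,g_face,q_auto,f_auto/https://example.com/original.jpg"
  width="500" height="250" alt="Image optimized via the Cloudinary fetch endpoint">
```

Because the original URL rides along at the end, swapping an existing image tag over to this pattern is a one-line change per image.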

Tim: Exactly, yeah. It's awesome. You literally set it one time and forget about it. Set it and forget it to a tee.

Jason: So anyway, if you take one thing away, go fix your images.

Tim: Yeah, exactly. Do we have a few minutes yet?

Jason: We have about three minutes before I need to take us home.

Tim: Oh, man. Okay. If you scroll down again, I know we were talking about third-party stuff specifically.

Jason: Yeah.

Tim: Two things I would note right away. When you have third-parties that are also blocking, like render blocking, so for example that script.fixel.ai, looks like it might be blocking, I believe.

Jason: Yeah, this one is.

Tim: In body parser blocking. Or even just loading jQuery from the googleapis.com thing. When you have that other third-party domain sitting in your rendering path like that, you inherit any of their performance problems. So if their server is slow to respond, if it's hanging, they're having a bad day, whatever it happens to be, your page display is going to be delayed at least as long as their server is. It's called a single point of failure. You've created this point where, like, if their thing goes down, yours goes down. So that's one thing to keep in mind. If you're going to load third parties, which if you have to, that's fine, you need to try and defer and async those as much as possible. Get them out of that render, like that critical path. Otherwise, they're going to create this weakness, this vulnerability in the page load process for you, which is not great.

Jason: Oh, my goodness. They're all blocking. I was trying to see if any were deferred or anything. So the way that you would do this, like if it was you, would you set these with script tag but then add the async attribute? Or would you defer? I know it depends.

Tim: It depends, yeah. So async or defer. Async is going to say download the JavaScript, continue parsing the HTML while you're downloading, but as soon as you have the file, execute it. Which means it could still potentially block the page. Actually, most of the potential blockings you'll see, that's what it is. If the page hasn't displayed by the time the script arrives, it'll block. Defer is the nuclear option. That says download but do not execute until after the rest of it is done. Like, I've got stuff displayed on the page. So if it's a script that you are -- so ads are actually a pretty good candidate for defer. If you can get that initial render out fast enough, that initial display page out fast enough that it's not pushing your ads so late that you're not going to get any revenue off of it. So defer would be the ideal candidate. Async is sort of the fallback on that. And we don't have time to run the test because it takes a while to run, but one thing you can do to sort of demonstrate impact on these third parties as well is if you go to the webpagetest.org home page, there's a type of test called the -- I guess it's under advanced. Scroll down and go to the SPOF tab.
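The three loading behaviors Tim contrasts can be sketched like this; the script names are illustrative:

```html
<!-- Blocking: the parser stops until this downloads and executes. -->
<script src="/js/blocking.js"></script>

<!-- async: downloads in parallel, executes the moment it arrives,
     so it can still block rendering if it lands before first paint. -->
<script async src="/js/analytics.js"></script>

<!-- defer: downloads in parallel, but waits to execute until the
     document is parsed, in document order -- a fit for ads and widgets. -->
<script defer src="/js/ads.js"></script>
```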

Jason: SPOF.

Tim: So if you were to drop one of those third-party providers' host names here and hit enter, it would run the test, proxying that request to blackhole.webpagetest.org, which is a site that does not respond. It hangs. So it'll run a test of your normal page, then one with that third party blackholed, and show you the filmstrips. So you can see the actual difference and the risk you're taking on with those third parties.

Jason: So that means -- so to just repeat this back to you to make sure I understand it, what you're saying is this will show what happens if you're looking at Popsugar but Fixel is down. In this case, if Fixel is down, you have to wait for that request to time out before anything else happens on the screen, right?

Tim: Correct.

Jason: Oof. That's rough. The timeout is long. It's like over a minute.

Tim: Yeah, it's long. That's why we can't wait for the test. It's going to take a while to do that. That's one thing that you can do to sort of demonstrate the risk. Then to demonstrate the impact, if you go back to that page again, to the homepage, if you go to the block tab there, here you can actually block requests or domains. So this doesn't make them hang. This excludes them from the test and pretends like they don't exist. So you can run a test of the page without some of those third-party providers and then compare the performance results. Like how bad is it actually hurting us that we're using a third-party provider here or there?

Jason: So what this tool gives you is -- you know when you have -- like a lot of times when I'm having conversations with people around the company, the struggle is that I'm talking about one goal, and they're talking about another goal, and they're ultimately the same goal but we don't have any shared language to communicate what we're trying to get at. What this does is gives me shared language. What I'm able to do is I'm able to say our site is not loading fast enough, and they say we need ad revenue. Then I can say look, if I run the site, here's a performance test without any of the third-party scripts, and here's a performance test with them. Do you see that this is, you know, ten times slower? We're not getting ad revenue because people are waiting so long. We need to find the middle ground. That makes it tangible. It's not like arbitrary perf. It's, look, I can literally show you the site is ten times faster if we take these scripts out. So we need to start making some trade-offs.

Tim: That's a really good way of putting it. Since it's all run on consistent test hardware versus, like, dev tools on your own machine, it's reproducible. Again, I guess we didn't do the dev tools thing. I love dev tools. You can do a lot of the same stuff here. But if I'm going to share the result with the team, I'll hop in here first.

Jason: I mean, I share this, right? Anybody who wants to see it can look at this test we just did. Okay, perfect. Unfortunately, that means we are out of time. So let me swing it on back to the home page here. We're going to just do a quick shout out. We've had Rachel with us all day doing live captioning. Thank you so much, Rachel. She's from White Coat Captioning. And that's made possible through the support of our sponsors. We've got Netlify, Fauna, Auth0, and Hasura all kicking in to make the show more accessible. Tim, other than Twitter, which I'm going to drop in the chat right now, where else should people go if they want to either learn more from you and/or follow up and do more with their web perf?

Tim: Sure. So my site, which is linked from the Twitter bio there. Although, the other one, I guess, may be better nowadays is blog.webpagetest.org. I think I blogged for my own site maybe twice this year. I think I've written like eight or so for the web page test blog. I'm not very good at writing for myself when I start writing for other things.

Jason: I have similar problems. (Laughter)

Tim: So this is a good one for a mix of things. We try to do the product updates, like you see up top. If you go down, there's more technical posts. We'll walk through how you do this in web page test, or actual benchmarking or studies and stuff like that. So it's a nice mix of content. It's not just me. Jeena's posts are really good. That kind of thing. This would be the other place to keep in mind.

Jason: That's fantastic. Tim, thank you so much. Chat, thank you as always for hanging out. Head over to the site and check out the schedule. We have some good stuff coming up. Later this week, we're bringing on John Breen, learning how to do a command line interface in Rust. Then I'm taking a week off. I'm going to a cabin, doing nothing. When we get back, we're talking to Daniel Phiri and Tomasz. So many good things are coming. Get on this schedule. Click on that Google calendar link so you have them listed for you so you never miss one. With that, Tim, thank you again so much. Chat, stay tuned. We're going to go find somebody to raid. Thanks, y'all. We'll see you next time.

Tim: Thanks.
