Automate Performance Boosts Using Experiments
WebPageTest just launched new features that let you run a new test with fixes applied to see how they'll affect your Core Web Vitals. Scott Jehl will teach us how it works.
Links & Resources
- https://blog.webpagetest.org/posts/introducing-opportunities-and-experiments/
- https://www.learnwithjason.dev/
- https://twitter.com/scottjehl
- https://www.webpagetest.org/
- https://www.webpagetest.org/themetrictimes/index.php
- https://twitter.com/tkadlec/status/1536689811473543172
- https://twitter.com/realwebpagetest
- https://www.twitch.tv/webpagetest
Full Transcript
Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.
JASON: Hello, everyone! And welcome to another episode of Learn With Jason.
Today on the show we've got Scott Jehl.
Scott, how you doing?
SCOTT: Hey! I'm great, Jason. Thanks for having me.
JASON: Yeah, I'm super excited to have you on the show. I think this is going to be a really fun one, because, you know, I feel like maybe -- maybe people don't know about this. But one of my favorite tools is WebPageTest, because it lets me get all of this really good insight into the things that I work on, completely for free. Right?
Like it's sort of like a Lighthouse test, plus, plus. Right? Because it gives you so much more information.
And that is what you work on at Catchpoint. Right?
SCOTT: Yeah, I mean, I'd have to agree as a user for a lot longer than I've worked for Catchpoint, that I'm a big fan too. [Chuckling].
JASON: [Laughing].
SCOTT: Yeah, I work at WebPageTest, senior experience engineer on the team. And I've been working on the new direction of the product. We just had a pretty big upgrade, I guess, of features both free and paid.
JASON: Yeah.
SCOTT: Yeah, I think you described it well. It's similar to Lighthouse -- we can actually run Lighthouse as part of a test -- so it's a complement.
JASON: Absolutely.
SCOTT: Cross browser, different devices around the world, so it's got its own unique offering.
JASON: Yeah, for sure. I think what -- no, I got ahead of myself. Because what I should have done, I should have asked about you. Do you want to give us a bit of a background on yourself before we dive into Catchpoint WebPageTest?
SCOTT: Sure. Yeah. So I've been making websites for a couple of decades, I guess, at least. [Chuckling].
And the last ... I guess about nine months ago? Maybe or so? I joined Catchpoint on the WebPageTest team.
And prior to that, I had been with Filament Group, a design agency out of Boston, for about 15 years.
So I don't move around much. [Laughing].
JASON: [Laughing].
SCOTT: Yeah, so at Filament, I think, you know, what we would do, we would do -- work for clients, and we had a lot of really interesting work over the years through that. And I think in recent years, we had started going in the direction more and more of doing audits, like from a performance and accessibility perspective, which, you know, paired with I guess our interests in those areas, that we had been working on for a long time prior to that.
And so we would use WebPageTest all the time, and I think a lot of that -- that, you know, consultant kind of experience led me to, you know, the sorts of things that I care about working on for Catchpoint now.
JASON: Very cool. Yeah. So I think ... 15 years at one company is, like, kind of boggling my mind. Now when you hear somebody's been at a company for their fourth anniversary, they're like, "Wow, you're really committed, staying at one company!"
The world is very different.
SCOTT: Yeah, I think, you know, the nature of working for a company that does a different project every three to six months keeps it pretty interesting.
JASON: That's fair.
SCOTT: And it's a great team.
JASON: That's the bigger thing. When you find a team that's just wonderful to work with, and that treats you well. You know, there's --
SCOTT: Right.
JASON: I don't know. I think you can get swept up in this idea that there's a perfect company out there, but all companies have their flaws. You just are trying to find the one that doesn't actively make you feel bad about yourself.
[Laughter.]
All right. So that's a rabbit hole we probably shouldn't go down. Let's talk a little bit about, you know, you said you were a WebPageTest user before you joined the company.
I've been a WebPageTest user for a long time. Chat, have you tried this? Have you used WebPageTest? So for folks who haven't seen it before, you know, I described it as Lighthouse plus plus. Do you maybe want to give just a kind of high-level view of what type of information you get when you run a WebPageTest test? Or even more, what is actually happening? Because I feel like it's a little different than when you use the dev tools for a Lighthouse test.
SCOTT: Yeah, it is a little different. Just from the home page of WebPageTest.org, you can kick off a variety of different kinds of tests, from the default, I guess, which would be a site speed sort of performance test. You can do visual comparisons between two different pages, where you would get like time line views, and be able to see how they load differently, visually.
You could test traceroutes -- there's a variety from the home page.
But I think the most common way to run it is to just put a URL in for any old site, and choose a location from -- we have actual servers throughout the world that host our test agent. And those test agents can run browsers with, you know, different connection speeds.
And, you know, you can test from somewhere in Italy on Firefox on an Android device, or Chrome from China on a tablet. Things like that.
JASON: Uh-huh.
SCOTT: And at a variety of different connection speeds, like 3G or 4G, or cable, a wired connection. So you get the idea. There's a lot of variety there.
And, you know, at a high level, I think the first thing that you get when you hit "go" is, it actually kicks off that test and tests it from that physical location.
And you get a wealth of information about how the page loaded on that ... particular device from that location and speed.
And a lot of that is familiar, or will be familiar to you if you've run Lighthouse or something like that before. You get, you know, up front the sort of metrics that we typically talk about when we're talking about web page performance, which is, like, you know, user-centric sort of perceived metrics, like how soon the page appears visually.
Renders in the -- on the screen.
And how long it takes to become interactive, to touch or click or keyboard. So things like that, at a high level, are what you see first.
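For reference, a minimal sketch of reading those same user-perceived paint metrics in the browser with the standard Performance API. WebPageTest gathers them differently, through its own test agents, so this is purely an illustration:

```ts
// Minimal sketch: log the browser's paint metrics (first-paint and
// first-contentful-paint) as they're recorded during page load.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is "first-paint" or "first-contentful-paint"
    console.log(`${entry.name}: ${entry.startTime.toFixed(0)} ms`);
  }
}).observe({ type: "paint", buffered: true });
```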
JASON: Right, right.
So this is actually -- this might be ... we can go as deep down this rabbit hole as you want. But when you start looking at metrics like these, there's a lot that comes at you. You start looking at -- what's the blanket term -- Core Web Vitals. And I don't know which ones are vitals and which aren't, but we see, like, first contentful paint, largest contentful paint, time to interactive, first input delay, all of these different metrics that you see come back.
I find that to be overwhelming, and I don't think I'm alone in -- you get all this information, and you just go, "Phew, okay, I got a C on my website. I don't -- what do I do? How do I fix it?"
How do you -- when you look at these metrics, how are you prioritizing, or how are you kind of sorting through this data to turn the information into action?
SCOTT: Yeah. That's a great question. I agree with your, you know, your sentiment about the metrics.
I think, you know, for a lot of users there's actually more information than you need.
I like to -- I like to start by looking at, you know, usually start render, or the first contentful paint kind of metrics. Just, you know, I think it's the most natural user-perceived sort of metric to look at, which is, like, when I hit a URL and hit "go," how soon do I see something?
JASON: Uh-huh.
SCOTT: And just, you know, by looking at that metric, say first contentful paint or -- or largest contentful paint, one of the two, you know, it tells you a lot.
JASON: Right.
SCOTT: These metrics are, by nature, sort of, you know, additive.
If LCP, largest contentful paint -- the moment that the largest piece of content on the screen finishes loading -- if that takes a long time, then, you know, there are a number of things that could have contributed to that.
So you can start to step back and see, okay, well, was there a large gap between the first time something painted to the screen, and that moment when LCP came in? Okay.
JASON: Right.
SCOTT: Then I can start thinking about, you know, something's happening kind of after the HTML arrives on the device. Like additional requests that aren't efficient, something like that.
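As an aside, the LCP moment Scott refers to is observable in the browser too. A minimal sketch with the standard API; the browser may report several candidates as larger elements render, and the last one stands:

```ts
// Minimal sketch: watch largest-contentful-paint entries as they're reported.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`LCP candidate at ${entry.startTime.toFixed(0)} ms`);
  }
}).observe({ type: "largest-contentful-paint", buffered: true });
```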
JASON: Yeah, and these are things that we feel. Right? So, like, first contentful paint: I enter the URL and hit enter, and I'm looking at the blank tab. And between me hitting enter and something showing up on my screen, that amount of time -- if I wait long enough, honestly, it's perceptual. I'm like, is this site broken? Is my internet down? And I start wondering. There's an immediate quality hit. And then you load the site and you see a header and a bunch of blank space, and you're like, is the site broken?
And then a giant photo pops in, and go, "oh! " And those types of experiences.
These are metrics that describe that experience. Like, if --
SCOTT: Yeah. You described them both.
JASON: Yeah, right? Say first contentful paint is five seconds. As data, you don't understand what that means. But if it takes five seconds to go from -- I hit enter -- to literally anything on the screen, your site feels broken. Like, you're like, oh, no. That's not good.
And the same is -- if it's five seconds to get something on screen, but then 15 seconds to get the bulk of the content, now you're like, "Oh, boy, that's really bad."
It's -- I think that was the point where it started to make sense to me. When I really understood not just what these things were called, but what the experience is, where those lines are drawn.
SCOTT: Yeah.
JASON: Of seeing, you know, of experiencing a first content full paint. And that's something that I love about WebPageTest, the filmstrip view. Because it helps show those a little more ... like it lets me experience it.
Where it's not just somebody saying, "It took seven seconds to load," and you go, that's not bad. Then you look at the filmstrip, and actually your site was showing unstyled text for four and a half seconds before it popped in -- and you go, oh, that's actually -- I don't want that. I want to fix that.
SCOTT: Yeah. I think you really nailed it, where, you know, these metrics define moments to the user of the site, but a little more fine-grained than what you can see. Even if you were to see every key frame within that seven seconds, there's still more going on. Right?
Like a blank screen entirely, from, like, zero to six seconds, may not just be ... you know, something concerning first contentful paint. It may be that the HTML arrived really quickly, but the browser was forced to wait to show anything to the user, because of subsequent requests that it was making.
And that's something that the user cannot tell. Right? So, like, we have -- we have tooling, like WebPageTest and Lighthouse, that can give us a little more information about, like, what the blank screen --
JASON: Right.
SCOTT: -- what's going on, actually, behind the scenes. Because there may be a step or two in there that's still blank.
But it's pretty meaningful, you know, and can be --
JASON: Yeah. And that reminds me of the metric that is maybe my biggest source of frustration: first input delay.
Because thinking about that one, like, if we put that in the context, that's when you load the site, and there's a button, and you click the button, and nothing happens, and you're like, did that button click? So you click it again, and nothing happens. And you're like, "What the hell is happening?"
And it pops a little bit, and you go oh, it was still loading something, and now it's interactive. And you have super frustrating moments.
SCOTT: Yeah.
JASON: I can understand that I am frustrated, but I can't understand why all that happened. And tools like this are -- give me -- how do we -- I guess we will look at this later, but at a high level, how do you identify those types of slowdowns when you're kind of looking at the results? Where would you look to see how that's all happening?
SCOTT: That one's a big deal, because, you know, you might end up getting charged twice for a product. [Laughing].
For example.
You know, clicking the submit button or something.
So yeah, I mean, there's real, you know, user consequence to a lot of these beyond just inconvenience, you know, and waiting.
But yeah, I mean, it varies. In the -- in the situation you're talking about, where, you know, you click something, and nothing seems to be happening, typically there's some level of JavaScript, you know, applied to that action that you made in the browser -- whether it's clicking or tapping something or a keystroke -- and it's taking too long.
Often it blocks, you know, there's one thread, and it takes up the whole thread, and there's nothing getting through until it finishes its task. And we can see that in the -- in the tooling, if you look at a WebPageTest result, it will show a little strip of, you know, as the page loads, here's the main thread, and it's a flame graph, and you can see based on color-coding, what's contributing to filling up that main thread and blocking it, and often it's JavaScript in the case that you're talking about.
Not always -- sometimes, you know, decoding images can be quite costly on the main thread.
So other types of resources take their -- take their toll on it.
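A hedged aside: the main-thread blockage Scott describes shows up in the browser as "long tasks" -- anything holding the thread over 50 ms -- which you can watch with the standard Long Tasks API. A minimal sketch:

```ts
// Minimal sketch: log stretches where the main thread was blocked for more
// than 50 ms -- the same blockage the WebPageTest flame graph visualizes.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(
      `Long task: ${entry.duration.toFixed(0)} ms, starting at ${entry.startTime.toFixed(0)} ms`,
    );
  }
}).observe({ type: "longtask", buffered: true });
```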
But more and more, I think we're seeing, you know, in development in general, you know, in the last 10 years, a lot more dependence on client-side JavaScript.
JASON: For sure.
SCOTT: And now we're sort of seeing maybe a little movement -- maybe you can observe this too -- back in the other direction.
Seeing, okay, I think we went a little too far. There's clearly a performance hit from all of this scripting. So what can we move out of the browser and back to the server to kind of make things efficient again?
It's sort of a cycle, I think. [Chuckling] -- in web development that happens over and over.
JASON: I've heard it described as an upwards spiral. And the thing that's interesting about it, it seems like we kind of move from -- we find something exciting, and we start working on it.
And then we start to reach its limitations, and so we move over here, and we start looking at, like, how can we work around those limitations? And we get excited over here.
As we're doing that, this thing is improving. And then we reach the limitations of this, and we cycle back, and go, "Oh, this is better now. Let's work on this," because we found the limitations over here.
You can steadily go upward as you swing back and forth.
And it definitely is a little -- you know, it feels a little bit like the pendulum swinging between client and server. But what I've noticed is we're not, like, throwing away what happened on the client. We're just letting go of the parts of the client that were really heavy. Like, we used to need to load all of React, or jQuery, or whatever the front-end library, to do just about anything. Because the browsers had so much variation that it was required.
But as browsers have solidified on the spec, and they're getting better at being compliant with the spec, you need less and less JavaScript. Now you can rely more on the platform. Which means that stuff like using form submissions -- I know Remix is really excited about being able to just, like, submit a form, instead of using JavaScript to catch the form and handle it in a script -- is possible. And I'm sure we will find the limitations of Remix and start solving them on the client, and the platform will continue to improve.
SCOTT: [Laughing].
JASON: It seems to be the way we roll, but that's good. That's how the march of innovation continues.
SCOTT: Yeah, I agree. I think, you know, at Netlify, for example, among other companies, there's a push to move to the closest version of a server, like in edge locations where the latency is low, but you're still not doing everything in the browser where it's too costly.
JASON: And also, when you do things in the browser, you introduce -- you're offloading the burden to the user. Right?
You are also introducing some variability, because you don't know exactly how, like, Firefox's JS engine and Chrome's are going to be close, but they're not exactly the same. And is that person, their connection getting interrupted because they're going through a subway tunnel, or things like that, that are just hard to -- it's just hard.
If you put it on a server, it's far more predictable, but then you're duplicating work. I think we're trying to find the happy medium. Let's put server, like, stuff near the user. And then let's also find ways to, you know, do that work once when it makes sense, so we don't have to do it repeatedly -- and we see that as prerendering. Or, in the case of things like SSR, you don't necessarily want to have to SSR a page every single time somebody loads it. That's why we used to get the Twitter fail whale, or things like that. The servers were over capacity.
If you only do that work one time, then you can cache it forever. That's why Netlify has builders: build it once, and then it's static as far as the site is concerned.
Or, you know, you can do caching headers, which is what Remix does. And there's a lot of ways that you can kind of approach these problems to get that dynamic opportunity without the burden of having to actually do everything server-rendered every time, which I think we've all -- for the most part -- agreed isn't really the best way. That's kind of like --
SCOTT: Yeah, I mean it's that classic -- I mean, I think back to, like, the earliest time where I ran into this.
Even before we were doing Ajax, necessarily, or fetching or templating or anything like that.
It was like form -- form validation. Right?
JASON: Right.
SCOTT: You know, of course you have form validation on the server.
But, you know, waiting for the server to parse your form and give you an error was such a pain, that we had to duplicate the logic, you know, on the client side as well.
And usually it was written differently. [Chuckling].
Different languages, and now we have a little bit -- better sharing going on.
But back then we didn't.
So yeah, I mean, that -- that's the earliest example I can remember, and that's still, frankly, you know, we can do a lot more validation work without JavaScript now, like in HTML, so things have gotten better, but it's still kind of a concern. Right?
Where does that logic live, and can it live in both, you know ...
JASON: Right.
SCOTT: ... maintain it.
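For reference, a minimal sketch of the HTML-native validation Scott mentions: the browser enforces the rules declared in the markup, and the Constraint Validation API lets script reuse those same rules. The markup in the comment is illustrative:

```ts
// Illustrative markup the browser can validate with no custom logic:
//   <form>
//     <input type="email" required>
//     <input type="text" pattern="\d{5}" title="Five-digit ZIP">
//     <button>Submit</button>
//   </form>
// The Constraint Validation API reads those same declared rules from script:
const form = document.querySelector("form")!;
form.addEventListener("submit", (event) => {
  if (!form.checkValidity()) {
    event.preventDefault(); // keep the invalid submission from going out
    form.reportValidity();  // show the browser's built-in error messages
  }
});
```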
JASON: Yeah, this is -- it's a really interesting space to be in.
And it actually -- what I'm excited about with this latest move toward the idea of edge compute is how many things that weren't possible before have become possible now. Right?
One of the things that I think about is, you know, I've been doing experiments with edge functions, and I am able to set up a plain HTML page that is like a link tree.
And it's just a link to my social profiles, but then using an edge function, without adding JavaScript to the page, so the thing that's sent to everybody who loads the page is still static HTML.
But I can pull in the latest, like, tweet or video or blog post, and enrich that button at the edge.
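A hypothetical sketch of that pattern, in the shape of a Netlify Edge Function running on Deno. The API endpoint and the placeholder token in the HTML are made up for illustration:

```ts
// Sketch only: serve the static HTML, but enrich it at the edge with fresh
// data before it reaches the browser -- no client-side JavaScript shipped.
import type { Context } from "https://edge.netlify.com";

export default async (request: Request, context: Context) => {
  const response = await context.next(); // the static page from the CDN
  const html = await response.text();
  // Hypothetical endpoint returning { title: string, url: string }.
  const latest = await fetch("https://example.com/api/latest-post").then((r) => r.json());
  return new Response(
    html.replace("{{LATEST_POST}}", `<a href="${latest.url}">${latest.title}</a>`),
    { headers: { "content-type": "text/html; charset=utf-8" } },
  );
};
```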
SCOTT: Yeah, that's neat.
JASON: Or geolocation -- update the map to show somebody their neighborhood instead of a generic neighborhood. There's some cool things you can do without adding JavaScript, or at least client-side JavaScript. But I think my favorite use case is -- and I swear to God this is not, like, because you're on the episode and I'm trying to talk you up.
It's the way that y'all have used them.
Because it's so novel and so incredible.
So let's talk about that for a minute, and then I really want to show it off.
SCOTT: Yeah.
JASON: Can you talk about like this new feature of testing code changes without changing code?
SCOTT: Yeah.
JASON: How are y'all building that?
SCOTT: Yeah, the short answer, I guess, is edge functions. [Laughing].
Yeah. As you noted.
But yeah, I mean, this is something, I think, that -- that the performance community, at least from my purview, has been building towards for quite a while. Because, you know, for a long time -- so first of all, we've had, you know, performance results that are giving you a lot of information, but not always being clear about what exactly you should do about it, or how to prioritize, you know, the information you're getting, and what to fix. And some tools have done better than others in recent years to start to address that problem.
Like PageSpeed Insights, for example, was one of the early ones to give us a lot of recommendations about, you know, if you pay attention to these blocking resources over here, you know, they look like they're causing your page to be blank for a long time.
JASON: Hmm!
SCOTT: "Maybe you could load them in a different way. Here's what we recommend." That was a really big step, to be able to provide that kind of, you know, insight recommendation from, like, a performance expert. You know? It's like kind of having your own consultant.
JASON: Yeah.
SCOTT: For a while, WebPageTest has always kind of -- it started as somewhat of a power tool, I think, and it still is.
And its audience was, you know, more skewed early on towards people who were doing, like, deep performance audits, and knew what they were looking at when they look at a performance result. And so we've been -- over the last few years, trying to expand, without limiting its capabilities, but expand the audience that can understand what they're looking at in WebPageTest when they get a result.
So it's more, you know, more user-friendly, even to nondevelopers.
JASON: Right, right!
SCOTT: And that's tricky to do. Because you're dealing with very -- very technical developer-oriented content.
But the concerns are relevant to all of us. Anyone who uses a website. And especially anyone who runs a website.
JASON: Right.
SCOTT: So -- yeah. Long way around to get to this. But, you know, as we've been doing performance audits over the years, I think when we, as consultants, have identified ways that we could fix a particular bottleneck, there are a couple of ways that we would approach testing that. Right?
JASON: Right.
SCOTT: Because, you know, it's one thing to say that you think this is going to fix the problem, and another to actually demonstrate that it does or doesn't. And that's both useful to know.
And traditionally, that would mean, you know, a number of things. You would either try to experiment with changes on a live website by, say, having maybe a query string flag, something like that. And the change would actually happen in the live code base, and you would have to, you know, change your production code base to support that.
Which is very risky. Right? With --
JASON: Risky, and also just expensive. Like, who --
SCOTT: Yep!
JASON: What company is going to prioritize that kind of work? It takes a lot. Right? You need --
SCOTT: Right.
JASON: Buy-in at the highest level to make production code changes. And a lot of that is so you can prove that there is a problem, which makes it difficult to get the buy-in. Because you're saying, "I'm pretty sure we can fix it, but you got to let me do prework to show you how bad it actually is." And that's hard.
SCOTT: And in the process, maybe you introduce bugs, security issues, you know.
Which -- all of which are good reasons to not do the work.
JASON: Yeah! [Chuckling].
SCOTT: It was hard to get buy-in as a consultant. Hard to get a company that we're working for to say, "Try this out. Here's what we want to try."
So there were other ways to do it, where we would copy their code base as best we could. Host it somewhere else and start manipulating that. Maybe make branches in GitHub. The downside is you're not dealing with the live site. All you can do, at best, is say, "We think this'll help, because our -- you know -- our copy behaved similarly to your site, and our optimized copy fixes --" whatever problem we're looking at.
A ton of work to do, even for one optimization, and to test it. Okay.
So -- but we would do that all the time. Right? And it would limit like -- the amount of work it involved to test each theory really was limiting to the number of theories we could test. Right?
So we would have, like, you know, a handful at best of all-star optimizations that we know from our expertise over the years, are probably going to work.
JASON: Yeah.
SCOTT: And sometimes we were wrong. But, you know, we would go to those first.
So expensive work.
The last part, I'll say, is we would, in recent years, write edge functions by hand.
So we would use a feature like Netlify or other companies have, that lets you write a proxy on all of their edge server locations that you could pass a site through. And it'll just make a little change as the site goes through and makes its way to the browser. Maybe you request the page, and it goes through and finds all of the images on the page that don't have a width and height attribute, just adds them, and passes it on through. Right?
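A hedged sketch of that kind of hand-written proxy, assuming a Deno-style edge runtime. The origin URL is a stand-in and the regex is deliberately naive:

```ts
// Sketch only: proxy the origin page and add placeholder width/height
// attributes to <img> tags that lack them, then pass the HTML through.
export default async (request: Request): Promise<Response> => {
  const url = new URL(request.url);
  const origin = await fetch(`https://example.com${url.pathname}`); // stand-in origin
  let html = await origin.text();
  // Naive, illustrative rewrite: real tooling would parse the HTML and use
  // each image's true intrinsic dimensions rather than fixed values.
  html = html.replace(/<img(?![^>]*\bwidth=)/g, '<img width="800" height="600" ');
  return new Response(html, { headers: { "content-type": "text/html; charset=utf-8" } });
};
```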
JASON: Right, right, right.
SCOTT: That was a huge advance, because the risk went away of modifying a real site.
But you introduce some new problems -- which haven't necessarily gone away -- namely that you're not testing the live site.
It's a simulation. But anyway --
JASON: Well, it's a simulation, and yeah, just an unbelievable amount of work. Out in the space right now, you've got Netlify edge functions powered by Deno, and the other one is Cloudflare, and everything else is built on that. So Vercel -- Netlify is using Deno, and I think, you know, Supabase is using Deno, and so, you know, you -- you have to go use one of those tools. I think Deno is built on open specs, but you've got to climb in and write this stuff by hand, and you've got to figure out how this particular website is built so that you can transform it.
It's -- there's just so many little steps that, you know, like you said, it just -- it just means that you're not really going to -- you're not going to do a lot of the tests. You're going to do the big ones.
SCOTT: Right.
JASON: You're going to be like -- most of the time, this works, so we're just going to do it. But you're not going to validate that. You're just going to, like, ship it and see what happens.
SCOTT: Yeah, and even if you get a test that's working pretty well, you have to remember to set up your environments the same way, make sure you do the same number of test runs through it, and also, you want to probably run your control run, so to speak, so they go through that same environment without modifying the site.
JASON: Right.
SCOTT: Because just passing through an edge worker is probably going to change some things about the way the page loads --
JASON: If nothing else, it's a relay. Right?
SCOTT: Yeah, but it's also -- it could be closer to the device, you know, latency-wise, than the original server was.
JASON: Uh-huh.
SCOTT: And, you know, the nature of how that super powerful edge server is proxying it might make it actually faster. [Chuckling].
Sort of like a CDN, just to fetch it.
So yeah, I mean, there are just factors that make it, you know, make it fairer to compare apples to apples, I guess.
JASON: Yeah. So, you know, historically, then, you know, just to kind of do a speedrun of everything you just said: originally you had expert consultants who would have to manually do these tests, either by modifying production, by scraping the content and making their own approximations of what production looks like, or by writing these -- these edge proxies.
SCOTT: Right.
JASON: All of those are highly specialized, highly manual, highly expensive ways of doing it.
SCOTT: Right.
JASON: And that's why I think what you've -- what you've done is exciting.
Let's talk about what --
SCOTT: Yeah.
JASON: What did you --
SCOTT: There's history to get to this moment.
But yeah, what we've released in I guess it's been, like, almost two weeks now?
So -- yeah, just about two weeks. It's sort of two-fold. We've added a section to test results that gives you recommendations, sort of like the ones that I talked about with PageSpeed Insights, although they're our own flavors of those audits that we run, and recommendations based on what we find to be relevant.
Through our own experience.
And then depending on, you know, how those diagnostics -- those diagnostic checks that we run happen to go, we either say that things look good, or we identify opportunities to improve. And when there are opportunities to improve, we can either, you know, recommend a tip, if it's not something that we can change very easily ourselves. Maybe go off, try this in your own code base.
Or, the killer feature is being able to try experiments just by checking the box.
So I think you'll probably be sharing the screen at some point, but basically some of the -- some of the optimizations that we were just talking about are as easy as saying, "Yes, why don't I try deferring these four JavaScript files that the tool says are causing my page to be blank for four seconds?" Maybe load them in parallel and show the page. And you can try it out, hit go, and get a result page that compares your experiment to the original site, both in a controlled, simulated proxy setting.
So you're reliably testing a fair comparison.
It's just ... you know, it's many days, at least -- [chuckling] -- of work saved each time you click that button.
Yeah, I'm pretty excited about it. It's --
JASON: Yeah --
SCOTT: -- like video games to me.
JASON: What I love about it, it's a flattening of the -- the landscape to something that's more approachable. Right? Anybody who's ever heard me get excited about software, it's almost always because we've gotten an abstraction to make something that was previously inaccessible more approachable to those who are everyday devs, not specialized like a performance consultant.
Tools like Netlify or Remix, they're taking these things that were super complicated to do and building good abstractions so that you can do 80% of the best practices, just by doing the default.
All you have to do is show up, and you've won. Right? And it feels like one of those things, where now, when you're doing perf testing, it's not just "show me numbers" that I then have no idea how to fix.
Here are your numbers, and if you did this change, it would help. Do you want to try it and feel the difference?
That feels huge.
SCOTT: Yeah. And I think, you know, it's ... there's two audience considerations that I think are really important.
One is the enabling audience. Right? Who was unable to either identify what to do, or unable to test it if they had an idea what to do.
Now they can, and they can refer to their developer engineer team and say, "You know, we have this problem, and WebPageTest shows that if we add this attribute to these elements, it'll, you know, speed things up and fix the problem. So if you could do that." Because that's not their role, necessarily, but they know the right person to talk to.
That's a new audience who could not do that prior to this tool. That's exciting to me.
JASON: Well, it saves the -- we were talking about, just to get the data to make an argument for why you should make a change, you have to get buy-in at all these different levels, because it takes too much work to set this up manually.
So what I find exciting, it eliminates that step. You can go, and with very little buy-in, you can get a pretty solid hypothesis that, like, this is what we're doing now. This is how it would change. And this is the benefit we would get. We need X days, or X weeks, to go and solve this problem in our production code base.
You don't need, like, an executive sponsor to do the exploratory work anymore.
SCOTT: Right, right. Exactly.
So yeah, I mean, that aspect is a really big deal, I think.
But I think, before we go too far in sounding like we've replaced the consultant role --
JASON: [Laughing].
SCOTT: As a consultant, I would be offended.
JASON: Sure.
SCOTT: I think it's blown wide open the possibilities of what you can do and how much testing you can do as a consultant, because there's kind of two aspects to the -- the experiment section.
And a lot of them, the first aspect, so to speak, is sort of reactive. Where it's identifying problems and suggesting an experiment you could try.
And you have a little control over how it's applied, but not a lot. It's very targeted.
But there's a whole separate section of that experiments page, which is just open-ended. And this is like the power tool section, I think, where the consultant can say, "I understand the page that I'm dealing with, the website that I'm dealing with, and I see the bottlenecks -- and maybe WebPageTest didn't notice this particular thing that I believe will be helpful, so I can go in and say, okay, I'm going to write a find-and-replace on the HTML of this page, and add an attribute to these particular tags in the page."
Or, "I'm going to add these script tags to the head of this page and see how it loads."
In that case, it sounds like a consultant power-user sort of thing, but there's also an enabling aspect to that. Say you're on a marketing team and you're trying to convince the rest of your team that this chat bot is a good idea to add to the site, and the engineers are like, "Whoa! This is going to influence our page speed. It could slow down the page. Can you show us the impact?"
And being able to just say, "Okay, I'm going to add this script to the end of the body, or the end of the head. And here's a comparison. And it shows it's not slowing things down." That's huge. Being able to inject that step, versus finding out later that it ruined your metrics. [Chuckling].
Because you added something that degraded performance, or caused an accessibility issue.
JASON: Right! Yeah. That's a -- okay.
So I feel like all the other questions I have are going to be easier if you're looking at something. I'm going to switch over to the paired programming view. Give me just a second. And I'm going to take this opportunity to do a shout-out to our live captioning. We have White Coat captioning. Kathryn is doing the captioning for us. Those are available on the home page of LearnWithJason.dev. Head over there if you want to follow along with the captions. Those are made possible through the support of our sponsors. Netlify, Backlight, and NX, making this show more accessible to more people. And we are talking to Scott. Make sure you go and give Scott a follow on Twitter for more insights into all this -- all this goodness around perf -- why is my video being weird? Hold on one second. I got something weird going on with my video here.
Here? Yeah? Let's just make that a little bit smaller.
I think I had it in solo view and edit the source instead of adding a new one like an adult. That's better. Now I'm not on top of Scott's face. We're talking about WebPageTest, and I shared this in the chat, but this is the blog post that introduces all of the things that changed. And this, for anybody who's not familiar, is WebPageTest.org. What's my entry point if we want to see how things work? What should we test?
SCOTT: Yeah, good question. First of all, just from the tech side of things on the stream, is there a way that I can see your screen as well?
JASON: Oh, yeah.
JASON: Sorry about that. I just realized I never -- give me a second. I'm going to get this off the screen.
SCOTT: I can probably guess, but ... [chuckling].
JASON: Sorry. I -- yeah. I don't know why I didn't set that up today.
Just really doing a great job.
All right. Let me do this. And then ...
SCOTT: Ah, there it is.
JASON: And we will get ...
SCOTT: Perfect!
JASON: If you want to hover over that, you can copy the embed link and put it in a new tab, and it will be full size.
Okay. Now that Scott can actually see what's going on, and I'm ... yeah. Let's -- [chuckling].
SCOTT: Hey, nice!
JASON: Ahhh! Chat, as you can see, I'm -- it's not like I've done, what, 300 of these before.
[Laughter.]
Amateur hour!
SCOTT: No, this is good. All right.
JASON: What should my first step be?
SCOTT: Yeah, so as far as the features that we just talked about go, those sort of happen later, after -- after you run the test, for the most part.
From here, it's the same as it's been for a while.
You can start a test by pasting any website URL in to that box there.
JASON: So we can take Learn With Jason --
SCOTT: Sure. And below that --
JASON: I just hit enter. I didn't even --
SCOTT: Shift-tab -- or shift back, just so -- yeah.
Just below that, there's some configurations. The simple one has some defaults that you can choose from. These are just kind of -- a favorite selection, I guess, that we have, of location browser device that you can choose from.
So again, cross browser is kind of like the big deal for WebPageTest. You can run in Edge, Firefox, Safari, Chrome. Those are the sample settings.
JASON: Looks like we got options over here for --
SCOTT: Yeah -- repeat view is for caching. If you want to test, you know, how a page loads on refresh. See if things are cached well in the browser. You can include a Lighthouse audit. And for pro users, which I'll get to, that's our new plan.
You see the private option is -- is available there. So you can make a test private.
JASON: Yeah.
SCOTT: They're public by default.
JASON: Let's start there so we can set expectations -- oh, that's the wrong thing. No, that one's wrong.
SCOTT: Yeah. Well -- yeah. That's what I was thinking of. If we go to opportunities and experiments -- not that one.
JASON: NOT that one. Opportunities and experiments.
SCOTT: Yeah. It's just because you're signed in, that --
JASON: Got it. Okay.
SCOTT: -- the page is -- from here, if you wanted to find out about features, you can compare plans and go from there.
JASON: Great.
SCOTT: For a basic rundown of which parts are paid versus which parts are, you know, free to anyone with an account, basically most experiments, all but one, are part of the paid plan.
But we do offer one that you can try as long as you have a free account, and that tends to be the one that improves performance on the most sites, which is "defer render-blocking scripts."
JASON: Got it.
SCOTT: So -- yeah. That's pretty neat. Just with the -- a free account, you can -- you can get access to trying some experiments.
There's also one thing that I should mention. You can go to your performance result, if you want.
JASON: Oh, yeah. Let's do it.
SCOTT: Okay. That's still running. Yeah. You chose 4G connection speed, and three runs.
And that's -- yeah, from Virginia. It takes a minute, once that -- you know, there's always -- typically a queue to wait for a test to start from Virginia, because it's a popular location.
JASON: It's a default. I just went in there and hit the button.
SCOTT: Exactly.
JASON: We got a performance summary. We can see -- we got some issues! Got some issues.
SCOTT: Yeah. So if you are new to -- or if you haven't been to the site in the last few weeks, this is the section that'll look most familiar to you.
We added a little bit above it in the last week or two that's totally new, and that's the part that sort of starts to tease those opportunities that we've -- that we've identified in the page that could be improved. Either through changes that you make, or experiments that you can run.
JASON: Uh-huh.
SCOTT: That's that section there. That's sort of a summary of it. You would have to click through the "explore all" to get to them and start to see a little more detail about each of those sections. And they're broken down into three.
These are an attempt -- an early attempt at something we're moving towards, more and more each day, with WebPageTest -- which is to broaden the sort of diagnostics we test, from strictly performance-related concerns to usability, accessibility, and resilience.
So we've put them into these quickness, usability, and resilience categories to organize them as such. But you will find a lot of information that isn't performance in the strict sense in those latter two categories.
For example, under usability, we run the axe accessibility suite. So we can identify accessibility issues on a page.
And I should point out, all those observations are -- you don't even need to be logged in to see them.
So those are just part of the default test result. So really, logging in is -- is where you start to get into the, you know, the experiments feature, I guess.
But yeah. Just looking at this result -- like, if you scroll down in the metrics, you can see this is kind of the standard stuff that you would see all along in WebPageTest.
Time to first byte, start render, first contentful paint -- everything looks pretty good here.
TBT, the total blocking time, is a little high. So that's noted in red.
JASON: Uh-huh.
SCOTT: And that can be a number of things, but tends to be JavaScript related, like we were talking about. What it refers to, though, is the amount of time that the -- the main thread is blocked for interaction purposes.
So -- yeah.
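For reference, a rough sketch of what TBT measures: the sum of the portions of main-thread tasks beyond the 50 ms budget. Lab tools compute this over the whole page load; this browser-side approximation is only illustrative:

```ts
// Rough, illustrative approximation of Total Blocking Time: sum the part of
// each long task that exceeds 50 ms, the point where input starts to feel delayed.
let totalBlockingMs = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    totalBlockingMs += Math.max(0, entry.duration - 50);
  }
  console.log(`Approximate TBT so far: ${totalBlockingMs.toFixed(0)} ms`);
}).observe({ type: "longtask", buffered: true });
```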
You get that high level view there, and you can still dig --
JASON: So if everybody just ignores this one --
SCOTT: [Chuckling].
JASON: -- pretend that's not there. Look how good Learn With Jason is! Pat myself on the back. And remember, this number is not real. Just ignore that one.
SCOTT: [Chuckling] Exactly.
And of the colored metrics there, those are your Core Web Vitals. So, you know, the others in navy are metrics that we track that aren't included in the set that Google identifies as particularly critical. Right?
But you're looking good on LCP.
CLS is the layout stability metric. So that, you know, that's a big deal --
JASON: Should be zero.
SCOTT: Should be zero, but you're still very close to zero, and it's in the good range. You're not being penalized in search ranking for it.
JASON: Right.
SCOTT: Yeah. But more importantly, you know, than just search ranking, you're not loading a page that shifts all over the place as someone's trying to read it. That's what that metric tells you.
You have waterfalls here you can click through, and that tells you about the requests that contribute to how the page loads.
JASON: This lets us dive into what is really nice, where we can see ... like ...
SCOTT: You see all the requests --
JASON: -- and everything. Yeah.
SCOTT: And --
JASON: Yeah, I've got a bunch of third-party stuff -- oh! You know why? We just ran a test with the live player. Because we hit the home page. Right? So that would be --
SCOTT: Yep.
JASON: -- all the time to load the live player, load the caption thing. This is also a -- not a best-case scenario for the website. [Chuckling].
SCOTT: Got it.
Well, we could run an experiment in a little bit that tries to get that TBT down. [Chuckling].
JASON: Yeah! Yeah, yeah.
SCOTT: But yeah, let's see. If you scroll back up. Those are your requests, and then you've got -- if you go back to the -- or click that filmstrip, I guess. That would be ... yeah. That would get you into ... that's the classic filmstrip view. It's broken down into every half-second here.
But you can adjust it to 60 frames per second, whatever you'd like.
And what's cool about this view is, as you scroll it, you can see on the far left of that scrollable pane, there's a little red line with a -- a little circle at the top.
And that lines up with the same kind of line in the waterfall.
JASON: Oooooh!
SCOTT: So you can see exactly where requests are being made on a visual timeline. So yeah.
And, you know, like you already said, this is a pretty good, well-performing page. You can see the timeline's pretty -- all the requests are kicked off in a line there like that.
Nothing too staggered, which would suggest a lot of chained, you know, chained requests. Instead, it's looking pretty good.
Yeah.
So a couple of things that I noticed in that waterfall are potentially interesting, and we could confirm them in the opportunities. But you can see in the left column, there's the names of the requests, and two of them are blocking rendering, as that little indicator shows. Yeah.
JASON: Got it.
SCOTT: What that means is, there are CSS files that are requested normally, right? A link rel="stylesheet" that references a CSS file is going to cause the browser to stop what it's doing, from a rendering perspective, until it can go fetch that CSS and, you know, get it ready to present the page in a layout.
And then proceed. Right?
JASON: Uh-huh.
SCOTT: Sometimes you can avoid that by, you know, loading the CSS in different ways, or deprioritizing CSS that you don't need for that initial view.
So there's some things you can do.
But generally, you kind of want that behavior with CSS [chuckling].
JASON: Right.
SCOTT: It's not necessarily a bad thing.
JASON: If we didn't have the CSS, our cumulative layout shift would be all over the map, because the site would be unstyled and then styled.
SCOTT: You don't necessarily --
JASON: -- depending how big it is, we would also look at something like inlining it or --
SCOTT: Exactly. Yeah. And that's an example of one of the experiments you could decide to try.
From here, typically I would -- I would go ... well, you can get to it through there. Yeah. "Opportunities and experiments" is the new section we launched in the last couple of weeks. And it's broken down by quickness, usability, and resilience, and each has a series of checks that we ran.
And when they're red, that means that we've identified an opportunity. And we either suggest -- like, in the first one, the time to first byte was slow, so we suggested a tip.
You know, you could try Server-Timing headers to see what's taking a while on your server side. For example.
Which are surfaced in WebPageTest and kind of useful in that way. But we're not really changing anything with those. We're just suggesting things.
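For illustration, a minimal sketch of emitting that header from a back end. The handler shape and helper functions are assumptions, not WebPageTest code; the Server-Timing header itself is the standard one that WebPageTest surfaces:

```ts
// Sketch: time a back-end step and report it via the standard Server-Timing
// header. queryDatabase() and render() stand in for real application code.
async function queryDatabase(): Promise<string[]> {
  return ["episode 1", "episode 2"]; // placeholder data
}
function render(items: string[]): string {
  return `<ul>${items.map((item) => `<li>${item}</li>`).join("")}</ul>`;
}

export default async function handler(_request: Request): Promise<Response> {
  const start = performance.now();
  const data = await queryDatabase();
  const dbMs = performance.now() - start;
  return new Response(render(data), {
    headers: {
      "content-type": "text/html; charset=utf-8",
      "Server-Timing": `db;desc="Database";dur=${dbMs.toFixed(1)}`,
    },
  });
}
```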
These that you're looking at now are experiments.
JASON: Right.
SCOTT: In that first one, we've got a couple -- if you spin it down, actually, you can see which one it applies to. If there's more than one, there would be check boxes.
You've just got that one render blocking script. And we looked at the waterfall. It didn't look like that script was necessarily contributing to the first paint. So it may not be blocking at a meaningful time. But if you wanted to test it you could just click "run this experiment," and that would add it to your cart, so to speak.
JASON: Now, I can share this link here. If I share it in the chat, everybody can check it out.
SCOTT: Yep.
JASON: If anybody wants to explore these results on your own.
SCOTT: Yeah, I mean, the only difference between what they'll see and what you're seeing is the experiments will only be available if you're a pro user.
JASON: Got it.
SCOTT: The first one is available if you're logged in.
JASON: Got it.
So we got a request from "malcaptain" to see what the CSS -- that makes sense.
SCOTT: Sure.
JASON: Let's give it a shot.
SCOTT: And we do have a change in the queue to put the file size next to each of those, but it's worth noting: any time you inline a file, it's going to bloat the size -- or increase the size -- of the HTML. And it's not necessarily going to be better. Right?
It might slow down the page. But it's useful to know.
So I would suggest, when you're running experiments like inlining, maybe do them one at a time -- so just that one, but not, like, you know, further experiments down the page. Then you could just hit "go" and see, okay, what does inlining do.
JASON: Yeah. There's some things that I, for example, maybe what I want to do, I know you're not going to hit the search when you first --
SCOTT: Ah. Perfect!
JASON: You got to -- [indistinct] -- main CSS and let's see if we can solve the whole problem without bloating everything.
SCOTT: Let's try it.
JASON: Do you want to add anything else or should I rerun this?
SCOTT: No, that looks good. At the bottom it's going to give you the number of experiment runs that you would like to run.
Generally the more that you run, the better likelihood that you'll get a reliable comparison.
Because we're dealing with real network conditions. Sometimes you'll get a slow test result. And we like to be able to kick those out, from the -- the end comparison.
So WebPageTest will do that by default when you run more than one test.
It'll -- it'll keep the median run.
JASON: Yeah. And so do you have, like, in -- in your ... you know, standard bag of tricks, are you running -- is three where you go? Do you run five? Do you -- more? How many tests --
SCOTT: Yeah. It really depends.
So first of all, just to mention while it's loading what you're looking at here.
We've got the experiment run, and the control run --
JASON: [Cough].
SCOTT: -- and the experiment run will apply the two tasks that it says are being applied. So they're both applied to that experiment run.
So inlining, and async.
And the control run will go through our experiment proxy anyway.
JASON: Yeah.
SCOTT: So we can compare both of them that went through the edge function, and feel good about the performance being accurate to compare.
Okay. So yeah, to go back to your question.
The number of experiment runs that I like to do really varies on what I'm testing. If I'm testing -- here's a good one.
Say I have a hero image that starts out with no dimensions. Whenever the browser loads it, it suddenly fills the space and pushes text down beneath it, because there's no box reserved. That's the default behavior for an image on the web. And there are ways to avoid that.
For example, in recent years, browsers have let us start adding back the width and height attributes on an image to tell the browser to reserve an aspect-ratio space for the image -- and it will, if you put those attributes on.
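Purely illustrative, the before-and-after markup Scott is describing (file name and dimensions made up):

```ts
// With explicit width/height, the browser derives an aspect ratio and
// reserves the image's box before it loads, so nothing gets pushed around.
const before = `<img src="/hero.jpg" alt="Hero">`;                           // layout shifts on load
const after  = `<img src="/hero.jpg" alt="Hero" width="1200" height="600">`; // box reserved up front
```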
Now, if I were running a test like that, I would only need one. I don't care if it's ... you know, an outlier and it took 40 seconds to run, even though I have a fast site. I'm just looking for, did it reserve a box until the image started loading, or did it not.
In that case, I just need one.
But if I really want to see, you know, a comparison of an optimization that deals with speed, for example, and, you know, I want to factor out network reliability, you know, differences that just happen, then I would do more runs. And I think, you know, three is a good minimum. It's kind of, you know, based on how much patience you have, because it's going to take a little while, as you see. For each experiment we run one experiment run, one control run.
So it takes a little while.
JASON: Just to recap as we're looking at this, for anybody who wasn't here earlier, just tuning in now, the way this is working is, we are loading the Learn With Jason site through an edge function.
And that's happening in both the experiment and the control.
In the control, all we're doing is loading it and running the test there. And in the experiment, you're doing HTML transformation to inline that CSS to defer to the other CSS. Just whatever HTML transformation needs to be done to make that work.
And that's -- yeah. That's just -- it's really, really cool.
SCOTT: Yeah, and I should mention that "load CSS asynchronously" might sound strange to anyone who's familiar with standard HTML -- they might say, "How are you doing that?" Because with JavaScript, we have a defer attribute to make a script load in parallel but execute in the same order it's defined.
And then we have an async attribute. But we don't have these things with stylesheets.
What we're doing here is this classic kind of workaround hack -- [laughing] -- that people have been using for a long time to make a CSS file load asynchronously: first setting its media attribute to print, and then, on load, setting it to screen so that it applies. And by nature --
JASON: Oh!
SCOTT: -- a print stylesheet, a browser will load it in -- you know, in parallel, so it won't block rendering for that stylesheet. It's kind of a workaround to get low-priority stylesheets to load that way.
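A minimal sketch of that classic pattern -- essentially the loadCSS trick -- with the stylesheet path as an assumption:

```ts
// Sketch: request a stylesheet as media="print" so the browser fetches it in
// parallel at low priority without blocking render, then flip it on arrival.
// Equivalent markup:
//   <link rel="stylesheet" href="/non-critical.css" media="print" onload="this.media='all'">
const link = document.createElement("link");
link.rel = "stylesheet";
link.href = "/non-critical.css"; // assumed path
link.media = "print";            // loads without blocking rendering
link.onload = () => { link.media = "all"; }; // apply on screen once it arrives
document.head.appendChild(link);
```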
And we look at the result, and it looks like we've got unstyled content.
JASON: A little bit of weirdness.
Looking at this, the -- the search text, I think it is, is doing something weird. But if it looks like it's making this site faster, which is interesting.
SCOTT: Yeah.
JASON: We're getting on screen in a second earlier --
SCOTT: Yeah, so --
JASON: -- weird.
SCOTT: Like we predicted, changing the way the CSS loads is not necessarily what you want to do. But if you get something out of the critical path, it's going to let the page show something earlier. Right? And in this case, it showed content without styles, which isn't necessarily a keeper -- [chuckling].
JASON: It's interesting, because it, like -- it loaded the styles, but it looks like, when the async load kicked in, it somehow deleted the other styles or overrode them somehow.
My guess is this might be -- all my CSS is processed, the site's built in Remix, and the CSS might have something happening that when we pull it out, it's somehow no longer referencing the things it's supposed to be, or who knows how -- how this might have actually happened.
SCOTT: Yeah. That's a good point, because a lot of ... well, several experiments, anyway, you know, are designed to work best against sort of the default way that you would reference a stylesheet. For example, React sites that you run through this will sometimes give you a different result than you'd expect, because they, you know, embed styles -- with styled-components, for example -- in a way that's not easy to retrieve, you know, looking at the DOM. Just the way that they're embedded in there, they're not retrieved in the same way, I guess, when we run the experiment. Those are all things we're trying to improve.
The more sites that you throw at this tool, and the community runs through it, the more we're, you know, noticing cases that we need to, you know, address a little better.
JASON: Yeah. I have a working theory here.
I have a suspicion that this is due to hydration.
SCOTT: Hmm!
JASON: Because we would have, by running this experiment, what is in the HTML wouldn't match the -- the tracked version in the JavaScript anymore.
So it would be like, "Oh, I'm missing that node. Let me just delete it," or something.
SCOTT: Yeah. So any time the JavaScript is heavily involved in generating -- or, you know, manipulating the styles, it's going to be a little harder to apply these kind of experiments.
But you still got a little useful information here. Just by deferring -- getting those links out of the way. The page showed a lot sooner. Right?
JASON: Yeah. And so, you know, taking that to a more, like, practical -- now that I know this, I could go in and look at, like -- it looked like this is the only part of the search stylesheet that I would need early, so I can move this button style into the main stylesheet, inline that, and this part is all on interaction.
So I could potentially make sure this only loads when you actually hit the search button, which would further decrease the stuff you load, unless you actually press command-K or click the search button to load that experience.
So I have -- you know, without having to write any code, I now have direction on this, and optimization for the site.
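A hedged sketch of that idea -- fetching the search stylesheet only when someone invokes search. The selectors, path, and command-K binding are assumptions about the site:

```ts
// Sketch: defer the search stylesheet until search is actually invoked.
let searchLoaded = false;
function loadSearchStyles() {
  if (searchLoaded) return;
  searchLoaded = true;
  const link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = "/styles/search.css"; // assumed path
  document.head.appendChild(link);
}
document.addEventListener("keydown", (event) => {
  if ((event.metaKey || event.ctrlKey) && event.key === "k") loadSearchStyles();
});
document.querySelector(".search-button")?.addEventListener("click", loadSearchStyles);
```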
SCOTT: Right. Yeah.
So it's useful to know. And sometimes the takeaway is, I'm using technology that's going to be difficult to modify. Right?
Like in your case, it could be a little harder to make the optimization, if this was a successful experiment.
JASON: Right.
SCOTT: But it gives you information at least to be able to act on it.
JASON: And so -- my personal site is built with Eleventy, so it's got less of that, and I know you have one that's your demo site that I want to pull up and show.
SCOTT: Oh, yeah! [Chuckling].
JASON: This one here is kind of ... you intentionally made this bad.
SCOTT: Yeah, it's sort of like -- sort of like a unit test suite, almost. [Laughing]. Of, you know, performance antipatterns that -- that will cause every diagnostic in our opportunities page to fail.
JASON: Should I run one of these?
SCOTT: Sure! Yep.
JASON: I'm going to do it. I'm going to hit this -- do we want to change any of the setup? Should I do one for desktop?
SCOTT: Sure. That makes sense.
JASON: I'm going to do desktop. We're going to start the run.
Off to the races.
SCOTT: Yeah.
So -- yeah, that one has some -- some identifiable bottlenecks right from the start. You will see that will show up. We're referencing a bunch of scripts in the head of the page that are blocking rendering.
So those could be deferred.
Of course, you know, we always defer to the developer, yourself, to know if they can be deferred or not. Right?
Like you're familiar with that list of scripts, and maybe one of them is very important to rendering the content right on the top of the page. Maybe it won't apply to you, but you at least have that information.
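(Editor's note: a quick illustration of the deferral Scott describes; the file name is hypothetical:)

```html
<!-- In the head, this blocks rendering: the parser stops until the
     script downloads and executes. -->
<script src="/js/widgets.js"></script>

<!-- Deferred, it downloads in parallel and runs only after the HTML
     is parsed, so it no longer holds up first render. -->
<script src="/js/widgets.js" defer></script>
```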
Yeah. The metrics are a little different. We've got a higher Largest Contentful Paint, which suggests something's up with that image of the dog -- it's loading too slowly. You can see it coming in there.
Starting at around three and a half seconds, finishes at the highlighted red there, five and a half.
So something's wrong with that. We could look at that one, if you want. There's going to be an experiment related to that. And then in the waterfall, yeah, there's some blocking resources here. There's some redirects. There's some 404s highlighted in red.
Lots of stuff. [Chuckling].
JASON: Yeah, yeah!
Okay.
What --
SCOTT: Yeah.
JASON: What's the -- I guess, if you were going to break this down, where would you start with this? I'm going to go to opportunities and experiments.
SCOTT: Or if you started at the performance summary, you would see some of the metrics, and, you know, start to formulate a guess on the gaps. Like, for example, my First Contentful Paint here is pretty good. Less than a second. Very good. Right? On cable.
But LCP is high.
So I'm not so concerned about how quickly the HTML was delivered up front. That seems to be pretty good. But some delay is occurring between that moment and when the -- what I know to be the largest content, which is that image. That's not showing up as early as I'd like. Right?
And again, this is on the desktop connection. So the problem would dramatically worsen if you were on 4G, 3G connection, something like that.
JASON: Right. Because this particular test, running on cable, that's -- this is like best-case scenario.
SCOTT: Yep.
So -- yeah, so from there, I would say, okay, let's see what's up with the LCP. I might go to the web vitals page. So there's a -- under the one -- go up a little bit.
That menu. The web vitals page is going to highlight -- if you click the ... well, you've got it right here. It'll highlight in green --
JASON: Oh, nice!
SCOTT: Yeah, that's my largest content. It's highlighted there. I can get information on what that -- the markup is that's driving it. It might not be an image. It could be -- it could be a background image for one thing. It could be text.
You know, sometimes if it's text, it's like font-related. You know, custom fonts taking a while. All sorts of different things.
So in this case, it's an image. That's pretty straightforward. And you can see the markup there, and right off the bat, we've got an antipattern: loading equals lazy. Lazy loading images is kind of one of those things that you classically think of as a good idea for performance. But it's only good for images that you want to load slowly. [Chuckling].
And that's sort of a weird concept, but, you know, you want lazy loading on images that are likely to be unimportant.
So images that are further down the page. Right?
JASON: So to -- to rephrase that as a heuristic for people who just, like me, kind of have their jaw drop when you're like, don't use lazy loading, what? I thought I was supposed to lazy load all the images.
The heuristic is: if it's above the fold, don't lazy load.
SCOTT: Exactly. And it varies, depending on which device size you're looking at. But generally, if an image is likely to be in the first, you know, screen full of content for the user, you don't want to lazy load. And the reason for that is, when you put that attribute on an image, it tells the browser to wait until the page renders to decide if it should fetch that image at a high priority or not.
If that image is likely to be in the rendered viewport at load, then it'll fetch it right away.
But it still has to wait for the page to visibly render before it goes out and gets it.
And that's much later than an image would normally be requested.
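(Editor's note: the heuristic in markup form -- image paths and dimensions are made up for illustration:)

```html
<!-- Above the fold (e.g. a hero image): no loading attribute, so the
     browser can request it as soon as it's discovered in the HTML. -->
<img src="/images/dog-hero.jpg" alt="A dog" width="1200" height="800">

<!-- Far down the page: lazy loading defers the fetch until the image
     nears the viewport, so it doesn't compete with critical resources. -->
<img src="/images/gallery-42.jpg" alt="Gallery photo"
     width="600" height="400" loading="lazy">
```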
JASON: Yeah, and so we're, like, here, we're actually -- you know, here's the site itself, and then we've got the -- the CSS here.
So theoretically --
SCOTT: Yep. Ordinarily, you would see that image kicked off at around 0.2 seconds. That is a very fast result.
So -- [overlapping voices] -- green line there.
That green line is when First Contentful Paint started. So the browser rendered the page, and it said, "Ope, this image is in the viewport. Let's get it."
That's what lazy loading does, and it does that as you scroll down the page or resize your browser. Images come into the viewport and it says, ope, I need to get this one.
It's good if you're using images that are further down the page. They're not hogging resources early on in page loading.
But in this case, we really want it to hog -- [chuckling] -- resources a little better than it is.
JASON: Right.
SCOTT: So right away, we've identified our problem here. That request is kicking off way too late, and we even saw the reason. Because we happen to know -- but if we didn't know, the opportunities page would tell us that.
So -- yeah, if we go to there.
We can scroll down to ... ahh ... let's see. LCP is high.
So this will -- this will be highlighted any time it's over two and a half seconds. We don't vary that timing for the kind of device. We just say, two and a half seconds is a while. [Chuckling].
For any user. Expectations have changed with the use of apps. And I think, you know, more and more what we're trying to standardize on is "is it fast or not." And I think more and more users just have an idea of that in their head, right?
So, you know, we flag it if it's over two and a half seconds, and it says, you know, this is the image that we're dealing with in this case. It was an image. And we can do a couple of things with it.
So we can preload it, which would put a link in the head of the page that forcefully fetches that image at a high priority early on, potentially blocking other requests on its way to go do that.
Or we can add a priority hint, which is pretty neat. What that does is add this attribute, fetchpriority high -- something that so far is only supported in Chromium browsers, Edge and Chrome, but others ignore it, so it's harmless. It will add that to the image itself, and you can add them to a variety of different things like scripts or stylesheets as well.
So you can go ahead and click that one.
The other thing that we know we're going to want to apply here is this next one. Images within the initial viewport are being lazy loaded. That's a big deal.
That lines up with the problem that we saw. So what this will do is remove that loading equals lazy attribute from the image. So you can run those two, and, you know -- we don't have to wait for it if you don't want to.
But what it would do is cause the browser to kick off that request very early because of the priority hint. You see the high priority, when it discovers the image, and ope, this one needs to go right away.
And then the lazy loading attribute would not fight with it. [Chuckling].
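(Editor's note: combined, the two experiments amount to markup roughly like this; the image path is hypothetical:)

```html
<!-- In the head: preload forces an early, high-priority fetch of the
     hero image, before the browser would otherwise discover it. -->
<link rel="preload" as="image" href="/images/dog-hero.jpg">

<!-- On the image: loading="lazy" removed so it no longer waits for
     render, plus a priority hint. fetchpriority is Chromium-only
     (Chrome, Edge) so far; other browsers ignore it, so it's harmless. -->
<img src="/images/dog-hero.jpg" alt="A dog" width="1200" height="800"
     fetchpriority="high">
```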
JASON: Yeah! So, okay. And so scrolling through here, looks like we got other things. We could load fonts so they are visible. That helps with content shift, or Cumulative Layout Shift.
SCOTT: Yep.
JASON: Third-party hosts.
SCOTT: Like fonts that are on a third party, you know, it's going to take a little longer to make a connection to that domain. And then get the font. So you can try self-hosting it. WebPageTest just mimics how it would act if it was on your own server.
Yeah.
So a lot of things dealing with fonts there.
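(Editor's note: a minimal sketch of the self-hosting idea Scott mentions, with a hypothetical font name and path:)

```html
<style>
  /* Self-hosted font: no extra DNS/TLS connection to a third-party
     font host before the file can even start downloading. */
  @font-face {
    font-family: "BodyFont";
    src: url("/fonts/bodyfont.woff2") format("woff2");
    /* swap shows fallback text immediately, then swaps in the web
       font when it arrives, so the font never hides the content. */
    font-display: swap;
  }
  body { font-family: "BodyFont", sans-serif; }
</style>
```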
One thing that I did want to get to that we didn't talk about yet, if you scroll all the way past the opportunities, to the bottom, there's that Create section.
JASON: Oh, yeah.
SCOTT: That's the power tool feature that, if you spin that up -- this one is really neat.
Because we're not recommending any changes from here down. It's just kind of a playground for you to apply anything that you want to. If you wanted to insert some arbitrary HTML or script.
JASON: Yeah, so we can -- so we can add something to the end of the body, we could do like ...
Do something like that. Throw something in.
And we can also do something like this, where we do a web page --
SCOTT: Exactly. The classic browser extension.
JASON: Exactly.
SCOTT: [Chuckling] And that'll apply either one off, or you could run a regular expression, and match, you know, all the instances in the page and change them.
JASON: Let's do it.
SCOTT: My ... my colleague Tim at WebPageTest yesterday found a really cool one, where this page was ... had preloaded -- I think it was almost 400 CSS files accidentally. And it was just --
JASON: Oof.
SCOTT: And I can't remember how long it was taking. A really long time before it could display the page. And he was able to run a find and replace that found the link for preloads and delete them from the page, and sped it up dramatically. And that's in his Twitter account if you want to check it out.
But yeah. Really useful.
So yeah, you've run a number of experiments there.
JASON: Yeah, so we've got -- we're running just a few. We're going to do the priority hint and the lazy loading fix for the images. And then we added some custom HTML, and we did the find and replace just to kind of show those power tools now.
These are not really useful ways of doing it, but we can dig in a little bit more and, you know, like I said -- go find this tweet from Tim, actually.
SCOTT: Yeah. Actually, Henri just dropped it in the chat.
JASON: Perfect. Always rely on the ...
SCOTT: There it is. Either way. [Chuckling]. We've been trying to highlight some interesting experiments each day.
JASON: Double-drop it in here, because I have a bot that pulls all the links I post for show notes.
SCOTT: Yeah, this one is pretty cool. If you click through to that one, it was like a six-second faster first paint, just by removing preloads. And this was on a live site, so it's not just the -- the mocked-up site, but a good test case. If you want to see the experiment that was run, you can open that little menu there, and the find and replace text one will show you ...
JASON: Oh, nice.
SCOTT: Yep.
So just a little regular expression that finds the preload links, and the field that shows the replacement -- it was blank. [Chuckling].
So yeah. It just took all those out and you see the page paint is dramatically faster.
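(Editor's note: a hypothetical find-and-replace along those lines -- Tim's exact pattern may differ:)

```html
<!-- Find (regular expression):  <link rel="preload"[^>]*>          -->
<!-- Replace with: (left blank, so every match is simply deleted)   -->

<!-- Before: hundreds of lines like this in the head, each one
     forcing an early fetch that competes with critical resources... -->
<link rel="preload" href="/css/chunk-0001.css" as="style">
<link rel="preload" href="/css/chunk-0002.css" as="style">
<!-- After the replace they're gone, and first paint happens sooner. -->
```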
JASON: Yeah.
SCOTT: So we should send that off to them.
JASON: Yeah!
SCOTT: But yeah. So ... that's kind of the essence of the -- of the tool.
JASON: It's -- this is really exciting stuff. And I hope the folks in the chat got their gears turning about ways that this is just going to give you an automatic upgrade on the way that you measure perf. You don't necessarily have to do it manually. You can -- you can just dive right in, and ... try it. Right? You don't have to go and code it yourself. You don't have to do really anything hyperspecial. Just click a button and see if it's better. Which is --
SCOTT: -- you don't have to necessarily know what to test. Or need to know what to test.
JASON: Yeah!
So let me ask, because we're running up on the end of the hour here.
Where should people go if they want to learn more?
SCOTT: Yeah. So the -- on Twitter, the @RealWebPageTest account is where you can keep up-to-date on WebPageTest stuff.
There's also WebPageTest.org, which you've been showing a great deal. You can get a free account on there to start running at least one of the experiments, and go from there, if you want to upgrade.
So that's -- that's the best place to start.
Beyond that, you know, if you want to see what I'm up to, Scott Jehl -- [chuckling] -- on Twitter. You can find me there.
Oh, and I should mention that each week we have our own livestream on Twitch that is usually an audit, kind of like we just did, where we just open up real sites on the web and run experiments on them, and -- so yeah, you can check out our Twitch feed for that.
WebPageTest Live I believe we call it.
JASON: WebPageTest Live. Okay.
SCOTT: So ...
JASON: Let's see.
SCOTT: I'm sure Henri will drop the link, either way. Looks like he's in the chat.
JASON: WebPageTest -- just straight-up WebPageTest. Let me drop this in.
And let's see. Am I following? I think so. Yes, I am. Okay. Great. Yeah, so this is -- this is -- this has been a lot of fun. Let me see if we got the results back. We did!
So this gave us -- we got a little bit of these, but I think that's -- this one's kind of to be expected. Where we knew it was going to block a little bit more, because we're not doing the lazy loading.
SCOTT: Yep. Yeah, so what did we do? Oh, right. The priority hint. The ... yeah.
JASON: Yeah, we did the priority hint --
SCOTT: We were looking -- oh, the find and replace text. Yeah.
[Laughter.]
I think you added it to the end of the body. So it probably wouldn't show up in that -- [chuckling] -- oh! Did -- oh, perfect. Web page to butts. There we go.
JASON: [Laughing].
SCOTT: And really, what else are we here for?
JASON: Exactly. But we did get this -- this big one, the First Contentful Paint goes way down. The Largest Contentful Paint went way down. And we get a better experience overall.
SCOTT: We were trying to optimize the image, and we didn't get the LCP loading faster, and that's probably because we didn't also defer the scripts that came before it.
So --
JASON: Hmm!
SCOTT: -- a couple of things that you'd probably want to combine in one experiment. You know, you can test some things in isolation, but since, you know, these metrics are additive, often you need to combine them in smart ways to get the -- you know, the result we were looking for.
JASON: Right.
SCOTT: So I think I would go back and defer those scripts, get them out of the waterfall, you know, that kind of thing.
JASON: Yeah.
Cool! Well, Scott, we are out of time.
So I'm going to take this opportunity to shout-out one more time to the captioning. We've had Kathryn here today with White Coat Captioning, taking down all of these words. Thank you very much for that.
SCOTT: That's great, thank you!
JASON: And it's made possible by our sponsors, Netlify, Nx, and Backlight, making it accessible to more people. And we've got so many good things coming up. Next week we will learn Blender -- I'm excited about that -- 3D rendering with Prince. Page building, and we will talk about edge computing, and if you're interested in how WebPageTest is using edge functions, don't miss that episode. We're going to talk a lot about just what it is. Like, why it's different from serverless versus traditional hosting, and other things.
So that's going to be a really, really fun episode.
With that, I think --
SCOTT: That's awesome. Thank you so much for having me! This was fun.
JASON: That was a blast! Any parting words before we call this a success?
SCOTT: Um, no, I think we covered it. Thank you so much.
JASON: All right, Scott. Thank you so much for hanging out with us. We're going to find somebody to raid. We will see you all next time!
SCOTT: All right. Take care. Thanks, Jason!
This text is being provided in a rough draft format. Live captioning, or Communication Access Realtime Translation (CART), is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.
Learn With Jason is made possible by our sponsors: