HTTP/2 has been one of my areas of interest. In fact, I’ve written a few articles about it just in the last year. In one of those articles I made this unchecked assertion:
> If the user is on HTTP/2: You’ll serve more and smaller assets. You’ll avoid stuff like image sprites, inlined CSS and scripts, and concatenated style sheets and scripts.
I wasn’t the only one to say this, and in all fairness to Rachel, she qualifies her assertion with caveats in her article. It’s not bad advice in theory, either: HTTP/2’s multiplexing ability gives us leeway to avoid bundling without suffering the ill effects of head-of-line blocking (something we’re painfully familiar with in HTTP/1 environments). Unraveling some of these HTTP/1-specific optimizations can make development easier, too. In a time when web development seems more complicated than ever, who wouldn’t appreciate a little more simplicity?
As with anything that seems simple in theory, putting something into practice can be a messy affair. As time has progressed, I’ve received great feedback from thoughtful readers on this subject that has made me re-think my unchecked assertions on what practices make the most sense for HTTP/2 environments.
The case against bundling
The debate over unbundling assets for HTTP/2 centers primarily around caching. The premise is that if you serve more (and smaller) assets instead of one giant bundle, caching efficiency for return users with primed caches will be better. Makes sense. If one small asset changes and its cache entry is invalidated, only that asset is downloaded again on the next visit. However, if even one tiny part of a bundle changes, the entire giant bundle has to be downloaded again. Not exactly optimal.
Why unbundling could be suboptimal
There are times when unraveling bundles makes sense. For instance, code splitting promotes smaller and more numerous assets that are loaded only for specific parts of a site/app. Rather than loading your site’s entire JS bundle up front, you chunk it out into smaller pieces that you load on demand. This keeps the payloads of individual pages low. It also minimizes parsing time. This is good, because excessive parsing can make for a janky and unpleasant experience as a page paints and becomes interactive, but has not yet fully loaded.
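The heart of the code-splitting pattern is a chunk that’s fetched once, on first demand, instead of shipping inside the main bundle. A tiny sketch, where `importer` stands in for the dynamic `import("./charts.js")` call a bundler such as webpack would emit (`lazy`, `loadCharts`, and the chart module are all hypothetical):

```javascript
// Sketch of on-demand loading: the "chunk" is requested on first use and
// cached for every call after that.
function lazy(importer) {
  let promise = null;
  // First call kicks off the fetch; later calls reuse the same promise.
  return () => promise || (promise = importer());
}

// Simulated heavy chunk; in real code this would be import("./charts.js").
let fetches = 0;
const loadCharts = lazy(() => {
  fetches++;
  return Promise.resolve({ renderChart: () => "chart drawn" });
});
```

Users who never visit the charts page never pay for the charts code, which is exactly the payload and parse-time win described above.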
| Filename | Uncompressed Size | Gzip (Ratio %) | Brotli (Ratio %) |
|---|---|---|---|
| jquery-ui-1.12.1.min.js | 247.72 KB | 66.47 KB (26.83%) | 55.8 KB (22.53%) |
| angular-1.6.4.min.js | 163.21 KB | 57.13 KB (35%) | 49.99 KB (30.63%) |
| react-0.14.3.min.js | 118.44 KB | 30.62 KB (25.85%) | 25.1 KB (21.19%) |
| jquery-3.2.1.min.js | 84.63 KB | 29.49 KB (34.85%) | 26.63 KB (31.45%) |
| vue-2.3.3.min.js | 77.16 KB | 28.18 KB (36.52%) | |
| zepto-1.2.0.min.js | 25.77 KB | 9.57 KB (37.14%) | |
| preact-8.1.0.min.js | 7.92 KB | 3.31 KB (41.79%) | 3.01 KB (38.01%) |
| rlite-2.0.1.min.js | 1.07 KB | 0.59 KB (55.14%) | 0.5 KB (46.73%) |
Sure, this comparison table is overkill, but it illustrates a key point: Large files, as a rule of thumb, tend to yield higher compression ratios than smaller ones. When you split a large bundle into teeny tiny chunks, you won’t get as much benefit from compression.
Side note: One astute commenter has pointed out that Firefox dev tools show that in the unsprited test, approximately 38 KB of data was transferred. That could affect how you optimize. Just something to keep in mind.
Browsers that don’t support HTTP/2
Yep, this is a thing. Opera Mini in particular seems to be a holdout in this regard, and depending on your users, this may not be an audience segment to ignore. While around 80% of people globally surf with browsers that support HTTP/2, that number declines in some corners of the world. Shy of 50% of all users in India, for example, use a browser that can communicate with HTTP/2 servers (according to caniuse, anyway). This is at least the picture for now, and support is trending upward, but we’re a long way from ubiquitous support for the protocol in browsers.
What happens when a user talks to an HTTP/2 server with a browser that doesn’t support it? The server falls back to HTTP/1. This means you’re back to the old paradigms of performance optimization. So again, do your homework. Check your analytics and see where your users are coming from. Better yet, leverage caniuse.com’s ability to analyze your analytics and see what your audience supports.
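If your server logs record the negotiated protocol, that homework can be a one-liner. A toy tally over access-log entries (the `httpVersion` field name is an assumption here; substitute whatever your log format actually records):

```javascript
// Sketch: what share of real visits actually negotiated HTTP/2?
// In Node, req.httpVersion reports "2.0" for HTTP/2 clients and "1.1"
// for clients that fell back.
function http2Share(entries) {
  if (entries.length === 0) return 0;
  const h2 = entries.filter((e) => e.httpVersion === "2.0").length;
  return h2 / entries.length;
}

// Example log slice: three of four visits spoke HTTP/2.
const share = http2Share([
  { httpVersion: "2.0" },
  { httpVersion: "2.0" },
  { httpVersion: "1.1" }, // e.g., an Opera Mini user falling back
  { httpVersion: "2.0" },
]);
console.log(share); // 0.75
```

If that share is low for your audience, HTTP/1-style bundling still matters for a meaningful chunk of your traffic.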
The reality check
Would any sane developer architect their front end code to load 223 separate SVG images? I hope not, but nothing really surprises me anymore. In all but the most complex and feature-rich applications, you’d be hard-pressed to find so much iconography. But it could make more sense for you to coalesce those icons into a sprite, load it up front, and reap the benefits of faster rendering on subsequent page navigations.
Which leads me to the inevitable conclusion: In the nooks and crannies of the web performance discipline there are no simple answers, except “do your research”. Rely on analytics to decide if bundling is a good idea for your HTTP/2-driven site. Do you have a lot of users that only go to one or two pages and leave? Maybe don’t waste your time bundling stuff. Do your users navigate deeply throughout your site and spend significant time there? Maybe bundle.
This much is clear to me: If you move your HTTP/1-optimized site to an HTTP/2 host and change nothing in your client-side architecture, it’s not going to be a big deal. So don’t trust blanket statements from some web developer writing blog posts (i.e., me). Figure out how your users behave, what optimizations make the best sense for your situation, and adjust your code accordingly. Good luck!
Check him out on Twitter: @malchata
Source: CSS-Tricks
Author: Jeremy Wagner