The ultimate guide to HTTP resource prioritization
How to make sure your data arrives at the browser in the optimal order
- Track: Web Performance devroom
- Room: H.1309 (Van Rijn)
- Day: Saturday
- Start: 17:00
- End: 17:35
Come learn how browsers try to guess the order in which web page resources should be loaded, and how servers often (accidentally) use that information to make your web page slower instead. We look at what resource prioritization is, how it is often implemented poorly in modern HTTP/2 stacks, and how we are trying to fix it in QUIC and HTTP/3. We use clear visualizations and images to explain the nuances of this complex topic, and also muse a bit on whether prioritization actually has that large an impact on web performance.
HTTP/2 started the move from multiple parallel TCP connections to a single underlying pipe. QUIC and HTTP/3 continue that trend. While this reduces the connection overhead and lets congestion controllers do their work, it also means we no longer send data in a truly parallel fashion. As such, we need to be careful about how exactly we send our resource data, as some files are more important than others to achieve good web performance.
To help regulate this, HTTP/2 introduced a complex prioritization mechanism. Browsers use elaborate heuristics to estimate the importance of each resource and, with varying success, communicate their preferences to the server. It has, however, become clear that this scheme does not work well in practice. Between server implementation bugs, questionable browser choices, and bufferbloat in caches and network setups, HTTP/2 prioritization is sometimes more of a liability than a useful feature.
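As a rough illustration of what that mechanism looks like: each HTTP/2 stream declares a parent and a weight, and siblings are meant to split their parent's spare bandwidth proportionally to those weights. The TypeScript sketch below models only that weight-splitting rule (ignoring exclusive dependencies and the fact that active parents are served first); all stream IDs and weights are made up for the example.

```typescript
// Simplified model of the HTTP/2 priority tree (RFC 7540): each stream names a
// parent and a weight, and siblings split their parent's spare bandwidth in
// proportion to those weights. Stream IDs and weights below are invented.
interface StreamNode {
  id: number;
  weight: number; // 1..256 on the wire
  children: StreamNode[];
}

// Compute each stream's share of the connection, assuming every parent is idle
// so its children get to divide its full share (a simplification of the spec).
function bandwidthShares(
  node: StreamNode,
  share = 1,
  out = new Map<number, number>()
): Map<number, number> {
  out.set(node.id, share);
  const totalWeight = node.children.reduce((sum, c) => sum + c.weight, 0);
  for (const child of node.children) {
    bandwidthShares(child, share * (child.weight / totalWeight), out);
  }
  return out;
}

// Example: render-blocking CSS (weight 200) vs. an image (weight 50) under the same parent.
const tree: StreamNode = {
  id: 1,
  weight: 256,
  children: [
    { id: 3, weight: 200, children: [] }, // e.g. critical CSS
    { id: 5, weight: 50, children: [] },  // e.g. a hero image
  ],
};
console.log(bandwidthShares(tree)); // stream 3 gets 80% of the parent's share, stream 5 gets 20%
```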
For this reason, the feature is being completely reworked in HTTP/3 over QUIC. That, however, opens a whole new can of worms. One of QUIC's main features for improving performance over TCP is that it removes "head-of-line blocking": if one resource suffers packet loss, others can still make progress. That is... if there are other resources in progress! What performs well on lossy links turns out to be exactly what you want to avoid on high-speed connections.
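To give a feel for that trade-off, here is a deliberately naive TypeScript simulation (file names and sizes invented, one "chunk" sent per time step) comparing sequential and round-robin sending of three equally sized resources over a single connection.

```typescript
// Toy comparison of two ways to multiplex three equally sized resources over one
// connection (names and sizes invented). Round-robin keeps several streams in
// flight, which is what lets QUIC's removal of head-of-line blocking pay off;
// sequential sending gets the first (hopefully most important) resource done sooner.
type Resource = { name: string; remaining: number };

function simulate(resources: Resource[], roundRobin: boolean): string[] {
  const pending = resources.map((r) => ({ ...r }));
  const finished: string[] = [];
  let time = 0;
  let turn = 0;
  while (pending.length > 0) {
    // Round-robin cycles over the open streams; sequential always serves the first one.
    const idx = roundRobin ? turn % pending.length : 0;
    const r = pending[idx];
    r.remaining -= 1; // send one "chunk" of this resource
    time += 1;
    if (r.remaining <= 0) {
      finished.push(`${r.name} done at t=${time}`);
      pending.splice(idx, 1);
    }
    turn++;
  }
  return finished;
}

const files: Resource[] = [
  { name: "script.js", remaining: 3 },
  { name: "style.css", remaining: 3 },
  { name: "hero.jpg", remaining: 3 },
];
console.log(simulate(files, false)); // sequential: script.js done at t=3, style.css at t=6, hero.jpg at t=9
console.log(simulate(files, true));  // round-robin: nothing finishes before t=7; all three straggle in at t=7..9
```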
Along the way, we also discuss the options web developers already have to influence the browser's heuristics and the server's behaviour, such as resource hints (e.g., preload) and the upcoming priority hints.
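As a concrete example, a preload hint can be declared in markup or injected from script, as in the minimal TypeScript sketch below. The file name is made up, and because priority hints are still being standardized, the attribute name (shown here as the draft's `importance`) should be treated as an assumption rather than a stable API.

```typescript
// Minimal sketch: adding a preload resource hint from script instead of markup.
// Equivalent to <link rel="preload" href="/css/critical.css" as="style"> in the HTML.
// The file name is invented; the priority-hint attribute name (here "importance",
// per the draft proposal) may differ in current browsers.
const hint = document.createElement("link");
hint.rel = "preload";                    // ask the browser to fetch this early
hint.as = "style";                       // "as" drives the priority the browser assigns
hint.href = "/css/critical.css";
hint.setAttribute("importance", "high"); // draft priority hint; harmless if unsupported
document.head.appendChild(hint);
```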
Finally, we ask how we got into this sorry state of affairs to begin with: if people made so many mistakes implementing HTTP/2 prioritization, why did hardly anyone notice until three years later? Could it be that its impact on web performance is actually limited? Or have we just not seen its full potential yet?
We make this complex topic approachable with plenty of visualizations and animations. The content is mainly based on our own research (and papers) and that of others in the web community, such as Patrick Meenan and Andy Davies.
Speakers
Robin Marx