We are exploring what a web-wide compression dictionary might look like: a dictionary populated with common strings from web content, shipped with browsers and updated on some cadence. It would be similar to the dictionary brotli itself embeds, but much larger, and would leverage compression-dictionary-transport for the negotiation.
This could include things like common code from jQuery, React, comment blocks, etc., and would work better than shipping a specific version of a library because multiple versions could be compressed against the latest and still get significant benefit.
As part of this work, it would be helpful to gather all of the functions from all of the scripts (and HTML) and store each along with the URL it came from and the hash of the function body (and maybe do the same for top-level comment blocks and CSS style blocks). We could then group by hash, find the most common function bodies, and use those as the basis for building the dictionary.
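A minimal sketch of what that collection step might look like, as a Python script. The brace-matching extractor here is a crude stand-in I'm assuming for illustration; a real agent would use an actual JavaScript parser. The `collect` helper and its `(url, hash)` row shape are likewise hypothetical, just to show the hash-and-group-by idea.

```python
import hashlib
import re
from collections import Counter

FUNC_RE = re.compile(r"\bfunction\b[^{]*{")

def extract_function_bodies(script_text):
    """Very crude stand-in for a real JS parser: finds `function ... { ... }`
    spans by matching braces. It misses arrow functions, braces inside strings,
    etc.; an agent implementation would use a proper parser."""
    bodies = []
    for match in FUNC_RE.finditer(script_text):
        depth = 1
        for i in range(match.end(), len(script_text)):
            if script_text[i] == "{":
                depth += 1
            elif script_text[i] == "}":
                depth -= 1
                if depth == 0:
                    # Keep the full function source text, signature included.
                    bodies.append(script_text[match.start():i + 1])
                    break
    return bodies

def hash_body(body):
    """Hash the (whitespace-trimmed) function text so identical code groups together."""
    return hashlib.sha256(body.strip().encode("utf-8")).hexdigest()

def collect(pages):
    """pages: iterable of (url, script_text) pairs.
    Returns (url, hash) rows suitable for a separate table, per-hash occurrence
    counts, and one sample body per hash for later dictionary building."""
    rows, counts, sample = [], Counter(), {}
    for url, script_text in pages:
        for body in extract_function_bodies(script_text):
            h = hash_body(body)
            rows.append((url, h))
            counts[h] += 1
            sample.setdefault(h, body)
    return rows, counts, sample
```

With something along these lines, `counts.most_common(n)` gives the function bodies seen most often across pages, which would be the raw material fed into the dictionary builder.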
This is probably best done directly on the agent and streamed to a separate table rather than as a custom metric, but keeping track of it here for discussion.