Conversation
This was added when considering support for legacy formats with nested hashes. Since we now target only flattened keys (where values are either strings or arrays of strings), use `flatten` instead to simplify.
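As a hedged sketch of what that simplification could look like (the hash contents below are invented stand-ins; the real data comes from the locale files), `Array#flatten` over the values handles both plain strings and arrays of strings without any recursive traversal:

```ruby
# Illustrative only: stand-in for flattened locale data, where each value is
# either a string or an array of strings.
translations = {
  'pages.greeting' => 'Hello',
  'pages.options' => ['¿Qué tal?', '«Bienvenue»'],
}

# values is a mix of strings and arrays; flatten unwraps the arrays so we can
# join everything into one string and collect its unique characters.
characters = translations.values.flatten.join.chars.uniq.sort
```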
app/assets/fonts/glyphs.txt

@@ -0,0 +1 @@
!#%&(),-./0123456789:;?@ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz «»¿ÀÁÈÉÊÎÓÚàáâãçèéêëíîïñóôùúû ‑—‘’“”…‹中体文简
Is there a way we could do this dynamically at deploy time? This seems like an annoying thing to fail the build on
I do think it's burdensome, yes. But realistically I also think this is something that will almost never happen.
Chatting with @mitchellhenke offline about this, one consideration is if we could account for locale data from Rails and other third-party dependencies. Some options to consider:
Prompted by #10655 (comment), I was curious what characters aren't included anymore, to get a sense of how likely it'd be for one of them to be reintroduced in the future. These are the characters:

Edit: After 1f5f81c, this is reduced further to just these characters:

One interesting observation I made based off this is that our current fonts likely don't include the smart-quotes
changelog: Internal, Performance, Optimize size of fonts to include only content character data
docs/frontend.md

1. [Download Public Sans](https://public-sans.digital.gov/) and extract it to your project's `tmp/` directory
2. Install [glyphhanger](https://github.com/zachleat/glyphhanger) and its dependencies:
   1. `npm install -g glyphhanger`
   2. `pip install fonttools brotli zopfli`
should we add a script that wraps this with something like a virtualenv setup?
Similar to my last comment, at this stage I'm not sure it's worth investing too much into the apparatus around updating the fonts, assuming that this should be an exceedingly rare event. If it happens more often than I'm expecting (like more often than once every year or two), then I could see that as a future enhancement worth considering.
1. Avoid extra dependency
2. Faster to run, since we're not creating file formats we don't need
```ruby
I18n.backend.eager_load!

data = I18n.backend.translations.slice(*I18n.available_locales - excluded_locales)
```
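For anyone unfamiliar with the pattern above, here's a self-contained sketch using plain hashes in place of the real `I18n` state (the locale values here are invented for illustration):

```ruby
# Stand-ins for I18n.backend.translations and the locale lists; the real
# script reads these from the I18n backend after eager-loading.
translations = { en: { hello: 'Hello' }, es: { hello: 'Hola' }, zz: { hello: 'ZZ' } }
available_locales = %i[en es zz]
excluded_locales = %i[zz]

# Hash#slice keeps only the listed keys; the splat expands the array
# difference (available minus excluded) into individual arguments.
data = translations.slice(*(available_locales - excluded_locales))
```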
IMO we should do something similar to this in spec/i18n_spec.rb. Interestingly, i18n-tasks doesn't load all string data eagerly like this, so we miss some strings in those specs. Having a more complete set of strings to test against could help prevent issues like the one we encountered a few weeks back.
makes sense to me to add that there
zachmargolis left a comment
The code looks good and I can't think of anything else we've missed, so it LGTM
Overall, I'm not convinced we need this PR:
- Pro: I totally understand the reduction in filesize is great for first-time loads, but I'd expect that fonts would get cached and so subsequent page loads would not be fresh.
- Con: My gut is that this process is on the border of "is the juice worth the squeeze," because if we ever do introduce a new character, it appears to be a fairly annoying manual process to fix.
- Con: If a character slips through, I'm not sure that we have any automated feedback mechanisms to help us know (since browsers would just fall back to a different font that has the glyphs, right?)
@zachmargolis Those are all totally valid considerations, and I do think we're reaching the point of diminishing returns on some of the low-hanging fruit with frontend performance. But that's also part of the reason for some of my pushback in comments, since the juice-vs-squeeze tradeoff here means this needs to be a minimal investment to be worthwhile.
True, but the same argument can (and I assume is) used to justify most apps' resource bloat, and when loading does need to occur, font loading is one of the most visually-noticeable changes and contributes to largest contentful paint metrics. This is also seeking to optimize assuming assets are loaded in parallel, where the page load is as long as its largest assets, and fonts are currently neck-and-neck with the main application stylesheet in competing for largest page asset.
True, though besides what I'd already commented about not expecting this to be common, I don't think it's that tedious a process? I also think it's different from something like vulnerable dependency updates failing builds on unrelated pull requests, since at least in this case the build should only fail as a direct result of the changes of the pull request, assuming someone is adding content including a character we don't already include in our fonts.
I think this is actually improved by this pull request. We were already subsetting fonts as of #6094, and as mentioned in #10655 (comment) we already had characters that had slipped through and weren't being included in the subset fonts. At least now we have some test coverage to compare our content with what's included in the fonts. And I think the other point about fallbacks is a reassurance that even in the worst-case scenario of the character being unavailable, the system will still render a fallback from the system fonts. So the overall risk is relatively low.
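A minimal sketch of what that kind of coverage check can look like (the glyph list and translation strings here are stand-ins, not the real `glyphs.txt` or locale data):

```ruby
require 'set'

# Stand-in data: in the real spec this would come from the subset font's
# glyph list and the eagerly-loaded translations.
glyphs = '!abcdefghijklmnopqrstuvwxyzHol '.chars.to_set
translations = { en: 'Hello!', es: 'Hola' }

# Any content character without a corresponding glyph would show up in
# `missing`, failing the build before an unsupported character ships.
content_chars = translations.values.join.chars.to_set
missing = content_chars - glyphs
```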
May work better in CI
See: #10655 (comment) Co-authored-by: Zach Margolis <zachmargolis@users.noreply.github.com>
See: 675e0de#r1608353189 Co-authored-by: Zach Margolis <zachmargolis@users.noreply.github.com>
🛠 Summary of changes
Optimizes fonts to remove unused character data.
This is similar to #6094, but more aggressively optimizes font files. With #6094, we used a predefined Latin character set to reduce the size of fonts. With this approach, we specifically target only the character glyphs that exist in localized string data.
This also removes unused `.woff` files. As of USWDS v3.4.0, only `.woff2` files are used.

For future consideration: We could optimize this even more aggressively if we created per-locale font subsets, to avoid loading French and Spanish character data for English users (and vice-versa, as applicable).
Why?
After `application.css`, these fonts are the second-largest first-party asset on https://secure.login.gov , each coming in around 21.5kb (vs. 22.5kb for `application.css`), for a combined total of 42.9kb of font data on the homepage.

Performance Impact
The numbers below reflect the combined total of `PublicSans-Regular.woff2` and `PublicSans-Bold.woff2`, the most common font combination loaded on each page (and preloaded by default).

Before: 42.9kb (21.5kb + 21.4kb)
After: 31kb (15.5kb + 15.5kb)
Diff: -11.9kb (-27.7%)
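The quoted savings can be re-derived from the per-file sizes:

```ruby
# Re-deriving the before/after numbers above from the per-file sizes (kb).
before_kb = 21.5 + 21.4                          # 42.9
after_kb  = 15.5 + 15.5                          # 31.0
diff_kb   = (before_kb - after_kb).round(1)      # 11.9
percent   = (diff_kb / before_kb * 100).round(1) # 27.7
```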
📜 Testing Plan
Verify that text is still rendered as Public Sans: