[Sparse Strip]: Text API (outlines only) #883
Conversation
LaurenzV left a comment
There are definitely some things we will need to change in the long run, but I think it's a good start! :)
DJMcNab left a comment
Thanks! It's really great to get this working. It's impressive how little code this takes.
Nothing fundamental here, but no shortage of nits!
It will be good to get benchmarks in for this (not now)
```rust
let font = Font::new(Blob::new(Arc::new(ROBOTO_FONT)), 0);
let font_ref = {
    let file_ref = FileRef::new(font.data.as_ref()).unwrap();
    match file_ref {
        FileRef::Font(f) => f,
        FileRef::Collection(collection) => collection.get(font.index).unwrap(),
    }
};
let axes = font_ref.axes();
let size = 52_f32;
let font_size = skrifa::instance::Size::new(size);
let variations: Vec<(&str, f32)> = vec![];
let var_loc = axes.location(variations.as_slice());
let charmap = font_ref.charmap();
let metrics = font_ref.metrics(font_size, &var_loc);
let line_height = metrics.ascent - metrics.descent + metrics.leading;
let glyph_metrics = font_ref.glyph_metrics(font_size, &var_loc);

let mut pen_x = 0_f32;
let mut pen_y = 0_f32;

let text = "Hello, world!";

let glyphs = text
    .chars()
    .filter_map(|ch| {
        if ch == '\n' {
            pen_y += line_height;
            pen_x = 0.0;
            return None;
        }
        let gid = charmap.map(ch).unwrap_or_default();
        let advance = glyph_metrics.advance_width(gid).unwrap_or_default();
        let x = pen_x;
        pen_x += advance;
        Some(Glyph {
            id: gid.to_u32(),
            x,
            y: pen_y,
        })
    })
    .collect::<Vec<_>>();
```
I'd lean towards just using Parley here. Like yes, it's a lot more expensive, but we expect the majority of users of this to use it.
We have evidence of people copying very similar code uncritically, because text layout is not a familiar question to most people.
It would also be nice to use system fonts, but using Roboto would also be reasonable.
Yes, it makes the code longer, but it better emphasises the separation of concerns.
I'll migrate this example to use Parley. I might not get time to do it today.
Updated example to use Parley in 7fdd1af (will update to use a variable font in separate commit)
```rust
// Fill the text
ctx.glyph_run(&font)
    .normalized_coords(vec![])
```
It could be fun to really demonstrate variable font support here. The one tricky thing is that we don't have one in-repo. https://github.com/linebender/xilem/tree/main/xilem/resources/fonts/roboto_flex would be possible here (100KB)
I ran out of time this morning to finish this. I've put it on my list of things to do.
I'm also slightly concerned by the proliferation of examples here. What happens when we need to e.g. update winit; that seems like it would become very painful.
Agree! Updating winit can be a pain! After taj-p#5, I'll refactor the Vello Hybrid examples to instead use one winit example that cycles through a set number of scenes (while being able to pass flags to load a particular scene on start), similar to how the root examples/scenes crate works - WDYT @grebmeg ?
DJMcNab left a comment
A few nits, but otherwise I think this is ready.
```rust
#[derive(Clone, Debug)]
struct GlyphRun<'a> {
    /// Font for all glyphs in the run.
    pub font: Font,
```
It's not immediately obvious to me why the fields of this private struct are public. But OTOH, if clippy isn't complaining about it, that's probably my lack of understanding here.
```rust
let [a, b, c, d, _, _] = self.run.transform.as_coeffs();
if a == d && b == 0.0 && c == 0.0 {
    // TODO: Cache hinting instance.
    HintingInstance::new(&outlines, size, self.run.normalized_coords, HINTING_OPTIONS)
```
So, just to really lay this out: currently we scale glyphs post-hinting. As an MVP that's fine, but it is definitely wrong behaviour (because you're not really hinting unless you have a scale of exactly 1.0!).
We should have a comment here, probably.
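To make the check in that diff concrete, here is a self-contained sketch of what it tests. `uniform_scale` is a hypothetical helper (not a Vello function); the `[a, b, c, d, tx, ty]` array mirrors what kurbo's `Affine::as_coeffs` returns:

```rust
/// Returns `Some(scale)` if the affine coefficients `[a, b, c, d, tx, ty]`
/// describe a uniform scale plus translation (no skew or rotation) -
/// the only case where building a hinting instance at the scaled size
/// makes sense. Otherwise returns `None` (render unhinted).
fn uniform_scale(coeffs: [f64; 6]) -> Option<f64> {
    let [a, b, c, d, _, _] = coeffs;
    if a == d && b == 0.0 && c == 0.0 {
        Some(a)
    } else {
        None
    }
}

fn main() {
    // Uniform 2x scale with a translation: hinting can apply at 2x the size.
    assert_eq!(uniform_scale([2.0, 0.0, 0.0, 2.0, 10.0, 5.0]), Some(2.0));
    // 90-degree rotation (b, c non-zero): fall back to unhinted outlines.
    assert_eq!(uniform_scale([0.0, 1.0, -1.0, 0.0, 0.0, 0.0]), None);
}
```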
```rust
fn draw_text(
    ctx: &mut Scene,
    text: String,
```
```rust
builder.push_default(StyleProperty::LineHeight(1.3));
builder.push_default(StyleProperty::FontSize(32.0));

let mut layout: Layout<ColorBrush> = builder.build(&text);
```
In theory, we should be caching the layout. I don't want to block this PR on that, though
```rust
let normalized_coords = run
    .normalized_coords()
    .iter()
    .map(|coord| NormalizedCoord::from_bits(*coord))
```
This is where we'd use `bytemuck::cast_slice`, I think.
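The suggestion works because `NormalizedCoord` is a thin wrapper over `i16` bits, so a whole slice can be reinterpreted in place instead of mapped element by element. A self-contained sketch with a stand-in `Coord` type (not skrifa's actual type) showing the two routes agree:

```rust
/// Stand-in for skrifa's `NormalizedCoord`: a wrapper around `i16` bits.
/// `#[repr(transparent)]` guarantees identical layout to `i16`, which is
/// what makes a bytemuck-style slice cast sound.
#[repr(transparent)]
#[derive(Copy, Clone, Debug, PartialEq)]
struct Coord(i16);

impl Coord {
    fn from_bits(bits: i16) -> Self {
        Coord(bits)
    }
}

/// What `bytemuck::cast_slice::<i16, Coord>` would do: reinterpret the
/// slice without copying or allocating.
fn cast_slice(raw: &[i16]) -> &[Coord] {
    // SAFETY: `Coord` is `#[repr(transparent)]` over `i16`, so both
    // slices have identical element size and alignment.
    unsafe { std::slice::from_raw_parts(raw.as_ptr().cast(), raw.len()) }
}

fn main() {
    let raw: Vec<i16> = vec![0, 8192, -16384];
    // The per-element map from the diff above...
    let mapped: Vec<Coord> = raw.iter().map(|c| Coord::from_bits(*c)).collect();
    // ...and the zero-copy cast produce the same coordinates.
    assert_eq!(mapped.as_slice(), cast_slice(&raw));
}
```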
```rust
scene: Scene::new(900, 600),
};

app.font_cx.collection.register_fonts(ROBOTO_FONT.to_vec());
```
One of "always using Roboto" and collecting system fonts is unnecessary.
In terms of teaching how we expect people to use this, I'd probably lean towards only using system fonts (acknowledging the difficulty that has with web support - but this example doesn't yet run on the web)
Ah, it fails CI when we try to build the vello_hybrid crate to Wasm. Instead of opting vello_hybrid out of the Wasm build, I've reverted these changes. I will implement them as part of this idea next.
Thanks for all the comments in review. One point I'll make (based on the initial comments) is that emoji are not necessarily bitmap based, although that will be the case on Apple platforms. The COLRv1 format is outline based, with a fairly sophisticated imaging model. It should be possible to port Vello's implementation (which lives mostly in `vello/src/scene.rs`).
… size when hinting (#889)

In #883, we discussed (#883 (comment)) how to handle hinting when the glyph is later transformed. The approach used by Vello (and shown in this PR): if hinting is requested and the transform contains a uniform scale (without skew or rotation), extract the scale from the transform and apply it to the font size and glyph positions. This behaviour is the same as Vello's:

1. Extracting and applying the scale to the font size when hinting is enabled: https://github.com/linebender/vello/blob/61f82e4e66a9111d6b2ee17f173167c3601c2e55/vello_encoding/src/resolve.rs#L414-L428
2. Applying that scale to glyph positions: https://github.com/linebender/vello/blob/61f82e4e66a9111d6b2ee17f173167c3601c2e55/vello_encoding/src/resolve.rs#L334

If a consumer doesn't want this behaviour, they can disable hinting when scaling. You can see what effect this has on our snapshot: https://github.com/user-attachments/assets/9e8577d2-94e8-4722-8878-3a55dc64b6a6
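A minimal, self-contained sketch of that scale-folding step. `fold_scale` and the `(x, y)` tuples are illustrative stand-ins, not the real Vello encoding code linked above:

```rust
/// Fold a uniform scale out of the transform: multiply the font size and
/// every glyph position by the scale, so hinting runs at the size that
/// actually appears on screen. The remaining transform then carries only
/// translation (and any non-uniform parts would have disabled hinting).
fn fold_scale(font_size: f32, glyphs: &mut [(f32, f32)], scale: f32) -> f32 {
    for (x, y) in glyphs.iter_mut() {
        *x *= scale;
        *y *= scale;
    }
    font_size * scale
}

fn main() {
    // Two glyphs laid out at 16px, drawn under a 2x uniform scale.
    let mut glyphs = vec![(10.0, 0.0), (26.0, 0.0)];
    let hinted_size = fold_scale(16.0, &mut glyphs, 2.0);
    // Hinting happens at the effective on-screen size of 32px...
    assert_eq!(hinted_size, 32.0);
    // ...and positions are scaled to match.
    assert_eq!(glyphs, vec![(20.0, 0.0), (52.0, 0.0)]);
}
```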
Intent
Adds an API to `vello_cpu` and `vello_hybrid` for drawing text.

Background

This API does not perform text shaping and layout; those functions should be performed by other libraries such as Parley or Cosmic Text. The API proposed here merely takes glyph positions and rendering attributes and renders them to the screen.
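To illustrate the division of labour, here is a toy positioning step standing in for Parley/Cosmic Text. The `Glyph` struct mirrors the shape used in the example diff; `position_glyphs` and the fixed-width `advance_of` stub are hypothetical (a real shaper also handles kerning, ligatures, bidi, line breaking, and so on):

```rust
/// The positioned-glyph input the drawing API expects: an id plus a
/// baseline-relative position. Names here are illustrative.
struct Glyph {
    id: u32,
    x: f32,
    y: f32,
}

/// A toy "layout" pass: map each char to a glyph id and accumulate pen
/// advances. The renderer never sees `text`, only the resulting glyphs.
fn position_glyphs(text: &str, advance_of: impl Fn(char) -> f32) -> Vec<Glyph> {
    let mut pen_x = 0.0;
    text.chars()
        .map(|ch| {
            let g = Glyph { id: ch as u32, x: pen_x, y: 0.0 };
            pen_x += advance_of(ch);
            g
        })
        .collect()
}

fn main() {
    // Fixed-width stub: every glyph advances the pen by 10px.
    let glyphs = position_glyphs("abc", |_| 10.0);
    let xs: Vec<f32> = glyphs.iter().map(|g| g.x).collect();
    assert_eq!(xs, vec![0.0, 10.0, 20.0]);
    assert_eq!(glyphs[0].id, 'a' as u32);
    assert!(glyphs.iter().all(|g| g.y == 0.0));
}
```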
Comparison with Vello
The API should feel familiar to Vello's implementation, with one change: we defer to the scene's style and transform instead of configuring them through `[stroke|fill]_glyphs(...)`.

Happy to change this, but I don't see why the consumer can't use the existing transform/paint/stroke APIs on `Scene`. In other words, the scene's transform/paint/stroke applying to every primitive except text seems unexpected. If text deserves special attention, I'm keen to understand that choice, and this PR is certainly flexible to that (and any other) change. Please let me know 🙏.
Design
To support both CPU and Hybrid variants of the renderer, we expose a `GlyphRunBuilder` from `vello_common` that accepts a `GlyphRenderer` trait. The builder encapsulates the logic to "prepare" a glyph for rendering before passing it to the `GlyphRenderer` to draw to the screen. I've implemented `GlyphRenderer` for both our CPU and Hybrid variants. The CPU variant implementation is copied below:

vello/sparse_strips/vello_cpu/src/render.rs, lines 185 to 209 in 87d1801
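The referenced snippet isn't reproduced here, but the shape of the design can be sketched in isolation. Everything below is an illustrative stand-in, not the real `vello_common` API: a builder that prepares each glyph, then forwards it to whatever `GlyphRenderer` impl it was given.

```rust
/// A glyph after "preparation" (here: positions scaled by font size).
struct PreparedGlyph {
    id: u32,
    x: f32,
    y: f32,
}

/// Each renderer variant (CPU, Hybrid) implements this to receive
/// prepared glyphs and draw them its own way.
trait GlyphRenderer {
    fn fill_glyph(&mut self, glyph: PreparedGlyph);
}

/// Builder owned by vello_common in spirit: it is generic over the
/// renderer, so the preparation logic is written once.
struct GlyphRunBuilder<'a, R: GlyphRenderer> {
    renderer: &'a mut R,
    font_size: f32,
}

impl<'a, R: GlyphRenderer> GlyphRunBuilder<'a, R> {
    fn font_size(mut self, size: f32) -> Self {
        self.font_size = size;
        self
    }

    /// Prepare each glyph and hand it to the renderer.
    fn fill_glyphs(self, glyphs: impl Iterator<Item = (u32, f32, f32)>) {
        for (id, x, y) in glyphs {
            self.renderer.fill_glyph(PreparedGlyph {
                id,
                x: x * self.font_size,
                y: y * self.font_size,
            });
        }
    }
}

/// A trivial renderer that just records which glyph ids it was asked to draw.
struct Recorder(Vec<u32>);

impl GlyphRenderer for Recorder {
    fn fill_glyph(&mut self, glyph: PreparedGlyph) {
        self.0.push(glyph.id);
    }
}

fn main() {
    let mut recorder = Recorder(Vec::new());
    GlyphRunBuilder { renderer: &mut recorder, font_size: 1.0 }
        .font_size(16.0)
        .fill_glyphs([(7, 0.0, 0.0), (8, 1.0, 0.0)].into_iter());
    assert_eq!(recorder.0, vec![7, 8]);
}
```

The payoff of this shape is that adding a new backend only requires implementing the trait; the preparation code is shared.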
Next Steps
This PR only supports outlined glyphs. We need to support emoji through bitmap and COLR variants. I'd prefer to do that in separate PRs and keep this PR focused on the API and overall strategy.
Unfortunately, I suspect these other two variants cannot be started until we support:

- … (`ColorPainter`)

Perhaps, since clipping is underway, I could look at image support? But, again, I worry that, like fast text rendering, image rendering would likely need some form of caching 🤔.
Notes
I haven't implemented caching of glyphs or hinting instances. As per our discussions, we want to keep the text API free from caching for now.